Saturday, December 4, 2010

Proof in Programming

This post is based on some discussions that have been happening in my Next-Gen Programming Languages class. It follows on the thoughts of an earlier post relating to AI and automation. This article on the military using robots is also relevant.

The basic argument here is that software systems are going to keep growing in size and complexity, and languages/tools need to do something to keep up. I am a strong proponent of static type checking in large part because it allows the compiler to prove that certain aspects of a program are correct. Without it you have to perform more testing at every step, and because testing never gets complete coverage, you can never be as certain based on tests as you could be with type checking.

Type checking only provides correctness proofs for a certain fraction of the program. It seems to me that in order to really trust software to drive our cars and shoot our guns, we need to have more of these systems correct to the level of proof. You can prove an algorithm correct by hand. It is tedious and painful, and it certainly doesn't scale very far. However, proof can be fairly well automated in most places, and my question is how far that idea could be pushed and what the possible implications are.

We are seeing more and more type inference in languages. ML and its family have had it for many years. Now it is in languages like Scala, and even the next version of C++ (C++0x) has type inference in the mix. The idea here is that types are fairly algorithmic as well. The compiler can figure out types for things most of the time. Humans just have to step in at certain points to help out.

What if a language/programming system had a similar concept for proofs of correctness? The system automatically discovers as much as it can about what a function or some other grouping of code does. Every so often a human might step in to help out if the automatic system doesn't "get it". However, the human's overrides wouldn't be allowed to be incorrect; they would just set constraints that let the proving agent figure more out. That is what type inference systems do: the user can give a more constrained type for something to help out, but if they provide an incorrect type, they get an error.
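Scala's type inference already has exactly this shape: the compiler infers what it can, a human can add an annotation as a constraint, and a wrong annotation is rejected rather than trusted. A minimal sketch (the names here are my own illustration):

```scala
// The compiler infers these types with no annotations.
val xs = List(1, 2, 3)       // inferred as List[Int]
val doubled = xs.map(_ * 2)  // inferred as List[Int]

// A human can step in with an explicit constraint. The annotation
// is checked, not believed: it helps the checker along, but an
// incorrect annotation is a compile-time error.
val total: Int = doubled.sum

// val wrong: String = doubled.sum  // rejected by the compiler

println(total)  // 12
```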

This made me think of something I had learned about Modula-3. Modula-3 had safe and unsafe modules. The safe modules were garbage collected and had pointers that were like Java references. They were always safe, and the language/runtime dealt with memory management. Of course, you can't do systems programming in Java, and the engineers at Digital wanted to allow systems programming in Modula-3. Unsafe modules allowed pointer arithmetic and let you do the unsafe things you can do in C/C++. The idea was that you only used them when needed and did most of your programming in safe modules.

So consider extending this a step further to "proved" modules. These modules would only contain code that has been proven to be "correct". In my head I picture something like the hover-overs you get in newer editors when type inference is involved. The automatic proof system would tell you everything that it can about the code. The programmer would set constraints until the desired result was attained. Then the parts of that result you really cared about could be "locked in" in some way. That way, if you or someone else later changed the code in such a way that a desired outcome wasn't provable, you would get an error.
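There is no such proving system in Scala today, but `require` and `ensuring` from the standard library give a runtime flavor of the constraints a programmer might "lock in"; the proved module imagined above would check these statically instead of at run time. A sketch under that assumption:

```scala
// Runtime contracts standing in for statically proven constraints.
// The hypothetical proving system would verify the ensuring clause
// at compile time and flag any later edit that broke it.
def abs(x: Int): Int = {
  require(x != Int.MinValue, "abs(Int.MinValue) overflows")
  if (x < 0) -x else x
} ensuring (res => res >= 0)

println(abs(-7))  // 7
```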

You would probably want the ability to move code around, but moving things down a step from proven to safe or safe to unsafe might trigger all sorts of errors/warnings when dependent code was no longer safe or provable. The goal, though, is to make as much code as possible in a system proven correct, and what can't be proven you want to be safe unless it absolutely has to be unsafe. Moving things up levels wouldn't be as challenging; you just have to clean the code up a bit to match the requirements of the higher level of safety. Code calling on it would generally go unchanged.

This leads to a very interesting theoretical question: how much can you prove? What fraction of the large systems we are interested in could go into a proven module? I don't know if we have a good answer for this yet, but it would be interesting to find out. My gut tells me that the answer is a lot, and if that is true, this would make programming a much safer and more reliable process.

Now for something even more theoretical and esoteric. I have a certain conjecture that while human intelligence is computable, the things we consider artistic aren't. We consider them art in large part because we can't give an algorithmic description of how to do them, so they are more individual and subjective. My new conjecture along this line is that intelligent behavior, while computable, isn't provable. In order to get intelligent behavior, you would need to include significant unprovable aspects. If this conjecture is true, then limiting development as much as possible to provable modules could also be what is required to prevent scenarios like Terminator or the Matrix. Granted, there are all sorts of ifs in this paragraph, and it is nothing more than conjecture.

Monday, October 18, 2010

Understanding of Gravity -> Technology Revolution?

This post relates to a thought that I had driving into work this morning. Most of the technology that we use today and the advances that literally drive our economy come as a result of our understanding of quantum mechanics. We have gotten extremely good at manipulating things in the small. An understanding of solid state physics, along with control of light and other subatomic particles, has allowed for remarkable advances in miniaturization. Humans have dominated the small scale down to atomic nuclei and electrons. We don't do much below that yet, but perhaps that will come. Still, what we have done has far outstripped most of the imaginings of science fiction writers and those who would predict future trends.

On the other hand, we have failed miserably to keep up with these things at larger scales. We don't jet off into space whenever we want. Humans can't visit other bodies in our solar system. We are constrained to Earth, and we are constrained to move around fairly slowly. We have plans for tall buildings, and every so often a new structure goes higher than anything before, but there is no space elevator. There are also no flying cars that are really useful. In a sense, we have mastered the microscopic and failed to do much with the macroscopic.

One of the biggest areas of future discovery for humans is the merging of quantum mechanics and relativity. Both theories work wonderfully in their areas. They are both very accurate in the domains where they dominate. However, we know there is a problem with our understanding because the two theories don't get along. A lot of the research in basic physics has been aimed at figuring out how a combined theory would work or at getting experimental data that will help point us in the direction of a combined theory. I have argued in the past that having funding go to this is essential to our future. We don't know what advances will come out of it, but I have always felt confident that they would be huge and have significant social implications. My thought this morning was just a tiny insight into what that might be.

So what could we really gain from a theory that merges QM and relativity (often loosely called a Grand Unified Theory, or a Theory of Everything)? How about mastery of the macroscopic? Our understanding of electromagnetism along with the weak and strong nuclear forces has been instrumental in mastering the microscopic. We don't yet make full use of the two nuclear forces, but the most advanced of our technologies do rely on our understanding of them. The forces that we have a unified theory of only control behaviors in the small. The macroscopic world is dominated by gravity, the one force we know of that we have not brought into unification. It is the force that holds us to the planet and makes it so darn hard to fly or visit other planets. Really understanding gravity might be what helps us get past that.

I'm not proposing that we will find a way to "get around" gravity. We certainly haven't stopped electromagnetism or the nuclear forces either. What we did was learn how to manipulate them to get the things we want from them. Perhaps the same will be possible with gravity. Perhaps finding the GUT will allow us to slowly build our ability to manipulate gravity the same way we have slowly developed our ability to manipulate the other forces, and eventually give us the control of the macroscopic that we currently enjoy over the microscopic.

Automation and Social Change

I haven't posted in quite a while. Life is busy and I didn't feel I had all that much to say. However, some things have come into my mind that I thought might be worth putting up here.
This is an article that I shared on Facebook. This is something I had thought about a fair bit, but the article opened my eyes a little because I had made a few mistakes in my assumptions. It focuses on how robots are replacing middle-class jobs. I have been thinking for a few years about the fact that robots will be replacing jobs and the impact that would have on society. I made a mistake in my thinking, though, in that I considered the most vulnerable jobs to be at the low end of the pay scale, the idea being that those are low skill and easy to replace. I hadn't seen that happening, so I thought this was just something for the future.

In reality, low-skill jobs often require things that are hard to make robots do. What's more, they don't pay much, so robots will have to be really cheap before they can replace humans. The mid-level jobs are much more vulnerable because, if they are repetitive, the pay scale is high enough to justify using robots instead. So automation eats away starting at the middle, not the bottom. That slightly alters the social implications, though the fallout in the end is similar.

Something else happened recently that brought this topic up to the front of my thinking.

This is further ahead than I thought things were. Not in a bad way. I really want my car to drive itself. I don't care a bit if it flies. I just don't want to have to drive so I can be productive on the way to work or anywhere else.

The reality is that computers are going to eat up larger and larger fractions of the job market in the coming years. When they can drive, transportation and warehouses will be completely run by machines. Humans will just get in the way or only come into play for oversight. As time goes on, the amount of oversight will drop. Once the robots are cheap enough, they will flip your burgers and make your fries. You will only be served by a human if you want to pay for the "personal" aspect of such service.

This will have a huge impact on society. Yes, automation improves efficiency and reduces the cost of everything, but what happens when unemployment soars because there simply aren't enough lower-level jobs to put people in? Today's unemployment numbers of ~10% are a significant concern. What happens when that goes to 30-40% because so many jobs are automated? It will then continue to grow, assuming society doesn't fall apart. How far it goes depends a lot on one's beliefs about the limits of computing power and AI. It also depends on the rate of development of wetware. Here are some different options:
  • Societal collapse - modern society is rather delicate in that we all need a lot of infrastructure to do things. Anti-automation mobs could push things over the edge and send us into a modern version of the Dark Ages.
  • Human Utopia - This could happen with either a high level of weak AI or a well behaved strong AI. Automation takes over everything and humans just sit back and enjoy it doing whatever they want. Think Wall-E here. However, the implications of this aren't all that clear. What happens to humans who don't have any motivation to do much of anything? The first generation probably behaves much like their parents, but I could see this going in odd directions after a few generations.
  • Singularity - This is Ray Kurzweil's prediction. This requires strong AI and that wetware keep up enough so that humans can integrate themselves ever more tightly with the growing capabilities of the AI. A more negative view of this might be the Borg from Star Trek: The Next Generation.
  • Human Eradication - What happens if you have strong AI that has a will of its own and wetware doesn't keep up? Think Terminator or the Matrix, except remove the plot line because humans don't stand a chance. The computers become self-sustaining and increase in efficiency and capability exponentially. We become insignificant, and if we are lucky they just ignore us. If we aren't lucky, they see us as competition for resources and eradicate us.
  • Elite Workforce - This is one that occurred to me yesterday that I haven't heard anyone discuss before. What happens if you get really weak AI? It isn't just inferior to humans; it never manages to be capable of some of the top-level activities we need in society. Then 90+% of jobs get automated, but you still need smart humans in the remaining 10% to keep things running. What world does that leave us with? It probably depends on how the 10% of employable humans treat the rest of humanity.
There are probably cases I'm not thinking of. Feel free to comment on those. So what is the timeline for these things? I think that in 10-15 years we will start seeing the impact of the social squeeze of more things being automated. However, things won't really blow up until the 20-30 year time frame. That is when we will find out if AI can be strong or if it will always remain weak. We'll also see how well wetware does in keeping pace with normal hardware. One way or the other, I fully expect to see a very different society before I die.

Additional note: A few years back I saw a Microsoft presentation on the Virtual Receptionist technology. Here's a link. Just another segment of jobs that can disappear in a few years.

Friday, August 27, 2010

x86 Open64 Compiler

I recently updated both my laptop and my home machine to Ubuntu 10.04. This had one significant negative side effect on my workstation: it broke the Intel compiler that the system maker had installed. I started doing a bit of searching, and the problem I have is a known issue with the newer libraries, so it isn't clear that paying Intel more money would actually fix it. However, during my searching I came across the x86 Open64 compiler. This is an optimizing compiler from AMD. It is fairly new; I have vague memories of seeing announcements of the release back in 2009, but I didn't look closely at it at the time. Not having a working compiler other than gcc on my system made me consider taking a second look.

While they say they have only really tested it under RedHat and Fedora, the binaries work just fine for me on Ubuntu. I have now also installed it on all the machines at Trinity. I need to do a direct speed test between it and the Intel compiler on my cluster, but I haven't done so yet. The set of optimizations that they list is quite impressive and from what I have seen it certainly isn't significantly slower than Intel on my home machine.

There is one very odd thing that I have noticed on my home machine though. My rings code uses OpenMP for multithreading, and big simulations typically run on all eight cores of my home machine. Smaller simulations often sit at a load of 5-7. I started a large simulation using x86 Open64, and it keeps all 8 cores busy. However, smaller simulations that run through steps really quickly are running at loads of 3-4. The load level is also far less ragged than what I had seen with the Intel compiler. At this point I can't say if this is a good thing or not. At first I worried it wasn't properly parallelizing. Now I'm wondering if it just does a better job of keeping threads on specific cores and keeping those cores loaded, instead of having work jump around. That would certainly be a smart thing to do, as it would improve cache performance, but only benchmarking and profiling will tell me for certain.

Tuesday, August 24, 2010


This blog has been dormant for a while because I spent a week in Oklahoma visiting my wife's dad with the family. It's a good place for me to get work done that can be done offline. Many of the tasks that I normally procrastinate get done there. This year I worked on papers and textbooks. The Scala book started getting a bit big and I thought maybe I should try out a different tool. There are lots of reasons why big documents should be done in LaTeX. I've been well aware of the arguments. I just haven't been willing to clutter my brain with the extra commands. LaTeX, like many other tools along those lines, has a decent learning curve and you can't do much of anything until you have climbed a fair bit of the curve. I also like OpenOffice for the GUI. I didn't want to edit my big documents in vi.

I had looked around for GUI programs that sit on top of LaTeX and already had a program called LyX installed. I had played with it just a bit, but not enough to figure out much about it. The developers said to read the manuals, and after a few paragraphs those lost my attention, so I stuck with OpenOffice. Oklahoma provided more time, though, and with the Scala book reaching 100 pages I decided it was time to seriously try it out. The book was long enough to benefit from the LaTeX advantages, and if it got much bigger I wasn't going to be willing to convert it.

So I did a massive cut and paste and started reformatting. It took a fair bit of a day, but the result was definitely worth it. Seeing the wonderful, automatically generated table of contents for the first time made it worthwhile. As I have learned more about LyX I have been able to use more capabilities. It is a really nice way to interact with LaTeX. I highly recommend it to anyone who has thought that LaTeX might be nice but wasn't willing to climb the learning curve to get started. You can even have it show you the LaTeX source as you write, so it can help you learn LaTeX along the way.

Friday, August 6, 2010

Another Huge Simulation

I've talked some in earlier posts about the F ring simulation that I recently started. I've now posted a fair bit from that simulation, including plots and movies that are constantly being updated as the simulation advances. Just today I started another simulation. This one is related to my work on Negative Diffusion. I have a paper on the topic that I submitted to Icarus and have now gotten a review back for. The reviewers wanted a number of changes that I hope to get done before the beginning of classes.

One of the issues with this work is that I have to use some interesting boundary conditions to use a local cell with a perturbing moon. These boundary conditions wrap particles in the azimuthal direction such that they preserve the gradient in the epicyclic phase induced by the rate of passage of the cell by the moon. These boundary conditions aren't something that everyone uses. One reviewer wanted me to explain them better, which I can certainly do. However, I realized something that I had thought of before which was that to really convince people, I needed to do a large simulation without the periodic local cell, something more global, and show that the same thing happens. I know it will work because I have seen this behavior in global simulations of the F ring and nearly global simulations of the Keeler gap. I know that this isn't a result of the boundary conditions. However, I feel it would be good to do this with a simulation that basically matches what I had done for one of the local cell simulations for a direct confirmation. The only problem is, there was a reason I was using a local cell. With a local cell these simulations only need 10^5 - 10^6 particles. If you don't use a local cell, the requirements get a lot higher.

So I just started the simulation using 14 nodes of my cluster. That's 112 cores and 112 GB of RAM. I tried using a particle size at the top end of what I had used in the paper. Unfortunately, that ran into swapping. It had nearly 400 million particles and the master machine wasn't happy about that. It wasn't so big that it flat out crashed, but it was just big enough that the memory footprint caused it to run too slowly. So I increased the particle size just a bit to a 156 cm diameter. This gives me just under 266 million particles. That's still a huge simulation, but it is just small enough that it runs without the master doing significant swapping.

I started the simulation this morning and I looked at the first output and things seem to be going well. It looks like it will take about 5 hours per orbit and get to the point where conclusions can be made in a little over a month. That's pretty good for the biggest simulation I've ever run.

Tuesday, August 3, 2010

First snapshots of F ring simulation (and a comment on moonlets)

I described in an earlier post the mother of all F ring simulations. This has now gone far enough that I feel I can say it is working properly, and I can make a plot of what is happening in the early going. I have put this up on my normal research site. I have a fair bit of discussion along with two plots. The plots don't show all that much beyond the initial configuration because there simply hasn't been that much time for the system to evolve yet. Still, I think they will give anyone familiar with the F ring a feel for what this simulation might be able to show us.

It also hit me this evening that while I feel like this simulation isn't running all that fast, the reality is that it is running in close to real time. It finishes an orbit in about 7 hours. The real material in the F ring takes almost that long to get around Saturn. So on the whole I'd say that is pretty good. Maybe after this one has gone far enough to wipe out the transients I can consider scaling it up a bit more. I'd be really interested in seeing what happens if I make it so that Prometheus really does run into the apron of material at the edge of the ring. I expect the computers will be less than happy about such an experiment though.

As a side note, on the recommendation of Matt Tiscareno I did some simulations on moonlet stability that didn't include the background. Unlike previous work by Porco et al., I am not putting a large central core into these moonlets; I am assuming all particles are of nearly the same size. This is significant because the Hill sphere of a large body can enclose nearby smaller bodies, while two bodies of the same size sit largely outside of one another's Hill spheres. I was able to find a configuration where a moonlet was stable without background material and got broken up when background material was present. The moonlet was made as a lattice roughly filling a triaxial ellipsoid at 130,000 km from Saturn. The lowest density I could get to be stable in this configuration with a 2:1:1 ratio was a bulk density of 0.7 g/cm^3, or 1.0 g/cm^3 for the constituent bodies. Even with that high a density, it got knocked apart when I put it in a background. I'm not going to be doing too much more work on this right now because Crosby found it interesting and will likely work on it as part of his senior research project in Physics.

Writing a Scala Textbook

So this coming fall I am going to be teaching the CS1 course at Trinity (CSCI 1320) using Scala. This is part of an experiment we are conducting to see how we might want to change the languages used in our introductory sequence. One of the challenges in using Scala is that there really aren't that many resources currently out for the language and none that work as a textbook for new programmers. All the current material assumes that students know how to program in one or more other languages and then builds on top of that. To address this, I am writing my own materials going into the beginning of the semester.

I have done this previously for Java with our CS2 (CSCI 1321) and CS0.5 (CSCI 1311), the latter using Greenfoot which also didn't have an existing text when I started using it. In some ways, the topics of introductory CS are the same no matter what language you use. However, the nature of the language can alter the ideal way to present information.

As it stands, we intend to keep our CS1 as a normal introduction to programming. We don't want this to be an objects-first type of class; the second semester will get into OO. The first semester is just about building logic and figuring out how to break problems down and use a programming language to construct solutions. So while Scala is 100% OO, that isn't being stressed early on. The functional aspects, on the other hand, can be.

Indeed, it is the more functional side that convinced me that Scala was worth trying in a CS1 course. Scala has a REPL and allows writing scripts. It works very well for programming in the small. However, unlike normal scripting languages, Scala is statically typed, so students get to learn about types and see compile-time error messages. Indeed, I'm hoping to run the class very much like what I have done in C for the last decade or so. We will work in Linux with the command line. Students will edit scripts with vi. Unlike C, they can use the REPL to test out little things or load in code from files instead of running them as standalone scripts.
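As a tiny illustration of the style (my own example, not one from the course materials): even a three-line script is fully type checked before anything runs, so students see mistakes like assigning a String to an Int at compile time rather than deep into execution.

```scala
// A script in the style of an early CS1 exercise. Uncommenting the
// line below produces a compile-time type error, not a runtime one:
// val age: Int = "twenty"

def fahrenheitToCelsius(f: Double): Double = (f - 32) * 5.0 / 9.0

println(fahrenheitToCelsius(212.0))  // 100.0
```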

Using Scala as the language does change some aspects of what I see as the optimal order of presentation. It also adds richness to certain other topics. The big ones I've run into so far are functions and collections. Normally I do my introduction to Linux and the basics of C, then talk about binary numbers. That is followed by sequential execution, functions, conditional execution, and recursion. After that come loops and arrays. In Scala, the functions aspect gets richer because it is a functional language. It has function literals that can be created on the fly, and it is possible to easily pass functions as arguments and return them. In C, function pointers and void* were about the only topics I didn't cover. Those same ideas will be covered in Scala, and early on, because they are a lot smoother there.
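To make that concrete, here is the kind of early example Scala allows (my own illustration): function literals created on the fly, passed as arguments, and returned as results.

```scala
// A function literal bound to a name.
val square = (x: Int) => x * x

// A function taking another function as an argument.
def applyTwice(f: Int => Int, x: Int): Int = f(f(x))

// A function returning a new function.
def adder(n: Int): Int => Int = (x: Int) => x + n

val add5 = adder(5)
println(applyTwice(square, 3))  // square(square(3)) = 81
println(add5(10))               // 15
```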

The biggest change so far though is where I am doing "arrays". In C there wasn't really any point introducing arrays until you had loops. You simply couldn't do much with an array without a loop. That isn't true in Scala. Collections in Scala, be they arrays, lists, whatever, have a rich set of methods defined for them. Indeed, the Scala for loop does nothing more than translate to calls of these other methods. So in Scala the logic goes the other way. Because they know functions, it makes sense to introduce collections and talk about the methods like map, filter, and reduce. Once those have been covered, then we talk about loops and see how they provide us with a nicer way to write a lot of the iteration behavior that we had done before using either recursion or higher order functions on collections.
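A sketch of that ordering (the computation is my own example): the higher-order version comes first, and the loop version later as an alternate spelling of the same idea.

```scala
val nums = Array(1, 2, 3, 4, 5, 6)

// Taught first, right after functions: higher-order methods.
val sumOfEvenSquares = nums.filter(_ % 2 == 0).map(n => n * n).sum

// Taught later: the for loop, which translates to the same kinds
// of method calls underneath.
var total = 0
for (n <- nums) if (n % 2 == 0) total += n * n

println(sumOfEvenSquares)  // 56
println(total)             // 56
```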

One thing that I haven't figured out exactly how to include is Ranges. These are really significant in for loops. If you just want a counting for loop in Scala you would write for(i <- 1 to 10). The 1 to 10 creates a Range, which is an iterable over exactly those values. Ranges have a lot more power too. The question is, do they go in the chapter on collections or in the chapter on loops with the for? The chapter on collections is already going to have quite a bit in it, and right now I plan to cover only arrays and lists, which are basically the most fundamental mutable and immutable collections respectively. The range almost seems out of place at that point, and it opens up the whole big area of the many other types of iterables, sequences, etc. in Scala. That will have to get opened at some point. The question is whether to do it with the other collections or to wait until ranges are really needed with the for loop.

If anyone has suggestions on this, please post a comment. One thing I would point out is that higher order functions on ranges can be fun to play with as well. The for loop isn't really needed. To see this, just consider the following definition of factorial for positive numbers.

def fact(n: Int) = (1 to n).product

So the range isn't just for counting in a loop. That just happens to be the way most people will use it by default.
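A few more range examples in the same spirit (my own), showing that a Range is an ordinary collection with the full set of methods, not just loop plumbing:

```scala
val r = 1 to 10

println(r.sum)                  // 55
println(r.filter(_ % 3 == 0))   // the elements 3, 6, 9
println(r.map(i => i * i).max)  // 100

// Sum of squares in the same style as the factorial above.
def sumOfSquares(n: Int): Int = (1 to n).map(i => i * i).sum
println(sumOfSquares(4))        // 30
```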

Monday, August 2, 2010

Update on F ring

So after watching the F ring simulation I described earlier go for the weekend, I made some changes and updates. While this forced me to stop the simulation and lose what I had done over the weekend, one of the changes was to use a longer time step, because this is a sparse simulation with gravity. The result is that it is now doing an orbit in under 8 hours instead of 36. That factor of 4.5 is huge considering it was looking like this would require 2 years to get anything significant done. Now it should have nice results in 6 months and be in a very good spot by next summer, when I will have the time to really look at what is happening.

The early results are fascinating. In particular, I gave the moons inclination, something I had never done before, and they are passing it on to the ring. Because gravity is a 1/r^2 force, proximity matters more than elevation. So the ring gets its maximum inclination near the point of closest approach, not when the moon is at maximum elevation.

One of the big questions is how the inclination will impact negative diffusion. It is possible that it could lead to more vertical displacement and less migration of semimajor axis. Only by going through the simulations will I know for certain. I expect that I will have some results to put on my main web site before classes start. Of course, when I do I will put a post here telling all about it.

Saturday, July 31, 2010

The mother of all F ring simulations

There was a paper published recently on new observations of the F ring. The F ring is a very dynamic narrow ring a bit outside of Saturn's main rings. The main reason it is so dynamic is that it has two moons that orbit near it, Prometheus and Pandora. I have done some of my own work on the F ring because it is a fun system to simulate. I started with simulations that included just Prometheus. Putting Prometheus on an eccentric orbit alone is challenging because it forces the simulation to use a fairly large cell size. The cell needs to have dimensions such that the full cell drifts past Prometheus in one orbit of the moon so that the perturbations on the two edges match and wrapping doesn't create a discontinuity.

If you only have Prometheus though, you create a ring that is perfectly periodic and the real F ring isn't. To break this, you need to include Pandora. The problem is, if you include Pandora you lose any chance to have a small cell. Prometheus and Pandora are not in a mean motion resonance so to have proper boundaries, the simulation has to be global. This means that it needs to go all the way around Saturn. A few years ago I spent some computer time doing a global F ring simulation with Prometheus and Pandora, both on eccentric orbits.

The movie from that simulation shows an initially uniform ring getting kicks from the moons. The kicks from Prometheus are larger than those from Pandora because of Prometheus's size and proximity. It shows how collisions in the compression regions lead to negative diffusion and cause the ring material to move into a tight core. It also shows fairly complex structures forming.

While significant, this simulation has three notable limitations. First, the radial extent of the particles is too small. Second, the simulated ring is circular, when in reality the F ring is eccentric. Third, there is no self-gravity between the ring particles. I have tried at times to address different parts of this. In fact, I ran an eccentric ring simulation for a while. However, the guiding center coordinates we normally use do not handle this properly, and I got an artificial precession of the ring relative to the moons. Now I am giving it another shot and trying to run what you might call the mother of all F ring simulations. First, I spread out the particle distribution a fair bit; it is now a Gaussian that covers roughly 100 km radially. Second, I'm using an extended form of the guiding center coordinates, derived by Glen Stewart, that should handle the eccentric ring properly. Third, everything is self-gravitating. Even the moons are normal particles with normal gravity and collisions.

This simulation has 50 million particles in it. The downside of doing that with full self-gravity is that even though I am running it across 12 machines and 96 cores, it takes about 1.5 days to complete an orbit. At that rate, it will complete about two synodic periods in a year. From earlier work I know that I need 3-4 synodic periods before the system gets out of the transient state. Maybe I need to ask someone for some faster computers. :)
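For readers who want to follow the pacing, the synodic period of the moon pair comes from the usual formula, 1/T_syn = |1/T1 - 1/T2|. A hedged sketch using approximate published orbital periods (not the simulation's inputs):

```scala
// Synodic period of two co-orbiting bodies from their orbital periods.
def synodicPeriod(t1: Double, t2: Double): Double =
  1.0 / math.abs(1.0 / t1 - 1.0 / t2)

// Approximate orbital periods in days (rounded published values, assumed).
val prometheusPeriod = 0.613
val pandoraPeriod    = 0.629
println(f"Prometheus-Pandora synodic period: ${synodicPeriod(prometheusPeriod, pandoraPeriod)}%.1f days")
```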

Tuesday, July 27, 2010

Ring Dynamics Work

While the summer session is just about over for my research students, they continue to document interesting results on the wiki. Cameron is doing timing results on the lock graph, comparing it to the older lock grid for several different simulations. The results aren't exactly what would have been predicted. He'll finish out the week looking at some other systems.

Crosby has gone back to the silicate hiding simulations. He had looked at the A ring previously; now he is looking at the B ring simulations. There are unexpected results here as well. In the A ring he found that the silicate particles contributed only 25% of their share of the reflected light. So if 4% of the particles were silicate, only 1% of the light would be coming off silicate particles. This 25% was consistent across the 4%, 2%, and 1% silicate fractions. The B ring simulations are confusing because the fraction isn't constant. A lot more exploration is needed here. He'll also have to look at the C ring simulations that have been done.

Lastly, I continue to look at moonlet stability. On a suggestion from Matt Tiscareno I looked at stability without the background. This showed me that I had upped my time step too high based on work I did earlier this summer looking for optimal speed. However, there is still a story here. I found the break point in density at which a lattice of equal-sized particles is stable at 130,000 km. This is a higher density than what was found by Porco et al. (2007) for a bunch of rubble on a single large core. I took the lowest density value for which I got a stable moonlet alone and dropped it into the background. The simulation hasn't gone all that far, but it is already apparent that the moonlet is being broken up by collisions. Here again there is a lot more work to do. In fact, it looks like this might be an area that Crosby uses for his senior thesis. It has the advantage that the simulations looking at stability of moonlets without a background are fairly small, so he can run a bunch of them. He has done a little bit to document the experiments that he will look at running.

Wednesday, July 21, 2010

Design of ScalaVis

I spent some time this weekend working on the design of ScalaVis. The goals of the project are to include all the capabilities currently in the SwiftVis GUI and also enable scripting or interactive usage with the REPL. The SwiftVis design goals of a free, platform-independent data analysis/exploration/plotting package with decent speed pointed nicely to Java. However, Java doesn't include a REPL or real scripting ability. Scala does. While Scala can call Java code seamlessly, it isn't possible to easily retrofit scripting onto SwiftVis because so many of the settings are completely encapsulated and can only be accessed through the GUI.

The design I came up with leverages the functional nature of Scala and data-flow programming. There is a concept of Elements, which are basically functional data transformers. You can script or work interactively with Elements. On top of them sit Builders. As the name implies, Builders build Elements. They will go into the GUI and have graphical representations and GUI components that enable altering settings.
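A minimal Scala sketch of that split might look like the following; all names and signatures here are illustrative guesses at the design, not actual ScalaVis code.

```scala
// Hedged sketch of the Element/Builder split: Elements are pure, composable
// data transformers; Builders hold the mutable GUI-facing settings.
case class DataElement(values: IndexedSeq[Double])

trait Element {
  def apply(input: Seq[DataElement]): Seq[DataElement]
}

// Elements compose functionally, so scripting them is just function chaining.
class ScaleElement(factor: Double) extends Element {
  def apply(input: Seq[DataElement]): Seq[DataElement] =
    input.map(e => DataElement(e.values.map(_ * factor)))
}

// A Builder owns the mutable settings and emits an immutable Element.
trait Builder {
  def build(): Element
}

class ScaleBuilder extends Builder {
  var factor = 1.0 // would be bound to a GUI control
  def build(): Element = new ScaleElement(factor)
}
```

The point of the split is that an `Element` is a pure value you can chain in the REPL, while the `Builder` keeps all the statefulness confined to the GUI layer.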

I have also been working on making the GUI construction aspect easier. The Scala Swing library helps with this to a certain extent. However, I am trying to push it further so that there are more reusable pieces and people have to create fewer components on their own.

Monday, July 19, 2010

Abrupting Moonlets

Lots of simulation and programming stuff going on here. Some of it I'll be showing tomorrow or so; other parts I'll post about later today. The cool result of the morning, though, is that our moonlets keep getting busted up. Over the weekend I did a simulation where the moonlet was set to model a density of 0.5 g/cm^3 (particle internal density of 0.5/0.7 g/cm^3) with larger pieces in the lattice. It still got sheared out. So next up was to go back to the smaller particles with a higher density. I started a simulation with a lattice modeling 0.6 g/cm^3. When I came in this morning I loaded it up, and while the moonlet is holding together better than before, it isn't going to make it. It is still falling apart.

This is very significant because those particles have an internal density of 0.85 g/cm^3. That is close to the density of solid ice, yet they aren't holding together against the impacts of the outside size distribution. Next up is modeling 0.7 g/cm^3 with a particle density of 1.0 g/cm^3. If that also fails, you pretty much have to say that internal strength is playing a role in holding these things together.

Friday, July 16, 2010


This year I have become really fond of the Scala programming language. It is a better Java than Java, with functional elements that make it highly expressive. I expect there will be a lot more posts on this blog related to Scala in the future. Also, for the last decade or so, I have been working on a data analysis, exploration, and plotting package to use with N-body simulations called SwiftVis (page, Wiki). I am very fond of SwiftVis and use it for all of the data analysis on my ring simulations. It is also used by a number of other researchers and some of my former students.

The original design goals of SwiftVis included that it be free, multiplatform, have fairly good speed, and not require users to program. The first three requirements made Java an obvious choice for the development. The last requirement, while very beneficial, can be a limitation if taken too far. SwiftVis is very flexible and easy to extend. However, the only way to interact with it is through the GUI. Many of the components in SwiftVis could be more useful than they currently are if they weren't locked into this one GUI.

While there are many things I like about Scala, one of the things I caught onto later, especially as it pertains to education, was the power offered by the fact that it has a REPL and scripting capabilities. It also has the ability to call Java code directly. Putting all of these things together I've been picturing in my head a new program that has all the power and extensibility of SwiftVis, but written in Scala in a way that also brings in serious programming interactivity. I'm also hoping to simplify the way in which GUI controls would be written.

I intend to keep the current data flow system from SwiftVis in this new work. One of the big questions is how to deal with the data. SwiftVis sends everything around as DataElements. A DataElement has an array of ints and an array of floating point numbers. Having the int array is a bit of a pain in the code, takes memory, and in practice doesn't add all that much. On the other hand, there have been times when it might have been nice to have other types of data in the elements. My current thought is to start with just double values. I'll have to do some memory testing to see if it is better to use a Vector in Scala or if I should create my own type that wraps an array to make it immutable. The answer depends on whether the Vector class will compile down to primitives the way the array does.
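As a sketch of the wrapped-array option (purely exploratory; this is one design candidate under consideration, not settled code):

```scala
// Hedged sketch: an immutable wrapper around a primitive Array[Double],
// so elements stay unboxed while callers cannot mutate the data.
final class DataRow(private val data: Array[Double]) {
  def apply(i: Int): Double = data(i) // reads only; the array never escapes
  def length: Int = data.length
  def updated(i: Int, v: Double): DataRow = {
    val copy = data.clone() // copy-on-write keeps the wrapper immutable
    copy(i) = v
    new DataRow(copy)
  }
}

object DataRow {
  def apply(values: Double*): DataRow = new DataRow(values.toArray)
}
```

The trade-off against `Vector[Double]` is exactly the one raised above: the array stores unboxed primitives for certain, at the cost of O(n) copies on update.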

At this point I have pretty much convinced myself that this is a project I would enjoy and that is worth my time. Now I just need to play with it some to get the design off the ground.

Numerical Significance of Coefficient of Restitution

Long ago I learned that my simulation code didn't like using a fixed coefficient of restitution. It was fine with the Bridges et al. velocity-dependent coefficient of restitution, but if I used a fixed value of 0.5 it would run much slower and potentially grind to a halt processing a huge number of collisions. This didn't bother me with rings because I consider Bridges et al. to be more realistic anyway. However, I started using a fixed value in my work with Josh, so I needed a way around this.

The actual problem, it turns out, is that at very low velocities the Bridges et al. law becomes conservative. So when particles go into a "rolling" state, they actually do little bounces and the size of the bounces stays fixed. Without this, the bounce size keeps decreasing and we get Zeno-like behavior where the bounce frequency increases over time. So the fix was to make the coefficient of restitution go to 1.0 when the velocity is sufficiently small. I made this change fairly early this summer for particle collisions. However, I hadn't made it for the bounces off the surfaces in my work with Josh. I learned last night that was a bad idea.
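The low-velocity fix can be sketched like this in Scala; the cutoff speed and the Bridges et al. constants below are illustrative stand-ins, not the values the simulation actually uses.

```scala
// Assumed threshold for the "rolling" regime (illustrative value).
val vCutoff = 1e-6 // m/s

// The fix: a fixed coefficient of restitution that is forced to 1.0
// (perfectly elastic) below a small cutoff speed.
def fixedEpsilon(eps: Double)(vImpact: Double): Double =
  if (vImpact < vCutoff) 1.0 else eps

// The Bridges et al. (1984) velocity-dependent law for comparison, using
// its commonly quoted form and scale velocity (~0.0077 cm/s). It caps at
// 1.0 on its own at low speeds, which is why it never triggered the
// Zeno-like slowdown in the first place.
def bridgesEpsilon(vImpact: Double, vc: Double = 0.0077e-2): Double =
  math.min(1.0, math.pow(vImpact / vc, -0.234))
```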

In my setup simulation where I pack the spheres into the proper shape, I found it would get a lot slower toward the end of the simulation. That trend continued when I put in the marble and let it drop. Last night I decided I'd try altering the coefficient of restitution with the walls in the same way I had done with particle collisions to see if that helped. It worked wonderfully. Without the change the number of collision checks went up with every step and was getting way out of control. (I was seeing numbers over 300 million.) With the change the collision checking numbers actually went down a little bit and the simulation continues to run quickly instead of slowly grinding down.

Sometimes it is good to push a simulation code in different directions. It helps you understand its behavior better and can show you optimization possibilities that you hadn't known existed.

Thursday, July 15, 2010

Density/Strength Requirements on Moonlets

So as my last post indicated, one of my most recent moonlet simulations had the interesting result that the moonlet fell apart within an orbit. This wasn't at all expected and it was something I hadn't seen before, which raises the question of why it happened. My first work with moonlets couldn't have this happen because the moonlets were single large particles. Having so many small particles resting on a single large particle, though, is computationally intensive. As such, when I started doing higher resolution simulations I switched to making the moonlet out of a lattice of smaller particles. They are bigger than the background particles and have their internal density enhanced by 1/0.7 to account for the fact that such a lattice doesn't completely fill the space the moonlet would have.

This had been working fine in all of my earlier tests. So what changed? I lowered the density of the particles in the lattice moonlet. Earlier I had things set up to model a moonlet with a density of 0.8 g/cm^3. So the individual particles had a density of 0.8/0.7 g/cm^3. That makes them fairly high density and their gravity holds them together. For this most recent simulation, the lattice was made of smaller particles and it was set to model a moonlet with internal density of 0.5 g/cm^3. So the actual particles have a density of 0.5/0.7 g/cm^3.
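The 1/0.7 enhancement is just mass conservation for a lattice that fills about 70% of the moonlet's volume; a quick sketch (the filling factor is the post's value, the rest is arithmetic):

```scala
// Particle internal density needed so a lattice with the given volume
// filling factor has the same total mass as the solid moonlet it models.
val fillingFactor = 0.7

def particleDensity(moonletDensity: Double): Double =
  moonletDensity / fillingFactor

println(particleDensity(0.8)) // the earlier, stable case
println(particleDensity(0.5)) // the case that fell apart
```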

I'm currently running another simulation with this lower density and a bigger moonlet. Stay tuned to hear if it falls apart. (Early indications are that it will.) Hopefully I'll know by tomorrow. If not, I'll definitely know by Monday. If it does fall apart, that will begin a search for what parameters are required to make it hold together.

The more interesting question is, what does this say about real moonlets? The simulation uses perfect, smooth spheres with no adhesion. So the moonlet has no internal strength. Only gravity is holding it together. What these simulations are indicating is that the bounds on what gravity can hold together might be lower than many people expect. Collisions might be more efficient at breaking things apart than had previously been expected. So are real moonlets higher density? Do they have real internal strength instead of being just rubble piles? What would either of these say about the formation scenarios for moonlets?

Update: The moonlet did indeed get broken up and start to shear out. Now I'm trying a moonlet using a lattice of larger particles. If that also gets destroyed then I will begin a search for the required internal density.

Moonlet Surprise

So Crosby has been working on data analysis for the higher resolution moonlet simulations that I have done over the last two years or so. These simulations use smaller particles in the background and were intended to help answer the question of exactly what we are seeing with small propellers in the rings. He has made some movies that I will put up on the rings research page under the second moonlets paper. We have seen huge differences in what appears based on the size of the moonlet. Yesterday we got another surprise as well. I had started a new simulation that had a size distribution in the hopes that we could get a better feel for how that impacts things. I made some other changes to this particular simulation as well: I upped the surface density and lowered the internal particle density to try to better match what other people have been using. The result: the moonlet fell apart.

That last bit needs some explaining. For numerical reasons I stopped using a single large particle for my moonlet a while back. Instead, I use a lattice of spheres with an enhanced density so the total mass matches that of a single sphere. In my other simulations this has worked great and the lattice holds together. In this one, it got knocked to pieces in just one orbit. We haven't worked out all the implications of this, but here are a few possibilities.
  • The internal density of constituent parts of moonlets has to be high enough so this doesn't happen.
  • The moonlets can't be loose rubble piles. They have to have physical strength.
  • The constituent pieces of a moonlet have to be significantly larger than the top end of the surrounding size distribution.
Granted, it could be some combination of these. In this simulation, the moonlet was fairly small too so I need to do something with a larger moonlet without varying anything else and see if it gets broken up.

Wednesday, July 14, 2010

Lock Graph Implementation

So I got the "lock graph" implemented today. Crosby has also found some cool things about moonlets. I'll have to write that up in more detail later. I also started a Wiki for my research students. I'm optimistic that might make my life easier when it comes to keeping track of what they are doing.

Tuesday, July 13, 2010

Constant Acceleration Population

I couldn't sleep this morning. I woke up around 4am or so and just couldn't get my mind back to rest. That will make for a less than ideal day, but it had one significant positive: my brain was working on some useful ideas. The main one, and the one that finally pulled me out of bed to code it up, was to add a constant acceleration to my BasicPopulation. That sentence probably needs a bit of explanation.

My simulation code is rather modular. One of the more significant modules is an element called a population. The population keeps track of the bodies in the simulation and has logic in it to handle their movement, find collision times, process collisions, etc. I basically have two populations that I use. My ring simulations use what I call the GCPopulation, where GC stands for guiding center. In this population, the particles don't move on straight lines. Instead, they follow guiding center motion, which is a first order analytic solution to Hill's equations. This complicates collision finding, but has benefits that otherwise make it well worthwhile. Basically, the gravity of Saturn is implicit.

For general research purposes when I want to look at other types of systems I have created a "BasicPopulation". In this population particles travel on straight lines between collisions in the middle of a time step. I was using this for the simulations for Josh and I had a second force of constant acceleration in addition to the collision forcing.

This constant acceleration causes some issues numerically that are best resolved by taking small time steps. This is common in numerical integrations. The integration is more accurate if you take smaller time steps. The simulation also really grinds at the end when the particles are sitting right on top of one another in a close packed configuration. Part of the reason it grinds in this configuration is that the gravity force is applied as a kick at the beginning of the time step and all the particles are given a downward motion which then has to be canceled out by collisions propagating back up through them.

The thing about constant acceleration is that you really don't need a numerical integrator. There is a nice analytic solution to the motion: a parabola. So the idea was to create a population that uses that motion somewhat like the GCPopulation uses guiding center motion. When I turned this around in my head a bit (and didn't fall back asleep as a result), I realized that I didn't even need a new population. It's an easy addition to the BasicPopulation. How easy? It turns out to be one line. It was a little harder to make one other element of the simulation work with it, but that wasn't too bad either. Now I'm testing it.

Why do I care? My hope is that this will have a very helpful impact on the end of the simulation when all the particles are tightly packed. Without the constant acceleration force the particles won't all be given a kick in the downward direction. Instead, they will follow a true parabola with a derivative of zero at the apex. This should provide much nicer numerical behavior at that point and allow me to take larger time steps because I don't have to worry about the numerical accuracy of the integration.
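The idea can be pictured with this hedged Scala sketch (the names are illustrative, not the code's actual API): instead of kicking the velocity and drifting on a straight line, the particle advances along the exact parabola.

```scala
// Hedged sketch: advance a particle along its exact parabolic trajectory
// under constant acceleration, rather than kick-then-drift integration.
case class State(x: Double, v: Double)

def advanceParabolic(s: State, a: Double, dt: Double): State =
  State(
    s.x + s.v * dt + 0.5 * a * dt * dt, // exact position on the parabola
    s.v + a * dt                        // exact velocity
  )

// At the apex the velocity passes smoothly through zero -- no spurious
// downward kick that collisions then have to cancel out.
val apex = advanceParabolic(State(0.0, 9.8), -9.8, 1.0)
println(apex)
```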

Now if I can just get that locking graph completed so I can drop a marble on the final configuration.

Monday, July 12, 2010

Two Argument Recursion

So today Cameron Swords and I talked through a possible solution to the problem of the last post. The solution involves using the tree to build a graph of "lock nodes". These are nodes in the tree that we use as locks, instead of having a grid of locks, to tell us if a collision is safe. In theory this method should work quite well. It will be as dynamic and optimized as the tree and have little additional overhead. For simulations with a size distribution and a dynamic distribution of particle relative velocities, it should provide optimal locking to keep the simulation running nicely on multiple threads.

One little detail of this method that I felt was worth commenting on was the creation of this graph. The "lock nodes" need to have some basic properties. There should be one on the path between the root and any leaf. It could be the leaf. (In theory it could be the root, but that would suck because there would be no parallelism in the collisions.) We would also like to use nodes that are as far down the tree as possible without making the graph too dense. The edges in the graph connect nodes that are sufficiently close that it isn't safe to do collisions in both at the same time. So we want nodes that have a size on the order of the search distance for collision pairs. Identifying these nodes isn't that hard. It can be done with a nice recursive descent. Once we have them we would have to build the graph, which could be done easily in code by doing a recursive descent for each of the lock nodes. Such an approach is easy to describe and would be O(n log n) with a coefficient that isn't huge, so it wouldn't be the limiting factor in the simulation. However, there is a better way.

This better way, as the title of this post alludes to, is to do recursion over two arguments. In this way, we can identify the lock nodes and build the graph all in a single recursive call. However, that recursive call now recurses on two arguments and basically runs through pairs of nodes. The advantage is that it prevents doing work on the top portion of the tree multiple times and should provide something close to the absolute minimum work needed to identify all of the pairings needed for the graph.
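Here is a hedged, one-dimensional Scala sketch of the idea; the real code works on a kD-tree with a distance criterion, so the node layout and pruning test below are simplified stand-ins.

```scala
// Hedged sketch of two-argument recursion: walk pairs of tree nodes at
// once, pruning whole subtree pairs that are too far apart to interact.
sealed trait Node { def min: Double; def max: Double }
case class Leaf(id: Int, min: Double, max: Double) extends Node
case class Branch(left: Node, right: Node) extends Node {
  val min = math.min(left.min, right.min)
  val max = math.max(left.max, right.max)
}

def closePairs(a: Node, b: Node, reach: Double): List[(Int, Int)] = {
  // Prune: if the bounding intervals are farther apart than the search
  // distance, no pair inside them can interact.
  if (a.min - b.max > reach || b.min - a.max > reach) Nil
  else (a, b) match {
    case (Leaf(i, _, _), Leaf(j, _, _)) => if (i < j) List((i, j)) else Nil
    case (Branch(l, r), _) => closePairs(l, b, reach) ::: closePairs(r, b, reach)
    case (_, Branch(l, r)) => closePairs(a, l, reach) ::: closePairs(a, r, reach)
  }
}
```

Because the pruning check runs on whole subtree pairs, the top of the tree is traversed once per argument rather than once per lock node, which is where the savings over repeated single-argument descents comes from.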

This isn't the first time I have used this type of method. In fact, it is the fourth time and the third time in this particular simulation code. I first used the method for my implementation of a symplectic tree code that I have worked on with Hal Levison. In that code the interactions occur only between nodes on the same level of the tree. As such, the two argument recursion makes great sense. I had never seen it used before and still haven't, so for all I know, it hasn't been widely discovered as an approach for numerical work.

My second use was in the collision finding in my main simulation code when a tree is used for finding the collisions. More recently I updated gravity to use it as well. In the case of gravity, switching to this approach dropped the number of distance calculations and improved code performance by a factor of 2-3. Now it looks like I'll be using it a fourth time to find pairs of lock nodes that should have edges between them in the gravity/collision tree.

As a general method, what this approach does is efficiently run through and identify pairs of nodes in a tree when there is a criterion that can eliminate subtrees from consideration. In my work, the criterion is a distance cutoff, but it could be other things. This could also be generalized to recursion over three or more arguments for finding triples, etc. What makes it efficient is that the recursion doesn't pass over the top level nodes doing the check for inclusion multiple times. Instead, it goes through them once for each argument that it is recursing on and then goes on down to other pairs where the check is more significant.

(Interested readers might wonder how one could make a single recursive call like this operate in parallel. If anyone requests a description of how we have found to do that in a way that leads to generally good load balancing, I'd be happy to post it at that time.)

Collision Dynamics and Granular Flows

So my main field of research is numerical simulations of planetary rings. I keep a page on this work on my main site. Recently I've started working with Josh Colwell at the University of Central Florida on a slightly different project. Josh's project is intended to look at particle collisions to help develop our understanding of accretion in the early stages of planet formation. My role is to do some simulations to compare to the experiments.

Originally I was just going to do simulations of particles orbiting in heliocentric space. Another code was going to be used to do direct comparisons with the experiments. However, this summer I decided I could probably do some of that work with my code. I've been working on that and have posted a few movies showing this work. This morning, though, I hit a snag that will take some creative thinking to get around. The problem came when I added a larger impactor body into the simulation to strike the pile of glass spheres that had accumulated. Adding this larger impactor caused some problems for the parallel locking mechanism that the code uses. First, the code assumes a fairly 2-D distribution of particles. That's a rather safe assumption for rings. It isn't a very good one for these simulations. It also uses a uniform grid, which gets really coarse when I introduce the large particle.

The code is using a kD-tree to efficiently find collisions. That structure is very dynamic. The nodes have good information on relative velocities and the sizes of particles in them. That makes it very good for non-uniform distributions and also for 3-D simulations. However, the locking mechanism doesn't use the tree. This is because locking needs to be very fast. Grids are O(1) with a small coefficient. Trees are O(log n) with a higher coefficient. We don't want the code that checks if collisions are safe to take more time than the code that actually processes the collisions. So the question is, how can we get around this? That will be my creative thinking exercise for the day.

Sunday, July 11, 2010

First Post

Welcome to the "Dynamics of Programming" blog. This is my first step into blogging. It was somewhat suggested by a former student of mine, so I'm trying it out. The title of the blog deals with two of my significant interests, dynamics and programming. I do research work on planetary dynamics through numerical simulations. I also have a strong interest in programming, programming languages, and teaching programming, as my position is Associate Professor in the Department of Computer Science at Trinity University. While this blog will largely be my personal musings and a place to collect some ideas, perhaps something will appear occasionally that other people find worth reading.