Saturday, July 31, 2010

The mother of all F ring simulations

There was a paper published recently on new observations of the F ring. The F ring is a very dynamic narrow ring a bit outside of Saturn's main rings. The main reason it is so dynamic is that two moons, Prometheus and Pandora, orbit near it. I have done some of my own work on the F ring because it is a fun system to simulate. I started with simulations that included just Prometheus. Even with just Prometheus on an eccentric orbit the simulation is challenging, because it forces a fairly large cell size. The cell needs to be sized so that the full cell drifts past Prometheus in one orbit of the moon; that way the perturbations on the two edges match and wrapping doesn't create a discontinuity.
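To make that boundary condition concrete, here is the back-of-the-envelope version as a sketch. The ~800 km radial separation to Prometheus is an illustrative number, not one pulled from the actual run.

```scala
// Keplerian shear: material offset radially by da drifts azimuthally at about
// 1.5*Omega*da relative to the moon, so over one orbital period (2*Pi/Omega)
// the drift is 3*Pi*da. Setting the azimuthal cell length to that makes the
// whole cell slide past Prometheus in one orbit of the moon.
val deltaA = 800.0                                // km, radial offset to Prometheus (assumed)
val azimuthalCellLength = 3 * math.Pi * deltaA    // roughly 7,500 km of azimuthal extent
```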

If you only have Prometheus though, you create a ring that is perfectly periodic, and the real F ring isn't. To break this, you need to include Pandora. The problem is that if you include Pandora you lose any chance of having a small cell. Prometheus and Pandora are not in a mean motion resonance, so to have proper boundaries the simulation has to be global. This means that it needs to go all the way around Saturn. A few years ago I spent some computer time doing a global F ring simulation with Prometheus and Pandora, both on eccentric orbits.

The movie shows an initially uniform ring getting kicks from the moons. The kicks from Prometheus are larger than those from Pandora because of size and proximity. It shows how collisions in the compression regions lead to negative diffusion and cause the ring material to move into a tight core. It also shows fairly complex structures forming.

While significant, this simulation has three major limitations. First, the radial extent of the particles is too small. Second, the simulated ring is circular when in reality the F ring is eccentric. Third, there is no self-gravity between the ring particles. I have tried at times to address different parts of this. In fact, I ran an eccentric ring simulation for a while. However, the guiding center coordinates we normally use do not handle this properly and I got an artificial precession of the ring relative to the moons. Now I am giving it another shot and trying to run what you might call the mother of all F ring simulations. First, I spread out the particle distribution a fair bit. It is now a Gaussian that covers roughly 100 km radially. Second, I'm using an extended form of the guiding center coordinates that Glen Stewart derived, as they should handle the eccentric ring properly. Third, everything is self-gravitating. Even the moons are normal particles with normal gravity and collisions.
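As a minimal sketch of what the wider radial spread means in code (the central radius and the sigma here are illustrative assumptions, chosen only so that a few sigma spans roughly 100 km):

```scala
import scala.util.Random

val rng = new Random
val ringRadius  = 140200.0   // km, roughly the F ring core (assumed for illustration)
val radialSigma = 25.0       // km, so ~4 sigma covers about 100 km radially

// Draw an initial radial position for one particle from the Gaussian profile.
def initialRadius(): Double = ringRadius + radialSigma * rng.nextGaussian()
```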

This simulation has 50 million particles in it. The downside of doing that with full self-gravity is that even though I am running it across 12 machines and 96 cores, it is taking about 1.5 days to complete an orbit. At that rate, it will complete about two synodic periods in a year. From earlier work I know that I need 3-4 synodic periods before the system gets out of the transient state. Maybe I need to ask someone for some faster computers. :)
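Spelling out that estimate, and assuming the 1.5 days per orbit rate holds for the whole run:

\[
\frac{365\ \text{days}}{1.5\ \text{days/orbit}} \approx 243\ \text{orbits/year} \approx 2\ \text{synodic periods}
\;\Rightarrow\;
3\text{--}4\ \text{synodic periods} \approx 1.5\text{--}2\ \text{years of wall-clock time}.
\]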

Tuesday, July 27, 2010

Ring Dynamics Work

While the summer session has just about ended for my research students, there continue to be interesting results that they are documenting on the wiki. Cameron is doing timing results on the lock graph. He is comparing it to the older lock grid for several different simulations. The results aren't exactly what would have been predicted. He'll finish out the week looking at some other systems.

Crosby has gone back to looking at the silicate hiding simulations. He had looked at the A ring previously. Now he is looking at the B ring simulations. There are unexpected results here as well. In the A ring he found that the fraction of light reflected off silicate particles was only about 25% of the silicate fraction actually present. So if 4% of the particles were silicate, only 1% of the light would be coming off silicate particles. This 25% factor was consistent across the 4%, 2%, and 1% silicate fractions. The B ring simulations are confusing because the fraction isn't constant there. A lot more exploration is needed here. He'll also have to look at the C ring simulations that have been done.

Lastly, I continue to look at moonlet stability. On a suggestion from Matt Tiscareno I looked at stability without the background. This showed me that I had pushed my time step too high based on work I did earlier this summer looking for optimal speed. However, there is still a story here. I found the break point in density at which a lattice of equal-sized particles is stable at 130,000 km. This is a higher density than what was found by Porco et al. (2007) for a bunch of rubble on a single large core. I took the lowest density value for which I got a stable moonlet alone and dropped it into the background. The simulation hasn't gone all that far, but it is already apparent that the moonlet is being broken up by collisions. Here again there is a lot more work to do. In fact, it looks like this might be an area that Crosby uses for his senior thesis. It has the advantage that the simulations looking at stability of moonlets without background are fairly small and he can run a bunch of them. He has done a little bit to document the experiments that he will look at running.

Wednesday, July 21, 2010

Design of ScalaVis

I spent some time this weekend working on the design of ScalaVis. The goals of the project are to include all the capabilities currently in SwiftVis in the GUI and also to enable scripting or interactive usage with the REPL. The SwiftVis design goals of a free, platform independent data analysis/exploration/plotting package with decent speed pointed nicely to Java. However, Java doesn't include a REPL or real scripting ability. Scala does. While Scala can call Java code seamlessly, it isn't possible to easily retrofit scripting onto SwiftVis because so many of the settings are completely encapsulated and can only be accessed through the GUI.

The design I came up with leverages the functional nature of Scala and data-flow programming. There is a concept of Elements, which are basically functional data transformers. You can script or work interactively with Elements. On top of that sit Builders. As the name implies, Builders are supposed to build Elements. They will go into the GUI and have graphical representations and GUI components that enable altering settings.
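To make that split a bit more concrete, here is a rough sketch of how it could look; the names and signatures are my illustrative guesses, not the actual ScalaVis API.

```scala
// Data flows through Elements, which are just functions on sequences of data.
case class DataElement(values: Array[Double])

trait Element {
  def apply(input: Seq[DataElement]): Seq[DataElement]
}

// A Builder lives in the GUI, holds user-editable settings, and builds Elements.
trait Builder {
  def build(): Element
}

// Example: a filter Element built from an editable predicate setting.
class FilterBuilder(var keepIf: DataElement => Boolean) extends Builder {
  def build(): Element = new Element {
    def apply(input: Seq[DataElement]): Seq[DataElement] = input.filter(keepIf)
  }
}
```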

I have also been working on making the GUI construction aspect easier. The Scala Swing library helps with this to a certain extent. However, I am trying to push it further so that there are more reusable pieces and people have to create fewer components on their own.
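As one illustrative example of the kind of reusable piece I have in mind (a sketch only, not code from the project): a small labeled text field that a Builder's settings panel could drop in without wiring up the layout by hand.

```scala
import scala.swing._

// A tiny reusable settings control: a label and a text field packaged together.
class LabeledField(labelText: String, initial: String) extends BoxPanel(Orientation.Horizontal) {
  private val field = new TextField(initial, 10)
  contents += new Label(labelText)
  contents += field
  def text: String = field.text
}
```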

Monday, July 19, 2010

Disrupting Moonlets

Lots of simulation and programming stuff going on here. Some I'll be showing tomorrow or so. Others I'll post about later today. The cool result of the morning though is that our moonlets keep getting busted up. Over the weekend I did a simulation where the moonlet was set to model a density of 0.5 g/cm^3 (particle internal density of 0.5/0.7 g/cm^3) with larger pieces in the lattice. It still got sheared out. So next up was to go back to the smaller particles with a higher density. I started a simulation with a lattice modeling 0.6 g/cm^3. When I came in this morning I loaded it up and while the moonlet is holding together better than before, it isn't going to make it. It is still falling apart.

This is very significant because those particles have an internal density of 0.85 g/cm^3. That is close to the density of solid ice, yet they aren't holding together against the impacts of the outside size distribution. Next up is modeling 0.7 g/cm^3 with a particle density of 1.0 g/cm^3. If that one doesn't hold together either, you pretty much have to say that internal strength is playing a role in holding these things together.

Friday, July 16, 2010

ScalaVis

This year I have become really fond of the Scala programming language. It is a better Java than Java with functional elements that make it highly expressive. I expect that there will be a lot more posts on this blog related to Scala in the future. Also, for the last decade or so, I have been working on a data analysis, exploration, and plotting package to use with N-Body simulations called SwiftVis (page, Wiki). I am very fond of SwiftVis and use it for all of the data analysis on my ring simulations. It is also used by a number of other researchers and some of my former students.

The original design goals of SwiftVis included that it be free, multiplatform, have fairly good speed, and not require users to program. The first three requirements made Java an obvious choice for the development. The last requirement, while very beneficial, can be a limitation if taken too far. SwiftVis is very flexible and easy to extend. However, the only way to interact with it is through the GUI. Many of the components in SwiftVis could be more useful than they currently are if they weren't locked into this one GUI.

While there are many things I like about Scala, one of the things I caught on to later, especially as it pertains to education, was the power offered by the fact that it has a REPL and scripting capabilities. It also has the ability to call Java code directly. Putting all of these things together, I've been picturing in my head a new program that has all the power and extensibility of SwiftVis, but written in Scala in a way that also brings in serious programming interactivity. I'm also hoping to simplify the way in which GUI controls would be written.

I intend to keep the current data flow system that is in SwiftVis in this new work. One of the big questions is how to deal with the data. SwiftVis sends everything around as DataElements. A DataElement has an array of ints and an array of floating point numbers. Having the ints is a bit of a pain in the code, takes memory, and in practice doesn't add all that much. On the other hand, there have been times when it might have been nice to have other types of data in the elements. My current thought is to start with just having double values. I'll have to do some memory testing to see if it is better to use a Vector in Scala or if I should create my own type that wraps an array to make it immutable. The answer to that depends on whether the Vector class will compile to primitives the way the array does.
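Here is a minimal sketch of the array-wrapping option, just to show the shape of it. The DataRow name is a placeholder, and this is illustrative rather than a decision about the final design.

```scala
// Immutable view over a primitive Double array: storage stays unboxed, but
// callers can't get at the backing array to mutate it.
final class DataRow private (private val vals: Array[Double]) {
  def apply(i: Int): Double = vals(i)
  def length: Int = vals.length
}

object DataRow {
  // Copy on construction so no outside reference to the backing array survives.
  def fromArray(a: Array[Double]): DataRow = new DataRow(a.clone)
  def apply(values: Double*): DataRow = new DataRow(values.toArray)
}
```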

At this point I have pretty much convinced myself that this is a project I would enjoy and that is worth my time. Now I just need to play with it some to get the design off the ground.

Numerical Significance of Coefficient of Restitution

Long ago I learned that my simulation code didn't like using a fixed coefficient of restitution. It was fine with the Bridges et al. velocity dependent coefficient of restitution, but if I used a fixed value of 0.5 it would run much slower and potentially grind to a halt processing a huge number of collisions. This didn't bother me with rings because I consider Bridges et al. to be more realistic anyway. However, I started using a fixed value in my work with Josh, and as such, I needed a way around this.

The actual problem, it turns out, is that at very low velocities the Bridges et al. coefficient of restitution goes to 1 and collisions become conservative. So when particles go into a "rolling" state, they actually do little bounces and the size of the bounces stays fixed. With a fixed coefficient of restitution, by contrast, the bounce size keeps decreasing and we get a Zeno-like behavior where the bounce frequency grows without bound. So the fix was to make it so that if the velocity is sufficiently small, the coefficient of restitution goes to 1.0. I made this change fairly early on this summer for particle collisions. However, I hadn't made it for the bounces off the surfaces in my work with Josh. I learned last night that was a bad idea.
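A sketch of that fix in code. The Bridges et al. form and scale velocity below are the commonly quoted values and the cutoff velocity is an arbitrary illustrative choice; neither is necessarily exactly what my code uses.

```scala
// Velocity dependent coefficient of restitution with a low-velocity cutoff.
// Below vMin we return 1.0 (a perfectly elastic bounce) so slow "rolling"
// contacts don't collapse into an ever-faster cascade of tiny collisions.
def restitution(vImpact: Double): Double = {
  val vc   = 7.7e-5   // m/s, Bridges et al. scale velocity (assumed value)
  val vMin = 1e-6     // m/s, cutoff below which bounces are treated as elastic (assumed)
  if (vImpact <= vMin) 1.0
  else math.min(1.0, math.pow(vImpact / vc, -0.234))
}
```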

In my setup simulation where I pack the spheres into the proper shape, I found it would get a lot slower toward the end of the simulation. That trend continued when I put in the marble and let it drop. Last night I decided I'd try altering the coefficient of restitution with the walls in the same way I had done with particle collisions to see if that helped. It worked wonderfully. Without the change the number of collision checks went up with every step and was getting way out of control. (I was seeing numbers over 300 million.) With the change the collision checking numbers actually went down a little bit and the simulation continues to run quickly instead of slowly grinding down.

Sometimes it is good to push a simulation code in different directions. It helps you understand its behavior better and can show you optimization possibilities that you hadn't known existed.

Thursday, July 15, 2010

Density/Strength Requirements on Moonlets

So as my last post indicated, one of my most recent moonlet simulations had the interesting result that the moonlet fell apart within an orbit. This wasn't at all expected and it was something I hadn't seen before, so it raises the question of why it happened. My first work with moonlets couldn't have this happen because the moonlets were single large particles. Having so many small particles resting on a single large particle, though, is computationally intensive. As such, when I started doing higher resolution simulations I switched to making the moonlet out of a lattice of smaller particles. They are bigger than the background particles and have their internal density enhanced by a factor of 1/0.7 to account for the fact that such a lattice doesn't completely fill the space the moonlet would have.
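Written out, that enhancement is just a filling-factor correction, where the ~0.7 is the assumed fraction of the moonlet's volume that the lattice spheres actually fill:

\[
\rho_{\text{particle}} = \frac{\rho_{\text{moonlet}}}{f}, \qquad f \approx 0.7,
\]

so a moonlet modeled at 0.5 g/cm^3 is built from particles with a density of 0.5/0.7 ≈ 0.71 g/cm^3.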

This had been working fine in all of my earlier tests. So what changed? I lowered the density of the particles in the lattice moonlet. Earlier I had things set up to model a moonlet with a density of 0.8 g/cm^3. So the individual particles had a density of 0.8/0.7 g/cm^3. That makes them fairly high density and their gravity holds them together. For this most recent simulation, the lattice was made of smaller particles and it was set to model a moonlet with internal density of 0.5 g/cm^3. So the actual particles have a density of 0.5/0.7 g/cm^3.

I'm currently running another simulation with this lower density and a bigger moonlet. Stay tuned to hear if it falls apart. (Early indications are that it will.) Hopefully I'll know by tomorrow. If not, I'll definitely know by Monday. If it does fall apart, that will begin a search for what parameters are required to make it hold together.

The more interesting question is, what does this say about real moonlets? The simulation uses perfect, smooth spheres with no adhesion. So the moonlet has no internal strength. Only gravity is holding it together. What these simulations are indicating is that the range of what gravity alone can hold together might be narrower than many people expect. Collisions might be more efficient at breaking things apart than had previously been expected. So are real moonlets higher density? Do they have real internal strength instead of being just rubble piles? What would either of these say about the formation scenarios for moonlets?

Update: The moonlet did indeed get broken up and start to shear out. Now I'm trying a moonlet using a lattice of larger particles. If that also gets destroyed then I will begin a search for the required internal density.

Moonlet Surprise

So Crosby has been working on doing data analysis on the higher resolution moonlet simulations that I had done over the last two years or so. These simulations use smaller particles in the background and were intended to help answer the question of exactly what we are seeing with small propellers in the rings. He has made some movies that I will put up on the rings research page under the second moonlets paper. We have seen huge differences in what appears based on the size of the moonlet. Yesterday we got another surprise as well. I had started a new simulation that had a size distribution in the hopes that we could get a better feel for how that impacts things. I made some other changes to this particular simulation as well. I upped the surface density and lowered the internal particle density to try to better match what other people have been using. The result: the moonlet fell apart.

That last bit needs some explaining. For numerical reasons I stopped using a single large particle for my moonlet a while back. Instead, I use a lattice of spheres with an enhanced density so the total mass matches that of a single sphere. In my other simulations this has worked great and the lattice holds together. In this one, it got knocked to pieces in just one orbit. We haven't worked out all the implications of this, but here are a few possibilities.
  • The internal density of constituent parts of moonlets has to be high enough so this doesn't happen.
  • The moonlets can't be loose rubble piles. They have to have physical strength.
  • The constituent pieces of a moonlet have to be significantly larger than the top end of the surrounding size distribution.
Granted, it could be some combination of these. In this simulation the moonlet was fairly small too, so I need to run a larger moonlet without varying anything else and see if it also gets broken up.

Wednesday, July 14, 2010

Lock Graph Implementation

So I got the "lock graph" implemented today. Crosby has also found some cool things about moonlets. I'll have to write that up in more detail later. I also started a Wiki for my research students. I'm optimistic that might make my life easier when it comes to keeping track of what they are doing.

Tuesday, July 13, 2010

Constant Acceleration Population

I couldn't sleep this morning. Woke up around 4am or so and just couldn't get my mind back to rest. That will make for a less than ideal day, but it had one significant positive. My brain was working on some useful ideas. The main one, and the one that finally pulled me out of bed to code it up, was to add a constant acceleration to my BasicPopulation. That sentence probably needs a bit of explanation.

My simulation code is rather modular. One of the more significant modules is an element called a population. The population keeps track of the bodies in the simulation and has logic in it to handle their movement, find collision times, process collisions, etc. I basically have two populations that I use. My ring simulations use what I call the GCPopulation. GC stands for guiding center. In this population, the particles don't move on straight lines. Instead, they follow guiding center motion, which is a first order analytic solution to Hill's equations. This complicates collision finding, but has benefits that make it well worthwhile otherwise. Basically, the gravity of Saturn is implicit.
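For readers who haven't seen it, here is a sketch of what guiding center (epicyclic) motion looks like as an update rule. The variable names, sign conventions, and structure are mine for illustration, not the actual GCPopulation code.

```scala
// First-order solution of Hill's equations: the particle drifts with the
// Keplerian shear of its guiding center and moves on a 2:1 epicyclic ellipse
// around it. x is the radial offset from the cell center, y is azimuthal.
case class GuidingCenter(x0: Double, y0: Double, e: Double, phi: Double)

def position(gc: GuidingCenter, omega: Double, t: Double): (Double, Double) = {
  val x = gc.x0 - gc.e * math.cos(omega * t + gc.phi)
  val y = gc.y0 - 1.5 * omega * gc.x0 * t + 2.0 * gc.e * math.sin(omega * t + gc.phi)
  (x, y)
}
```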

For general research purposes, when I want to look at other types of systems, I have created a "BasicPopulation". In this population particles travel on straight lines between collisions within a time step. I was using this for the simulations for Josh, with a constant acceleration applied as a second force in addition to the collision forcing.

This constant acceleration causes some numerical issues that are best resolved by taking small time steps; as is common in numerical integrations, the integration is more accurate with smaller steps. The simulation also really grinds at the end when the particles are sitting right on top of one another in a close packed configuration. Part of the reason it grinds in this configuration is that the gravity is applied as a kick at the beginning of the time step, so all the particles are given a downward motion which then has to be canceled out by collisions propagating back up through them.

The thing about constant acceleration is that you really don't need a numerical integrator. There is a nice analytic solution to the motion: a parabola. So the idea was to create a population that uses that motion somewhat like the GCPopulation uses guiding center motion. When I turned this around in my head a bit (and didn't fall back asleep as a result) I realized that I didn't even need a new population. It's an easy addition to the BasicPopulation. How easy? It turns out to be one line. It was a little harder to make one other element of the simulation work with it, but that wasn't too bad either. Now I'm testing it.
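In spirit, the change amounts to something like the drift step below. This is a sketch with made-up names rather than the actual BasicPopulation code, and it only shows a single vertical acceleration.

```scala
// Advance a particle between collisions on the exact parabola for a constant
// vertical acceleration az, instead of kicking the velocity at the start of
// the step and then drifting on a straight line.
case class State(x: Double, y: Double, z: Double, vx: Double, vy: Double, vz: Double)

def drift(s: State, dt: Double, az: Double): State =
  s.copy(
    x  = s.x + s.vx * dt,
    y  = s.y + s.vy * dt,
    z  = s.z + s.vz * dt + 0.5 * az * dt * dt,  // parabolic motion in z
    vz = s.vz + az * dt                         // velocity at the end of the step
  )
```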

Why do I care? My hope is that this will have a very helpful impact on the end of the simulation when all the particles are tightly packed. Without the constant acceleration being applied as a kick, the particles won't all be given a jolt in the downward direction. Instead, they will follow a true parabola with a derivative of zero at the apex. This should provide much nicer numerical behavior at that point and allow me to take larger time steps, because I don't have to worry about the numerical accuracy of the integration.

Now if I can just get that locking graph completed so I can drop a marble on the final configuration.

Monday, July 12, 2010

Two Argument Recursion

So today Cameron Swords and I talked through a possible solution to the problem from the last post. The solution involves using the tree to build a graph of "lock nodes". These are nodes in the tree that we use as locks, instead of having a grid of locks, to tell us if a collision is safe. In theory this method should work quite well. It will be as dynamic and optimized as the tree and have little additional overhead. For simulations with a size distribution or a dynamic distribution of particle relative velocities, it should provide optimal locking to keep the simulation running nicely on multiple threads.

One little detail of this method that I felt was worth commenting on was the creation of this graph. The "lock nodes" need to have some basic properties. There should be one on the path between the root and any leaf. It could be the leaf. (In theory it could be the root, but that would suck because there would be no parallelism in the collisions.) We would also like to use nodes that are as far down the tree as possible without making the graph too dense. The edges in the graph connect nodes that are sufficiently close that it isn't safe to do collisions in both at the same time. So we want nodes that have a size on the order of the search distance for collision pairs. Identifying these nodes isn't that hard. It can be done with a nice recursive descent. Once we have them we would have to build the graph, which could be done easily in code by doing a recursive descent for each of the lock nodes. Such an approach is easy to describe and would be O(n log n) with a coefficient that isn't huge, so it wouldn't be the limiting factor in the simulation. However, there is a better way.
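That identification step might look something like the sketch below. The Node trait and its fields are stand-ins for whatever the kD-tree actually provides; none of this is the real code.

```scala
// Stand-in for a kD-tree node: a spatial size, children, and a leaf test.
trait Node {
  def isLeaf: Boolean
  def size: Double          // spatial extent of the node
  def children: Seq[Node]
}

// Recursive descent: descend until a node's size drops to roughly the
// collision search distance (or we hit a leaf), and mark it as a lock node.
def findLockNodes(n: Node, searchDist: Double, out: collection.mutable.Buffer[Node]): Unit =
  if (n.isLeaf || n.size <= searchDist) out += n
  else n.children.foreach(findLockNodes(_, searchDist, out))
```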

This better way, as the title of this post alludes to, is to do recursion over two arguments. In this way, we can identify the lock nodes and build the graph all in a single recursive call. However, that recursive call now recurses on two arguments and basically runs through pairs of nodes. The advantage is that it avoids doing work on the top portion of the tree multiple times and should provide something close to the absolute minimum work needed to identify all of the pairings needed for the graph.

This isn't the first time I have used this type of method. In fact, it is the fourth time and the third time in this particular simulation code. I first used the method for my implementation of a symplectic tree code that I have worked on with Hal Levison. In that code the interactions occur only between nodes on the same level of the tree. As such, the two argument recursion makes great sense. I had never seen it used before and still haven't, so for all I know, it hasn't been widely discovered as an approach for numerical work.

My second use was in the collision finding in my main simulation code when a tree is used for finding the collisions. More recently I updated gravity to use it as well. In the case of gravity, switching to this approach dropped the number of distance calculations and improved code performance by a factor of 2-3. Now it looks like I'll be using it a fourth time to find pairs of lock nodes that should have edges between them in the gravity/collision tree.

As a general method, what this approach does is efficiently run through and identify pairs of nodes in a tree when there is a criterion that can eliminate subtrees from consideration. In my work, the criterion is a distance test, but it could be other things. This could also be generalized to recursion over three or more arguments for finding triples, etc. What makes it efficient is that the recursion doesn't pass over the top level nodes doing the check for inclusion multiple times. Instead, it goes through them once for each argument that it is recursing on and then goes on down to other pairs where the check is more significant.
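Reusing the Node stand-in from the earlier sketch, the two-argument recursion has roughly this shape. The pruning test and the leaf handling are illustrative, and handling the case where both arguments are the same subtree takes a little extra care to avoid duplicate pairs.

```scala
// Walk pairs of subtrees, prune whole pairs with one cheap test, and hand the
// surviving leaf pairs to a visitor (e.g. to add an edge to the lock graph).
def pairRecurse(a: Node, b: Node,
                tooFarApart: (Node, Node) => Boolean,
                visit: (Node, Node) => Unit): Unit = {
  if (tooFarApart(a, b)) ()                    // this entire pair of subtrees is eliminated
  else if (a.isLeaf && b.isLeaf) visit(a, b)
  else if (a.isLeaf) b.children.foreach(pairRecurse(a, _, tooFarApart, visit))
  else if (b.isLeaf) a.children.foreach(pairRecurse(_, b, tooFarApart, visit))
  else for (ca <- a.children; cb <- b.children) pairRecurse(ca, cb, tooFarApart, visit)
}
```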

(Interested readers might wonder how one could make a single recursive call like this operate in parallel. If anyone requests a description of how we have found to do that in a way that leads to generally good load balancing, I'd be happy to post it at that time.)

Collision Dynamics and Granular Flows

So my main field of research is numerical simulations of planetary rings. I keep a page on this work on my main site. Recently I've started working with Josh Colwell at the University of Central Florida on a slightly different project. Josh's project is intended to look at particle collisions to help develop our understanding of accretion in the early stages of planet formation. My role is to do some simulations to compare to the experiments.

Originally I was just going to do simulations of particles orbiting in heliocentric space. Another code was going to be used to do direct comparisons with the simulations. However, this summer I decided I could probably do some of that work with my code. I've been working on that and I have posted a few movies showing this work. This morning though I hit a snag that will take some creative thinking to get around. The problem came when I added a larger impactor body into the simulation to strike the pile of glass spheres that had piled up. Adding this larger impactor caused some problems for the parallel locking mechanism that the code uses. First, the code assumes a fairly 2-D distribution of particles. That's a rather safe assumption for rings. It isn't a very good one for these simulations. It also uses a uniform grid, which gets really coarse when I introduce the large particle.

The code is using a kD-tree to efficiently find collisions. That structure is very dynamic. The nodes have good information on relative velocities and the sizes of particles in them. That makes it very good for non-uniform distributions and also for 3-D simulations. However, the locking mechanism doesn't use the tree. This is because locking needs to be very fast. Grids are O(1) with a small coefficient. Trees are O(log n) with a higher coefficient. We don't want the code that checks if collisions are safe to take more time than the code that actually processes the collisions. So the question is, how can we get around this? That will be my creative thinking exercise for the day.

Sunday, July 11, 2010

First Post

Welcome to the "Dynamics of Programming" blog. This is my first step into blogging. It was somewhat suggested by a former student of mine, so I'm trying it out. The title of the blog deals with two of my significant interests: dynamics and programming. I do research work on planetary dynamics through numerical simulations. I also have a strong interest in programming, programming languages, and teaching programming, as my position is Associate Professor in the Department of Computer Science at Trinity University. While this blog will largely be my personal musings and a place to collect some ideas, perhaps something will appear occasionally that other people find worth reading.