∇⋅ B = 0

This is the condition that there is no divergence in the magnetic field, which in physical terms means that there are no magnetic charges. That is, we don't have particles floating about that create magnetic field lines. All of the magnetic flux that enters a given volume must leave the volume. Mathematically this is usually straightforward (well, sort of, except for those who have had to suffer through Jackson), but when it comes to computer simulations it turns out that this condition is very hard to preserve. There are several different methods used to preserve this condition, and for those who are interested there is an excellent paper by Gábor Tóth that describes many of these methods (Gábor Tóth, Journal of Computational Physics, 161, 2, 2000, pages 605-652).
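To see why this is hard on a grid, here is a minimal sketch (the setup and names are mine, not from any particular code): take a 2D field that is divergence-free analytically, measure its discrete divergence with central differences on a periodic grid, and watch truncation error appear anyway.

```python
import numpy as np

n = 64
dx = 1.0 / n
x, y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")

# B = curl(psi z_hat) with psi = sin(2*pi*x) * cos(4*pi*y), so div(B) = 0 exactly
bx = -4 * np.pi * np.sin(2 * np.pi * x) * np.sin(4 * np.pi * y)   # d(psi)/dy
by = -2 * np.pi * np.cos(2 * np.pi * x) * np.cos(4 * np.pi * y)   # -d(psi)/dx

# Central-difference divergence on the periodic grid
div_b = ((np.roll(bx, -1, axis=0) - np.roll(bx, 1, axis=0))
         + (np.roll(by, -1, axis=1) - np.roll(by, 1, axis=1))) / (2 * dx)

print(np.max(np.abs(div_b)))  # not zero: pure discretization error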

While I am not going to discuss how these methods work, I will note that they are all imperfect. Because of the fundamental way simulations and computers work, there will always be some error, which means the divergence condition will never be met exactly. Dealing with this usually involves making the error from the numerical solver as small as possible. Some ways of solving the MHD equations can make the error *very* small, but it comes at the cost of making the code run slower. For example, the projection method may be better than other methods in almost all situations, but it comes at the cost of a 20% increase in the time needed compared to the base method for the hydro equations (for reference, of the seven methods investigated by Tóth, the next slowest only added 7% to the time).

But the question is: how do we test the accuracy of these different methods? Well, the easiest way is to have them solve a problem whose answer we already know analytically and then see how well the code does at getting the correct answer. While many codes will do well with a simple problem, the idea is to pick a very difficult problem and see how the code handles it.

One such problem is the Smooth Alfvén Wave test. This test takes a polarized magnetic field and has it propagate across the grid. With this special setup it will create standing Alfvén waves. Because of the nature of Alfvén waves and how they interact with the material, for this particular test the density should not change and the wave should not dissipate. That is, after one period the entire system should return to its original state. Any change in the density, energy, velocity, or magnetic field is an indication of the errors introduced by the numerical solver. By knowing the initial conditions we can then calculate the change and therefore the error.
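The error measure amounts to comparing the state after a whole number of periods against the initial state. A minimal sketch of that comparison (the function name and toy fields are mine, not any code's actual diagnostics):

```python
import numpy as np

def l1_error(initial, final):
    """Mean absolute deviation from the initial state, per cell."""
    return np.mean(np.abs(final - initial))

# Toy example: a 'density' field that should have returned to uniform,
# but came back with a small sinusoidal ripple from the solver
rho0 = np.ones((64, 64))
rho1 = rho0 + 1e-4 * np.sin(np.linspace(0, 2 * np.pi, 64))[None, :]
print(l1_error(rho0, rho1))
```

The same norm applied to velocity, energy, and magnetic field gives a single error number per variable, which is what makes the test easy to compare across methods.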

So in a real simulation, what would this look like? Well something like this:

This is a video of the Smooth Alfvén Wave test, with a circularly polarized magnetic field. This is a 2D simulation (well, 2.5D, but anyway...) of a constant-density slab of material with constant pressure. The wave propagates from the top right corner towards the bottom left. The color of the slab indicates the density. The initial density is grey, so any other color you see in the slab indicates an error in the solver (there is a color legend in the video). The arrows indicate the velocity (their color does not mean anything; I just made them pink so that you could see them). The velocity points in the same direction as the magnetic field, so the arrows also show the magnetic field. The test runs for five periods.

As I mentioned, any color, red or blue, that you see in the slab indicates a numerical error, but while it may look like this was a bad simulation, note that the difference is only 4.3×10⁻⁴. This is still small and acceptable for most applications. If I ran it for longer, the errors would grow. If I started adding in other things, again the errors would grow. In this case I can compare the result to the analytic solution, but where there is no analytic solution we have to rely on either proven methods, better resolution, or, in the worst case, just assuming that the errors are small (shudder).
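One way "better resolution" gets quantified: run the same test at two resolutions and back out the convergence order. A two-line sketch, where the coarse error is the 4.3×10⁻⁴ figure above and the fine-grid error is a made-up illustrative value:

```python
import math

err_coarse = 4.3e-4   # L1 error at N zones (the figure quoted above)
err_fine = 1.1e-4     # hypothetical L1 error at 2N zones

# For a p-th order method the error should drop by a factor of 2**p
# when the resolution doubles, so p = log2(err_coarse / err_fine)
order = math.log(err_coarse / err_fine, 2)
print(f"estimated convergence order: {order:.2f}")
```

If the estimated order matches what the scheme advertises, the errors are behaving as pure truncation error rather than something pathological.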

This simulation was run in Athena, which uses a constrained transport method to address the ∇⋅ B = 0 problem, and it took about 3 minutes to run on my laptop. I rendered it in ParaView, which, after a few minutes of setup, only took 2 or 3 minutes to render.

It's always interesting to see all the clever tests people come up with to test their code.

I for one would love to hear of the discovery of a magnetic monopole. But then again, the LHC is hinting that the universe isn't going to give any clues outside of the standard model and possibly dark matter.

This is an interesting problem, especially for finite-difference codes in Cartesian domains. We avoid this problem entirely by using a poloidal-toroidal decomposition in our pseudo-spectral code. Basically we define our magnetic field in terms of two "magnetic potentials" so that

\vec{B} = \nabla \times (A \hat{r}) + \nabla \times \nabla \times (C \hat{r})

A and C are our evolution variables, so by definition our magnetic field is always exactly divergence-free.
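For what it's worth, the "divergence-free by construction" property is easy to verify symbolically. A minimal SymPy sketch, using a Cartesian ẑ as a stand-in for the r̂ in the comment above (the identity ∇⋅(∇×F) = 0 does not depend on that choice):

```python
from sympy import Function, simplify
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')

# Arbitrary smooth scalar potentials, standing in for the A and C above
A = Function('A')(N.x, N.y, N.z)
C = Function('C')(N.x, N.y, N.z)

# B built purely from curls, as in the poloidal-toroidal decomposition
B = curl(A * N.k) + curl(curl(C * N.k))

print(simplify(divergence(B)))  # identically zero for any A and C
```

Since every term of B is a curl, the divergence vanishes identically, before any discretization enters the picture; in a pseudo-spectral code the derivatives are also computed to spectral accuracy, so the property survives numerically.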

QL42 & NN,

This post reminds me of the old days when I had to struggle with ∇⋅V = 0 for complex components with varying surface resistivity. I did most of the computation by hand (and it was a pain).

It was really good to review Maxwell at Wiki.

Nick,

In the Tóth paper he mentions this as a possibility, but the reason for using the other, less accurate, methods is the nature of the problems being solved. Many of these problems involve strong shocks, where it would be impossible to use the vector potential. The same is true in some astronomical situations where the magnetic field is mostly random and highly volatile and the vector potential is unknown or practically impossible to find. And even if it can be found numerically, it just adds a second derivative to the problem.

QL42,

In pseudo-spectral codes additional spatial derivatives are extremely simple to compute, so the higher-order derivatives don't bother us. Also, we don't have any shocks - stellar interiors are pretty smooth domains - so that's not a problem. As you mentioned, it really depends on the type of problem you would like to solve and the numerical method you choose.