
Thursday, April 19, 2012

How Science-Ready Are the Kids?

As a teacher of intro astronomy here at CU, I have the privilege of seeing the math and science readiness of some of the best and brightest Colorado has to offer the world.  While many of my students struggle with things like unit conversion and basic algebra, many are also able to apply calculus and even differential equations to astrophysical problems. So how science-ready are Coloradans? From the very smart folks at the AIP's Statistical Research Center, the answer is a resounding "average".
I'm really glad I'm not teaching intro astronomy in Mississippi.

The map looks surprisingly similar to another map I recently saw of the distribution of minority populations in the US.


With the exception of West Virginia and Nebraska, all of the other states that rate "Far Below Average" or worse have large minority populations, and all of the states that rate "Far Above Average" except New York don't. 

Another map looks pretty similar too, this one of the average income per household.
With the exception of California and Nevada (note that this is 2008 data, so the full impact of the recession hasn't been included), all of the under-performing states are green or blue, while all of the over-achievers are yellow or red, with the exception of Indiana and Maine.

Wednesday, November 2, 2011

Do Cars and Construction Equipment Discourage Women in Physics?

Physics has a gender problem and to see an example you need look no further than the list of this blog's authors to your left.  You'll note that all of us are male.  A broader look into this problem yields what is known as the "scissors diagram".
Here the black lines show the fraction of men and women at various stages of physics careers, while the red lines show the expected fraction from historical trends (because when current full professors were in high school, there were far fewer women taking physics than there are today).  It appears that men and women take high school physics in nearly equal numbers, but that for some reason women are far less likely to study physics in college.  Physics is not alone in this problem, as I've written about previously, but we are having a much harder time fixing it than fields like math or chemistry.

There are a lot of ideas as to why this might be the case but here's one from a Physics Today article that I hadn't considered before - problem sets based on cars and construction equipment.  I recommend reading the whole article as it is well-written and insightful, but allow me to over-simplify the basic argument:  homework problems in introductory physics courses generally use examples from topics like cars and construction work that are more likely for men to be familiar with than women.

My initial reaction was skepticism - how much difference can using terms like "pile driver" instead of "a machine that drops a heavy weight on [a metal rod], lifts the weight, and drops it again" possibly make?  But the more I think about it, the more I start to think that maybe the authors have a point. I don't think that the real issue is that men are more familiar with pile drivers than women - I think the issue is that when textbook problems appeal more to men than women, a subtle message is sent that women are out of place in physics, and no one wants to feel out of place.

Now don't get me wrong - I'm not arguing that physics problems should avoid real-world examples or that women can't understand problems about cars going around banked turns - but I do think it would be wise for physics faculty to use more examples from fields that have a higher concentration of women, like health care or the performing arts.  Instead of asking questions only about baseball and football, mix in some questions about ballet.  Ask more questions about blood pressure and fewer about pneumatic nail guns.  I'm sure this single step won't fix the larger problem, but I think it's generally a good idea to do everything we can to attract the best people to our field and not just the best men.

Sunday, October 16, 2011

Big News: Einstein Is Still Right

A couple weeks ago I posted that the OPERA collaboration had measured neutrinos moving faster than the speed of light, in violation of Einstein's theory of special relativity.  The experiment was very simple - neutrinos produced at CERN on the border between Switzerland and France were shot towards a detector in Italy.  The time between the pulses' creation and detection was measured using atomic clocks, and the distance traveled was measured using GPS satellites.  The result was that on average the neutrinos arrived 60 nanoseconds earlier than if they had moved at the speed of light.

It turns out that the problem wasn't too much Einstein but rather not enough.  A Dutch physicist named Ronald van Elburg found that the OPERA team made a small mistake in the way they applied special relativistic corrections due to the velocity of the GPS satellites.  This correction to the OPERA team's calculation should decrease the travel time of the neutrinos by 64 nanoseconds.  You can read the pre-print here.

In response, Einstein says:

Thursday, September 22, 2011

Neutrinos Behaving Badly

The OPERA Collaboration has announced what they claim is a 6-sigma measurement that neutrinos move faster than the speed of light in a vacuum (see stories by Nature and the BBC).  OPERA is an Italian experiment designed to measure neutrino oscillations in a stream of the elusive sub-atomic particles coming from CERN.  To date they have detected the arrival of over 16,000 neutrinos (which is quite a few considering how hard it is to stop a neutrino) and now they claim that they have good evidence that those particles move just a hair faster than what was thought to be the universal speed limit.

The OPERA folks are being very cautious with their claims, and they point out that this could simply be some systematic error for which they haven't accounted.  Interestingly, Nature is reporting that there was some evidence for similar speeding neutrinos at the MINOS neutrino experiment, but that the distance from Fermilab (where the neutrinos were being made) to the MINOS detector wasn't known to sufficient precision to be able to make such a claim.

So what does this mean?  The joke that half of all 3-sigma results are false rings in my ears.  If I had to bet, my money would be on some sort of error or some effect of a previous theory that accounts for the apparent effect.  I guarantee that there will be a lot of very smart experimentalists thinking about systematic errors in measuring neutrino speeds and that there will be a whole slew of papers from theorists showing that super-luminal neutrinos are predicted by their brand of quantum gravity, super-symmetry, or f(R) theory of gravity.  What does it mean if this is true?  Well, we can start by going back to the drawing board on relativity.

UPDATE:  The paper is up on the arXiv here.  I am not a particle physicist, but it looks like they have done a very nice, careful job in not over-selling their result.  The measured velocity was about 7430 +/- 1740 m/s faster than the speed of light.
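The 60 ns early arrival and the quoted velocity excess are consistent with each other, which is easy to check. A quick sketch (the ~730 km CERN-to-Gran Sasso baseline is a commonly quoted round figure, not a number from this post):

```python
# Consistency check (my own sketch) between the two numbers above:
# an assumed ~730 km baseline plus a 60 ns early arrival imply roughly
# the reported ~7430 m/s velocity excess.
C = 2.998e8          # speed of light, m/s
BASELINE = 730e3     # CERN to Gran Sasso distance, m (assumed round value)
EARLY = 60e-9        # reported early arrival, s

travel_time = BASELINE / C                 # about 2.4 milliseconds at c
v_excess = C * EARLY / travel_time         # about 7.4 km/s
```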

One other interesting note:  When the famous supernova 1987A went off in our friendly neighborhood Large Magellanic Cloud, neutrino bursts were measured at three different detectors around the globe about 3 hours before the first light from the supernova was seen.  This is generally explained because 1987A was a core-collapse supernova: while the neutrinos it produced could simply stream outward through the outer layers of the collapsing star, the light could not and was therefore delayed slightly.  That supernova was about 168,000 light-years away.  Assuming that neutrinos actually do travel faster than light by the amount measured by the OPERA team, those neutrinos should have arrived about 4 years prior to the first light from the supernova.  Things like this are why most physicists are looking for a problem with the experiment rather than throwing relativity out the window.
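The 4-year figure is a quick back-of-the-envelope calculation using the round numbers quoted above:

```python
# Back-of-the-envelope check (my own sketch) of the 4-year figure.
C = 2.998e8                  # speed of light, m/s
EXCESS = 7430.0              # OPERA's claimed speed excess, m/s
DISTANCE_LY = 168_000        # distance to SN 1987A, light-years

# A fractional speed excess of EXCESS/C becomes the same fractional
# head start over a trip that takes DISTANCE_LY years at light speed.
head_start_years = DISTANCE_LY * EXCESS / C   # roughly 4 years
```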

Friday, August 5, 2011

Physics PhDs: How Many? How Long? How Worthwhile?

I regularly sing the praises of the Statistical Research Center at the American Institute of Physics, so forgive me if you've heard this song before.  The latest data release from the SRC has a couple interesting tidbits profiling the newly minted PhDs in physics and astronomy.

First of all, the number of PhDs awarded continued its decade-long rise since the low of the late-90's dot-com boom.  The last time the US produced this many PhDs in physics was the mid-1960's, when the space race, the nuclear arms race, and dozens of other defense-related Cold War initiatives drove hordes of students into PhD programs.

The second interesting tidbit is the distribution of time-to-PhD for recent graduates.  I have seen averages previously, but it's great to see the histogram.  It's clear that the "5-year PhD" model is really a myth more than anything at this point.

Finally, the last tidbit is a fun little question that the AIP asked.
Interestingly, only 22% of American students would change anything about their PhD experience, while half of non-US citizens would.  Perhaps Americans are either too proud or too complacent to admit they would have done something different if they had it all to do again.

Tuesday, July 5, 2011

Dynamos in Physics Today

One of the standard jokes in astrophysical modeling is that if there is ever a discrepancy between your model and observations, you simply attribute it to some combination of magnetic fields and turbulence.  Let's say your model of star formation fails miserably to reproduce the observed initial mass function in our galaxy.  What do you do?  Blame turbulence and magnetic fields, present some overly simplistic explanation of why turbulence and magnetic fields would give you the right answer if you could just capture them properly, and then promise to include them in some ill-defined future simulation.

These two phenomena have a fundamental link  - dynamo action.  In the words of a very well-written article by two of the top researchers in the field of laboratory dynamos, Cary Forest and Daniel Lathrop, "[a]ll astrophysical plasmas are, as far as we know, magnetized and turbulent" and thus ripe for dynamo action.  The problem is that we are orders of magnitude removed in both simulations and experiments from some of the physical regimes where dynamo action takes place, even within our own solar system (see the graph on the right).

Check out the article over at Physics Today for a great explanation of why dynamos are so ubiquitous, why they are so hard to predict, and what is being done with theory, modeling, and laboratory experiments to unravel the mystery.

Tuesday, June 28, 2011

Summer Conference Travels

You may have noticed that things have gotten a little quiet around here, and while I can't say exactly what everyone else is doing, I can guess that it has something to do with the summer conference season (and possibly also babies in one or two instances). Since this really isn't a blog about babies, let's do a quick check of where everyone is headed this summer.  Post in the comments your dream summer vacation: a trip to Austin, Texas, in August, where you'll spend 14 hours a day in a dimly lit room watching poorly prepared PowerPoint presentations.

Note:  I have actually attended a conference in Austin, Texas in August.  I don't recommend it.

Friday, April 29, 2011

Does the PhD Need Fixing?

Many of you have probably seen the special edition of Nature that is devoted to "The Future of the PhD".  Much of the discussion centers on the career prospects of those with a PhD.  As most of those in graduate school know, there are far more eager 1st-year graduate students than tenure-track positions at R1 universities - and often, to have a shot at one of those few positions, one has to slog through multiple low-paying post-docs after a median of 7 years in a PhD program.  Part of Nature's special feature includes an editorial entitled "Fix the PhD".  But here's my question to those of us in grad school: in your experience, does the PhD system need fixing?

Before we jump into the debate, let me share a little bit of data.  First, Nature has put together a nice set of graphs showing three key aspects of the PhD experience - namely the number of PhDs awarded by field, the median time to completion for the hard sciences, and the employment of science and engineering PhDs 1-3 years after graduation.

Several things stood out to me.  First, medical and life sciences saw a huge increase in PhD production, and many of the anecdotal horror stories I have heard come from those fields.  Second, a median of 7 years in grad school seems high to me - using data from the past 15 years in my department, I have personally computed a mean time to completion of 6 years for my program.  Finally, I was surprised not to see growth in the number of non-tenured faculty.  Other sources have clearly indicated that the ranks of the non-tenured have been growing, but apparently not with new PhDs.

The second bit of data I would like to inject comes from my own department.  CU's Astrophysical and Planetary Sciences department is pretty good, but I would say that CU is somewhat average when it comes to the top tier of the astrophysics world.  So in the hope that CU's PhDs are in some sense "average", I decided to track all 43 of the PhD recipients from my department between 2000 and 2005 using Google and ADS in order to see where they are now.  I sorted them into 7 categories (post-doc, tenure-track faculty at research institutions, tenure-track faculty at non-research institutions, non-tenure-track faculty, research staff, industry, or other).  The results are on your left.  Note that all of those that are still post-docs graduated in 2005.  Interestingly, only 1 of the 43 PhDs is in a non-tenure-track faculty position, and a very large fraction (67.4%) are still publishing in peer-reviewed journals in astronomy, physics, or planetary science.  As a side note, the "other" category has some great entries, including a fellow who works for Answers in Genesis, another who works for a foundation that advocates for manta rays in Hawaii, and another who does market research for Kaiser Permanente.

So, there's a bit of data - more is of course welcome - now what does it mean?  Is the PhD system in the US broken and if so, how does one fix it?

Thursday, April 14, 2011

Finally, The Navy Has Laser Guns

If we look at older science fiction (think the original Star Trek), we can see that there are some technological advancements they got very wrong.  We already have better computer displays than whatever Spock was looking into and my cell phone is already smaller than Captain Kirk's.  But for all of our advancements, there is one area in which we are way behind what people in the 60's thought:  laser guns.  It seems like everybody thought we'd trade in bullets for batteries by now.

Well they may not have been that far off.  From the Associated Press:
The Navy for the first time last week successfully tested a solid-state high-energy laser from a ship. The beam, which was aimed at a boat moving through turbulent Pacific Ocean waters, set the target's engine on fire... The baseball-sized laser beam... could be used to stop small crafts from approaching naval ships. It could also target pirates.
That's right - the Navy is going to be shooting pirates with lasers.  We live in an age of wonders.

Of course the whole laser gun thing turns out to be a little less cool than what the starship Enterprise has.  Here's the movie:

So there are no sound effects or explosions and nobody is getting vaporized anytime soon, but hey, I suppose we shouldn't be too hard on ourselves since we still have 300 years to develop phasers.

Wednesday, February 9, 2011

The Scarcity of Conservative Academics

The term "conservative academic" is a bit of an oxymoron, but it's also an accurate description of myself.  I'm a 4th year Ph.D. student who regularly (although not exclusively) votes for Republicans.  Universities - especially secular ones like mine - are overwhelmingly dominated by those on the political left.  The chart on the right shows a break-down of various professions by political self-identification based on data from Dr. Neil Gross, a sociologist at the University of British Columbia.  Interestingly, professors are over three times more likely than the general public to self-identify as liberal but only half as likely to self-identify as conservative.  My guess is that on my campus the breakdown of liberal/moderate/conservative is more like 55/35/10.

Those on the left see this as evidence that smart people are liberal and stupid people are conservative, while those on the right see this as an insidious liberal plot to subvert America's youth.  Even though I'm conservative, I generally don't like the liberal-bias argument, which equates to saying that the lack of conservatives is somehow analogous to the lack of female physics professors or the relatively high drop-out rates of ethnic-minority high school students compared to whites.  Basically, I have never seen a case where an individual's political views were considered in admissions to either undergrad or grad school, in course grades, or in hiring in academia.

Take the case of women in physics - it's clearly not that physicists are sexist pigs trying to bar the doors against women, but somehow the 50% of the general population that is female produces only 6% of the tenured physics faculty at American universities.  An article in yesterday's New York Times has me thinking that maybe there is something a little more subtle at work in both cases.  As Dr. Jonathan Haidt, a social psychologist, stated in the article:
“Anywhere in the world that social psychologists see women or minorities underrepresented by a factor of two or three, our minds jump to discrimination as the explanation. But when we find out that conservatives are underrepresented among us by a factor of more than 100, suddenly everyone finds it quite easy to generate alternate explanations.”
In physics, conservatives aren't underrepresented by a factor of 100, but one study did find that there are 4.2 registered Democrats for every registered Republican among the University of California system's physics faculty.  For comparison, sociology, ethnic studies, and performing arts all have ratios over 16:1.  The only departments with more Republicans than Democrats were general business, finance, and military science.  Physics is about on par with communications, medicine, and law.

So the big question is why.  Here are four possible explanations:
  1. There exists overt bias against those with conservative views in academia - something like "liberals are smarter than conservatives".
  2. Academics are subtly “politicalist”, meaning while they don't consciously use political affiliation to make hiring or admissions decisions, they do associate the viewpoints held by conservatives with a lack of intelligence or creativity.  
  3. Conservatives are steered away from even trying to become academics because they think that “politicism” exists in academia.  
  4. Conservatives are steered away from academia by other correlated factors like marriage and children that make it difficult to spend 9+ years in post-secondary schooling. 
In my experience I have seen all four of these things impact the careers of graduate students, although I would say that #1 is fairly rare.  I'm interested in what you think.  Have you ever run into overt "politicism"?  Is this even really an issue?

Current Cosmology From Supernova Data.


Ariel Goobar and Bruno Leibundgut have recently submitted an article to Annual Review of Nuclear and Particle Science summing up our current understanding of cosmology from the accumulated supernova data. We have accrued quite a lot of supernova data over the years, so it is interesting to take a look at how much we have learned. I will not report everything, but will post a few interesting plots.

Above is the original diagram/scatter plot Hubble used to show that the universe is expanding in a way that fits Hubble's law. Below is that same diagram today, using current supernova data (not a scatter plot any more!), showing the distance modulus versus redshift.


As you can see, Hubble's law is confirmed by quite a few supernovae today. :) Furthermore, the lower plot shows a blue line representing a universe containing cold dark matter and a cosmological constant, and a flat dotted line assuming a universe empty of cold dark matter or dark energy/cosmological constant. As can be seen, the supernova data *strongly* favor a universe with dark matter and dark energy/cosmological constant.
The next plot above shows how well we can constrain the percentages of dark matter and dark energy in the universe using supernova, CMB, and BAO data. (Click on the image to see it better.) As you can see, the data fit a flat universe with an accelerated expansion very well.
The last plot I want to display shows the current constraints we have on what type of beast dark energy is. As a reminder, the prediction we get from dark energy being the cosmological constant is w = -1. As you can see, w = -1 still fits the data very well.
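As a rough illustration of what those model curves encode, here is a minimal sketch of the distance modulus mu(z) = 5 log10(d_L / 10 pc) for a flat FRW universe; the Hubble constant and density parameters below are assumed round values for illustration, not the paper's fits.

```python
import numpy as np

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s/Mpc (illustrative)

def distance_modulus(z, omega_m, omega_lambda):
    """Distance modulus mu(z) for a flat FRW universe (sketch)."""
    zs = np.linspace(0.0, z, 2000)
    e = np.sqrt(omega_m * (1.0 + zs) ** 3 + omega_lambda)
    integrand = 1.0 / e
    # Trapezoidal integral of dz/E(z) gives the comoving distance in Mpc.
    d_c = (C_KM_S / H0) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zs))
    d_l = (1.0 + z) * d_c                 # luminosity distance, Mpc
    return 5.0 * np.log10(d_l) + 25.0     # mu, for d_l in Mpc

# At z = 1, supernovae in an accelerating (Lambda-CDM-like) universe look
# fainter (larger mu) than in a matter-only universe:
mu_lcdm = distance_modulus(1.0, 0.3, 0.7)
mu_matter = distance_modulus(1.0, 1.0, 0.0)
```

That gap of roughly half a magnitude at z = 1 is exactly the separation between the blue and dotted curves in the plot.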

Conclusion: It is nice to see as more and more cosmological data pours in the standard flat universe containing dark energy, cold dark matter, accelerated expansion and dark energy best described by a cosmological constant is verified. Cosmology has truly become a precision science.

ResearchBlogging.org
Ariel Goobar & Bruno Leibundgut (2011). Supernova cosmology: legacy and future. To appear in Annual Review of Nuclear and Particle Science. arXiv: 1102.1431v1

Thursday, January 20, 2011

Should Bayesian Statistics Decide What Scientific Theory Is Correct?

Bayesian statistics is used frequently (no pun intended, for our frequentist friends) to decide between scientific theories.  Why?  Because, in a layman's nutshell, what Bayesian statistics does is tell you how likely your theory is given the data you have observed.

But a question arises: should we feel comfortable accepting one theory over another merely because it is more likely than the alternative?  Technically the other may still be correct, just less likely.

And furthermore: what should be the threshold for when we say a theory is unlikely enough that it is ruled out?  The particle physics community has settled on the 5σ level, which is a fancy-pants way of saying that, if the theory being tested were true, data this extreme would show up only about 0.0000573% of the time - a 99.9999426697% confidence level for ruling it out.  Is this too high, too low, or just right?
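That percentage is just the two-sided Gaussian tail probability at 5σ, which is easy to check (my own quick sketch, not from the post):

```python
import math

# Two-sided probability of a Gaussian fluctuation at least n sigma from
# the mean, and the corresponding confidence level quoted above.
def two_sided_p(n_sigma):
    return math.erfc(n_sigma / math.sqrt(2.0))

p_5sigma = two_sided_p(5.0)       # roughly 5.7e-7
confidence = 1.0 - p_5sigma       # roughly 99.99994%
```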

The Inverse Problem: For an example, let's assume that supersymmetry (SUSY) is correct and several SUSY particles are observed at the LHC.  Now, it seems like there are 5 bajillion SUSY models that can explain the same set of data.  For example, I coauthored a paper on SUSY where we showed that for a certain SUSY model, a wide variety of next-to-lightest SUSY particles are possible.  (See plot above.)  Furthermore, other SUSY models can allow for these same particles.

So, how do we decide between this plethora of models, given that many of them can find a way to account for the same data?  I am calling this the inverse problem: when many theories allow for the same data, how can you know from that data which theory is correct?

Back to Statistics: Well, for better or for worse we may have to turn to Bayesian statistics.  As already discussed, Bayesian statistics can tell us which theory is more likely given the data.   And knowing what theory is more or less likely may be all we have to go off of in some cases.
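As a toy illustration of that bookkeeping (the two "theories" and all the numbers below are invented for illustration): compute the likelihood of the data under each model and, with equal priors, the posterior probability of a model is just its normalized likelihood.

```python
import math

# Toy sketch: two theories predict different values of some observable
# that is measured with Gaussian noise of known width.
def gaussian_likelihood(data, mu, sigma):
    norm = sigma * math.sqrt(2.0 * math.pi)
    return math.prod(
        math.exp(-0.5 * ((x - mu) / sigma) ** 2) / norm for x in data
    )

data = [5.1, 4.8, 5.3, 5.0, 5.2]                      # made-up measurements
L_x = gaussian_likelihood(data, mu=5.0, sigma=0.2)    # theory X predicts 5.0
L_y = gaussian_likelihood(data, mu=5.5, sigma=0.2)    # theory Y predicts 5.5

# With equal priors, the posterior probability of X is its normalized
# likelihood; the ratio L_x / L_y is the Bayes factor.
posterior_x = L_x / (L_x + L_y)
```

Theory Y here is not impossible, just vastly less likely - which is precisely the situation the questions above are asking about.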

So again I will ask: should we really be choosing between two theories that can reproduce the same data but one has an easier time doing it than another?  Is this just a sophisticated application of Ockham's razor?  Should we as scientists say "The LHC has shown this theory X to be true" when in reality theory Y could technically also be true but is just far less likely?

What do you think?

Wednesday, January 12, 2011

First Planck Results: The Sunyaev-Zeldovich Effect.


Many bloggers have been writing about the first Planck results, presented here at the AAS and in Europe, but I would like to write a little more than has been written on the Sunyaev-Zel'dovich results, as I think they are impressive - impressive both in terms of the science we get and in how this particular example shows how precise CMB experiments have become.  I will focus on the results from this paper.

Okay, what is this effect anyway? The Sunyaev–Zel'dovich effect "is the result of high energy electrons distorting the cosmic microwave background radiation (CMB) through inverse Compton scattering, in which the low energy CMB photons receive an energy boost during collision with the high energy cluster electrons."  And the thing is, clusters of galaxies are filled with high energy electrons, in what is known as the intra-cluster medium (ICM).

This means that we can use specific distortions in the CMB to both locate clusters of galaxies and infer science from them from estimating the Hubble constant to extracting information on the physics driving galaxy and structure formation.

Look at the image above: it shows the precision with which Planck can observe this "SZ" effect.  (And it is just amazing!)  In this image you should note several things.  First, Planck intentionally observes the sky in many frequency bands to see this stuff. (And watch the frequency change with time in the image.)  At the lowest frequencies the boost on CMB photons yields a diminished flux, at higher frequencies it yields an enhanced flux, and right at 217 GHz there should be no change in flux at all.

And if you look closely at the image up top you can see that Planck is seeing this!  The cluster in the center has diminished flux at low frequencies, denoted by the blue smudge,  no flux at 217 GHz and enhanced flux for high frequencies. (Now the smudge turns red.)  So Planck can see this effect really well and the science going into this effect can be studied in detail.
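That sign change comes from the non-relativistic thermal SZ spectral function f(x) = x coth(x/2) - 4, with x = hν/(k_B T_CMB), whose null sits almost exactly at 217 GHz. A quick sketch to check the numbers (the physical constants are standard values, not from the post):

```python
import math

# Thermal SZ spectral function f(x) = x*coth(x/2) - 4, where
# x = h*nu / (k_B * T_CMB); its zero is what puts the "no distortion"
# frequency near 217 GHz.
H = 6.62607015e-34      # Planck constant, J s
K_B = 1.380649e-23      # Boltzmann constant, J/K
T_CMB = 2.725           # CMB temperature, K

def sz_spectral_function(nu_ghz):
    x = H * nu_ghz * 1e9 / (K_B * T_CMB)
    return x / math.tanh(x / 2.0) - 4.0   # negative: decrement; positive: increment

low = sz_spectral_function(100.0)    # decrement (the blue smudge)
null = sz_spectral_function(217.5)   # essentially zero
high = sz_spectral_function(353.0)   # increment (the red smudge)
```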

The next two plots to the right show how the mass and luminosity of these clusters relate to redshift (redshift again being a measure of how far away these objects are from us).  These relations can now be compared to physical models and tell us a lot about the universe. Again, what is so great is that Planck is seeing a lot of clusters and is able to see how the physical properties of these clusters vary with redshift (or as time progressed throughout the universe).

Now, this stuff is all interesting but the really cool stuff, the main stuff Planck was built for, won't be released until next year. That should be a good day for cosmology and I for one am very excited! Cosmology has become a very precise science indeed!

Come in B-modes.... Come on! :)
ResearchBlogging.org
The Planck Collaboration. (2011). Planck Early Results: The all-sky Early Sunyaev-Zeldovich cluster sample Submitted to A&A. arXiv: 1101.2024v1

Tuesday, December 28, 2010

Why Raw Data From Science Experiments Can Scare Me.


Cosmology as a field has become precise enough that we may measure theoretical features at the 1/10 of 1% level or better.  For example, one of Planck's greatest successes could be a detection of what is known as primordial non-Gaussianity that, if it exists, is at most a deviation of less than 0.1% from a pure Gaussian spectrum.


With that in mind, let's look at some raw Planck data.  The image above left (black curve) shows raw data recorded by Planck as time goes by.   It has these features:
  1. You see the dipole of the CMB as a "sine-wave" signal as Planck rotates and scans the sky. (See video above for an illustration of this scanning pattern.)
  2. If you look closely,  you see a sharp peak at the same spot in each sine-pattern.  This is Planck observing the galactic plane.
  3. You also see hundreds of spikes that represent cosmic rays hitting the instrument.  
Now here is the point: all those spikes and other large anomalies are far more significant than the 0.1%-level deviations from a clean Gaussian signal that we are actually trying to find!

And so, Planck has to somehow remove them.   The graph on the top right shows what their data looks like when these known anomalies/systematics are accounted for.  It looks decent and gives what looks like a near-Gaussian spectrum modulo the peak from the galactic plane.  Hence, naively/hopefully in such data you can now go searching for 0.1% deviations.

But wait!!!

That clean signal assumes at least the following:
  1. That the simulations of the anomalies, and the templates and models used for the removal of this stuff, are more accurate than the 0.1% level.  
  2. That the removal actually worked... beyond what looks good by eye.
  3. That in the process of removing garbage, Planck didn't inadvertently introduce other false signals.
  4. Etc...
And this "scariness" does not just exist for cosmology data.  For example, I have talked with many people working at the LHC who have admitted that the background issues they have to deal with in their data can be frightening in similar ways. 

Conclusion: Now, don't get me wrong, I have a lot of trust in the Planck team/LHC/whoever.  I really do.  But let's just say this still scares me a little.  Some of the most important results in physics hinge on better than 1/10 of 1% accuracy, both in removing false signals and in not introducing fictitious ones during such a removal process.  (And everyone in the trenches knows this can be really hard to get right!) Therefore, this sometimes seems like a scary business... but at the same time it is also a testament to how far we have come in science. :)

Tuesday, December 14, 2010

Why Study This Stuff?: An Example

Last week at a church social activity my wife and I were sitting with some friends - highly educated people - and the discussion turned to my profession.  I explained what it is that I do, and then came the question that always comes after I explain that I'm trying to understand dynamo action in sun-like stars: "so why does the government give you money to do that?"  I tried to explain that there are economic benefits to funding basic research and then moved the conversation to something else.  Today, however, I was reading about plans to cut NSF funding in some of the latest debt-reduction proposals, and I again thought of that conversation.  Then I thought about some work that a good friend is doing trying to build compact, low-cost, reliable ways to diagnose exactly what strain of viral infection someone has, and I came up with an alternate idea.  So I'm going to try it out on you guys.

Here it is:  we should fund basic science because Isaac Newton playing with prisms in the 1660's has led to knowledge that prevents cancer, allows you to microwave your food, and protects us from terrorists.  Here's why.

Wednesday, October 27, 2010

Computer Language Fragmentation in Physics Knows No End.

As a graduate student I have written papers where, in order to collaborate, I had to write code in C, C++, FORTRAN (77 and 90), IDL, Jython, Matlab, and Python, and I have just been told that for my next project I need to modify hundreds of lines of Perl scripts.  Furthermore, UCI did their numerical methods course in Java, as they thought that was a good intermediate language. (And I have had to design web pages in HTML, if that counts in any way.)

So, just to get real-world physics done, this non-CS guy has had to master ~8-10 languages well enough to get papers out the door with different collaborators.  Does the computer language fragmentation in physics know no end?  Fortunately, once you have learned 2-3 languages, the rest are pretty much straightforward provided you have good documentation.

You all know I love Python.  If people could just write everything in Python except the speed-critical parts, I would be happy.  (And if you must use FORTRAN, do not use 77!!!)
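To illustrate the pattern I mean, here is a minimal sketch (with invented function names) of keeping the glue logic in Python and delegating only the hot inner loop to a compiled backend, with a pure-Python fallback when no backend is available:

```python
def dot_slow(a, b):
    """Pure-Python inner product: fine for prototyping, slow at scale."""
    return sum(x * y for x, y in zip(a, b))

def dot(a, b):
    """Delegate the hot loop to a compiled backend when available."""
    try:
        import numpy as np  # numpy's core loops are compiled C/Fortran
        return float(np.dot(a, b))
    except ImportError:
        return dot_slow(a, b)

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0 either way
```

The same glue-in-Python approach works whether the fast core is numpy, f2py-wrapped FORTRAN 90, or a hand-written C extension.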

Sorry this is not the most profound post but I just needed to vent.

Wednesday, October 20, 2010

Academically I'm Isaac Newton's 14th Great-Grandson

Everyone loves to feel a personal connection to history.  If the subject of Native Americans ever comes up around my wife's family, all present will be told that they are direct descendants of Pocahontas.  My ancestors' last name used to be Neilson, which is Danish, but they anglicized it to Nelson when they arrived in America.  Now, thanks to the American Mathematical Society and some mathematicians at North Dakota State University, academics can do the same.  The AMS and NDSU's math department have combined forces to produce the Mathematics Genealogy Project - an online, searchable database of over 145,000 mathematicians and like-minded physical scientists dating back to the 14th century.

My adviser's degree is in applied math (as far as I can tell, British universities call theoretical physics "applied math"), so I have a link into the system.  Here are a few of my more notable direct academic ancestors:

G.I. Taylor (2nd Great-Grandfather):  Experimentally showed that the interference pattern of photons passing through a double-slit set-up persisted even if only one photon was present at a time; one of the early pioneers in turbulence research; famously calculated the yield of the first atomic bomb to within 10% from a photo on the cover of Life magazine (to the annoyance of the US government, which had kept the yield secret)

J.J. Thomson (3rd Great-Grandfather):  Discoverer of the electron (for which he won the 1906 Nobel Prize) and of isotopes; inventor of the mass spectrometer; proponent of the delicious-sounding "plum pudding" model of the atom, which was tragically later shown to be inaccurate




John William Strutt, 3rd Baron Rayleigh (4th Great-Grandfather):  Discoverer of argon (for which he won the 1904 Nobel Prize) and of Rayleigh scattering, which explains why the sky is blue and the sun is yellow; introduced the Rayleigh number, a dimensionless fluid parameter which controls the onset of convection; figured out how human ears use phase differences in sound waves to tell where a sound originates
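(For the curious, the Rayleigh number can be written down explicitly as Ra = g β ΔT L³ / (ν κ).  A quick sketch, with illustrative water-like property values that are my own assumptions:

```python
def rayleigh_number(g, beta, delta_T, L, nu, kappa):
    """Ra = g * beta * delta_T * L^3 / (nu * kappa).
    g: gravity, beta: thermal expansion coefficient, delta_T: temperature
    difference across the layer, L: layer depth, nu: kinematic viscosity,
    kappa: thermal diffusivity.  All SI units; Ra itself is dimensionless."""
    return g * beta * delta_T * L**3 / (nu * kappa)

# A 1 cm layer of water heated 10 K from below (illustrative values):
Ra = rayleigh_number(9.81, 2.1e-4, 10.0, 0.01, 1.0e-6, 1.4e-7)
print(f"Ra = {Ra:.2e}, convecting: {Ra > 1708}")  # well above critical
```

Convection sets in once Ra exceeds a critical value, roughly 1708 for a fluid layer between two rigid plates.)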


Isaac Newton (14th Great-Grandfather):  Inventor of calculus and Newton's laws of motion; discoverer of gravity in the scientific sense; invented and built the first reflecting telescope; originator of the corpuscular theory of light and of the concept of luminiferous aether, because not even Newton could get everything right



Galileo Galilei (17th Great-Grandfather):  Pioneer of the astronomical telescope (which he improved and turned on the sky, though he didn't invent it); father of the scientific revolution; had a little misunderstanding with the Pope; discoverer of Jupiter's 4 largest moons, the phases of Venus, and sunspots (although there are some indications that Chinese astronomers beat him to it by looking directly at the sun with their bare eyes)



Like I said, everyone likes to feel a connection to the past and then tell everyone else about it.

Tuesday, September 28, 2010

NRC Rankings Are Here... And Extremely Complicated

After several years of waiting with bated breath, the National Research Council's rankings of doctoral programs in the US are here.  In an effort to be true scientists, the rankers provide up to 61 categories which can be weighted in any way your heart desires.  Do you want to see rankings based on average workspace per grad student and availability of academic ethics training?  Then look no further.

Another interesting, helpful, and extremely complicated feature is that instead of producing a single set of rankings, they have created probability distribution functions for each school's ranking and then listed the 90% confidence interval.  That means that among Astronomy and Astrophysics programs, Harvard, Caltech, UC-Berkeley, and MIT are all ranked "1 - 9", with Arizona and Princeton ranked "1 - 10" and Johns Hopkins ranked "1 - 11".  While more correct given the errors in ranking procedures, it's certainly less fun to say that I'm 90% sure my department is somewhere between the 5th and the 14th best program in the country, using one set of weights for 61 categories measuring aspects of graduate education.
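As a toy illustration of why a single school ends up with an interval rather than a rank, here is a sketch of the general idea: sample many random weightings, rank under each, and report the middle 90% of the resulting ranks.  The departments, scores, and three criteria below are entirely made up, and this is not the NRC's actual procedure:

```python
import random

# Hypothetical scores on 3 of the 61 criteria (higher is better).
scores = {
    "Dept A": [0.9, 0.6, 0.8],
    "Dept B": [0.7, 0.9, 0.6],
    "Dept C": [0.8, 0.7, 0.9],
    "Dept D": [0.5, 0.5, 0.5],
}

def rank_once(rng):
    """Rank all departments under one random weighting (1 = best)."""
    w = [rng.random() for _ in range(3)]
    total = {d: sum(wi * si for wi, si in zip(w, s))
             for d, s in scores.items()}
    ordered = sorted(total, key=total.get, reverse=True)
    return {d: i + 1 for i, d in enumerate(ordered)}

def rank_interval(dept, trials=2000, seed=0):
    """90% interval of a department's rank over many random weightings."""
    rng = random.Random(seed)
    ranks = sorted(rank_once(rng)[dept] for _ in range(trials))
    return ranks[int(0.05 * trials)], ranks[int(0.95 * trials) - 1]

for d in scores:
    print(d, rank_interval(d))
```

Note that "Dept D", which scores worst on every criterion, gets the interval (4, 4) no matter the weights, while departments that trade wins across criteria get genuinely wide intervals.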

What Happens When A White Dwarf Collides With A Neutron Star?

[Image: A comparison between the white dwarf IK Pegasi... Image via Wikipedia]
Paschalidis et al. recently simulated what will happen when a white dwarf and a neutron star meet in a head-on collision, incorporating the effects of general relativity.

In each case I will list the mass of the white dwarf and of the neutron star in solar masses (meaning the mass of these objects after dividing by the mass of the sun), the ratio of the radius of the white dwarf to that of the neutron star, and the initial separation between the two objects in terms of white dwarf radii.

Case 1.
Mass of neutron star: 1.5
Mass of white dwarf: 0.98
R_WD/R_NS: 9.96
Initial separation: 4 white dwarf radii.


The plot above shows the results for this case.  The neutron star is the object initially on the left, and the white dwarf is initially on the right.  The lines are density contours, and the redder the color, the denser the region.  As you can see, the neutron star is much smaller and much denser than the white dwarf.

I will let the authors describe what is happening, as I think their description is good.  However, given the technical language, I confess in advance that the text has been heavily paraphrased in some sections:
In general, the head-on collision of white dwarf-neutron star systems can be decomposed into three phases: the acceleration, the plunge, and the quasiequilibrium phase.  
During the acceleration phase, the two stars accelerate toward one another starting from rest.  The separation decreases as a function of time, and this phase ends when the two stars first make contact.  As the separation decreases, the neutron star's increasingly strong tidal field distorts the white dwarf.  This can be seen in the density contours plotted above.  The neutron star's interior is almost unchanged during this phase: in reality it oscillates, but it is not tidally distorted by the white dwarf.  Nevertheless, the neutron star's atmosphere does expand. 
During the plunge phase the neutron star penetrates the white dwarf, launching strong shocks that sweep through and heat the interior of the white dwarf.  The neutron star's outermost layers are stripped when they encounter the dense central parts of the white dwarf, and the neutron star is compressed.  We find that at maximum compression the neutron star's central density increases by only about 8% of the initial central density.  Eventually, strong shocks sweep through the entire white dwarf interior and transfer linear momentum to the white dwarf's outer layers, a large fraction of which receives sufficient momentum to escape to infinity.  We find that about 18% of the total mass is lost to infinity.  Material that did not receive sufficient thrust to escape starts to rain down onto the neutron star-white dwarf combination, and the plunge phase ends when this process is over.
During the quasiequilibrium phase the remnant settles into a spherical quasiequilibrium object whose outer layers oscillate.  This can be seen in the two final snapshots of the figure above, where we show that the inner equatorial rest-mass density contours do not change with time, while the outer layers change only a little.  The white dwarf-neutron star final remnant consists of a cold neutron star core with a hot mantle on top.
Prompt collapse to a black hole does not take place in any of the cases studied, because strong shock heating gives rise to a hot remnant.  Ultimate collapse to a black hole is almost certain after the remnant has cooled.
Anyway, I was impressed by how much matter is ejected during the collision due to shock heating.  It's also interesting that the final remnant's radius has increased while the core of the new object is that of a neutron star.  I'd be curious if anyone here knows how abundant these types of objects are in the universe.
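For scale, a back-of-the-envelope Newtonian free-fall estimate of the acceleration phase is easy to make.  The formula below and the assumed 7000 km white dwarf radius are mine, not the paper's (whose compact "pseudo white dwarfs" are much smaller, so the actual simulated timescales differ):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
R_WD = 7.0e6           # assumed white dwarf radius, m (~7000 km)

def infall_time(m_ns, m_wd, sep_in_rwd):
    """Time for two point masses (in solar masses), starting at rest a
    given number of white dwarf radii apart, to fall together:
    t = (pi/2) * sqrt(a^3 / (2 * G * M_total))  (radial Kepler orbit)."""
    m_tot = (m_ns + m_wd) * M_SUN
    a = sep_in_rwd * R_WD
    return (math.pi / 2) * math.sqrt(a**3 / (2 * G * m_tot))

# Case 1: 1.5 + 0.98 solar masses, starting 4 white dwarf radii apart
print(f"{infall_time(1.5, 0.98, 4):.1f} s")  # about 9 s for these numbers
```

Since t scales as separation to the 3/2 power, Case 3's start at 2 white dwarf radii falls together almost 3 times faster than Case 1's start at 4.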

These next two cases are qualitatively the same, so I won't add any more commentary.  But I am including them anyway, as they help show how mass ratio and separation distance affect the process.

Case 2.
Mass of neutron star: 1.5
Mass of white dwarf: 0.98
R_WD/R_NS: 4.99
Initial separation: 8 white dwarf radii.


Case 3.
Mass of neutron star: 1.5
Mass of white dwarf: 0.98
R_WD/R_NS: 20.01
Initial separation: 2 white dwarf radii.



I applaud the authors for being able to simulate these collisions while incorporating the effects of general relativity.  These simulations have been most interesting to me.

Vasileios Paschalidis, Zachariah Etienne, Yuk Tung Liu, & Stuart L. Shapiro (2010). Head-on collisions of binary white dwarf-neutron stars: Simulations in full general relativity. Submitted to PRD. arXiv: 1009.4932v1

Monday, September 27, 2010

Distinguishing Our Universe From Other Similar Universes In The Multiverse.

Srednicki and Hartle have recently raised an interesting concern about a limitation on the predictive power of multiverse theories.  They observe that in multiverse theories, exact snapshots of our universe occur many times in different places.  So if we want a physical theory that describes our universe, the one we live in, the question arises: how can we tell which one it is among all the others?

From the paper:
Theories of our universe are tested using the data that we acquire. When calculating predictions, we customarily make an implicit assumption that our data D0 occur at a unique location in spacetime. However, there is a quantum probability for these data to exist in any spacetime volume. This probability is extremely small in the observable part of the universe. However, in the large (or infinite) universes considered in contemporary cosmology, the following predictions often hold. 
  • The probability is near unity that our data D0 exist somewhere. 
  • The probability is near unity that our data D0 is exactly replicated elsewhere many times.  An assumption that we are unique is then false.
This paper is concerned with the implications of these two statements for science in a very large universe... 
The possibility that our data may be replicated exactly elsewhere in a very large universe profoundly affects the way science must be done.
In order to solve this problem, the authors propose introducing a "xerographic distribution" ξ.  Given the set X of all the exact copies of our universe in the multiverse, the xerographic distribution ξ gives the probability that we are the specific snapshot Xi of that set.

The authors claim that this distribution cannot be derived from the fundamental theory.  The fundamental theory can only predict the structure of the whole universe at large, not which snapshot in it we happen to be.  However, given a certain ξ, we can use Bayes' Theorem to test which ξ appears to be most correct, and once that ξ is established, we now have a statistical likelihood hinting at which universe in the whole multiverse is ours.

So, given a fundamental physical theory T and a xerographic distribution ξ, the authors say:
We therefore consider applying the Bayes schema to frameworks (T,ξ). This involves the following elements: First, prior probabilities P(T,ξ) must be chosen for the different frameworks. Next, the... likelihoods P(1p)(D0|T,ξ) must be computed. Finally, the... posterior probabilities are given by

P(T,ξ|D0) ∝ P(1p)(D0|T,ξ) P(T,ξ).

The larger these are, the more favored are the corresponding frameworks.
The authors then go on to give some examples of how this might work and to address issues with Boltzmann brains, etc.

So, just to repeat:
  1. One glaring problem with multiverse theories is that our universe occurs several times in several places throughout the multiverse.
  2. However, we would like a good physical theory to make predictions about the snapshot we happen to live in.
  3. The fundamental theory of the multiverse cannot tell us which snapshot we are.
  4. However, by creating a xerographic distribution ξ, we may be able to put probability estimates on which copy is ours using Bayes' Theorem.
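Point 4 is just Bayes' Theorem over a discrete set of frameworks, which is easy to make concrete.  The frameworks, priors, and likelihoods below are invented for illustration and are not taken from the paper:

```python
def posteriors(priors, likelihoods):
    """Bayes schema over frameworks (T, xi):
    posterior(T, xi | D0) ∝ likelihood(D0 | T, xi) * prior(T, xi)."""
    unnorm = {f: priors[f] * likelihoods[f] for f in priors}
    z = sum(unnorm.values())  # normalization over all frameworks
    return {f: p / z for f, p in unnorm.items()}

# Two made-up frameworks with equal priors but different likelihoods
# of producing our data D0:
priors = {"(T1, xi_uniform)": 0.5, "(T2, xi_uniform)": 0.5}
likelihoods = {"(T1, xi_uniform)": 0.8, "(T2, xi_uniform)": 0.2}

for f, p in posteriors(priors, likelihoods).items():
    print(f, round(p, 3))  # the larger, the more favored the framework
```

With equal priors the posterior simply tracks the likelihoods, so here (T1, xi_uniform) comes out favored 0.8 to 0.2.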
Some further thoughts and questions.  I remind readers that, as crazy a topic as this paper covers, it did get published in a respectable journal: Physical Review D.  However, while reading the paper I had several thoughts come to mind, and I would appreciate your own takes on these issues:
  1. How should we feel about multiverse theories given issues like this arise?
  2. Can only tenured professors get away with writing such articles?  I.e., if a grad student wrote papers like these, would universities take him/her seriously when applying for a faculty position?
  3. What is your "exact other" in the "other snapshots" doing right now? :) 
Srednicki, M., & Hartle, J. (2010). Science in a very large universe. Physical Review D, 81 (12). DOI: 10.1103/PhysRevD.81.123524