
Wednesday, June 22, 2011

Scientific Consensus And/Or Turing Completeness?

(What follows isn't the most correct use of "Turing Complete", but you get my point...)

Today I was reading Not Even Wrong, which quotes Susskind on the Multiverse:
In 1974 I had an interesting experience about how scientific consensus forms. People were working on the as yet untested theory of hadrons [subatomic particles such as protons and neutrons], which is called quantum chromodynamics, or QCD. At a physics conference I asked, “You people, I want to know your belief about the probability that QCD is the right theory of hadrons.” I took a poll. Nobody gave it more than 5 percent. Then I asked, “What are you working on?” QCD, QCD, QCD. They were all working on QCD. The consensus was formed, but for some odd reason, people wanted to show their skeptical side. They wanted to be hard-nosed. There’s an element of the same thing around the multiverse idea. A lot of physicists don’t want to simply fess up and say, “Look, we don’t know any other alternative.”
This then got me thinking about being "Turing Complete".  As many of you may know, if a problem can be solved algorithmically at all, then any Turing Complete framework can solve it.
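To make that concrete, here is a minimal sketch of my own in Python (the register machine's instruction set and register names are invented for illustration, not taken from anywhere in particular).  It computes the same function, addition of non-negative integers, in two very different frameworks: ordinary Python arithmetic, and a tiny counter-machine-flavored interpreter whose only operations are increment, decrement, and jump-if-zero.  Wildly different machinery, same answers:

def add_python(a, b):
    # Framework 1: ordinary arithmetic.
    return a + b

def run_machine(program, registers):
    # Framework 2: a tiny register machine. Instructions are tuples:
    #   ('inc', r)    increment register r
    #   ('dec', r)    decrement register r
    #   ('jz', r, k)  jump to instruction k if register r is zero
    #   ('halt',)     stop and return the registers
    pc = 0
    while True:
        instr = program[pc]
        op = instr[0]
        if op == 'inc':
            registers[instr[1]] += 1
            pc += 1
        elif op == 'dec':
            registers[instr[1]] -= 1
            pc += 1
        elif op == 'jz':
            pc = instr[2] if registers[instr[1]] == 0 else pc + 1
        elif op == 'halt':
            return registers

# Addition as a machine program: drain register b into register a.
ADD = [
    ('jz', 'b', 4),   # 0: if b == 0 we are done
    ('dec', 'b'),     # 1: b -= 1
    ('inc', 'a'),     # 2: a += 1
    ('jz', 'z', 0),   # 3: unconditional jump back (z is always 0)
    ('halt',),        # 4: stop
]

def add_machine(a, b):
    return run_machine(ADD, {'a': a, 'b': b, 'z': 0})['a']

for a, b in [(0, 0), (3, 4), (10, 7)]:
    assert add_python(a, b) == add_machine(a, b)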

Now back to Susskind's quote.  He implies that most people didn't believe in QCD at first, but that, since everybody was working on it, it eventually found the most success in physics.  Did QCD become successful because it really is *the* correct picture of what is going on in particle physics, or because it was the most worked-on framework and so was ultimately cleverly engineered to model reality?

Now, QCD makes successful predictions, so it is more than a framework; it is a successful scientific theory.  However, part of me wonders: if the physics community had taken a completely different approach to particle physics, and everyone had worked on that alternative instead, would they eventually have found a completely different framework that not only explains particle physics but also makes successful predictions?

So how many of our current physical theories are *exact* and *unique*, and how many have a "Turing Completeness" about them, such that if the whole community works on them for decades, they eventually both fit the data and make successful predictions?

So this becomes my question: Are the main theories in physics accepted because they are the unique theories that fit the data and make predictions, or are they accepted because the community adopted them early on and cleverly molded them into models that fit the data and eventually made successful predictions?  If the latter, are these theories really unique?  Is there a "Turing Complete" set of frameworks that can all describe the same underlying physics and all make successful predictions, making each of them a valid scientific theory?

And if this is all a set of "Turing Complete" frameworks, with one framework favored by the community, can we ever know what is really happening versus what we have merely forced to work?

Thoughts?

3 comments:

  1. I vote "Turing completeness" for physics.  I think that, especially in the 50 years since the Cold War turned science into a government-funded academic/industrial/economic endeavor, science has become a sort of meld between a meritocracy and a democracy.  Your idea needs some magical combination of social clout and brilliance to be accepted by the community and then acted upon.  Bad ideas from famous people don't gain traction, and neither do good ideas from nobodies.

  2. Honestly, I think it's a combination.  Physics, and science in general, is much less of an "ivory tower" and much more of a human endeavor (even a "political" human endeavor) than we generally like to admit.

    You will never have a "unique" model that fits all the data.  Given any finite set of data, there will always be an infinite number of models that correctly reproduce that data (see the sketch after the comments).  (Many of these models might involve tiny invisible gnomes making the light bulbs work, but most will not.)  This is where Occam's razor comes in.  (I think, however, the details of how to apply Occam's razor are still debatable.)

    However, any time you have two competing models, even if they are really similar, and even if they both have a number of flexible parameters that can be molded to fit experiments, you should be able to come up with an experiment where model 1 predicts A and model 2 predicts B.  Sure, it may take a good deal of time to get all the error bars down and reach a consensus on the evidence (and even longer to get the funding or technological ability to design, build, and run the experiment), but it should (eventually) come.

    That being said, I don't think we are there yet for many (if not most) parts of modern physics (especially much of particle physics and cosmology).  I really think we have a long way to go before we are "sure" of anything like that.  (And even after we are "sure" of things like Newtonian dynamics or basic geology, there is always the chance of an Albert Einstein or an Alfred Wegener coming along to mess it up.)

  3. I know a blogger who discusses this phenomenon in terms of epicycles, as in: "Recall what I have said on numerous occasions before.  Once a 'science' starts manufacturing epicycles on a regular basis, it's all over but for the burial of the previous generation (or three) of failed scientists."  As others have said above, I think this is a much bigger problem than is generally realized, especially after reading Lee Smolin's The Trouble with Physics.

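Here is a minimal sketch of the point raised in the second comment (my addition, not the commenter's; it assumes Python with numpy, and the four data points are made up for illustration).  Two different polynomial models are fit exactly through the same finite data, so both "reproduce the data"; in fact there is an infinite family of exact fits, because you can always add any multiple of a polynomial that vanishes at every data point.  Yet the two models disagree the moment you evaluate them somewhere new, which is precisely the "model 1 predicts A, model 2 predicts B" experiment:

import numpy as np

# Four made-up "experimental" data points.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 0.0, 5.0])

# Model 1: the unique cubic through the four points.
cubic = np.polyfit(x, y, 3)

# Model 2: the same cubic plus c * x(x-1)(x-2)(x-3), a term that
# vanishes at every data point. Any value of c gives another exact
# fit, so infinitely many models reproduce the same finite data.
def model2(t, c=2.0):
    return np.polyval(cubic, t) + c * t * (t - 1) * (t - 2) * (t - 3)

# Both models agree on the existing data...
assert np.allclose(np.polyval(cubic, x), y)
assert np.allclose(model2(x), y)

# ...but a new measurement point tells them apart.
t_new = 4.0
print("model 1 predicts:", np.polyval(cubic, t_new))
print("model 2 predicts:", model2(t_new))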
