frequentist friends) to rule between scientific theories. Why? Because, in a layman's nutshell, what Bayesian statistics does is tell you how likely your theory is given the data you have observed.
But a question arises: should we feel comfortable accepting one theory over another merely because it is more likely than the alternative? Technically the other may still be correct, just less likely.
And furthermore: what should be the threshold at which we say a theory is unlikely enough to be ruled out? The particle physics community has settled on the 5σ level, which is a fancy pants way of saying the odds of the observed signal being a mere statistical fluctuation are about 0.0000573% (equivalently, you can be roughly 99.9999426697% confident it is not a fluke). Is this too high, too low, or just right?
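If you want to see where that 99.9999426697% figure comes from, here is a minimal sketch (assuming a Gaussian distribution and counting fluctuations in either direction, i.e. a two-sided test) that converts a number of sigma into a p-value using only the standard library:

```python
from math import erfc, sqrt

def two_sided_p_value(n_sigma):
    """Probability of a Gaussian fluctuation at least n_sigma standard
    deviations away from the mean, in either direction."""
    return erfc(n_sigma / sqrt(2))

p = two_sided_p_value(5)
print(f"5-sigma two-sided p-value: {p:.6e}")   # about 5.73e-07
print(f"Complement: {100 * (1 - p):.10f}%")    # about 99.9999426697%
```

Note that one-sided conventions (which particle physicists often use for discovery claims) give a slightly different number, about 2.87e-07, so the exact percentage depends on the convention.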
The Inverse Problem: For example, let's assume that supersymmetry (SUSY) is correct and several SUSY particles are observed at the LHC. Now, it seems like there are 5 bajillion SUSY models that can explain the same set of data. For example, I coauthored a paper on SUSY where we showed that for a certain SUSY model, a wide variety of next-to-lightest SUSY particles are possible. (See plot above.) Furthermore, other SUSY models can allow for these same particles.
So, how do we decide between this plethora of models, given that many of them can find a way to account for the same data? I am calling this the inverse problem: when many theories permit the same data, how can you work backwards from that data to the correct theory?
So again I will ask: should we really be choosing between two theories that can reproduce the same data just because one has an easier time doing it than the other? Is this just a sophisticated application of Ockham's razor? Should we as scientists say "The LHC has shown theory X to be true" when in reality theory Y could technically also be true, but is just far less likely?
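This "easier time" intuition is exactly what Bayesian model comparison formalizes: a flexible theory that can explain almost anything spreads its prior over many possible outcomes, so it is automatically penalized relative to a sharp theory that predicted the data specifically. Here is a toy illustration (a coin-flip stand-in, not a real LHC analysis): a "sharp" model that predicts a fair coin versus a "flexible" model that allows any bias, compared on 52 heads in 100 tosses, which both models can accommodate:

```python
from math import comb

def evidence_sharp(k, n):
    """Evidence for a 'sharp' model that predicts a fair coin (p = 0.5):
    the binomial probability of k heads in n tosses."""
    return comb(n, k) * 0.5**n

def evidence_flexible(k, n):
    """Evidence for a 'flexible' model with a uniform prior on the bias p.
    Integrating the binomial likelihood over p in [0, 1] gives 1/(n+1),
    regardless of k: the model hedges across every possible outcome."""
    return 1.0 / (n + 1)

k, n = 52, 100  # both models can 'explain' 52 heads in 100 tosses
bf = evidence_sharp(k, n) / evidence_flexible(k, n)
print(f"Bayes factor (sharp vs flexible): {bf:.2f}")  # roughly 7
```

The Bayes factor favors the sharp model by about 7 to 1, even though the flexible model fits the data too. That is Ockham's razor falling out of the arithmetic rather than being imposed by hand, though it still doesn't answer the question of whether "more probable" should mean "accepted."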
What do you think?