
Monday, December 27, 2010

Statistics with No Error Bars or Systematics.

I had someone recently try to convince me of something using several statistics with no error bars attached, no discussion of how they were obtained, no mention of any known systematics or how they were addressed, etc.  This seems to be a common practice among most people (my claim, with no firm statistic cited :) ) and reminds me of this:

[Embedded Dilbert comic: Dilbert.com]

I'd be interested to know, for example, whether the world would be any different if advertising campaigns were required to attach error bars and descriptions of their systematics when quoting statistics.

(Although, admittedly, life and especially casual conversations, wouldn't be as fun if they had to be rigorous all the time.)
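
As a rough illustration of what that might look like (my own made-up example, not anything from a real campaign), here is one way to attach a statistical error bar to a survey-style claim like "4 out of 5 dentists recommend it", using the usual normal approximation to a binomial proportion:

```python
import math

def proportion_with_error(successes, n, z=1.96):
    """Survey proportion and its ~95% margin of error
    (normal approximation to the binomial)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p, margin

# Hypothetical claim: "4 out of 5 dentists recommend it",
# based on an invented survey of 50 dentists.
p, err = proportion_with_error(40, 50)
print(f"{p:.0%} +/- {err:.0%} (95% CI, statistical error only)")
```

Even that quoted error bar says nothing about the systematics (who was surveyed, how the question was phrased), which is the other half of the complaint.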

Thoughts?

13 comments:

  1. The problem is that by making statistics more accurate one also makes them more complicated. The prime example of this in my mind is the latest NRC rankings, which give ranking ranges and probability distribution functions rather than a single rank-ordered list. This is fine for statisticians, but when one tries to explain it to a reporter or administrator who hasn't taken a graduate-level statistics course, things get really messy really fast. One administrator at CU claimed that my department was the #1 astronomy department in the country due to some combination of deliberate misrepresentation and misunderstanding of the statistical results. So additional information is only useful if those receiving it are capable of using it.

  2. Nick, I agree, and it isn't a big deal but:

    1. Statistics done right would be extremely helpful for society.
    2. Statistics done right is impractical.

    So basically we are just out of luck and have to learn to live intelligently in Dilbert's world.

  3. I don't know if the world would be different with error bars on adverts. I'm pretty sure that the world would be different if people were aware of their confirmation biases.

    Even statisticians are not impressed by the NRC rankings. http://news.uchicago.edu/btn/nrc.summary.php

    Philosophers even less so.
    http://el-prod.baylor.edu/certain_doubts/?p=2217

  4. Jonathan,

    That's interesting feedback about the NRC rankings. Personally, I think thesis advisors will have a greater impact on the success/failure you will experience than the school you go to. (Though it may be the case that great advisors are correlated with great schools.)

  5. In some cases, you are clearly right -- especially if the advisor is really, really good and/or the advisor has a very interesting project and willingness to share. That said, academic specializations differ both in terms of content and in terms of culture. Philosophy is very "pedigree" oriented. Schools get reputations and coming from a good school really matters. (I will punt on whether it ought to matter.) I have been told that psychology is not like this at all. I have no idea how things work in physics.

  6. Jonathan,

    Since astrophysics is such a media-friendly field, the topic of your research matters much more than your pedigree, although there will always be a bonus if you can put an Ivy League school on your CV, because many important hiring decisions are made by people outside your field. Basically, if you are in a "hot" field, coming from Arkansas Tech isn't going to be too much of a problem.

  7. I have a question: how many of you report precision and accuracy in your precise experiments? And for the analysts/modellers, in your simulations?

    It depends on what one is trying to convey. For example, small fluctuations in the orbit of the moon are of no import if we are only concerned with rise and set times for a given day.

  8. In a random sample of 80 billion likely voters, we've found that this article on statistics is truly sadistic....

  9. Ancient1,
    I am not an experimentalist or an observer, but I work with several observers, and they spend much more of their time and effort trying to quantify the error in their measurements than on making the measurements themselves. When someone reports an observational result in a paper or talk, most of the discussion focuses on the error in the measurement rather than the measurement itself.

    One particular bugaboo that keeps many observational astronomers up at night is systematic error - by that I mean errors that are not random but consistently skew the data in one direction. What Joe is worried about in removing the noise from the Planck data is that if they do it the same way every time, they might consistently add in small errors that produce a large cumulative effect. Basically, the only way to defend against those types of errors - other than being really thoughtful - is to have multiple experiments/observations that confirm the same result.
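
    To put a toy example on that distinction (my own sketch, not anything from Planck): averaging shrinks random errors but leaves a systematic offset untouched, no matter how much data you collect.

    ```python
    import random

    random.seed(0)
    TRUE_VALUE = 10.0        # quantity we are trying to measure
    SYSTEMATIC_OFFSET = 0.3  # hypothetical calibration bias, same sign every time
    NOISE = 2.0              # random scatter per measurement

    for n in (10, 1000, 100000):
        data = [TRUE_VALUE + SYSTEMATIC_OFFSET + random.gauss(0, NOISE)
                for _ in range(n)]
        mean = sum(data) / n
        stat_error = NOISE / n ** 0.5  # statistical error of the mean
        print(f"n={n:6d}  mean={mean:7.3f} +/- {stat_error:.3f} (statistical only)")

    # The quoted statistical error bar shrinks toward zero as n grows,
    # but the mean stays ~0.3 above TRUE_VALUE: the systematic survives.
    ```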

  10. As for the error in models or simulations, that is murky ground. You can formally compute the numerical error in approximately solving the equations, but that's not really what anybody cares about. The real question is how close your model or simulation comes to the real thing. When you have detailed data to compare against - as in weather forecasts - you can run your model starting with data from yesterday for a week and see how well you do. Since the weather is a chaotic system (in the mathematical sense), in practice your model is run 1000 times starting from yesterday's data, some metric is set for a "correct" forecast (Were the temperatures within 10 degrees? Did it snow in Cleveland?), and then you see how many of your forecasts were correct. (There is a sketch of that counting step at the end of this comment.)

    If you're working on the inside of the sun, you can check for internal consistency and compare with some observations, but for the most part you really can't quantify how accurate your simulation is. In those cases, what mostly matters is whether your model can help the observers explain what they see in features that are common to both your model and their observations.
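
    A minimal sketch of that ensemble-counting idea (the thresholds, numbers, and the run_forecast stand-in are all invented here, not taken from any real forecasting code):

    ```python
    import random

    random.seed(1)

    def run_forecast(member):
        """Stand-in for one ensemble member of a weather model:
        returns (forecast temperature, snow-in-Cleveland flag)."""
        return 30 + random.gauss(0, 6), random.random() < 0.4

    observed_temp, observed_snow = 28.0, True  # what actually happened (made up)
    N_MEMBERS = 1000

    correct = 0
    for m in range(N_MEMBERS):
        temp, snow = run_forecast(m)
        # "Correct" by the agreed metric: within 10 degrees AND the right snow call.
        if abs(temp - observed_temp) <= 10 and snow == observed_snow:
            correct += 1

    print(f"{correct}/{N_MEMBERS} members verified "
          f"({100 * correct / N_MEMBERS:.0f}% hit rate)")
    ```

    The hit rate itself is just another statistic, so it can carry its own error bar, which loops back to the point of the original post.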

  11. So Nick, if the numerical model doesn't match the data, how do you know whether that's because there's an error in your code or because the underlying theory just doesn't match the data?

  12. Joe,
    There are two ways:
    1) Look for bugs in your code. Finding none, you then conclude that the underlying theory is wrong.
    2) Wait for somebody else with a code using a different numerical method to come along and either prove you screwed up or validate you. This can take a long time, so until then #1 is the only option.
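
    A tiny illustration, in spirit, of option #2 above (my own toy, not anything Nick or Joe actually uses): solve the same equation with two independent numerical methods and check that they agree; if they do, a remaining mismatch with data points at the theory rather than the code.

    ```python
    import math

    def euler(f, y0, t0, t1, n):
        """Forward Euler integration of dy/dt = f(t, y)."""
        h, y, t = (t1 - t0) / n, y0, t0
        for _ in range(n):
            y += h * f(t, y)
            t += h
        return y

    def rk4(f, y0, t0, t1, n):
        """Classical fourth-order Runge-Kutta integration of dy/dt = f(t, y)."""
        h, y, t = (t1 - t0) / n, y0, t0
        for _ in range(n):
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h * k1 / 2)
            k3 = f(t + h / 2, y + h * k2 / 2)
            k4 = f(t + h, y + h * k3)
            y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
            t += h
        return y

    # Toy "theory": exponential decay dy/dt = -y, exact answer exp(-1).
    f = lambda t, y: -y
    a = euler(f, 1.0, 0.0, 1.0, 100000)
    b = rk4(f, 1.0, 0.0, 1.0, 1000)
    print(a, b, math.exp(-1.0))  # the two methods agree with each other and the exact result
    ```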

  13. Idk, error bars on things would be fantastic, especially in advertising. It's not that hard to understand that a stat has error. I definitely think error estimates should be reported along with statistics.

