Statistical significance

  • Flying Dutchman
    SBR MVP
    • 05-17-09
    • 2467

    #1
    Statistical significance
    In statistics, a result is called statistically significant if it is unlikely to have occurred by chance. The phrase test of significance was coined by Ronald Fisher.[1]
    The use of the word significance in statistics is different from the everyday one, in which it suggests that something is important or meaningful. For example, a study that included tens of thousands of participants might be able to say with very great confidence that people of one race are more intelligent than people of another race by 1/20th of an IQ point. This result would be statistically significant, but the difference is small enough to be utterly unimportant. Many researchers urge that tests of significance should always be accompanied by effect-size statistics, which estimate the size, and thus the practical importance, of the difference.
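    To make that concrete, here is a minimal Python sketch (standard library only; the means, SD, and sample sizes are invented for illustration, and a two-sample z-test with known SD is assumed): the same 1/20th-of-an-IQ-point difference goes from unremarkable to highly "significant" as the sample grows, while the effect size, Cohen's d, stays negligible throughout.
    ```python
    # Hypothetical illustration: fixed mean difference of 0.05 IQ points,
    # SD of 15, two-sample z-test with equal group sizes and known SD.
    import math

    def z_test_p_value(mean_diff, sd, n_per_group):
        """Two-sided p-value for a two-sample z-test with known sd."""
        se = sd * math.sqrt(2.0 / n_per_group)  # standard error of the mean difference
        z = mean_diff / se
        p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal tail probability
        return z, p

    mean_diff, sd = 0.05, 15.0      # 1/20th of an IQ point; IQ SD is about 15
    d = mean_diff / sd              # Cohen's d: the difference in SD units
    for n in (1_000, 1_000_000, 10_000_000):
        z, p = z_test_p_value(mean_diff, sd, n)
        print(f"n per group = {n:>10,}   z = {z:5.2f}   p = {p:.1e}   d = {d:.4f}")
    ```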
    The amount of evidence required to accept that an event is unlikely to have arisen by chance is known as the significance level or critical p-value. In traditional Fisherian statistical hypothesis testing, the p-value is the probability, computed under the null hypothesis, of obtaining the observed data or data more extreme. If the obtained p-value is small, then either the null hypothesis is false or an unusual event has occurred. It is worth stressing that p-values have no repeat-sampling interpretation.
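    A minimal sketch of that definition, using an invented example of 60 heads in 100 tosses of a supposedly fair coin: the p-value is just the null probability of a result at least as extreme as the one observed.
    ```python
    # Invented example: 60 heads in 100 tosses; H0: the coin is fair.
    from math import comb

    def p_at_least(heads, n, p_null=0.5):
        """P(X >= heads) under H0, where X ~ Binomial(n, p_null)."""
        return sum(comb(n, k) * p_null**k * (1 - p_null)**(n - k)
                   for k in range(heads, n + 1))

    p = p_at_least(60, 100)
    print(f"p-value = {p:.4f}")  # about 0.028: unusual under H0, but the number
                                 # itself carries no repeat-sampling guarantee
    ```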
    An alternative statistical hypothesis testing framework is the Neyman-Pearson frequentist school, which requires that both a null and an alternative hypothesis be defined and which investigates the repeat-sampling properties of the procedure: the probability of deciding to reject the null hypothesis when it is in fact true and should not have been rejected (a "false positive" or Type I error), and the probability of deciding to accept the null hypothesis when it is in fact false (a "false negative" or Type II error).
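    Those repeat-sampling properties can be estimated by simulation: fix a rejection rule in advance, then count how often it errs when each hypothesis is true. The setup below (H0: mu = 0 versus H1: mu = 0.5, known SD of 1, n = 25, one-sided alpha = 0.05) is entirely hypothetical.
    ```python
    # Hypothetical design: H0: mu = 0 vs. H1: mu = 0.5, known SD = 1, n = 25,
    # one-sided test at alpha = 0.05. Error rates are estimated by simulation.
    import math
    import random

    N, SD = 25, 1.0
    Z_CRIT = 1.645      # one-sided critical value for alpha = 0.05
    TRIALS = 50_000

    def rejects_h0(mu):
        """Run one simulated experiment and apply the fixed rejection rule."""
        xbar = sum(random.gauss(mu, SD) for _ in range(N)) / N
        return xbar / (SD / math.sqrt(N)) > Z_CRIT

    random.seed(1)
    type_i = sum(rejects_h0(0.0) for _ in range(TRIALS)) / TRIALS       # H0 true, rejected
    type_ii = 1 - sum(rejects_h0(0.5) for _ in range(TRIALS)) / TRIALS  # H1 true, accepted
    print(f"estimated Type I error  ~ {type_i:.3f} (designed to be at most 0.05)")
    print(f"estimated Type II error ~ {type_ii:.3f}")
    ```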
    More typically, the significance level of a test is set so that the probability of mistakenly rejecting the null hypothesis is no more than the stated probability. This allows the test to be performed using non-sufficient statistics, which has the advantage of reducing the computational burden at the cost of wasting some information.
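    For instance, a sign test keeps only the signs of the observations, a statistic that is cheap to compute but not sufficient, so the information in the magnitudes is wasted. A minimal sketch with invented data and a null hypothesis of zero median:
    ```python
    # Invented data; H0: the median is 0. The sign test uses only the signs,
    # a non-sufficient statistic: cheap, but the magnitudes are thrown away.
    from math import comb

    data = [0.4, -0.1, 0.7, 0.2, -0.3, 0.5, 0.6, -0.2, 0.8, 0.3]
    n = len(data)
    positives = sum(x > 0 for x in data)

    # One-sided sign test: P(X >= positives) with X ~ Binomial(n, 1/2) under H0.
    p_sign = sum(comb(n, k) for k in range(positives, n + 1)) / 2**n
    print(f"{positives}/{n} positive, sign-test p = {p_sign:.3f}")
    ```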
    It is worth stressing that Fisherian p-values are philosophically different from Neyman-Pearson Type I errors; conflating the two is a confusion unfortunately propagated by many statistics textbooks.[2]

  • jellobiafra
    SBR Hall of Famer
    • 03-08-09
    • 6291

    #2
    Is this SBR's version of Revenge of the Nerds?

  • bighank33
    SBR High Roller
    • 11-03-09
    • 190

    #3

  • Swinging Johnson
    SBR Hall of Famer
    • 08-12-09
    • 7604

    #4
    Anyone who misinterprets Fisherian p-values as Neyman-Pearson Type I errors should really rethink what they want to do with their lives. Just because it's written doesn't make it true, people! Consider the evidence, contemplate the source, and question its authenticity. Dutch, you have demonstrated far too much patience with this audience. People who don't know p-values and statistical probabilities just infuriate me. I agree 110% with what you're saying!

  • Flying Dutchman
    SBR MVP
    • 05-17-09
    • 2467

    #5
    Originally posted by jellobiafra
    Is this SBR's version of Revenge of the Nerds?