Ph.D. in Statistics, University of California, Los Angeles
M.S. in Statistics, University of California, Los Angeles
B.A. in Mathematics/Economics, Claremont McKenna College
Econ One, August 2008 – Present
University of Pennsylvania, 2007 – 2008
University of California, Los Angeles, 2007 – 2008
Self-Employed Statistical Consultant, 2004 – 2008
RAND Statistics Group, 2006
Lockheed Martin Missiles and Space, 2001 – 2003
U.S. District Court
State Court
Arbitration
Private Mediation
Statistical significance refers to whether an observed result is unlikely to be explained by random chance alone. In research, policy, litigation, and other applied fields, understanding statistical significance is essential for interpreting findings and evaluating evidence.
This article explains what statistical significance is — and what it isn't — while highlighting common misunderstandings.
The phrase "statistically significant" is common across medicine, social science, business, and litigation — but it is often misunderstood.
Think of flipping a coin. If you flip it 10 times and get 8 heads, you might wonder: Is this coin fair, or could the difference from the expected 50/50 result be due to chance? Statistical testing asks: Is the observed difference (8 heads vs. 5 expected) likely due to chance alone? What about 80 heads out of 100? 800 heads out of 1,000?
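The coin-flip question can be made concrete with a short calculation. The sketch below (the function name is illustrative) computes an exact two-sided p-value for a fair-coin null hypothesis using the binomial distribution:

```python
from math import comb

def two_sided_binomial_p(heads: int, flips: int) -> float:
    """Exact two-sided p-value under the null that the coin is fair.

    Binomial(n, 0.5) is symmetric, so the two-sided p-value is twice
    the probability of an outcome at least as far from 50/50 as the
    one observed (capped at 1).
    """
    k = max(heads, flips - heads)  # count on the more extreme side
    upper_tail = sum(comb(flips, i) for i in range(k, flips + 1)) / 2 ** flips
    return min(1.0, 2 * upper_tail)

for heads, flips in [(8, 10), (80, 100), (800, 1000)]:
    print(f"{heads}/{flips} heads: p = {two_sided_binomial_p(heads, flips):.4f}")
```

Note the pattern: 8 heads in 10 flips gives a p-value of about 0.11 — not significant at the conventional 0.05 threshold — while the same 80% proportion in 100 or 1,000 flips is overwhelmingly unlikely under a fair coin.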
A few basics:
Samples themselves are never significant — it's the results from analyzing them that may be. The better-framed question is whether a sample allows valid probability-based inferences.
A trivial difference in test scores can be statistically significant in a huge dataset but irrelevant in practice. Significance answers "Could this be due to chance?" — not "Is this worth caring about?" That is where understanding practical significance becomes key.
Statistically insignificant differences may reflect either a tiny effect or an underpowered study with too few observations. Remember: absence of evidence is not evidence of absence. For example, a small clinical trial may not detect significance even if a treatment has meaningful effects. Evaluating the alternative hypothesis and being mindful of false negatives is critical here.
Running many tests can increase the odds of false positives, a problem stemming from multiple testing. Without corrections or replication, isolated "significant" results can be misleading, especially without a robust statistical framework.
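The underpowered-study point can be illustrated with a small simulation. This is a sketch only: the function name, sample sizes, and the 0.5-standard-deviation treatment effect are illustrative assumptions, and a z-test with known variance stands in for the t-test a real analysis would use.

```python
import random
import statistics

def fraction_significant(n_per_arm: int, effect: float,
                         trials: int = 2000, seed: int = 0) -> float:
    """Simulate `trials` two-arm studies in which the treatment truly
    shifts the outcome by `effect` standard deviations, and return the
    fraction that reach two-sided significance at alpha = 0.05."""
    rng = random.Random(seed)
    se = (2 / n_per_arm) ** 0.5  # standard error of the difference in means
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        treated = [rng.gauss(effect, 1.0) for _ in range(n_per_arm)]
        diff = statistics.mean(treated) - statistics.mean(control)
        if abs(diff / se) > 1.96:  # critical value for alpha = 0.05
            hits += 1
    return hits / trials

power_small = fraction_significant(n_per_arm=10, effect=0.5)
power_large = fraction_significant(n_per_arm=100, effect=0.5)
print(f"chance of detecting the effect, n=10 per arm:  {power_small:.2f}")
print(f"chance of detecting the effect, n=100 per arm: {power_large:.2f}")
```

With 10 subjects per arm the simulated trial detects the (real) effect only a minority of the time; with 100 per arm it almost always does — the effect did not change, only the study's power.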
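The multiple-testing problem follows directly from the arithmetic of independent tests. A minimal sketch (assuming independent tests on pure noise, which is a simplification):

```python
# If each test of a true null hypothesis has a 5% chance of a false
# positive, the chance of at least one false positive grows quickly
# with the number of independent tests.
alpha = 0.05
for m in (1, 5, 20, 100):
    p_any = 1 - (1 - alpha) ** m
    print(f"{m:>3} tests: P(at least one false positive) = {p_any:.2f}")

# A Bonferroni correction tests each hypothesis at alpha / m instead,
# holding the overall (family-wise) false-positive rate near alpha.
m = 20
p_any_bonferroni = 1 - (1 - alpha / m) ** m
print(f"{m} tests with Bonferroni correction: {p_any_bonferroni:.3f}")
```

Running 20 uncorrected tests on pure noise yields at least one "significant" result about 64% of the time — which is why isolated significant findings from large test batteries deserve scrutiny.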
Statistical significance is one piece of the puzzle. A thoughtful interpretation also considers effect size, confidence intervals, the hypotheses being tested, and study design.
Statistical significance is a tool, not a verdict. It tells us whether chance is a likely explanation. Whether the result is important or actionable may be related but is a different question. Sound decisions come from combining significance with context, effect size, and expert judgment.
It's the probability of observing data at least as extreme as those found, assuming the null hypothesis is true.
No. Samples are never "statistically significant" — only results from analyzing them may be. Random samples are advantageous because they put the practitioner in a strong position to make probability-based inferences. The size of a random sample can affect the power of a study (i.e., its ability to detect an effect if one exists), but the label of "significant" applies to results, not to the sample or its size.
Not necessarily. A result can be statistically significant but inconsequential as a practical matter. Understanding both statistical and practical significance is crucial.
Yes — but it typically requires a much larger effect to reach significance. Small samples are also more vulnerable to outliers, and at the same time, outliers themselves are harder to detect when data are limited.
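The outlier point can be seen in a toy comparison (the numbers are purely illustrative): adding the same extreme observation moves a small sample's mean far more than a large sample's.

```python
import statistics

small_sample = [10, 11, 9, 10, 12]        # n = 5
large_sample = [10, 11, 9, 10, 12] * 20   # n = 100, same distribution of values

for label, data in [("n=5", small_sample), ("n=100", large_sample)]:
    clean_mean = statistics.mean(data)
    outlier_mean = statistics.mean(data + [50])  # one extreme observation
    print(f"{label:>6}: mean {clean_mean:.2f} -> {outlier_mean:.2f} "
          f"after adding a single outlier of 50")
```

In the small sample, one outlier drags the mean from 10.4 to 17.0; in the large sample it barely moves — and with so few observations there is little basis for flagging the outlier in the first place.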
No. Statistical significance relates only to probability, not cause and effect.
Effect size, confidence intervals, the hypotheses being tested, and study design all provide valuable context.
Testing many hypotheses in parallel raises the chance of false positives. Without context, "significant" results can be misleading.
[1] Other commonly used thresholds for the p-value include 0.01 and 0.10.