Ph.D. in Statistics, University of California, Los Angeles
M.S. in Statistics, University of California, Los Angeles
B.A. in Mathematics/Economics, Claremont McKenna College
Econ One, August 2008 - Present
University of Pennsylvania, 2007 - 2008
University of California, Los Angeles, 2007 - 2008
Independent statistical consultant, 2004 - 2008
RAND Statistics Group, 2006
Lockheed Martin Missiles and Space, 2001 - 2003
U.S. District Court
State Court
Arbitration
Private Mediation
Statistical significance refers to whether an observed result is unlikely to be explained by random chance alone. In research, policy, litigation, and other applied fields, understanding statistical significance is essential for interpreting findings and evaluating evidence.
This article explains what statistical significance is — and what it isn’t — while highlighting common misunderstandings.
The phrase “statistically significant” is common across medicine, social science, business, and litigation — but it is often misunderstood.
Think of flipping a coin. If you flip it 10 times and get 8 heads, you might wonder: Is this coin fair, or could the difference from the expected 50/50 result be due to chance? Statistical testing asks: Is the observed difference (8 heads vs. 5 expected) likely due to chance alone? What about 80 heads out of 100? 800 heads out of 1,000?
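The coin-flip question can be checked with an exact binomial test. Below is a minimal, stdlib-only Python sketch; the function name is our own, not from any particular library:

```python
from math import comb

def binom_p_at_least(n, k, p=0.5):
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p10 = binom_p_at_least(10, 8)     # 8 or more heads in 10 flips
p100 = binom_p_at_least(100, 80)  # 80 or more heads in 100 flips
```

With 8 of 10 heads, the one-sided p-value is about 0.055 (roughly 0.11 two-sided), so the result is borderline at the conventional 0.05 threshold. The same 80% heads rate over 100 flips yields a p-value far below any conventional threshold, illustrating how sample size changes the inference.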
A few basics:
Samples themselves are never significant — it’s the results from analyzing them that may be. The better-framed question is whether a sample allows valid probability-based inferences.
A trivial difference in test scores can be statistically significant in a huge dataset but irrelevant in practice. Significance answers, “Could this be due to chance?” — not “Is this worth caring about?” That is where understanding practical significance becomes key.
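The large-dataset point can be made concrete with a simple z-test. In the sketch below, the numbers (a 0.5-point mean difference on a test with standard deviation 15) are illustrative assumptions, not data from any study:

```python
from math import erf, sqrt

def z_test_p(mean_diff, sd, n):
    """Two-sided p-value for a one-sample z-test of mean_diff against 0."""
    z = mean_diff / (sd / sqrt(n))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# A 0.5-point difference in average test scores (sd = 15):
small = z_test_p(0.5, 15, 100)        # small sample: not significant
large = z_test_p(0.5, 15, 1_000_000)  # huge sample: highly significant
```

The effect is identical in both cases, and arguably trivial in practice; only the sample size changed. That is exactly why statistical significance alone cannot answer whether a difference matters.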
Statistically insignificant differences may reflect either a tiny effect or an underpowered study with too few observations. Remember: absence of evidence is not evidence of absence. For example, a small clinical trial may not detect significance even if a treatment has meaningful effects. Evaluating the alternative hypothesis and being mindful of false negatives is critical here.
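Underpowered studies can be illustrated by simulation. The sketch below is a toy setup of our own: two groups with unit-variance normal outcomes, a true mean difference of 0.2, and a crude two-sided z-test; it estimates how often studies of different sizes detect the real effect:

```python
import random

def simulated_power(n, effect=0.2, alpha_z=1.96, trials=2000, seed=1):
    """Fraction of simulated two-group studies (n per arm) whose mean
    difference is 'significant' under a z-test with known unit variance."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(effect, 1.0) for _ in range(n)]
        diff = sum(b) / n - sum(a) / n
        se = (2.0 / n) ** 0.5          # standard error of the difference
        if abs(diff) / se > alpha_z:   # two-sided alpha = 0.05
            hits += 1
    return hits / trials

power_small = simulated_power(20)   # small trial: effect usually missed
power_large = simulated_power(200)  # larger trial: effect usually found
```

Here the treatment effect is real in every simulated study, yet the small trial detects it only a small fraction of the time: a "not significant" result from such a study is weak evidence of no effect.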
Running many tests can increase the odds of false positives, a problem stemming from multiple testing. Without corrections or replication, isolated “significant” results can be misleading, especially without a robust statistical framework.
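The multiple-testing problem is easy to demonstrate: even when every null hypothesis is true, some tests will cross the 0.05 line by chance. A minimal sketch with simulated null p-values (the Bonferroni correction shown is one standard, conservative fix):

```python
import random

def false_positive_demo(num_tests=1000, alpha=0.05, seed=0):
    """With no real effects anywhere, each test still 'rejects' with
    probability alpha. Returns (uncorrected hits, Bonferroni-corrected hits)."""
    rng = random.Random(seed)
    pvals = [rng.random() for _ in range(num_tests)]   # null p-values ~ Uniform(0, 1)
    raw = sum(p < alpha for p in pvals)                # expect ~ alpha * num_tests
    bonf = sum(p < alpha / num_tests for p in pvals)   # Bonferroni threshold
    return raw, bonf

raw_hits, corrected_hits = false_positive_demo()
```

With 1,000 tests of true null hypotheses, roughly 50 "significant" results appear by chance alone, while the corrected threshold eliminates nearly all of them. This is why isolated significant results from large search exercises deserve skepticism.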
Statistical significance is one piece of the puzzle. A thoughtful interpretation also considers effect size, confidence intervals, the hypotheses being tested, and study design.
Statistical significance is a tool, not a verdict. It tells us whether chance is a likely explanation. Whether the result is important or actionable may be related but is a different question. Sound decisions come from combining significance with context, effect size, and expert judgment.
It’s the probability of observing data at least as extreme as those actually found, assuming the null hypothesis is true.
No. Samples are never “statistically significant” — only results from analyzing them may be. Random samples are advantageous because they put the practitioner in a strong position to make probability-based inferences. The size of a random sample can affect the power of a study (i.e., its ability to detect an effect if one exists), but the label of “significant” applies to results, not to the sample or its size.
Not necessarily. A result can be statistically significant but inconsequential as a practical matter. Understanding both statistical and practical significance is crucial.
Yes — but it typically requires a much larger effect to reach significance. Small samples are also more vulnerable to outliers, and at the same time, outliers themselves are harder to detect when data are limited.
No. Statistical significance relates only to probability, not cause and effect.
Effect size, confidence intervals, the hypotheses being tested, and study design all provide valuable context.
Testing many hypotheses in parallel raises the chance of false positives. Without context, “significant” results can be misleading.
[1] Other commonly used thresholds for the p-value include 0.01 and 0.10.