What is MAIVE?
Overview
MAIVE (Meta-Analysis Instrumental Variable Estimator) adjusts for publication bias and p-hacking while correcting for “spurious precision” — over-optimistic standard errors that arise when researchers choose methods or models that under-report true uncertainty.
By using an instrument based on the inverse sample size, MAIVE reduces bias due to p-hacking while leaving publication-bias corrections (e.g., PET-PEESE) intact.
It is most useful for observational research, where standard errors are easiest to game and inverse-variance weights can backfire. For experimental research, it provides a useful robustness check.
How MAIVE Works
Step 1 (First stage). Regress the reported variances on the inverse sample size: SE² = ψ₀ + ψ₁(1/N) + ν. This isolates the share of variance unlikely to be affected by p-hacking: artificially inflating a study's sample size is much harder than artificially inflating its reported precision.
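The first stage is a simple ordinary least squares regression. The sketch below illustrates it on a toy dataset; the column names (se, n) and the data are hypothetical and are not the MAIVE package's interface.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy data: one row per primary-study estimate (illustrative values).
df = pd.DataFrame({
    "se": [0.12, 0.08, 0.20, 0.05, 0.15, 0.09],
    "n":  [150, 400, 60, 900, 110, 320],
})

# First stage: SE^2 = psi0 + psi1 * (1/N) + nu
X = sm.add_constant(1.0 / df["n"])
first_stage = sm.OLS(df["se"] ** 2, X).fit()

# Fitted variances: the part of reported precision explained by sample
# size alone, which is hard to manipulate by p-hacking. Clip at a small
# positive value so the square root is defined.
fitted_se = np.sqrt(np.clip(first_stage.fittedvalues, 1e-12, None))
print(first_stage.params)
print(fitted_se)
```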
Step 2 (Second stage). Replace each variance in your chosen funnel-plot model (PET, PEESE, PET-PEESE, EK) with the fitted value from Step 1 and drop or adjust inverse-variance weights. The resulting instrumental variable estimator is MAIVE.
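As a concrete illustration, here is a minimal PET-style second stage in Python, assuming the same toy setup as above: the fitted standard error from Step 1 replaces the reported one and the regression is unweighted. All names and data are illustrative, not the package's code; a PEESE variant would use the fitted variance as the regressor instead.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative data: effect estimates with reported SEs and sample sizes.
df = pd.DataFrame({
    "effect": [0.30, 0.18, 0.45, 0.12, 0.35, 0.22],
    "se":     [0.12, 0.08, 0.20, 0.05, 0.15, 0.09],
    "n":      [150, 400, 60, 900, 110, 320],
})

# Step 1: fitted variances from the first-stage regression on 1/N.
fs = sm.OLS(df["se"] ** 2, sm.add_constant(1.0 / df["n"])).fit()
fitted_se = np.sqrt(np.clip(fs.fittedvalues, 1e-12, None))

# Step 2 (PET variant): effect = beta0 + beta1 * fitted_SE + error,
# estimated by unweighted OLS. beta0 is the bias-corrected mean effect;
# the PEESE variant replaces fitted_se with fitted_se**2.
pet = sm.OLS(df["effect"], sm.add_constant(fitted_se)).fit()
print(pet.params)
```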
Step 3 (Inference). Report heteroskedasticity-robust standard errors, the Anderson-Rubin confidence interval, and the first-stage F statistic so that users can judge instrument strength.
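The first two diagnostics are straightforward to compute; a sketch under the same illustrative setup follows (an Anderson-Rubin-style interval is sketched separately after the feature list below). Names and data remain hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "effect": [0.30, 0.18, 0.45, 0.12, 0.35, 0.22],
    "se":     [0.12, 0.08, 0.20, 0.05, 0.15, 0.09],
    "n":      [150, 400, 60, 900, 110, 320],
})

# First stage and its F statistic: with a single instrument (1/N),
# the regression F statistic measures instrument strength.
fs = sm.OLS(df["se"] ** 2, sm.add_constant(1.0 / df["n"])).fit()
print(f"First-stage F: {fs.fvalue:.2f}")

# Second stage (PET variant) with heteroskedasticity-robust (HC0)
# standard errors instead of the classical OLS ones.
fitted_se = np.sqrt(np.clip(fs.fittedvalues, 1e-12, None))
pet = sm.OLS(df["effect"], sm.add_constant(fitted_se)).fit(cov_type="HC0")
print(pet.params)
print(pet.bse)   # robust standard errors
```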
Key Features
- Instrumental-Variable Correction: Uses inverse sample size as a plausibly exogenous instrument for reported precision.
- Model Agnostic: Works as a drop-in replacement for current meta-analysis models based on the funnel plot.
- Weak-Instrument Robust: Built-in Anderson-Rubin intervals remain valid when the first-stage F statistic is small (a sketch follows this list).
- Minimal Extra Data: Needs only sample sizes, which most meta-analysts already collect.
- Bias Reduction: Simulations and large-scale empirical evidence show that MAIVE corrects most of the bias arising from publication bias, p-hacking, and spurious precision.
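For intuition on the weak-instrument-robust intervals, the sketch below inverts an Anderson-Rubin-style test for the coefficient on the instrumented standard-error term over a grid of candidate values. It illustrates the idea of test inversion, not necessarily the exact interval the MAIVE package reports; all names and data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.DataFrame({
    "effect": [0.30, 0.18, 0.45, 0.12, 0.35, 0.22, 0.40, 0.10],
    "se":     [0.12, 0.08, 0.20, 0.05, 0.15, 0.09, 0.18, 0.06],
    "n":      [150, 400, 60, 900, 110, 320, 75, 700],
})

Z = sm.add_constant(1.0 / df["n"])         # instrument: inverse sample size
crit = stats.chi2.ppf(0.95, df=1)          # 5% critical value, one restriction
accepted = []
for b in np.linspace(-10.0, 10.0, 2001):   # candidate slopes on the SE term
    # Under H0 (slope = b), effect - b*SE should be unrelated to 1/N.
    ar = sm.OLS(df["effect"] - b * df["se"], Z).fit(cov_type="HC0")
    wald = (ar.params.iloc[1] / ar.bse.iloc[1]) ** 2
    if wald <= crit:
        accepted.append(b)

# Values of b not rejected by the AR test form the confidence set;
# with a weak instrument the set can be wide or even unbounded.
print((min(accepted), max(accepted)) if accepted else "empty set")
```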
Applications
Observational Evidence
Economics, psychology, education, medical research — any field where research design can drive reported precision.
Policy Analysis
Give decision-makers bias-corrected effect sizes when evidence from randomized controlled experiments is scarce.
Data-Quality Audits
Flag clusters of spuriously precise results before they steer conclusions.
Research Validation
Compare meta-analytic estimates with multi-lab replications and gauge overstatement.
Getting Started
Ready to correct your data for spurious precision? Upload your dataset and let MAIVE analyze it for you, or run a demo using a synthetic dataset. The process is simple and provides clear, actionable results.