“Common-sense gun laws” are sold as obvious life savers. In a recent report for ReasonTV, host Justin Monticello asks a tougher question: do the studies actually show they work? His answer – grounded in interviews with statistician Aaron Brown and a deep dive into RAND Corporation’s mega-review – isn’t the sound bite many politicians want. The research base is vast, but the solid, causal findings are rare, fragile, and often misused. That doesn’t prove gun laws don’t work; it does suggest we should be humble about what today’s social science can really tell us.
Meet the Messenger

Monticello’s video (“Do Studies Show Gun Control Works?”) frames the debate not as pro- or anti-gun, but as pro-evidence. He leans on Aaron Brown – an NYU/UCSD statistician and risk expert, as well as a Bloomberg columnist – who argues that most published public-policy papers promise a level of causal certainty they simply can’t deliver. Monticello stresses that this is a methodological critique, not culture-war chest-thumping: when the data can’t support big claims, policymaking by press release becomes dangerous.
RAND’s Stunning Filter: 27,900 → 123

As Monticello reports, RAND (2020) screened 27,900 publications on gun laws’ effectiveness and judged just 123, about 0.4%, to be rigorous enough for meaningful inference. Even then, Brown tells Monticello, many of those 123 still suffer from defects: weak controls, model misspecification, undisclosed data choices, and outcomes too rare to estimate precisely. It’s a sobering baseline: the literature is not just polarized; it’s brittle.
When “Significant” Isn’t Significant

Monticello highlights Brown’s simple math on false positives. Across those 123 papers, researchers ran 722 hypothesis tests. With the conventional 5% significance threshold, you’d expect roughly 36 “statistically significant” results by chance alone – even if gun laws had no effect. The papers found 18 significant law-outcome pairs. That’s not proof of zero effect, but it’s squarely within what random noise might produce. Even more telling, only one significant finding showed a worse outcome after a law – far fewer “negative” results than chance alone would suggest, which Brown reads as a red flag for publication bias.
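Brown’s expectation is easy to check with back-of-the-envelope arithmetic. A minimal sketch using only the figures quoted above (722 tests, a 5% threshold, 18 reported hits): if no law had any effect at all, the count of “significant” results would follow a Binomial(722, 0.05) distribution.

```python
import math

# Figures quoted in the article: 722 hypothesis tests across RAND's
# 123 rigorous papers, a 5% significance threshold, 18 significant results.
n_tests, alpha, observed = 722, 0.05, 18

# If every law truly had zero effect, the number of "significant" hits
# would be Binomial(n_tests, alpha): mean n*p, variance n*p*(1-p).
expected = n_tests * alpha
std_dev = math.sqrt(n_tests * alpha * (1 - alpha))

print(f"expected by chance alone: {expected:.1f}")  # 36.1
print(f"std dev of that count:    {std_dev:.1f}")   # 5.9
print(f"actually observed:        {observed}")
```

The point of the arithmetic: 18 hits out of 722 tests is no more than chance alone would predict, which is why Brown treats the literature’s headline findings with suspicion.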
Data Deserts and Rare Events

A big theme in Monticello’s piece is that many outcomes we care most about – gun homicides, accidental child shootings, mass killings – are statistically rare. That’s morally important but methodologically brutal. Small absolute numbers, lots of confounders, and natural year-to-year volatility make it “next to impossible,” Brown says, to isolate the marginal effect of a narrow rule. Monticello cites a RAND-noted estimate that stricter child-access laws might avert two injuries across 11 states per year – a figure so small that noise from unrelated factors can swamp any signal.
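The scale problem is visible in a Poisson sketch. The annual injury total below is an illustrative assumption, not a figure from the video; only the two-injuries estimate comes from the piece.

```python
import math

# Assumed for illustration only: suppose the 11 states together record
# about 100 child-access shootings a year. Counts this rare fluctuate
# roughly like a Poisson process, with year-to-year noise ~ sqrt(mean).
assumed_annual_injuries = 100
poisson_noise = math.sqrt(assumed_annual_injuries)  # ~10 injuries of pure chance

claimed_benefit = 2  # the RAND-noted estimate cited above
print(f"random year-to-year swing: about ±{poisson_noise:.0f} injuries")
print(f"estimated policy benefit:        {claimed_benefit} injuries")
```

A two-injury signal sitting inside roughly ten injuries of annual randomness takes many years of clean data to detect – if it can be detected at all.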
The Before-After Trap

Monticello relays Brown’s point that state gun homicide rates swing by about 6% year-to-year even without new laws. Most modern laws affect only a slice of new sales, which are a tiny share of the gun stock; any immediate effect is likely a fraction of a percent – far below the ambient noise. That means many “before-after” studies simply can’t detect plausible policy effects unless they’re implausibly huge.
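The detection problem can be simulated directly. This is a toy sketch: the 6% annual swing comes from the figure quoted above, but the half-percent “true effect,” the starting rate, and the window lengths are assumptions chosen purely for illustration.

```python
import random

random.seed(0)

def random_walk(start, years, noise=0.06):
    """A homicide rate that drifts ~6% a year with no policy change at all."""
    rates, r = [], start
    for _ in range(years):
        r *= 1 + random.gauss(0, noise)
        rates.append(r)
    return rates

def naive_before_after(true_effect=0.005, years=3):
    """Does a simple before/after comparison show a drop after the 'law'?"""
    before = random_walk(5.0, years)  # arbitrary starting rate per 100,000
    after = random_walk(before[-1] * (1 - true_effect), years)
    return sum(after) < sum(before)  # equal windows, so comparing sums = means

trials = 10_000
drops = sum(naive_before_after() for _ in range(trials))
print(f"share of runs showing a 'drop': {drops / trials:.0%}")
```

With a real 0.5% effect buried in 6% ambient noise, the share of runs showing a drop hovers near a coin flip – the naive before-after comparison tells you almost nothing about whether the law worked.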
Controls That Don’t Control

Researchers try to tame the noise by adding “controls” – comparing a state to national trends or to matched states. But as Monticello notes, Brown finds those controls often add noise instead. Annual changes inside a state correlate only weakly with national changes, so “state minus U.S.” is shakier than the state series alone. Matching to “similar states” runs into the same problem: cultural, economic, and demographic differences don’t cancel neatly, and small modeling choices move big conclusions.
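The “controls add noise” claim falls out of the variance algebra of differencing two series: subtracting the national trend helps only when the state-national correlation is high enough. A sketch with made-up standard deviations (the 6%/4% figures are illustrative assumptions, not estimates from the video):

```python
import math

def differenced_sd(state_sd, national_sd, rho):
    """Std dev of (state change - national change):
    sqrt(ss^2 + sn^2 - 2*rho*ss*sn)."""
    return math.sqrt(state_sd**2 + national_sd**2
                     - 2 * rho * state_sd * national_sd)

ss, sn = 6.0, 4.0  # illustrative annual swings, in percent
print(f"raw state series sd: {ss:.1f}")
for rho in (0.1, 0.3, 0.5, 0.9):
    print(f"rho = {rho}: state-minus-national sd = {differenced_sd(ss, sn, rho):.1f}")

# Differencing beats the raw series only when rho > sn / (2 * ss) = 0.33;
# a weakly correlated "control" makes the estimate noisier, not cleaner.
```

With these numbers, a correlation of 0.1 or 0.3 leaves the differenced series noisier than the raw one – exactly the trap Brown describes.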
The Connecticut Case Study

Monticello scrutinizes a widely cited paper (touted by national figures) claiming a 40% drop in Connecticut gun murders after a 1995 permit-to-purchase law – compared not to Connecticut itself or all other states, but to a “synthetic Connecticut” heavily weighted toward Rhode Island. Brown shows the effect rides largely on a short-term Rhode Island blip around 1999–2003. Worse, by 2006 the real Connecticut rate had surpassed the synthetic benchmark – yet the authors, publishing in 2015, didn’t include that later data. Monticello’s point isn’t that the law failed; it’s that modeling choices can create dramatic, but fragile, narratives.
Assault-Style Bans and Mass Shootings

If any policy feels urgent, it’s bans on “assault weapons” or large-capacity magazines. Monticello reports that RAND’s updated meta-analysis finds the evidence inconclusive – for familiar reasons: definitions vary (what counts as “assault weapon”? what is a “mass shooting”?), the events are extremely rare, and most gun homicides don’t involve those platforms. Layer on a 10-year federal ban (1994–2004) bouncing through a turbulent crime era, and the data won’t yield clean answers. That’s not a defense of any rifle; it’s a warning against overpromising what the statistics can show.
Do Guns Make You Safer – or Not?

Monticello revisits a famous 1993 New England Journal of Medicine study often paraphrased as “guns in the home raise homicide risk.” Brown flags basic problems: many included murders weren’t by firearm; in gun murders, the gun’s ownership (victim’s vs. someone else’s) often wasn’t established.
More fundamentally, Monticello emphasizes Brown’s point that safety is highly individual. A trained owner with a safe in a high-crime area may be safer; a careless owner in a low-crime area may be riskier. Social science wants a single average answer; reality looks more like “it depends.” Notably, the same NEJM paper found risk multipliers like living alone or being a renter that exceeded the gun variable – details politicians rarely quote.
The Hidden Costs of “Try It and See”

Monticello also reports on costs that rarely make headlines. Brown warns that laws built on weak evidence can criminalize otherwise law-abiding people, expand prosecutorial power, and deepen racial and socioeconomic disparities in the justice system. Crackdowns can also fuel black markets and homemade “ghost guns,” shifting violence rather than reducing it. None of this proves a law is bad; it means any claimed benefits must be weighed against real frictions we’re imposing – especially on marginalized communities.
Why Headlines Love the Worst Studies

According to Monticello, the best papers identified by RAND, with their careful caveats and modest effect sizes, didn’t drive legislation or cable-news chyrons. The flashiest claims from methodologically weak studies did. That dynamic – politics demanding certainty while the data offer ambiguity – encourages cherry-picking time windows, outcomes, and geographies. It’s not unique to gun research, Brown tells him; it’s endemic to social-science fields where the signal is faint and incentives reward big, simple, definitive “findings.”
A Better Research Agenda (and a Public-Health Pivot)

Monticello notes that CDC Director Rochelle Walensky has called for treating gun violence as a public-health challenge, with Congress allocating fresh funds for research. Brown’s view, as Monticello presents it, is that we need deeper basic science on why violence happens – psychological, social, and developmental mechanisms – before we can credibly test the marginal effects of discrete rules. My take: that’s the right path. Focus on people and places at highest risk; invest in credible violence-interruption programs aimed at likely shooters and victims, focused deterrence, trauma care, and environmental tweaks (lighting, blight removal) where randomized or quasi-experimental evidence is stronger. Then, if we pursue regulations, we can nest them in interventions with measurable, near-term effects.
Pump the Brakes on Absolutism

Monticello’s ReasonTV report – anchored by Aaron Brown’s statistical critique and RAND’s sweeping review – doesn’t say gun control can’t work. It says the current evidence base can’t prove that it does, at least not with the sweeping certainty so often claimed. That should make both sides pump the brakes on absolutism. Good policy rests on valid measurement, clear tradeoffs, and a willingness to update when the data finally can speak. Until then, humility isn’t weakness – it’s the only honest starting point.

Raised in a small Arizona town, Kevin grew up surrounded by rugged desert landscapes and a family of hunters. His background in competitive shooting and firearms training has made him an authority on self-defense and gun safety. A certified firearms instructor, Kevin teaches others how to properly handle and maintain their weapons, whether for hunting, home defense, or survival situations. His writing focuses on responsible gun ownership, marksmanship, and the role of firearms in personal preparedness.