Measuring Impact: Metrics Localities Should Use to Tell If Opioid Settlement Spending Is Saving Lives

clinical
2026-02-12
10 min read

Propose a standard metric set—overdose rates, MAT capacity, naloxone reach, waiting lists—to show opioid settlement funds are saving lives.

Why local leaders must measure more than dollars: the accountability gap in opioid settlement spending

Communities across the United States have received a historic influx of opioid settlement funds: more than $50 billion is on the table. Yet counties and states struggle with the same questions: Are these dollars saving lives? And how will residents, clinicians, and funders know? The pain point is clear: without standardized, timely, and actionable metrics, well-intentioned spending becomes politically vulnerable, operationally opaque, and clinically ineffective.

The problem in 2026: gaps in reporting, attribution, and early signals

By 2026 the debate has hardened. Tools like the 2025 KFF/Johns Hopkins/Shatterproof tracker revealed widely divergent uses of settlement cash and strong public concern that some funds are diverted from clinical care into law enforcement and unrelated municipal needs. At the same time, public health surveillance systems are strained by a fentanyl-driven surge in synthetic opioid deaths and by uneven Medicaid coverage after recent budget adjustments in several states.

Those realities make it imperative that localities adopt a standardized measurement package that captures both outcome (what ultimately matters) and process (the early signals that predict outcomes). Below I propose a concise, implementable framework that jurisdictions can use to demonstrate accountable, evidence-aligned spending of opioid settlement funds.

Principles guiding the metric set

  • Actionability: Every metric should guide an operational decision—hiring, service expansion, procurement, or policy change.
  • Timeliness: Prioritize indicators available monthly or quarterly to detect trends quickly.
  • Attribution-aware: Include process measures that change before outcomes and analytic guidance (see section on evaluation design).
  • Equity and stratification: Report by race/ethnicity, age, sex, and ZIP code to surface disparities.
  • Transparency: Public dashboards with raw counts and methodology notes reduce skepticism and enable independent review.

The standardized metric set: minimal and advanced packages

I recommend a two-tier approach: a Minimal Reporting Package for immediate adoption, and an Advanced Evaluation Package for jurisdictions with analytic capacity or external evaluators.

Minimal Reporting Package (monthly/quarterly)

These are near-universal measures that local public health departments can collect and publish quickly.

  • All-cause and opioid-specific overdose deaths
    • Definition: Number of deaths with underlying cause of death coded as drug overdose; opioid-specific when opioids are listed as contributing cause.
    • Numerator/denominator: Count per month/quarter; rate per 100,000 residents (annualized).
    • Data source: Vital records/medical examiner; reporting lag caveat—use provisional death counts for timeliness.
  • Non-fatal overdose emergency visits and EMS naloxone administrations
    • Why: Early indicator of trend changes long before mortality moves.
    • Data sources: Hospital ED syndromic surveillance, EMS run reports.
  • MAT (medication for opioid use disorder) capacity and utilization
    • Measures: Number of providers prescribing buprenorphine/naltrexone/methadone; number of open treatment slots; new enrollments; retention at 30/90/180 days.
    • Data sources: Clinic reporting, state prescription drug monitoring program (PDMP) adjuncts, OTP registries.
  • Naloxone distribution and reversals
    • Measures: Kits distributed to community/organizations; reported reversals (where tracked); pharmacy naloxone dispenses.
    • Data sources: Public health distribution logs, pharmacy sales data.
  • Waiting lists and access delays
    • Measures: Number on waitlists for SUD treatment (by modality); median wait time for first appointment; percent offered same-day intake.
  • Harm reduction reach
    • Measures: Syringe services clients served; fentanyl test strip distribution; peer outreach contacts.
  • Spending alignment and fidelity
    • Measures: Percent of settlement funds spent on evidence-based clinical services (MAT, naloxone, harm reduction, SUD treatment) vs. other categories (law enforcement, general budget).
    • Why: Direct transparency on whether funds match intended public health goals.
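The rate arithmetic in the mortality metric above can be sketched as follows; the function name and the example figures are illustrative, not drawn from any real county.

```python
def annualized_rate_per_100k(count: int, population: int, months: int) -> float:
    """Annualize a monthly or quarterly overdose count and express it
    as a rate per 100,000 residents, as the metric definitions specify."""
    if population <= 0 or months <= 0:
        raise ValueError("population and months must be positive")
    annualized_count = count * (12 / months)
    return annualized_count * 100_000 / population

# Illustrative: 14 overdose deaths in one quarter, county of 250,000 residents
rate = annualized_rate_per_100k(14, 250_000, months=3)  # 22.4 per 100k, annualized
```

Publishing both the raw count and the annualized rate, as this sketch separates them, lets readers verify the arithmetic from the dashboard's methodology notes.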

Advanced Evaluation Package (quarterly/annual)

For communities tracking impact rigorously or seeking to demonstrate causal effects, add these measures and analytic elements.

  • Interrupted time series (ITS) and synthetic control outcomes
    • Use ITS to test whether changes in outcomes (overdose ED visits, deaths) coincide with funded interventions; synthetic controls help create counterfactuals when multiple jurisdictions are available for comparison.
  • Cost per outcome
    • Measures: Cost per naloxone kit distributed; incremental cost per MAT enrollee retained 90 days; modeled cost per life-year gained where feasible.
  • Retention and long-term outcomes
    • Measures: 12-month retention in treatment; opioid-related hospital admissions avoided; quality-of-life proxies from validated short surveys.
  • Equity impact measures
    • Measures: Disparities in access and outcomes across race/ethnicity, rural vs urban, and socioeconomic strata; trend decomposition to assess whether disparities narrow over time.
  • Program fidelity and implementation measures (RE-AIM inspired)
    • Reach, Effectiveness, Adoption, Implementation fidelity, Maintenance. Track adherence to evidence-based protocols (e.g., low-barrier buprenorphine initiation, take-home naloxone at discharge).
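The cost-per-outcome arithmetic above can be sketched like this; the function name, the baseline-retention assumption, and all dollar figures are hypothetical.

```python
def incremental_cost_per_retained_enrollee(program_cost: float,
                                           retained_90d: int,
                                           baseline_retained_90d: int) -> float:
    """Incremental cost per additional MAT enrollee retained at 90 days.
    baseline_retained_90d estimates how many would have been retained
    without the settlement-funded expansion."""
    additional = retained_90d - baseline_retained_90d
    if additional <= 0:
        raise ValueError("no incremental retention to attribute costs to")
    return program_cost / additional

# Illustrative: a $300,000 expansion, 150 retained vs. 50 expected at baseline
cost = incremental_cost_per_retained_enrollee(300_000, 150, 50)  # 3000.0 per added enrollee
```

Reporting the baseline assumption alongside the result matters: dividing by total enrollees instead of incremental enrollees would understate the true cost per outcome attributable to the funding.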

Defining each metric: practical specifications

Standardized definitions avoid apples-to-oranges comparisons. For each metric publish: precise definition, numerator and denominator, data source, reporting cadence, and caveats. Below are concise templates localities can copy into public dashboards.

Example metric template: MAT capacity

  • Definition: Number of treatment slots physically available in clinics and OTPs for medication initiation within 7 days.
  • Numerator: Sum of open slots reported by MAT providers at the end of the month.
  • Denominator: County population aged 15+ (for rate calculations) or demand-estimated target population.
  • Data source: Monthly provider self-report to health department; cross-check with claims where possible.
  • Frequency: Monthly.
  • Use: Operational planning (staffing, hours), resource allocation from settlement funds.

Example metric template: Waiting lists

  • Definition: Number of unique individuals on formal waitlists for SUD treatment at any program in the jurisdiction.
  • Numerator: Count from intake management systems at the end of the reporting period.
  • Denominator: Not applicable; report as count and median wait time.
  • Frequency: Daily snapshot, reported monthly.
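The waitlist snapshot can be computed from intake records along these lines; the record schema, function name, and deduplication rule are assumptions for illustration.

```python
from datetime import date
from statistics import median

def waitlist_snapshot(entries: list[tuple[str, date]], as_of: date) -> tuple[int, float]:
    """Count unique individuals waiting and their median wait in days.
    entries holds (person_id, date_added) rows pulled from intake systems;
    a person listed at multiple programs is counted once, from their earliest listing."""
    earliest: dict[str, date] = {}
    for person_id, added in entries:
        if person_id not in earliest or added < earliest[person_id]:
            earliest[person_id] = added
    waits = [(as_of - d).days for d in earliest.values()]
    return len(waits), (median(waits) if waits else 0.0)
```

Deduplicating across programs is the step most often missed; without it, jurisdictions with multiple providers overstate demand and understate median waits.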

Analytic guidance: making causal claims responsibly

Settlement funds arrive into complex systems. Demonstrating that spending “saved lives” requires robust design. Localities should avoid simple pre-post claims and instead:

  • Use multiple indicators: Pair early process metrics (e.g., naloxone distribution) with outcome trends (ED visits, deaths).
  • Employ ITS or synthetic controls: When possible, use ITS to assess changes coinciding with program rollouts, and synthetic controls to compare against similar jurisdictions not implementing the program; reproducible analytic workflows maintained by evaluation teams or centralized hubs make these analyses auditable.
  • Monitor leading indicators: If MAT capacity and naloxone kits increase substantially within 3–6 months, declines in EMS naloxone runs or ED overdoses may follow—these are plausible intermediate outcomes supporting attribution.
  • Document implementation fidelity: Low fidelity explains null results and protects programs from unfair judgment when outcomes lag.
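A minimal segmented-regression ITS, the design named in the guidance above, can be sketched with NumPy least squares. This is a teaching sketch: a production evaluation would use statsmodels or R and adjust for seasonality and autocorrelation.

```python
import numpy as np

def fit_its(counts, intervention_month: int):
    """Segmented regression for an interrupted time series:
    y = b0 + b1*time + b2*post + b3*time_since_intervention.
    b2 estimates the immediate level change at rollout; b3 the slope change after it."""
    y = np.asarray(counts, dtype=float)
    t = np.arange(len(y), dtype=float)
    post = (t >= intervention_month).astype(float)
    since = np.where(post > 0, t - intervention_month, 0.0)
    X = np.column_stack([np.ones_like(t), t, post, since])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, pre-slope, level change, slope change]
```

On noiseless synthetic data, say a flat pre-period of 100 monthly ED visits dropping to 80 at month 12, the fitted level-change coefficient recovers the drop exactly, which is a useful sanity check before running the design on real surveillance data.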

Data sources and practical barriers (and how to solve them)

Common barriers include lagging vital statistics, fragmented data systems, privacy rules, and uneven analytic staff. Recommended solutions:

  • Use provisional and syndromic data for timeliness: Publish provisional death counts with clear caveats; combine with ED syndromic surveillance & EMS data to detect trends faster.
  • Standardize provider reporting templates: Simple monthly Excel forms or secure web forms and micro-apps for MAT slots, waitlists, and naloxone distribution reduce reporting burden.
  • Create data-sharing MOUs: Establish legal agreements with hospitals, OTPs, EMS, and pharmacies to receive de-identified aggregate counts; pair MOUs with secure authorization and auditing tools.
  • Invest a small percentage of settlement funds in analytics: Allocating 1–3% of local settlement receipts to independent evaluation and dashboard development yields high accountability value; small dedicated teams are often more effective than unfunded ad hoc efforts.
  • Protect privacy: Suppress small cell sizes and share aggregated metrics; use trusted third-party evaluators if necessary.
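Small-cell suppression before publication can be as simple as the sketch below. The threshold of 10 and the "<10" display convention are assumptions; many health departments suppress nonzero counts of 1–9, and local rules should govern.

```python
def suppress_small_cells(stratified_counts: dict[str, int], threshold: int = 10) -> dict[str, str]:
    """Mask nonzero counts below the threshold so stratified dashboard
    cells cannot be used to identify individuals; zeros are published as-is."""
    return {
        group: (str(n) if n == 0 or n >= threshold else f"<{threshold}")
        for group, n in stratified_counts.items()
    }

# Illustrative: ZIP-level nonfatal overdose counts before publication
published = suppress_small_cells({"ZIP-001": 42, "ZIP-002": 3, "ZIP-003": 0})
```

Note that suppression interacts with stratification: the finer the breakdowns required for equity reporting, the more cells fall below the threshold, which is one argument for pairing public dashboards with a trusted evaluator who can analyze unsuppressed data.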

Equity-first measurement: who benefits matters

A standardized metric set must surface disparities. For every outcome and key process metric, require stratification by race/ethnicity, age group, sex, and ZIP code. Report both absolute rates and relative differences, and include qualitative community input to explain patterns—especially in communities with historical underinvestment in SUD care. Community-first clinic and outreach design playbooks can inform equitable rollout strategies.

Reporting cadence and public transparency standards

To build public trust, localities should adopt this minimum schedule:

  • Monthly: Naloxone distribution, MAT capacity and slots, ED visits for nonfatal overdose, EMS naloxone runs, waiting list counts.
  • Quarterly: Provisional overdose deaths, program spending by category, number of treatment enrollees and 30/90-day retention.
  • Annually: Finalized mortality files, comprehensive evaluation including ITS or synthetic control analysis, cost-per-outcome calculations, equity impact report.

Dashboards should include clear methodology notes, download links for CSV data, and a short plain-language executive summary for community stakeholders and reporters. Consider cost-effective hosting options and modern serverless stacks when publishing datasets.

Practical example: a hypothetical 12-month timeline

Consider a mid-sized county deploying settlement funds to expand low-barrier buprenorphine, triple naloxone distribution, and scale harm reduction.

  1. Month 0–3: Baseline dashboard published—vital stats, ED visits, EMS runs, MAT slots, waitlists.
  2. Month 4–6: Rapid expansion—monthly reporting shows MAT slots +40% and naloxone kits +300%; EMS naloxone runs plateau.
  3. Month 7–12: ED nonfatal overdose visits decline 8% and provisional overdose deaths begin to fall compared with synthetic control; early ITS suggests an inflection aligned with program start. Cost-per-new-MAT-enrollee estimated. Equity dashboard reveals uneven access in two ZIP codes—targeted outreach launched.

This sequence shows why process metrics (MAT slots, naloxone) are essential early monitors and why equity stratification enabled mid-course correction.

Common pitfalls and how to avoid them

  • Pitfall: Reporting only dollars spent. Fix: Pair spending categories with service and outcome metrics to show impact per dollar.
  • Pitfall: Using only annual mortality data. Fix: Publish monthly and quarterly leading indicators and provisional death counts.
  • Pitfall: Overclaiming causality. Fix: Use ITS or synthetic controls and document implementation timelines; consider partnering with centralized analytics hubs that can run reproducible designs at scale.
  • Pitfall: Ignoring equity. Fix: Mandatory stratification and community advisory review of reports.

"Transparent, timely metrics transformed early skepticism into public support in several communities during 2024–2025. The same tools scaled up in 2026 and are now the baseline expectation for accountable spending." — Local public health director (paraphrased)

Policy and funding recommendations for state and federal partners

Localities cannot do this alone. State and federal agencies should:

  • Provide a mandatory minimum reporting template for jurisdictions receiving settlement funds, aligned with the Minimal Reporting Package above.
  • Fund centralized analytics hubs that small counties can use to produce ITS and synthetic control analyses at low cost; these hubs should combine secure hosting, reproducible code, and small dedicated teams.
  • Require settlement spending transparency with penalties for noncompliance and incentives for evidence-based allocation.

Actionable checklist for local leaders (first 90 days)

  1. Adopt the Minimal Reporting Package and publish a public dashboard baseline within 30 days.
  2. Allocate 1–3% of settlement receipts to independent evaluation and dashboard maintenance.
  3. Sign MOUs with hospitals, EMS, OTPs, pharmacies, and coroner’s offices for routine aggregate data sharing; pair legal agreements with secure micro-app reporting tools to reduce burden.
  4. Create a community advisory board including people with lived experience to review metrics and priorities monthly.
  5. Set explicit, time-bound targets (e.g., 30% increase in MAT slots within 6 months; naloxone kits per 1,000 population target) and report progress publicly.

What success looks like in 2026

Success is not a single metric. It is a transparent portfolio of measures showing that:

  • Access expanded (MAT capacity and reduced wait times),
  • Harm reduction reach increased (naloxone, fentanyl test strips),
  • Early clinical indicators trended favorably (ED visits, EMS runs), and
  • Over time, a sustained and equitable reduction in opioid-related deaths is demonstrable using robust analytic methods.

Final takeaways: metrics are the mortar between funding and lives saved

Opioid settlement dollars offer an unprecedented opportunity to change the trajectory of the overdose crisis—but only if localities measure wisely. A standardized, tiered metric set gives communities the ability to show progress, course-correct in real time, and defend the public health intent of settlement funds. In 2026 the public and stakeholders expect transparency, equity-focused reporting, and causal evidence that investments are working. The tools exist—what’s missing in many places is implementation discipline and a small but critical investment in analytics.

Call to action

Adopt this standardized metric set for your jurisdiction this quarter. Publish a baseline dashboard, allocate evaluation funds, and convene a community advisory board. To help, we have prepared reproducible reporting templates and an analytic starter kit you can download and adapt for local use. If your county or state wants a peer review of its proposed metrics or an independent synthetic control analysis, contact our evaluation team for a consultation.
