US Pharmacovigilance Systems Are Structurally Blind to Differential Safety Signals Across Populations
The US post-market surveillance system for drugs and medical devices — anchored by FDA's MedWatch adverse event reporting system — was not designed to detect whether products harm some populations more than others. MedWatch reports do not systematically capture social determinants of health, geographic variables, social vulnerability indices, or granular demographic data beyond basic race/ethnicity categories. Active surveillance systems (Sentinel) use insurance claims data that capture diagnoses and billing codes but not the social, environmental, and community-level variables needed to detect equity-relevant safety signals. As a result, differential safety problems — like pulse oximeters being less accurate on darker skin tones, or clinical algorithms deprioritizing Black patients — can persist for years or decades before detection.
Health inequities in drug and device safety are not hypothetical. Pulse oximeters overestimated blood oxygen saturation in patients with darker skin for decades before systematic research documented the problem. The SOFA score used for COVID-19 ICU triage deprioritized Black patients by approximately 15% because serum creatinine, a biomarker whose levels correlate with race, was used as a severity indicator. The NASEM report found that passive adverse event reporting generates data that is "frequently inaccurate, untimely, unverified, and/or biased," skewed toward acute events in the populations most likely to report (English-speaking, health-literate, connected to care). A review of 220 university exclusive licensing agreements found that access-oriented language was "not widely adopted," limiting equitable distribution of publicly funded innovations.
MedWatch voluntary reporting is the backbone of US pharmacovigilance, but it relies on clinicians and patients to self-report, producing data biased toward populations with healthcare access and health literacy. FDA's Sentinel system uses administrative claims data for active surveillance, enabling faster signal detection than voluntary reporting, but it captures billing codes and diagnoses, not the social, environmental, and community-level variables needed to detect population-specific harm. Post-market studies mandated by FDA (postmarketing requirements and commitments, PMR/PMC) typically enroll the same homogeneous populations as pre-market trials. The pharmacovigilance infrastructure was designed in an era when "safety" meant detecting whether a product was harmful on average, not whether it was differentially harmful across populations.
Closing this gap requires three capabilities. First, pharmacovigilance data infrastructure that captures social determinants (housing, income, occupation, environmental exposures, geographic context) alongside clinical adverse events, either through expanded reporting fields or through linkage with existing social determinants databases. Second, algorithmic approaches that can detect differential safety signals in existing claims and EHR databases despite the absence of explicit equity variables, using proxy measures and geographic correlates. Third, mandatory disaggregated reporting requirements for post-market surveillance that go beyond basic demographic categories to capture the variables needed for equity-relevant analysis.
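One way to make the algorithmic approach concrete is subgroup disproportionality analysis: compute a reporting odds ratio (ROR) for a drug-event pair separately within each proxy-defined stratum and flag strata where the signal diverges. The sketch below uses only the standard library; the stratum names and every count are invented for illustration, not drawn from FAERS.

```python
from math import exp, log, sqrt

def reporting_odds_ratio(a, b, c, d):
    """ROR for a drug-event pair from a 2x2 contingency table of reports.

    a: reports with the drug AND the event   b: drug, other events
    c: event, other drugs                    d: other drugs, other events
    Returns (ROR, lower, upper) with a 95% confidence interval.
    """
    ror = (a / b) / (c / d)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(ROR)
    return ror, exp(log(ror) - 1.96 * se), exp(log(ror) + 1.96 * se)

# Hypothetical counts for one drug-event pair, stratified by a proxy
# subgroup (e.g. high- vs low-vulnerability zip codes). All counts invented.
strata = {
    "low_SVI":  (40, 960, 400, 98600),
    "high_SVI": (60, 140, 380, 19420),
}
for name, counts in strata.items():
    ror, lo, hi = reporting_odds_ratio(*counts)
    print(f"{name}: ROR={ror:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A large gap between strata with non-overlapping confidence intervals is exactly the kind of differential signal an average-only analysis would miss: pooling the two strata here would dilute the high-vulnerability signal.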
A student team could analyze existing FAERS (FDA Adverse Event Reporting System) data for a specific drug class to determine what equity-relevant signals could be extracted from current data fields and what additional variables would be needed to detect differential harm. Alternatively, teams could prototype an algorithmic approach to infer population-level differential safety signals from proxy variables (zip code, insurance type, facility characteristics) in publicly available claims data or hospital discharge databases. Relevant disciplines: health informatics, epidemiology, data science, bioethics, public health.
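The proxy-variable prototype described above can be sketched in a few lines: link records that expose only a zip code to an area-level vulnerability score and compare adverse-event rates across strata. Every record and score below is synthetic; a real analysis would join discharge data against the published CDC/ATSDR Social Vulnerability Index.

```python
from collections import defaultdict

# Synthetic discharge-style records: the only fields used are ones that
# public claims/discharge datasets typically expose. All values invented.
records = [
    {"zip": "30310", "insurance": "medicaid",   "adverse_event": True},
    {"zip": "30310", "insurance": "medicaid",   "adverse_event": False},
    {"zip": "30310", "insurance": "commercial", "adverse_event": True},
    {"zip": "60614", "insurance": "commercial", "adverse_event": False},
    {"zip": "60614", "insurance": "commercial", "adverse_event": False},
    {"zip": "60614", "insurance": "medicare",   "adverse_event": True},
]

# Hypothetical area-level vulnerability scores keyed by zip code
# (0 = least vulnerable, 1 = most vulnerable).
svi_by_zip = {"30310": 0.91, "60614": 0.18}

def event_rate_by_stratum(records, key):
    """Adverse-event rate within each stratum defined by key(record)."""
    counts = defaultdict(lambda: [0, 0])  # stratum -> [events, total]
    for r in records:
        c = counts[key(r)]
        c[1] += 1
        c[0] += r["adverse_event"]
    return {k: events / total for k, (events, total) in counts.items()}

rates = event_rate_by_stratum(
    records,
    lambda r: "high_SVI" if svi_by_zip[r["zip"]] >= 0.5 else "low_SVI",
)
print(rates)  # a large gap between strata is a candidate differential signal
```

The same `event_rate_by_stratum` helper works for any proxy, e.g. `lambda r: r["insurance"]`, which is one way a student team could rank candidate proxies by how much differential signal each one surfaces.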
Distinct from `health-device-real-world-evidence-gap` (which covers general lack of real-world evidence for device performance — a data quantity problem) and `health-pulse-oximeter-skin-tone-bias` (which covers one specific device's calibration bias). This brief addresses the structural inability of the entire pharmacovigilance system to detect differential harm across populations — a system-level design flaw in the surveillance infrastructure, not a device-specific or data-quantity problem. Related to `digital-structured-missingness-ml-bias` (which covers how missing data in ML training creates biased models — a parallel problem in a different domain). Source-bias note: NASEM frames this as requiring "equitable innovation" across the product lifecycle; the binding constraints are genuinely data-structural (surveillance infrastructure doesn't collect the right variables) and design-related (passive reporting inherently underrepresents vulnerable populations), not primarily requiring new institutional coordination.
National Academies of Sciences, Engineering, and Medicine, "Toward Equitable Innovation in Health and Medicine: A Framework," 2023, https://nap.nationalacademies.org/catalog/27184; "Ending Unequal Treatment: Strategies to Achieve Equitable Health Care and Optimal Health for All," 2024, https://www.nationalacademies.org/read/27820/chapter/2; accessed 2026-02-20