Stimulated reporting, Weber effects, and what happens to disproportionality after media coverage
If you have spent any time working with spontaneous reporting data, you have probably had the experience of seeing a disproportionality signal spike and wondering: is this a real safety concern, or did something happen in the news?
That question is more important than it might seem, because the statistical methods we use for signal detection — PRR, ROR, EBGM, IC — all assume, at least implicitly, that reporting patterns reflect something about the underlying pharmacology. When external factors drive reporting up or down independently of actual drug risk, those methods can be led astray.[1,2]
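The frequentist versions of these measures are simple enough to compute directly from the standard 2×2 contingency table. A minimal sketch, using illustrative counts rather than real database figures:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 contingency table.

    a: reports of the event of interest with the drug of interest
    b: reports of all other events with the drug
    c: reports of the event with all other drugs
    d: reports of all other events with all other drugs
    """
    return (a / (a + b)) / (c / (c + d))

def ror(a, b, c, d):
    """Reporting odds ratio from the same table."""
    return (a * d) / (b * c)

# Illustrative counts only, not real FAERS data
print(round(prr(20, 980, 100, 98900), 1))  # → 19.8
print(round(ror(20, 980, 100, 98900), 1))  # → 20.2
```

EBGM and IC are Bayesian shrinkage variants of the same idea and need more machinery, but they start from the same table — which is why they share the same vulnerability to distorted counts.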
Three of the most important distortions in spontaneous reporting data are stimulated reporting, the notoriety bias, and the Weber effect. They are related but distinct, and each creates its own interpretive challenges.
What stimulated reporting looks like
Stimulated reporting occurs when something external to the drug’s actual safety profile causes a surge in adverse event reports. The trigger is usually media coverage, a regulatory safety communication, a high-profile lawsuit, a Dear Healthcare Professional letter, or public discussion on social media. The mechanism is straightforward: when people hear about a potential risk, they become more likely to report events they might otherwise have ignored or not connected to the drug.[2,3]
A key study by Hoffman and colleagues examined the impact of FDA-issued safety alerts on reporting in the FDA Adverse Event Reporting System (FAERS). They found that alerts were associated with substantial increases in reporting for the alerted drug–event combinations, confirming that stimulated reporting is not just a theoretical concern but a measurable phenomenon in major pharmacovigilance databases.[3]
The problem for signal detection is that this increase in reporting is not random. It is concentrated on specific drug–event pairs, which means it inflates the numerator of the disproportionality calculation for those pairs. The result is a signal that looks stronger than the underlying data would justify if reporting had been unaffected by the external stimulus.
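To see why a concentrated surge matters, compare a PRR computed on baseline counts with the same calculation after a burst of stimulated reports lands entirely in the cell for the watched drug–event pair. A toy illustration with invented numbers:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 contingency table."""
    return (a / (a + b)) / (c / (c + d))

# Invented baseline counts for one drug-event pair
baseline = prr(30, 4970, 500, 994500)

# A media-driven surge adds 90 reports, all for the watched pair:
# only cell "a" grows, so the disproportionality measure is inflated
stimulated = prr(30 + 90, 4970, 500, 994500)

print(round(baseline, 1), round(stimulated, 1))  # → 11.9 46.9
```

The roughly fourfold jump comes entirely from reporting behaviour; nothing about the drug or its risk has changed.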
The notoriety bias: a more specific problem
The notoriety bias is a specific form of stimulated reporting. It describes the situation where a safety alert or regulatory action itself causes an increase in disproportionality, not because the drug has become more dangerous but because reporting of that particular drug–event combination has been selectively amplified.[1]
Pariente and colleagues studied four safety alerts in the French national pharmacovigilance database and found striking results. For example, before a safety alert about valvulopathies with pergolide, no cases had been reported. After the alert, 63 cases were reported, including five that had actually occurred before the alert but were only submitted afterward. The resulting reporting odds ratio was enormous — not because the risk had suddenly changed, but because the reporting behaviour had.[1]
This is a subtle but important distinction. The notoriety bias does not just add noise. It adds systematic, directional bias toward exactly the drug–event combinations that regulators and the public are already watching. That means it can create feedback loops: a signal is detected, a warning is issued, reporting increases, the signal appears to strengthen, which may trigger further regulatory attention.
Interestingly, a subsequent study using the FAERS database found more mixed results, with some safety alerts having a strong impact on reporting and others having little or no measurable effect.[4] That inconsistency itself is informative. It suggests that the magnitude of the notoriety bias depends on factors like the seriousness of the event, the size of the exposed population, the intensity of media coverage, and the nature of the regulatory communication.
The Weber effect: does product age change reporting?
The Weber effect is an older and more debated concept. First described in the context of nonsteroidal anti-inflammatory drugs in the 1980s, it posits that adverse event reporting for a new drug peaks around the second year after approval and then steadily declines.[5] The intuition is that a new drug attracts heightened attention from prescribers and regulators, leading to higher reporting rates early on. As the drug becomes familiar, reporting tapers off even if the underlying risk remains unchanged.
This matters for disproportionality because it means a drug’s position in its lifecycle can affect how its signals look. A drug in its first two years on the market might generate inflated disproportionality scores simply because reporting is higher than it will be later, not because the drug is more dangerous than alternatives.
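One way to probe for a Weber-type pattern is to bucket a drug's reports by year since approval and inspect the shape of the resulting curve. A rough sketch, with hypothetical receipt dates and an assumed approval date:

```python
from collections import Counter
from datetime import date

# All dates here are hypothetical, for illustration only
approval = date(2015, 6, 1)
receipt_dates = [
    date(2016, 2, 1), date(2016, 9, 3), date(2017, 1, 15),
    date(2017, 5, 20), date(2017, 11, 2), date(2018, 8, 8),
]

def year_since_approval(received, approval):
    """Whole years elapsed between approval and report receipt."""
    return (received - approval).days // 365

counts = Counter(year_since_approval(d, approval) for d in receipt_dates)
print(dict(sorted(counts.items())))  # → {0: 1, 1: 3, 2: 1, 3: 1}
```

A classic Weber curve would peak in year 1 (the second year on the market) and decline steadily thereafter; whether real data actually follow that shape is exactly what the studies below tested.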
However, the evidence for the Weber effect is less consistent than often assumed. A study of 15 oncology drugs in FAERS found no consistent pattern matching the classic Weber curve. Most drugs did not show a second-year peak followed by a steady decline. The authors concluded that the Weber effect, at least in its original formulation, does not appear to hold for contemporary oncology drugs.[6]
That finding does not mean product age is irrelevant to reporting patterns. It means the relationship is probably more complex and more drug-class-specific than a single universal curve. Reporting dynamics depend on prescribing volume, regulatory scrutiny, the severity profile of the drug’s adverse events, and whether the drug treats a condition that itself attracts reporting attention.
What this means for interpreting disproportionality
These biases have direct consequences for anyone using disproportionality analysis in practice. A comprehensive review of disproportionality methodology identifies several external factors that can distort reporting, including stimulated reporting from media attention, regulatory warnings, product age, and active pharmacovigilance projects. The review emphasises that these factors should be considered when interpreting disproportionality results and advocates for sensitivity analyses and protocol pre-registration to guard against selective outcome reporting.[2]
In practical terms, this means that a disproportionality signal cannot be interpreted in isolation from its temporal context. Questions that should accompany any signal evaluation include: has there been recent media coverage of this drug or this adverse event? Has a regulatory authority issued a safety communication? Is the drug early in its post-marketing life? Has there been a litigation event, a product recall, or a high-profile case that could have stimulated reporting?
These are not statistical questions. They are contextual questions that require knowledge of the broader pharmacovigilance landscape. And they are one reason why regulatory guidance consistently treats disproportionality analysis as a hypothesis-generating step, not a stand-alone answer.[7]
Can we adjust for these biases?
Some methodological approaches have been proposed. Time-stratified analyses can help separate genuine changes in risk from changes in reporting behaviour. Comparing disproportionality before and after a known stimulating event can quantify the impact of the bias. Restricting analysis to reports submitted before a safety alert can provide a less contaminated baseline.
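The last of these — restricting to pre-alert reports — is straightforward to sketch. Assuming simple report records and a known alert date (all names and dates here are hypothetical):

```python
from datetime import date

# Hypothetical report records, each: (receipt date, drug, event).
# A real analysis would pull these from FAERS or a national database.
reports = [
    (date(2006, 3, 1),   "drug_x", "valvulopathy"),
    (date(2006, 7, 12),  "drug_x", "nausea"),
    (date(2006, 9, 4),   "drug_y", "valvulopathy"),
    (date(2006, 11, 20), "drug_y", "nausea"),
    (date(2007, 2, 2),   "drug_x", "valvulopathy"),  # post-alert
]

alert_date = date(2007, 1, 1)  # assumed date of the safety communication

def table_before(reports, cutoff, drug, event):
    """2x2 contingency counts using only reports received before the cutoff."""
    a = b = c = d = 0
    for received, r_drug, r_event in reports:
        if received >= cutoff:
            continue  # drop potentially stimulated reports
        if r_drug == drug:
            if r_event == event:
                a += 1
            else:
                b += 1
        else:
            if r_event == event:
                c += 1
            else:
                d += 1
    return a, b, c, d

print(table_before(reports, alert_date, "drug_x", "valvulopathy"))
# → (1, 1, 1, 1)
```

Running the disproportionality calculation on this pre-alert table, and again on the full table, quantifies how much of the signal is attributable to reports received after the stimulus.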
But none of these adjustments is perfect. The fundamental challenge is that stimulated reporting, the notoriety bias, and the Weber effect are all rooted in human behaviour, and human behaviour does not follow clean statistical models. A reporter who submits a case after hearing about a safety alert may be reporting a genuine adverse reaction that they would not have submitted otherwise, or they may be retroactively attributing an event to a drug because the alert made the connection seem plausible. Separating these scenarios from the data alone is often impossible.
Why this matters beyond methodology
There is also a broader public health dimension. If safety signals are amplified by media coverage rather than by genuine risk, there is a danger of regulatory overreaction. But if genuine signals are dismissed as artefacts of stimulated reporting, real risks may be ignored.
Getting this balance right is one of the hardest practical problems in pharmacovigilance. It requires not just statistical literacy but judgment, context, and an honest acknowledgement that the data we rely on are shaped by forces that have nothing to do with pharmacology.
The real craft, as always, is knowing how to move from disproportionality to judgment. Stimulated reporting, the notoriety bias, and the Weber effect do not invalidate signal detection. They just make it harder. And understanding how they work is the first step toward interpreting signals responsibly.
1. Pariente A, Gregoire F, Fourrier-Reglat A, Haramburu F, Moore N. Impact of safety alerts on measures of disproportionality in spontaneous reporting databases: the notoriety bias. Drug Safety. 2007;30(10):891–898.
2. Raschi E, Gatti M, Gisladottir U, et al. Conducting and interpreting disproportionality analyses derived from spontaneous reporting systems. Frontiers in Drug Safety and Regulation. 2023;3:1323057.
3. Hoffman KB, Demakas AR, Dimbil M, Tatonetti NP, Erdman CB. Stimulated reporting: the impact of US Food and Drug Administration-issued alerts on the Adverse Event Reporting System (FAERS). Drug Safety. 2014;37(11):971–980.
4. Neha R, Subeesh V, Beulah E, Gouri N, Maheswari E. Existence of notoriety bias in FDA Adverse Event Reporting System database and its impact on signal strength. Hospital Pharmacy. 2021;56(3):152–158.
5. Weber JCP. Epidemiology of adverse reactions to nonsteroidal anti-inflammatory drugs. Advances in Inflammation Research. 1984;6:1–7.
6. Arora A, Jalali RK, Vohora D. Relevance of the Weber effect in contemporary pharmacovigilance of oncology drugs. Therapeutics and Clinical Risk Management. 2017;13:1195–1203.
7. European Medicines Agency. Guideline on good pharmacovigilance practices (GVP) Module IX – Signal management (Rev 1). EMA, 2017.