Social Media Engagement Optimization Systematically Amplifies Harmful Content
Social media platforms achieved extraordinary engagement growth by deploying recommendation algorithms that maximize user interactions (likes, shares, comments, time-on-site). Facebook's 2018 "Meaningful Social Interactions" (MSI) update explicitly prioritized engagement-generating content, with the stated intent of fostering connections. But the algorithms systematically amplified moral-outrage, emotionally provocative, and politically extreme content, because that content reliably generates the most engagement. Internal Facebook research (leaked 2021) found that "64% of all extremist group joins are due to our recommendation tools." The MSI update, designed to be prosocial, backfired: European political parties reported posting more negative content in response, because angry content received more shares and comments. The mechanism that succeeded at driving engagement is the same mechanism that amplifies harm.
Germano et al. (2025) demonstrate mathematically that weighting social interactions (likes and shares) in ranking increases engagement but simultaneously increases misinformation spread and polarization; under current algorithmic paradigms the two effects are inseparable. Levy's (2021) field experiment with more than 30,000 Facebook users showed the algorithm systematically sorting users into ideologically homogeneous news diets. Ranking algorithms privilege PRIME information (Prestigious, Ingroup, Moral, and Emotional) regardless of accuracy. The harm is not accidental; it is a structural consequence of optimizing for a proxy metric (engagement) when the actual goal (informed connection) diverges from it.
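The proxy-divergence mechanism is easy to see in a toy simulation. The sketch below is not from the cited paper; it simply assumes, per the PRIME finding, that a post's engagement probability rises with how provocative it is, and shows that increasing the engagement weight in a ranking score raises expected engagement and outrage exposure together. All posts, probabilities, and weights are made up for illustration.

```python
# Toy illustration: weighting predicted engagement in a feed ranker also increases
# exposure to outrage-style content when the two are correlated (assumed here).
import random

random.seed(0)

def make_posts(n=1000):
    """Each post has an 'outrage' level and an engagement probability that
    rises with outrage -- the correlation described in the brief."""
    posts = []
    for i in range(n):
        outrage = random.random()                # 0 = neutral, 1 = highly provocative
        quality = random.random()                # stand-in for informativeness
        p_engage = 0.05 + 0.25 * outrage         # assumption: outrage drives engagement
        posts.append({"id": i, "outrage": outrage, "quality": quality, "p_engage": p_engage})
    return posts

def rank(posts, engagement_weight):
    """Score = quality + w * predicted engagement; w = 0 approximates a neutral feed."""
    return sorted(posts, key=lambda p: p["quality"] + engagement_weight * p["p_engage"], reverse=True)

def feed_stats(feed, k=50):
    top = feed[:k]
    return (sum(p["p_engage"] for p in top) / k,  # expected engagement in top-k
            sum(p["outrage"] for p in top) / k)   # average outrage exposure in top-k

posts = make_posts()
for w in (0.0, 1.0, 5.0):
    eng, out = feed_stats(rank(posts, w))
    print(f"w={w}: expected engagement={eng:.3f}, outrage exposure={out:.3f}")
# Both numbers rise together as w grows: optimizing the proxy (engagement)
# drags the harmful correlate (outrage exposure) up with it.
```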
Platform-level content moderation (removal, labeling, fact-checking) addresses symptoms but not the algorithmic root cause — engagement optimization remains the core business model. The EU Digital Services Act and proposed U.S. legislation attempt regulatory intervention, but platforms argue algorithmic transparency would expose trade secrets. Some platforms experimented with chronological feeds (Twitter's toggle), but users often revert to algorithmic feeds because they are more engaging — the very mechanism causing harm creates the user preference that perpetuates it. Facebook's Oversight Board can review individual content decisions but has no authority over algorithmic design. The fundamental tension: what makes platforms profitable (engagement) is what makes them harmful (amplification of emotionally provocative content).
Several directions could address the root cause rather than the symptoms: algorithmic transparency requirements that allow independent auditing of recommendation systems without exposing trade secrets (differential-privacy approaches, academic researcher access APIs); alternative ranking objectives that optimize for "bridging" content connecting diverse viewpoints rather than for engagement (sketched below); business model innovation that decouples platform revenue from engagement intensity; and formal frameworks for measuring and reporting amplification of harmful content as a platform-level metric.
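One way to make the "bridging" objective concrete is to score an item by its weakest approval across viewpoint clusters rather than by total engagement, loosely in the spirit of bridging-based ranking. The sketch below is a hypothetical illustration, not any platform's actual algorithm; the cluster labels and approval rates are assumed inputs (e.g., inferred from prior interaction history).

```python
# Hypothetical "bridging" ranking objective: an item is only as good as the
# approval it earns from its least-approving viewpoint cluster, so content
# that only one side loves cannot dominate the feed.
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    approvals: dict[str, float]   # cluster name -> approval rate in [0, 1]

def engagement_score(item: Item) -> float:
    """Conventional objective: total approval, regardless of who it comes from."""
    return sum(item.approvals.values())

def bridging_score(item: Item) -> float:
    """Bridging objective: minimum approval across clusters."""
    return min(item.approvals.values())

items = [
    Item("partisan_hit",    {"left": 0.95, "right": 0.05}),
    Item("shared_interest", {"left": 0.50, "right": 0.45}),
    Item("outrage_bait",    {"left": 0.05, "right": 0.88}),
]

print([i.id for i in sorted(items, key=engagement_score, reverse=True)])
# ['partisan_hit', 'shared_interest', 'outrage_bait'] -- one-sided content wins
print([i.id for i in sorted(items, key=bridging_score, reverse=True)])
# ['shared_interest', 'partisan_hit', 'outrage_bait'] -- cross-cutting content wins
```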
A team could design and test an alternative recommendation algorithm that optimizes for content quality or viewpoint diversity rather than engagement, measuring the tradeoff in user satisfaction. Alternatively, a team could build a browser extension that detects and visualizes engagement-optimized amplification in real time, showing users how their feed differs from a chronological or diversity-optimized version. Computer science, behavioral science, and interface design skills apply.
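For the browser-extension idea, the core measurement is simple to prototype: re-sort the posts the extension already observes into a chronological order and report how differently the two orderings expose the user to provocative content. The field names below (timestamp, provocation_score) are hypothetical placeholders for whatever the extension can actually scrape and classify.

```python
# Rough sketch of the comparison a feed-auditing extension could display.
# 'provocation_score' stands in for any classifier the team trusts; field
# names are placeholders, not a real platform API.

def exposure_share(feed, k=20):
    """Average provocation score over the first k posts the user would see."""
    top = feed[:k]
    return sum(p["provocation_score"] for p in top) / max(len(top), 1)

def amplification_report(algorithmic_feed, k=20):
    """Compare the platform's ordering against a chronological re-sort of the same posts."""
    chronological = sorted(algorithmic_feed, key=lambda p: p["timestamp"], reverse=True)
    algo = exposure_share(algorithmic_feed, k)
    chrono = exposure_share(chronological, k)
    return {
        "algorithmic_exposure": algo,
        "chronological_exposure": chrono,
        # > 1.0 means the ranking amplifies provocative content relative to recency alone
        "amplification_factor": algo / chrono if chrono else float("inf"),
    }

# Example with fabricated posts, captured in the order the platform showed them:
feed = [
    {"id": 1, "timestamp": 100, "provocation_score": 0.9},
    {"id": 2, "timestamp": 300, "provocation_score": 0.7},
    {"id": 3, "timestamp": 400, "provocation_score": 0.2},
    {"id": 4, "timestamp": 200, "provocation_score": 0.1},
]
print(amplification_report(feed, k=2))
# algorithmic_exposure=0.80, chronological_exposure=0.45, amplification_factor~1.78
```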
This is a "problems of success" case in the "optimization-metric-divergence" sub-type: the proxy metric (engagement) diverges from the actual goal (informed connection), and optimizing the proxy causes harm. Structurally related to the reflexive measurement pattern identified in the Wave 0 audit (Goodhart's Law instances). Related to existing brief digital-ml-safety-benchmark-dataset-gap (which covers benchmark-to-deployment divergence in AI safety, a different manifestation of the same optimization-proxy problem).
Germano, Gomez & Sobbrio (2025), "Ranking for Engagement," Journal of Public Economics; Levy (2021), "Social Media, News Consumption, and Polarization," American Economic Review; Frances Haugen testimony / leaked Facebook internal research (2021), accessed 2026-02-23