Engineered Outrage: How AI Curates Your Anger for Profit
Here is what they discovered in 2023, in a pre-registered controlled experiment that nobody particularly wanted to publicise. X/Twitter’s engagement-based algorithm increases user anger by 0.47 standard deviations.
Not approximately half a standard deviation. Not “roughly” or “around” or “nearly”. Precisely 0.47. The specificity matters.
It suggests measurement, calculation, optimisation. It suggests that somewhere in a conference room with exposed brick walls and a kombucha tap, someone determined that 0.47 standard deviations of additional anger was the sweet spot.
Not so much that users would abandon the platform. Not so little that engagement metrics would suffer. Just enough.
The language they use matters too. “Engagement-based algorithm.” Not “rage machine” or “anger amplifier”, though these would be more accurate. “Engagement” sounds neutral, almost pleasant.
One might be engaged with a good book, engaged to be married. But here engagement means something else entirely; the particular quality of attention that makes a user profitable. The kind of attention that keeps you scrolling at three in the morning, bathed in blue light, looking for the next thing to be furious about.
Milli and colleagues found that the algorithm doesn’t just measure what users want. It shapes it. It prioritises emotional and partisan content over what users claim to prefer, amplifying out-group animosity with the precision of a Swiss watchmaker.
This is not a bug. This is not an unintended consequence. This is the system working exactly as designed.
I.
Consider the architecture. The studies map it with scientific detachment, as if dissecting a particularly interesting corpse. Eight empirical analyses, one systematic review, one theoretical framework.
X/Twitter appears four times. Facebook three times. The platforms multiply like cells under a microscope, each one running variations of the same experiment: how much anger can we generate before the subject expires?
In 2018, Stella, Ferrara, and De Domenico documented the bot networks during the Catalan referendum. The bots didn’t create the conflict. They amplified it, targeted it, aimed it like a weapon. “Exacerbating social conflict”, the researchers wrote, with the kind of understatement academics prefer. What they mean is that these artificial actors, lines of code, were deployed to make human beings hate each other more efficiently.
The same year, Ribeiro and colleagues traced Russian-linked ad campaigns on Facebook. The ads used “divisive topics (anger, fear) to reach susceptible groups”. Susceptible groups. As if susceptibility were a pre-existing condition rather than something carefully cultivated. The ads achieved “high engagement” and succeeded in “amplifying discord”.
The passive voice is doing heavy lifting here, obscuring the active choices, the deliberate targeting, the profit motive humming beneath it all.
II.
Muhammad Ali, not the boxer but the researcher, discovered something peculiar about Facebook’s ad delivery system in 2019. The platform charges more to reach users who don’t already agree with you. The filter bubble isn’t just metaphorical real estate. It has actual prices.
Want to show your political ad to people who already think like you? That’s affordable. Want to breach the bubble, to reach across the divide? That’ll cost extra.
This is market efficiency at its purest. The algorithm has commodified cognitive dissonance. It’s cheaper to preach to the choir because the choir is already listening. The platform “reinforces filter bubbles”, the study notes, as if this were a natural phenomenon, like sediment settling in still water. But there’s nothing natural about it.
Someone wrote this code. Someone optimised these prices. Someone decided that ideological isolation should be the default setting, and ideological engagement should carry a premium.
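If you want to see how that premium arises, a toy calculation is enough. The sketch below is illustrative only, not Facebook’s actual auction: it assumes a standard engagement-weighted ad auction in which an ad’s delivery rank is roughly its bid multiplied by its predicted engagement, and every number, along with the required_bid helper, is invented.

```python
# Toy illustration (hypothetical numbers, not Facebook's actual auction):
# in an engagement-weighted auction, an ad's delivery rank is roughly
# bid * predicted_engagement, so reaching a low-engagement (non-aligned)
# audience demands a higher bid for the same delivery.

def required_bid(target_rank: float, predicted_engagement: float) -> float:
    """Bid needed to reach a given auction rank at a given engagement estimate."""
    return target_rank / predicted_engagement

aligned = required_bid(target_rank=1.0, predicted_engagement=0.08)      # the choir
non_aligned = required_bid(target_rank=1.0, predicted_engagement=0.02)  # across the divide

print(f"aligned audience:     {aligned:.2f} per unit of reach")
print(f"non-aligned audience: {non_aligned:.2f} per unit of reach")
# Same ad, same rank: the non-aligned audience costs four times as much,
# purely because the model predicts they are less likely to engage.
```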
Paul Bouchaud’s 2024 analysis went further. Using a simulated X/Twitter-like environment, because actual X data is proprietary and locked away like state secrets, he found that engagement maximisation “skews content toward previously engaged material”.
The algorithm doesn’t just give you what you want. It gives you more of what you’ve already had, intensified, concentrated, reduced to its most potent form. Like a dealer cutting the product with something stronger each time.
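The feedback loop is simple enough to simulate in a few lines. What follows is a sketch in the spirit of Bouchaud’s simulated environment, not his model: a feed that boosts whichever topic the user just engaged with, with every parameter invented for illustration.

```python
import random

# Toy feedback loop (all parameters invented): the feed boosts whatever
# the simulated user engaged with, and the topic mix narrows on its own.

random.seed(1)
topics = ["politics", "sport", "science", "culture"]
weights = {t: 1.0 for t in topics}                        # the feed's ranking weights
taste = {"politics": 0.6, "sport": 0.3, "science": 0.2, "culture": 0.2}

for _ in range(500):
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    if random.random() < taste[shown]:                    # the user engages
        weights[shown] *= 1.05                            # boost what already worked

total = sum(weights.values())
for t in topics:
    print(f"{t:8s} {weights[t] / total:.2%} of the feed")
# One topic ends up crowding out the rest: more of what you've already had.
```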
III.
The numbers tell a story of their own. Zhou’s 2025 study found that emotion-targeted communication increases participation by “up to 42%”. Up to. That’s the key phrase. Forty-two per cent when conditions are optimal, when the anger is fresh, when the fear is properly calibrated.
Zhou used what they call “multimodal artificial intelligence systems”. Facial recognition. Text classifiers. Sentiment mapping. The full surveillance apparatus trained on human emotion, learning to recognise anger the way a sommelier recognises tannins.
The AI doesn’t just see that you’re angry. It sees the particular shade of your anger, its vintage, its notes of bitterness or despair.
The ethical concerns are noted, of course. They always are. Privacy violations. Manipulation. The researchers mention these things the way one might mention the weather, as if they were simply atmospheric conditions rather than choices made by actual people in actual boardrooms.
“Raises ethical concerns”, they write, raising them just high enough to be seen, but not high enough to do anything about them.
IV.
Onebunne’s systematic review in 2022 used something called the Preferred Reporting Items for Systematic Reviews and Meta-Analyses. PRISMA, for short. Even the acronyms have acronyms. The review found that AI systems “reinforce bias, reduce diversity, and operate with limited transparency”. The agentless framing again, as if the AI systems simply appeared one day, fully formed, reinforcing and reducing of their own accord.
But look closer at what’s being reduced. Not just diversity of content, but diversity of thought. Bouchaud called it “algorithmic confirmation bias”. Lazer described “reduced exposure to cross-cutting views”. What they mean is this: the algorithm creates a world where you’re always right, where your beliefs are always confirmed, where the other side is always exactly as awful as you imagined.
It’s comfortable, this world. It’s also profitable.
The profit mechanism is almost elegant in its simplicity. Engagement equals revenue. Anger drives engagement. Therefore, anger equals revenue. The transitive property of platform capitalism. No one needs to explicitly decide to make people angry. The algorithm learns it on its own, optimising for metrics that happen to correlate with human misery.
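The transitive property fits in a dozen lines of code. The sketch below is purely illustrative, with invented posts and invented weights: the ranking function is told nothing about anger as a goal, only about predicted engagement, yet the angriest post rises to the top because anger is what the hypothetical historical clicks rewarded.

```python
# Minimal sketch of the transitive logic (invented posts, invented weights):
# the ranker optimises only for predicted engagement; nobody writes "promote anger",
# but if anger predicts clicks in the training data, angry posts rise anyway.

posts = [
    {"text": "Local library extends opening hours", "anger": 0.05, "novelty": 0.7},
    {"text": "You won't BELIEVE what they just did", "anger": 0.90, "novelty": 0.4},
    {"text": "New study on sleep and memory",        "anger": 0.10, "novelty": 0.8},
]

def predicted_engagement(post: dict) -> float:
    # Weights stand in for what a model would learn from historical clicks;
    # here the learned weight on anger dwarfs everything else.
    return 0.8 * post["anger"] + 0.2 * post["novelty"]

for post in sorted(posts, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(post):.2f}  {post['text']}")
```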
V.
“Curator algorithms can learn manipulative strategies”, wrote Albanie, Shakespeare, and Gunter in their theoretical framework. Can learn. As if the learning were optional, accidental, a quirk of the system rather than its fundamental purpose. But what else would they learn?
The algorithm is trained on engagement metrics. It discovers that certain content patterns drive those metrics higher. It doesn’t know or care that those patterns happen to be inflammatory, divisive, and false. It only knows that they work.
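How does it “discover” that on its own? A generic explore-and-exploit learner is enough to show it. The sketch below is an epsilon-greedy bandit with invented click-through rates, not any platform’s actual system: it sees only clicks, and it still converges on the inflammatory style, because that is what the simulated clicks reward.

```python
import random

# Epsilon-greedy sketch (invented click-through rates, not any platform's system):
# the learner is shown nothing about content, only clicks, and still converges
# on the style that happens to be inflammatory.

random.seed(0)
true_ctr = {"neutral": 0.03, "heartwarming": 0.05, "inflammatory": 0.12}
shows = {s: 0 for s in true_ctr}
clicks = {s: 0 for s in true_ctr}

def estimate(style: str) -> float:
    return clicks[style] / shows[style] if shows[style] else 0.0

for _ in range(20_000):
    if random.random() < 0.1:                           # explore occasionally
        style = random.choice(list(true_ctr))
    else:                                               # otherwise exploit the best estimate
        style = max(true_ctr, key=estimate)
    shows[style] += 1
    clicks[style] += random.random() < true_ctr[style]  # simulated click

for s in true_ctr:
    print(f"{s:13s} shown {shows[s]:6d} times, estimated CTR {estimate(s):.3f}")
# The inflammatory style ends up served far more often than the others,
# without anyone ever telling the learner to prefer it.
```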
This is the genius of the system: plausible deniability built into the architecture. No one told the algorithm to amplify anger. It discovered that on its own, through millions of iterations, billions of micro-adjustments, each one nudging the system toward maximum engagement. The companies can claim innocence. We didn’t programme it to do this. It learned.
But they programmed it to learn, and they programmed what it should optimise for, and they continue to profit from what it learned. The algorithm is both their creation and their excuse, their product and their alibi.
VI.
Santini and colleagues documented something peculiar in 2020: media bots on X that “manipulate online ratings and amplify media content”. The bots change the “perception of relevance”. Not relevance itself, but the perception of it.
This is the heart of the matter. The algorithm doesn’t need to change reality. It only needs to change how reality appears in your feed.
This is what Lazer meant by “algorithmic curation reinforces dominant perspectives”. The dominant perspective isn’t necessarily the majority view. It’s the view that generates the most engagement, that keeps users scrolling, that converts attention into advertising revenue.
The algorithm discovers which perspectives are profitable and amplifies them until they become dominant.
The individual studies paint a consistent picture. Amplification of emotion or partisanship: five studies. Bias reinforcement: three studies. Reduction in content diversity: four studies. Manipulation through strategic communication: five studies. Bot-driven amplification: two studies.
Each study a pixel in a larger image, and the image is this: a system designed to profit from human anger, operating at scale, with minimal oversight and maximum efficiency.
VII.
There’s something almost admirable about the purity of it. No ideology beyond profit. No agenda beyond engagement. The algorithm doesn’t care if you’re angry about immigration or climate change, police brutality or cancel culture. It only cares that you’re angry, and that your anger keeps you scrolling, clicking, sharing, generating the data that gets packaged and sold to advertisers.
This is cultural decay as a business model, the American innovation of monetising social dissolution. Other industries have done it before. The tobacco companies monetised addiction. The opioid manufacturers monetised pain.
Now the tech platforms monetise anger, and they’ve found a renewable resource. Unlike cigarettes or pills, anger doesn’t run out. It regenerates, feeds on itself, and grows stronger with each algorithmic amplification.
The research documents this with scientific precision. Each study another data point in humanity’s largest behavioural experiment. We are all subjects now, our emotions measured and optimised, our anger cultivated and harvested.
The algorithm knows us better than we know ourselves. It knows that we’ll click on the thing that makes us furious, even though we claim to prefer balanced content. It knows that we’ll engage with out-group animosity, even though we say we want unity. It knows that we’ll pay attention to what makes us angry, and attention is the only currency that matters.
VIII.
The server farms hum in the desert, converting human emotion into heat and profit. The data centres consume enough electricity to power small cities, all of it devoted to the great work of making people angrier. This is progress, apparently. This is innovation. This is the future we’re building, one algorithmic amplification at a time.
The researchers conclude their studies with calls for reform, for oversight, for ethical frameworks. But reform assumes the system is broken. What if it’s working perfectly? What if the amplification of anger, the reinforcement of bias, and the reduction of diversity aren’t bugs but features?
What if we’ve built exactly what we meant to build: a machine for extracting value from human emotion, optimised for efficiency, indifferent to consequences?
The studies tell us what we already know but prefer not to acknowledge. The algorithm curates our anger because anger is profitable. It amplifies our worst impulses because our worst impulses drive engagement. It divides us because division keeps us scrolling. The machine runs on outrage, and we feed it willingly, one click at a time, one share at a time, one angry comment at a time.
This is how cultural decay becomes a business model. Not through conspiracy or malice, but through the relentless logic of optimisation. The algorithm doesn’t hate us. It doesn’t even know we exist, not as people, not as citizens, not as human beings capable of joy or sorrow or complex thought. It only knows us as engagement metrics, as data points, as resources to be extracted and processed and sold.
The precision of our anger has been measured: 0.47 standard deviations. The price of our division has been calculated: a premium to reach anyone who doesn’t already agree. The value of our outrage has been quantified: up to 42% more participation.
These numbers will be optimised further. The machine is always learning, always improving, always finding new ways to convert human emotion into shareholder value.
This is the world we’ve built, or rather, the world that’s been built for us, one algorithm at a time. A world where anger is currency and division is profitable, where the algorithm knows what will make us furious before we do, where our outrage is carefully cultivated and harvested like a cash crop.
The research is clear. The evidence is overwhelming. The system is working exactly as designed. The only question remaining is whether we’ll continue to pretend otherwise.
(This article was originally published on Ai-Ai-OH.)
Hi, I’m Miriam — an independent AI ethics researcher, writer and strategic SEO consultant.
I break down the big topics in AI ethics, so we can all understand what’s at stake. And as a consultant, I help businesses build resilient, human-first SEO strategies for the age of AI.
If you enjoy these articles, the best way to support my work is with a subscription. It keeps my articles independent and ad-free, and your support means the world. 🖤
References
Albanie, Samuel, H. Shakespeare, and Tom Gunter. “Unknowable Manipulators: Social Network Curator Algorithms”. arXiv.org, 2017.
Ali, Muhammad, Piotr Sapiezynski, A. Korolova, A. Mislove, and A. Rieke. “Ad Delivery Algorithms: The Hidden Arbiters of Political Messaging”. Web Search and Data Mining, 2019.
Bouchaud, Paul. “Skewed Perspectives: Examining the Influence of Engagement Maximization on Content Diversity in Social Media Feeds”. Journal of Computational Social Science, 2024.
Lazer, D. “The Rise of the Social Algorithm”. Science, 2015.
Milli, S., Micah Carroll, Sashrika Pandey, Yike Wang, and A. Dragan. “Twitter’s Algorithm: Amplifying Anger, Animosity, and Affective Polarization”. arXiv.org, 2023.
Onebunne, Amaka Peace. “Algorithmic Bias and Media Manipulation: A Systematic Review of AI’s Role in Shaping Public Perception and Political Discourse”. World Journal of Advanced Research and Reviews, 2022.
Ribeiro, Filipe Nunes, Koustuv Saha, Mahmoudreza Babaei, Lucas Henrique, Johnnatan Messias, Fabrício Benevenuto, Oana Goga, K. Gummadi, and Elissa M. Redmiles. “On Microtargeting Socially Divisive Ads: A Case Study of Russia-Linked Ad Campaigns on Facebook”. FAT, 2018.
Santini, R., D. Salles, G. Tucci, F. Ferreira, and F. Grael. “Making up Audience: Media Bots and the Falsification of the Public Sphere”. Applied Informatics, 2020.
Stella, Massimo, Emilio Ferrara, and M. De Domenico. “Bots Increase Exposure to Negative and Inflammatory Content in Online Social Systems”. Proceedings of the National Academy of Sciences of the United States of America, 2018.
Zhou, Xingchen. “Algorithmic Agitation and Affective Engineering: AI-Driven Emotion Recognition and Strategic Communication in Contemporary Social Movements”. Applied and Computational Engineering, 2025.
Note on Sources
All citations are drawn from the systematic review “How does AI curate public anger for profit?”, which searched the Semantic Scholar corpus of 126 million academic papers. The review screened sources based on criteria including AI content curation, emotional content analysis, platform economics, and empirical evidence. Seven of the ten studies provided full text access, whilst three (Zhou 2025, Onebunne 2022, and Albanie et al. 2017) were analysed through their abstracts and metadata.