
How does persuasive technology amplify societal problems?

Truth, memory function, and elections are all being sacrificed to profit

“I felt so insecure about myself. My abilities, my looks, my roots, my potential… I was comparing my life with people around me and people I saw on social media.”
– Nathan, 21, Midlaren, Netherlands
“I realized I was becoming more hateful and less open minded.”
– Madison, 23, Louisville, KY
“If you've never experienced addiction, a small warning, it sucks. I mean that literally: it sucks you in and prevents you from being happy, reaching your dreams, or living life.”
– Mahika, 15, San Francisco, CA
[Image: an angry-looking man and woman yelling at each other]
Nathan, Madison, and Mahika’s stories are examples of the ways in which technology shaped by the attention economy is resulting in painful experiences of insecurity, distortion, and addiction. As discussed in the Attention economy unit and on the Center for Humane Technology’s Ledger of Harms, research shows that these problems are being felt throughout society.
Many of these problems have existed in various forms for generations. What’s different now is that they’re being amplified by AI-powered technology used around the clock by billions of people around the world. For instance:
  • Fake news spreads six times faster than true news.¹ Misinformation has always been around, but on platforms that thrive on engagement, unexpected, attention-grabbing misinformation is widely shared.
  • The level of social media use on a given day is linked to weaker memory function the next day.² There have always been companies competing for attention, from TV to magazines to billboards. But the frequency and strength of attention hijacking that happens on social media hurts our memory and focus.
  • The outcomes of elections around the world are being more easily manipulated via social media.³ A politician can now deliver customized, emotionally resonant messages to different groups, even if those messages contradict each other, because most people never find out about the contradiction.
  • AI algorithms have shown significant stereotypical bias by gender, race, profession, and religion.⁴ Society has long struggled with these biases, but when they’re embedded in the algorithms that shape platforms, they can become even more prevalent. Watch this bonus clip from The Social Dilemma to explore the topic more:
[Video: The Social Dilemma – Bonus Clip: The Discrimination Dilemma]
In response to public outcry, Facebook, Twitter, YouTube, and similar platforms have begun to invest heavily in programs designed to track and counteract organized hate and misinformation, address bias, and mitigate related harms. But as long as their business models reward the posts that get people worked up, variations of these problems will continue to emerge.
We need technology that is accountable to the communities it serves. As long as our attention is highly profitable, and a small number of companies are trying to capture and control the attention of everyone in the world, that accountability will be impossible.
