
What harms are caused by persuasive technology?

A subtle form of mind control

“Meanwhile, you get slowly sucked in, spending more and more time on it. I began to be aware that I was believing things that…didn't exist.”
–Jasper, Age 24, Cape Town, South Africa
“I got on social media around high school, and I saw people become more distant because of it. There used to be such freedom in the way that we behaved as kids, and now people were obsessing over likes and hearts and everything.”
–Amanda, Age 19, Sydney, Australia
[Image: A student, seen from the side and behind, texting instead of paying attention to a teacher writing on a blackboard.]
At the beginning of this course, we shared Dasani and Siri’s stories of distraction on social media. We often hear stories like these and think, “Yeah, social media can be really distracting.” Distractions here and there can be annoying, but they’re no big deal. You get distracted, you move on.
But when distraction happens over and over again, it’s part of something bigger. Social media apps on our phones are doing more than distracting us: as in Jasper and Amanda’s stories, apps can change our behavior, what we think, how we feel, and ultimately how we understand ourselves.
Persuasive technology is meant to drive profit for tech companies. But each of the features we’ve discussed has unintended consequences. Persuasive design can’t control us like a puppet on a string, but it can influence us in ways that add up. It teaches all of us – adults included! – habits that can become compulsions and even addictions. And it often creates a funhouse mirror that can shape what we think about culture, politics, and even our own bodies.
[Image: Hands tied up by the power cord of a smartphone.]
We’ve discussed how algorithms determine which topics flood our feeds by looking for patterns in past data to make predictions about what will keep us engaged. Now, imagine an AI algorithm observing human behavior, trying to figure out what humans want. It notices that whenever people drive past a car crash, they slow down and give it their close attention. Clearly, people must be drawn to car crashes, so maybe what they want is highways full of car crashes!
Of course not. People look at car crashes because they need to be aware of a potentially dangerous situation, and because we are naturally curious about the world around us.
As ridiculous as this example sounds, our social media platforms constantly do the same thing. They fill our news feeds with metaphorical car crashes by promoting more provocative and performative content, leaving us in the online equivalent of a traffic jam. They show us not what we want, but the things we can’t help looking at.
Just because we look at or click on something doesn't mean it's what we want, or even what we believe is best for us. More often, our actions online reflect how effectively apps are nudging us toward specific behaviors, usually engagement.
