
What is AI, and what are its potential risks and benefits? By aiEDU

AI in 5 minutes

Get ready for the future in just 5 minutes.

What does AI mean?

A diagram showing images of chatgpt and two humanoid robots. The diagram is labeled "What people mean when they say AI in everyday conversation."
The term “Artificial Intelligence (AI)” has a lot of different meanings depending on who’s talking.
Yeah, AI can mean “smart creepy robots” sometimes. But technology isn’t quite there yet. It turns out “AI” might mean a lot more than you think.
A diagram showing a large yellow circle, with a much smaller blue circle inside it. The blue circle contains images of chatgpt and two humanoid robots and is labeled "What people mean when they say AI in everyday conversation." The much larger yellow circle around it contains images of an online translator, a phone showing Tik Tok, a Google search page, a drone, and a smartphone. The larger yellow circle is labeled "All the possible meanings of AI."
When computer scientists and politicians talk about "AI", they usually mean the technologies we use every day like FaceID, TikTok's algorithm, and Google search. And sometimes they mean killer robots, unfortunately.
Computers have a lot of clout in your everyday life: they decide what posts to show you, flag whether you've been out of school too much, and even predict whether you might commit a crime.
But what's special about AI compared with other computer programs? AI allows computers to learn from past experience.

AI learns from experience

Think about how a person becomes a great musician or gets a top rank in a video game. Practice, practice, practice. AI allows a computer to practice making a decision or taking a guess millions and millions of times until it gets good.
Computer scientists use "data" to train an AI. In short, data just means information. What can data look like?
  • Biometrics
  • Numbers
  • Images
  • Audio
  • Text
We can use these types of data to train computers to do almost anything through practice.
Here are some examples!
Data & Practice → Decision/Prediction
Faces: The computer analyzes lots of faces and practices telling people apart.
Three pictures of different faces
Face recognition: The computer can learn your face and say, “Yes, that’s you!"
A photograph of a single face with a checkmark in a green circle over it
Videos: The computer analyzes millions and millions of videos to see which kinds of people like which types of videos. It figures that out by seeing how long people watch each video.
Three screenshots of videos about owls
Recommendations: The computer learns to make a good guess about what videos you'll like the most, and recommends them in your feed or FYP.
A screenshot of a video where a man is holding a huge owl. The title of the video is "Owl facts, but it gets disturbing."
Text: The computer analyzes English words paired with their Spanish translations, automatically learning which phrases mean what.
A screenshot of three paragraphs of black text, written in Spanish, about Rihanna. Several words throughout the text are bolded and teal
Translation: The computer learns how to translate almost any text you can throw at it from English to Spanish.
A screenshot of a Google translate English to Spanish input screen
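The "learn from examples" idea above can be sketched in a few lines of code. This is a toy illustration only (a lookup table, not real machine translation, which learns statistical patterns from millions of sentences), and the word pairs here are made up for the example:

```python
# Toy sketch of "learning from data": the program is never told the
# translation rules directly. It only sees example pairs (the "data")
# and builds up knowledge from them -- the core idea behind AI
# training, at a vastly smaller scale.

# Training data: English words paired with Spanish translations.
examples = [
    ("cat", "gato"),
    ("dog", "perro"),
    ("owl", "buho"),
    ("house", "casa"),
]

# "Training": the computer goes through each example and remembers it.
learned = {}
for english, spanish in examples:
    learned[english] = spanish

# "Prediction": translate new text using only what was learned.
# Words the computer never practiced on are left unchanged.
def translate(sentence):
    return " ".join(learned.get(word, word) for word in sentence.split())

print(translate("the cat and the owl"))  # -> "the gato and the buho"
```

Real AI goes one step further: instead of memorizing exact examples, it learns patterns, so it can make good guesses even about inputs it has never seen before.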
So all that is pretty cool. But computers make a lot of decisions. There are a couple things we might worry about.
Sometimes those decisions go wrong, so how do we stop that from happening?
And if AI learns from past experiences, how do we make sure we don't repeat the mistakes of the past?
AI must be built and used responsibly
AI isn't all-knowing. And that means it can make mistakes, sometimes even really big, uncomfortable ones.
  • AI predicts whether people will commit crimes.
A screenshot of a New York Times article. There is an image of an African American man, and written over his image is the title "An Algorithm that Grants Freedom or Takes It Away." Underneath is the subtitle, "Across the United States and Europe, software is making probation decisions and predicting whether teens will commit crime. Opponents want more human oversight."
  • AI has a harder time recognizing the faces of black people compared to white people.
A screenshot of a New York Times article. There is an image of an African American woman, and next to her image is the title "Who Is Making Sure the A.I. Machines Aren't Racist?" The subtitle of the article is "When Google forced out two well-known artificial intelligence experts, a long-simmering research controversy burst into the open."
  • AI can be used to create false or misleading text that spreads incorrect ideas.
A screenshot of a New York Times Magazine article. There is an image of a horrified face made out of words. To the left of the image is the title "A.I. is mastering language. Should we trust what it says?" The subtitle of the article is "OpenAI’s GPT-3 and other neural nets can now write original prose with mind-boggling fluency — a development that could have profound implications for the future."
If we know AI can make mistakes, we need to use and build it responsibly. Here's what you can do right now about that:
  1. Tell other people you know about how AI works and how common it is in real life
  2. Write a letter to your school or representative asking them to help spread the word about AI
  3. Check out podcasts, videos, articles, and classes about AI in your life
