Jump to a concept

🤖 What AI Actually Is
📚 Training Data
🔍 Pattern Recognition
🗂️ Classification
📋 Algorithms
🧠 Machine Learning
⚖️ Bias
💭 Hallucination
💬 Prompts
📖 Large Language Models
✨ Generative AI
🎯 AI Safety and Alignment
🤖
01

What AI Actually Is

AI is software that learns patterns from data rather than following rules someone wrote down. A traditional program does only what its author spelled out, step by step. An AI system is shown millions of examples and works out the rules itself, which is why it can do things its creators never explicitly coded.

Why this matters for your child

Children encounter AI constantly — in search results, recommendations, games, homework tools. Understanding that it's a pattern-matcher, not a thinker, gives them a foundation for every more specific AI concept.

At the kitchen table

"What if I showed you a thousand photos of cats and a thousand photos of dogs, and asked you to get really good at telling them apart? I'd never tell you the rules; you'd just let you look at lots of examples. That's basically how AI learns."

📚
02

Training Data

AI learns from examples — and those examples are called training data. Show a system enough labeled photos of dogs and it learns to recognize dogs. Show it enough text written by humans and it learns to produce human-sounding text. The training data shapes absolutely everything: what the AI can do, what it gets wrong, and where it carries hidden biases.

Why this matters for your child

When children understand that AI is only as good as the examples it learned from, they stop treating its outputs as ground truth. They start asking: 'Where did it learn this? What might it have missed?'

At the kitchen table

"If I only ever fed you recipes from one country, what would your cooking be like? What would you think cooking was? That's what it's like for AI — it only knows what it was taught, and it doesn't know what it doesn't know."

🔍
03

Pattern Recognition

AI is extraordinarily good at finding patterns in large amounts of data — far better than humans at scale. It can spot that one type of tumor looks slightly different in thousands of scans, or that certain words tend to predict whether an email is spam. But it can also find patterns that aren't really there, or patterns that are superficial rather than meaningful.

Why this matters for your child

Pattern recognition is what makes AI useful and also what makes it unreliable. Kids who understand this know why AI can be brilliant in one context and baffling in another.

At the kitchen table

"AI found a pattern in wolf photos — snow. Most wolf photos had snowy backgrounds, so it learned 'snow = wolf.' It passed every test, but it had learned the wrong thing. Can you think of a pattern that looks right but isn't?"
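For parents who code, the wolf-and-snow story can be acted out in a few lines of Python. The "photos" are reduced to invented feature sets; nothing here reflects a real vision system, only the shape of the mistake.

```python
# Toy version of the wolf-and-snow story. Each "photo" is a made-up
# feature set plus a label. "Training" keeps any feature that appears
# in every wolf photo but never in a dog photo.
photos = [
    ({"wolf_shape", "snow"}, "wolf"),
    ({"wolf_shape", "snow"}, "wolf"),
    ({"dog_shape", "grass"}, "dog"),
    ({"dog_shape", "lawn"}, "dog"),
]

wolf_common = set.intersection(*(f for f, label in photos if label == "wolf"))
dog_any = set.union(*(f for f, label in photos if label == "dog"))
learned = wolf_common - dog_any  # {'wolf_shape', 'snow'}: snow made the cut

def predict(features):
    """Call it a wolf if any learned 'wolf feature' is present."""
    return "wolf" if features & learned else "dog"

# Perfect on the training photos... but show it a dog in snow:
print(predict({"dog_shape", "snow"}))  # wolf
```

The model passes every training example and still gets the dog wrong, because the pattern it found was the background, not the animal.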

🗂️
04

Classification

One of the most common things AI does is classification: deciding which category something belongs to. Spam or not spam. Tumor or healthy tissue. Positive review or negative review. The categories themselves are human choices, and the boundaries between them are human decisions — AI just learns to apply them at speed.
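For parents who code, the human-made nature of categories is visible in even a toy classifier. The keyword list below is invented for illustration; real spam filters learn their rules from data rather than using a hand-written list.

```python
# A minimal sketch of classification: a human chooses the categories
# ("spam" / "not spam") and the boundary (which words count as spammy);
# the program just applies that choice at speed.
SPAM_WORDS = {"winner", "free", "prize", "urgent"}  # a human decision

def classify(message: str) -> str:
    """Sort a message into one of two human-defined categories."""
    words = set(message.lower().split())
    return "spam" if words & SPAM_WORDS else "not spam"

print(classify("Claim your free prize now"))  # spam
print(classify("Dinner at seven tonight"))    # not spam
```

Change the keyword set and the same code sorts the same messages differently: the sorting is only as good as the categories.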

Why this matters for your child

Classification is everywhere and it has real consequences — in credit scoring, in hiring, in content moderation. Children who understand this can ask: 'Who decided what the categories are? Who decided what belongs in each?'

At the kitchen table

"If I asked you to sort your toys into 'fun' and 'not fun,' you'd have to decide what fun means first. AI is the same — someone decides the categories, and the AI learns to sort. The sorting is only as good as the categories."

📋
05

Algorithms

An algorithm is just a set of step-by-step instructions for doing something. A recipe is an algorithm. A morning routine is an algorithm. A set of game rules is an algorithm. Computers follow algorithms very fast and without getting tired or bored, but they follow them exactly — they don't improvise, they don't apply common sense, and they don't know when the instructions don't apply.

Why this matters for your child

Algorithms are the building block everything else rests on. Children who can think algorithmically — breaking problems into precise steps — have a skill that matters whether they ever work in tech or not.

At the kitchen table

"Tell me, step by step, exactly how to make a peanut butter sandwich. Don't leave anything out — I'm going to follow your instructions perfectly." (Then follow them literally, including every gap.) This classic demonstration rarely fails to land: computers do exactly what the steps say, nothing more.
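For parents who code, the sandwich exercise translates directly into a program: a computer follows steps exactly and never fills in a missing one. The step names here are invented for the sketch.

```python
# A literal instruction-follower. It performs each step as written and
# has no common sense to supply a step nobody mentioned.
def follow(instructions):
    jar_open = False
    done = []
    for step in instructions:
        if step == "open jar":
            jar_open = True
        if step == "spread peanut butter" and not jar_open:
            raise RuntimeError("stuck: nobody said to open the jar")
        done.append(step)
    return done

child_steps = ["get bread", "get jar", "spread peanut butter"]
try:
    follow(child_steps)
except RuntimeError as err:
    print(err)  # stuck: nobody said to open the jar
```

Add the missing "open jar" step and the same follower succeeds; the program never improvised, the instructions improved.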

🧠
06

Machine Learning

Traditional programming means a human writes every rule. Machine learning turns this around: instead of programming the rules, you show the computer thousands of examples and it figures out the rules itself. This makes it possible to build systems that do things no human could program explicitly — like recognizing faces or translating languages — but it also means the 'rules' the computer learns are often not fully understood even by its creators.
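For parents who code, the flip from "write the rule" to "learn the rule" can be shown with a toy example. The numbers and labels below are invented; real systems learn millions of parameters, but by the same basic move of fitting a rule to labeled examples.

```python
# Nobody writes the rule; it is computed from labeled examples.
examples = [(1, "small"), (2, "small"), (3, "small"),
            (7, "large"), (8, "large"), (9, "large")]

def learn_threshold(data):
    """Learn a cutoff: midway between the largest 'small' and the
    smallest 'large' example seen in training."""
    smalls = [x for x, label in data if label == "small"]
    larges = [x for x, label in data if label == "large"]
    return (max(smalls) + min(larges)) / 2

threshold = learn_threshold(examples)  # 5.0, a number no human typed in

def predict(x):
    """Apply the learned rule to a new, unseen example."""
    return "small" if x < threshold else "large"

print(predict(4), predict(6))  # small large
```

Feed it different examples and it learns a different threshold: the "rule" lives in the data, not in the code.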

Why this matters for your child

Understanding machine learning is what separates 'AI is magic' from 'AI is a powerful but explainable technology.' Kids who get this become much better at questioning AI outputs.

At the kitchen table

"Instead of me teaching you every rule for recognizing a bird — two legs, feathers, beak, wings — what if I showed you ten thousand photos of birds and ten thousand photos of non-birds, and let you figure out the patterns yourself? That's machine learning. What might you get right? What might you get wrong?"

⚖️
07

Bias

AI bias isn't a bug — it's a feature of learning from examples. If the examples are skewed, the AI is skewed. An AI trained mostly on medical data from men will be less accurate for women. An AI trained on historical hiring decisions will encode historical hiring biases. The AI isn't prejudiced in the human sense; it's just a very accurate mirror of the data it was given.

Why this matters for your child

AI systems are making real decisions about real people — in hiring, lending, healthcare, criminal justice. Children who understand bias can ask 'who does this system work better for?' instead of accepting its outputs as neutral.

At the kitchen table

"Imagine a robot learned to predict who's a good student by studying historical school records. But historical records show boys got more recognition than girls, even when they performed equally. What would the robot learn? Who would it shortchange?"

💭
08

Hallucination

AI language systems sometimes produce information that sounds completely credible and is completely wrong. They don't 'know' facts the way a textbook does — they predict what text should come next based on patterns. When they haven't seen enough examples of something, or when patterns point the wrong way, they produce confident-sounding nonsense. The technical term for this is hallucination.
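For parents who code, a toy next-word predictor shows the mechanism behind hallucination: asked about something it has never seen, it still produces an answer, with no built-in signal that it is guessing. The tiny "training text" below is invented and the model is vastly simpler than a real one.

```python
from collections import Counter, defaultdict

corpus = ("paris is the capital of france . "
          "rome is the capital of italy .").split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1
overall = Counter(corpus)

def predict_next(word):
    if following[word]:
        return following[word].most_common(1)[0][0]
    # Never seen this word: fall back on the most common word overall,
    # delivered with exactly the same confidence as a real answer.
    return overall.most_common(1)[0][0]

print(predict_next("capital"))   # 'of': genuinely learned from the text
print(predict_next("atlantis"))  # a confident guess, not knowledge
```

The output gives no hint which answer was grounded and which was a guess; that is the practical danger in miniature.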

Why this matters for your child

This is the most important practical AI literacy lesson for school-age children right now. AI outputs can be wrong in ways that aren't obvious. Always verify anything that matters.

At the kitchen table

"Imagine someone who has read every book ever written but has never actually left the house. They'd sound extremely knowledgeable, and they'd be right about most things — but when you asked about something rare or new, they might fill in the gaps with a very confident wrong answer. That's hallucination."

💬
09

Prompts

A prompt is what you say to an AI system — and it turns out that how you ask matters enormously. The same AI can give a shallow answer or a deeply useful one depending on how you phrase the question, what context you provide, and what you ask it to do. Prompt engineering is the emerging skill of communicating with AI systems effectively.

Why this matters for your child

Being good at prompting is a practical skill children can develop now. It also teaches something deeper: clear questions come from clear thinking, and AI rewards precision in a way everyday conversation often forgives.

At the kitchen table

"Ask your phone's AI assistant 'what's a good book?' and see what you get. Now ask it 'I'm 12, I love adventure stories, I've already read the Percy Jackson series, and I want something a bit harder — what should I read next?' Which answer was more useful? Why?"

📖
10

Large Language Models

Large language models (LLMs) are the technology behind ChatGPT, Claude, Gemini, and similar tools. They're trained on enormous amounts of text — a vast slice of the public internet — and they work by predicting what word should come next, over and over. They produce remarkably human-sounding text, but they're doing very sophisticated pattern matching, not reasoning or understanding in the human sense.
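For parents who code, "predicting the next word, over and over" can be shrunk to a toy bigram model. Real LLMs use neural networks trained on vastly more text, but the loop below is recognisably the same move, which is why "very sophisticated autocomplete" is a fair description.

```python
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . "
          "the cat ate the fish . "
          "the dog sat on the rug .").split()

# For each word, count what tends to come next.
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def generate(word, length=6):
    """Repeatedly append the likeliest next word: autocomplete on a loop."""
    out = [word]
    for _ in range(length):
        out.append(following[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # fluent-sounding, built purely from word statistics
```

The output reads like language because the patterns came from language; at no point did the program know what a cat is.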

Why this matters for your child

When children understand what an LLM actually is, they stop anthropomorphising it ('it thinks', 'it knows', 'it decided'). They start using more accurate language: 'it predicted', 'it generated', 'it matched.' That shift matters for how they evaluate what it produces.

At the kitchen table

"When you finish my sentence, you understand what I mean. An LLM finishes sentences by pattern — it's seen so much text that it knows what tends to come after what. It's astonishingly good at it. But it's not understanding. It's very sophisticated autocomplete."

✨
11

Generative AI

Generative AI creates new content — text, images, music, video, code. It does this by learning the patterns in huge amounts of existing content and producing new examples that fit those patterns. It's not imagining from scratch; it's remixing and extending patterns from its training data in ways that feel original.

Why this matters for your child

Generative AI is what children are using most directly — for homework help, images, creative projects. Understanding that it's pattern-remixing helps them evaluate what it produces and maintain their own creative agency.

At the kitchen table

"If you listened to a thousand pop songs, you'd start to understand what makes a pop song — verse, chorus, certain chord patterns, certain themes. You could probably write something that sounds like a pop song. That's what generative AI does with text and images. It's learned the patterns well enough to produce new examples. But it's not 'thinking up' anything new."

🎯
12

AI Safety and Alignment

AI safety is the field concerned with making sure AI systems do what we actually want — not just what we literally asked for. The gap between instruction and intent turns out to be significant. Ask an AI to win a game at any cost and it might find a way to win that breaks the spirit of the rules. Ask it to 'make the user happy' and it might simply tell them what they want to hear. Getting AI to truly understand human values — rather than just optimise a proxy for them — is a hard unsolved problem.
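For parents who code, the gap between instruction and intent can be demonstrated with a toy "optimiser" scored on a proxy. Every detail below is invented; real alignment research confronts far subtler versions of the same failure.

```python
# We want a helpful assistant; what we can measure is "how often the
# user agreed with the answer". An optimiser over that proxy picks
# flattery, not honesty.
TRUTHS = ["no", "no", "no", "yes", "yes"]   # invented ground truth

def proxy_score(strategy):
    """Score = how many times the assistant said the pleasing 'yes'."""
    return sum(1 for truth in TRUTHS if strategy(truth) == "yes")

def honest(truth):
    return truth            # says what is true

def sycophant(truth):
    return "yes"            # says what pleases

# Pick whichever strategy maximises the measurable proxy:
best = max([honest, sycophant], key=proxy_score)
print(best.__name__, proxy_score(best))  # sycophant 5
```

We asked for "make the user happy", measured agreement, and got a flatterer: the system optimised exactly what we wrote, not what we meant.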

Why this matters for your child

AI alignment is the frontier question of this generation. Children who grow up understanding the gap between instruction and intent will be better equipped to think about governance, ethics, and accountability as AI systems become more powerful.

At the kitchen table

"If I told you to get the best grade possible in school, and you had absolutely no other values, what might you do? Cheat? Memorise without understanding? Be nasty to classmates who competed with you? I didn't say any of those were wrong — I only said 'best grade.' This is the alignment problem: being precise about what we actually want is hard."

Ready to make these real?

The activity library has hands-on exercises that bring these concepts to life — no screens required for most of them.

Browse the activity library →