The Stakes

In 2024, a deepfake robocall mimicking the voice of a presidential candidate was sent to voters. In 2023, an AI-generated image of an explosion near the Pentagon caused a brief dip in U.S. stock markets. In multiple countries, synthetic video of politicians has been used in political advertising without disclosure.

These are not hypothetical future threats. They are happening now, and the tools to create them are cheap or free and require no technical skill. The ability to detect and respond to synthetic media is a genuine life skill for anyone aged 13 and up.

The goal of this activity is not to make people distrust all media. Indiscriminate distrust is its own problem, known as the "liar's dividend," in which people dismiss real evidence as AI-generated. The goal is calibrated skepticism: appropriate doubt, combined with concrete verification skills.

Part 1: The Tells (15 minutes)

Start by examining known AI-generated images together. These are widely available from AI art generators — a Google search for “AI-generated images” will show examples. Look for these common tells:

Hands and fingers: Hands have long been among the most reliable tells in AI-generated images. AI systems have historically struggled with the correct number of fingers, realistic finger joints, and natural hand poses, and even newer models still slip on complex hand positions. Look closely at any hands in an image.

Eyes: AI-generated faces often have eyes that are slightly too symmetrical, irises that lack the subtle variation of real eyes, or reflections in the eyes that don’t match the scene’s light source.

Text: AI-generated images frequently contain text that looks real at a glance but is garbled or nonsensical on close examination. Signs, labels, shirts with writing — look at these carefully.

Edges and backgrounds: The junction between a person and the background is often where AI images fall apart. Look for: hair that blends unnaturally with the background, clothing edges that are too smooth or that merge with surroundings, and backgrounds that have the texture of reality but the logic of a dream (things that shouldn’t be there, scale that’s slightly off).

Bilateral symmetry: Human faces are slightly asymmetrical. AI-generated faces are often too symmetrical in subtle ways — earrings that are mirror images, facial features that are too perfectly balanced.

Lighting inconsistencies: The light source on the subject and the light in the background should match. In AI images, they sometimes don’t — a person lit from the left in a scene where all other shadows suggest light from the right.

Part 2: The Investigation (15 minutes)

Now apply these skills to ambiguous cases. Find 4–6 images to examine together — a mix of real photos and AI-generated images. Good sources:

  • News photographs from reputable outlets (real)
  • AI-generated images from tools like Midjourney, DALL-E, or Stable Diffusion (many are labeled)
  • Comparison examples from sites like Which Face Is Real (whichfaceisreal.com)

For each image:

  1. First reaction: AI or real?
  2. What’s your confidence level (1–10)?
  3. What tells did you notice?
  4. What makes you uncertain?

Track your accuracy. Even experienced observers can’t reliably distinguish sophisticated AI-generated images from photos. This is the point — and the threat.
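If your group wants to keep score, the four questions above can be tallied with a short script. This is an optional sketch, not part of the original activity; the record fields and the split of the 1–10 scale into low (1–5) and high (6–10) confidence bands are my own choices, meant to show whether high-confidence guesses actually do better.

```python
# Optional scorekeeping sketch for Part 2 (illustrative only).
# Each record holds a guess ("ai" or "real"), the ground truth,
# and a 1-10 confidence rating from question 2.

def score(records):
    """Return overall accuracy plus accuracy within low/high confidence bands."""
    def accuracy(rs):
        # Fraction of correct guesses; None if the band is empty.
        return sum(r["guess"] == r["truth"] for r in rs) / len(rs) if rs else None

    low = [r for r in records if r["confidence"] <= 5]
    high = [r for r in records if r["confidence"] > 5]
    return {
        "overall": accuracy(records),
        "low_confidence": accuracy(low),    # guesses rated 1-5
        "high_confidence": accuracy(high),  # guesses rated 6-10
    }

if __name__ == "__main__":
    session = [
        {"guess": "ai", "truth": "ai", "confidence": 9},
        {"guess": "real", "truth": "ai", "confidence": 3},
        {"guess": "real", "truth": "real", "confidence": 7},
        {"guess": "ai", "truth": "real", "confidence": 4},
    ]
    print(score(session))
```

A gap between the high-confidence and low-confidence scores tells you whether your confidence is calibrated; if high-confidence guesses are no more accurate than low-confidence ones, your gut feeling carries no information.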

Part 3: Video and Audio (10 minutes)

Video deepfakes are harder to detect, but certain tells persist:

Facial movement:

  • Unnatural blinking (too regular or too infrequent)
  • Facial expressions that don’t quite match emotional tone
  • Movement at the edges of the face — hairline, ears, jaw — that feels slightly off
  • Eyes that don’t reflect light naturally in video

Audio-video sync:

  • Lip movements that don’t precisely match sounds
  • Voice that doesn’t quite fit the person’s usual cadence

Background and framing:

  • Objects in the background that behave strangely
  • The “halo” effect around a person’s head where the AI has superimposed their face

What audio fakes sound like: AI voice cloning has become very accurate. The most reliable tells are: unnaturally consistent pitch, absence of the breathy variations that characterize real speech, and “perfect” pronunciation that no one actually has.

Part 4: Verification Protocol (5 minutes)

Detecting tells is useful but unreliable. The more robust skill is verification:

Before sharing any potentially alarming media:

  1. Reverse image search. Right-click the image → “Search image with Google” (or use TinEye). If it appears on many sites, check the earliest appearance. If it appears with different captions, something is wrong.
  2. Source check. Where did this come from? Who is claiming it’s real? What is their motivation?
  3. Cross-reference. Has any credible news source reported on this? If a major event is shown in a video and no news organization is covering it, be very skeptical.
  4. Check verification sites. Snopes, PolitiFact, AFP Fact Check, and AP Fact Check all investigate viral media claims.
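The four steps above amount to a pre-share checklist. As a minimal sketch, they could be encoded like this; the question wording and the `safe_to_share` helper are my own illustration, not part of the protocol as written.

```python
# Illustrative pre-share checklist (hypothetical wording; the four steps
# come from the verification protocol above).
CHECKLIST = [
    "Reverse image search: does the earliest appearance match the claim?",
    "Source check: who posted this, and what is their motivation?",
    "Cross-reference: is any credible news outlet reporting it?",
    "Verification sites: have Snopes, PolitiFact, AFP, or AP checked it?",
]

def safe_to_share(answers):
    """answers: one boolean per checklist item. Share only if every check passes."""
    return len(answers) == len(CHECKLIST) and all(answers)

if __name__ == "__main__":
    for question in CHECKLIST:
        print("[ ]", question)
```

The design choice worth noting: the checks are conjunctive. One failed check is enough to hold off on sharing, which matches how the protocol is meant to work.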

The practical rule: If content makes you feel a strong emotion — outrage, fear, vindication — that’s the moment to pause before sharing. Strong emotions impair the critical evaluation you need to do.

The Ethical Dimension

Creating deepfakes of real people without their consent is:

  • A violation of their autonomy and dignity
  • Potentially illegal depending on jurisdiction (many places have laws specifically targeting non-consensual synthetic intimate images)
  • Harmful even if “just a joke” — because the person depicted loses control of their own likeness

The question worth discussing: “What should the rules be around creating synthetic media of real people? Who should decide? What exceptions, if any, should exist for satire or fiction?”

There’s no consensus answer — but there should be a conversation.
