Case Report · Completed · 2024
AAB-CASE-2024-RV-004

Understanding How Computers Learn: AI Literacy for Elementary School Learners

A field-tested, hands-on workshop introducing AI literacy to 10-11-year-old elementary learners through programming and machine learning, including data quality, bias, and ethical reflection.

This page documents an AI literacy or AI education case for registry purposes. It is descriptive and does not imply AAB endorsement of any specific tool, provider, or intervention.
Implementation

University-school collaboration (HTW Berlin with educational foundation)

Learning context

In-school (K-12)

AI role

Evaluator

Outcome signal

Not specified
Registry Facets

Case Type
  • Research Review
Setting
  • K-12
Status
  • Completed
Focus
  • Elementary AI Literacy
  • Machine Learning Basics
  • Ethics and Bias

Implementing Organization

Organization Type

University-school collaboration (HTW Berlin with educational foundation)

Location

Berlin, Germany

Primary Facilitator Role

Researchers and educators co-designing and facilitating elementary AI workshops

Learning Context

Setting Type
  • In-school (K-12)
Session Format

Hands-on workshop in school computer lab

Duration

Approx. 2-hour workshop session

Group Size

Two 5th-grade classes (20 and 23 learners)

Devices

School computers using Scratch and Machine Learning for Kids

Constraints
  • Limited AI background among elementary teachers and few ready-to-use curricular materials.
  • Learners showed large variance in prior exposure, understanding, and confidence.
  • Workshop requires computer-lab access and facilitator support for differentiated pacing.

Learner Profile

Age Range

10-11 years (5th grade)

Prior AI Exposure Assumed

Mostly consumer exposure (e.g., Alexa, Siri, ChatGPT mentions)

Prior Programming Background Assumed

No required prior programming or computer science knowledge

Educational Intent

Primary Learning Goals
  • Understand basic concepts and terminology around AI and machine learning.
  • Describe how labeled data is used to train and test simple ML models.
  • Recognize why data quality and bias affect model outcomes.
Secondary Learning Goals
  • Differentiate between teaching computers via explicit programming and via ML training cycles.
  • Develop age-appropriate ethical reflection on AI errors and consequences.
  • Increase confidence to explore AI concepts in school contexts.
What This Was Not
  • Not a full-semester graded AI curriculum.
  • Not advanced algorithm/math instruction.
  • Not an assessment of long-term retention.

AI Tool Description

Tool Type

Beginner AI literacy tools for elementary ML and programming activities

Languages

German instructional context

AI Role
  • Evaluator
User Interaction Model
  • Scratch activity to illustrate algorithmic instruction and sequencing.
  • Machine Learning for Kids activity to train and test image classifiers.
  • Lab-log documentation of predictions and misclassifications.
  • Wrap-up discussion connecting model errors to data quality and bias.
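The label → train → test → check cycle behind the Machine Learning for Kids activity can be sketched in a few lines. The nearest-centroid rule and the made-up feature numbers below are illustrative assumptions for discussion, not how the tool actually classifies images internally:

```python
# Illustrative sketch of the label -> train -> test -> check cycle.
# Features and the nearest-centroid rule are hypothetical stand-ins
# for what an image-classification tool does with labeled examples.

def train(examples):
    """Compute one centroid (average feature vector) per label."""
    grouped = {}
    for label, features in examples:
        grouped.setdefault(label, []).append(features)
    return {
        label: tuple(sum(col) / len(col) for col in zip(*vectors))
        for label, vectors in grouped.items()
    }

def predict(centroids, features):
    """Assign the label whose centroid is closest to the features."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: distance(centroids[label], features))

# Labeled training data: (label, (wing_span, body_length)) -- made-up numbers.
training = [
    ("butterfly", (5.0, 2.0)),
    ("butterfly", (6.0, 2.5)),
    ("caterpillar", (0.0, 4.0)),
    ("caterpillar", (0.0, 5.0)),
]

model = train(training)
print(predict(model, (5.5, 2.2)))   # -> butterfly
print(predict(model, (0.0, 4.5)))   # -> caterpillar
```

Checking whether a held-out example lands on the expected label mirrors the "testing and checking correctness" step the learners documented in their lab logs.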
Safeguards
  • Age-appropriate framing and no high-risk personal data collection.
  • Parental consent and child-sensitive research procedures (no audio/video recording).
  • Guided discussion of ethical implications such as bias and safety in AI systems.

Activity Design

Activity Flow
  • Warm-up on where computers/AI appear in daily life.
  • Algorithmic teaching analogy (e.g., sandwich instructions) and Scratch practice.
  • ML classification task with butterfly/caterpillar image dataset.
  • Reflection on misclassification causes, data quality, and ethical implications.
Human vs. AI Responsibilities
  • Learners and instructors select classes, labels, and training examples.
  • Model performs predictions that learners evaluate and debug.
  • Humans interpret errors and decide how to improve data quality.
Scaffolding Strategies
  • Constructive alignment between learning goals, activities, and reflection.
  • Short instructor inputs plus group/plenary discussion and hands-on experimentation.
  • Differentiated materials for fast learners and learners needing extra support.

Observed Challenges

Educators Reported
  • Technical terms (model, data, training) needed strong age-appropriate simplification.
  • Even age-homogeneous groups showed high variance in AI understanding.
  • Some learners required substantial prompting in reflective discussion phases.

Design Adaptations

Adaptations
  • Iterative workshop refinement after field test and stakeholder feedback.
  • Standing discussion segments to reduce screen distraction and improve attention.
  • Prepared template programs and support paths for differentiated classroom pacing.
  • Misclassification-by-design examples to make bias/data issues concrete.
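A misclassification-by-design example of the kind listed above can be made concrete with a deliberately biased toy dataset. The 1-nearest-neighbour rule and all feature values are hypothetical, chosen only to show how a background cue baked into the training data can override the relevant feature:

```python
# A minimal "misclassification by design" sketch: the training set is
# deliberately biased so the model latches onto background brightness
# instead of the animal itself. All numbers are hypothetical.

def nearest_label(training, features):
    """1-nearest-neighbour prediction over (label, features) pairs."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: distance(ex[1], features))[0]

# Features: (background_brightness 0-10, has_wings 0/1). Every butterfly
# photo is bright, every caterpillar photo is dark -- the built-in bias.
biased_training = [
    ("butterfly", (9, 1)),
    ("butterfly", (8, 1)),
    ("caterpillar", (1, 0)),
    ("caterpillar", (2, 0)),
]

# A caterpillar photographed on a bright leaf: no wings, but bright.
print(nearest_label(biased_training, (9, 0)))  # -> butterfly (wrong!)
```

Walking through why the bright-background caterpillar is labeled "butterfly" gives learners a concrete handle on the link between training-data composition and model errors.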

Reported Outcomes

Engagement
  • Teachers reported strong participation and positive engagement during the workshop.
  • Learners responded positively to hands-on experimentation and immediate feedback.
Learning Signals
  • Learners articulated core ML cycle elements: labeling, training, testing, and checking correctness.
  • Students identified plausible misclassification causes such as small/biased data and ambiguous images.
  • Children demonstrated age-appropriate critical awareness of data quality and fairness implications.
Educators' Reflection

The concept is implementable by teachers without a formal CS background when materials and facilitation support are provided.

Ethical & Privacy Considerations

Privacy
  • Ethics content included data bias, data quality, and consequences of classification errors.
  • Research with minors used consent-based, low-intrusion methods and avoided audio/video recording.
  • Discussion activities connected technical outcomes to responsibility and safe real-world AI use.

Evidence Type

Evidence
  • Activity documentation
  • Practitioner observation
  • Post-assessment

Relevance to Research

Potential Research Use
  • Provides a practical elementary-level AI literacy design pattern with replicable workshop phases.
  • Contributes evidence that younger learners can engage with basic ML and ethics concepts.
  • Supports future work on equity-oriented and teacher-deliverable early AI education.
Relevant Research Domains
  • Elementary AI literacy pedagogy
  • Hands-on machine learning education
  • Child-centered ethics and bias education
  • Teacher-support models for early AI curriculum

Case Status

Case Status
  • Completed

AAB Classification Tags

Age

Elementary (10-11 years)

Setting

In-school computer lab

AI Function

Introductory machine learning classification and reflection

Pedagogy

Hands-on workshop with discussion and guided experimentation

Risk Level

Low to Medium

Data Sensitivity

Low (classroom activity datasets and non-sensitive learner artifacts)

Registry Metadata

Case ID
AAB-CASE-2024-RV-004
Publication Status
Completed
Tags
  • case
  • Berlin, Germany
  • In-school (K-12)