Young children's understanding of AI
Group interviews (n = 18) with 11–12-year-olds exploring AI as both a technology and a socio-cultural tool, their engagement with AI ethics, and the design recommendations that follow.
Implementation: University faculty of education
Learning context: In-school (K–12)
AI role: Tutor
Outcome signal: Student voice
Registry Facets
- 6-8
- AI literacy
- Ethics
- Qualitative research
- Students
- Researchers
- Ethics and society
- Research-informed guidance
- Interviews
- Student voice
- Ethics engagement
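The facets above could be stored as a simple structured record. The sketch below is a hypothetical serialization; the field names (`grade_band`, `strand`, etc.) are illustrative assumptions, not the registry's actual schema.

```python
# Hypothetical sketch of this registry entry's facets as a structured
# record; field names are illustrative, not the registry's real schema.
import json

entry = {
    "title": "Young children's understanding of AI",
    "grade_band": "6-8",
    "topics": ["AI literacy", "Ethics"],
    "method": "Qualitative research",
    "participants": ["Students", "Researchers"],
    "strand": "Ethics and society",
    "output": "Research-informed guidance",
    "data_sources": ["Interviews"],
    "signals": ["Student voice", "Ethics engagement"],
}

# Serialize for storage or exchange.
print(json.dumps(entry, indent=2))
```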
Implementing Organization
University faculty of education
Groningen, Netherlands
Researchers conducting group interviews and thematic analysis
Learning Context
- In-school (K–12)
- Informal learning
- Semi-structured group interviews
- Qualitative case study phase as reported
- 18 children in group interview format
- Children discuss everyday AI apps and services they know
- Small sample and single national context
- Group interviews may yield peer-normed responses
- Rapid change in apps children reference
- Not linked to a specific curriculum intervention outcome
Learner Profile
11–12 years
High everyday exposure to AI-powered services
Prior formal AI instruction not required for participation
Educational Intent
- Document how tweens conceptualize AI technically and socially
- Surface ethical curiosity tied to familiar technologies
- Inform personally relevant, critical AI literacy curricula
- Shift research gaze from outcomes-only to sense-making and engagement
- Center children’s rights to be heard (UNCRC framing in paper)
- Not a randomized curriculum trial
- Not large-scale survey
- Not technical skills assessment
AI Tool Description
Everyday AI services (voice assistants, recommender systems, etc.) as discussion objects
- Tutor
Dutch educational context
- Children narrate experience-based mental models
- Ethical reasoning emerges around known applications
- Age-appropriate facilitation of discussions about unsettling futures (job loss, privacy) without alarmism
- Respect child assent and confidentiality in groups
- Curriculum should connect critique to agency and design, not fear-only
Activity Design
- Design interview protocol on AI understanding and use
- Conduct group interviews
- Apply deductive and inductive coding
- Synthesize themes and derive material design principles
- Children interpret technologies; adults design curricula responding to their sense-making
- Start from lived experience before formal definitions
- Foreground critical literacy alongside technical labels
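The combined deductive and inductive coding step above can be sketched as a minimal tallying pass. This is a toy illustration under stated assumptions: the codebook, keyword cues, and snippets below are hypothetical, and real analysis of this kind is done by human coders (often in QDA software), not keyword matching.

```python
# Minimal sketch of a combined deductive/inductive coding pass.
# Codebook, cues, and snippets are hypothetical illustrations,
# not data or codes from the study.
from collections import Counter

# Deductive codebook: theory-driven codes mapped to keyword cues.
CODEBOOK = {
    "technical_understanding": ["algorithm", "data", "trained"],
    "ethical_concern": ["privacy", "fair", "jobs"],
    "everyday_use": ["voice assistant", "recommend", "app"],
}

def code_snippet(snippet: str) -> list[str]:
    """Return deductive codes whose cues appear in the snippet.

    Snippets matching no deductive code are flagged for a second,
    inductive (open-coding) pass.
    """
    text = snippet.lower()
    codes = [code for code, cues in CODEBOOK.items()
             if any(cue in text for cue in cues)]
    return codes or ["INDUCTIVE_REVIEW"]

def tally(snippets: list[str]) -> Counter:
    """Count code occurrences across all snippets."""
    counts = Counter()
    for s in snippets:
        counts.update(code_snippet(s))
    return counts

snippets = [
    "The app knows what I like because it collects my data.",
    "I worry it could take people's jobs one day.",
    "It just feels like magic to me.",
]
print(tally(snippets))
```

The `INDUCTIVE_REVIEW` bucket mirrors the design choice in the study's analysis: start from a theory-driven frame, but route anything it cannot account for into open coding so emergent child voice is not lost.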
Observed Challenges
- Gap between global AI literacy materials and evidence on child sense-making
- Need curricula that honor socio-cultural and ethical dimensions early
Design Adaptations
- Combined deductive/inductive coding to bridge theory and emergent child voice
Reported Outcomes
- Children show high interest in ethical implications of familiar AI
- AI seen as supportive tool culturally; technical views grounded in personal experience
- Recommends engaging, personally relevant AI materials with critical literacy at the forefront
Ethical & Privacy Considerations
- Group confidentiality and sensitive topic facilitation
- Avoid deficit narratives about children’s digital lives
- Inclusive facilitation across varying home access
- Transparent use of any recordings/transcripts
Evidence Type
- Activity documentation
- Practitioner observation
Relevance to Research
- Design-based research testing curriculum units built from these themes
- Cross-cultural replication with adolescents
- Child-computer interaction
- AI literacy curriculum
- Ethics education
Case Status
- Completed
AAB Classification Tags
- 11–12
- Netherlands
- Sense-making / ethics
- Interview-informed design
- Low
- Medium (child discourse)
