Artificial intelligence literacy education in primary schools: a review
Systematic review of 25 empirical studies on AI literacy in primary schools, mapping definitions, theoretical frameworks, pedagogies, tools, assessment methods, outcomes, and implementation challenges.
Implementation
Academic systematic review (primary education focus)
Learning context
In-school (K-12)
AI role
Evaluator
Outcome signal
Not specified
Registry Facets
- Research Review
- K-12
- Completed
- Primary AI Literacy
- Pedagogy
- Assessment and Outcomes
Implementing Organization
Academic systematic review (primary education focus)
International (multi-country empirical literature)
Education researchers synthesizing AI literacy evidence for primary schools
Learning Context
- In-school (K-12)
- Informal learning
Systematic review using PRISMA-guided screening and coding
Studies from 2019 to March 2024; review published in 2025
25 empirical studies (from 44 initially retrieved records)
Not single-site; tools varied across studies (intelligent agents, software, unplugged activities)
- Limited number of primary-focused empirical AI literacy studies.
- Heterogeneity in definitions, curricula, and assessment methods.
- Frequent short-term interventions and context-specific implementations limit broad generalization.
Learner Profile
Primary school learners (young students in early and upper primary)
Mixed; many learners encounter consumer AI tools in daily life
Often limited or none, depending on study design
Educational Intent
- Clarify how AI literacy is conceptualized in primary school contexts.
- Identify effective pedagogical and assessment approaches for young learners.
- Synthesize outcomes and challenges to guide future curriculum and policy design.
- Map theoretical frameworks underpinning existing interventions.
- Compare tool types and their roles in engagement and learning.
- Surface equity, readiness, and teacher-capacity considerations.
- Not a single intervention trial in one school.
- Not a meta-analysis with pooled effect sizes.
- Not a definitive global standard curriculum specification.
AI Tool Description
Synthesis of AI literacy learning tools and pedagogical configurations
English-language studies from Scopus and Web of Science
- Database retrieval with predefined AI-literacy search terms.
- PRISMA screening with inclusion/exclusion criteria for primary contexts.
- Coding across definition, theory, pedagogy, tools, assessment, outcomes, and challenges.
- Cross-study thematic synthesis into implementation guidance.
- Transparent eligibility criteria and coding scheme.
- Inter-rater discussion process to improve coding agreement.
- Explicit reporting of limitations and future research needs.
Activity Design
- Identify relevant literature and apply screening criteria.
- Code studies against seven research questions (RQs), covering definitions through recommendations.
- Summarize frequencies of frameworks, pedagogies, tools, and assessments.
- Interpret trends, implementation barriers, and policy-level implications.
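The screening-and-coding workflow above can be sketched as a minimal Python pipeline. This is an illustrative sketch only: the record fields (`meets_criteria`, `pedagogy`, `tool_type`, `assessment`) and example code values are assumptions for demonstration, not the review's actual coding scheme.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical record for one retrieved study; field names are illustrative.
@dataclass
class StudyRecord:
    study_id: str
    meets_criteria: bool   # empirical, primary-school context, English-language
    pedagogy: str          # e.g. "constructionist", "project-based"
    tool_type: str         # e.g. "intelligent agent", "unplugged"
    assessment: str        # e.g. "post-test", "mixed-method"

def screen(records):
    """PRISMA-style screening: keep only records meeting inclusion criteria."""
    return [r for r in records if r.meets_criteria]

def summarize(included, dimension):
    """Tally how often each code appears along one coding dimension."""
    return Counter(getattr(r, dimension) for r in included)

# Toy corpus standing in for the retrieved records.
corpus = [
    StudyRecord("S1", True, "constructionist", "intelligent agent", "post-test"),
    StudyRecord("S2", True, "project-based", "unplugged", "mixed-method"),
    StudyRecord("S3", False, "lecture", "software", "survey"),
]

included = screen(corpus)
print(len(included))                    # 2 studies pass screening
print(summarize(included, "pedagogy"))  # frequency of each pedagogy code
```

In a real review the same pattern repeats per coding dimension (frameworks, pedagogies, tools, assessments), producing the frequency summaries reported in step three above.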
- Human researchers conducted screening, coding, and interpretation decisions.
- AI appears as learning content or a tool within the source studies, not as an autonomous reviewer.
- Use of PRISMA structure for rigorous selection workflow.
- Evidence triangulation via mixed-method findings where available.
- Alignment of outcomes with conceptual and theoretical framing.
Observed Challenges
- Insufficient systematic primary-level AI curricula and validated measurements.
- Tool/interface limitations and computational constraints in school settings.
- Difficulty teaching abstract concepts, such as data bias, in an age-appropriate way.
- Teacher professional development needs and variable student readiness.
Design Adaptations
- Frequent use of constructivist and constructionist approaches.
- Emphasis on project-based, programming, and human-agent interaction strategies.
- Use of intelligent agents and low-barrier tools to support conceptual entry points.
- Growing mixed-method evaluation to capture both performance and perception outcomes.
Reported Outcomes
- Studies generally report positive motivation, engagement, and satisfaction in AI learning activities.
- Game-based and constructivist designs often increased participation and persistence.
- Academic gains include understanding core AI/ML concepts and basic data-bias awareness.
- Affective and behavioral gains include improved self-efficacy and willingness to continue AI learning.
- Soft-skill outcomes include problem solving, computational thinking, and creative expression.
Primary AI literacy should balance technical skill-building with ethics, data literacy, and inclusive pedagogies for diverse learners.
Ethical & Privacy Considerations
- Review emphasizes AI ethics as a core competency, not a secondary add-on.
- Critical data literacy and bias awareness are necessary for responsible AI use by young learners.
- Primary curricula should address social impact, fairness, privacy, and AI-for-social-good framing.
Evidence Type
- Practitioner observation
- Activity documentation
- Post-assessment
Relevance to Research
- Provides a primary-specific evidence map for AI literacy curriculum and assessment design.
- Identifies research gaps in longitudinal evidence, inclusivity, and validated measurement.
- Supports development of interdisciplinary and age-appropriate AI competency frameworks.
- Primary AI literacy curriculum design
- AI pedagogy and assessment in K-12
- Data literacy and AI ethics education
- Teacher readiness and professional development
Case Status
- Completed
AAB Classification Tags
Primary school learners
Formal primary school context (plus some mixed/informal studies)
AI concept learning, human-AI interaction, and data/ethics understanding
Constructivist/constructionist, project-based, programming, and interactive methods
Medium
Medium (student-learning context with data literacy and bias-related tasks)
