Artificial Intelligence teaching and learning in K-12 from 2019 to 2022: A systematic literature review
An empirical-outcomes-focused systematic literature review (SLR): 28 included studies from 8,175 screened records; Raspberry Pi Foundation / Cambridge affiliation.
Implementation
Computing education research centre and foundation
Learning context
In-school (K–12)
AI role
Tutor
Outcome signal
Learning outcomes
Registry Facets
- K-12
- AI education
- Machine learning (ML)
- Foundational AI concepts
- Systematic review
- Researchers
- Teachers
- Classroom-level
- Learning outcomes
- Pedagogy
Implementing Organization
Computing education research centre and foundation
Cambridge, UK
Authors who screened, extracted, and synthesized empirical AI education studies
Learning Context
- In-school (K–12)
- Informal learning
- PRISMA-style systematic search across five databases
- Publication window 2019–2022
- 28 included empirical studies
- Diverse tools across primary studies (robots, unplugged activities, IDEs, etc.)
- Rapid post-2022 expansion not covered
- Heterogeneity limits meta-analysis
- Publication bias toward positive results possible
- Outcome measures often inconsistent
Learner Profile
K-12 learners across included studies
Varies widely
Varies by intervention level
Educational Intent
- Inventory empirical evidence on K-12 AI learning outcomes
- Map pedagogical and theoretical coverage
- Highlight research gaps and challenges
- Inform policy and curriculum with evidence-based patterns
- Encourage standardized constructs for AI learning measurement
- Not a meta-analysis of effect sizes
- Not a review of teaching with AI only (the focus is teaching about AI/ML)
- Not a collection of longitudinal primary data
AI Tool Description
Heterogeneous across corpus (robots, visual ML tools, block languages, etc.)
- Tutor
- Co-creator
Search limited to English-language publications; underlying study contexts may be multilingual
- Varies by study design in corpus
- Primary studies should report ethics and data practices clearly
- Equity in which learners receive rigorously evaluated AI interventions
- Avoid overstating evidence from small pilots
Activity Design
- Define inclusion criteria for empirical outcome studies
- Screen 8,175 records down to 28 included papers
- Content-analyze pedagogy, theory, topics, outcomes
- Synthesize limitations and future research agenda
- Reviewers judged study quality manually; AI-assisted screening tools are optional for future updates of the method
- Learner-centred and context-aware designs recommended from patterns
Observed Challenges
- Limited empirical base relative to hype
- Need consistent outcome instruments
- More work needed on transfer and retention beyond immediate post-tests
Design Adaptations
- Tight focus on empirical learning outcomes distinguishes from descriptive program reviews
Reported Outcomes
- Most reviewed work reports positive cognitive and/or affective signals
- Evidence supports feasibility but not uniformity of rigor
- Calls for more learner-centred, context-aware pedagogy and better measurement constructs
Ethical & Privacy Considerations
- Ethical reporting of child studies in underlying corpus
- Data privacy in interventions using student models
- Transparency about selective reporting of successful pilots (publication bias)
Evidence Type
- Activity documentation
- Practitioner observation
Relevance to Research
- Registered systematic reviews updating annually
- Shared outcome banks for AI education RCTs
- K-12 AI education
- Systematic review methodology
- Learning sciences
Case Status
- Completed
AAB Classification Tags
- K-12
- International corpus
- Teach AI / ML
- Varied (reviewed)
- Low (synthesis)
- N/A
