Case Report · Published empirical study · 2024
AAB-CASE-2025-RV-053
Teachers’ and students’ perceptions of AI-generated concept explanations: Implications for integrating generative AI in computer science education
CAEAI; Korea National University of Education; elementary CS.
This page documents an AI literacy or AI education case for registry purposes. It is descriptive and does not imply AAB endorsement of any specific tool, provider, or intervention.
Implementation
National university of education
Learning context
In-school (K–12)
AI role
Tutor
Outcome signal
Perceptions
Registry Facets
Education Level
- K-5
Subject Area
- Computer science
- AI literacy
Use Case Type
- Comparative study
Stakeholder Group
- Students
- Teachers
AI Capability Type
- LLM/Chat
- Generative AI
Implementation Model
- Classroom-level
Evidence Type
- Mixed methods
Outcomes Domain
- Perceptions
- Knowledge
Implementing Organization
Organization Type
National university of education
Location
Republic of Korea
Primary Facilitator Role
Researchers
Learning Context
Setting Type
- In-school (K–12)
Session Format
Comparative evaluation of explanations
Duration
Study sessions
Group Size
11 teachers; 70 sixth-grade students
Materials
ChatGPT-generated vs teacher-created explanations
Constraints
- Single country
- One grade level
Learner Profile
Age Range
Grade 6
Prior AI Exposure Assumed
Growing familiarity with generative AI
Prior Programming Background Assumed
Intro CS concepts
Educational Intent
Primary Learning Goals
- Compare helpfulness of GAI vs human explanations
- Measure ability to identify source
Secondary Learning Goals
- Derive integration strategies and AI literacy needs
What This Was Not
- Not long-term learning gains RCT
AI Tool Description
Tool Type
ChatGPT for CS concept explanations
AI Role
- Tutor
- Co-creator
Languages
Korean
User Interaction Model
- Side-by-side evaluation of explanation quality and origin
Safeguards
- Explicit AI literacy on source discernment
- Pedagogically tuned prompts
Activity Design
Activity Flow
- Present explanations
- Rate helpfulness
- Identify AI vs teacher
Human Vs AI Responsibilities
- Teachers judge pedagogy; students weigh clarity and relatability
Scaffolding Strategies
- Teach criteria for evaluating opaque model text
Observed Challenges
Educators Reported
- Iteration was the concept students found hardest to attribute correctly
- Teachers' and students' evaluation criteria differed
Design Adaptations
Adaptations
- Design GAI explanations aligned to learner needs
Reported Outcomes
Learning Signals
- Statistically significant chi-square differences across concepts
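The chi-square learning signal above can be sketched as follows. This is a hypothetical illustration only: all counts are invented, not taken from the study, and the contingency table (concept by source-identification accuracy) is an assumed analysis shape.

```python
def chi_square(observed):
    """Pearson chi-square statistic for a 2D contingency table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat

# Rows: CS concepts; columns: [identified source correctly, misidentified].
# All counts are invented for illustration.
table = [
    [52, 18],  # sequencing (hypothetical)
    [45, 25],  # conditionals (hypothetical)
    [30, 40],  # iteration (hypothetical; hardest to attribute)
]
stat = chi_square(table)
df = (len(table) - 1) * (len(table[0]) - 1)  # degrees of freedom
print(round(stat, 2), df)
```

A statistic this large at 2 degrees of freedom would indicate that attribution accuracy varies by concept, which is the pattern the case reports.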
Educators' Reflection
Calls for explicit literacy on recognizing AI-generated CS help.
Ethical & Privacy Considerations
Privacy
- Child data
- Transparency with families
Evidence Type
Evidence
- Post assessment
- Activity documentation
- Practitioner observation
Relevance to Research
Potential Research Use
- Larger experiments measuring learning outcomes, not only perceptions
- Longitudinal integration studies
Relevant Research Domains
- Elementary CS
- Generative AI
- Explainability
Case Status
Case Status
- Completed
AAB Classification Tags
Age
Grade 6
Setting
Korea
AI Function
Concept explanations
Pedagogy
Comparative perception study
Risk Level
Medium
Data Sensitivity
Medium
Registry Metadata
Case ID
AAB-CASE-2025-RV-053
Publication Status
Published empirical study
Tags
case · K-5 · Republic of Korea · Classroom-level · LLM/Chat · Computer science · AI literacy · Comparative study
