Case Report · Published empirical / methods study · Sep. 17, 2024
AAB-CASE-2025-RV-031

Analyzing K-12 AI education: A large language model study of classroom instruction on learning theories, pedagogy, tools, and AI literacy

An LLM framework analyzes 98 Chinese AI lesson videos, is validated against manual coding, and maps pedagogy–literacy links.

This page documents an AI literacy or AI education case for registry purposes. It is descriptive and does not imply AAB endorsement of any specific tool, provider, or intervention.
01

Implementation

Faculty of Artificial Intelligence in Education at a normal (teacher-training) university

02

Learning context

In-school (K–12)

03

AI role

Evaluator

04

Outcome signal

Pedagogy patterns

Registry Facets

0
Education Level
  • K-12
Subject Area
  • AI education
  • Learning analytics
Use Case Type
  • Classroom video analysis
  • LLM research method
Stakeholder Group
  • Researchers
  • Teachers
AI Capability Type
  • LLM/Chat
  • ML
Implementation Model
  • Classroom-level
Evidence Type
  • Learning analytics
  • Comparative validation
Outcomes Domain
  • Pedagogy patterns
  • AI literacy depth

Implementing Organization

1
Organization Type

Faculty of Artificial Intelligence in Education at a normal (teacher-training) university

Location

Wuhan, China

Primary Facilitator Role

Researchers building the LLM analysis pipeline and conducting the correlation study

Learning Context

2
Setting Type
  • In-school (K–12)
Session Format

Secondary analysis of recorded AI instruction videos

Duration

Corpus of 98 classroom videos (urban central China)

Group Size

98 lessons / classrooms as units of analysis

Devices

Varied AI teaching tools as captured on video

Constraints
  • Geographic concentration in central Chinese cities
  • Video may miss off-camera activity
  • LLM coding drift requires ongoing validation
  • Rarity of ethics coverage is a descriptive finding, not causal proof that ethics was absent at every moment

Learner Profile

3
Age Range

K–12, as represented in the video corpus

Prior AI Exposure Assumed

Varies by lesson level

Prior Programming Background Assumed

Varies by lesson design

Educational Intent

4
Primary Learning Goals
  • Quantify pedagogy, theory, tools, and literacy levels in real AI lessons
  • Validate LLM coding against human analysts
  • Identify instructional profiles (conceptual/heuristic/experimental)
Secondary Learning Goals
  • Test correlations between pedagogy mixes and higher-order literacy
  • Flag severe underrepresentation of explicit ethics segments
What This Was Not
  • Not a student RCT
  • Not exhaustive national census of all AI classes
  • Not a fine-grained discourse analysis beyond the LLM coding layer

AI Tool Description

5
Tool Type

LLM as research instrument for coding classroom video transcripts/segments

AI Role
  • Evaluator
  • Automation tool
Languages

Chinese instructional discourse

User Interaction Model
  • Automated tagging of instructional events and literacy level indicators
  • Human-in-the-loop validation for agreement metrics
Safeguards
  • Protect identifiable students/teachers in recordings per ethics approval
  • Guard against over-trust in LLM labels—maintain audit samples
  • Transparent prompt and rubric documentation for reproducibility
  • Avoid high-stakes teacher evaluation from automated scores alone
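The interaction model above—automated tagging with human-in-the-loop audit samples—could be sketched as below. This is a minimal illustration, not the study's pipeline: `call_llm` is a stub standing in for a real model call, and the rubric labels are hypothetical placeholders borrowed from the instructional profiles named elsewhere in this case.

```python
import random

# Hypothetical rubric labels for illustration; the study's actual coding
# scheme (theories, pedagogies, tools, literacy levels) is richer.
RUBRIC = ["conceptual", "heuristic", "experimental"]

def call_llm(text: str) -> str:
    """Stub standing in for a real LLM call: naive keyword match."""
    lowered = text.lower()
    for label in RUBRIC:
        if label in lowered:
            return label
    return RUBRIC[0]  # fall back to a default label

def tag_segment(transcript: str, audit_rate: float = 0.1,
                rng=random.Random(0)):
    """Tag one transcript segment and flag a fraction for human audit."""
    # In a real pipeline the transcript would be wrapped in a rubric prompt.
    label = call_llm(transcript)
    needs_audit = rng.random() < audit_rate  # human-in-the-loop sample
    return {"label": label, "needs_audit": needs_audit}

print(tag_segment("Students run an experimental image-classifier activity."))
```

Routing a random fraction of segments to human coders is one way to operationalize the "audit samples" safeguard against over-trust in LLM labels.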

Activity Design

6
Activity Flow
  • Curate 98 AI education videos
  • Define coding scheme for theories, pedagogies, tools, literacy
  • Run LLM analysis and benchmark to manual coding
  • Compute distributions and correlations (e.g., PBL+collaboration vs evaluate/create)
Human Vs AI Responsibilities
  • Researchers validate and interpret LLM outputs; LLM accelerates large-scale annotation
Scaffolding Strategies
  • Use findings to coach teachers toward combinations supporting advanced literacy
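Benchmarking the LLM output against manual coding (step three of the flow above) amounts to computing agreement between two label sequences. A minimal sketch with percent agreement and Cohen's kappa follows; the label values and sequences are illustrative, not the study's data.

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of segments where LLM and human labels match."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two coders."""
    n = len(a)
    po = percent_agreement(a, b)                   # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)  # agreement expected by chance
    return (po - pe) / (1 - pe)

# Illustrative codes for six segments, not the study's data:
human = ["PBL", "lecture", "PBL", "collab", "lecture", "PBL"]
llm   = ["PBL", "lecture", "PBL", "collab", "PBL",     "PBL"]
print(percent_agreement(human, llm))  # 5/6 ≈ 0.833
```

Percent agreement is the figure most directly comparable to the ">90% consistency" reported for this case; kappa additionally discounts matches expected by chance.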

Observed Challenges

7
Educators Reported
  • Low prevalence of explicit ethics instruction segments
  • Higher-order literacy tasks relatively uncommon
  • Scaling manual coding impractical without LLM assistance

Design Adaptations

8
Adaptations
  • Novel LLM-assisted framework for large-scale classroom AI-education analytics

Reported Outcomes

9
Engagement
  • Reported checks show >90% consistency between LLM and manual analysis, demonstrating feasibility at scale
Learning Signals
  • Pedagogy composition correlates with advanced literacy indicators
  • Most lessons skew conceptual; ethics segments sparse (5.1%)
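The correlation between pedagogy composition and advanced literacy indicators could be computed per lesson with a plain Pearson coefficient, as sketched below. The per-lesson values are invented for illustration; they are not the study's measurements.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length value lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative per-lesson shares, not the study's data:
pbl_collab_share = [0.1, 0.2, 0.4, 0.5, 0.7, 0.8]    # PBL + collaborative time
evaluate_create  = [0.05, 0.1, 0.3, 0.35, 0.6, 0.7]  # evaluate/create-level tasks
print(round(pearson_r(pbl_collab_share, evaluate_create), 3))
```

With real data, each lesson would contribute one point per variable (e.g., share of PBL-plus-collaboration time versus share of evaluate/create-level tasks), and a positive r would correspond to the pattern this case reports.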
Educators Reflection

Findings support combining PBL with collaborative methods to elevate evaluate/create AI competencies, and call for more instructional time on ethics.

Ethical & Privacy Considerations

10
Privacy
  • Video research ethics and consent
  • Potential misuse of automated classroom ratings for accountability
  • Bias in LLM toward certain instructional styles/languages
  • Data security for large video corpora

Evidence Type

11
Evidence
  • Learning analytics
  • Activity documentation
  • Practitioner observation

Relevance to Research

12
Potential Research Use
  • Expand corpus multi-region; track ethics instruction over years
  • Teacher-facing dashboards from validated coding
Relevant Research Domains
  • AI classroom research
  • Computational education research methods
  • AI literacy assessment in vivo

Case Status

13
Case Status
  • Completed

AAB Classification Tags

14
Age

K-12 (video corpus)

Setting

Urban China AI lessons

AI Function

Instruction analysis + pedagogy mapping

Pedagogy

LLM-assisted coding

Risk Level

Medium (method misuse)

Data Sensitivity

High (video)

Registry Metadata

15
Case ID
AAB-CASE-2025-RV-031
Publication Status
Published empirical / methods study
Tags
case · K-12 · Wuhan, China · Classroom-level · LLM/Chat · AI education · Learning analytics · Classroom video analysis · LLM research method