Case Report · Completed · 2026
AAB-CASE-2026-RV-007

Learning to Use AI for Learning: Teaching Responsible Use of AI Chatbots to K-12 Students Through an AI Literacy Module

Classroom-based study of a web module for prompting literacy in secondary education, combining scenario-based deliberate practice, AI auto-grading, and iterative assessment refinement across 11 classrooms.

This page documents an AI literacy or AI education case for registry purposes. It is descriptive and does not imply AAB endorsement of any specific tool, provider, or intervention.

Implementation

University-led AI literacy intervention research


Learning context

In-school (K-12)


AI role

Tutor


Outcome signal

Not specified

Registry Facets

Case Type
  • Research Review
Setting
  • K-12
Status
  • Completed
Focus
  • Prompting Literacy
  • Responsible AI Use
  • Assessment Design

Implementing Organization

Organization Type

University-led AI literacy intervention research

Location

Secondary education classrooms (East Asia deployments with iterative follow-up)

Primary Facilitator Role

Researchers and teachers deploying and refining an AI prompting literacy module

Learning Context

Setting Type
  • In-school (K-12)
Session Format

Scenario-based module with direct AI interaction and immediate feedback

Duration

Two iterative studies: Study 1 across 6 classrooms; Study 2 as a follow-up with a revised assessment

Group Size

111 valid student records in Study 1; module deployed across 11 authentic classrooms

Devices

Web-based platform using an LLM chatbot and an AI auto-grader (GPT-4o)

Constraints
  • Prompting literacy interventions for K-12 remain limited despite increasing AI use.
  • Assessment design showed early ceiling effects in multiple-choice format.
  • Variability in students’ computer skills and topic interest influenced practice quality.

Learner Profile

Age Range

Secondary school learners (middle and high school)

Prior AI Exposure Assumed

Many learners had baseline familiarity with AI chatbots but varied usage depth

Prior Programming Background Assumed

Not required; focus was natural-language prompting competence

Educational Intent

Primary Learning Goals
  • Develop prompting literacy for responsible and effective AI use in learning.
  • Teach students when and how AI can support studying without replacing thinking.
  • Improve student ability to craft context-rich prompts for educational support.
Secondary Learning Goals
  • Build confidence in using AI as a learning assistant.
  • Reinforce ethical use and awareness of AI limitations.
  • Iterate and validate assessment formats for prompting literacy.
What This Was Not
  • Not a one-time generic AI awareness lecture.
  • Not unrestricted AI answer-copying support for assignments.
  • Not a final large-scale standardized exam validation study.

AI Tool Description

Tool Type

LLM-based prompting literacy learning and assessment module

Languages

Instruction delivered in the classroom's language of instruction; LLM interaction conducted via text

AI Role
  • Tutor
  • Evaluator
User Interaction Model
  • Students complete scenario-based prompting tasks in biology, geography, and math contexts.
  • AI chatbot responds to student prompts in realistic learning situations.
  • AI auto-grader scores prompts against rubric dimensions and returns detailed feedback.
  • Students iterate prompt writing using immediate elaborated feedback.
Safeguards
  • Instruction explicitly includes responsible-use framing and non-cheating guidance.
  • Teachers were informed that the auto-grader may make errors and requires monitoring.
  • Rubric criteria discourage direct-answer seeking in designated scenarios.
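The rubric-based scoring loop described above can be sketched as follows. This is a minimal illustration only: the dimension names, the 0-2 scoring scale, and the pass threshold are invented for the example and are not the study's actual rubric.

```python
from dataclasses import dataclass

# Hypothetical rubric dimensions; the study's actual rubric varies per scenario.
@dataclass
class DimensionScore:
    name: str
    score: int      # 0 = missing, 1 = partial, 2 = fully met
    feedback: str   # elaborated feedback returned to the student

PASS_THRESHOLD = 0.6  # assumed fraction of maximum points needed to pass

def grade_prompt(dimensions: list[DimensionScore]) -> dict:
    """Aggregate per-dimension scores into a pass/fail verdict with feedback."""
    max_points = 2 * len(dimensions)
    total = sum(d.score for d in dimensions)
    return {
        "total": total,
        "max": max_points,
        "passed": total / max_points >= PASS_THRESHOLD,
        # Only dimensions that fell short produce improvement feedback.
        "feedback": [f"{d.name}: {d.feedback}" for d in dimensions if d.score < 2],
    }

result = grade_prompt([
    DimensionScore("context", 2, "Background information is present."),
    DimensionScore("purpose", 1, "State the learning goal more explicitly."),
    DimensionScore("no-direct-answer", 2, "Prompt asks for guidance, not answers."),
])
```

In the deployed module the per-dimension scores came from the GPT-4o auto-grader rather than hand-coded values; the aggregation-and-feedback step is what this sketch makes concrete.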

Activity Design

Activity Flow
  • Pre-survey and pre-test on AI use and prompting literacy.
  • Scenario practice: write prompt, receive AI response, receive auto-grade feedback.
  • Post-survey and post-test with reflective open responses.
  • Assessment redesign (multiple-choice to true/false plus open-ended) informed by Study 1 evidence.
Human Vs AI Responsibilities
  • Students write and revise prompts based on learning goals.
  • AI provides responses and rubric-based grading explanations.
  • Researchers/teachers oversee interpretation, validation, and assessment refinement.
Scaffolding Strategies
  • Learning-by-doing and experiential practice under authentic-like scenarios.
  • Immediate elaborated feedback for each prompt dimension.
  • Dimension-specific rubric adapted to context (e.g., no direct-answer requests in homework struggle tasks).

Observed Challenges

Educators Reported
  • Auto-grader had lower precision on purpose-detection and direct-answer interpretation edge cases.
  • Some students struggled with typing/logging in and other baseline digital-skills barriers.
  • Limited scenario diversity (e.g., mainly science topics) reduced relevance for some learners.
  • Single assessment format (MCQ) initially lacked discrimination for higher-order prompting skills.

Design Adaptations

Adaptations
  • Iterated assessment from MCQ-only to True/False plus open-ended questions.
  • Added abstract-level items to vary difficulty within same learning objective.
  • Used open-ended prompts to better capture analysis and prompt-rewriting skills.
  • Data-informed rubric and item revisions between Study 1 and Study 2.

Reported Outcomes

Engagement
  • Students reported strong interest in interactive AI dialogue and immediate feedback loops.
  • Learning scenarios and guided practice supported active participation.
Learning Signals
  • AI auto-grader achieved high overall grading performance (about 0.92 pass/fail accuracy).
  • Students improved in embedding background/context into prompts over practice.
  • Confidence in using AI for learning increased significantly after module participation.
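The reported figure of about 0.92 pass/fail accuracy is a binary agreement rate between the auto-grader's verdicts and reference (human) verdicts. A minimal sketch of how such a figure is computed, using invented illustration data rather than the study's records:

```python
def pass_fail_accuracy(human: list[bool], auto: list[bool]) -> float:
    """Fraction of prompts where the auto-grader's pass/fail verdict
    matches the human rater's verdict."""
    assert len(human) == len(auto)
    matches = sum(h == a for h, a in zip(human, auto))
    return matches / len(human)

# Invented illustration data: ten prompts, one disagreement.
human_verdicts = [True, True, False, True, False, True, True, False, True, True]
auto_verdicts  = [True, True, False, True, True,  True, True, False, True, True]
acc = pass_fail_accuracy(human_verdicts, auto_verdicts)  # 9 of 10 agree -> 0.9
```

A plain agreement rate like this ignores class balance; studies sometimes also report precision per verdict class, which is consistent with the case's note that precision was lower on purpose-detection edge cases.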
Educators Reflection

Prompting literacy can be taught in secondary classrooms with scalable AI support, but accuracy guardrails, broader content contexts, and larger-scale validation are necessary for sustained adoption.

Ethical & Privacy Considerations

Privacy
  • Module explicitly addresses responsible AI use and discourages direct answer-seeking behaviors.
  • Students learned that AI outputs may be inaccurate and require critical verification.
  • Ethical concerns included cheating risk, information quality, and appropriate classroom AI boundaries.

Evidence Type

Evidence
  • Post assessment
  • Learning analytics
  • Activity documentation

Relevance to Research

Potential Research Use
  • Provides a replicable prompting literacy intervention model for K-12 settings.
  • Demonstrates feasibility of AI-supported rubric grading with actionable feedback loops.
  • Contributes evidence on assessment design trade-offs for emerging AI literacy competencies.
Relevant Research Domains
  • Prompting literacy in K-12
  • Responsible generative AI use in education
  • AI-based formative assessment
  • Learning sciences for human-AI interaction

Case Status

Case Status
  • Completed

AAB Classification Tags

Age

Secondary school (middle and high school)

Setting

In-class AI literacy module deployment

AI Function

LLM tutoring interaction and automated formative feedback

Pedagogy

Scenario-based active learning with iterative prompt revision

Risk Level

Medium

Data Sensitivity

Medium (student-written prompts and learning assessment traces)

Registry Metadata

Case ID
AAB-CASE-2026-RV-007
Publication Status
Completed
Tags
  • case
  • Secondary education classrooms (East Asia deployments with iterative follow-up)
  • In-school (K-12)