Learning to Use AI for Learning: Teaching Responsible Use of AI Chatbot to K-12 Students Through an AI Literacy Module
A classroom-based study of a web module for teaching prompting literacy in secondary education, combining scenario-based deliberate practice, AI auto-grading, and iterative assessment refinement across 11 classrooms.
Implementation: University-led AI literacy intervention research
Learning context: In-school (K-12)
AI role: Tutor
Outcome signal: Not specified
Registry Facets
- Research Review
- K-12
- Completed
- Prompting Literacy
- Responsible AI Use
- Assessment Design
Implementing Organization
University-led AI literacy intervention research
Secondary education classrooms (East Asia deployments with iterative follow-up)
Researchers and teachers deploying and refining AI prompting literacy module
Learning Context
- In-school (K-12)
- Scenario-based module with direct AI interaction and immediate feedback
- Two iterative studies: Study 1 in six classrooms; Study 2 a follow-up with a revised assessment
- 111 valid student records in Study 1; module deployed across 11 authentic classrooms
- Web-based platform using an LLM chatbot and an AI auto-grader (GPT-4o)
- Prompting literacy interventions for K-12 remain limited despite increasing AI use.
- Assessment design showed early ceiling effects in the multiple-choice format.
- Variability in students’ computer skills and topic interest influenced practice quality.
Learner Profile
Secondary school learners (middle and high school)
Many learners had baseline familiarity with AI chatbots but varied in depth of use
Prior coding skills were not required; the focus was natural-language prompting competence
Educational Intent
- Develop prompting literacy for responsible and effective AI use in learning.
- Teach students when and how AI can support studying without replacing thinking.
- Improve student ability to craft context-rich prompts for educational support.
- Build confidence in using AI as a learning assistant.
- Reinforce ethical use and awareness of AI limitations.
- Iterate and validate assessment formats for prompting literacy.
- Not a one-time generic AI awareness lecture.
- Not unrestricted AI answer-copying support for assignments.
- Not a final large-scale standardized exam validation study.
AI Tool Description
LLM-based prompting literacy learning and assessment module
Instructional prompts delivered in the classroom's language context; LLM interaction via text
- Tutor
- Evaluator
- Students complete scenario-based prompting tasks in biology, geography, and math contexts.
- AI chatbot responds to student prompts in realistic learning situations.
- AI auto-grader scores prompts against rubric dimensions and returns detailed feedback.
- Students iterate prompt writing using immediate elaborated feedback.
- Instruction explicitly includes responsible-use framing and non-cheating guidance.
- Teachers were informed that the auto-grader may make errors and requires monitoring.
- Rubric criteria discourage direct-answer seeking in designated scenarios.
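The auto-grading step above can be sketched as assembling a rubric-based grading instruction for the LLM. This is a minimal illustration: the dimension names (`context`, `purpose`, `clarity`), the 0–2 scale, and the wording are assumptions for illustration, not the study's actual rubric or prompts.

```python
# Illustrative sketch of a rubric-based grading prompt for an LLM auto-grader
# (e.g., GPT-4o). Dimension names, scale, and wording are assumed, not taken
# from the study's instrument.

RUBRIC = {
    "context": "Does the prompt include relevant background (course, topic, prior attempt)?",
    "purpose": "Is a learning goal stated, rather than a request for the final answer?",
    "clarity": "Is the request specific and unambiguous?",
}

def build_grading_prompt(student_prompt: str, scenario: str) -> str:
    """Compose the instruction sent to the LLM grader for one student prompt."""
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in RUBRIC.items())
    return (
        f"Scenario: {scenario}\n"
        f"Student prompt: {student_prompt}\n\n"
        "Score each dimension 0-2 and explain each score briefly:\n"
        f"{criteria}\n"
        "Flag any direct-answer request as a rubric violation."
    )
```

The returned string would then be sent to the chat model, and the per-dimension scores and explanations parsed from its response to produce the elaborated feedback students see.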
Activity Design
- Pre-survey and pre-test on AI use and prompting literacy.
- Scenario practice: write prompt, receive AI response, receive auto-grade feedback.
- Post-survey and post-test with reflective open responses.
- Assessment redesign (multiple-choice to True/False plus open-ended items), informed by Study 1 evidence.
- Students write and revise prompts based on learning goals.
- AI provides responses and rubric-based grading explanations.
- Researchers/teachers oversee interpretation, validation, and assessment refinement.
- Learning-by-doing and experiential practice under authentic-like scenarios.
- Immediate elaborated feedback for each prompt dimension.
- Dimension-specific rubric adapted to context (e.g., no direct-answer requests in homework struggle tasks).
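The write–respond–grade–revise cycle above can be expressed as a simple loop. The grader and reviser are injected as functions so the control flow stands on its own; the pass threshold and attempt limit are assumed values, not parameters reported by the study.

```python
# Sketch of the practice loop: write prompt -> auto-grade -> revise using
# feedback -> re-grade. Threshold and max_attempts are illustrative defaults.
from typing import Callable

def practice_loop(initial_prompt: str,
                  grade: Callable[[str], float],
                  revise: Callable[[str, float], str],
                  threshold: float = 0.8,
                  max_attempts: int = 3) -> tuple[str, float, int]:
    """Iterate prompt revision until the rubric score passes or attempts run out."""
    prompt, attempts = initial_prompt, 0
    score = grade(prompt)
    while score < threshold and attempts < max_attempts:
        prompt = revise(prompt, score)  # student rewrites using elaborated feedback
        score = grade(prompt)
        attempts += 1
    return prompt, score, attempts
```

In the deployed module the `grade` step corresponds to the GPT-4o auto-grader and `revise` to the student acting on its dimension-level feedback; here both are stubs to show the loop structure only.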
Observed Challenges
- Auto-grader had lower precision on purpose-detection and direct-answer interpretation edge cases.
- Some students struggled with typing/logging in and other baseline digital-skills barriers.
- Limited scenario diversity (e.g., mainly science topics) reduced relevance for some learners.
- Single assessment format (MCQ) initially lacked discrimination for higher-order prompting skills.
Design Adaptations
- Iterated assessment from MCQ-only to True/False plus open-ended questions.
- Added abstract-level items to vary difficulty within same learning objective.
- Used open-ended prompts to better capture analysis and prompt-rewriting skills.
- Data-informed rubric and item revisions between Study 1 and Study 2.
Reported Outcomes
- Students reported strong interest in interactive AI dialogue and immediate feedback loops.
- Learning scenarios and guided practice supported active participation.
- AI auto-grader achieved high overall grading performance (about 0.92 pass/fail accuracy).
- Students improved in embedding background/context into prompts over practice.
- Confidence in using AI for learning increased significantly after module participation.
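The reported ~0.92 pass/fail accuracy is an agreement rate between the AI grader's pass/fail decision and a reference (human) judgment. A minimal computation of that metric, with invented labels for illustration:

```python
# Pass/fail accuracy: fraction of prompts where the AI grader's pass/fail
# decision matches the human rater's. Labels below are illustrative only.

def pass_fail_accuracy(ai_labels: list[bool], human_labels: list[bool]) -> float:
    """Agreement rate between AI grader and human pass/fail judgments."""
    if len(ai_labels) != len(human_labels):
        raise ValueError("label lists must align")
    matches = sum(a == h for a, h in zip(ai_labels, human_labels))
    return matches / len(ai_labels)

# Example: agreement on 3 of 4 prompts -> 0.75
pass_fail_accuracy([True, True, False, True], [True, False, False, True])
```

Note that overall accuracy can mask the lower precision the study observed on specific edge cases (purpose detection, direct-answer interpretation), which is why per-dimension checks still matter.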
Prompting literacy can be taught in secondary classrooms with scalable AI support, but accuracy guardrails, broader content contexts, and larger-scale validation are necessary for sustained adoption.
Ethical & Privacy Considerations
- Module explicitly addresses responsible AI use and discourages direct answer-seeking behaviors.
- Students learned that AI outputs may be inaccurate and require critical verification.
- Ethical concerns included cheating risk, information quality, and appropriate classroom AI boundaries.
Evidence Type
- Post assessment
- Learning analytics
- Activity documentation
Relevance to Research
- Provides a replicable prompting literacy intervention model for K-12 settings.
- Demonstrates feasibility of AI-supported rubric grading with actionable feedback loops.
- Contributes evidence on assessment design trade-offs for emerging AI literacy competencies.
- Prompting literacy in K-12
- Responsible generative AI use in education
- AI-based formative assessment
- Learning sciences for human-AI interaction
Case Status
- Completed
AAB Classification Tags
Secondary school (middle and high school)
In-class AI literacy module deployment
LLM tutoring interaction and automated formative feedback
Scenario-based active learning with iterative prompt revision
Medium
Medium (student-written prompts and learning assessment traces)
