Case Report · Published empirical study · 2025
AAB-CASE-2026-RV-072

Can LLMs Reliably Simulate Human Learner Actions? A Simulation Authoring Framework for Open-Ended Learning Environments

Simulating learner actions helps stress-test open-ended interactive learning environments and prototype new adaptations before deployment. While recent studies show the promise of using large language models (LLMs) for simulating human behavior, such approaches have not gone beyond rudimentary proof-of-concept stages due to key limitations.

This page documents an AI literacy or AI education case for registry purposes. It is descriptive and does not imply AAB endorsement of any specific tool, provider, or intervention.
01

Implementation

Source publication / research team or educational organization described in paper

02

Learning context

Research / curriculum design context

03

AI role

Learning object / concept model

04

Outcome signal

Conceptual understanding

Registry Facets

0
Education Level
  • Unspecified / broad education
Subject Area
  • AI for education
  • learner simulation
  • LLM/Chat
  • NLP / text classification
Use Case Type
  • Learning tool / resource design
Stakeholder Group
  • Students
  • Adult learners / professionals
  • Researchers
AI Capability Type
  • LLM/Chat
  • NLP / text classification
Implementation Model
  • Research / curriculum design context
Evidence Type
  • Design / conceptual evidence
Outcomes Domain
  • Conceptual understanding

Implementing Organization

1
Organization Type

Source publication / research team or educational organization described in paper

Location

Not specified in extracted text

Primary Facilitator Role

Researchers, educators, instructors, or facilitators as described in the source publication

Learning Context

2
Setting Type
  • Research / curriculum design context
Session Format

Tool / platform-supported learning activity

Duration

100 hours of work per instructional hour (Blessing and Gilbert 2008)

Group Size

Not specified in extracted text

Devices

LLM/Chat, NLP / text classification

Constraints
  • AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.

Learner Profile

3
Age Range

Unspecified / broad education

Prior AI Exposure Assumed

Mixed or not explicitly specified; infer from target learner group and intervention design.

Prior Programming Background Assumed

Varies by intervention; not specified unless the paper explicitly describes prerequisites.

Educational Intent

4
Primary Learning Goals
  • Document the AI education intervention, course, tool, or resource described in the source publication.
  • Extract the learner context, AI role, pedagogy, outcomes, and constraints for AAB registry comparison.
  • Simulating learner actions helps stress-test open-ended interactive learning environments and prototype new adaptations before deployment.
Secondary Learning Goals
  • Support AAB comparison across AI literacy, AI education, teacher training, higher education, and workforce contexts.
  • Capture evidence maturity, transferability, and limitations rather than treating the publication as product endorsement.
What This Was Not
  • Not an AAB endorsement of the tool, curriculum, provider, or result.
  • Not a direct replication record unless the source paper reports implementation details sufficient for replication.

AI Tool Description

5
Tool Type

LLM/Chat, NLP / text classification

Languages

Language context discussed in source publication

AI Role
  • Learning object / concept model
User Interaction Model
  • Primary interaction pattern inferred from publication: Learning tool / resource design.
  • AI capability focus: LLM/Chat, NLP / text classification.
Safeguards
  • Require human review of generated outputs and explicit guidance against over-reliance or answer copying.

Activity Design

6
Activity Flow
  • Review the publication’s reported context, learner group, AI tool or curriculum, implementation process, and outcome evidence.
  • Map the case to AAB registry fields for comparison across educational levels and AI capability types.
  • Use the source publication and PDF for any manual verification before public registry release.
Human Vs AI Responsibilities
  • Human educators/researchers remain responsible for instructional design, supervision, interpretation, and ethical safeguards.
  • AI systems or AI concepts provide the learning object, support tool, evaluator, simulator, or automation context depending on the paper.
Scaffolding Strategies
  • Scenario / case-based learning
  • Registry extraction emphasizes explicit learning goals, observed outcomes, constraints, and safety limitations.

Observed Challenges

7
Educators Reported
  • AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.

Design Adaptations

8
Adaptations
  • Case classified under: Published empirical study.
  • Pedagogical pattern: Scenario / case-based learning.
  • Any additional adaptations should be verified against the full paper before public-facing publication.

Reported Outcomes

9
Engagement
  • Engagement evidence should be interpreted according to the source paper’s reported method and sample.
  • While recent studies show the promise of using large language models (LLMs) for simulating human behavior, such approaches have not gone beyond rudimentary proof-of-concept stages due to key limitations.
Learning Signals
  • While recent studies show the promise of using large language models (LLMs) for simulating human behavior, such approaches have not gone beyond rudimentary proof-of-concept stages due to key limitations.
  • Moreover, apparently successful outcomes can often be unreliable, either because domain experts unintentionally guide LLMs to produce expected results, leading to self-fulfilling prophecies; or because the LLM has encountered highly similar scenarios in its training data, meaning that models may not be simulating behavior so much as regurgitating memorized content.
  • To address these challenges, we propose HYP-MIX, a simulation authoring framework that allows experts to develop and evaluate simulations by combining testable hypotheses about learner behavior.
Educators Reflection

Simulating learner actions helps stress-test open-ended interactive learning environments and prototype new adaptations before deployment. While recent studies show the promise of using large language models (LLMs) for simulating human behavior, such approaches have not gone beyond rudimentary proof-of-concept stages due to key limitations.

Ethical & Privacy Considerations

10
Privacy
  • Require human review of generated outputs and explicit guidance against over-reliance or answer copying.

Evidence Type

11
Evidence
  • Design / conceptual evidence

Relevance to Research

12
Potential Research Use
  • Can be used as an AAB evidence record for cross-case comparison, standards drafting, and evidence-maturity mapping.
  • Supports identification of recurring patterns in AI literacy, AI education implementation, teacher preparation, assessment, and responsible AI learning.
Relevant Research Domains
  • Conceptual understanding
  • Learning tool / resource design
  • LLM/Chat
  • NLP / text classification

Case Status

13
Case Status
  • Completed

AAB Classification Tags

14
Age

Unspecified / broad education

Setting

Research / curriculum design context

AI Function

LLM/Chat, NLP / text classification

Pedagogy

Scenario / case-based learning

Risk Level

Medium

Data Sensitivity

Medium

Source Publication

15
Title

Can LLMs Reliably Simulate Human Learner Actions? A Simulation Authoring Framework for Open-Ended Learning Environments

Authors
  • Amogh Mannekote
  • Adam Davies
  • Jina Kang
  • Kristy Elizabeth Boyer
Venue

Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 39 No. 28, EAAI-25

Year

2025

Doi

10.1609/aaai.v39i28.35175

Source URL

https://ojs.aaai.org/index.php/AAAI/article/view/35175

Pdf URL

https://ojs.aaai.org/index.php/AAAI/article/view/35175/37330

Pdf Filename

012_Can LLMs Reliably Simulate Human Learner Actions_ A Simulation Authoring Framework for Open-Ended Learning Environments.pdf

Page Count

9

Abstract

Simulating learner actions helps stress-test open-ended interactive learning environments and prototype new adaptations before deployment. While recent studies show the promise of using large language models (LLMs) for simulating human behavior, such approaches have not gone beyond rudimentary proof-of-concept stages due to key limitations. First, LLMs are highly sensitive to minor prompt variations, raising doubts about their ability to generalize to new scenarios without extensive prompt engineering. Moreover, apparently successful outcomes can often be unreliable, either because domain experts unintentionally guide LLMs to produce expected results, leading to self-fulfilling prophecies; or because the LLM has encountered highly similar scenarios in its training data, meaning that models may not be simulating behavior so much as regurgitating memorized content. To address these challenges, we propose HYP-MIX, a simulation authoring framework that allows experts to develop and evaluate simulations by combining testable hypotheses about learner behavior. Testing this framework in a physics learning environment, we found that GPT-4 Turbo maintains calibrated behavior even as the underlying learner model changes, providing the first evidence that LLMs can be used to simulate realistic behaviors in open-ended interactive learning environments, a necessary prerequisite for useful LLM behavioral simulation.
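The abstract's core idea is that a simulation is evaluated against expert-authored, testable hypotheses about learner behavior rather than against free-form expectations. The sketch below illustrates only that general pattern under invented names; it is not the actual HYP-MIX implementation, and the hypotheses, action names, and function signatures are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

# One simulated learner's ordered actions (action names are invented).
ActionLog = list[str]

@dataclass
class Hypothesis:
    """An expert-authored, testable claim about simulated learner behavior."""
    name: str
    check: Callable[[list[ActionLog]], bool]  # does the claim hold over the batch?

def evaluate_simulation(logs: list[ActionLog],
                        hypotheses: list[Hypothesis]) -> dict[str, bool]:
    """Report which expert hypotheses the simulated behavior satisfies."""
    return {h.name: h.check(logs) for h in hypotheses}

# Two illustrative hypotheses for a physics learning environment (invented):
novice_explores = Hypothesis(
    "learners try at least as many distinct actions as half their total actions",
    lambda logs: all(len(set(log)) >= len(log) / 2 for log in logs),
)
eventually_tests = Hypothesis(
    "every simulated learner eventually runs the experiment",
    lambda logs: all("run_experiment" in log for log in logs),
)

# A toy batch of simulated action logs:
simulated = [
    ["adjust_mass", "adjust_angle", "run_experiment"],
    ["adjust_angle", "adjust_angle", "run_experiment", "read_hint"],
]
report = evaluate_simulation(simulated, [novice_explores, eventually_tests])
```

Framing each expectation as an explicit predicate is what makes the evaluation falsifiable: a simulation that fails a hypothesis fails it mechanically, without an expert steering the model toward the expected result.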

Transferability

16
Best Fit Contexts
  • Research / curriculum design context
Likely Failure Modes
  • AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.

Cost And Operations

17
Time Cost Notes

Not specified in extracted text unless noted in duration field.

Staffing Notes

Requires educators/researchers/facilitators with sufficient AI literacy and pedagogy knowledge for the target learners.

Infra Notes

Infrastructure depends on AI tool type, learner devices, data access, and institutional policy context.

Extraction Notes

18
Confidence

High

Missing Information
  • group_size
Reasoning Limits

This entry was automatically extracted from the PDF text and manifest metadata. Fields should be manually verified before public registry publication, especially group size, location, duration, and outcome claims.

Duplicate Check Against Uploaded Cases Json
Closest Existing Title

Pre-service teachers preparedness for AI-integrated education: An investigation from perceptions, capabilities, and teachers’ identity changes

Similarity Score

0.368

Likely Duplicate

false

Registry Metadata

19
Case ID
AAB-CASE-2026-RV-072
Publication Status
Published empirical study
Tags
case · Unspecified / broad education · Not specified in extracted text · Research / curriculum design context · LLM/Chat · AI for education · learner simulation · NLP / text classification · Learning tool / resource design