Case Report · Published empirical study · 2025
AAB-CASE-2026-RV-061

Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models

Large language models (LLMs) have demonstrated strong capabilities in language understanding and generation, and their potential in educational contexts is increasingly being explored. One promising area is learnersourcing, where students engage in creating their own educational content, such as multiple-choice questions.

This page documents an AI literacy or AI education case for registry purposes. It is descriptive and does not imply AAB endorsement of any specific tool, provider, or intervention.
01

Implementation

Source publication / research team or educational organization described in paper

02

Learning context

Higher education

03

AI role

Tutor

04

Outcome signal

Conceptual understanding

Registry Facets

0
Education Level
  • Higher education
Subject Area
  • Higher education
  • AI-supported learning
  • Assessment / explanations
  • LLM/Chat
  • NLP / text classification
Use Case Type
  • Learning tool / resource design
  • Assessment support
Stakeholder Group
  • Students
  • Researchers
AI Capability Type
  • LLM/Chat
  • NLP / text classification
  • Assessment / tutoring analytics
Implementation Model
  • Higher education
Evidence Type
  • Pre/post or experimental evidence
Outcomes Domain
  • Conceptual understanding
  • Assessment / feedback quality

Implementing Organization

1
Organization Type

Source publication / research team or educational organization described in paper

Location

New Zealand

Primary Facilitator Role

Researchers, educators, instructors, or facilitators as described in the source publication

Learning Context

2
Setting Type
  • Higher education
Session Format

Tool / platform-supported learning activity

Duration

Not specified in extracted text

Group Size

Not specified in extracted text

Devices

LLM/Chat, NLP / text classification, Assessment / tutoring analytics

Constraints
  • AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.

Learner Profile

3
Age Range

Higher education

Prior AI Exposure Assumed

Mixed or not explicitly specified; infer from target learner group and intervention design.

Prior Programming Background Assumed

Varies by intervention; not specified unless the paper explicitly describes prerequisites.

Educational Intent

4
Primary Learning Goals
  • Document the AI education intervention, course, tool, or resource described in the source publication.
  • Extract the learner context, AI role, pedagogy, outcomes, and constraints for AAB registry comparison.
Secondary Learning Goals
  • Support AAB comparison across AI literacy, AI education, teacher training, higher education, and workforce contexts.
  • Capture evidence maturity, transferability, and limitations rather than treating the publication as product endorsement.
What This Was Not
  • Not an AAB endorsement of the tool, curriculum, provider, or result.
  • Not a direct replication record unless the source paper reports implementation details sufficient for replication.

AI Tool Description

5
Tool Type

LLM/Chat, NLP / text classification, Assessment / tutoring analytics

Languages

Language context discussed in source publication

AI Role
  • Tutor
  • Evaluator
User Interaction Model
  • Primary interaction pattern inferred from publication: Learning tool / resource design, Assessment support.
  • AI capability focus: LLM/Chat, NLP / text classification, Assessment / tutoring analytics.
Safeguards
  • Require human review of generated outputs and explicit guidance against over-reliance or answer copying.

Activity Design

6
Activity Flow
  • Review the publication’s reported context, learner group, AI tool or curriculum, implementation process, and outcome evidence.
  • Map the case to AAB registry fields for comparison across educational levels and AI capability types.
  • Use the source publication and PDF for any manual verification before public registry release.
Human vs AI Responsibilities
  • Human educators/researchers remain responsible for instructional design, supervision, interpretation, and ethical safeguards.
  • AI systems or AI concepts provide the learning object, support tool, evaluator, simulator, or automation context depending on the paper.
Scaffolding Strategies
  • Tutoring / feedback-supported learning
  • Registry extraction emphasizes explicit learning goals, observed outcomes, constraints, and safety limitations.

Observed Challenges

7
Educators Reported
  • AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.

Design Adaptations

8
Adaptations
  • Case classified under: Published empirical study.
  • Pedagogical pattern: Tutoring / feedback-supported learning.
  • Any additional adaptations should be verified against the full paper before public-facing publication.

Reported Outcomes

9
Engagement
  • Engagement evidence should be interpreted according to the source paper’s reported method and sample.
Learning Signals
  • One promising area is learnersourcing, where students engage in creating their own educational content, such as multiple-choice questions.
  • A critical step in this process is generating effective explanations for the solutions to these questions, as such explanations aid in peer understanding and promote deeper conceptual learning.
  • To support this task, we introduce “ILearner-LLM,” a framework that uses iterative enhancement with LLMs to improve generated explanations.
Educators Reflection

Large language models (LLMs) have demonstrated strong capabilities in language understanding and generation, and their potential in educational contexts is increasingly being explored. One promising area is learnersourcing, where students engage in creating their own educational content, such as multiple-choice questions.

Ethical & Privacy Considerations

10
Privacy
  • Require human review of generated outputs and explicit guidance against over-reliance or answer copying.

Evidence Type

11
Evidence
  • Pre/post or experimental evidence

Relevance to Research

12
Potential Research Use
  • Can be used as an AAB evidence record for cross-case comparison, standards drafting, and evidence-maturity mapping.
  • Supports identification of recurring patterns in AI literacy, AI education implementation, teacher preparation, assessment, and responsible AI learning.
Relevant Research Domains
  • Conceptual understanding
  • Assessment / feedback quality
  • Learning tool / resource design
  • Assessment support
  • LLM/Chat
  • NLP / text classification
  • Assessment / tutoring analytics

Case Status

13
Case Status
  • Completed

AAB Classification Tags

14
Age

Higher education

Setting

Higher education

AI Function

LLM/Chat, NLP / text classification, Assessment / tutoring analytics

Pedagogy

Tutoring / feedback-supported learning

Risk Level

Medium

Data Sensitivity

Medium

Source Publication

15
Title

Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models

Authors
  • Qiming Bao
  • Juho Leinonen
  • Alex Yuxuan Peng
  • Wanjun Zhong
  • Gaël Gendron
  • Timothy Pistotti
  • Alice Huang
  • Paul Denny
  • Michael Witbrock
  • Jiamou Liu
Venue

Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 39 No. 28, EAAI-25

Year

2025

DOI

10.1609/aaai.v39i28.35164

Source URL

https://ojs.aaai.org/index.php/AAAI/article/view/35164

PDF URL

https://ojs.aaai.org/index.php/AAAI/article/view/35164/37319

PDF Filename

001_Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models.pdf

Page Count

9

Abstract

Large language models (LLMs) have demonstrated strong capabilities in language understanding and generation, and their potential in educational contexts is increasingly being explored. One promising area is learnersourcing, where students engage in creating their own educational content, such as multiple-choice questions. A critical step in this process is generating effective explanations for the solutions to these questions, as such explanations aid in peer understanding and promote deeper conceptual learning. However, students often find it difficult to craft high-quality explanations due to limited understanding or gaps in their subject knowledge. To support this task, we introduce “ILearner-LLM,” a framework that uses iterative enhancement with LLMs to improve generated explanations. The framework combines an explanation generation model and an explanation evaluation model fine-tuned using student preferences for quality, where feedback from the evaluation model is fed back into the generation model to refine the output. Our experiments with LLaMA2-13B and GPT-4 using five large datasets from the PeerWise MCQ platform show that ILearner-LLM produces explanations of higher quality that closely align with those written by students. Our findings represent a promising approach for enriching the learnersourcing experience for students and for leveraging the capabilities of large language models for educational applications.

Transferability

16
Best Fit Contexts
  • Higher education
Likely Failure Modes
  • AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.

Cost And Operations

17
Time Cost Notes

Not specified in extracted text unless noted in duration field.

Staffing Notes

Requires educators/researchers/facilitators with sufficient AI literacy and pedagogy knowledge for the target learners.

Infra Notes

Infrastructure depends on AI tool type, learner devices, data access, and institutional policy context.

Extraction Notes

18
Confidence

High

Missing Information
  • group_size
  • duration
Reasoning Limits

This entry was automatically extracted from the PDF text and manifest metadata. Fields should be manually verified before public registry publication, especially group size, location, duration, and outcome claims.

Duplicate Check Against Uploaded Cases Json
Closest Existing Title

Pre-service teachers preparedness for AI-integrated education: An investigation from perceptions, capabilities, and teachers’ identity changes

Similarity Score

0.404

Likely Duplicate

false

Registry Metadata

19
Case ID
AAB-CASE-2026-RV-061
Publication Status
Published empirical study
Tags
case · Higher education · New Zealand · LLM/Chat · AI-supported learning · assessment/explanations · NLP / text classification · Learning tool / resource design · Assessment support