Case Report · Published empirical study · Jul. 2025
AAB-CASE-2025-RV-052
Integrating Generative AI into Programming Education: Student Perceptions and the Challenge of Correcting AI Errors
IJAIED 2025; undergraduate programming; GenAI + assessment.
This page documents an AI literacy or AI education case for registry purposes. It is descriptive and does not imply AAB endorsement of any specific tool, provider, or intervention.
Implementation
Universities (Israel / Germany collaborators)
Learning context
Private program
AI role
Co-creator
Outcome signal
Skills
Registry Facets
Education Level
- Higher education
Subject Area
- Computer science
- AI literacy
Use Case Type
- Assessment
- Survey research
Stakeholder Group
- Students
AI Capability Type
- Generative AI
- LLM/Chat
Implementation Model
- Classroom-level
Evidence Type
- Post assessment
- Mixed methods
Outcomes Domain
- Skills
- Metacognition
Implementing Organization
Organization Type
Universities (Israel / Germany collaborators)
Location
Israel / Germany
Primary Facilitator Role
Faculty researchers
Learning Context
Setting Type
- Private program
Session Format
Programming courses with GenAI tools
Duration
Two complementary studies
Group Size
Undergraduate cohorts (per paper)
Devices
ChatGPT, Copilot-class tools (per introduction)
Constraints
- Assessment security
- Tool dependence
Learner Profile
Age Range
Undergraduates
Prior AI Exposure Assumed
Rising GenAI use among students
Prior Programming Background Assumed
CS majors
Educational Intent
Primary Learning Goals
- Characterize student perceptions of GenAI in programming
- Compare debugging of LLM-generated code with traditional exam tasks
Secondary Learning Goals
- Argue for teaching critique and correction of AI outputs
What This Was Not
- Not an industry workplace study
AI Tool Description
Tool Type
GenAI coding assistants / LLMs
AI Role
- Co-creator
- Automation tool
Languages
Higher-education programming courses (specific languages per paper)
User Interaction Model
- Students generate and repair AI-produced code
Safeguards
- Address over-reliance explicitly in curriculum
- Integrity policies for exams with AI
Activity Design
Activity Flow
- Study 1: survey on student perceptions
- Study 2: performance on correction tasks
Human Vs AI Responsibilities
- Students must verify and fix AI suggestions
Scaffolding Strategies
- Explicit exercises in evaluating AI-generated code
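As a hypothetical illustration (not drawn from the paper) of what an exercise in evaluating AI-generated code might look like, students could be given plausible-looking generated code with a subtle defect, then asked to find, fix, and test it. The function name and the specific bug below are invented for this sketch:

```python
def running_average(values):
    """Return the running average after each element of `values`.

    An AI assistant's first suggestion divided by `i` instead of
    `i + 1`, which raises ZeroDivisionError on the first element.
    The version below is the corrected repair a student would submit.
    """
    averages = []
    total = 0
    for i, v in enumerate(values):
        total += v
        averages.append(total / (i + 1))  # fixed: i + 1, not i
    return averages

# A minimal check students might write to confirm the repair:
assert running_average([2, 4, 6]) == [2.0, 3.0, 4.0]
```

The pedagogical point, per the case, is that such tasks make critique and correction of AI output an explicit, assessed skill rather than an incidental one.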
Observed Challenges
Educators Reported
- Students favor the tools but underestimate the difficulty of correcting their output
- Unique assessment challenges for LLM outputs
Design Adaptations
Adaptations
- Align instruction with authentic debugging of AI code
Reported Outcomes
Engagement
- Generally favorable perceptions
Learning Signals
- Students found LLM-generated code harder to correct than instructor-designed tasks
Educators Reflection
Programming pedagogy should teach AI output critique.
Ethical & Privacy Considerations
Privacy
- Academic integrity
- Exam conditions for AI-assisted work
Evidence Type
Evidence
- Post assessment
- Activity documentation
- Practitioner observation
Relevance to Research
Potential Research Use
- Longitudinal skill trajectories with GenAI
- Cross-institutional replication
Relevant Research Domains
- Programming education
- GenAI
- Assessment
Case Status
Case Status
- Completed
AAB Classification Tags
Age
Undergraduate
Setting
University CS
AI Function
Code generation literacy
Pedagogy
Dual study design
Risk Level
Medium
Data Sensitivity
Low
Registry Metadata
Case ID
AAB-CASE-2025-RV-052
Publication Status
Published empirical study
Tags
case · Higher education · Israel / Germany · Classroom-level · Generative AI · Computer science · AI literacy · Assessment · Survey research
