Case Report · Published empirical study · 2025
AAB-CASE-2026-RV-088

Learning About Algorithm Auditing in Five Steps: Scaffolding How High School Youth Can Systematically and Critically Evaluate Machine Learning Applications

While there is widespread interest in supporting young people to critically evaluate machine learning-powered systems, there is little research on how we can support them in inquiring about how these systems work and what their limitations and implications may be. Outside of K-12 education, an effective strategy in evaluating black-boxed systems is algorithm auditing—a method for understanding algorithmic systems’ opaque inner workings and external impacts from the outside in.

This page documents an AI literacy or AI education case for registry purposes. It is descriptive and does not imply AAB endorsement of any specific tool, provider, or intervention.
01

Implementation

Source publication / research team or educational organization described in paper

02

Learning context

In-school (K-12)

03

AI role

Learning object / concept model

04

Outcome signal

Conceptual understanding

Registry Facets

0
Education Level
  • 9-12
Subject Area
  • K-12
  • algorithm auditing
  • AI ethics
  • Generative AI
  • ML concepts / supervised learning
Use Case Type
  • Outreach / informal learning
  • Ethics / responsible AI education
Stakeholder Group
  • Students
  • Researchers
AI Capability Type
  • Generative AI
  • ML concepts / supervised learning
  • Explainable AI / robustness
  • Ethics / responsible AI
Implementation Model
  • In-school (K-12)
Evidence Type
  • Activity documentation
Outcomes Domain
  • Conceptual understanding
  • Engagement / motivation
  • Ethics and responsible use

Implementing Organization

1
Organization Type

Source publication / research team or educational organization described in paper

Location

Not specified in extracted text

Primary Facilitator Role

Researchers, educators, instructors, or facilitators as described in the source publication

Learning Context

2
Setting Type
  • In-school (K-12)
Session Format

Workshop / professional learning activity

Duration

Not specified in extracted text

Group Size

Not specified in extracted text

Devices

Not specified in extracted text

Constraints
  • AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.
  • Use with minors requires attention to privacy, consent, data minimization, and adult supervision.

Learner Profile

3
Age Range

Grades 9-12 (high school)

Prior AI Exposure Assumed

Mixed or not explicitly specified; infer from target learner group and intervention design.

Prior Programming Background Assumed

Varies by intervention; not specified unless the paper explicitly describes prerequisites.

Educational Intent

4
Primary Learning Goals
  • Document the AI education intervention, course, tool, or resource described in the source publication.
  • Extract the learner context, AI role, pedagogy, outcomes, and constraints for AAB registry comparison.
Secondary Learning Goals
  • Support AAB comparison across AI literacy, AI education, teacher training, higher education, and workforce contexts.
  • Capture evidence maturity, transferability, and limitations rather than treating the publication as product endorsement.
What This Was Not
  • Not an AAB endorsement of the tool, curriculum, provider, or result.
  • Not a direct replication record unless the source paper reports implementation details sufficient for replication.

AI Tool Description

5
Tool Type

Generative AI, ML concepts / supervised learning, Explainable AI / robustness, Ethics / responsible AI

Languages

Not specified in extracted text

AI Role
  • Learning object / concept model
User Interaction Model
  • Primary interaction pattern inferred from publication: Outreach / informal learning, Ethics / responsible AI education.
  • AI capability focus: Generative AI, ML concepts / supervised learning, Explainable AI / robustness, Ethics / responsible AI.
Safeguards
  • Use age-appropriate framing and teacher/facilitator oversight for any classroom deployment.
  • Require human review of generated outputs and explicit guidance against over-reliance or answer copying.
  • Include bias, fairness, transparency, and social impact discussion as part of the learning design.

Activity Design

6
Activity Flow
  • Review the publication’s reported context, learner group, AI tool or curriculum, implementation process, and outcome evidence.
  • Map the case to AAB registry fields for comparison across educational levels and AI capability types.
  • Use the source publication and PDF for any manual verification before public registry release.
Human Vs AI Responsibilities
  • Human educators/researchers remain responsible for instructional design, supervision, interpretation, and ethical safeguards.
  • AI systems or AI concepts provide the learning object, support tool, evaluator, simulator, or automation context depending on the paper.
Scaffolding Strategies
  • Hands-on / experiential learning, Scenario / case-based learning
  • Registry extraction emphasizes explicit learning goals, observed outcomes, constraints, and safety limitations.

Observed Challenges

7
Educators Reported
  • AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.
  • Use with minors requires attention to privacy, consent, data minimization, and adult supervision.

Design Adaptations

8
Adaptations
  • Case classified under: Published empirical study.
  • Pedagogical pattern: Hands-on / experiential learning, Scenario / case-based learning.
  • Any additional adaptations should be verified against the full paper before public-facing publication.

Reported Outcomes

9
Engagement
  • Engagement evidence should be interpreted according to the source paper’s reported method and sample.
  • In this paper, we review how expert researchers conduct algorithm audits and how end users engage in auditing practices to propose five steps that, when incorporated into learning activities, can support young people in auditing algorithms.
Learning Signals
  • In this paper, we review how expert researchers conduct algorithm audits and how end users engage in auditing practices to propose five steps that, when incorporated into learning activities, can support young people in auditing algorithms.
  • We discuss the kind of scaffolds we provided to support youth in algorithm auditing and directions and challenges for integrating algorithm auditing into classroom activities.
Educators Reflection

While there is widespread interest in supporting young people to critically evaluate machine learning-powered systems, there is little research on how we can support them in inquiring about how these systems work and what their limitations and implications may be. Outside of K-12 education, an effective strategy in evaluating black-boxed systems is algorithm auditing—a method for understanding algorithmic systems’ opaque inner workings and external impacts from the outside in.

Ethical & Privacy Considerations

10
Privacy
  • Use age-appropriate framing and teacher/facilitator oversight for any classroom deployment.
  • Require human review of generated outputs and explicit guidance against over-reliance or answer copying.
  • Include bias, fairness, transparency, and social impact discussion as part of the learning design.

Evidence Type

11
Evidence
  • Activity documentation

Relevance to Research

12
Potential Research Use
  • Can be used as an AAB evidence record for cross-case comparison, standards drafting, and evidence-maturity mapping.
  • Supports identification of recurring patterns in AI literacy, AI education implementation, teacher preparation, assessment, and responsible AI learning.
Relevant Research Domains
  • Conceptual understanding
  • Engagement / motivation
  • Ethics and responsible use
  • Outreach / informal learning
  • Ethics / responsible AI education
  • Generative AI
  • ML concepts / supervised learning
  • Explainable AI / robustness

Case Status

13
Case Status
  • Completed

AAB Classification Tags

14
Age

Grades 9-12 (high school)

Setting

In-school (K-12)

AI Function

Generative AI, ML concepts / supervised learning, Explainable AI / robustness, Ethics / responsible AI

Pedagogy

Hands-on / experiential learning, Scenario / case-based learning

Risk Level

Medium

Data Sensitivity

Low to Medium

Source Publication

15
Title

Learning About Algorithm Auditing in Five Steps: Scaffolding How High School Youth Can Systematically and Critically Evaluate Machine Learning Applications

Authors
  • Luis Morales-Navarro
  • Yasmin B. Kafai
  • Lauren Vogelstein
  • Evelyn Yu
  • Danaë Metaxa
Venue

Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 39 No. 28, EAAI-25

Year

2025

DOI

10.1609/aaai.v39i28.35192

Source URL

https://ojs.aaai.org/index.php/AAAI/article/view/35192

PDF URL

https://ojs.aaai.org/index.php/AAAI/article/view/35192/37347

PDF Filename

029_Learning About Algorithm Auditing in Five Steps.pdf

Page Count

9

Abstract

While there is widespread interest in supporting young people to critically evaluate machine learning-powered systems, there is little research on how we can support them in inquiring about how these systems work and what their limitations and implications may be. Outside of K-12 education, an effective strategy in evaluating black-boxed systems is algorithm auditing—a method for understanding algorithmic systems’ opaque inner workings and external impacts from the outside in. In this paper, we review how expert researchers conduct algorithm audits and how end users engage in auditing practices to propose five steps that, when incorporated into learning activities, can support young people in auditing algorithms. We present a case study of a team of teenagers engaging with each step during an out-of-school workshop in which they audited peer-designed generative AI TikTok filters. We discuss the kind of scaffolds we provided to support youth in algorithm auditing and directions and challenges for integrating algorithm auditing into classroom activities. This paper contributes: (a) a conceptualization of five steps to scaffold algorithm auditing learning activities, and (b) examples of how youth engaged with each step during our pilot study.
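The abstract's core idea of auditing "from the outside in" can be illustrated with a minimal sketch: probe an opaque system with systematically varied inputs and compare outcomes across input groups. Everything here is hypothetical for illustration (the toy classifier and probe names are stand-ins, not the TikTok filters or the five steps described in the paper).

```python
# Minimal outside-in audit sketch: the auditor sees only inputs and outputs.
from collections import defaultdict

def black_box_classifier(text: str) -> str:
    """Hypothetical stand-in for an opaque ML system under audit."""
    return "approved" if len(text) <= 20 else "flagged"

def audit(system, probes_by_group):
    """Query the system with varied probes and tally outcomes per input group."""
    tallies = defaultdict(lambda: defaultdict(int))
    for group, probes in probes_by_group.items():
        for probe in probes:
            tallies[group][system(probe)] += 1
    return {group: dict(counts) for group, counts in tallies.items()}

# Systematically varied probes, organized by a hypothesis about the system.
probes = {
    "short names": ["Ana", "Bo", "Kim"],
    "long names": ["Alexandrina Konstantinova", "Maximiliano Fernandez"],
}
print(audit(black_box_classifier, probes))
```

Comparing the per-group tallies surfaces a behavioral disparity (here, longer names are always flagged) without any access to the model's internals, which is the basic logic of an end-user algorithm audit.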

Transferability

16
Best Fit Contexts
  • In-school (K-12)
Likely Failure Modes
  • AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.
  • Use with minors requires attention to privacy, consent, data minimization, and adult supervision.

Cost And Operations

17
Time Cost Notes

Not specified in extracted text unless noted in duration field.

Staffing Notes

Requires educators/researchers/facilitators with sufficient AI literacy and pedagogy knowledge for the target learners.

Infra Notes

Infrastructure depends on AI tool type, learner devices, data access, and institutional policy context.

Extraction Notes

18
Confidence

High

Missing Information
  • group_size
  • duration
Reasoning Limits

This entry was automatically extracted from the PDF text and manifest metadata. Fields should be manually verified before public registry publication, especially group size, location, duration, and outcome claims.

Duplicate Check Against Uploaded Cases JSON
Closest Existing Title

Embracing the future of Artificial Intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education

Similarity Score

0.372

Likely Duplicate

false

Registry Metadata

19
Case ID
AAB-CASE-2026-RV-088
Publication Status
Published empirical study
Tags
case, 9-12, Not specified in extracted text, In-school (K-12), Generative AI, K-12, algorithm auditing, AI ethics, ML concepts / supervised learning, Outreach / informal learning, Ethics / responsible AI education