Learning About Algorithm Auditing in Five Steps: Scaffolding How High School Youth Can Systematically and Critically Evaluate Machine Learning Applications
While there is widespread interest in supporting young people to critically evaluate machine learning-powered systems, there is little research on how we can support them in inquiring about how these systems work and what their limitations and implications may be. Outside of K-12 education, an effective strategy in evaluating black-boxed systems is algorithm auditing—a method for understanding algorithmic systems’ opaque inner workings and external impacts from the outside in.
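The outside-in approach described above can be sketched in a few lines: probe a black-boxed system with systematically varied inputs and compare outcome rates across groups. The `toy_filter` model and the probe sets below are illustrative stand-ins, not artifacts from the paper.

```python
# Minimal sketch of an outside-in algorithm audit: query a black box with
# systematically varied inputs and compare outcome rates across groups.

def toy_filter(image_brightness: float) -> bool:
    """Stand-in black box: a filter that 'works' only on bright images."""
    return image_brightness > 0.5

def audit(model, probes_by_group):
    """Record each group's success rate on the same task."""
    return {
        group: sum(1 for x in probes if model(x)) / len(probes)
        for group, probes in probes_by_group.items()
    }

# Hypothetical probe sets: identical task, inputs varied on one attribute.
rates = audit(toy_filter, {
    "bright_images": [0.6, 0.7, 0.8, 0.9],
    "dark_images": [0.1, 0.2, 0.3, 0.4],
})
print(rates)  # {'bright_images': 1.0, 'dark_images': 0.0}
```

A large gap between groups is the audit's evidence of disparate behavior, reached without any access to the model's internals.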
Implementation
Source publication / research team or educational organization described in paper
Learning context
Out-of-school / informal (K-12)
AI role
Learning object / concept model
Outcome signal
Conceptual understanding
Registry Facets
- 9-12
- K-12
- algorithm auditing
- AI ethics
- Generative AI
- ML concepts / supervised learning
- Outreach / informal learning
- Ethics / responsible AI education
- Students
- Researchers
- Explainable AI / robustness
- Ethics / responsible AI
- Out-of-school / informal (K-12)
- Activity documentation
- Conceptual understanding
- Engagement / motivation
- Ethics and responsible use
Implementing Organization
Source publication / research team or educational organization described in paper
Not specified in extracted text
Researchers, educators, instructors, or facilitators as described in the source publication
Learning Context
- Out-of-school / informal (K-12)
Workshop / professional learning activity
Not specified in extracted text
Not specified in extracted text
Generative AI, ML concepts / supervised learning, Explainable AI / robustness, Ethics / responsible AI
- AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.
- Use with minors requires attention to privacy, consent, data minimization, and adult supervision.
Learner Profile
9-12
Mixed or not explicitly specified; infer from target learner group and intervention design.
Varies by intervention; not specified unless the paper explicitly describes prerequisites.
Educational Intent
- Document the AI education intervention, course, tool, or resource described in the source publication.
- Extract the learner context, AI role, pedagogy, outcomes, and constraints for AAB registry comparison.
- Support AAB comparison across AI literacy, AI education, teacher training, higher education, and workforce contexts.
- Capture evidence maturity, transferability, and limitations rather than treating the publication as product endorsement.
- Not an AAB endorsement of the tool, curriculum, provider, or result.
- Not a direct replication record unless the source paper reports implementation details sufficient for replication.
AI Tool Description
Generative AI, ML concepts / supervised learning, Explainable AI / robustness, Ethics / responsible AI
Not specified in extracted text
- Learning object / concept model
- Primary interaction pattern inferred from publication: Outreach / informal learning, Ethics / responsible AI education.
- AI capability focus: Generative AI, ML concepts / supervised learning, Explainable AI / robustness, Ethics / responsible AI.
- Use age-appropriate framing and teacher/facilitator oversight for any classroom deployment.
- Require human review of generated outputs and explicit guidance against over-reliance or answer copying.
- Include bias, fairness, transparency, and social impact discussion as part of the learning design.
Activity Design
- Review the publication’s reported context, learner group, AI tool or curriculum, implementation process, and outcome evidence.
- Map the case to AAB registry fields for comparison across educational levels and AI capability types.
- Use the source publication and PDF for any manual verification before public registry release.
- Human educators/researchers remain responsible for instructional design, supervision, interpretation, and ethical safeguards.
- AI systems or AI concepts provide the learning object, support tool, evaluator, simulator, or automation context depending on the paper.
- Hands-on / experiential learning, Scenario / case-based learning
- Registry extraction emphasizes explicit learning goals, observed outcomes, constraints, and safety limitations.
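The mapping step above can be sketched as a small record structure. The field names here are illustrative assumptions for this entry, not the actual AAB schema.

```python
# Hypothetical sketch of mapping a published case onto registry fields for
# cross-case comparison. Field names are illustrative, not the AAB schema.
from dataclasses import dataclass, field

@dataclass
class RegistryRecord:
    title: str
    grade_band: str
    learning_context: str
    ai_capabilities: list = field(default_factory=list)
    pedagogy: list = field(default_factory=list)
    evidence_type: str = "Activity documentation"

record = RegistryRecord(
    title="Learning About Algorithm Auditing in Five Steps",
    grade_band="9-12",
    learning_context="Out-of-school workshop",
    ai_capabilities=["Generative AI", "ML concepts / supervised learning"],
    pedagogy=["Hands-on / experiential learning",
              "Scenario / case-based learning"],
)
print(record.evidence_type)  # Activity documentation
```

Keeping each case in one comparable shape is what enables the cross-case pattern analysis the registry aims for.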
Observed Challenges
- AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.
- Use with minors requires attention to privacy, consent, data minimization, and adult supervision.
Design Adaptations
- Case classified under: Published empirical study.
- Pedagogical pattern: Hands-on / experiential learning, Scenario / case-based learning.
- Any additional adaptations should be verified against the full paper before public-facing publication.
Reported Outcomes
- Engagement evidence should be interpreted according to the source paper’s reported method and sample.
- In this paper, we review how expert researchers conduct algorithm audits and how end users engage in auditing practices to propose five steps that, when incorporated into learning activities, can support young people in auditing algorithms.
- We discuss the kind of scaffolds we provided to support youth in algorithm auditing and directions and challenges for integrating algorithm auditing into classroom activities.
Ethical & Privacy Considerations
- Use age-appropriate framing and teacher/facilitator oversight for any classroom deployment.
- Require human review of generated outputs and explicit guidance against over-reliance or answer copying.
- Include bias, fairness, transparency, and social impact discussion as part of the learning design.
Evidence Type
- Activity documentation
Relevance to Research
- Can be used as an AAB evidence record for cross-case comparison, standards drafting, and evidence-maturity mapping.
- Supports identification of recurring patterns in AI literacy, AI education implementation, teacher preparation, assessment, and responsible AI learning.
- Conceptual understanding
- Engagement / motivation
- Ethics and responsible use
- Outreach / informal learning
- Ethics / responsible AI education
- Generative AI
- ML concepts / supervised learning
- Explainable AI / robustness
Case Status
- Completed
AAB Classification Tags
9-12
Out-of-school / informal (K-12)
Generative AI, ML concepts / supervised learning, Explainable AI / robustness, Ethics / responsible AI
Hands-on / experiential learning, Scenario / case-based learning
Medium
Low to Medium
Source Publication
Learning About Algorithm Auditing in Five Steps: Scaffolding How High School Youth Can Systematically and Critically Evaluate Machine Learning Applications
- Luis Morales-Navarro
- Yasmin B. Kafai
- Lauren Vogelstein
- Evelyn Yu
- Danaë Metaxa
Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 39 No. 28, EAAI-25
2025
10.1609/aaai.v39i28.35192
https://ojs.aaai.org/index.php/AAAI/article/view/35192
https://ojs.aaai.org/index.php/AAAI/article/view/35192/37347
029_Learning About Algorithm Auditing in Five Steps.pdf
9
While there is widespread interest in supporting young people to critically evaluate machine learning-powered systems, there is little research on how we can support them in inquiring about how these systems work and what their limitations and implications may be. Outside of K-12 education, an effective strategy in evaluating black-boxed systems is algorithm auditing—a method for understanding algorithmic systems’ opaque inner workings and external impacts from the outside in. In this paper, we review how expert researchers conduct algorithm audits and how end users engage in auditing practices to propose five steps that, when incorporated into learning activities, can support young people in auditing algorithms. We present a case study of a team of teenagers engaging with each step during an out-of-school workshop in which they audited peer-designed generative AI TikTok filters. We discuss the kind of scaffolds we provided to support youth in algorithm auditing and directions and challenges for integrating algorithm auditing into classroom activities. This paper contributes: (a) a conceptualization of five steps to scaffold algorithm auditing learning activities, and (b) examples of how youth engaged with each step during our pilot study.
Transferability
- Out-of-school / informal (K-12)
- AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.
- Use with minors requires attention to privacy, consent, data minimization, and adult supervision.
Cost And Operations
Not specified in extracted text unless noted in duration field.
Requires educators/researchers/facilitators with sufficient AI literacy and pedagogy knowledge for the target learners.
Infrastructure depends on AI tool type, learner devices, data access, and institutional policy context.
Extraction Notes
High
- group_size
- duration
This entry was automatically extracted from the PDF text and manifest metadata. Fields should be manually verified before public registry publication, especially group size, location, duration, and outcome claims.
