Case Report · Published curriculum / implementation paper · 2025
AAB-CASE-2026-RV-062

Making Transparency Advocates: An Educational Approach Towards Better Algorithmic Transparency in Practice

Concerns about the risks and harms posed by artificial intelligence (AI) have resulted in significant study into algorithmic transparency, giving rise to a sub-field known as Explainable AI (XAI). Unfortunately, despite a decade of development in XAI, an existential challenge remains: progress in research has not been fully translated into the actual implementation of algorithmic transparency by organizations.

This page documents an AI literacy or AI education case for registry purposes. It is descriptive and does not imply AAB endorsement of any specific tool, provider, or intervention.
01

Implementation

Source publication / research team or educational organization described in paper

02

Learning context

Professional / adult learning

03

AI role

Learning object / concept model

04

Outcome signal

AI literacy

Registry Facets

0
Education Level
  • Adult / workforce
Subject Area
  • AI literacy
  • algorithmic transparency
  • ethics
  • Explainable AI / robustness
  • Ethics / responsible AI
Use Case Type
  • Ethics / responsible AI education
Stakeholder Group
  • Adult learners / professionals
  • Researchers
AI Capability Type
  • Explainable AI / robustness
  • Ethics / responsible AI
Implementation Model
  • Professional / adult learning
Evidence Type
  • Activity documentation
Outcomes Domain
  • AI literacy
  • Ethics and responsible use

Implementing Organization

1
Organization Type

Source publication / research team or educational organization described in paper

Location

United States

Primary Facilitator Role

Researchers, educators, instructors, or facilitators as described in the source publication

Learning Context

2
Setting Type
  • Professional / adult learning
Session Format

Workshop / professional learning activity

Duration

Not specified in extracted text

Group Size

Not specified in extracted text

Devices

Not specified in extracted text

Constraints
  • The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.

Learner Profile

3
Age Range

Adult / workforce

Prior AI Exposure Assumed

Mixed or not explicitly specified; infer from target learner group and intervention design.

Prior Programming Background Assumed

Varies by intervention; not specified unless the paper explicitly describes prerequisites.

Educational Intent

4
Primary Learning Goals
  • Document the AI education intervention, course, tool, or resource described in the source publication.
  • Extract the learner context, AI role, pedagogy, outcomes, and constraints for AAB registry comparison.
Secondary Learning Goals
  • Support AAB comparison across AI literacy, AI education, teacher training, higher education, and workforce contexts.
  • Capture evidence maturity, transferability, and limitations rather than treating the publication as product endorsement.
What This Was Not
  • Not an AAB endorsement of the tool, curriculum, provider, or result.
  • Not a direct replication record unless the source paper reports implementation details sufficient for replication.

AI Tool Description

5
Tool Type

Explainable AI / robustness, Ethics / responsible AI

Languages

Not specified in extracted text

AI Role
  • Learning object / concept model
User Interaction Model
  • Primary interaction pattern inferred from publication: Ethics / responsible AI education.
  • AI capability focus: Explainable AI / robustness, Ethics / responsible AI.
Safeguards
  • Include bias, fairness, transparency, and social impact discussion as part of the learning design.

Activity Design

6
Activity Flow
  • Review the publication’s reported context, learner group, AI tool or curriculum, implementation process, and outcome evidence.
  • Map the case to AAB registry fields for comparison across educational levels and AI capability types.
  • Use the source publication and PDF for any manual verification before public registry release.
Human Vs AI Responsibilities
  • Human educators/researchers remain responsible for instructional design, supervision, interpretation, and ethical safeguards.
  • AI systems or AI concepts provide the learning object, support tool, evaluator, simulator, or automation context depending on the paper.
Scaffolding Strategies
  • Hands-on / experiential learning
  • Registry extraction emphasizes explicit learning goals, observed outcomes, constraints, and safety limitations.

Observed Challenges

7
Educators Reported
  • The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.

Design Adaptations

8
Adaptations
  • Case classified under: Published curriculum / implementation paper.
  • Pedagogical pattern: Hands-on / experiential learning.
  • Any additional adaptations should be verified against the full paper before public-facing publication.

Reported Outcomes

9
Engagement
  • Engagement evidence should be interpreted according to the source paper’s reported method and sample.
Learning Signals
  • In this work, we test an approach for addressing the challenge by creating transparency advocates, or motivated individuals within organizations who drive a ground-up cultural shift towards improved algorithmic transparency.
  • We delivered the workshop to professionals across two separate domains to improve their algorithmic transparency literacy and willingness to advocate for change.
Educators Reflection

Not specified in extracted text

Ethical & Privacy Considerations

10
Privacy
  • Include bias, fairness, transparency, and social impact discussion as part of the learning design.

Evidence Type

11
Evidence
  • Activity documentation

Relevance to Research

12
Potential Research Use
  • Can be used as an AAB evidence record for cross-case comparison, standards drafting, and evidence-maturity mapping.
  • Supports identification of recurring patterns in AI literacy, AI education implementation, teacher preparation, assessment, and responsible AI learning.
Relevant Research Domains
  • AI literacy
  • Ethics and responsible use
  • Ethics / responsible AI education
  • Explainable AI / robustness
  • Ethics / responsible AI

Case Status

13
Case Status
  • Completed

AAB Classification Tags

14
Age

Adult / workforce

Setting

Professional / adult learning

AI Function

Explainable AI / robustness, Ethics / responsible AI

Pedagogy

Hands-on / experiential learning

Risk Level

Medium

Data Sensitivity

Low to Medium

Source Publication

15
Title

Making Transparency Advocates: An Educational Approach Towards Better Algorithmic Transparency in Practice

Authors
  • Andrew Bell
  • Julia Stoyanovich
Venue

Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 39 No. 28, EAAI-25

Year

2025

Doi

10.1609/aaai.v39i28.35165

Source URL

https://ojs.aaai.org/index.php/AAAI/article/view/35165

Pdf URL

https://ojs.aaai.org/index.php/AAAI/article/view/35165/37320

Pdf Filename

002_Making Transparency Advocates_ An Educational Approach Towards Better Algorithmic Transparency in Practice.pdf

Page Count

9

Abstract

Concerns about the risks and harms posed by artificial intelligence (AI) have resulted in significant study into algorithmic transparency, giving rise to a sub-field known as Explainable AI (XAI). Unfortunately, despite a decade of development in XAI, an existential challenge remains: progress in research has not been fully translated into the actual implementation of algorithmic transparency by organizations. In this work, we test an approach for addressing the challenge by creating transparency advocates, or motivated individuals within organizations who drive a ground-up cultural shift towards improved algorithmic transparency. Over several years, we created an open-source educational workshop on algorithmic transparency and advocacy. We delivered the workshop to professionals across two separate domains to improve their algorithmic transparency literacy and willingness to advocate for change. In the weeks following the workshop, participants applied what they learned, such as speaking up for algorithmic transparency at an organization-wide AI strategy meeting. We also make two broader observations: first, advocacy is not a monolith and can be broken down into different levels. Second, individuals’ willingness for advocacy is affected by their professional field. For example, news and media professionals may be more likely to advocate for algorithmic transparency than those working at technology start-ups.

Transferability

16
Best Fit Contexts
  • Professional / adult learning
Likely Failure Modes
  • The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.

Cost And Operations

17
Time Cost Notes

Not specified in extracted text unless noted in duration field.

Staffing Notes

Requires educators/researchers/facilitators with sufficient AI literacy and pedagogy knowledge for the target learners.

Infra Notes

Infrastructure depends on AI tool type, learner devices, data access, and institutional policy context.

Extraction Notes

18
Confidence

High

Missing Information
  • duration
Reasoning Limits

This entry was automatically extracted from the PDF text and manifest metadata. Fields should be manually verified before public registry publication, especially group size, location, duration, and outcome claims.

Duplicate Check Against Uploaded Cases Json
Closest Existing Title

Integrating generative artificial intelligence in K-12 education: Examining teachers’ preparedness, practices, and barriers

Similarity Score

0.41

Likely Duplicate

false
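
The registry's similarity metric is not documented in this record; the following is a minimal sketch of how a title-based duplicate check could produce a closest-title, score, and duplicate flag like the fields above, using Python's standard-library difflib. The function names and the 0.85 threshold are assumptions for illustration, not the registry's actual implementation.

```python
from difflib import SequenceMatcher


def title_similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two titles after simple
    normalization (lowercase, collapsed whitespace)."""
    norm = lambda s: " ".join(s.lower().split())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()


def duplicate_check(candidate: str, existing: list[str],
                    threshold: float = 0.85) -> tuple[str, float, bool]:
    """Return (closest existing title, similarity score, likely-duplicate flag).

    The threshold is a hypothetical cutoff; a score like 0.41 would fall
    well below it and be flagged as not a duplicate.
    """
    best = max(existing, key=lambda t: title_similarity(candidate, t))
    score = title_similarity(candidate, best)
    return best, round(score, 2), score >= threshold
```

Under this sketch, a low score against the closest existing title yields `likely_duplicate = false`, matching the shape of the fields recorded here.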

Registry Metadata

19
Case ID
AAB-CASE-2026-RV-062
Publication Status
Published curriculum / implementation paper
Tags
  • case
  • Adult / workforce
  • United States
  • Professional / adult learning
  • Explainable AI / robustness
  • AI literacy
  • algorithmic transparency
  • ethics
  • Ethics / responsible AI
  • Ethics / responsible AI education