Making Transparency Advocates: An Educational Approach Towards Better Algorithmic Transparency in Practice
Concerns about the risks and harms posed by artificial intelligence (AI) have resulted in significant study into algorithmic transparency, giving rise to a sub-field known as Explainable AI (XAI). Unfortunately, despite a decade of development in XAI, an existential challenge remains: progress in research has not been fully translated into the actual implementation of algorithmic transparency by organizations.
Implementation
Source publication / research team or educational organization described in paper
Learning context
Professional / adult learning
AI role
Learning object / concept model
Outcome signal
AI literacy
Registry Facets
- Adult / workforce
- AI literacy
- algorithmic transparency
- ethics
- Explainable AI / robustness
- Ethics / responsible AI
- Ethics / responsible AI education
- Adult learners / professionals
- Researchers
- Professional / adult learning
- Activity documentation
- Ethics and responsible use
Implementing Organization
Source publication / research team or educational organization described in paper
United States
Researchers, educators, instructors, or facilitators as described in the source publication
Learning Context
- Professional / adult learning
Workshop / professional learning activity
Not specified in extracted text
and Stoyanovich 2023). While these initiatives have been primarily focused on K-12 students and emphasized the technical aspects of AI (i.e., computer programming) (Domínguez Figaredo and Stoyanovich 2023; Will
Explainable AI / robustness, Ethics / responsible AI
- The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.
Learner Profile
Adult / workforce
Mixed or not explicitly specified; infer from target learner group and intervention design.
Varies by intervention; not specified unless the paper explicitly describes prerequisites.
Educational Intent
- Document the AI education intervention, course, tool, or resource described in the source publication.
- Extract the learner context, AI role, pedagogy, outcomes, and constraints for AAB registry comparison.
- Support AAB comparison across AI literacy, AI education, teacher training, higher education, and workforce contexts.
- Capture evidence maturity, transferability, and limitations rather than treating the publication as product endorsement.
- Not an AAB endorsement of the tool, curriculum, provider, or result.
- Not a direct replication record unless the source paper reports implementation details sufficient for replication.
AI Tool Description
Explainable AI / robustness, Ethics / responsible AI
Not specified in extracted text
- Learning object / concept model
- Primary interaction pattern inferred from publication: Ethics / responsible AI education.
- AI capability focus: Explainable AI / robustness, Ethics / responsible AI.
- Include bias, fairness, transparency, and social impact discussion as part of the learning design.
Activity Design
- Review the publication’s reported context, learner group, AI tool or curriculum, implementation process, and outcome evidence.
- Map the case to AAB registry fields for comparison across educational levels and AI capability types.
- Use the source publication and PDF for any manual verification before public registry release.
- Human educators/researchers remain responsible for instructional design, supervision, interpretation, and ethical safeguards.
- AI systems or AI concepts provide the learning object, support tool, evaluator, simulator, or automation context depending on the paper.
- Hands-on / experiential learning
- Registry extraction emphasizes explicit learning goals, observed outcomes, constraints, and safety limitations.
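The mapping described above (publication details into AAB registry fields) can be sketched as a minimal record structure. The field names below are illustrative assumptions, not the actual AAB schema; the values are taken directly from this entry.

```python
# Hypothetical sketch of an AAB-style registry record.
# Field names are assumptions for illustration; values come from this entry.
record = {
    "title": ("Making Transparency Advocates: An Educational Approach "
              "Towards Better Algorithmic Transparency in Practice"),
    "authors": ["Andrew Bell", "Julia Stoyanovich"],
    "year": 2025,
    "doi": "10.1609/aaai.v39i28.35165",
    "learner_profile": "Adult / workforce",
    "learning_context": "Professional / adult learning",
    "ai_capability_focus": ["Explainable AI / robustness",
                            "Ethics / responsible AI"],
    "pedagogical_pattern": "Hands-on / experiential learning",
    "evidence_type": "Activity documentation",
    "case_status": "Completed",
    # Manual verification is still pending before public registry release.
    "verified": False,
}

# Surface any boolean fields still flagged as unverified.
needs_review = [key for key, value in record.items() if value is False]
print(needs_review)
```

A structured record like this is what makes the cross-case comparison mentioned above possible: identical field names across entries allow filtering by learner profile, capability focus, or verification status.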
Observed Challenges
- The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.
Design Adaptations
- Case classified under: Published curriculum / implementation paper.
- Pedagogical pattern: Hands-on / experiential learning.
- Any additional adaptations should be verified against the full paper before public-facing publication.
Reported Outcomes
- Engagement evidence should be interpreted according to the source paper’s reported method and sample.
- In this work, we test an approach for addressing the challenge by creating transparency advocates, or motivated individuals within organizations who drive a ground-up cultural shift towards improved algorithmic transparency.
- We delivered the workshop to professionals across two separate domains to improve their algorithmic transparency literacy and willingness to advocate for change.
Ethical & Privacy Considerations
- Include bias, fairness, transparency, and social impact discussion as part of the learning design.
Evidence Type
- Activity documentation
Relevance to Research
- Can be used as an AAB evidence record for cross-case comparison, standards drafting, and evidence-maturity mapping.
- Supports identification of recurring patterns in AI literacy, AI education implementation, teacher preparation, assessment, and responsible AI learning.
Case Status
- Completed
AAB Classification Tags
Adult / workforce
Professional / adult learning
Explainable AI / robustness, Ethics / responsible AI
Hands-on / experiential learning
Medium
Low to Medium
Source Publication
Making Transparency Advocates: An Educational Approach Towards Better Algorithmic Transparency in Practice
- Andrew Bell
- Julia Stoyanovich
Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 39 No. 28, EAAI-25
2025
10.1609/aaai.v39i28.35165
https://ojs.aaai.org/index.php/AAAI/article/view/35165
https://ojs.aaai.org/index.php/AAAI/article/view/35165/37320
002_Making Transparency Advocates_ An Educational Approach Towards Better Algorithmic Transparency in Practice.pdf
9
Concerns about the risks and harms posed by artificial intelligence (AI) have resulted in significant study into algorithmic transparency, giving rise to a sub-field known as Explainable AI (XAI). Unfortunately, despite a decade of development in XAI, an existential challenge remains: progress in research has not been fully translated into the actual implementation of algorithmic transparency by organizations. In this work, we test an approach for addressing the challenge by creating transparency advocates, or motivated individuals within organizations who drive a ground-up cultural shift towards improved algorithmic transparency. Over several years, we created an open-source educational workshop on algorithmic transparency and advocacy. We delivered the workshop to professionals across two separate domains to improve their algorithmic transparency literacy and willingness to advocate for change. In the weeks following the workshop, participants applied what they learned, such as speaking up for algorithmic transparency at an organization-wide AI strategy meeting. We also make two broader observations: first, advocacy is not a monolith and can be broken down into different levels. Second, individuals’ willingness for advocacy is affected by their professional field. For example, news and media professionals may be more likely to advocate for algorithmic transparency than those working at technology start-ups.
Transferability
- Professional / adult learning
- The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.
Cost And Operations
Not specified in extracted text unless noted in duration field.
Requires educators/researchers/facilitators with sufficient AI literacy and pedagogy knowledge for the target learners.
Infrastructure depends on AI tool type, learner devices, data access, and institutional policy context.
Extraction Notes
High
- duration
This entry was automatically extracted from the PDF text and manifest metadata. Fields should be manually verified before public registry publication, especially group size, location, duration, and outcome claims.
