Maestro: A Gamified Platform for Teaching AI Robustness
Although the prevention of AI vulnerabilities is critical to preserve the safety and privacy of users and businesses, educational tools for robust AI are still underdeveloped worldwide. We present the design, implementation, and assessment of Maestro.
Implementation
Source publication / research team or educational organization described in paper
Learning context
Higher education
AI role
Evaluator
Outcome signal
Conceptual understanding
Registry Facets
- Higher education
- AI robustness
- Gamified learning
- Explainable AI / robustness
- Assessment / tutoring analytics
- Curriculum / course design
- Learning tool / resource design
- Assessment support
- Students
- Activity documentation
- Conceptual understanding
- Engagement / motivation
- Assessment / feedback quality
Implementing Organization
Source publication / research team or educational organization described in paper
USA
Researchers, educators, instructors, or facilitators as described in the source publication
Learning Context
- Higher education
Course implementation or course design
Not specified in extracted text
Not specified in extracted text
Explainable AI / robustness, Assessment / tutoring analytics
- The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.
Learner Profile
Higher education
Mixed or not explicitly specified; infer from target learner group and intervention design.
Varies by intervention; not specified unless the paper explicitly describes prerequisites.
Educational Intent
- Document the AI education intervention, course, tool, or resource described in the source publication.
- Extract the learner context, AI role, pedagogy, outcomes, and constraints for AAB registry comparison.
- Although the prevention of AI vulnerabilities is critical to preserve the safety and privacy of users and businesses, educational tools for robust AI are still underdeveloped worldwide.
- Support AAB comparison across AI literacy, AI education, teacher training, higher education, and workforce contexts.
- Capture evidence maturity, transferability, and limitations rather than treating the publication as product endorsement.
- Not an AAB endorsement of the tool, curriculum, provider, or result.
- Not a direct replication record unless the source paper reports implementation details sufficient for replication.
AI Tool Description
Explainable AI / robustness, Assessment / tutoring analytics
Not specified in extracted text
- Evaluator
- Primary interaction pattern inferred from publication: Curriculum / course design, Learning tool / resource design, Assessment support.
- AI capability focus: Explainable AI / robustness, Assessment / tutoring analytics.
- Apply standard AAB safeguards: privacy, transparency, human oversight, and documentation of limitations.
Activity Design
- Review the publication’s reported context, learner group, AI tool or curriculum, implementation process, and outcome evidence.
- Map the case to AAB registry fields for comparison across educational levels and AI capability types.
- Use the source publication and PDF for any manual verification before public registry release.
- Human educators/researchers remain responsible for instructional design, supervision, interpretation, and ethical safeguards.
- AI systems or AI concepts provide the learning object, support tool, evaluator, simulator, or automation context depending on the paper.
- Game-based learning, Scenario / case-based learning
- Registry extraction emphasizes explicit learning goals, observed outcomes, constraints, and safety limitations.
Observed Challenges
- The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.
Design Adaptations
- Case classified under: Published empirical study.
- Pedagogical pattern: Game-based learning, Scenario / case-based learning.
- Any additional adaptations should be verified against the full paper before public-facing publication.
Reported Outcomes
- Engagement evidence should be interpreted according to the source paper’s reported method and sample.
- Maestro provides goal-based scenarios where college students are exposed to challenging life-inspired assignments in a competitive programming environment.
- We assessed Maestro's influence on students' engagement, motivation, and learning success in robust AI.
Ethical & Privacy Considerations
- Apply standard AAB safeguards: privacy, transparency, human oversight, and documentation of limitations.
Evidence Type
- Activity documentation
Relevance to Research
- Can be used as an AAB evidence record for cross-case comparison, standards drafting, and evidence-maturity mapping.
- Supports identification of recurring patterns in AI literacy, AI education implementation, teacher preparation, assessment, and responsible AI learning.
- Conceptual understanding
- Engagement / motivation
- Assessment / feedback quality
- Curriculum / course design
- Learning tool / resource design
- Assessment support
- Explainable AI / robustness
- Assessment / tutoring analytics
Case Status
- Completed
AAB Classification Tags
Higher education
Higher education
Explainable AI / robustness, Assessment / tutoring analytics
Game-based learning, Scenario / case-based learning
Low to Medium
Medium
Source Publication
Maestro: A Gamified Platform for Teaching AI Robustness
- Margarita Geleta
- Jiacen Xu
- Manikanta Loya
- Junlin Wang
- Sameer Singh
- Zhou Li
- Sergio Gago-Masague
Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37 No. 13, EAAI-23
2023
10.1609/aaai.v37i13.26878
https://ojs.aaai.org/index.php/AAAI/article/view/26878
https://ojs.aaai.org/index.php/AAAI/article/view/26878/26650
069_Maestro_ A Gamified Platform for Teaching AI Robustness.pdf
9
Although the prevention of AI vulnerabilities is critical to preserve the safety and privacy of users and businesses, educational tools for robust AI are still underdeveloped worldwide. We present the design, implementation, and assessment of Maestro. Maestro is an effective open-source game-based platform that contributes to the advancement of robust AI education. Maestro provides goal-based scenarios where college students are exposed to challenging life-inspired assignments in a competitive programming environment. We assessed Maestro's influence on students' engagement, motivation, and learning success in robust AI. This work also provides insights into the design features of online learning tools that promote active learning opportunities in the robust AI domain. We analyzed the reflection responses (measured with Likert scales) of 147 undergraduate students using Maestro in two quarterly college courses in AI. According to the results, students who felt the acquisition of new skills in robust AI tended to appreciate Maestro highly and scored highly on material consolidation, curiosity, and mastery in robust AI. Moreover, the leaderboard, our key gamification element in Maestro, has effectively contributed to students' engagement and learning. Results also indicate that Maestro can be effectively adapted to any course length and depth without losing its educational quality.
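The extracted record contains no code, and the paper's assignment APIs are not described here. As an illustration only of the kind of adversarial-robustness exercise a platform like Maestro might assign, the sketch below crafts an FGSM-style adversarial example against a hand-built logistic model; all names and values are hypothetical and are not drawn from the source publication.

```python
import numpy as np

def predict_proba(x, w, b):
    """Probability of class 1 under a simple logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: nudge x in the direction that
    increases the loss. For logistic regression the input gradient
    of the cross-entropy loss is (p - y) * w."""
    p = predict_proba(x, w, b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy model and a clean input with true label 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])

clean_p = predict_proba(x, w, b)     # ~0.82, classified as class 1
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=1.0)
adv_p = predict_proba(x_adv, w, b)   # pushed below 0.5, prediction flips
```

In a competitive-programming setting such as the one the abstract describes, students would typically submit attack or defense code like this against held-out models, with a leaderboard ranking submissions.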
Transferability
- Higher education
- The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.
Cost And Operations
Not specified in extracted text unless noted in duration field.
Requires educators/researchers/facilitators with sufficient AI literacy and pedagogy knowledge for the target learners.
Infrastructure depends on AI tool type, learner devices, data access, and institutional policy context.
Extraction Notes
High
- group_size
- duration
This entry was automatically extracted from the PDF text and manifest metadata. Fields should be manually verified before public registry publication, especially group size, location, duration, and outcome claims.
