Learning Affects Trust: Design Recommendations and Concepts for Teaching Children—and Nearly Anyone—about Conversational Agents
Conversational agents are rapidly becoming commonplace. However, since these systems are typically blackboxed, users—including vulnerable populations like children—often do not understand them deeply.
Implementation
Source publication / research team or educational organization described in paper
Learning context
In-school (K-12)
AI role
Learning object / concept model
Outcome signal
Conceptual understanding
Registry Facets
- K-12
- Children
- conversational agents
- trust
- LLM/Chat
- Curriculum / course design
- Outreach / informal learning
- Students
- In-school (K-12)
- Activity documentation
- Conceptual understanding
Implementing Organization
Source publication / research team or educational organization described in paper
Not specified in extracted text
Researchers, educators, instructors, or facilitators as described in the source publication
Learning Context
- In-school (K-12)
Workshop / professional learning activity
5 hours each day
In total, 49 participants completed at least 1 of the 3 surveys. There were 27 children (age avg.=13.96, SD=1.829) and 19 parents (age avg.=46.35, SD=11.07) on the pre-survey. From the same survey, 23 participants were from WEIRD countries (age avg.=26.45, SD=19.24) and 23 were from non-WEIRD countries (age avg.=25.48, SD=15.18).
LLM/Chat
- AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.
- Use with minors requires attention to privacy, consent, data minimization, and adult supervision.
Learner Profile
K-12
Mixed or not explicitly specified; inferred from the target learner group and intervention design.
Varies by intervention; not specified unless the paper explicitly describes prerequisites.
Educational Intent
- Document the AI education intervention, course, tool, or resource described in the source publication.
- Extract the learner context, AI role, pedagogy, outcomes, and constraints for AAB registry comparison.
- Support AAB comparison across AI literacy, AI education, teacher training, higher education, and workforce contexts.
- Capture evidence maturity, transferability, and limitations rather than treating the publication as product endorsement.
- Not an AAB endorsement of the tool, curriculum, provider, or result.
- Not a direct replication record unless the source paper reports implementation details sufficient for replication.
AI Tool Description
LLM/Chat
Not specified in extracted text
- Learning object / concept model
- Primary interaction pattern inferred from publication: Curriculum / course design, Outreach / informal learning.
- AI capability focus: LLM/Chat.
- Use age-appropriate framing and teacher/facilitator oversight for any classroom deployment.
- Require human review of generated outputs and explicit guidance against over-reliance or answer copying.
Activity Design
- Review the publication’s reported context, learner group, AI tool or curriculum, implementation process, and outcome evidence.
- Map the case to AAB registry fields for comparison across educational levels and AI capability types.
- Use the source publication and PDF for any manual verification before public registry release.
- Human educators/researchers remain responsible for instructional design, supervision, interpretation, and ethical safeguards.
- AI systems or AI concepts provide the learning object, support tool, evaluator, simulator, or automation context depending on the paper.
- Hands-on / experiential learning
- Registry extraction emphasizes explicit learning goals, observed outcomes, constraints, and safety limitations.
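The field mapping described above can be sketched as a simple record structure. This is a minimal illustration only: the class and field names are hypothetical (the actual AAB registry schema is not specified in this entry), while the values are taken from the fields documented here.

```python
from dataclasses import dataclass, field


@dataclass
class RegistryCase:
    """Hypothetical AAB registry record; field names are illustrative."""
    title: str
    learning_context: str
    ai_role: str
    ai_capability: str
    pedagogy: str
    outcome_signal: str
    evidence_type: str
    tags: list[str] = field(default_factory=list)


# Populate with values extracted from this entry.
case = RegistryCase(
    title=("Learning Affects Trust: Design Recommendations and Concepts "
           "for Teaching Children—and Nearly Anyone—about Conversational Agents"),
    learning_context="In-school (K-12)",
    ai_role="Learning object / concept model",
    ai_capability="LLM/Chat",
    pedagogy="Hands-on / experiential learning",
    outcome_signal="Conceptual understanding",
    evidence_type="Activity documentation",
    tags=["K-12", "Children", "conversational agents", "trust"],
)
```

A flat record like this supports the cross-case comparison the registry aims for: each case maps onto the same fields regardless of educational level or AI capability type.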
Observed Challenges
- AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.
- Use with minors requires attention to privacy, consent, data minimization, and adult supervision.
Design Adaptations
- Case classified under: Published curriculum / implementation paper.
- Pedagogical pattern: Hands-on / experiential learning.
- Any additional adaptations should be verified against the full paper before public-facing publication.
Reported Outcomes
- Engagement evidence should be interpreted according to the source paper’s reported method and sample.
- Participants' perceptions of agents—specifically their partner models and trust—changed after the workshops.
- When participants discussed changes in trust of agents, they most often mentioned learning something, such as where agents obtained information, what agents do with this information, and how agents are programmed.
Ethical & Privacy Considerations
- Use age-appropriate framing and teacher/facilitator oversight for any classroom deployment.
- Require human review of generated outputs and explicit guidance against over-reliance or answer copying.
Evidence Type
- Activity documentation
Relevance to Research
- Can be used as an AAB evidence record for cross-case comparison, standards drafting, and evidence-maturity mapping.
- Supports identification of recurring patterns in AI literacy, AI education implementation, teacher preparation, assessment, and responsible AI learning.
- Conceptual understanding
- Curriculum / course design
- Outreach / informal learning
- LLM/Chat
Case Status
- Completed
AAB Classification Tags
K-12
In-school (K-12)
LLM/Chat
Hands-on / experiential learning
Low to Medium
Medium
Source Publication
Learning Affects Trust: Design Recommendations and Concepts for Teaching Children—and Nearly Anyone—about Conversational Agents
- Jessica Van Brummelen
- Mingyan Claire Tian
- Maura Kelleher
- Nghi Hoang Nguyen
Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37 No. 13, EAAI-23
2023
10.1609/aaai.v37i13.26883
https://ojs.aaai.org/index.php/AAAI/article/view/26883
https://ojs.aaai.org/index.php/AAAI/article/view/26883/26655
074_Learning Affects Trust_ Design Recommendations and Concepts for Teaching Children #U2014 and Nearly Anyone #U2014 about Conversational Agents.pdf
9
Conversational agents are rapidly becoming commonplace. However, since these systems are typically blackboxed, users—including vulnerable populations, like children—often do not understand them deeply. For example, they might assume agents are overly intelligent, leading to frustration and distrust. Users may also overtrust agents, and thus overshare personal information or rely heavily on agents' advice. Despite this, little research investigates users' perceptions of conversational agents in-depth, and even less investigates how education might change these perceptions to be more healthy. We present workshops with associated educational conversational AI concepts to encourage healthier understanding of agents. Through studies with the curriculum with children and parents from various countries, we found participants' perceptions of agents—specifically their partner models and trust—changed. When participants discussed changes in trust of agents, we found they most often mentioned learning something. For example, they frequently mentioned learning where agents obtained information, what agents do with this information and how agents are programmed. Based on the results, we developed recommendations for teaching conversational agent concepts, including emphasizing the concepts students found most challenging, like training, turn-taking and terminology; supplementing agent development activities with related learning activities; fostering appropriate levels of trust towards agents; and fostering accurate partner models of agents. Through such pedagogy, students can learn to better understand conversational AI and what it means to have it in the world.
Transferability
- In-school (K-12)
- AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.
- Use with minors requires attention to privacy, consent, data minimization, and adult supervision.
Cost And Operations
Not specified in extracted text unless noted in duration field.
Requires educators/researchers/facilitators with sufficient AI literacy and pedagogy knowledge for the target learners.
Infrastructure depends on AI tool type, learner devices, data access, and institutional policy context.
Extraction Notes
High
This entry was automatically extracted from the PDF text and manifest metadata. Fields should be manually verified before public registry publication, especially group size, location, duration, and outcome claims.
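The pre-publication check described above can be sketched as a small helper that flags the review-critical fields. The field names, the sentinel string, and the function itself are all hypothetical illustrations of the verification step, not part of any documented AAB tooling.

```python
# Hypothetical pre-publication check: flag the fields that the extraction
# notes say must be manually verified before public registry release.
FIELDS_REQUIRING_REVIEW = ("group_size", "location", "duration", "outcome_claims")

# Sentinel used throughout this entry for fields the extractor could not fill.
MISSING = "Not specified in extracted text"


def fields_to_verify(record: dict) -> list[str]:
    """Return the review-critical fields that are empty or unextracted."""
    flagged = []
    for name in FIELDS_REQUIRING_REVIEW:
        value = record.get(name)
        if value is None or value == MISSING:
            flagged.append(name)
    return flagged


# Example drawn loosely from this entry's fields.
record = {
    "group_size": None,
    "location": "Not specified in extracted text",
    "duration": "5 hours each day",
    "outcome_claims": "partner models and trust changed",
}
print(fields_to_verify(record))  # → ['group_size', 'location']
```

Fields that pass this check still warrant a manual read against the source PDF; the helper only surfaces obvious gaps, it cannot validate a stated group size or outcome claim.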
