Case Report · Published curriculum / implementation paper · 2023
AAB-CASE-2026-RV-103

Learning Affects Trust: Design Recommendations and Concepts for Teaching Children—and Nearly Anyone—about Conversational Agents

Conversational agents are rapidly becoming commonplace. However, since these systems are typically blackboxed, users—including vulnerable populations, like children—often do not understand them deeply.

This page documents an AI literacy or AI education case for registry purposes. It is descriptive and does not imply AAB endorsement of any specific tool, provider, or intervention.
01. Implementation

Source publication / research team or educational organization described in paper

02. Learning context

In-school (K-12)

03. AI role

Learning object / concept model

04. Outcome signal

Conceptual understanding

Registry Facets

Education Level
  • K-12
Subject Area
  • Children
  • Conversational agents
  • Trust
  • LLM/Chat
Use Case Type
  • Curriculum / course design
  • Outreach / informal learning
Stakeholder Group
  • Students
AI Capability Type
  • LLM/Chat
Implementation Model
  • In-school (K-12)
Evidence Type
  • Activity documentation
Outcomes Domain
  • Conceptual understanding

Implementing Organization

Organization Type

Source publication / research team or educational organization described in paper

Location

Not specified in extracted text

Primary Facilitator Role

Researchers, educators, instructors, or facilitators as described in the source publication

Learning Context

Setting Type
  • In-school (K-12)
Session Format

Workshop / professional learning activity

Duration

5 hours each day

Group Size

In total, 49 participants completed at least 1 of the 3 surveys. There were 27 children (age avg.=13.96, SD=1.829) and 19 parents (age avg.=46.35, SD=11.07) on the pre-survey. From the same survey, 23 participants were from WEIRD countries (age avg.=26.45, SD=19.24) and 23 were from non-WEIRD countries (age avg.=25.48, SD=15.18).

Devices

LLM/Chat

Constraints
  • AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.
  • Use with minors requires attention to privacy, consent, data minimization, and adult supervision.

Learner Profile

Age Range

K-12

Prior AI Exposure Assumed

Mixed or not explicitly specified; infer from target learner group and intervention design.

Prior Programming Background Assumed

Varies by intervention; not specified unless the paper explicitly describes prerequisites.

Educational Intent

Primary Learning Goals
  • Document the AI education intervention, course, tool, or resource described in the source publication.
  • Extract the learner context, AI role, pedagogy, outcomes, and constraints for AAB registry comparison.
  • Encourage a healthier understanding of conversational agents, including appropriate levels of trust and accurate partner models.
Secondary Learning Goals
  • Support AAB comparison across AI literacy, AI education, teacher training, higher education, and workforce contexts.
  • Capture evidence maturity, transferability, and limitations rather than treating the publication as product endorsement.
What This Was Not
  • Not an AAB endorsement of the tool, curriculum, provider, or result.
  • Not a direct replication record unless the source paper reports implementation details sufficient for replication.

AI Tool Description

Tool Type

LLM/Chat

Languages

Not specified in extracted text

AI Role
  • Learning object / concept model
User Interaction Model
  • Primary interaction pattern inferred from publication: Curriculum / course design, Outreach / informal learning.
  • AI capability focus: LLM/Chat.
Safeguards
  • Use age-appropriate framing and teacher/facilitator oversight for any classroom deployment.
  • Require human review of generated outputs and explicit guidance against over-reliance or answer copying.

Activity Design

Activity Flow
  • Review the publication’s reported context, learner group, AI tool or curriculum, implementation process, and outcome evidence.
  • Map the case to AAB registry fields for comparison across educational levels and AI capability types.
  • Use the source publication and PDF for any manual verification before public registry release.
Human Vs AI Responsibilities
  • Human educators/researchers remain responsible for instructional design, supervision, interpretation, and ethical safeguards.
  • AI systems or AI concepts provide the learning object, support tool, evaluator, simulator, or automation context depending on the paper.
Scaffolding Strategies
  • Hands-on / experiential learning
  • Registry extraction emphasizes explicit learning goals, observed outcomes, constraints, and safety limitations.

Observed Challenges

Educators Reported
  • AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.
  • Use with minors requires attention to privacy, consent, data minimization, and adult supervision.

Design Adaptations

Adaptations
  • Case classified under: Published curriculum / implementation paper.
  • Pedagogical pattern: Hands-on / experiential learning.
  • Any additional adaptations should be verified against the full paper before public-facing publication.

Reported Outcomes

Engagement
  • Engagement evidence should be interpreted according to the source paper’s reported method and sample.
  • Workshops engaged both children and parents from various countries in conversational agent activities.
Learning Signals
  • Participants’ perceptions of agents, specifically their partner models and trust, changed after the curriculum.
  • When participants discussed changes in trust, they most often mentioned learning something, such as where agents obtain information, what agents do with that information, and how agents are programmed.
Educators Reflection

Based on the results, the authors recommend emphasizing the concepts students found most challenging (training, turn-taking, and terminology), supplementing agent development activities with related learning activities, and fostering appropriate levels of trust towards agents and accurate partner models of them.

Ethical & Privacy Considerations

Privacy
  • Use age-appropriate framing and teacher/facilitator oversight for any classroom deployment.
  • Require human review of generated outputs and explicit guidance against over-reliance or answer copying.

Evidence Type

Evidence
  • Activity documentation

Relevance to Research

Potential Research Use
  • Can be used as an AAB evidence record for cross-case comparison, standards drafting, and evidence-maturity mapping.
  • Supports identification of recurring patterns in AI literacy, AI education implementation, teacher preparation, assessment, and responsible AI learning.
Relevant Research Domains
  • Conceptual understanding
  • Curriculum / course design
  • Outreach / informal learning
  • LLM/Chat

Case Status

Case Status
  • Completed

AAB Classification Tags

Age

K-12

Setting

In-school (K-12)

AI Function

LLM/Chat

Pedagogy

Hands-on / experiential learning

Risk Level

Low to Medium

Data Sensitivity

Medium

Source Publication

Title

Learning Affects Trust: Design Recommendations and Concepts for Teaching Children—and Nearly Anyone—about Conversational Agents

Authors
  • Jessica Van Brummelen
  • Mingyan Claire Tian
  • Maura Kelleher
  • Nghi Hoang Nguyen
Venue

Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37 No. 13, EAAI-23

Year

2023

Doi

10.1609/aaai.v37i13.26883

Source URL

https://ojs.aaai.org/index.php/AAAI/article/view/26883

Pdf URL

https://ojs.aaai.org/index.php/AAAI/article/view/26883/26655

Pdf Filename

074_Learning Affects Trust_ Design Recommendations and Concepts for Teaching Children #U2014 and Nearly Anyone #U2014 about Conversational Agents.pdf

Page Count

9

Abstract

Conversational agents are rapidly becoming commonplace. However, since these systems are typically blackboxed, users—including vulnerable populations, like children—often do not understand them deeply. For example, they might assume agents are overly intelligent, leading to frustration and distrust. Users may also overtrust agents, and thus overshare personal information or rely heavily on agents’ advice. Despite this, little research investigates users’ perceptions of conversational agents in-depth, and even less investigates how education might change these perceptions to be more healthy. We present workshops with associated educational conversational AI concepts to encourage healthier understanding of agents. Through studies with the curriculum with children and parents from various countries, we found participants’ perceptions of agents—specifically their partner models and trust—changed. When participants discussed changes in trust of agents, we found they most often mentioned learning something. For example, they frequently mentioned learning where agents obtained information, what agents do with this information and how agents are programmed. Based on the results, we developed recommendations for teaching conversational agent concepts, including emphasizing the concepts students found most challenging, like training, turn-taking and terminology; supplementing agent development activities with related learning activities; fostering appropriate levels of trust towards agents; and fostering accurate partner models of agents. Through such pedagogy, students can learn to better understand conversational AI and what it means to have it in the world.

Transferability

Best Fit Contexts
  • In-school (K-12)
Likely Failure Modes
  • AI output reliability, hallucination, academic integrity, and age-appropriate use require safeguards.
  • Use with minors requires attention to privacy, consent, data minimization, and adult supervision.

Cost And Operations

Time Cost Notes

Not specified in extracted text unless noted in duration field.

Staffing Notes

Requires educators/researchers/facilitators with sufficient AI literacy and pedagogy knowledge for the target learners.

Infra Notes

Infrastructure depends on AI tool type, learner devices, data access, and institutional policy context.

Extraction Notes

Confidence

High

Missing Information

Reasoning Limits

This entry was automatically extracted from the PDF text and manifest metadata. Fields should be manually verified before public registry publication, especially group size, location, duration, and outcome claims.

Duplicate Check Against Uploaded Cases JSON

Closest Existing Title

Briteller: Shining a Light on AI Recommendation for Children

Similarity Score

0.439

Likely Duplicate

false
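The registry's actual duplicate-detection method is not specified in this record. As an illustrative sketch only, a title-similarity score in the same 0 to 1 range could be computed with Python's standard-library difflib; the value it produces will not necessarily match the 0.439 reported above, since the registry may normalize or weight titles differently.

```python
from difflib import SequenceMatcher

def title_similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between two titles (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Compare this case's title against the closest existing registry title.
score = title_similarity(
    "Learning Affects Trust: Design Recommendations and Concepts for "
    "Teaching Children—and Nearly Anyone—about Conversational Agents",
    "Briteller: Shining a Light on AI Recommendation for Children",
)
print(round(score, 3))
```

A threshold (for example, flagging scores above 0.8 as likely duplicates) would then drive the "Likely Duplicate" field; the threshold used by the registry is likewise an assumption here.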

Registry Metadata

Case ID
AAB-CASE-2026-RV-103
Publication Status
Published curriculum / implementation paper
Tags
  • case
  • K-12
  • Not specified in extracted text
  • In-school (K-12)
  • LLM/Chat
  • Children
  • conversational agents
  • trust
  • Curriculum / course design
  • Outreach / informal learning