Does Knowing When Help Is Needed Improve Subgoal Hint Performance in an Intelligent Data-Driven Logic Tutor?
The assistance dilemma is a well-recognized challenge to determine when and how to provide help during problem solving in intelligent tutoring systems. This dilemma is particularly challenging to address in domains such as logic proofs, where problems can be solved in a variety of ways.
Implementation
Source publication / research team or educational organization described in paper
Learning context
Research / curriculum design context
AI role
Tutor
Outcome signal
Conceptual understanding
Registry Facets
- Unspecified / broad education
- AI tutoring
- logic education
- Assessment / tutoring analytics
- Assessment support
- Students
- Researchers
- Assessment / tutoring analytics
- Research / curriculum design context
- Pre/post or experimental evidence
- Conceptual understanding
- Assessment / feedback quality
Implementing Organization
Source publication / research team or educational organization described in paper
Not specified in extracted text
Researchers, educators, instructors, or facilitators as described in the source publication
Learning Context
- Research / curriculum design context
Classroom, course, or resource-based AI education activity
Not specified in extracted text
Not specified in extracted text
Assessment / tutoring analytics
- The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.
Learner Profile
Unspecified / broad education
Mixed or not explicitly specified; infer from target learner group and intervention design.
Varies by intervention; not specified unless the paper explicitly describes prerequisites.
Educational Intent
- Document the AI education intervention, course, tool, or resource described in the source publication.
- Extract the learner context, AI role, pedagogy, outcomes, and constraints for AAB registry comparison.
- Support AAB comparison across AI literacy, AI education, teacher training, higher education, and workforce contexts.
- Capture evidence maturity, transferability, and limitations rather than treating the publication as product endorsement.
- Not an AAB endorsement of the tool, curriculum, provider, or result.
- Not a direct replication record unless the source paper reports implementation details sufficient for replication.
AI Tool Description
Assessment / tutoring analytics
Not specified in extracted text
- Tutor
- Primary interaction pattern inferred from publication: Assessment support.
- AI capability focus: Assessment / tutoring analytics.
- Apply standard AAB safeguards: privacy, transparency, human oversight, and documentation of limitations.
Activity Design
- Review the publication’s reported context, learner group, AI tool or curriculum, implementation process, and outcome evidence.
- Map the case to AAB registry fields for comparison across educational levels and AI capability types.
- Use the source publication and PDF for any manual verification before public registry release.
- Human educators/researchers remain responsible for instructional design, supervision, interpretation, and ethical safeguards.
- AI systems or AI concepts provide the learning object, support tool, evaluator, simulator, or automation context depending on the paper.
- Tutoring / feedback-supported learning
- Registry extraction emphasizes explicit learning goals, observed outcomes, constraints, and safety limitations.
Observed Challenges
- The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.
Design Adaptations
- Case classified under: Published curriculum / implementation paper.
- Pedagogical pattern: Tutoring / feedback-supported learning.
- Any additional adaptations should be verified against the full paper before public-facing publication.
Reported Outcomes
- Engagement evidence should be interpreted according to the source paper’s reported method and sample.
- In this study, we investigate two data-driven techniques to address the when and how of the assistance dilemma, combining a model that predicts when students need help learning efficient strategies, and hints that suggest what subgoal to achieve.
- We found empirical evidence which suggests that showing subgoals in training problems upon predictions of the model helped the students who needed it most and improved test performance when compared to their control peers.
Ethical & Privacy Considerations
- Apply standard AAB safeguards: privacy, transparency, human oversight, and documentation of limitations.
Evidence Type
- Pre/post or experimental evidence
Relevance to Research
- Can be used as an AAB evidence record for cross-case comparison, standards drafting, and evidence-maturity mapping.
- Supports identification of recurring patterns in AI literacy, AI education implementation, teacher preparation, assessment, and responsible AI learning.
- Conceptual understanding
- Assessment / feedback quality
- Assessment support
- Assessment / tutoring analytics
Case Status
- Completed
AAB Classification Tags
Unspecified / broad education
Research / curriculum design context
Assessment / tutoring analytics
Tutoring / feedback-supported learning
Low to Medium
Medium
Source Publication
Does Knowing When Help Is Needed Improve Subgoal Hint Performance in an Intelligent Data-Driven Logic Tutor?
- Nazia Alam
- Mehak Maniktala
- Behrooz Mostafavi
- Min Chi
- Tiffany Barnes
Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37 No. 13, EAAI-23
2023
10.1609/aaai.v37i13.26887
https://ojs.aaai.org/index.php/AAAI/article/view/26887
https://ojs.aaai.org/index.php/AAAI/article/view/26887/26659
078_Does Knowing When Help Is Needed Improve Subgoal Hint Performance in an Intelligent Data-Driven Logic Tutor.pdf
8
The assistance dilemma is a well-recognized challenge to determine when and how to provide help during problem solving in intelligent tutoring systems. This dilemma is particularly challenging to address in domains such as logic proofs, where problems can be solved in a variety of ways. In this study, we investigate two data-driven techniques to address the when and how of the assistance dilemma, combining a model that predicts when students need help learning efficient strategies, and hints that suggest what subgoal to achieve. We conduct a study assessing the impact of the new pedagogical policy against a control policy without these adaptive components. We found empirical evidence which suggests that showing subgoals in training problems upon predictions of the model helped the students who needed it most and improved test performance when compared to their control peers. Our key findings include significantly fewer steps in posttest problem solutions for students with low prior proficiency and significantly reduced help avoidance for all students in training.
Transferability
- Research / curriculum design context
- The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.
Cost And Operations
Not specified in extracted text unless noted in duration field.
Requires educators/researchers/facilitators with sufficient AI literacy and pedagogy knowledge for the target learners.
Infrastructure depends on AI tool type, learner devices, data access, and institutional policy context.
Extraction Notes
High
- group_size
- duration
This entry was automatically extracted from the PDF text and manifest metadata. Fields should be manually verified before public registry publication, especially group size, location, duration, and outcome claims.
