Case Report · Published empirical study · 2023
AAB-CASE-2026-RV-110

Learning Logical Reasoning Using an Intelligent Tutoring System: A Hybrid Approach to Student Modeling

In our previous works, we presented Logic-Muse as an Intelligent Tutoring System that helps learners improve logical reasoning skills in multiple contexts. Logic-Muse components were validated and argued by experts throughout the designing process (ITS researchers, logicians, and reasoning psychologists).

This page documents an AI literacy or AI education case for registry purposes. It is descriptive and does not imply AAB endorsement of any specific tool, provider, or intervention.
01

Implementation

Source publication / research team or educational organization described in paper

02

Learning context

Research / curriculum design context

03

AI role

Tutor

04

Outcome signal

Conceptual understanding

Registry Facets

0
Education Level
  • Unspecified / broad education
Subject Area
  • Intelligent tutoring
  • Logic
  • Assessment / tutoring analytics
Use Case Type
  • Instructional design / AI education
Stakeholder Group
  • Students
  • Researchers
AI Capability Type
  • Assessment / tutoring analytics
Implementation Model
  • Research / curriculum design context
Evidence Type
  • Design / conceptual evidence
Outcomes Domain
  • Conceptual understanding

Implementing Organization

1
Organization Type

Source publication / research team or educational organization described in paper

Location

Canada

Primary Facilitator Role

Researchers, educators, instructors, or facilitators as described in the source publication

Learning Context

2
Setting Type
  • Research / curriculum design context
Session Format

Classroom, course, or resource-based AI education activity

Duration

Not specified in extracted text

Group Size

294 participants ("nearly 300 students"), recruited online via the Prolific Academic platform, each processing 48 reasoning activities (three items for each of the 16 item classes).

Devices

Not specified in extracted text

Constraints
  • The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.

Learner Profile

3
Age Range

Unspecified / broad education

Prior AI Exposure Assumed

Mixed or not explicitly specified; infer from target learner group and intervention design.

Prior Programming Background Assumed

Varies by intervention; not specified unless the paper explicitly describes prerequisites.

Educational Intent

4
Primary Learning Goals
  • Document the AI education intervention, course, tool, or resource described in the source publication.
  • Extract the learner context, AI role, pedagogy, outcomes, and constraints for AAB registry comparison.
  • In our previous works, we presented Logic-Muse as an Intelligent Tutoring System that helps learners improve logical reasoning skills in multiple contexts.
Secondary Learning Goals
  • Support AAB comparison across AI literacy, AI education, teacher training, higher education, and workforce contexts.
  • Capture evidence maturity, transferability, and limitations rather than treating the publication as product endorsement.
What This Was Not
  • Not an AAB endorsement of the tool, curriculum, provider, or result.
  • Not a direct replication record unless the source paper reports implementation details sufficient for replication.

AI Tool Description

5
Tool Type

Assessment / tutoring analytics

Languages

Not specified in extracted text

AI Role
  • Tutor
User Interaction Model
  • Primary interaction pattern inferred from publication: Instructional design / AI education.
  • AI capability focus: Assessment / tutoring analytics.
Safeguards
  • Apply standard AAB safeguards: privacy, transparency, human oversight, and documentation of limitations.

Activity Design

6
Activity Flow
  • Review the publication’s reported context, learner group, AI tool or curriculum, implementation process, and outcome evidence.
  • Map the case to AAB registry fields for comparison across educational levels and AI capability types.
  • Use the source publication and PDF for any manual verification before public registry release.
Human Vs AI Responsibilities
  • Human educators/researchers remain responsible for instructional design, supervision, interpretation, and ethical safeguards.
  • AI systems or AI concepts provide the learning object, support tool, evaluator, simulator, or automation context depending on the paper.
Scaffolding Strategies
  • Tutoring / feedback-supported learning
  • Registry extraction emphasizes explicit learning goals, observed outcomes, constraints, and safety limitations.

Observed Challenges

7
Educators Reported
  • The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.

Design Adaptations

8
Adaptations
  • Case classified under: Published empirical study.
  • Pedagogical pattern: Tutoring / feedback-supported learning.
  • Any additional adaptations should be verified against the full paper before public-facing publication.

Reported Outcomes

9
Engagement
  • Engagement evidence should be interpreted according to the source paper’s reported method and sample.
  • A Bayesian network with expert validation has been developed and used in a Bayesian Knowledge Tracing (BKT) process that allows the inference of the learner skills.
Learning Signals
  • A Bayesian network with expert validation has been developed and used in a Bayesian Knowledge Tracing (BKT) process that allows the inference of the learner skills.
  • This paper presents an evaluation of the learner-model components in Logic-Muse (a Bayesian learner model).
Educators Reflection

In our previous works, we presented Logic-Muse as an Intelligent Tutoring System that helps learners improve logical reasoning skills in multiple contexts. Logic-Muse components were validated and argued by experts throughout the designing process (ITS researchers, logicians, and reasoning psychologists).

Ethical & Privacy Considerations

10
Privacy
  • Apply standard AAB safeguards: privacy, transparency, human oversight, and documentation of limitations.

Evidence Type

11
Evidence
  • Design / conceptual evidence

Relevance to Research

12
Potential Research Use
  • Can be used as an AAB evidence record for cross-case comparison, standards drafting, and evidence-maturity mapping.
  • Supports identification of recurring patterns in AI literacy, AI education implementation, teacher preparation, assessment, and responsible AI learning.
Relevant Research Domains
  • Conceptual understanding
  • Instructional design / AI education
  • Assessment / tutoring analytics

Case Status

13
Case Status
  • Completed

AAB Classification Tags

14
Age

Unspecified / broad education

Setting

Research / curriculum design context

AI Function

Assessment / tutoring analytics

Pedagogy

Tutoring / feedback-supported learning

Risk Level

Low to Medium

Data Sensitivity

Medium

Source Publication

15
Title

Learning Logical Reasoning Using an Intelligent Tutoring System: A Hybrid Approach to Student Modeling

Authors
  • Roger Nkambou
  • Janie Brisson
  • Ange Tato
  • Serge Robert
Venue

Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37 No. 13, EAAI-23

Year

2023

Doi

10.1609/aaai.v37i13.26891

Source URL

https://ojs.aaai.org/index.php/AAAI/article/view/26891

Pdf URL

https://ojs.aaai.org/index.php/AAAI/article/view/26891/26663

Pdf Filename

082_Learning Logical Reasoning Using an Intelligent Tutoring System_ A Hybrid Approach to Student Modeling.pdf

Page Count

8

Abstract

In our previous works, we presented Logic-Muse as an Intelligent Tutoring System that helps learners improve logical reasoning skills in multiple contexts. Logic-Muse components were validated and argued by experts throughout the designing process (ITS researchers, logicians, and reasoning psychologists). A catalog of reasoning errors (syntactic and semantic) has been established, in addition to an explicit representation of semantic knowledge and the structures and meta-structures underlying conditional reasoning. A Bayesian network with expert validation has been developed and used in a Bayesian Knowledge Tracing (BKT) process that allows the inference of the learner skills. This paper presents an evaluation of the learner-model components in Logic-Muse (a Bayesian learner model). We conducted a study and collected data from nearly 300 students who processed 48 reasoning activities. These data were used to develop a psychometric model for initializing the learner's model and validating the structure of the initial Bayesian network. We have also developed a neural architecture on which a model was trained to support a deep knowledge tracing (DKT) process. The proposed neural architecture improves the initial version of DKT by allowing the integration of expert knowledge (through the Bayesian Expert Validation Network) and allowing better generalization of knowledge with few samples. The results show a significant improvement in the predictive power of the learner model. The analysis of the results of the psychometric model also illustrates an excellent potential for improving the Bayesian network's structure and the learner model's initialization process.
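The abstract describes a learner model built on Bayesian Knowledge Tracing. The paper's own expert-validated network is not reproduced here; the following is a minimal generic sketch of the standard BKT posterior update, with illustrative slip/guess/transition values that are assumptions, not the paper's fitted parameters.

```python
# Generic Bayesian Knowledge Tracing (BKT) update sketch.
# All parameter defaults below are illustrative assumptions,
# not values reported in the Logic-Muse paper.

def bkt_update(p_mastery: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2,
               transit: float = 0.15) -> float:
    """Return the updated mastery probability after one observed response."""
    if correct:
        # P(mastered | correct response), via Bayes' rule
        cond = (p_mastery * (1 - slip)) / (
            p_mastery * (1 - slip) + (1 - p_mastery) * guess)
    else:
        # P(mastered | incorrect response)
        cond = (p_mastery * slip) / (
            p_mastery * slip + (1 - p_mastery) * (1 - guess))
    # Account for the chance of learning the skill on this step
    return cond + (1 - cond) * transit

# Trace estimated mastery across a short response sequence
p = 0.3
for correct in [True, True, False, True]:
    p = bkt_update(p, correct)
```

In a system like the one described, each skill node would carry its own parameters, and the initial `p_mastery` would come from the psychometric initialization step rather than a fixed prior.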

Transferability

16
Best Fit Contexts
  • Research / curriculum design context
Likely Failure Modes
  • The paper provides limited implementation detail in the extracted abstract; additional manual review may be needed for local replication.

Cost And Operations

17
Time Cost Notes

Not specified in extracted text unless noted in duration field.

Staffing Notes

Requires educators/researchers/facilitators with sufficient AI literacy and pedagogy knowledge for the target learners.

Infra Notes

Infrastructure depends on AI tool type, learner devices, data access, and institutional policy context.

Extraction Notes

18
Confidence

High

Missing Information
  • duration
Reasoning Limits

This entry was automatically extracted from the PDF text and manifest metadata. Fields should be manually verified before public registry publication, especially group size, location, duration, and outcome claims.

Duplicate Check Against Uploaded Cases Json
Closest Existing Title

Pedagogical Design of K-12 Artificial Intelligence Education: A Systematic Review

Similarity Score

0.437

Likely Duplicate

false

Registry Metadata

19
Case ID
AAB-CASE-2026-RV-110
Publication Status
Published empirical study
Tags
  • case
  • Unspecified / broad education
  • Canada
  • Research / curriculum design context
  • Assessment / tutoring analytics
  • Intelligent tutoring
  • Logic
  • Instructional design / AI education