Student models, which dynamically infer students' latent knowledge from observed exercise performance, are critical for adaptive tutoring systems. Prior work has mostly demonstrated the predictive accuracy of each newly proposed student model, but has ignored many aspects that matter for real-world applications. For example, a highly predictive student model can still generate implausible knowledge estimates or produce shallow learners.
This talk summarizes our recent work toward reliable and useful student modeling for real-world adaptive tutoring systems along two directions. For the first direction, we present novel evaluation metrics that quantify the parameter plausibility and consistency of latent-variable student models, as well as the expected effort and outcomes estimated from observational data; conventional metrics such as AUC fall short on these aspects. For the second direction, we present our ongoing work on constructing hierarchical Bayesian networks (BNs) that provide in-depth diagnosis differentiating shallow from deep learners. We incorporate latent variables to infer how well a student can integrate individual skills, and we present preliminary results based on learning the network structure from existing data. Although such a hierarchical BN does not significantly improve predictive accuracy, it significantly increases mastery inference accuracy and tends to distribute students' effort more reasonably than traditional knowledge tracing (KT) models and its non-hierarchical counterparts. We briefly introduce our ongoing user study evaluating the helpfulness of the resulting recommendations. Based on a simulation and a pilot study, we find that the proposed model generates very different and more useful recommendations. We also discuss possible reasons for the limited prediction improvement, which point to the need for better educational content creation.
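For readers unfamiliar with the "traditional KT models" used as a baseline above, a minimal sketch of a standard Bayesian Knowledge Tracing (BKT) update may help. The parameter values and the mastery threshold below are hypothetical illustrations, not taken from the talk's actual systems:

```python
# Illustrative sketch of one Bayesian Knowledge Tracing (BKT) step:
# a Bayesian posterior over mastery given the observed answer,
# followed by a learning transition. Parameter values are hypothetical.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.1):
    """Update P(skill known) after one observed response."""
    if correct:
        # P(known | correct): a known student may still slip
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # P(known | incorrect): an unknowing student may still guess right
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Chance the student learns the skill from this practice opportunity
    return posterior + (1 - posterior) * p_transit

def mastered(p_know, threshold=0.95):
    """A common mastery inference rule: threshold the mastery belief."""
    return p_know >= threshold

# Example: belief in mastery rises with consecutive correct answers.
p = 0.2  # initial P(known)
for obs in [True, True, True]:
    p = bkt_update(p, obs)
```

A hierarchical BN of the kind described above replaces this single per-skill latent variable with additional latent nodes capturing how well individual skills are integrated, which is what enables the shallow-vs-deep diagnosis.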