Abstract
Artificial Intelligence in Education is expanding rapidly, yet the adaptation of chatbots to specific reading-comprehension levels remains underexplored. This mixed-methods study presents Lectobot, a conversational agent designed to provide personalized scaffolding across three levels of reading comprehension: literal, inferential, and critical. First, we conducted a diagnostic assessment with first-year undergraduates (N = 58) using validated instruments: COMPLECsec (reading comprehension), EMA (Academic Motivation Scale), and MARSI (Metacognitive Awareness of Reading Strategies Inventory). Non-parametric analyses (Kolmogorov–Smirnov; Mann–Whitney U with Benjamini–Hochberg adjustment) indicated substantial heterogeneity in comprehension (median global accuracy ≈ 55%) and a predominance of extrinsic motivation, with selective use of problem-solving strategies. These findings informed the design rules for Lectobot: text selection, adaptive task difficulty, and strategy prompts. In a five-week implementation with a focus group (n = 8), semi-structured interviews were transcribed and coded in MAXQDA, guided by the Technology Acceptance Model (perceived usefulness and perceived ease of use). Students perceived Lectobot as useful for understanding and synthesizing texts and as moderately easy to use; reported difficulties were mainly technical (access and session continuity) and led to actionable design improvements. We discuss ethical and practical implications of personalized scaffolding in higher education and outline avenues for larger-scale evaluations across a broader range of grade levels.
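The non-parametric pipeline named in the abstract (Mann–Whitney U comparisons across subscales, followed by Benjamini–Hochberg adjustment of the resulting p-values) can be sketched as follows. This is an illustrative sketch only: the simulated scores, the two-group split, and the helper name `benjamini_hochberg` are assumptions for demonstration, not the study's actual analysis code or data.

```python
import numpy as np
from scipy import stats

def benjamini_hochberg(pvals):
    """Benjamini–Hochberg FDR adjustment (illustrative helper, not from the study)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Scale each sorted p-value by m / rank.
    scaled = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity by taking running minima from the largest rank down.
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty_like(adjusted)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

rng = np.random.default_rng(42)
# Hypothetical per-student accuracy on three subscales (literal, inferential,
# critical) for two illustrative groups of 29; the real data and grouping differ.
raw_pvals = []
for _ in range(3):
    group_a = rng.uniform(0.3, 0.8, size=29)
    group_b = rng.uniform(0.4, 0.9, size=29)
    _, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    raw_pvals.append(p)

# Adjusted p-values would then be compared against alpha = 0.05.
adj_pvals = benjamini_hochberg(raw_pvals)
```

The manual helper is used here for self-containedness; in practice the same adjustment is available from standard libraries (e.g., statsmodels' `multipletests` with `method="fdr_bh"`).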