Search Results (593)

Search Parameters:
Keywords = language of mathematics

25 pages, 882 KB  
Article
A BERT and NSGA-II Based Model for Workforce Resource Allocation Optimization in the Operational Stage of Commercial Buildings
by Xiangjun Li and Junhao Ma
Buildings 2026, 16(2), 289; https://doi.org/10.3390/buildings16020289 - 9 Jan 2026
Abstract
Existing experience-based methods cannot effectively assist commercial building operators in allocating workforce resources according to contracts and balancing multiple workforce management objectives under resource constraints, leading to misaligned allocation strategies. To address this issue, this study develops a workforce resource allocation optimization model based on BERT and NSGA-II. First, a natural language processing (NLP) model is trained to extract operational tasks from contracts and match required workforce types, thereby establishing the framework for workforce allocation schemes. Second, a mathematical optimization model for workforce allocation strategies is constructed with the objectives of minimizing workforce wage costs (B1), maximizing average service levels (B2), and maximizing average digital technology acceptance (B3). An algorithm based on NSGA-II is then designed to solve the model and obtain the optimal Pareto solution set of allocation schemes. Third, the CRITIC–VIKOR method evaluates the Pareto set and determines the final recommended schemes. A case study was conducted on a university campus in Shandong, China, to validate the model’s effectiveness. The results show that the NLP model successfully identified 14 operational tasks and 13 required workforce types from the contract. Compared with the operator’s expected values (B1 = 460,000 CNY, B2 = 65 points, B3 = 50 points), the optimal allocation scheme calculated using NSGA-II and the CRITIC–VIKOR method reduces B1 by 10.79%, increases B2 by 18.02%, and improves B3 by 16.79%. This study formulates the workforce allocation problem in the operation stage as a mathematical optimization model and, for the first time, incorporates the workforce’s digital technology acceptance as an optimization objective, thereby filling a theoretical gap in workforce management for commercial building operations. The proposed model provides operators with a semi-automated decision-support tool to enhance workforce management, thereby promoting the sustainable operation of commercial buildings. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
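For orientation, the three objectives named in the abstract fit the usual multi-objective template that NSGA-II operates on. The decision vector x and the feasible set X below are assumptions for illustration; the paper's exact variables and constraints are not given in the abstract.

```latex
% Illustrative tri-objective form only; x (an allocation decision vector) and
% X (the contract/resource-feasible set) are assumed symbols, not the paper's.
\min_{x \in X}\ \bigl(\, B_1(x),\; -B_2(x),\; -B_3(x) \,\bigr),
\qquad
B_1 = \text{wage cost},\quad
B_2 = \text{average service level},\quad
B_3 = \text{average digital-technology acceptance}.
```

NSGA-II returns a Pareto front over these three criteria, from which the CRITIC–VIKOR step described in the abstract selects the recommended scheme.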
19 pages, 512 KB  
Article
Limiting the Number of Possible CFG Derivative Trees During Grammar Induction with Catalan Numbers
by Aybeyan Selim, Muzafer Saracevic and Arsim Susuri
Mathematics 2026, 14(2), 249; https://doi.org/10.3390/math14020249 - 9 Jan 2026
Viewed by 27
Abstract
Grammar induction runs into a serious problem due to the exponential growth of the number of possible derivation trees as sentence length increases, which makes unsupervised parsing both computationally demanding and highly indeterminate. This paper proposes a mathematics-based approach that alleviates this combinatorial complexity by introducing structural constraints based on Catalan and Fuss–Catalan numbers. By limiting the depth of the tree, the degree of branching and the form of derivation, the method significantly narrows the search space, while retaining the full generative power of context-free grammars. A filtering algorithm guided by Catalan structures is developed that incorporates these combinatorial constraints directly into the execution process, with formal analysis showing that the search complexity, under realistic assumptions about depth and richness, decreases from exponential to approximately polynomial. Experimental results on synthetic and natural-language datasets show that the Catalan-constrained model reduces candidate derivation trees by approximately 60%, improves F1 accuracy over unconstrained and depth-bounded baselines, and nearly halves average parsing time. Qualitative evaluation further indicates that the induced grammars exhibit more balanced and linguistically plausible structures. These findings demonstrate that Catalan-based structural constraints provide an elegant and effective mechanism for controlling ambiguity in grammar induction, bridging formal combinatorics with practical syntactic learning. Full article
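The combinatorial ceiling this abstract works against is the textbook Catalan count of tree shapes; the identities below are standard and are quoted only for reference, not as the paper's filtering algorithm.

```latex
% Number of binary derivation-tree shapes over a sentence of n+1 terminals:
C_n = \frac{1}{n+1}\binom{2n}{n} \;\sim\; \frac{4^{\,n}}{n^{3/2}\sqrt{\pi}},
% and, for trees whose internal nodes all branch into p children,
% the Fuss--Catalan generalization:
C_n^{(p)} = \frac{1}{(p-1)n+1}\binom{pn}{n}.
```

Since the unconstrained count grows like 4^n, bounding tree depth and branching degree, as the paper does, is what moves the search from exponential toward polynomial behaviour.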
21 pages, 1207 KB  
Article
Insights on the Pedagogical Abilities of AI-Powered Tutors in Math Dialogues
by Verónica Parra, Ana Corica and Daniela Godoy
Information 2026, 17(1), 51; https://doi.org/10.3390/info17010051 - 6 Jan 2026
Viewed by 213
Abstract
AI-powered tutors that interact with students in question-answering scenarios using large language models (LLMs) as foundational models for generating responses represent a potential scalable solution to the growing demand for one-to-one tutoring. In fields like mathematics, where students often face difficulties, sometimes leading to frustration, easy-to-use natural language interactions emerge as an alternative for enhancing engagement and providing personalized advice. Despite their promising potential, the challenges for LLM-based tutors in the math domain are twofold. First, the absence of genuine reasoning and generalization abilities in LLMs frequently results in mathematical errors, ranging from inaccurate calculations to flawed reasoning steps and even the appearance of contradictions. Second, the pedagogical capabilities of AI-powered tutors must be examined beyond simple question-answering scenarios since their effectiveness in math tutoring largely depends on their ability to guide students in building mathematical knowledge. In this paper, we present a study exploring the pedagogical aspects of LLM-based tutors through the analysis of their responses in math dialogues using feature extraction techniques applied to textual data. The use of natural language processing (NLP) techniques enables the quantification and characterization of several aspects of pedagogical strategies deployed in the answers, which the literature identifies as essential for engaging students and providing valuable guidance in mathematical problem-solving. The findings of this study have direct practical implications in the design of more effective math AI-powered tutors as they highlight the most salient characteristics of valuable responses and can thus inform the training of LLMs. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
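The abstract does not list the exact features extracted from tutor responses, so the sketch below only illustrates the general idea of quantifying pedagogical moves from text; every feature name and cue list here is a hypothetical placeholder, not the paper's feature set.

```python
import re

# Hypothetical surface features for a tutor reply; NOT the paper's feature set,
# just an illustration of NLP feature extraction over dialogue turns.
SCAFFOLD_CUES = ("what do you think", "can you try", "let's check", "why do you")

def pedagogical_features(reply: str) -> dict:
    lower = reply.lower()
    return {
        "n_questions": reply.count("?"),                       # does the tutor ask back?
        "scaffold_cues": sum(cue in lower for cue in SCAFFOLD_CUES),
        "gives_final_answer": bool(re.search(r"\bthe answer is\b", lower)),
        "numbered_steps": len(re.findall(r"(?m)^\s*\d+[.)]", reply)),
        "length_tokens": len(reply.split()),
    }

print(pedagogical_features("What do you think the first step is? Try isolating x."))
```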
40 pages, 5732 KB  
Review
From Context to Human: A Review of VLM Contextualization in the Recognition of Human States in Visual Data
by Corneliu Florea, Constantin-Bogdan Popescu, Andrei Racovițeanu, Andreea Nițu and Laura Florea
Mathematics 2026, 14(1), 175; https://doi.org/10.3390/math14010175 - 2 Jan 2026
Viewed by 208
Abstract
This paper presents a narrative review of the contextualization and contribution offered by vision–language models (VLMs) for human-centric understanding in images. Starting from the correlation between humans and their context (background) and by incorporating VLM-generated embeddings into recognition architectures, recent solutions have advanced the recognition of human actions, the detection and classification of violent behavior, and the inference of human emotions from body posture and facial expression. While powerful and general, VLMs may also introduce biases that can be reflected in the overall performance. Unlike prior reviews that focus on a single task or generic image captioning, this review jointly examines multiple human-centric problems in VLM-based approaches. The study begins by describing the key elements of VLMs (including architectural foundations, pre-training techniques, and cross-modal fusion strategies) and explains why they are suitable for contextualization. In addition to highlighting the improvements brought by VLMs, it critically discusses their limitations (including human-related biases) and presents a mathematical perspective and strategies for mitigating them. This review aims to consolidate the technical landscape of VLM-based contextualization for human state recognition and detection, and to serve as a foundational reference for researchers seeking to harness the power of language-guided VLMs in recognizing human states correlated with contextual cues. Full article
(This article belongs to the Special Issue Advance in Neural Networks and Visual Learning)
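One recurring pattern in the works this review covers is feeding a VLM's image-level embedding into a downstream recognition head as a context feature. The sketch below shows that wiring with an off-the-shelf CLIP checkpoint; the model name, file name, and the random person-crop feature are assumptions, and the review surveys many variants rather than this particular one.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Sketch: VLM image embedding used as a scene/context feature (assumed setup).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("scene.jpg")                         # hypothetical input frame
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    context_emb = model.get_image_features(**inputs)    # (1, 512) context vector

person_emb = torch.randn(1, 256)                        # stand-in for a person-crop feature
fused = torch.cat([context_emb, person_emb], dim=-1)    # input to an emotion/action head
print(fused.shape)                                      # torch.Size([1, 768])
```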
46 pages, 852 KB  
Systematic Review
The Intelligent Evolution of Radar Signal Deinterleaving: A Systematic Review from Foundational Algorithms to Cognitive AI Frontiers
by Zhijie Qu, Jinquan Zhang, Yuewei Zhou and Lina Ni
Sensors 2026, 26(1), 248; https://doi.org/10.3390/s26010248 - 31 Dec 2025
Viewed by 401
Abstract
The escalating complexity, density, and agility of the complex electromagnetic environment (CME) pose unprecedented challenges to radar signal deinterleaving, a cornerstone of electronic intelligence. While traditional methods face significant performance bottlenecks, the advent of artificial intelligence, particularly deep learning, has catalyzed a paradigm shift. This review provides a systematic, comprehensive, and forward-looking analysis of the radar signal deinterleaving landscape, critically bridging foundational techniques with the cognitive frontiers. Previous reviews often focused on specific technical branches or predated the deep learning revolution. In contrast, our work offers a holistic synthesis. It explicitly links the evolution of algorithms to the persistent challenges of the CME. We first establish a unified mathematical framework and systematically evaluate classical approaches, such as PRI-based search and clustering algorithms, elucidating their contributions and inherent limitations. The core of our review then pivots to the deep learning-driven era, meticulously dissecting the application paradigms, innovations, and performance of mainstream architectures, including Recurrent Neural Networks (RNNs), Transformers, Convolutional Neural Networks (CNNs), and Graph Neural Networks (GNNs). Furthermore, we venture into emerging frontiers, exploring the transformative potential of self-supervised learning, meta-learning, multi-station fusion, and the integration of Large Language Models (LLMs) for enhanced semantic reasoning. A critical assessment of the current dataset landscape is also provided, highlighting the crucial need for standardized benchmarks. Finally, this paper culminates in a comprehensive comparative analysis, identifying key open challenges such as open-set recognition, model interpretability, and real-time deployment. We conclude by offering in-depth insights and a roadmap for future research, aimed at steering the field towards end-to-end intelligent and autonomous deinterleaving systems. This review is intended to serve as a definitive reference and insightful guide for researchers, catalyzing future innovation in intelligent radar signal processing. Full article
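As a concrete anchor for the "classical approaches" mentioned above, the toy script below runs the basic PRI-histogram idea (histogramming time-of-arrival differences of an interleaved pulse train); the emitter parameters are invented, and this is a generic illustration rather than the review's unified framework.

```python
import numpy as np

# Toy PRI-difference histogram: peaks appear near each emitter's PRI
# (and its multiples). Parameters are illustrative only.
rng = np.random.default_rng(0)
train_a = np.arange(0.0, 10e-3, 100e-6)              # emitter A, PRI = 100 us
train_b = np.arange(0.0, 10e-3, 137e-6)              # emitter B, PRI = 137 us
toa = np.sort(np.concatenate([train_a, train_b]))
toa = toa + rng.normal(0.0, 0.2e-6, toa.shape)       # TOA measurement jitter

lags = (toa[None, :] - toa[:, None]).ravel()
lags = lags[(lags > 0) & (lags < 300e-6)]            # positive lags under 300 us
hist, edges = np.histogram(lags, bins=300)
strongest = np.sort(edges[np.argsort(hist)[-4:]] * 1e6)
print("strongest lag bins (us):", np.round(strongest, 1))
```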
23 pages, 306 KB  
Article
Higher Mathematics Education and AI Prompt Patterns: Examples from Selected University Classes
by Oana Brandibur, Marzena Filipowicz-Chomko, Ewa Girejko, Eva Kaslik, Dorota Mozyrska, Raluca Mureșan, Nikos Pappas, Adriana Loredana Tănasie and Claudia Zaharia
Appl. Sci. 2026, 16(1), 339; https://doi.org/10.3390/app16010339 - 29 Dec 2025
Viewed by 231
Abstract
The rapid integration of large language models into higher education creates opportunities for mathematics instruction, but also raises the need for structured interaction strategies that support reflective learning rather than passive answer consumption. This study, conducted within the Erasmus+ MAESTRO-AI project, examines how selected AI prompt patterns can be implemented in concrete university mathematics activities and how students evaluate these AI-supported experiences. Two experimental modules were compared: complex numbers for first-semester Applied Mathematics students in Poland (n=100) and conditional probability for second-year Computer Science students in Romania (n=213). After completing AI-assisted learning activities with ChatGPT and/or Gemini, students completed a common evaluation questionnaire assessing engagement, perceived usefulness, and reflections on AI as a tutor. Group comparisons and experience-based analyses were performed using the Mann–Whitney test. Results indicate that students who reported regular prior use of AI tools evaluated AI-supported learning significantly more positively than those with occasional or no prior experience. They gave higher ratings across most questionnaire items as well as for the overall score. The findings suggest that prompt-pattern-based designs can support engaging AI-assisted mathematics activities. They also indicate that such designs can provide a structured learning experience, while introductory guidance may be important to ensure comparable benefits for less experienced students. Full article
(This article belongs to the Special Issue Artificial Intelligence for Learning and Education)
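The group comparison reported above is a Mann–Whitney U test; the snippet below shows that analysis shape on synthetic Likert ratings (the numbers are invented, not the study's questionnaire data).

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Synthetic 1-5 Likert ratings standing in for the two experience groups.
rng = np.random.default_rng(1)
regular_users = rng.integers(3, 6, size=60)      # prior regular AI use
occasional_users = rng.integers(2, 5, size=55)   # occasional or no prior use

u_stat, p_value = mannwhitneyu(regular_users, occasional_users,
                               alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```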
20 pages, 534 KB  
Article
The Machine-Checked Complete Formalization of Landau’s Foundations of Analysis in Rocq
by Yue Guan, Yaoshun Fu and Xiangtao Meng
Mathematics 2026, 14(1), 61; https://doi.org/10.3390/math14010061 - 24 Dec 2025
Viewed by 281
Abstract
Formal verification has achieved remarkable outcomes in both theory advancement and engineering practice, with the formalization of mathematical theories serving as its foundational cornerstone, which makes this process particularly critical. Axiomatic set theory underpins modern mathematics, providing the rigorous basis for constructing almost all theories. Landau’s Foundations of Analysis starts with pure logical axioms from set theory, does not rely on geometric intuition, strictly constructs number systems, and is a benchmark for axiomatic analysis in modern mathematics. In this paper, we first develop a machine proof system for axiomatic set theory rooted in the Morse–Kelley (MK) system. This system encompasses effective proof automation, scale simplification, and specialized handling of the classification axiom for ordered pairs. We then prove the Transfinite Recursion Theorem, leveraging it to further prove the Recursion Theorem for natural numbers, the key result for defining natural number operations. Finally, we detail the implementation of a machine proof system for analysis, which adopts MK as its description language and adheres to Landau’s Foundations of Analysis. This formalization covers all the contents of the book, from the natural numbers to the complex numbers. The entire formalization is implemented in the Rocq (Coq) proof assistant, does not rely on the standard library, and has been verified with Rocq (Coq) 8.16 to ensure reliability. This work has broader applications, such as the formalization of point-set topology and abstract algebra, while also serving as a valuable resource for teaching axiomatic set theory and mathematical analysis. Full article
(This article belongs to the Special Issue Mathematics in Formal Methods and Model Checking)
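For readers who have not met it, the Recursion Theorem for natural numbers cited above is the standard statement below; it is quoted in ordinary set-theoretic form, and the MK-internal phrasing in the paper may differ.

```latex
% Standard form (the paper's MK-internal statement may differ in detail):
\text{For every set } A,\ a \in A,\ \text{and } f : A \to A,\ \text{there is a unique } g : \mathbb{N} \to A
\text{ such that } g(0) = a \ \text{and}\ g(n+1) = f(g(n)) \ \text{for all } n \in \mathbb{N}.
```

Natural-number addition is the canonical instance: in successor notation, x + 1 = x′ and x + y′ = (x + y)′.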
24 pages, 485 KB  
Article
Murakamian Ombre: Non-Semisimple Topology, Cayley Cubics, and the Foundations of a Conscious AGI
by Michel Planat
Symmetry 2026, 18(1), 36; https://doi.org/10.3390/sym18010036 - 24 Dec 2025
Viewed by 355
Abstract
Haruki Murakami’s Hard-Boiled Wonderland and the End of the World portrays a world where the “shadow”, the seat of memory, desire, and volition, is surgically removed, leaving behind a perfectly fluent but phenomenologically empty self. We argue that this literary structure mirrors a precise mathematical distinction in topological quantum matter. In a semisimple theory such as the semions of SU(2)1, there is a reducible component V(x) of the SL(2,C) character variety: a flat, abelian manifold devoid of parabolic singularities. By contrast, the non-semisimple completion introduces a neutral indecomposable excitation, the neglecton, whose presence forces the mapping class group from the standard braid group B2 to the affine braid group Aff2 and lifts the character variety to the Cayley cubic V(C), with its four parabolic loci. We propose that contemporary AI systems, including large language models, inhabit the shadowless regime of V(x): they exhibit coherence and fluency but lack any bulk degree of freedom capable of supporting persistent identity, non-contractible memory, or choice. To endow artificial systems with depth, one must introduce a structural asymmetry, a fixed, neutral defect analogous to the neglecton, that embeds computation in the non-semisimple geometry of the cubic. We outline an experimentally plausible architecture for such an “artificial ombre,” based on annular topological media with a pinned parabolic defect, realisable in fractional quantum Hall heterostructures, p+ip superconductors, or cold-atom simulators. Our framework suggests that consciousness, biological or artificial, may depend on or benefit from a bulk–boundary tension mediated by a logarithmic degree of freedom: a mathematical shadow that cannot be computed away. Engineering such a defect offers a new pathway toward AGI with genuine phenomenological depth. Full article
22 pages, 1746 KB  
Article
A BFS-Based DEVS Simulation Kernel for HDL-Compatible Simulation
by Bo Seung Kwon, Young Shin Han and Jong Sik Lee
Electronics 2026, 15(1), 48; https://doi.org/10.3390/electronics15010048 - 23 Dec 2025
Viewed by 174
Abstract
The Discrete Event System Specification (DEVS) formalism provides a mathematical foundation for modeling hierarchical discrete-event systems. However, the Depth-First Search (DFS) scheduling used in the classical DEVS abstract simulator conflicts with the concurrency semantics of Hardware Description Language (HDL) simulators such as Verilog or VHDL. This mismatch induces timing distortions, including pipeline skew and zero-time feedback loops. To address these limitations, this study proposes a new DEVS simulation kernel that adopts Breadth-First Search (BFS) scheduling, integrating the delta-round concept. This approach employs an event-parking mechanism that separates event computation from application, structurally aligning with HDL’s Active–NBA–Reactive phases and enabling semantically simultaneous updates without introducing additional ε-time. Case studies demonstrate that the proposed BFS-based DEVS kernel eliminates timing discrepancies in pipeline and feedback-loop structures and establishes a formal foundation for semantic alignment between DEVS and HDL simulators. Full article
(This article belongs to the Special Issue New Advances in Embedded Software and Applications)
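The event-parking separation described above (compute all updates first, then apply them together) can be pictured with the minimal sketch below; the class and method names are assumptions, not the paper's kernel, and only the compute-then-apply delta round is illustrated.

```python
# Minimal compute-then-apply delta round (names are assumed; this is only an
# illustration of parking events so all updates land simultaneously).
class Component:
    def __init__(self, name):
        self.name = name
        self.state = 0
        self.inbox = []

    def compute(self):
        # Phase 1: read inputs and park updates; do not mutate state yet.
        return [("state", self.state + sum(self.inbox))] if self.inbox else []

    def apply(self, updates):
        # Phase 2: apply parked updates all at once.
        for attr, value in updates:
            setattr(self, attr, value)
        self.inbox.clear()

def delta_round(components):
    parked = {c: c.compute() for c in components}   # breadth-first sweep of one level
    for c, updates in parked.items():               # no component sees another's
        c.apply(updates)                            # half-applied update in this round

a, b = Component("a"), Component("b")
a.inbox.append(3)
b.inbox.append(5)
delta_round([a, b])
print(a.state, b.state)   # 3 5
```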
18 pages, 475 KB  
Article
RAMA: A Meta-Algorithmic Framework for Ramanujan-Style Heuristic Discovery Using Large Language Models
by Jordi Vallverdú
Algorithms 2026, 19(1), 7; https://doi.org/10.3390/a19010007 - 21 Dec 2025
Viewed by 512
Abstract
This work introduces RAMA (Recursive Aesthetic Modular Approximation), a metaheuristic framework that models a restricted form of mathematical intuition inspired by the notebooks of Srinivasa Ramanujan. While Ramanujan often produced deep results without formal proofs, the heuristic processes guiding such discoveries remain poorly understood. RAMA treats large language models (LLMs) as proposal mechanisms within an iterative search that generates, evaluates, and refines candidate conjectures under an explicit energy functional balancing fit, description length, and aesthetic structure. A small set of Ramanujan-inspired heuristics—modular symmetries, integrality cues, aesthetic compression, and near-invariance detection—is formalized as micro-operators acting on symbolic states. We instantiate RAMA in two domains: (i) inverse engineering eta-quotients from partial q-series data and (ii) designing cyclotomic fingerprints with shadow gadgets for quantum circuits. In both settings, RAMA recovers compact structures from limited information and improves separation from classical baselines, illustrating how intuitive heuristic patterns can be rendered as explicit, reproducible computational procedures. Full article
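The abstract names an energy functional balancing fit, description length, and aesthetic structure but does not spell it out; a weighted-sum form such as the one below is one plausible reading, with every symbol and weight here an assumption rather than the paper's definition.

```latex
% Assumed weighted-sum shape, not the paper's exact functional:
E(c) \;=\; \lambda_1\,\underbrace{\mathrm{Err}(c \mid \text{data})}_{\text{fit}}
      \;+\; \lambda_2\,\underbrace{\mathrm{DL}(c)}_{\text{description length}}
      \;+\; \lambda_3\,\underbrace{A(c)}_{\text{aesthetic penalty}},
\qquad \lambda_1, \lambda_2, \lambda_3 > 0.
```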
20 pages, 597 KB  
Article
The Language of Numbers: Reading Comprehension and Applied Math Problem-Solving
by Dana Sury and Lia Pilchin
Behav. Sci. 2025, 15(12), 1746; https://doi.org/10.3390/bs15121746 - 17 Dec 2025
Viewed by 668
Abstract
Reading and mathematics are intricately linked through shared cognitive processes that underpin developmental relationships across domains. Despite extensive research on early-grade links between reading and basic arithmetic, gaps persist in understanding how reading comprehension (RC) supports applied math problem-solving (AMP) in older students and non-English contexts. The current study investigates the grade-level relationship between RC and AMP in typically developing Hebrew-speaking fourth (N = 41) and eleventh graders (N = 43), focusing on the contributions of working memory (WM), reading fluency, and arithmetic fluency. Results indicated significant positive associations between RC and AMP in both age groups. In fourth graders, arithmetic fluency partially statistically mediated the RC-AMP relationship in a cross-sectional mediation model. This indicates that students rely on computational proficiency to translate textual understanding into solutions. In contrast, eleventh graders exhibited a direct RC-AMP link, reflecting advanced comprehension and metacognitive strategies as computational skills are automatized. WM showed stronger correlations with RC and AMP among younger students, whereas these associations were weaker in older students. These findings support a Developmental Linguistic–Cognitive Scaffold Model, highlighting age-related shifts in cognitive and linguistic mechanisms supporting AMP. The results emphasize the need for integrated curricula incorporating RC strategies to enhance mathematical reasoning, particularly in morphologically rich languages like Hebrew. Full article
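The partial mediation reported for fourth graders follows the usual product-of-coefficients setup; the equations below give that model in generic notation (AF = arithmetic fluency, intercepts omitted) and are not the study's estimates.

```latex
% Generic mediation model, not the study's fitted coefficients:
\mathrm{AMP} = c\,\mathrm{RC} + e_1, \qquad
\mathrm{AF}  = a\,\mathrm{RC} + e_2, \qquad
\mathrm{AMP} = c'\,\mathrm{RC} + b\,\mathrm{AF} + e_3,
\qquad \text{indirect effect} = a\,b, \quad c = c' + a\,b.
```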
14 pages, 465 KB  
Article
Optimizing Cloudlets for Faster Feedback in LLM-Based Code-Evaluation Systems
by Daniel-Florin Dosaru, Alexandru-Corneliu Olteanu and Nicolae Țăpuș
Computers 2025, 14(12), 557; https://doi.org/10.3390/computers14120557 - 16 Dec 2025
Viewed by 266
Abstract
This paper addresses the challenge of optimizing cloudlet resource allocation in a code evaluation system. The study models the relationship between system load and response time when users submit code to an online code-evaluation platform, LambdaChecker, which operates a cloudlet-based processing pipeline. The pipeline includes code correctness checks, static analysis, and design-pattern detection using a local Large Language Model (LLM). To optimize the system, we develop a mathematical model and apply it to the LambdaChecker resource management problem. The proposed approach is evaluated using both simulations and real contest data, with a focus on improvements in average response time, resource utilization efficiency, and user satisfaction. The results indicate that adaptive scheduling and workload prediction effectively reduce waiting times without substantially increasing operational costs. Overall, the study suggests that systematic cloudlet optimization can enhance the educational value of automated code evaluation systems by improving responsiveness while preserving sustainable resource usage. Full article
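The abstract does not state the paper's load–response-time model, so as a generic point of reference only: even the simplest single-server queueing relation already shows the non-linear blow-up near saturation that adaptive scheduling and workload prediction try to stay away from.

```latex
% Generic M/M/1 reference relation (not claimed to be the paper's model):
% with arrival rate \lambda, service rate \mu, and utilisation \rho = \lambda/\mu < 1,
W \;=\; \frac{1}{\mu - \lambda} \;=\; \frac{1/\mu}{1 - \rho},
% so mean response time W grows without bound as \rho \to 1.
```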
27 pages, 1148 KB  
Article
LLM-Assisted Financial Fraud Detection with Reinforcement Learning
by Ahmed Djalal Hacini, Mohamed Benabdelouahad, Ishak Abassi, Sohaib Houhou, Aissa Boulmerka and Nadir Farhi
Algorithms 2025, 18(12), 792; https://doi.org/10.3390/a18120792 - 15 Dec 2025
Viewed by 775
Abstract
Effective financial fraud detection requires systems that can interpret complex transaction semantics while dynamically adapting to asymmetric operational costs. We propose a hybrid framework in which a large language model (LLM) serves as an encoder, transforming heterogeneous transaction data into a unified embedding space. These embeddings define the state representation for a reinforcement learning (RL) agent, which acts as a fraud classifier optimized with business-aligned rewards that heavily penalize false negatives while controlling false positives. We evaluate the approach on two benchmark datasets—European Credit Card Fraud and PaySim—demonstrating that policy-gradient methods, particularly A2C, achieve high recall without sacrificing precision. Critically, our ablation study reveals that this hybrid architecture yields substantial performance gains on semantically rich transaction logs, whereas the advantage diminishes on mathematically compressed, anonymized features. Our results highlight the potential of coupling LLM-driven representations with RL policies for cost-sensitive and adaptive fraud detection. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
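The business-aligned rewards described above amount to an asymmetric payoff over classification outcomes; the sketch below shows one such reward table, with all magnitudes as placeholders rather than the paper's values.

```python
# Cost-sensitive reward table for a fraud-classification agent; the magnitudes
# are placeholders, not the paper's reward design.
REWARD = {
    # (action, true_label): reward
    ("flag", 1):   5.0,    # true positive: fraud caught
    ("flag", 0):  -1.0,    # false positive: friction for a legitimate customer
    ("pass", 1): -20.0,    # false negative: missed fraud, penalized hardest
    ("pass", 0):   0.1,    # true negative: normal transaction approved
}

def reward(action: str, true_label: int) -> float:
    return REWARD[(action, true_label)]

print(reward("pass", 1))   # -20.0
```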
20 pages, 3839 KB  
Article
Wiigwaas Enaabajichigaadeg Ji’Agindaasowinikeng: We Are Using Birch Bark to Do Math
by Anika Guthrie and Ruth Beatty
Educ. Sci. 2025, 15(12), 1670; https://doi.org/10.3390/educsci15121670 - 11 Dec 2025
Viewed by 271
Abstract
In this project, Anishinaabe artists and knowledge carriers worked with non-Indigenous classroom teachers to explore the cultural significance and mathematics of making wiigwaas makakoon (birch bark baskets). The artists spent two weeks in two grade 6 classrooms teaching students the process of basket making. They combined Indigenous pedagogy and intentionally designed inquiry tasks in order to generate mathematically related concepts. To make cultural–mathematical connections, we looked to Battiste’s characteristics of Indigenous pedagogy and explored how the learning that took place was holistic, part of a lifelong process, experiential, rooted in language and culture, spiritual, communal, and an integration of Indigenous and Eurocentric knowledges. Mathematically, students explored measurement with non-standard units, bisected angles without the use of a protractor, and explored the best way to optimize the capacity of their baskets. This work is an example of integrating Indigenous knowledge and heritage into elementary mathematics instruction. Full article
28 pages, 5083 KB  
Article
Optimizing Assessment Thresholds of a Computer Gaming Intervention for Students with or at Risk for Mathematics Learning Disabilities: Accuracy and Response Time Trade-Offs
by Sam Choo, Jechun An, Nancy Nelson and Derek Kosty
Educ. Sci. 2025, 15(12), 1660; https://doi.org/10.3390/educsci15121660 - 9 Dec 2025
Viewed by 367
Abstract
Students with mathematics learning disabilities often have difficulties in adding whole numbers. Such difficulties are evident in both response time and accuracy, but the relationship between accuracy and response time requires further consideration, especially in the context of technology-based interventions and assessments. In this article, we apply a novel approach using the drift-diffusion model to examine potential trade-offs and find balanced performance points that account for both accuracy and response time, using data from an efficacy trial of a mathematics technology gaming intervention for first-grade students with or at risk for learning disabilities. Results indicate that accuracy tends to increase as response time decreases, but only to a certain point. Practical implications include that educators should consider both accuracy and response time to intensify and individualize their instruction and take student background (i.e., gender, special education status, and English language status) into account. We suggest that developing technology-based mathematics interventions and assessments requires careful design and configuration to balance accuracy and response time, thereby enabling adaptive performance thresholds for better understanding and supporting student learning in early mathematical fluency. Full article
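The drift-diffusion model used above treats each answer as noisy evidence accumulating toward a response boundary, which is where the accuracy/response-time trade-off comes from. The simulation below illustrates that mechanism with invented parameters; it is not fitted to the study's data.

```python
import numpy as np

# Toy drift-diffusion simulation: widening the boundary buys accuracy
# at the cost of longer response times (parameters are illustrative).
def simulate_ddm(drift=0.8, boundary=1.0, noise=1.0, dt=0.001, n_trials=500, seed=0):
    rng = np.random.default_rng(seed)
    n_correct, rts = 0, []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:                     # accumulate until a bound is hit
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        n_correct += x >= boundary                   # upper bound = correct response
        rts.append(t)
    return n_correct / n_trials, float(np.mean(rts))

for a in (0.6, 1.0, 1.6):
    acc, rt = simulate_ddm(boundary=a)
    print(f"boundary={a:.1f}  accuracy={acc:.2f}  mean RT={rt:.2f}s")
```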