Search Results (331)

Search Parameters:
Keywords = writing accuracy

21 pages, 24433 KB  
Article
A Novel Deep Learning Model for Predicting University English Proficiency Achievement of Students
by Yan Yang, Xiaowei Wang, Mohan Liu, Huiwen Xue and Laixiang Xu
Information 2026, 17(4), 386; https://doi.org/10.3390/info17040386 - 19 Apr 2026
Viewed by 130
Abstract
The rapid expansion of English major enrollment has exposed critical limitations in traditional academic assessment methods regarding efficiency and accuracy, constraining educational quality enhancement. This paper introduces an English proficiency assessment approach utilizing an improved RegNet architecture integrated with a dual attention mechanism. The multidimensional academic data processed by our model include attendance, online participation, language practice, and assessment scores for listening, speaking, reading, and writing from undergraduate English majors. The initial downsampling module of RegNet is optimized through a dual convolutional structure to augment shallow feature extraction. Subsequently, a deformable attention mechanism (DAT) is incorporated to enhance focus on salient features, while a graph attention network (GAT) facilitates interaction and fusion among academic node features. Experimental results demonstrate that the proposed method achieves an average accuracy of 99.46% in proficiency assessment, substantially outperforming mainstream models including EfficientNet and AlexNet. Additionally, it demonstrates robust edge deployment capabilities, providing an effective technical solution for intelligent academic management of English programs within smart campus frameworks. Full article
(This article belongs to the Section Artificial Intelligence)

16 pages, 450 KB  
Article
The Effects of Computer-Assisted Writing on Written Language Production in Students with Specific Learning Difficulties: Implications for Sustainable Digital Education
by Georgios Polydoros, Ilias Vasileiou, Zoe Krokou and Alexandros-Stamatios Antoniou
Computers 2026, 15(4), 251; https://doi.org/10.3390/computers15040251 - 17 Apr 2026
Viewed by 230
Abstract
This study investigated the effects of computer-assisted writing on the written language production of secondary school students with Specific Learning Difficulties (SLD), particularly dyslexia. Writing is a complex cognitive process requiring the coordination of spelling, lexical retrieval, syntactic organization, transcription, and revision, areas in which students with SLD often experience persistent difficulties. The study compared handwritten and computer-based texts produced by 40 students with SLD and 20 students without learning difficulties using a counterbalanced design, with an interval of approximately two weeks between the two writing sessions. In the handwriting condition, students used printed reference materials, whereas in the computer-based condition they had access to general-purpose digital tools, including spell-checkers, electronic dictionaries, online resources, and word-processing software. Written texts were evaluated using the Spelling Accuracy Index and holistic scores assigned by independent raters. Data were analyzed using descriptive statistics and non-parametric tests (Mann–Whitney U and Wilcoxon signed-rank tests). The findings revealed statistically significant improvements in favor of computer-based writing for both groups, with particularly strong gains among students with SLD. Computer-written texts demonstrated higher spelling accuracy and received higher evaluation scores, indicating improved performance in the assessed writing outcomes. The findings suggest that computer-assisted writing may support written language production in secondary school students with SLD, particularly in relation to spelling accuracy and overall text evaluation, and may offer a useful avenue for more inclusive writing instruction. Full article
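The non-parametric tests reported here compare ranks rather than means. As a minimal illustrative sketch (not the study's analysis code), the Mann–Whitney U statistic simply counts, across all pairs drawn from two independent samples, how often one group's value exceeds the other's; the scores below are hypothetical:

```python
def mann_whitney_u(x, y):
    """U statistic for sample x versus y: count of pairs (xi, yj)
    with xi > yj, counting ties as half a pair."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Toy spelling-accuracy scores (hypothetical, not study data):
computer = [18, 20, 19, 21]
handwriting = [15, 17, 16, 19]
print(mann_whitney_u(computer, handwriting))  # 14.5
```

In practice a library routine (e.g. `scipy.stats.mannwhitneyu`) would also supply the p-value from the null distribution of U; this sketch only shows what the statistic measures.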

7 pages, 473 KB  
Proceeding Paper
Visual Teaching, Accessibility, and Hybridization: At the Intersection of Visual Education, Artificial Intelligence, and Universal Design for Learning
by Pierangelo Berardi and Carmela Paladino
Proceedings 2026, 139(1), 5; https://doi.org/10.3390/proceedings2026139005 - 8 Apr 2026
Viewed by 244
Abstract
Positioned at the intersection of instructional mediation, Visual Education, and Universal Design for Learning (UDL), this research aims to ascertain whether the use of Artificial Intelligence (AI) enhances accessibility for students with sensory disabilities. The study involved 137 pre-service teachers attending the “Special Didactics and Learning for Sensory Disabilities” course within the teacher specialization program (TFA) at the University of Foggia. Although the hybridization of AI, UDL, and Visual Education was favourably received, its application remains sporadic, highlighting the challenge of balancing the need for simplification with requisite conceptual accuracy. This underscores the necessity of integrating more structured and continuous training pathways into teacher education, grounded in visual education and featuring micro-modules dedicated to specific skills such as writing alternative text, subtitling, and verifying color contrast according to recognized standards. Full article

15 pages, 252 KB  
Article
Cognitive and Psychosocial Burden of Childhood Cancer Survivors in Greece: A Case–Control Study
by Kalliopi Mavrea, Katerina Katsibardi, Kleoniki Roka, Roser Pons, Vasiliki Efthymiou, Alexandros-Stamatios Antoniou, Antonios I. Christou, Christina Kanaka-Gantenbein, George P. Chrousos, Antonis Kattamis and Flora Bacopoulou
Med. Sci. 2026, 14(2), 171; https://doi.org/10.3390/medsci14020171 - 30 Mar 2026
Viewed by 371
Abstract
Background/Objectives: To study the hypothesis that cognitive functions and learning skills are impaired in child/adolescent childhood cancer survivors (CCS). Secondary outcomes included psychosocial parameters and quality of life. Methods: This case–control study was conducted over four years (2017–2021) at the Aghia Sophia Children’s Hospital in Greece, the country’s largest pediatric hospital. Eligible participants were children and adolescents in Greece. For CCS, at least one year had to have elapsed since completion of cancer treatment. Assessments of neurocognitive function, learning and psychosocial skills and health-related quality of life (HRQoL) were performed with validated instruments (WISC-III, LAMDA software, Achenbach CBCL/6-18 and YSR, KIDSCREEN-52, respectively). Results: In total, 219 participants (47.49% males, mean age ± SD 11.72 ± 2.32 years), 70 CCS and 149 controls (matched for age, sex, and family income), were included. Cases were CCS of acute lymphoblastic leukemia (n = 25), brain tumors (n = 19), lymphoma (n = 17), nephroblastoma (n = 5), Ewing sarcoma (n = 3), and rhabdomyosarcoma (n = 1). CCS had worse scores than controls in full-scale Intelligence Quotient (FSIQ) (p = 0.004), verbal IQ (VIQ) (p = 0.005) and all its subscales, performance IQ (PIQ) (p = 0.021), and almost all learning parameters. Attention, working memory, writing/visual–motor coordination, processing accuracy/speed, language acquisition/expression, all psychosocial scales, and the HRQoL domains of mood and emotions were negatively affected in CCS. Female CCS demonstrated lower FSIQ (p = 0.019) and VIQ (p = 0.014) than control females, whereas male CCS retained their total IQ unaffected. Among CCS, those with non-central nervous system (CNS) tumors, higher parental educational level, or higher family income had significantly higher IQ than those with CNS tumors, lower parental educational level, or lower family income, respectively.
Conclusions: CCS in Greece carry a significant burden of cognitive and psychological morbidity. Cognitive/educational and psychosocial support to CCS is imperative. Full article
(This article belongs to the Section Cancer and Cancer-Related Research)
20 pages, 2182 KB  
Article
Physics-Aligned Data Augmentation for Reliable Property Prediction in Direct Ink Writing Under Extreme Data Scarcity
by Biva Gyawali, Pavan Akula, Kamran Alba and Vahid Nasir
J. Manuf. Mater. Process. 2026, 10(4), 118; https://doi.org/10.3390/jmmp10040118 - 30 Mar 2026
Viewed by 507
Abstract
Reliable property prediction in extrusion-based additive manufacturing remains challenging under extreme data scarcity (e.g., sample size of <50), particularly when experiments are constrained by designed studies such as Taguchi orthogonal arrays. In direct ink writing of lignocellulosic composites, limited experimental runs restrict the development of predictive models capable of guiding formulation and process optimization. This study introduces a physics-consistent data augmentation framework to enhance predictive reliability while preserving material-consistent behavior. Synthetic data are evaluated using four criteria: sensitivity to augmentation size, distributional consistency with experimental observations, stability with respect to boosting depth in regression modeling, and preservation of physics-consistent factor hierarchies through interpretability analysis. The framework is validated using compressive strength data from direct ink writing experiments conducted under an extremely small data regime. Results show that augmentation performance depends on the augmentation scale and model capacity. Variational autoencoder-based augmentation produced more stable and physically consistent predictions than conditional tabular generative adversarial networks in this application. Increasing predictive accuracy alone, or applying excessive augmentation, can distort material hierarchies and reduce physics consistency. The proposed evaluation framework supports reliable and interpretable property prediction in additive manufacturing when experimental data are severely limited. Full article
(This article belongs to the Special Issue Smart Manufacturing in the Era of Industry 4.0, 2nd Edition)

21 pages, 281 KB  
Review
Citation Inaccuracies and the Need for Multi-Level Oversight in AI-Assisted Medical Writing
by Vaikunthan Rajaratnam, Usama Farghaly Omar, Kristen Kee and Arun-Kumar Kaliya-Perumal
Standards 2026, 6(1), 10; https://doi.org/10.3390/standards6010010 - 20 Mar 2026
Viewed by 498
Abstract
Generative artificial intelligence (AI)-based large language models (LLMs) are increasingly being used in medical writing to improve efficiency and broaden access to knowledge. However, concerns have emerged regarding the accuracy of the citations they generate. This review discusses the issue of citation inaccuracies in AI-assisted medical writing and its implications for scientific reliability and accountability in academic medicine. Published literature describing citation errors in AI-generated content, particularly in medical and academic contexts, was examined to understand the nature and persistence of this problem and to consider potential safeguards. Reports consistently describe citation inaccuracies, including fabricated references, incorrect bibliographic details, and incomplete source information such as missing authors, journal titles, publication years, or digital object identifiers. Although these tools continue to evolve, such errors remain reported and highlight limitations in their reliability. While LLMs offer clear benefits in supporting medical writing, their outputs require careful verification. As developers continue to address these challenges, responsible use will depend on continued human oversight, improved transparency, greater user awareness, and institutional and policy-level guidance to ensure accurate and trustworthy use of generative AI in medical writing. Full article
22 pages, 679 KB  
Review
Applications of Large Language Models in Medical Research: From Systematic Reviews to Clinical Studies
by Eun Jeong Gong, Chang Seok Bang and Yong Seok Shin
Bioengineering 2026, 13(3), 365; https://doi.org/10.3390/bioengineering13030365 - 20 Mar 2026
Viewed by 1516
Abstract
Background: Large Language Models (LLMs) are reshaping medical research workflows. Objective: This narrative review synthesizes evidence on LLM applications across systematic reviews, scientific writing, and clinical research. Methods: We reviewed literature from 2023–2025 examining LLM applications in medical research, identified through PubMed, Scopus, Web of Science, arXiv, medRxiv, and Google Scholar. Studies reporting empirical findings, methodological evaluations, or systematic analyses of LLM applications were included; editorials and commentaries without empirical data were excluded. Results: In systematic reviews, LLMs achieve 80–94% data extraction accuracy and 40% reduction in screening workload, but show only slight-to-moderate agreement (κ = 0.16–0.43) in risk-of-bias assessment. In scientific writing, hallucination rates of 47–55% for fabricated references and over 90% prevalence of demographic bias require rigorous verification. For clinical research, LLMs assist with statistical coding and protocol development but require human validation. Critically, excessive reliance on automated tools may cause cognitive offloading that compromises analytical capabilities. Conclusions: LLMs are powerful but unstable tools requiring constant verification. Success depends on maintaining human-in-the-loop approaches that preserve critical thinking while leveraging AI efficiency. Full article

17 pages, 1774 KB  
Article
An Energy- and Endurance-Aware Hybrid CMOS–SDC Memristor Convolutional Spiking Neural Network for Edge Intelligence
by Jun Sung Go and Jong Tae Kim
Electronics 2026, 15(6), 1217; https://doi.org/10.3390/electronics15061217 - 14 Mar 2026
Cited by 1 | Viewed by 423
Abstract
The inherent bottleneck of the von Neumann architecture and the limited power budget of edge devices necessitate energy-efficient hardware solutions for artificial intelligence. Memristor-based In-Memory Computing (IMC) has emerged as a promising candidate; however, the high power consumption of peripheral circuits, particularly Analog-to-Digital Converters (ADCs), and the reliability issues of memristive devices remain significant challenges. In this paper, we propose a hybrid Convolutional Spiking Neural Network (CSNN) architecture designed for resource-constrained edge computing. Our approach integrates digital Non-Leaky Integrate-and-Fire (NLIF) neurons with Knowm Self-Directed Channel (SDC) memristor-based synapses in a 1T1R crossbar array. To maximize power efficiency, we replace conventional high-resolution ADCs with a streamlined readout circuit utilizing a Current Sense Amplifier (CSA) and a 1-bit comparator. Furthermore, we employ an intensity-to-latency temporal coding scheme to minimize spike activity and mitigate device endurance degradation. We validated the proposed system using the MNIST dataset, achieving a classification accuracy of 97.8%, which is comparable to state-of-the-art floating-point SNNs using supervised learning methods. Power analysis confirms that our 1-bit readout method consumes only 18.4% of the energy required by an 8-bit ADC-based approach while maintaining negligible accuracy loss. Additionally, the deterministic single-spike nature of our temporal coding significantly reduces write stress on memristors compared to rate coding. These results demonstrate that the proposed hybrid CSNN offers a robust and energy-efficient solution for neuromorphic edge intelligence. Full article
(This article belongs to the Section Artificial Intelligence)
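Intensity-to-latency coding, as used in this abstract, maps stronger inputs to earlier spike times so that each input emits at most one spike. A minimal software sketch of the idea (illustrative only; the paper's circuit-level encoding may differ in normalisation and timing):

```python
def intensity_to_latency(intensities, t_max=100):
    """Map each intensity to a single spike time: the strongest
    input fires at t = 0, the weakest at t = t_max."""
    peak = max(intensities)
    return [round(t_max * (1 - i / peak)) for i in intensities]

# Three hypothetical pixel intensities on a 0-255 scale:
print(intensity_to_latency([255, 0, 128]))  # [0, 100, 50]
```

Because every input produces at most one deterministic spike, write traffic to the synaptic devices is far lower than under rate coding, which is the endurance argument the abstract makes.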

25 pages, 1887 KB  
Article
Does All or Nothing Always Work Best? In Search of Advantageous Representation of Attributes
by Urszula Stańczyk and Grzegorz Baron
Appl. Sci. 2026, 16(6), 2679; https://doi.org/10.3390/app16062679 - 11 Mar 2026
Viewed by 200
Abstract
Discretisation is a processing step often included in preliminary data preparation. Typically, when the input features have continuous domains and their discrete forms are needed, all are translated into categorical type at the same time, before data mining takes place. However, proceeding this way is not always the most advantageous for performance. The paper presents results from research in which the discretisation transformations were carried out sequentially forward over the variables, with their selection based on their values and also on the importance of the attributes as estimated by constructed rankings. The experiments were executed on datasets from the area of stylometric analysis of texts, an application domain focused on recognising authorship based on individual characteristics of writing styles. For the selected data mining techniques, performance was studied in the context of the transformed features. The observed trends indicate that, along with enhanced understanding of the nature of the data, partial discretisation of feature sets can bring higher accuracy than transformation of the entire input domain, showing the merits of the described research methodology. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
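Discretising a single continuous feature can be as simple as equal-width binning; the sketch below shows one common scheme (illustrative only; it is not necessarily the discretisation method used in the paper, and it assumes the values are not all identical):

```python
def equal_width_discretise(values, k=3):
    """Map continuous values to k categorical bins of equal width."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    # Clamp the maximum value into the last bin.
    return [min(int((v - lo) // width), k - 1) for v in values]

print(equal_width_discretise([0.0, 2.5, 5.0, 7.5, 10.0], k=2))  # [0, 0, 1, 1, 1]
```

Partial discretisation, as studied in the paper, would apply such a transformation only to a selected subset of features rather than the whole input domain.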

16 pages, 273 KB  
Article
The Impact of the iWrite Automated Writing Evaluation System on University EFL Students’ Writing Performance and Writing Anxiety
by Jiapeng Du and Nur Rasyidah Mohd Nordin
Educ. Sci. 2026, 16(3), 411; https://doi.org/10.3390/educsci16030411 - 9 Mar 2026
Viewed by 517
Abstract
Automated Writing Evaluation (AWE) systems have been increasingly integrated into second-language writing instruction; however, empirical evidence regarding the effectiveness of localized AWE tools in EFL contexts remains limited. This study investigated the impact of the iWrite Automated Writing Evaluation system on university EFL students’ writing performance and writing anxiety. Employing a quasi-experimental mixed-methods design, 60 Chinese university students were assigned to an experimental group using iWrite and a control group receiving traditional teacher feedback over a 12-week instructional period. Writing performance was assessed using the complexity, accuracy, and fluency (CAF) framework, while writing anxiety was measured through a validated questionnaire. Quantitative results revealed that the experimental group demonstrated significantly greater improvements in writing accuracy, fluency, and lexical complexity, as well as significantly lower levels of writing anxiety, compared with the control group. No significant difference was found in syntactic complexity. Qualitative findings further indicated that immediate, non-judgmental feedback and opportunities for repeated revision contributed to increased learner confidence and reduced anxiety. The findings suggest that localized AWE systems such as iWrite can effectively support both the cognitive and affective dimensions of EFL writing when integrated within a human–AI collaborative instructional framework. Full article
16 pages, 683 KB  
Article
Artificial Intelligence and Error Analysis: Effects on Feedback of Recurrent Errors and Fossilisation Tendencies
by Manuel Macías-Borrego
Educ. Sci. 2026, 16(3), 393; https://doi.org/10.3390/educsci16030393 - 4 Mar 2026
Viewed by 494
Abstract
This study investigates the pedagogical value of integrating AI-supported feedback with Error Analysis in university-level English as a Foreign Language (EFL) writing instruction, where English is the target language (TL). Adopting a comparative, corpus-based design, the research examines whether AI-mediated feedback can complement traditional teacher-led Error Analysis in reducing recurrent errors, improving grammatical accuracy, and supporting revision practices among Spanish L1 learners of English at the B2 (CEFR) level. Seventy participants completed two writing tasks over a twelve-week period, generating a learner corpus that was randomly assigned to two groups: AI-assisted feedback and teacher-mediated feedback. Quantitative Error Analysis and learner-perception surveys were conducted to assess both linguistic outcomes and attitudinal responses. Results indicate that students receiving AI-assisted feedback demonstrated lower rates of error repetition (25%) compared to those receiving teacher-based correction (40%), particularly in subject–verb agreement, preposition use, tense selection, and L1-induced lexical transfer in L2 English writing. Survey findings further reveal higher perceived levels of clarity, usefulness, and immediacy for AI-generated feedback, although participants continued to value teacher input for higher-order writing concerns. Overall, the findings suggest that AI-supported Error Analysis can contribute to short-term error reduction and foster learner autonomy. This study highlights the potential of blended and mixed feedback models within a focused pedagogical context and underscores the need for longitudinal research examining long-term retention, pragmatic development, and cross-context generalizability. Full article

21 pages, 14880 KB  
Article
Beyond the Black Box: Interpretable Multi-Trait Essay Scoring with Trait-Aware Transformer
by Xiaoyi Tang
Electronics 2026, 15(5), 1066; https://doi.org/10.3390/electronics15051066 - 4 Mar 2026
Viewed by 395
Abstract
The rapid advancement of automated essay scoring (AES) has been constrained by a representation bottleneck, where monolithic models collapse diverse facets of writing constructs into a single, uninterpretable signal, undermining the pedagogical value of multi-dimensional rating traits. To address this limitation, the RoBERTa-based Trait-Aware Transformer (RoBERTa-TAT) is introduced. This architectural reframing replaces unified pooling with parallel, trait-specific attention streams, preserving and disentangling critical features such as conceptual depth and mechanical precision. Tested on the ASAP Dataset-7, RoBERTa-TAT attains a new state-of-the-art Quadratic Weighted Kappa (QWK) of 0.936, outperforming sequential baselines and conventional Transformer variants. Beyond gains in accuracy, this trait-specialized architecture recasts scoring from a black-box prediction into a transparent diagnostic tool, enabling actionable, fine-grained feedback at different rating traits. High-resolution inspection reveals that the model’s internal representations correlate with specific linguistic markers—such as discourse connectives for organization—suggesting a degree of structural alignment with expert judgment. By aligning high-capacity representation learning with the granular demands of formative assessment, RoBERTa-TAT provides a practical, interpretable blueprint for deploying accountable AI in education and broadening access to expert diagnostic insight. Full article
(This article belongs to the Section Artificial Intelligence)
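Quadratic Weighted Kappa, the metric reported above, penalizes rater disagreements by the square of their distance on the rating scale. A self-contained sketch of the standard definition (in practice a library routine such as scikit-learn's `cohen_kappa_score` with `weights='quadratic'` would be used; this version assumes the raters use more than one rating level):

```python
def quadratic_weighted_kappa(a, b, min_r, max_r):
    """QWK between two integer rating sequences on [min_r, max_r]."""
    n = max_r - min_r + 1
    N = len(a)
    obs = [[0] * n for _ in range(n)]           # observed agreement matrix
    for ra, rb in zip(a, b):
        obs[ra - min_r][rb - min_r] += 1
    hist_a = [sum(row) for row in obs]           # marginal counts, rater a
    hist_b = [sum(col) for col in zip(*obs)]     # marginal counts, rater b
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = (i - j) ** 2 / (n - 1) ** 2      # quadratic penalty weight
            num += w * obs[i][j] / N             # observed disagreement
            den += w * hist_a[i] * hist_b[j] / N ** 2  # chance disagreement
    return 1.0 - num / den

print(quadratic_weighted_kappa([1, 2, 3, 1], [1, 2, 3, 1], 1, 3))  # 1.0
```

Perfect agreement yields 1.0, chance-level agreement 0.0, and systematic disagreement goes negative, which is why QWK is the conventional headline metric on the ASAP essay-scoring benchmarks.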

16 pages, 19250 KB  
Article
Variable Bit-Width All-Optical Content-Addressable Memory Enabled by Sb2Se3 for Similarity Search
by Yi Guo, Xinmeng Hao, Yibo Zhang, Guangsong Yuan, Hongxiang Guo, Bing Song, Jian Wu and Qingjiang Li
Photonics 2026, 13(3), 249; https://doi.org/10.3390/photonics13030249 - 3 Mar 2026
Viewed by 431
Abstract
In the big-data-driven artificial intelligence era, similarity search, as a core operation in machine learning and data mining, demands high speed, energy efficiency, and scenario adaptability. Conventional electronic content-addressable memories (ECAMs) suffer from inherent RC delay bottlenecks, whereas existing optical content-addressable memories (OCAMs) are restricted by fixed bit-widths and limited distance metrics. In this work, we propose a variable bit-width all-optical CAM leveraging multi-segment modulators and the phase-change material (PCM) Sb2Se3. The multi-segment memory unit (MSMU) therein compresses N-bit binary data into a single analog photonic unit, supporting direct data writing/loading without digital-to-analog converters (DACs) and flexible trade-offs between precision, storage capacity, noise immunity, and energy, while enabling Hamming and nonlinear distance metrics. A six-element three-bit OCAM prototype was fabricated on a silicon nitride silicon-on-insulator (SiN-SOI) platform. Despite the absence of integrated high-speed phase shifters, the device still achieves reliable optical data storage and retrieval. K-nearest neighbor (kNN) simulations based on experimentally derived statistical data—validated on the iris, wine, and breast cancer datasets—show that the three-bit operating mode achieves classification accuracy comparable to Manhattan/Euclidean distances at high signal-to-noise ratios (SNRs), while the one-bit mode exhibits strong noise robustness. Energy consumption is 364 fJ/bit (3-bit) and 890 fJ/bit (1-bit). This work provides a high-speed, energy-efficient, and reconfigurable all-optical similarity search solution with experimentally verified device performance and dataset-validated applicability, showing great potential for widespread deployment in data-intensive machine learning and data-mining applications. Full article
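Content-addressable search by Hamming distance, the one-bit operating mode described above, returns the stored word closest to a query pattern. A minimal software analogue of that behaviour (the photonic device computes the distances in the analog optical domain rather than word by word):

```python
def hamming(a, b):
    """Number of bit positions where two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def cam_search(stored, query, k=1):
    """Return the k stored words nearest to the query in Hamming distance."""
    return sorted(stored, key=lambda word: hamming(word, query))[:k]

# Hypothetical 3-bit memory contents:
memory = [(0, 0, 0), (1, 1, 1), (1, 0, 0)]
print(cam_search(memory, (1, 1, 1)))  # [(1, 1, 1)]
```

A kNN classifier over such a memory is exactly this search with k > 1 plus a majority vote over the labels of the returned words.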

20 pages, 788 KB  
Article
Efficient Management of High-Frequency Sensor Data Streams Using a Read-Optimized Learned Index
by Hu Luo, Jiabao Wen, Desheng Chen, Zhengjian Li, Meng Xi, Jingyi He, Shuai Xiao and Jiachen Yang
Sensors 2026, 26(4), 1217; https://doi.org/10.3390/s26041217 - 13 Feb 2026
Viewed by 420
Abstract
The rapid growth of sensor data in IoT and Digital Twins necessitates high-performance spatial indexing. Traditional indexes like R-trees suffer from high storage overhead, while state-of-the-art learned indexes like GLIN encounter a “Refinement Bottleneck” due to coarse-grained Minimum Bounding Rectangle (MBR) filtering. Furthermore, existing solutions often trade update throughput for query accuracy, failing in dynamic IoT workloads with concurrent reads and writes. We propose DyGLIN (Dynamic Generate Learning-Based Index), a dynamic, read-optimized learned spatial index tailored for high-frequency sensor streams. DyGLIN introduces a decoupled leaf architecture separating query processing from data maintenance. To accelerate queries, we implement a hierarchical filtering pipeline using hierarchical MBRs (HMBRs) and Cuckoo Filters to aggressively prune false positives. For maintenance, a Delta Buffer mechanism amortizes update costs, while logical deletion ensures high throughput. Experiments on real-world datasets show that DyGLIN reduces query latency by 26.4% [95% CI: 20.1%, 38.6%] compared to GLIN. It achieves 30.0% [95% CI: 21.4%, 35.9%] higher insertion throughput and superior deletion performance, with only an 18.5% [95% CI: 16.8%, 19.8%] increase in memory overhead. Full article
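MBR filtering, the pruning step this abstract builds on, reduces a spatial range query to cheap rectangle-overlap tests. A minimal sketch of that filter step (illustrative only; DyGLIN's hierarchical MBRs and Cuckoo-Filter stage add further pruning on top of this):

```python
def mbr_intersects(a, b):
    """True if two MBRs (xmin, ymin, xmax, ymax) overlap; touching edges count."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def range_filter(mbrs, query):
    """Coarse filter: keep only the MBRs that could contain matches."""
    return [m for m in mbrs if mbr_intersects(m, query)]

# Hypothetical leaf-node MBRs and a query window:
leaves = [(0, 0, 2, 2), (5, 5, 8, 8), (1, 1, 3, 3)]
print(range_filter(leaves, (2, 2, 4, 4)))  # [(0, 0, 2, 2), (1, 1, 3, 3)]
```

The "Refinement Bottleneck" the abstract mentions arises because a coarse MBR can pass this test while containing no actual matching points, forcing an expensive per-point check afterwards.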

19 pages, 254 KB  
Tutorial
CREDIBLE: A Framework for Critical Source Evaluation—From Information Consumers to Critical Evaluators
by Zoi A. Traga Philippakos
AI Educ. 2026, 2(1), 3; https://doi.org/10.3390/aieduc2010003 - 9 Feb 2026
Viewed by 1947
Abstract
With the rise of social media and the sharing of information, as well as the use of AI tools like ChatGPT in education, the ability to evaluate information credibility has become a crucial skill. The CREDIBLE framework, standing for Credibility, Reliability, Evidence, Date, Intent, Bias, Logic, and Expertise, offers a practical, student-friendly approach to source evaluation, especially suited for secondary and postsecondary learners. Unlike models and frameworks designed for higher education, CREDIBLE helps learners critically assess both online and AI-generated content. This paper introduces the framework and explores how educators can embed it into instruction to foster critical thinking, academic integrity, and responsible digital literacy. Full article