Search Results (6,402)

Search Parameters:
Keywords = intelligence level

17 pages, 884 KB  
Article
Resolving Information Asymmetry: A Framework for Reducing Linguistic Complexity Using Denoising Objectives
by Weidong Gao and Wei He
Symmetry 2026, 18(2), 319; https://doi.org/10.3390/sym18020319 - 9 Feb 2026
Abstract
Information asymmetry between complex source texts and general-audience comprehension remains a critical challenge in Artificial Intelligence. However, existing supervised simplification methods suffer from the scarcity of parallel training data, while standard text summarization methods often discard essential details to reduce length. Furthermore, zero-shot large language models frequently lack fine-grained controllability over linguistic complexity. To address these technical limitations, we present a framework to resolve information asymmetry by casting text simplification as a controllable denoising language modeling task. Unlike summarization, our approach preserves full semantic coverage while reducing difficulty. Our algorithm targets the problem of identifying and rewriting complex spans without labeled data through three mechanisms: (1) Asymmetry-Aware Masking, which uses model-based reconstruction difficulty (Negative Log-Likelihood) to isolate high-complexity terms; (2) paraphrase context prompting to enforce semantic invariance; and (3) an adaptive decoding strategy that dynamically penalizes complex tokens based on input difficulty. On ASSET (Abstractive Sentence Simplification Evaluation and Tuning dataset), our best setting reaches SARI (System output Against References and against the Input) 42.90 with FKGL (Flesch–Kincaid Grade Level) 7.10 (Sentence Similarity 0.948), and performs consistently on TurkCorpus (SARI 41.10), while requiring no parallel data or fine-tuning.
(This article belongs to the Section Computer)
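The Asymmetry-Aware Masking idea above (flag tokens whose reconstruction difficulty is high, measured as negative log-likelihood) can be illustrated with a minimal, self-contained sketch. The smoothed unigram model, the threshold choice, and the `<mask>` symbol below are illustrative stand-ins: the paper scores spans with a pretrained language model, not a frequency table.

```python
import math
from collections import Counter

def token_nll(corpus_tokens, alpha=1.0):
    """Per-token negative log-likelihood under an add-alpha smoothed
    unigram model -- a stand-in for model-based reconstruction difficulty."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts)
    def nll(tok):
        p = (counts[tok] + alpha) / (total + alpha * vocab)
        return -math.log(p)
    return nll

def mask_complex_spans(sentence_tokens, nll, threshold):
    """Replace tokens whose NLL exceeds the threshold with a mask symbol,
    mimicking the isolation of high-complexity terms."""
    return [tok if nll(tok) <= threshold else "<mask>" for tok in sentence_tokens]

# Rare words get high NLL and are masked; common words survive.
corpus = ("the cat sat on the mat " * 50 + "perspicacious").split()
nll = token_nll(corpus)
masked = mask_complex_spans(["the", "perspicacious", "cat"], nll, threshold=nll("mat"))
```

A denoising model would then be asked to reconstruct the masked span with simpler vocabulary, which is where the paraphrase-prompting and adaptive-decoding mechanisms of the paper take over.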

25 pages, 8674 KB  
Article
LLM-Based Geospatial Assistant for WebGIS Public Service Applications
by Gabriel Ionut Dorobantu and Ana Cornelia Badea
AI 2026, 7(2), 64; https://doi.org/10.3390/ai7020064 - 9 Feb 2026
Abstract
The automation of public services represents a key area of development at the national level, with the main goal of facilitating citizens’ access to comprehensive, integrated and high-quality services in the shortest possible time. National strategies emphasize the need to integrate open geospatial data and artificial intelligence into information, transparency and decision-making processes. The evolution of artificial intelligence, particularly large language models (LLMs), has led to the development of virtual assistants capable of understanding user requirements and providing answers in natural, easy-to-understand language. This paper presents directions for the development and use of large-language-model-based virtual assistants, focusing on their ability to understand and interact with the geospatial domain through an LLM API. Geospatial modeling contributes significantly to the automation of public services, but access to this technology is often limited by technical expertise or dedicated software programs. The development of AI-based virtual assistants removes these barriers, facilitating access, reducing time and ensuring transparency and accuracy of information. The proposed approach is implemented using a commercial large language model API, integrated with domain-specific geospatial functions and authoritative spatial databases. This study highlights practical examples of virtual assistants capable of understanding the geospatial field and contributing to the optimization and automation of public services in the country. In addition, the paper presents comparative analyses, challenges encountered and potential directions for future research.
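Integrating an LLM with "domain-specific geospatial functions" typically follows a tool-dispatch pattern: the model emits a structured tool call, and a dispatcher routes it to a registered function. A minimal sketch of that pattern follows; every name in it (`parcel_area`, the parcel IDs, the return values) is invented for illustration, and the paper's actual integration runs against a commercial LLM API and authoritative spatial databases.

```python
# Registry of geospatial functions the assistant is allowed to call.
TOOLS = {}

def tool(fn):
    """Decorator that registers a function as an LLM-callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def parcel_area(parcel_id):
    # Hypothetical stand-in for a query against a cadastral database.
    return {"p-42": 1250.0}.get(parcel_id)

def dispatch(call):
    """Execute a structured tool call of the form {'name': ..., 'args': {...}},
    as an LLM function-calling API would produce it."""
    return TOOLS[call["name"]](**call["args"])

result = dispatch({"name": "parcel_area", "args": {"parcel_id": "p-42"}})
```

The value of the pattern is that the LLM never touches the database directly; it only selects among vetted functions, which keeps answers grounded in authoritative data.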
29 pages, 33427 KB  
Article
A Multi-Task Detection Approach with Multi-Scale Attention Aggregation and Feature Enhancement
by Xibao Wu, Kexin Yang, Wei Zhao, Yiqun Wang, Wenbai Chen and Chunjiang Zhao
Agronomy 2026, 16(4), 419; https://doi.org/10.3390/agronomy16040419 - 9 Feb 2026
Abstract
This research presents an advanced YOLOv8-MMD framework specifically designed for intelligent white radish harvesting systems, addressing the critical need for simultaneous species recognition and quality evaluation. The proposed architecture is built upon a dual-branch detection system (YOLOv8-Dual) with a shared Backbone network, and is further enhanced by two novel components: the Multi-Scale Attention Aggregation (MSAA) module that strategically combines channel-wise and spatial attention mechanisms to refine feature representation, and the Multi-scale Feature Enhancement (MAFE) module that facilitates effective information fusion across different hierarchical levels of the network. Extensive experimental validation reveals that the YOLOv8-MMD model achieves remarkable performance metrics, including a species detection precision of 0.945 and a quality assessment precision of 0.812, representing improvements of 1.4% and 4%, respectively, over the baseline YOLOv8-Dual model. Under the comprehensive mAP@50 evaluation standard, the model reaches 0.949 for species identification and 0.859 for quality classification, while maintaining impressive recall rates of 0.924 and 0.836 for the respective tasks. The system demonstrates exceptional robustness when deployed in challenging field conditions, consistently performing well under varying lighting intensities, different growth stages, and partial occlusion scenarios. Computational analysis confirms the model’s practical viability, achieving a processing throughput of 112 frames per second with 8.1 GFLOPs of computational overhead, thereby meeting stringent real-time operational requirements for agricultural robotic applications. Comparative studies with existing methods further substantiate the superiority of the proposed approach in balancing detection accuracy with computational efficiency. The integration of multi-scale attention mechanisms and hierarchical feature enhancement strategies provides a comprehensive solution for automated agricultural harvesting in complex, unstructured environments, offering significant potential for practical implementation in precision agriculture systems.
(This article belongs to the Section Precision and Digital Agriculture)
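The mAP@50 figures quoted above hinge on intersection-over-union matching: a detection counts as correct when its box overlaps the ground truth with IoU ≥ 0.5. A minimal sketch of the IoU computation (standard detection bookkeeping, not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; clamp at zero when the boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union
```

mAP@50 then averages precision over recall levels per class, using this 0.5 threshold to decide true versus false positives.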

26 pages, 3202 KB  
Article
Predicting Seasonal Variations in River Water Quality: An Artificial Intelligence (AI) Approach Integrating Physicochemical Parameters
by Hasibul Hasan Shawon, Md Safwan Kabir Bhuiya, Tris Kee, Md Sabbir Hossan, Md Jubayer Hasan, Wasiq Hasan Nafi, Al-Noman Hossain and Mohammad Nyme Uddin
Sustainability 2026, 18(4), 1746; https://doi.org/10.3390/su18041746 - 9 Feb 2026
Abstract
The characterization and prediction of seasonal variations in river water quality are essential for maintaining control of aquatic ecosystems and resource management. This study aims to develop predictive models using Artificial Intelligence (AI) techniques, particularly Machine Learning (ML) algorithms, to classify seasonal patterns in three major rivers in Bangladesh: Buriganga, Shitalakhya, and Turag. This study considered 15 of the most significant water quality parameters, including pH, alkalinity, biochemical oxygen demand (BOD), chemical oxygen demand (COD), total dissolved solids (TDSs), and electrical conductivity (EC). A total of 476 samples were gathered on a monthly basis at 17 monitoring points in the three rivers, covering all months between January and December from 2021 to 2023. With K-fold cross-validation and hyperparameter optimization, three ML models, namely Extreme Gradient Boosting (XGBoost), Random Forest (RF), and Decision Tree (DT), were employed for predicting seasonal variation in river water quality. The models were assessed based on accuracy, precision, recall, F1, and ROC–AUC scores. Partial Dependence Plot (PDP) analysis was applied to explore the marginal effects of key water quality features on seasonal prediction while keeping other variables constant. RF achieved the highest accuracy of 79%, and XGBoost was about 77% among the models. The achieved prediction accuracies indicate a robust capability to capture key seasonal and spatial changes in river water quality. At this performance level, the models are effective in identifying conditions associated with deteriorated water quality and potential exceedances of guideline-based thresholds established by the World Health Organization (WHO) and Bangladesh water quality standards, supporting timely assessment and management interventions. The SHAP analysis demonstrated TDS, alkalinity, and EC as the top feature drivers of seasonal differences, providing insight into the interplay between chemical composition and climate. The results of the study have the potential to accurately depict the seasonal patterns in river water quality using AI approaches.
(This article belongs to the Section Sustainable Water Management)
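The evaluation protocol (K-fold cross-validation over the 476 samples) can be sketched with a stdlib-only index splitter; in practice scikit-learn's `KFold` plays this role, and shuffling/stratification details are omitted here for brevity.

```python
def kfold_indices(n, k):
    """Yield (train, test) index lists for K-fold cross-validation.
    Fold sizes differ by at most one when k does not divide n."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx, start = list(range(n)), 0
    for size in fold_sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

# Splitting the study's 476 monthly samples into 5 folds (an assumed k).
folds = list(kfold_indices(476, 5))
```

Each model is trained k times, once per held-out fold, and the reported accuracy is the average over folds, which guards against a lucky train/test split.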

21 pages, 1000 KB  
Article
Length- and Usage-Weighted Indices for Representative Route Extraction from Trajectory Data
by Choongheon Yang
Sensors 2026, 26(4), 1114; https://doi.org/10.3390/s26041114 - 9 Feb 2026
Abstract
This paper introduces weighted indices—link passing ratio adjusted by length, average link usage ratio weighted by frequency and length, and path overlap weighted by length and usage—to improve representative path extraction from large-scale vehicle trajectory data. Conventional indices often overstate the representativeness of short links, leading to biased path similarity and unstable grouping. The proposed indices explicitly down-weight short segments such that routes with many small links no longer appear falsely similar. Using data from 18,205 real-world urban trajectories, the weighted indices reduced short-link bias by 20–30% and increased the stability of representative path grouping by 15–30% compared with conventional metrics. Distributional comparisons confirmed that the weighted indices consistently capture the structural characteristics of real-world GPS-based trajectories, reflecting stable link usage and overlap patterns. These improvements were evaluated on a refined subset comprising 12,540 link-level observations and 8320 route pair comparisons, ensuring statistical robustness and consistency. These improvements are expected to enhance downstream applications such as estimations of vehicle kilometers traveled, congestion diagnostics, and sensor-based mobility services. The findings demonstrate that refining trajectory similarity metrics at the link level has direct implications for intelligent transportation systems, supporting accurate analysis and practical decision-making in large-scale urban mobility management.
(This article belongs to the Section Intelligent Sensors)
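The down-weighting idea can be reduced to a hypothetical sketch: weight each link's usage by traversed distance (length × frequency) so that many short links cannot inflate a route's apparent representativeness. The paper's index definitions are more elaborate; this shows only the weighting mechanism, with invented link names.

```python
def length_weighted_usage(link_lengths, link_counts):
    """Each link's share of total traversed distance (length * frequency).
    A count-only ratio would give the short link 5/6 of the weight here;
    length-weighting pulls it back to 1/3."""
    traversed = {link: link_lengths[link] * link_counts[link] for link in link_lengths}
    total = sum(traversed.values())
    return {link: dist / total for link, dist in traversed.items()}

# One 100 m link used once vs. one 10 m link used five times.
weights = length_weighted_usage({"long": 100.0, "short": 10.0},
                                {"long": 1, "short": 5})
```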

39 pages, 8743 KB  
Review
A Review of Aggregation-Based Colorimetric and SERS Sensing of Metal Ions Utilizing Au/Ag Nanoparticles
by Shu Wang, Lin Yin, Yanlong Meng, Han Gao, Yuhan Fu, Jihui Hu and Chunlian Zhan
Biosensors 2026, 16(2), 110; https://doi.org/10.3390/bios16020110 - 8 Feb 2026
Abstract
The accurate monitoring and dynamic analysis of metal ions are of considerable practical significance in environmental toxicology and life sciences. Colorimetric analysis and surface-enhanced Raman scattering (SERS) sensing technologies, utilizing the aggregation effect of gold and silver nanoparticles (Au/Ag NPs), have emerged as prominent methods for rapid metal ion detection. While sharing a common plasmonic basis, these two techniques serve distinct yet complementary analytical roles: colorimetric assays offer rapid, instrument-free visual screening ideal for point-of-care testing (POCT), whereas SERS provides superior sensitivity and structural fingerprinting for precise quantification in complex matrices. Furthermore, the synergistic integration of these modalities facilitates the development of dual-mode sensing platforms, enabling mutual signal verification for enhanced reliability. This article evaluates contemporary optical sensing methodologies utilizing aggregation effects and their advancements in the detection of diverse metal ions. It comprehensively outlines methodological advancements from nanomaterial fabrication to signal transduction, encompassing approaches such as biomass-mediated green synthesis and functionalization, targeted surface ligand engineering, digital readout systems utilizing intelligent algorithms, and multimodal synergistic sensing. Recent studies demonstrate that these techniques have attained trace-level identification of target ions regarding analytical efficacy, with detection limits generally conforming to or beyond applicable environmental and health safety regulations. Moreover, pertinent research has enhanced detection linear ranges, anti-interference properties, and adaptability for POCT, validating the usefulness and developmental prospects of this technology for analysis in complicated matrices.
(This article belongs to the Section Optical and Photonic Biosensors)

26 pages, 300 KB  
Review
Theoretical Foundations and Architectural Evolution of Cyberspace Endogenous Security: A Comprehensive Survey
by Heming Zhang, Jian Li, Hong Wang, Shizhong Xu, Hong Yang and Haitao Wu
Appl. Sci. 2026, 16(4), 1689; https://doi.org/10.3390/app16041689 - 8 Feb 2026
Abstract
The endogenous security paradigm has emerged to address the limitations of traditional cybersecurity, which relies on reactive “patching” and struggles against unknown threats, APTs, and supply chain attacks. Centered on the principle that “structure determines security”, it diverges from detection-based approaches by employing systems theory and cybernetics to architect closed-loop systems with “heterogeneous execution, multimodal adjudication, and dynamic scheduling”. This is realized through intrinsic architectural constructs such as dynamism, heterogeneity, and redundancy. Theoretically, it transforms deterministic component-level attacks into probabilistic system-level events, thereby shifting the security foundation from a “cognitive contest” to an “entropy-driven confrontation”. This paper provides a comprehensive review of this paradigm. We begin by elucidating its philosophical foundations and core axioms, focusing on the Dynamic Heterogeneous Redundancy (DHR) model, which converts attacks on specific vulnerabilities into probabilistic events under the core assumption of independent heterogeneous execution entities. Next, we trace the architectural evolution from early mimic defense prototypes to a universal framework, analyzing key developments including expanded heterogeneity dimensions, intelligence-driven dynamic policies, and enhanced adjudication mechanisms. We then explore essential enabling technologies and their integration with cutting-edge trends such as artificial intelligence, 6G, and cloud-native computing. Through case studies of the 5G core network and intelligent connected vehicles, the engineering feasibility of the endogenous security paradigm has been validated, with quantifiable security gains demonstrated. In a live-network pilot of the endogenous security micro-segmentation system for the 5G core, resource consumption (CPU/memory usage) of network function virtual machines remained below 3% under steady-state service loads. The system concurrently maintained microsecond-level forwarding performance and achieved carrier-grade core service availability of 99.999%. These results demonstrate that the endogenous security mechanism delivers high-level structural security with an acceptable performance cost. The paper also critically summarizes current theoretical, engineering, and ecosystem challenges, while outlining future research directions such as “Endogenous Security as a Service” and convergence with quantum-safe technologies.
(This article belongs to the Special Issue AI Technology and Security in Cloud/Big Data)
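The "heterogeneous execution, multimodal adjudication" loop of the DHR model can be caricatured as majority voting over redundant executors, with disagreement flagged as a potential compromise. This is a toy sketch of the adjudication step only; real mimic-defense adjudicators are considerably richer (weighted votes, scheduling of replacement executors, and so on).

```python
from collections import Counter

def adjudicate(outputs):
    """Majority-vote adjudication over heterogeneous executor outputs.
    Returns (adjudicated_value, anomaly_flag); the flag is set whenever
    any executor disagreed, signaling a possibly compromised entity."""
    tally = Counter(outputs)
    value, votes = tally.most_common(1)[0]
    return value, votes < len(outputs)
```

Because an attacker must simultaneously produce the same wrong output on independently implemented executors, a deterministic single-component exploit becomes a low-probability system-level event, which is the paradigm's central claim.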
20 pages, 1202 KB  
Perspective
The Innovative Potential of Artificial Intelligence Applied to Patient Registries to Implement Clinical Guidelines
by Sebastiano Gangemi, Alessandro Allegra, Mario Di Gioacchino, Luca Gammeri, Irene Cacciola and Giorgio Walter Canonica
Mach. Learn. Knowl. Extr. 2026, 8(2), 38; https://doi.org/10.3390/make8020038 - 7 Feb 2026
Abstract
Guidelines provide specific recommendations based on the best available medical knowledge, summarizing and balancing the advantages and disadvantages of various diagnostic and treatment options. Currently, consensus methods are the best and most common practices in creating clinical guidelines, even though these approaches have several limitations. However, the rapid pace of biomedical innovation and the growing availability of real-world data (RWD) from clinical registries (containing data like clinical outcomes, treatment variables, imaging, and laboratory results) call for a complementary paradigm in which recommendations are continuously stress-tested against high-quality, interoperable data and auditable artificial intelligence (AI) pipelines. AI, based on information retrieved from patient registries, can optimize the process of creating guidelines. In fact, AI can analyze large volumes of data, ensuring essential tasks such as correct feature identification, prediction, classification, and pattern recognition of all information. In this work, we propose a four-phase lifecycle, comprising data curation, causal analysis and estimation, objective validation, and real-time updates, complemented by governance and machine learning operations (MLOps). A comparative analysis with consensus-only methods, a pilot protocol, and a compliance checklist are provided. We believe that the use of AI will be a valuable support in drafting clinical guidelines to complement expert consensus and ensure continuous updates to standards, providing a higher level of evidence. The integration of AI with high-quality patient registries has the potential to substantially modernize guideline development, enabling continuously updated, data-driven recommendations.

18 pages, 3084 KB  
Article
Real-Time Defect Detection of Capacitive Touch Pads for Hands-Off Detection in Advanced Driver Assistance Systems
by Sung Min Hong, Jae-Wan Park, Jae-Hoon Jeong and Sun Young Kim
Appl. Sci. 2026, 16(4), 1675; https://doi.org/10.3390/app16041675 - 7 Feb 2026
Abstract
The hands-off detection (HOD) function plays a critical role in accurately identifying driver hand contact in advanced driver assistance systems (ADAS), thereby ensuring system reliability and safety compliance. Capacitive touch pads, which are extensively utilized for this purpose, are prone to various defects arising from their manufacturing process. These defects include pad friction, plating anomalies, pattern deformation, surface scratches, and press gaps. Despite their extensive utilization, a systematic methodology capable of detecting both surface-level and internal microstructural defects remains to be established. The present study proposes a capacitance defect detection algorithm grounded in charge quantity (Q) analysis. A dedicated main control board was developed, integrating signal amplification, analog-to-digital conversion, noise filtering, defect classification logic, and real-time visualization through a graphical user interface (GUI). The system was implemented on an operational automotive production line and validated through the inspection of over 240,000 capacitive touch pads under real-world manufacturing conditions. In this setting, the system successfully identified subtle defects that conventional visual inspection methods failed to detect. The proposed method addresses the limitations of traditional inspection techniques and introduces a structured approach to detecting complex defects in capacitive touch sensors. This research is of practical relevance in industrial settings and contributes a systematic framework for future advancements in HOD system reliability and quality assurance. Subsequent research endeavors will investigate the integration of artificial intelligence (AI) and machine learning techniques to facilitate predictive maintenance and intelligent defect management.
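A hypothetical reduction of the charge-quantity (Q) analysis to its simplest possible form: flag a pad whose measured charge deviates from nominal beyond a tolerance. The paper's classification logic, filtering chain, and thresholds are far more involved; the `tol=0.05` value below is invented for illustration.

```python
def classify_pad(q_measured, q_nominal, tol=0.05):
    """Return True (defective) when relative charge deviation exceeds the
    tolerance -- a toy stand-in for the paper's charge-quantity analysis."""
    return abs(q_measured - q_nominal) / q_nominal > tol
```

The point of charge-based screening is that internal microstructural defects shift capacitance (and hence stored charge) even when the pad surface looks flawless to visual inspection.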

25 pages, 769 KB  
Article
Can Digital–Intelligent Integration Enhance Urban Green Economic Efficiency? An Empirical Analysis Based on National Big Data Comprehensive Pilot Zones and Smart-City Dual-Pilot Programs
by Feng He and Yue Zhang
Sustainability 2026, 18(4), 1710; https://doi.org/10.3390/su18041710 - 7 Feb 2026
Abstract
Digital–intelligent integration (DII) has emerged as a pivotal driver for high-quality urban development, offering a pathway to overcome pressing resource and environmental constraints. By harnessing data as a core production factor and integrating advanced intelligent technologies, DII can substantially elevate urban green economic efficiency (GEE). This study constructs a quasi-natural experiment using the staggered rollout of national big data comprehensive pilot zones (initiated in 2012) and smart-city pilot programs (from 2016 onward). Employing a rigorous staggered difference-in-differences (DID) estimator on panel data from 279 Chinese prefecture-level cities over 2010–2021, we find that DII causally increases GEE by 5.03 percentage points (p < 0.01). This benchmark result remains robust across a comprehensive set of checks, including parallel-trend validation, placebo tests, double/debiased machine learning, two-stage least squares with historical IT-sector instruments, and controls for overlapping policies (e.g., ETS, low-carbon pilots, green finance zones). Mechanism analysis, conducted via a sequential 2SLS control-function approach with lagged mediators and Sobel–Goodman mediation tests, reveals three theoretically grounded channels: (i) enhanced urban ecological resilience (mediates 62%, z = 4.68), (ii) accelerated green technological innovation (55%, z = 4.12, measured by IPC/Y02 patent share), and (iii) heightened entrepreneurial vitality (58%, z = 4.39, new firms per 10,000 residents). Heterogeneity tests show pronounced effects in growing and mature resource-based cities (+1.21% and +11.21%), high-fintech cities (+11.35%), and high-river-density areas (+10.29%) but insignificant impacts in declining resource-exhausted cities (joint F p = 0.08). This study makes four key contributions: (1) it innovatively constructs a continuous DII policy variable by exploiting the synergistic timing of dual pilots, thereby overcoming the limitation of analyzing policies in isolation; (2) it opens the “theoretical black box” by integrating institutional theory and information economics into a unified conceptual framework that explicitly links DII to GEE through reduced transaction costs and alleviated information asymmetry; (3) it enriches the mediation identification strategy in staggered settings using 2SLS control functions and sequential G-estimation, addressing endogeneity in intermediary variables more rigorously than traditional three-step approaches; and (4) it delivers nuanced evidence on the contextual conditions (when and where) under which DII yields the strongest green dividends, providing actionable guidance for China’s “dual-carbon” goals and the global green transition.
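The identification strategy rests on difference-in-differences; the canonical 2×2 case can be written in a few lines. The paper's staggered DID estimator over 279 cities is substantially more sophisticated (staggered adoption, controls, robust inference), so this sketch only fixes the core idea.

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Canonical 2x2 difference-in-differences: the change in the treated
    group minus the change in the control group, which nets out common
    time trends under the parallel-trends assumption."""
    def mean(xs):
        return sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (mean(control_post) - mean(control_pre))

# Treated cities gain 2 units, controls drift up 1 unit: the DID effect is 1.
effect = did_estimate([1.0, 1.0], [3.0, 3.0], [1.0, 1.0], [2.0, 2.0])
```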

26 pages, 44951 KB  
Article
Advanced Deep Learning Models for Classifying Dental Diseases from Panoramic Radiographs
by Deema M. Alnasser, Reema M. Alnasser, Wareef M. Alolayan, Shihanah S. Albadi, Haifa F. Alhasson, Amani A. Alkhamees and Shuaa S. Alharbi
Diagnostics 2026, 16(3), 503; https://doi.org/10.3390/diagnostics16030503 - 6 Feb 2026
Abstract
Background/Objectives: Dental diseases represent a great problem for oral health care, and early diagnosis is essential to reduce the risk of complications. Panoramic radiographs provide a detailed perspective of dental structures that is suitable for automated diagnostic methods. This paper aims to investigate the use of an advanced deep learning (DL) model for the multiclass classification of diseases at the sub-diagnosis level using panoramic radiographs to resolve the inconsistencies and skewed classes in the dataset. Methods: To classify and test the models, rich data of 10,580 high-quality panoramic radiographs, initially annotated in 93 classes and subsequently improved to 35 consolidated classes, was used. We applied extensive preprocessing techniques like class consolidation, mislabeled entry correction, redundancy removal and augmentation to reduce the ratio of class imbalance from 2560:1 to 61:1. Five modern convolutional neural network (CNN) architectures—InceptionV3, EfficientNetV2, DenseNet121, ResNet50, and VGG16—were assessed with respect to five metrics: accuracy, mean average precision (mAP), precision, recall, and F1-score. Results: InceptionV3 achieved the best performance with a 97.51% accuracy rate and a mAP of 96.61%, thus confirming its superior ability for diagnosing a wide range of dental conditions. The EfficientNetV2 and DenseNet121 models achieved accuracies of 97.04% and 96.70%, respectively, indicating strong classification performance. ResNet50 and VGG16 also yielded competitive accuracy values comparable to these models. Conclusions: Overall, the results show that deep learning models are successful in dental disease classification, especially the model with the highest accuracy, InceptionV3. New insights and clinical applications will be realized from a further study into dataset expansion, ensemble learning strategies, and the application of explainable artificial intelligence techniques. The findings provide a starting point for implementing automated diagnostic systems for dental diagnosis with greater efficiency, accuracy, and clinical utility in the deployment of oral healthcare.
(This article belongs to the Special Issue Advances in Dental Diagnostics)
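The five reported metrics all derive from per-class confusion counts; a minimal sketch of precision, recall, and F1 (accuracy and mAP extend the same bookkeeping across classes and confidence thresholds):

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from confusion counts:
    precision = TP/(TP+FP), recall = TP/(TP+FN), F1 = their harmonic mean."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Under the class imbalance described above (reduced from 2560:1 to 61:1), per-class recall and F1 are the metrics that reveal whether rare diagnoses are actually being detected, which plain accuracy can mask.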
14 pages, 241 KB  
Article
Is the Rise of Artificial Intelligence Redefining Italian University Students’ Learning Experiences? Perceptions, Practices, and the Future of Education
by Chiara Buizza, Jessica Dagani and Alberto Ghilardi
Educ. Sci. 2026, 16(2), 258; https://doi.org/10.3390/educsci16020258 - 6 Feb 2026
Abstract
Background: The rapid diffusion of generative Artificial Intelligence (AI) in higher education is reshaping students’ learning practices and raising concerns about unequal access and educational equity. In the Italian university context, where institutional guidelines on AI use are still developing, examining how students adopt and perceive tools such as ChatGPT is particularly relevant. Methods: This quantitative study investigated patterns of ChatGPT use and perceptions among Italian university students, with specific attention to its perceived support for learning and the development of transversal skills. Data were collected through an online survey. Differences across socio-demographic and academic characteristics were analysed using Mann–Whitney and Kruskal–Wallis tests, while associations between ChatGPT use, students’ perceptions, and study-related outcomes were examined using Spearman’s rho coefficients. Results: Students perceived ChatGPT as a useful tool, particularly in supporting the development of analytical, writing, and digital skills. Significant differences emerged across student groups: higher levels of use and more positive perceptions were reported by freshmen, students studying in urban areas, and those with stronger economic resources. Conclusions: ChatGPT adoption, along with perceived institutional support and benefits, varies by academic experience and socio-economic background. As the findings are based on self-reported perceptions, they reflect perceived rather than measured learning outcomes, highlighting the need for further research using objective indicators. Full article
(This article belongs to the Special Issue The State of the Art and the Future of Education)
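The abstract above reports associations via Spearman's rho, a rank correlation suited to ordinal survey data. The sketch below computes it from scratch; the Likert-style values are invented for illustration and are not the study's survey data.

```python
# Spearman's rho: Pearson correlation computed on ranks,
# with tied values sharing the average of their rank positions.

def ranks(values):
    # 1-based average ranks; ties get the mean of their positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

usage = [1, 2, 3, 4, 5]    # hypothetical self-reported ChatGPT use
benefit = [2, 1, 4, 3, 5]  # hypothetical perceived learning benefit
print(round(spearman_rho(usage, benefit), 2))  # -> 0.8
```

Because it operates on ranks, Spearman's rho makes no normality assumption, which is consistent with the study's choice of non-parametric tests (Mann–Whitney, Kruskal–Wallis) elsewhere in its analysis.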
25 pages, 3080 KB  
Article
Lightweight Vision Transformer for Real-Time Threat Level Assessment in Φ-OTDR-Based Pipeline Monitoring
by Yuhan Zhang, Hao Zeng, Chang Su, Jie Yang, Jianjun Zhu and Jianli Wang
Appl. Sci. 2026, 16(3), 1664; https://doi.org/10.3390/app16031664 - 6 Feb 2026
Abstract
Phase-sensitive optical time domain reflectometry (Φ-OTDR) is a highly sensitive distributed vibration sensing technology crucial for pipeline safety monitoring. However, its sensitivity makes it susceptible to environmental interference, leading to frequent false alarms by misclassifying routine activities as threats. To enable accurate threat identification and rapid response, this study proposes a lightweight LightPatch Vision Transformer (LP-ViT) model suitable for edge deployment. We establish a mapping between excavator-pipeline distance and threat levels: “direct intrusion” (within 5 m), “high-risk operation” (within 10 m), and “background construction” (beyond 15 m). The LP-ViT model is developed through structural optimization and parameter compression of the standard Vision Transformer, achieving a 96.6% reduction in parameter count while maintaining a high classification accuracy of 89.9%. Furthermore, via knowledge distillation, we derive an ultra-lightweight student model with merely 0.37 M parameters, which achieves an inference latency of 5.5 ms per sample, enabling millisecond-level threat detection and response. The proposed solution effectively enhances both the classification accuracy and real-time performance of Φ-OTDR systems in complex environments, providing a practical pathway for implementing edge intelligence in pipeline safety monitoring. Full article
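The ultra-lightweight student model above is derived via knowledge distillation. A minimal sketch of the standard temperature-scaled distillation loss follows; the logits, temperature, and three-class setup are illustrative assumptions, not the paper's actual training configuration.

```python
# Temperature-scaled knowledge distillation loss (Hinton et al. style):
# KL divergence between teacher and student softmax distributions,
# both softened by a temperature T, scaled by T^2.

import math

def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2

# Hypothetical logits over the three threat levels: direct intrusion,
# high-risk operation, background construction.
teacher = [4.2, 1.1, -0.5]
student = [3.0, 1.5, 0.2]
loss = distillation_loss(teacher, student)
print(loss >= 0.0)  # KL divergence is non-negative -> True
```

In practice this term is usually combined with a standard cross-entropy loss on the true labels; the weighting between the two is a tuning choice the abstract does not specify.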
19 pages, 416 KB  
Article
Hybrid Intelligence in Requirements Education: Preserving Student Agency in Refining User Stories with Generative AI
by Leon Sterling and Eduardo Oliveira
Information 2026, 17(2), 166; https://doi.org/10.3390/info17020166 - 6 Feb 2026
Abstract
Generative Artificial Intelligence (Gen AI) offers significant potential to support requirements engineering (RE) education; however, its integration poses challenges regarding accuracy and student engagement. While Gen AI cannot independently specify requirements without hallucinating or overstepping scope, it can serve as a powerful partner in a hybrid intelligence workflow. In this paper, we address the challenge of translating high-level motivational models into detailed user stories, a process that is traditionally labour-intensive for novices. We introduce a structured, human-in-the-loop workflow that uses Gen AI to refine and polish user stories while strictly preserving student agency. By grounding the output from Gen AI in a validated motivational model, the workflow minimises the risk of metacognitive offloading, requiring students to actively critique and validate the initially generated requirements. Our analysis of instructional artefacts demonstrates that Gen AI helps in three ways: suggesting structural improvements, offering alternative professional phrasing, and enhancing readability. However, we also identify risks of intent drift and scope expansion, reinforcing the need for rigorous human oversight. The findings advocate for a pedagogical approach where the Gen AI system acts as a reflective assistant rather than an autonomous generator. Full article
(This article belongs to the Special Issue Using Generative Artificial Intelligence Within Software Engineering)
20 pages, 947 KB  
Systematic Review
A Systematic Review of Multimodal Frameworks for Assessing Health Vulnerability in Academicians Across Ergonomic, Lifestyle, and Dietary Domains
by Pooja Oza, Shraddha Phansalkar, Aayush Shrivastava, Abhishek Sharma, Jun-Jiat Tiang and Wei Hong Lim
Healthcare 2026, 14(3), 413; https://doi.org/10.3390/healthcare14030413 - 6 Feb 2026
Abstract
Background: Lifestyle challenges such as prolonged sitting, irregular dietary habits, high stress levels, and lack of physical activity have become increasingly common among working professionals. These factors contribute to the risk of chronic diseases such as diabetes, heart disease, obesity, and high blood pressure, which in turn reduce work performance and quality of life and may further strain access to health services through increased healthcare needs. The teaching environment, like many other work environments, is mentally, emotionally, and practically demanding, placing extra pressure on those who work in it. Academicians, who devote themselves to guiding young minds, often make unhealthy daily choices and face significant work-related stress, which can lead to serious long-term health problems. This review highlights that health and well-being are shaped not by a single factor such as diet, work patterns, or habits, but by their combined effect. Methods: A review of approximately 113 studies highlighted that academicians usually feel drained and physically exhausted. Results: Factors such as prolonged fasting, insufficient water intake, long periods of standing, long and continuous talking, and extended periods of sitting add to stress levels at the workplace. The most critical finding is that these factors do not act in isolation but interact in combination, influencing one another and thus increasing vulnerability to lifestyle disorders. Conclusions: This problem can be addressed with a Multimodal Assessment Framework that integrates teachers’ data on dietary habits, workplace ergonomics, sleep quality, and levels of physical activity.
The presented work also proposes a statistical technique combined with an Artificial Intelligence (AI) model that generates a Vulnerability Quotient (VQ) reflecting teachers’ exposure to lifestyle diseases, which may then be used to guide remedial interventions. These insights can further guide institutions and policymakers in designing healthier, more supportive, and sustainable teaching environments. Full article
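As a purely hypothetical illustration of how a Vulnerability Quotient might aggregate the four domains the review names (diet, ergonomics, sleep, physical activity), a weighted normalized score could look like the sketch below. The domain names, weights, and 0-100 scale are assumptions for illustration only, not the paper's actual VQ model.

```python
# Hypothetical VQ: weighted average of per-domain risk scores in [0, 1],
# rescaled to 0-100 (higher = more vulnerable to lifestyle disorders).

def vulnerability_quotient(scores, weights=None):
    """scores: dict of domain -> risk score in [0, 1] (1 = most at risk)."""
    if weights is None:
        weights = {domain: 1.0 for domain in scores}  # equal weighting
    total_weight = sum(weights[d] for d in scores)
    weighted = sum(scores[d] * weights[d] for d in scores)
    return 100.0 * weighted / total_weight

# Invented risk profile for one teacher, one score per domain.
teacher_profile = {
    "diet": 0.7,        # e.g. prolonged fasting, low water intake
    "ergonomics": 0.6,  # e.g. long standing/sitting periods
    "sleep": 0.4,
    "activity": 0.8,    # low physical activity
}
print(round(vulnerability_quotient(teacher_profile), 1))  # -> 62.5
```

A real framework of this kind would fit the weights from data (the abstract mentions an AI model) rather than assigning them by hand as done here.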