Search Results (2,143)

Search Parameters:
Keywords = easy integrate

14 pages, 7314 KB  
Article
Establishment of a QuEChERS-FaPEx Rapid Analytical Method for N-Nitrosamines in Meat Products
by Chun-Han Su, Peng-Wang Tan and Tsai-Hua Kao
Molecules 2026, 31(1), 32; https://doi.org/10.3390/molecules31010032 - 22 Dec 2025
Abstract
This study aimed to establish a fast and efficient method for the determination of N-nitrosamines (NAs) in meat products by integrating two sample preparation techniques—QuEChERS (Quick, Easy, Cheap, Effective, Rugged, and Safe) and FaPEx (Fast Pesticide Extraction)—with liquid chromatography–tandem mass spectrometry (LC–MS/MS). Chromatographic separation was performed on a Poroshell 120 Phenyl Hexyl column using a gradient elution of acetonitrile and 0.01% formic acid at a flow rate of 0.3 mL/min and a column temperature of 25 °C. Under these conditions, nine NAs and one internal standard were completely separated within 11 min with selective reaction monitoring mode (SRM) for detection. Samples were first extracted with QuEChERS powder using acetonitrile containing 0.1% formic acid, followed by purification with a FaPEx-Chl cartridge. This combined approach demonstrated superior performance compared with traditional solvent extraction or QuEChERS extraction alone. The recoveries of the developed method ranged from 76% to 111% and 52% to 103% at spiking levels of 50 ng/g and 20 ng/g, respectively. The limits of detection (LOD) and quantification (LOQ) were 0.002–0.3 ng/g and 0.006–1.00 ng/g, respectively. The inter-day and intra-day precisions (RSD%) ranged from 2.7% to 17% and 2.9% to 17%, respectively. These results indicate that the proposed method is among the most time-efficient and effective analytical approaches currently available for the determination of NAs in meat products. Full article
(This article belongs to the Special Issue Application of Analytical Chemistry in Food Science)
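For context on the validation figures quoted above, the conventional definitions are given below. These are generic textbook formulas, not taken from the article (which may, for example, derive LOD/LOQ from signal-to-noise ratios instead); sigma denotes the standard deviation of the response and S the calibration-curve slope.

```latex
% Conventional method-validation definitions (illustrative, not the article's own derivation)
\mathrm{Recovery}\ (\%) = \frac{C_{\text{spiked}} - C_{\text{unspiked}}}{C_{\text{added}}} \times 100,
\qquad
\mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S}
```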

22 pages, 1545 KB  
Article
The Diffusion of Risk Management Assistance for Wildland Fire Management in the United States
by Tyler A. Beeton, Tyler Aldworth, Melanie M. Colavito, Nicolena vonHedemann, Ch’aska Huayhuaca and Michael D. Caggiano
Fire 2025, 8(12), 478; https://doi.org/10.3390/fire8120478 - 17 Dec 2025
Viewed by 160
Abstract
The wildland fire management system is increasingly complex and uncertain, which challenges suppression actions and increases stress on an already strained system. Researchers and managers have called for the use of strategic, risk-informed decision making and decision support tools (DSTs) in wildfire management to manage complexity and mitigate uncertainty. This paper evaluated the use of an emerging wildfire DST, the Risk Management Assistance (RMA) dashboard, during the 2021 and 2022 wildfire seasons. We used a mixed-method approach, consisting of an online survey and in-depth interviews with fire managers. Our objectives were the following: (1) to determine what factors at multiple scales facilitated and frustrated the adoption of RMA; and (2) to identify actionable recommendations to facilitate uptake of RMA. We situate our findings within the diffusion of innovations literature and use-inspired research. Most respondents indicated RMA tools were easy to use, accurate, and relevant to decision-making processes. We found evidence that the tools were used throughout the fire management cycle. Previous experience with RMA and training in risk management, trust in models, leadership support, and perceptions of current and future fire risk affected RMA adoption. Recommendations to improve RMA included articulating how the tools integrate with existing wildland fire DSTs, new tools that consider dynamic forecasting of risk, and both formal and informal learning opportunities in the pre-season, during incidents, and in post-fire reviews. We conclude with research and management considerations to increase the use of RMA and other DSTs in support of safe, effective, and informed wildfire decision making. Full article
(This article belongs to the Section Fire Social Science)

22 pages, 450 KB  
Review
Exploring the Security of Mobile Face Recognition: Attacks, Defenses, and Future Directions
by Elísabet Líf Birgisdóttir, Michał Ignacy Kunkel, Lukáš Pleva, Maria Papaioannou, Gaurav Choudhary and Nicola Dragoni
Appl. Sci. 2025, 15(24), 13232; https://doi.org/10.3390/app152413232 - 17 Dec 2025
Viewed by 184
Abstract
Biometric authentication on smartphones has advanced rapidly in recent years, with face recognition becoming the dominant modality due to its convenience and easy integration with modern mobile hardware. However, despite these developments, smartphone-based facial recognition systems remain vulnerable to a broad spectrum of attacks. This survey provides an updated and comprehensive examination of the evolving attack landscape and corresponding defense mechanisms, incorporating recent advances up to 2025. A key contribution of this work is a structured taxonomy of attack types targeting smartphone facial recognition systems, encompassing (i) 2D and 3D presentation attacks; (ii) digital attacks; and (iii) dynamic attack patterns that exploit acquisition conditions. We analyze how these increasingly realistic and condition-dependent attacks challenge the robustness and generalization capabilities of modern face anti-spoofing (FAS) systems. On the defense side, the paper reviews recent progress in liveness detection, deep-learning- and transformer-based approaches, quality-aware and domain-generalizable models, and emerging unified frameworks capable of handling both physical and digital spoofing. Hardware-assisted methods and multi-modal techniques are also examined, with specific attention to their applicability in mobile environments. Furthermore, we provide a systematic overview of commonly used datasets, evaluation metrics, and cross-domain testing protocols, identifying limitations related to demographic bias, dataset variability, and controlled laboratory conditions. Finally, the survey outlines key research challenges and future directions, including the need for mobile-efficient anti-spoofing models, standardized in-the-wild evaluation protocols, and defenses robust to unseen and AI-generated spoof types. Collectively, this work offers an integrated view of current trends and emerging paradigms in smartphone-based face anti-spoofing, supporting the development of more secure and resilient biometric authentication systems. Full article
(This article belongs to the Collection Innovation in Information Security)

20 pages, 2313 KB  
Article
A Cybersecurity NER Method Based on Hard and Easy Labeled Training Data Discrimination
by Lin Ye, Yue Wu, Hongli Zhang and Mengmeng Ge
Sensors 2025, 25(24), 7627; https://doi.org/10.3390/s25247627 - 16 Dec 2025
Viewed by 190
Abstract
Although general-domain Named Entity Recognition (NER) has achieved substantial progress in the past decade, its application to cybersecurity NER is hindered by the lack of publicly available annotated datasets, primarily because of the sensitive and privacy-related nature of security data. Prior research has largely sought to improve performance by expanding annotation volumes, while overlooking the intrinsic characteristics of training data. In this study, we propose a cybersecurity Named Entity Recognition (NER) method based on hard and easy labeled training data discrimination. Firstly, a hybrid strategy that integrates a deep learning (DL)-based discriminator and a rule-based discriminator is employed to partition the original dataset into hard and easy samples. Secondly, the proportion of hard and easy data in the training set is adjusted to determine the optimal balance. Finally, a data augmentation algorithm is applied to the partitioned dataset to further improve model performance. The results demonstrate that, under a fixed total training data size, the ratio of hard to easy samples has a significant impact on NER performance, with the optimal strategy achieved at a 1:1 proportion. Moreover, the proposed method further improves the overall performance of cybersecurity NER. Full article
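The partition-and-rebalance step described above can be pictured with a short sketch. Everything here is a placeholder (the scoring function, the 0.5 threshold, and the sampling scheme), not the authors' implementation; it only illustrates splitting samples into hard and easy pools and assembling a training set at a chosen hard:easy ratio such as 1:1.

```python
import random

def partition_hard_easy(samples, score_fn, threshold=0.5):
    """Split labeled samples into 'hard' and 'easy' pools using a difficulty
    score (e.g., output of a DL-based or rule-based discriminator).
    The scoring function and threshold are placeholders."""
    hard, easy = [], []
    for s in samples:
        (hard if score_fn(s) >= threshold else easy).append(s)
    return hard, easy

def build_training_set(hard, easy, total_size, hard_ratio=0.5, seed=0):
    """Assemble a fixed-size training set with the requested hard:easy
    proportion (the abstract reports 1:1 as optimal)."""
    rng = random.Random(seed)
    n_hard = min(int(total_size * hard_ratio), len(hard))
    n_easy = min(total_size - n_hard, len(easy))
    mix = rng.sample(hard, n_hard) + rng.sample(easy, n_easy)
    rng.shuffle(mix)
    return mix
```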

18 pages, 1360 KB  
Article
Lean-Enhanced Virtual Reality Training for Productivity and Ergonomic Safety Improvements
by Rongzhen Liu, Peng Wang and Chunjiang Chen
Buildings 2025, 15(24), 4534; https://doi.org/10.3390/buildings15244534 - 15 Dec 2025
Viewed by 144
Abstract
Effective training is essential for addressing the continuous requirement for enhancing productivity and safety in construction. Virtual reality (VR) has emerged as a powerful tool for simulating site environments with high fidelity. While previous studies have explored the potential of VR in construction training, there is potential to incorporate advanced construction theories, such as lean principles, which are critical for optimizing work processes and safety. Thus, this study aims to develop an integrated VR-lean training system that integrates lean principles into traditional VR training, focusing on improving productivity and ergonomic safety—two interrelated challenges in construction. This study developed a virtual training environment for scaffolding installation, employing value stream mapping—a key lean tool—to guide trainees in eliminating waste and streamlining workflows. A before-and-after experimental design was implemented, involving 64 participants randomly assigned to non-lean VR or integrated VR-lean training groups. Training performance was assessed using productivity and ergonomic safety indicators, while a post-training questionnaire evaluated training outcomes. The results demonstrated significant productivity improvements in integrated VR-lean training compared to non-lean VR training, including a 12.3% reduction in processing time, a 21.6% reduction in waste time, a 20.8% increase in productivity index, and an 18.4% decrease in number of errors. These gains were driven by identifying and eliminating waste categories, including rework, unnecessary traveling, communication delays, and idling. Additionally, reducing rework contributed to a 7.2% improvement in the safety risk index by minimizing hazardous postures. A post-training questionnaire revealed that training satisfaction was strongly influenced by platform reliability and stability, and user-friendly, easy-to-navigate interfaces, while training effects of the integrated training were enhanced by before-session on waste knowledge and after-training feedback on optimized workflows. This study provides valuable insights into the synergy of lean principles and VR-based training, demonstrating the significant impact of lean within VR scenarios on productivity and ergonomic safety. The study also provides practical recommendations for designing immersive training systems that optimize construction performance and safety outcomes. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)

27 pages, 3231 KB  
Review
Towards Greener Sample Preparation: A Review on Micro-QuEChERS Advances and Applications in Food, Environmental, and Biological Matrices
by Athina Papadopoulou, Vasiliki Boti and Christina Nannou
Separations 2025, 12(12), 339; https://doi.org/10.3390/separations12120339 - 14 Dec 2025
Viewed by 256
Abstract
This review provides a comprehensive evaluation of recent advances in miniaturized Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) sample preparation techniques applied across food, environmental, and biological matrices. Covering developments within 2020–2025, it focuses on analytical performance, environmental impact, and alignment with principles of sustainable and green analytical chemistry. Central to this review is the significant reduction in solvent and sample volumes achieved through miniaturization, thus decreasing the reagent consumption and hazardous waste generation. The integration of eco-friendly extraction solvents and sorbent materials enhances selectivity and reduces the environmental footprint. These methods are often coupled with high-resolution mass spectrometers, enabling sensitive, multi-residue, and suspect analysis. Challenges associated with complex matrices, low analyte concentrations, and the need for robust clean-up procedures are addressed through innovative hybrid workflows and advanced materials, e.g., polymeric electrospun fibers and deep eutectic solvents. The growing adoption of greener protocols is highlighted. Moreover, it underscores their potential to improve routine analytical workflows while reducing environmental burden. Future research should focus on the development of sustainable sample preparation with improved sensitivity, broader applicability, and minimal ecological impacts. This comprehensive assessment supports the ongoing transformation of analytical chemistry towards more sustainable practices without compromising analytical reliability and efficacy. Full article

42 pages, 9085 KB  
Review
In2O3: An Oxide Semiconductor for Thin-Film Transistors, a Short Review
by Christophe Avis and Jin Jang
Molecules 2025, 30(24), 4762; https://doi.org/10.3390/molecules30244762 - 12 Dec 2025
Viewed by 759
Abstract
With the discovery of amorphous oxide semiconductors, a new era of electronics opened. Indium gallium zinc oxide (IGZO) overcame the problems of amorphous and poly-silicon by reaching mobilities of ~10 cm2/Vs and demonstrating thin-film transistors (TFTs) are easy to manufacture on transparent and flexible substrates. However, mobilities over 30 cm2/Vs have been difficult to reach and other materials have been introduced. Recently, polycrystalline In2O3 has demonstrated breakthroughs in the field. In2O3 TFTs have attracted attention because of their high mobility of over 100 cm2/Vs, which has been achieved multiple times, and because of their use in scaled devices with channel lengths down to 10 nm for high integration in back-end-of-the-line (BEOL) applications and others. The present review focuses first on the material properties with the understanding of the bandgap value, the importance of the position of the charge neutrality level (CNL), the doping effect of various atoms (Zr, Ge, Mo, Ti, Sn, or H) on the carrier concentration, the optical properties, the effective mass, and the mobility. We introduce the effects of the non-parabolicity of the conduction band and how to assess them. We also introduce ways to evaluate the CNL position (usually at ~EC + 0.4 eV). Then, we describe TFTs’ general properties and parameters, like the field effect mobility, the subthreshold swing, the measurements necessary to assess the TFT stability through positive and negative bias temperature stress, and the negative bias illumination stress (NBIS), to finally introduce In2O3 TFTs. Then, we will introduce vacuum and non-vacuum processes like spin-coating and liquid metal printing. We will introduce the various dopants and their applications, from mobility and crystal size improvements with H to NBIS improvements with lanthanides. We will also discuss the importance of device engineering, introducing how to choose the passivation layer, the source and drain, the gate insulator, the substrate, but also the possibility of advanced engineering by introducing the use of dual gate and 2 DEG devices on the mobility improvement. Finally, we will introduce the recent breakthroughs where In2O3 TFTs are integrated in neuromorphic applications and 3D integration. Full article
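The field-effect mobility and subthreshold swing mentioned above are the standard TFT figures of merit; for reference, their textbook definitions (not specific to this review) are:

```latex
% Linear-regime field-effect mobility and subthreshold swing (textbook definitions)
\mu_{FE} = \frac{L}{W\,C_{i}\,V_{DS}}\,\frac{\partial I_{D}}{\partial V_{GS}},
\qquad
SS = \left(\frac{\partial \log_{10} I_{D}}{\partial V_{GS}}\right)^{-1}
% W, L: channel width and length; C_i: gate-insulator capacitance per unit area
```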

21 pages, 542 KB  
Systematic Review
Application of Augmented Reality Technology as a Dietary Monitoring and Control Measure Among Adults: A Systematic Review
by Gabrielle Victoria Gonzalez, Bingjing Mao, Ruxin Wang, Wen Liu, Chen Wang and Tung Sung Tseng
Nutrients 2025, 17(24), 3893; https://doi.org/10.3390/nu17243893 - 12 Dec 2025
Viewed by 183
Abstract
Background/Objectives: Traditional dietary monitoring methods such as 24 h recalls rely on self-report, leading to recall bias and underreporting. Similarly, dietary control approaches, including portion control and calorie restriction, depend on user accuracy and consistency. Augmented reality (AR) offers a promising alternative for improving dietary monitoring and control by enhancing engagement, feedback accuracy, and user learning. This systematic review aimed to examine how AR technologies are implemented to support dietary monitoring and control and to evaluate their usability and effectiveness among adults. Methods: A systematic search of PubMed, CINAHL, and Embase identified studies published between 2000 and 2025 that evaluated augmented reality for dietary monitoring and control among adults. Eligible studies included peer-reviewed and gray literature in English. Data extraction focused on study design, AR system type, usability, and effectiveness outcomes. Risk of bias was assessed using the Cochrane RoB 2 tool for randomized controlled trials and ROBINS-I for non-randomized studies. Results: Thirteen studies met inclusion criteria. Since the evidence base was heterogeneous in design, outcomes, and measurement, findings were synthesized qualitatively rather than pooled. Most studies utilized smartphone-based AR systems for portion size estimation, nutrition education, and behavior modification. Usability and satisfaction varied by study: One study found that 80% of participants (N = 15) were satisfied or extremely satisfied with the AR tool. Another reported that 100% of users (N = 26) rated the app easy to use, and a separate study observed a 72.5% agreement rate on ease of use among participants (N = 40). Several studies also examined portion size estimation, with one reporting a 12.2% improvement in estimation accuracy and another showing −6% estimation, though a 12.7% overestimation in energy intake persisted. Additional outcomes related to behavior, dietary knowledge, and physiological or psychological effects were also identified across the review. Common limitations included difficulty aligning markers, overestimation of amorphous foods, and short intervention durations. Despite these promising findings, the existing evidence is limited by small sample sizes, heterogeneity in intervention and device design, short study durations, and variability in usability and accuracy measures. The limitations of this review warrant cautious interpretation of findings. Conclusions: AR technologies show promise for improving dietary monitoring and control by enhancing accuracy, engagement, and behavior change. Future research should focus on longitudinal designs, diverse populations, and integration with multimodal sensors and artificial intelligence. Full article
(This article belongs to the Section Nutrition Methodology & Assessment)

27 pages, 1423 KB  
Article
Integrating Fuzzy Delphi and Rough Set Analysis for ICH Festival Planning and Urban Place Branding
by Bei Yao Lin, Hongbo Zhao, Cheng Cheong Lei and Gwo-Hshiung Tzeng
Urban Sci. 2025, 9(12), 535; https://doi.org/10.3390/urbansci9120535 - 12 Dec 2025
Viewed by 221
Abstract
Folk festivals and other intangible cultural heritage have received widespread attention, and their socio-cultural value can be used to promote tourism, strengthen local identity, and build city brands. However, it remains unclear how these intangible cultural heritage festivals transform their multi-dimensional and multi-configuration material characteristics into economic benefits and image enhancement. This study proposes a practical decision-making framework aimed at understanding how different festival design and governance strategies can work synergistically under different cultural conditions. Based primarily on a literature review and expert questionnaire survey, this study identified six stable materialized practice modules: productization, spatialization, experientialization, digitalization, branding/communication, and co-creation governance. At the same time, this framework also incorporates two other conditional intervention properties: classicism and novelty. The interactions between these modules shape people’s understanding of intangible cultural heritage festivals. Subsequently, this study used a multimodal national dataset that included official statistics, industry reports, e-commerce and social media data, questionnaires, and expert ratings to construct module scores and cultural attributes for 167 festival case studies. Through rough set analysis (RSA), this study simplifies the attributes and extracts clear “if-then” rules, establishing a configurational causal relationship between module configuration and classic/novel conditions to form high economic benefits and enhance local image. The findings of this study reveal a robust core built around spatialization, digitalization, and co-creative governance, with brand promotion/communication yielding benefits depending on the specific context. This further confirms that classicism reinforces the legitimacy and effectiveness of rituals/spaces and governance pathways, while novelty amplifies the impact of digitalization and immersive interaction. In summary, this study constructs an integrated and easy-to-understand process that links indicators, weights, and rules, and provides operational support for screening schemes and resource allocation in festival event combinations and venue brand governance. Full article
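The extracted rules take a configurational "if-then" form. The sketch below shows one way such a rule could be represented and applied; the attribute names, levels, and example rule are purely illustrative and are not the study's actual rule base.

```python
# Hypothetical representation of a configurational "if-then" rule over
# discretized module scores (illustrative only, not the study's rules).
EXAMPLE_RULE = {
    "spatialization": "high",
    "digitalization": "high",
    "co_creation_governance": "high",
}

def rule_fires(case: dict, rule: dict) -> bool:
    """A rule fires when every conditional attribute in the rule matches
    the case's discretized module score."""
    return all(case.get(attr) == level for attr, level in rule.items())

festival_case = {"spatialization": "high", "digitalization": "high",
                 "co_creation_governance": "high", "cultural_attribute": "classic"}
if rule_fires(festival_case, EXAMPLE_RULE):
    predicted_outcome = "high economic benefit and enhanced place image"
```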

13 pages, 603 KB  
Systematic Review
IPOS-Dem Scale in the Assessment of Patients with Dementia in Palliative Care—Potential for Adaptation: A Systematic Review
by Fernanda Quartilho, Joana Brandão Silva, Daniela Cunha, Daniel Canelas, João Rocha Neves, José Paulo Andrade, Marília Dourado and Hugo Ribeiro
J. Dement. Alzheimer's Dis. 2025, 2(4), 47; https://doi.org/10.3390/jdad2040047 - 11 Dec 2025
Viewed by 194
Abstract
Background: Dementia is a chronic, multifactorial syndrome with a high incidence and prevalence worldwide. The clinical assessment of these patients is challenging, imposing several barriers related to the system, the healthcare professional and the patient. While numerous assessment tools exist for dementia, few are specifically validated or widely used in palliative care. This study evaluates the relevance of using the Integrated Palliative Outcome Scale for Dementia (IPOS-Dem) in Portugal. The primary objective is to synthesize evidence on the implementation and clinical performance of IPOS-Dem in people with dementia receiving palliative care—including feasibility, acceptability, validity, reliability, and clinical applicability—while the secondary objective is to assess the instrument’s relevance and potential for cultural/linguistic adaptation to context. Methods: A systematic review of the literature was carried out, with research in evidence-based medicine databases on the use of the Integrated Palliative Outcome Scale for Dementia (IPOS-Dem) in palliative care, using the terms “dementia”, “alzheimer”, “lewy body”, “cognitive impair”, “outcome”, “IPOS-Dem”, “patient outcome assessment”, “outcome assessment”, “scale”, “palliative care”, and “palliative outcome scale”. Results: The IPOS-Dem was considered to be a useful tool for monitoring patients with dementia while receiving palliative care, allowing for a comprehensive and systematic evaluation of symptoms, as well as involving family members in the care process. It facilitates the identification of previously unknown symptoms and issues, particularly emotional and social concerns. Its use led to an improvement in symptom control and greater family involvement in care. The reduction in missing response rates and the time required to complete the scale with repeated use indicated good adaptation to the scale’s implementation. Difficulties were reported in assessing patients with communication impairments. Some staff also highlighted the need for training in using the scale. The Swiss Easy-Read IPOS-Dem showed significant variation in scores between evaluators, which raises concerns about the reliability and consistency of the scale, indicating that the tool requires further validation. Digital models, although they may present some inconveniences, were suggested as a potential improvement in acceptability. Conclusions: Our review suggests that IPOS-Dem provides initial evidence of feasibility, acceptability, and potential clinical usefulness in dementia palliative care, making its implementation beneficial for the Portuguese population. Translation and adaptation to the Portuguese population and culture will be necessary, but the scale is promising, and we recommend its national use. Full article

17 pages, 1239 KB  
Article
Prescribed-Performance-Function-Based RISE Control for Electrohydraulic Servo Systems with Disturbance Compensation
by Guangda Liu and Junjie Mi
Mathematics 2025, 13(24), 3923; https://doi.org/10.3390/math13243923 - 8 Dec 2025
Viewed by 116
Abstract
Considering that the electrohydraulic servo system has extremely strong nonlinear characteristics, problems such as low initial tracking accuracy and large unmodeled dynamic errors are prominent, leading to easy degradation of control performance. To achieve high-precision position tracking control, this study proposes a robust integral of the sign of the error (RISE) control method with prescribed performance function (PPF) and dual extended state observers (DESOs). Combined with the system dynamic model, DESOs are designed to estimate matched and mismatched uncertainties, respectively. The transformed error signal is obtained based on the prescribed performance function (PPF), while restricting the convergence rate and range of the error. A RISE controller is designed using the backstepping method to suppress both matched and unmatched uncertainties and improve the system robustness. The Lyapunov stability theory proves that the system is semi-globally stable and all signals are bounded. Simulation results show that the proposed control strategy significantly improves the tracking accuracy and error convergence rate of the electrohydraulic servo system, fully verifying the effectiveness of the control strategy. Full article
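A commonly used construction of the prescribed performance function and the associated error transformation is sketched below; this is a generic form for illustration, as the paper's exact functions and parameter values are not given in the abstract.

```latex
% Generic prescribed performance function (PPF) and error transformation (illustrative)
\rho(t) = (\rho_0 - \rho_\infty)\,e^{-\ell t} + \rho_\infty, \qquad
-\underline{\delta}\,\rho(t) < e(t) < \overline{\delta}\,\rho(t),
\\[4pt]
e(t) = \rho(t)\,S(\varepsilon(t)), \qquad
S(\varepsilon) = \frac{\overline{\delta}\,e^{\varepsilon} - \underline{\delta}\,e^{-\varepsilon}}
                      {e^{\varepsilon} + e^{-\varepsilon}},
\qquad \varepsilon(t) = S^{-1}\!\left(\frac{e(t)}{\rho(t)}\right)
```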

24 pages, 7161 KB  
Article
Markerless AR Navigation for Smart Campuses: Lightweight Machine Learning for Infrastructure-Free Wayfinding
by Elohim Ramírez-Galván, Cesar Benavides-Alvarez, Carlos Avilés-Cruz, Arturo Zúñiga-López and José Félix Serrano-Talamantes
Electronics 2025, 14(24), 4834; https://doi.org/10.3390/electronics14244834 - 8 Dec 2025
Viewed by 334
Abstract
This paper presents a markerless augmented reality (AR) navigation system for guiding users across a university campus, independent of internet or wireless connectivity, integrating machine learning (ML) and deep learning techniques. The system employs computer vision to detect campus signage “Meeting Point” and “Directory”, and classifies them through a binary classifier (BC) and convolutional neural networks (CNNs). The BC distinguishes between the two types of signs using RGB values with algorithms such as Perceptron, Bayesian classification, and k-Nearest Neighbors (KNN), while the CNN identifies the specific sign ID to link it to a campus location. Navigation routes are generated with the Floyd–Warshall algorithm, which computes the shortest path between nodes on a digital campus map. Directional arrows are then overlaid in AR on the user’s device via ARCore, updated every 200 milliseconds using sensor data and direction vectors. The prototype, developed in Android Studio, achieved over 99.5% accuracy with CNNs and 100% accuracy with the BC, even when signs were worn or partially occluded. A usability study with 27 participants showed that 85.2% successfully reached their destinations, with more than half rating the system as easy or very easy to use. Users also expressed strong interest in extending the application to other environments, such as shopping malls or airports. Overall, the solution is lightweight, scalable, and sustainable, requiring no additional infrastructure beyond existing campus signage. Full article
(This article belongs to the Section Computer Science & Engineering)
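A minimal sketch of the Floyd–Warshall routing step described above, assuming a small undirected graph of campus waypoints; the node indices and edge weights are hypothetical, not the authors' map or code.

```python
import math

def floyd_warshall(n, edges):
    """All-pairs shortest paths over nodes 0..n-1.
    `edges` is a list of (u, v, weight) tuples for walkable segments."""
    dist = [[math.inf] * n for _ in range(n)]
    nxt = [[None] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
    for u, v, w in edges:
        dist[u][v] = dist[v][u] = w            # undirected walkway
        nxt[u][v], nxt[v][u] = v, u
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def route(nxt, src, dst):
    """Node sequence to follow; AR arrows would point toward the next node."""
    if nxt[src][dst] is None:
        return []
    path = [src]
    while src != dst:
        src = nxt[src][dst]
        path.append(src)
    return path

# Example: 4 waypoints, distances in meters (hypothetical)
dist, nxt = floyd_warshall(4, [(0, 1, 40.0), (1, 2, 25.0), (2, 3, 60.0), (0, 3, 150.0)])
print(route(nxt, 0, 3))  # -> [0, 1, 2, 3]
```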

18 pages, 1347 KB  
Data Descriptor
China’s 15-Year Mine Accident Report Dataset (2010–2025): Construction and Analysis
by Maoquan Wan, Hao Li, Hao Wang, Hanjun Gong and Jie Hou
Data 2025, 10(12), 202; https://doi.org/10.3390/data10120202 - 4 Dec 2025
Viewed by 683
Abstract
Mine accidents pose severe threats to worker safety and sustainable mining development in China. However, existing mine accident data in China are often scattered, unstructured, and lack systematic integration, which limits their application in safety research and practice. This study constructed a standardized structured dataset using 532 mine accident reports from official channels covering the period 2010–2025. The dataset went through four stages: data collection, standardized cleaning, structured annotation, and quality validation. It is stored in JSON Lines (JSONL) format for easy reuse. The dataset covers 27 provinces/autonomous regions/municipalities in China. Among accident levels, general accidents account for 65.6%; among accident types, roof accidents account for 20.3%. Accidents are geographically concentrated, with 11.7%, 8.3%, and 7.7% occurring in Shanxi, Gansu, and Inner Mongolia, respectively. Official data have shown an annual average decrease of 9.7% in mine accidents from 2018 to 2022, reflecting improved safety governance. This dataset addresses the gap of a full-element structured mine accident database in China, providing high-quality data for accident causation modeling, regional risk early warning, and safety policy evaluation. It also supports mine enterprises in targeted risk prevention and regulatory authorities in precise regulatory enforcement. Full article
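Because the dataset is stored as JSON Lines, records can be streamed one per line. The field names below ("accident_type", "province") and the filename are hypothetical and should be checked against the published schema.

```python
import json
from collections import Counter

def load_jsonl(path):
    """Yield one dict per non-empty line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

records = list(load_jsonl("mine_accidents_2010_2025.jsonl"))  # hypothetical filename
by_type = Counter(r.get("accident_type") for r in records)
by_province = Counter(r.get("province") for r in records)
print(by_type.most_common(5))
print(by_province.most_common(3))
```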

44 pages, 10088 KB  
Article
NAIA: A Robust Artificial Intelligence Framework for Multi-Role Virtual Academic Assistance
by Adrián F. Pabón M., Kenneth J. Barrios Q., Samuel D. Solano C. and Christian G. Quintero M.
Systems 2025, 13(12), 1091; https://doi.org/10.3390/systems13121091 - 3 Dec 2025
Viewed by 532
Abstract
Virtual assistants in academic environments often lack comprehensive multimodal integration and specialized role-based architecture. This paper presents NAIA (Nimble Artificial Intelligence Assistant), a robust artificial intelligence framework designed for multi-role virtual academic assistance through a modular monolithic approach. The system integrates Large Language Models (LLMs), Computer Vision, voice processing, and animated digital avatars within five specialized roles: researcher, receptionist, personal skills trainer, personal assistant, and university guide. NAIA’s architecture implements simultaneous voice, vision, and text processing through a three-model LLM system for optimized response quality, Redis-based conversation state management for context-aware interactions, and strategic third-party service integration with OpenAI, Backblaze B2, and SerpAPI. The framework seamlessly connects with the institutional ecosystem through Microsoft Graph API integration, while the frontend delivers immersive experiences via 3D avatar rendering using Ready Player Me and Mixamo. System effectiveness is evaluated through a comprehensive mixed-methods approach involving 30 participants from Universidad del Norte, employing Technology Acceptance Model (TAM2/TAM3) constructs and System Usability Scale (SUS) assessments. Results demonstrate strong user acceptance: 93.3% consider NAIA useful overall, 93.3% find it easy to use and learn, 100% intend to continue using and recommend it, and 90% report confident independent operation. Qualitative analysis reveals high satisfaction with role specialization, intuitive interface design, and institutional integration. The comparative analysis positions NAIA’s distinctive contributions through its synthesis of institutional knowledge integration with enhanced multimodal capabilities and specialized role architecture, establishing a comprehensive framework for intelligent human-AI interaction in modern educational environments. Full article
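A minimal sketch of the Redis-backed conversation-state pattern described above, assuming the redis-py client; the key scheme, TTL, and payload fields are illustrative, not NAIA's actual schema.

```python
import json
import redis  # redis-py client

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def append_turn(session_id: str, role: str, text: str, ttl_s: int = 3600):
    """Append one conversation turn to a per-session list and refresh its TTL
    (key scheme and fields are illustrative)."""
    key = f"conv:{session_id}"
    r.rpush(key, json.dumps({"role": role, "text": text}))
    r.expire(key, ttl_s)

def load_context(session_id: str, max_turns: int = 20):
    """Fetch the most recent turns to build context for the next LLM call."""
    key = f"conv:{session_id}"
    return [json.loads(t) for t in r.lrange(key, -max_turns, -1)]
```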

20 pages, 1272 KB  
Article
Impact of Scaling Classic Component on Performance of Hybrid Multi-Backbone Quantum–Classic Neural Networks for Medical Applications
by Arsenii Khmelnytskyi, Yuri Gordienko and Sergii Stirenko
Computation 2025, 13(12), 278; https://doi.org/10.3390/computation13120278 - 1 Dec 2025
Viewed by 261
Abstract
Purpose: While hybrid quantum–classical neural networks (HNNs) are a promising avenue for quantum advantage, the critical influence of the classical backbone architecture on their performance remains poorly understood. This study investigates the role of lightweight convolutional neural network architectures, focusing on LCNet, in determining the stability, generalization, and effectiveness of hybrid models augmented with quantum layers for medical applications. The objective is to clarify the architectural compatibility between quantum and classical components and provide guidelines for backbone selection in hybrid designs. Methods: We constructed HNNs by integrating a four-qubit quantum circuit (with trainable rotations) into scaled versions of LCNet (050, 075, 100, 150, 200). These models were rigorously evaluated on CIFAR-10 and MedMNIST using stratified 5-fold cross-validation, assessing accuracy, AUC, and robustness metrics. Performance was assessed with accuracy, macro- and micro-averaged area under the ROC curve (AUC), per-class accuracy, and out-of-fold (OoF) predictions to ensure unbiased generalization. In addition, training dynamics, confusion matrices, and performance stability across folds were analyzed to capture both predictive accuracy and robustness. Results: The experiments revealed a strong dependence of hybrid network performance on both backbone architecture and model scale. Across all tests, LCNet-based hybrids achieved the most consistent benefits, particularly at compact and medium configurations. From LCNet050 to LCNet100, hybrid models maintained high macro-AUC values exceeding 0.95 and delivered higher mean accuracies with lower variance across folds, confirming enhanced stability and generalization through quantum integration. On the DermaMNIST dataset, these hybrids achieved accuracy gains of up to seven percentage points and improved AUC by more than three points, demonstrating their robustness in imbalanced medical settings. However, as backbone complexity increased (LCNet150 and LCNet200), the classical architectures regained superiority, indicating that the advantages of quantum layers diminish with scale. The most consistent gains were observed at smaller and medium LCNet scales, where hybridization improved accuracy and stability across folds. This divergence indicates that hybrid networks do not necessarily follow the “bigger is better” paradigm of classical deep learning. Per-class analysis further showed that hybrids improved recognition in challenging categories, narrowing the gap between easy and difficult classes. Conclusions: The study demonstrates that the performance and stability of hybrid quantum–classical neural networks are fundamentally determined by the characteristics of their classical backbones. Across extensive experiments on CIFAR-10 and DermaMNIST, LCNet-based hybrids consistently outperformed or matched their classical counterparts at smaller and medium scales, achieving higher accuracy and AUC along with notably reduced variability across folds. These improvements highlight the role of quantum layers as implicit regularizers that enhance learning stability and generalization—particularly in data-limited or imbalanced medical settings. However, the observed benefits diminished with increasing backbone complexity, as larger classical models regained superiority in both accuracy and convergence reliability. This indicates that hybrid architectures do not follow the conventional “larger-is-better” paradigm of classical deep learning.
Overall, the results establish that architectural compatibility and model scale are decisive factors for effective quantum–classical integration. Lightweight backbones such as LCNet offer a robust foundation for realizing the advantages of hybridization in practical, resource-constrained medical applications, paving the way for future studies on scalable, hardware-efficient, and clinically reliable hybrid neural networks. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
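A minimal sketch of the hybrid pattern described above, assuming PennyLane with the PyTorch interface. The four-qubit circuit is a generic angle-embedding plus entangling-layer template with trainable rotations, not the authors' exact circuit, and `backbone` stands in for a scaled LCNet feature extractor.

```python
import torch.nn as nn
import pennylane as qml

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))             # encode 4 features
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))  # trainable rotations
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

quantum_layer = qml.qnn.TorchLayer(circuit, {"weights": (n_layers, n_qubits, 3)})

def hybrid_model(backbone: nn.Module, feat_dim: int, n_classes: int) -> nn.Module:
    """Classical backbone -> 4-dim projection -> quantum layer -> classifier head."""
    return nn.Sequential(
        backbone,
        nn.Linear(feat_dim, n_qubits),
        nn.Tanh(),                  # keep the embedded angles bounded
        quantum_layer,
        nn.Linear(n_qubits, n_classes),
    )
```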
