Search Results (2,156)

Search Parameters:
Keywords = feature-based threshold

40 pages, 4527 KB  
Article
Automatic Scoring of Laboratory Reports Using Multi-Dimensional Feature Engineering and Ensemble Learning with Dynamic Threshold Control
by Chang Wang and Jingzhuo Shi
Appl. Sci. 2026, 16(8), 3649; https://doi.org/10.3390/app16083649 - 8 Apr 2026
Abstract
In the field of engineering, the advancement of automated scoring systems for laboratory reports has been significantly hampered by three persistent challenges: scarcity of high-quality annotated data, high domain-specific complexity, and insufficient model interpretability. To address these limitations, this study proposes an AdaBoost regression model based on multi-level feature engineering and threshold control, denoted as MFTC-ABR. This method constructs a multi-dimensional feature set using a lightweight neural network, which evaluates laboratory reports across four core dimensions: comprehension of experimental principles, completion of experimental procedures, depth of result analysis, and plagiarism detection. At the scoring algorithm level, a dynamic threshold adjustment mechanism is integrated into the AdaBoostReg ensemble learning framework. By redesigning the sample weight update rule, the prediction errors of samples are divided into three intervals: the acceptable region, the stable learning range, and the focus range. Accordingly, a differentiated weight update strategy is implemented, and a history-aware mechanism is introduced to further regulate the attention allocated to individual samples. Finally, experimental results on the power electronics laboratory report dataset show that the MFTC-ABR model achieves a mean absolute error (MAE) of 3.09 and a scoring consistency rate of 82% within a five-point error tolerance. These findings validate the effectiveness and practicability of the proposed method for automatic assessment in specialized domains with limited data availability.
(This article belongs to the Section Computing and Artificial Intelligence)
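The three-interval weight update described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the interval boundaries (`t_low`, `t_high`) and scaling factors are assumed values, and the history-aware mechanism is omitted.

```python
import numpy as np

def update_weights(weights, errors, t_low=0.05, t_high=0.2, damp=0.5, boost=1.5):
    """One round of a three-interval weight update: samples in the
    acceptable region are down-weighted, the stable learning range is
    left unchanged, and the focus range is up-weighted. All thresholds
    and factors here are illustrative assumptions."""
    w = weights.copy()
    w[errors < t_low] *= damp    # acceptable region: reduce attention
    w[errors > t_high] *= boost  # focus range: increase attention
    return w / w.sum()           # renormalize to a distribution

weights = np.full(6, 1 / 6)
errors = np.array([0.01, 0.03, 0.10, 0.15, 0.30, 0.50])
new_w = update_weights(weights, errors)
```

In a boosting loop this update would run once per weak learner, so high-error samples progressively receive more attention across rounds.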

24 pages, 4042 KB  
Article
Memory Cueing and Augmented Sensory Feedback in Virtual Reality as an Assistive Technology for Enhancing Hand Motor Performance
by Zachary Marvin, Sophie Dewil, Yu Shi, Noam Y. Harel and Raviraj Nataraj
Technologies 2026, 14(4), 217; https://doi.org/10.3390/technologies14040217 - 8 Apr 2026
Abstract
Neurological injuries and disorders affecting hand motor control can severely impair the ability to perform activities of daily living and substantially reduce quality of life. Technologies such as virtual reality (VR) are increasingly used to address fundamental challenges in therapy, including motivation and engagement; further, programmable features of digital interfaces offer additional opportunities to personalize and optimize motor training. In this proof-of-concept study, we developed and evaluated a novel VR-based training framework to support improved dexterity and hand function using physiological (sensory-driven) and cognitive (memory) cues designed to promote greater task-relevant neural engagement. The proposed approach leverages the integration of augmented sensory feedback (ASF) with memory-anchored cues for motor learning of target hand gestures. Using a within-subjects design, thirteen neurotypical adults completed four training conditions: (1) control (baseline gesture-matching in VR), (2) visual ASF (enhanced visualization and feedback of gesture accuracy), (3) memory-anchored cues (associating gestures with semantically meaningful entities, loosely analogous to American Sign Language), and (4) hybrid multimodal (visual ASF + memory-anchored cues). Training with the hybrid condition produced the fastest skill acquisition (9.3 trials to reach an 80% accuracy threshold) and the steepest initial learning slope (1.86 ± 0.12%/trial), with all conditions differing significantly in initial slope (all p < 0.002). Post-training assessment showed that the hybrid condition achieved the highest gesture accuracy (95.2%), greatest normalized post-training accuracy gain (14.3% above baseline), fastest execution time to target gesture (1.14 s), and lowest variability in gestural kinematics (SD = 3.9%). The ASF and memory-anchored cue conditions each independently outperformed the control condition on gesture accuracy (both p ≤ 0.002), with omnibus ANOVAs indicating significant condition effects across metrics. Together, these findings suggest that pairing ASF cues with memory-based cognitive scaffolding can yield additive benefits for motor skill acquisition and stability. Pending validation in clinical populations, such approaches may inform the design of VR-based motor training frameworks for rehabilitation.

28 pages, 658 KB  
Article
Dual-Branch Deep Remote Sensing for Growth Anomaly and Risk Perception in Smart Horticultural Systems
by Yan Bai, Ceteng Fu, Shen Liu, Xichen Wang, Jibo Fan, Yuecheng Li and Yihong Song
Horticulturae 2026, 12(4), 461; https://doi.org/10.3390/horticulturae12040461 - 8 Apr 2026
Abstract
In the context of the rapid development of smart horticulture, a deep remote sensing-based dual detection method for horticultural crop growth anomalies and safety risks was proposed to address the limitations of existing remote sensing monitoring approaches. These conventional methods, which predominantly focused on growth vigor assessment or single-task anomaly detection, had difficulty distinguishing anomalies from actual production risks and exhibited insufficient sensitivity to weak anomalies and complex temporal disturbances. Within a unified framework, a growth state modeling branch and an anomaly perception branch were constructed, enabling the joint modeling of normal growth trajectories and anomalous deviation features. By further introducing a risk joint discrimination mechanism, an integrated analysis pipeline from anomaly identification to risk assessment was achieved. Multi-temporal remote sensing features were used as inputs, through which normal crop growth patterns were characterized via trend perception, texture modeling, and temporal aggregation, while sensitivity to local disturbances and weak anomaly signals was enhanced by anomaly embeddings and energy representations. Systematic experiments conducted on multi-regional and multi-crop horticultural remote sensing datasets demonstrated that the proposed method significantly outperformed comparative approaches, including traditional threshold-based methods, support vector machines, random forests, autoencoders, ConvLSTM, and temporal transformer models. In the dual task of horticultural crop growth anomaly detection and safety risk identification, an accuracy of approximately 0.91 and an F1 score of 0.88 were achieved, indicating higher anomaly recognition accuracy and more stable risk discrimination capability. Further anomaly-type awareness experiments showed that consistent performance was maintained across diverse real-world production scenarios, including climate stress, disease-induced anomalies, and management errors.
(This article belongs to the Special Issue New Trends in Smart Horticulture)

16 pages, 1033 KB  
Article
Modified Shamir Threshold Scheme for Secure Storage of Biometric Data
by Saule Nyssanbayeva, Nursulu Kapalova and Saltanat Beisenova
Computers 2026, 15(4), 228; https://doi.org/10.3390/computers15040228 - 7 Apr 2026
Abstract
The security of biometric data is a critical challenge in modern information security due to their uniqueness and non-revocability. Compromise of biometric characteristics leads to irreversible consequences; therefore, storing or transmitting them in plaintext is unacceptable. This paper addresses the confidentiality and integrity of fingerprint data using cryptographic protection methods. Considering the specific nature of biometrics, fingerprint features are used only to generate a cryptographic secret rather than being stored directly. To protect the derived secret, a modified threshold secret-sharing scheme based on non-positional polynomial notation and the Chinese Remainder Theorem is proposed. The method generates a cryptographic secret from fingerprint minutiae described by spatial coordinates and ridge orientation. Concatenating minutiae coordinates and converting them into binary form produces a unique value deterministically linked to a specific user. Compared to the classical Shamir scheme, the modified scheme reduces the computational complexity of secret reconstruction from O(n log²n) to O(k log k), decreases data storage requirements by 30–40% through compact polynomial remainders, and increases successful secret reconstruction by 12–15% in the presence of noise in biometric samples. The results show that the proposed algorithm can be effectively applied in biometric authentication systems to protect personal data in distributed environments. Security analysis confirms resistance to major attack classes and demonstrates practical applicability in real-world systems.
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
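For context, the classical Shamir (k, n) scheme that this paper modifies fits in a few lines over a prime field. The prime and secret below are illustrative assumptions, `random` stands in for a cryptographically strong source (a real system would use the `secrets` module), and the paper's CRT/non-positional variant replaces the polynomial machinery shown here.

```python
import random

P = 2**61 - 1  # Mersenne prime defining the field (illustrative choice)

def share(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it:
    evaluate a random degree-(k-1) polynomial with constant term `secret`
    at x = 1..n. Classical Shamir, not the paper's CRT modification."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = share(123456789, k=3, n=5)
recovered = reconstruct(shares[:3])
```

Any three of the five shares recover the secret; the naive Lagrange step above is the O(n log²n)-class reconstruction cost that the proposed scheme reduces.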

18 pages, 2383 KB  
Article
Position-Independent Lactate Kinetic Phenotypes in Professional Soccer Players: A Machine Learning Approach for Maximal Running Velocity Prediction
by Erkan Tortu, İzzet İnce, Salih Çabuk, Süleyman Ulupınar, Cebrail Gençoğlu, Serhat Özbay and Kaan Kaya
Sensors 2026, 26(7), 2252; https://doi.org/10.3390/s26072252 - 6 Apr 2026
Abstract
This study aimed to identify distinct lactate kinetic phenotypes in professional soccer players using unsupervised machine learning and determine their relationship with maximal running velocity (Vmax) through explainable artificial intelligence methods. A total of 361 professional male soccer players from the First Division participated in the study. Incremental treadmill tests measured lactate concentrations at five standardized velocities, alongside VO2max, Vmax, lactate threshold (LT), and anaerobic threshold (AT) parameters. Three distinct lactate kinetic phenotypes emerged: Economical Aerobic (n = 216), Balanced Metabolic (n = 19), and High Producer (n = 126). The Economical Aerobic phenotype demonstrated superior performance metrics compared to High Producer (Vmax: 15.85 ± 0.85 km/h; VO2max: 56.20 ± 4.26 mL/kg/min; p < 0.001). Initial multicollinearity assessment revealed notable collinearity among all 10 candidate predictors (VIF > 10; maximum VIF = 10.75 for VAT), necessitating rigorous feature selection. Ridge regression with 4 selected features (VAT, VO2max, 9.5 km/h lactate, 14 km/h lactate) achieved moderate but statistically significant predictive performance: 10-fold cross-validation R² = 0.392 ± 0.147 (permutation test p = 0.001). Standardized coefficients identified VAT (β = 0.399) as the dominant predictor, followed by VO2max (β = 0.253), 9.5 km/h lactate (β = 0.107), and 14 km/h lactate (β = −0.066). Lactate kinetic phenotyping reveals position-independent metabolic profiles with potentially meaningful performance associations in professional soccer. The Economical Aerobic phenotype demonstrates performance advantages associated with superior anaerobic threshold capacity. These exploratory findings suggest that individualized training strategies based on metabolic phenotype rather than playing position alone warrant further investigation, with potential applications for talent identification, training periodization, and return-to-play protocols pending prospective validation.
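The ridge regression step can be illustrated on synthetic data. This is a sketch under assumed values: the four columns merely stand in for the selected predictors (VAT, VO2max, and the two lactate concentrations), and the coefficients echo the reported β magnitudes rather than the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 "players", 4 standardized predictors, with
# true coefficients loosely mimicking the reported betas (illustrative).
X = rng.normal(size=(200, 4))
y = X @ np.array([0.4, 0.25, 0.1, -0.07]) + rng.normal(scale=0.8, size=200)

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge estimate: beta = (X'X + alpha*I)^-1 X'y.
    The L2 penalty (alpha) shrinks coefficients, which is what makes
    ridge usable despite the multicollinearity noted in the abstract."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

beta = ridge_fit(X, y)
r2 = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
```

In practice the in-sample R² above would be replaced by the cross-validated estimate the authors report, which guards against the optimism of fitting and scoring on the same data.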

30 pages, 4178 KB  
Article
An Intelligent Evaluation Algorithm for Pilot Flight Training Ability Based on Multimodal Information Fusion
by Heming Zhang, Changyuan Wang and Pengbo Wang
Sensors 2026, 26(7), 2245; https://doi.org/10.3390/s26072245 - 4 Apr 2026
Abstract
Intelligent-assisted assessment of pilot flight training ability is a method of automating the evaluation of pilots’ flight skills using artificial intelligence. Currently, using AI to assist or replace human instructors in flight skill assessment has become a mainstream research direction in the field of intelligent aviation. Existing flight skill assessment methods suffer from limitations in data types and insufficient assessment accuracy. To address these issues, we evaluate and predict pilot performance in simulated flight missions based on physiological signals. Following the “OODA loop” theory, we established a multimodal dataset including pilot eye movement, electroencephalogram (EEG), electrocardiogram (ECG), electrodermal signaling (EDS), heart rate, respiration, and flight attitude data. This dataset records changes in physiological rhythms and flight behaviors during pilots’ flight training at different difficulty levels. To enhance the signal-to-noise ratio, we propose an enhanced wavelet fuzzy thresholding denoising algorithm utilizing LSTM optimization. We address the problem of isolated features across different time frames in multimodal data modeling by introducing a multi-feature fusion algorithm based on STFT. Furthermore, by combining a high-efficiency sub-attention mechanism with a Transformer network, we construct a multi-classification network for intelligent-assisted assessment of pilot flight training ability, further improving the output accuracy of each category. Experiments show that our designed algorithm can achieve a classification accuracy of up to 85% on the dataset (5-fold cross-validation), which meets the requirements for auxiliary assessment of flight capabilities.
(This article belongs to the Section Intelligent Sensors)

6 pages, 685 KB  
Proceeding Paper
Contactless Footprint Acquisition and Automated Identification Using Convolutional Neural Network
by Angelica A. Claros, Elmo Joaquin D. Estacion and Jocelyn F. Villaverde
Eng. Proc. 2026, 134(1), 30; https://doi.org/10.3390/engproc2026134030 - 3 Apr 2026
Abstract
Biometric systems are widely used in security and forensic applications. Conventionally, contact-based footprint scanners require physical contact, which presents significant limitations. These devices raise hygiene concerns and are impractical in field identification conditions, such as forensic investigations or disaster victim identification, where quick and non-invasive methods are essential. To address these challenges, a contactless footprint acquisition and identification system was developed using image processing techniques and a Convolutional Neural Network (CNN) based on the Visual Geometry Group–16 layer architecture. The system employs a Raspberry Pi 4, a Logitech C922 camera, and a ring light to capture footprints without direct surface contact. Captured images are processed with Contrast Limited Adaptive Histogram Equalization (CLAHE) to improve contrast and mean thresholding to generate binary images for clearer feature extraction. System performance was evaluated using a multiclass confusion matrix. The CNN correctly classified 158 of 160 test images, achieving an accuracy of 98.75%. This result demonstrates higher accuracy than earlier studies that used older CNN models, such as Alex Krizhevsky’s Network and LeCun’s Network-5, which performed with fewer subjects and lower accuracy rates. The developed system shows potential for biometric security, forensic investigations, and disaster response, where contactless and reliable identification is required. Future research can expand the dataset with more diverse footprints, test performance under varied conditions, and extend the approach to other contactless biometrics such as palmprints or ears.
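The mean-thresholding step of the pipeline reduces to a one-liner. This sketch shows only that binarization stage on a tiny assumed array; the CLAHE preprocessing and the CNN classifier are omitted.

```python
import numpy as np

def mean_threshold(img):
    """Binarize a grayscale image at its global mean intensity:
    pixels brighter than the mean become 1, the rest 0. This is the
    'mean thresholding' stage; CLAHE would run before it."""
    return (img > img.mean()).astype(np.uint8)

# Tiny illustrative image (real inputs would be camera frames).
img = np.array([[10, 200], [30, 220]], dtype=np.uint8)
binary = mean_threshold(img)
```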

41 pages, 35277 KB  
Article
A Multi-Strategy Improved Seagull Optimization Algorithm for Global Optimization and Artistic Image Segmentation
by Yangyang Jiang
Biomimetics 2026, 11(4), 247; https://doi.org/10.3390/biomimetics11040247 - 3 Apr 2026
Abstract
Multilevel threshold image segmentation is a key task in image processing, yet it faces challenges such as low search efficiency in high-dimensional spaces, difficulty in balancing segmentation accuracy and stability, and insufficient adaptability to complex scenes. Existing solutions mainly include traditional thresholding methods and metaheuristic optimization-based schemes, but they still face limitations in high-dimensional and complex segmentation tasks. The standard Seagull Optimization Algorithm (SOA) suffers from shortcomings including a single exploration mechanism, weak local exploitation capability, and a tendency for population diversity to deteriorate, making it difficult to meet the demands of high-dimensional optimization. To address these issues, this paper proposes a multi-strategy fused improved Seagull Optimization Algorithm (MFISOA), which integrates three strategies: adaptive cooperative foraging, differential evolution-driven exploitation, and centroid opposition-based boundary control. These strategies jointly construct a collaborative optimization framework with dynamic resource allocation, fine local search, and population diversity maintenance, thereby improving global exploration efficiency, local exploitation accuracy, and population stability. To evaluate the optimization performance of MFISOA, numerical simulation experiments were conducted on the CEC2017 and CEC2022 benchmark test suites, and comparisons were made with nine other mainstream advanced algorithms. The results show that MFISOA outperforms the competing algorithms in terms of optimization accuracy, convergence speed, and operational stability. Its superiority is further verified by the Wilcoxon rank-sum test and the Friedman test, with statistical significance (p < 0.05). In the multilevel threshold image segmentation task, using the Otsu criterion as the objective function, MFISOA was tested on nine benchmark images under 4-, 6-, 8-, and 10-threshold segmentation scenarios. The results indicate that MFISOA achieves better performance on metrics such as Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Feature Similarity Index (FSIM), enabling more accurate characterization of image grayscale distribution features and producing higher-quality segmentation results. This study provides an efficient and reliable approach for numerical optimization and multilevel threshold image segmentation.

20 pages, 1092 KB  
Article
Predictive Analysis of Drug-Resistant Tuberculosis: Integrating Molecular Markers, Clinical Governance, and Community-Engaged Education in Rural South Africa
by Siphosihle Conham, Ncomeka Sineke, Ntandazo Dlatu, Lindiwe Modest Faye, Mojisola Clara Hosu and Teke Apalata
Diseases 2026, 14(4), 132; https://doi.org/10.3390/diseases14040132 - 3 Apr 2026
Abstract
Background: Drug-resistant tuberculosis remains a major challenge in resource-limited settings, particularly in rural regions of the Eastern Cape Province, where limited laboratory infrastructure, constrained access to advanced molecular diagnostics, shortages of specialized healthcare personnel, and prolonged diagnostic turnaround times can delay appropriate treatment initiation. This study examined whether routinely detectable genomic resistance markers could be integrated with parsimonious machine learning approaches to support early risk stratification for isoniazid (INH) and/or rifampicin (RIF) resistance and multidrug-resistant tuberculosis (MDR-TB). Methods: We conducted a retrospective analysis of clinical, demographic, and genomic data from 207 Mycobacterium tuberculosis isolates representing 207 unique patients. Resistance was classified as INH and/or RIF resistance or MDR-TB (concurrent resistance to both drugs). Predictors included age, sex, and canonical resistance-associated mutations (katG S315T, inhA −15C>T, and rpoB codon substitutions). Logistic regression was used to estimate adjusted odds ratios (aORs), while Random Forest models were applied to assess non-linear feature importance. Internal validation was performed using 10-fold cross-validation. A systems network analysis mapped the integration of model-derived risk bands into Clinical Governance structures and Community-Engaged Education pathways, including interventions delivered by Community Health Workers (CHWs). Results: INH and/or RIF resistance was identified in 58.9% of isolates, with 21.7% classified as MDR-TB. The most frequently detected mutations were katG S315T (29.0%) and rpoB S450L (26.6%). Logistic regression identified rpoB S450L (aOR 4.20; 95% CI: 2.10–8.45) and katG S315T (aOR 2.85; 95% CI: 1.40–5.80) as the strongest independent predictors, while age and sex were not statistically significant. Models demonstrated strong internal discrimination (AUCs of 0.96 for INH and/or RIF resistance and 0.99 for MDR-TB). Risk stratification categorized 18% of patients as high risk. Scenario-based modelling suggested that prioritizing high-risk patients for reflex Line Probe Assay testing could reduce the median time to appropriate treatment from 14 to 3 days and may reduce progression from isoniazid-resistant TB to MDR-TB under specified operational assumptions. Conclusions: Mutation-informed predictive modelling demonstrates strong internally validated discrimination and provides a structured framework for risk-stratified intervention. Integrating probability-based risk thresholds within Clinical Governance systems and community-level support structures, including CHW-led adherence and education strategies, may support earlier treatment optimization in high-burden rural settings. External validation and prospective implementation studies are required before broader programmatic adoption.

19 pages, 4570 KB  
Article
Adaptive Deletion of Gaussian Ellipsoids in 3D Gaussian Splatting
by Fei Zhang, Yinghui Wang, Bo Yi and Jiaxin Ma
Mathematics 2026, 14(7), 1197; https://doi.org/10.3390/math14071197 - 3 Apr 2026
Abstract
As a leading method for Novel View Synthesis (NVS), 3D Gaussian Splatting (3DGS) faces limitations. Fixed thresholds governing Gaussian scale and opacity lead to over-reconstruction or under-reconstruction, while the linear penalty used for handling outliers during optimization tends to introduce artifacts. Therefore, we propose Adaptive 3DGS featuring a dynamic deletion mechanism. Specifically, our method calculates coverage for each Gaussian based on its scale during removal. Gaussians with high coverage face stricter scale thresholds to reduce over-reconstruction, while those with lower coverage receive lenient thresholds to preserve details. Simultaneously, transparency-based contribution assessment is applied. Gaussians with low contribution meet stricter transparency thresholds to combat over-reconstruction, while high-contribution ones get lenient thresholds to mitigate under-reconstruction. During optimization, introducing Huber loss promotes quadratic growth for small errors, reducing smoothing to alleviate artifacts and better preserve details. Evaluation on standard datasets shows our method improves peak signal-to-noise ratio (PSNR) by 0.3 dB over 3DGS and 0.5 dB over MS-3DGS at 4× resolution, and it achieves a 0.1 dB gain over Mip-Splatting, confirming its effectiveness and robustness.
(This article belongs to the Topic Intelligent Image Processing Technology)
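The Huber loss mentioned above has a standard definition: quadratic near zero, linear in the tails, so small residuals are penalized smoothly while outliers do not dominate the gradient. The `delta` value below is an illustrative choice, not one taken from the paper.

```python
import numpy as np

def huber(residual, delta=1.0):
    """Huber loss: 0.5*r^2 for |r| <= delta, delta*(|r| - 0.5*delta)
    otherwise. The quadratic region gives gentle gradients for small
    errors; the linear region caps the influence of outliers."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

losses = huber(np.array([0.5, 2.0]))
```

The two branches join with matching value and slope at |r| = delta, which is what keeps optimization stable when residuals cross the boundary.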

26 pages, 12902 KB  
Article
Soft Threshold Denoising-Based Environmental Adaptive UAV Signal Modulation Recognition for Small-Sample Scenarios
by Fang Jin, Yang Shao, Yunhong He, Zhihao Ye, Fangmin He, Zhipeng Lin and Han Xiao
Drones 2026, 10(4), 257; https://doi.org/10.3390/drones10040257 - 3 Apr 2026
Abstract
As a key technology for wireless signal identification, modulation recognition plays an important role in the fields of unmanned aerial vehicle (UAV) communications, low-altitude spectrum management, etc. However, the accuracy of modulation recognition often cannot be guaranteed in scenarios with serious noise interference when a few samples are available. In this paper, we propose an intelligent modulation recognition method for UAV signals based on small-sample augmentation and soft threshold denoising. We first propose a new dual-driven dataset expansion method by combining the UAV air–ground channel propagation model with the received data samples. Then, we construct a background learning-based long short-term memory (BL-LSTM) model to extract the environmental background features embedded in the UAV signal, including Line-of-Sight (LoS) state, multi-scale fading parameters and Doppler shift characteristics. We integrate environmental background information into the data training model and optimize the authenticity of data distribution. As a result, the model adaptability can be enhanced. Finally, we construct a deep residual shrinkage network based on the soft threshold function (STF-DRSN). By leveraging the capability of the soft threshold that resists noise interference, we integrate it into each residual block of the deep residual shrinkage network. Simulation results show that compared with the state of the art, our method can improve the modulation recognition accuracy of UAV signals in small-sample scenarios.
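The soft threshold function at the core of the STF-DRSN is the standard shrinkage operator. The `tau` value below is an arbitrary illustrative constant; in a deep residual shrinkage network the threshold is typically produced adaptively by a small attention sub-network rather than fixed.

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft thresholding: shrink every value toward zero by tau and
    zero out anything whose magnitude falls below tau, suppressing
    small noise-like activations while keeping strong features."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

out = soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), tau=0.5)
```

Because the operator is continuous and piecewise linear, it can sit inside a residual block and still backpropagate cleanly, which is what makes it attractive for learned denoising.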

15 pages, 664 KB  
Article
Longitudinal Evaluation of Neurological and Sensory Changes in Gaucher Disease: A Prospective Observational Cohort Study (SENOPRO)
by Emanuele Cerulli Irelli, Adolfo Mazzeo, Nicoletta Fallarino, Francesca Caramia, Gianmarco Tessari, Enza Morgillo, Carlo Di Bonaventura, Rosaria Turchetta, Giovanna Palumbo, Maria Giulia Tullo, Laura Mariani, Marcella Nebbioso, Patrizia Mancini, Cecilia Guariglia and Fiorina Giona
Med. Sci. 2026, 14(2), 181; https://doi.org/10.3390/medsci14020181 - 2 Apr 2026
Abstract
Background: Gaucher disease (GD) is a rare lysosomal storage disorder caused by mutations in the GBA1 gene. Traditionally, GD is classified into three subtypes based on the severity of neurological involvement; however, overlapping clinical features increasingly suggest a continuum of phenotypes rather than distinct categories. In this prospective observational cohort study, we conducted a multidisciplinary assessment of patients with GD to identify and monitor neurological, cognitive, auditory, and visual impairments. Materials and Methods: A comprehensive clinical and instrumental evaluation was performed at baseline and repeated at follow-up, with a median interval of 37 months (IQR 36–38). Neurological assessments included physical examination, clinical rating scales, video-EEG, and brain MRI. Cognitive status was assessed using a standardized battery of neuropsychological tests. Detailed audiological and ophthalmological evaluations were also conducted. Paired parametric or non-parametric tests were applied as appropriate, with Bonferroni correction for cognitive outcomes (p < 0.05). Results: Of the 22 patients assessed at baseline, 18 completed the follow-up evaluation. Neurological assessments showed a worsening of subtle parkinsonian signs, with significant increases in Movement Disorder Society–Unified Parkinson’s Disease Rating Scale Part III scores (p = 0.04) and non-motor symptom scores (p = 0.01). Two of the eighteen patients developed epilepsy during follow-up. A high prevalence of sleep disturbances was confirmed, with 27.8% exhibiting excessive daytime sleepiness and 16.7% reporting REM sleep behaviour disorder on standardized questionnaires. Compared with baseline, cognitive assessments revealed a higher proportion of patients with performance below normative population scores in at least one cognitive domain, particularly memory. 
Sensorineural hearing loss was confirmed in 11 of 15 patients (73.3%) who underwent audiological evaluation, with progressive worsening of audiometric thresholds observed in 7 of 11 (64%). Ophthalmological evaluations showed no changes in visual acuity or OCT findings; however, multifocal electroretinography abnormalities were detected in 12 of 13 patients. Conclusions: Through in-depth phenotyping, this study identifies measurable progressive neurological, cognitive, and sensory changes in patients with GD over time, supporting the value of tailored, multidisciplinary long-term care strategies to monitor and address emerging clinical needs in this rare disease. Full article
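The per-domain cognitive comparisons in this study use a Bonferroni correction at p &lt; 0.05. A minimal sketch of that adjustment is given below; the domain names and p-values are hypothetical placeholders, not data from the study:

```python
# Bonferroni correction: with m tests, each p-value is compared against
# alpha / m, or equivalently multiplied by m (capped at 1) and compared
# against alpha. Values below are illustrative only.

def bonferroni(p_values, alpha=0.05):
    """Return (Bonferroni-adjusted p-values, significance flags)."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    significant = [p_adj < alpha for p_adj in adjusted]
    return adjusted, significant

# Hypothetical raw p-values for four cognitive domains.
raw = {"memory": 0.004, "attention": 0.03, "language": 0.20, "executive": 0.01}
adj, sig = bonferroni(list(raw.values()))
for name, p_adj, s in zip(raw, adj, sig):
    print(f"{name}: adjusted p = {p_adj:.3f}, significant = {s}")
```

With four domains, a raw p of 0.03 no longer survives correction (0.03 × 4 = 0.12), which is why family-wise control matters when testing several cognitive domains at once.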

26 pages, 16800 KB  
Article
Automated Anatomical Feature Analysis and Scoring for Draw-a-Person Test Drawings via ResNet-Based Multi-Label Detection and Classification
by Asma Abdullah Alwadai and Emad Sami Jaha
AI 2026, 7(4), 130; https://doi.org/10.3390/ai7040130 - 2 Apr 2026
Viewed by 270
Abstract
The process of manually scoring drawings for the Goodenough–Harris Draw-a-Person (DAP) test is time-consuming and labor-intensive. It is also prone to inconsistencies due to subjective interpretation. With these drawbacks in mind, this study introduces a hybrid model for the automated analysis and scoring of DAP test results, combining deep learning with rule-based reasoning. The proposed model has two modules: one predicting ten visual anatomical features of the drawings with a convolutional neural network (CNN), and another applying six rules that represent geometric and spatial relationships. The CNN output is binarized by thresholding and then concatenated with the results of the heuristic rules to obtain a final set of sixteen features. The model was evaluated using five-fold cross-validation and a separate hold-out test set of 948 labeled drawings. Under five-fold cross-validation, the approach maintains consistent performance, with average F1-scores above 0.90 for all primary anatomical features. On the hold-out test set, it achieves a high macro-average accuracy of 91.78% across all sixteen features, indicating a strong generalization capability within the problem domain. The approach achieves near-perfect scores for structurally prominent anatomical features such as the head, limbs, and trunk-related relationships, and for all heuristic-based features. Nevertheless, it performs poorly on less visually distinguishable anatomical features such as the ears (average F1-scores ≈ 0.09–0.12) and, to a lesser extent, the neck (average F1-scores ≈ 0.75). The evaluation results show that the approach efficiently approximates expert-level scoring with a considerable reduction in human effort.
Some limitations remain. First, the approach is less robust for subtle anatomical features. Second, it relies on heuristic thresholds for feature extraction. Third, it weights all sixteen features equally, which may not exactly match the actual DAP scoring system. Full article
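The fusion step the abstract describes, binarizing ten CNN outputs by thresholding and concatenating six rule results into sixteen features, can be sketched as follows. The 0.5 threshold, probability values, and rule outcomes below are illustrative assumptions, not the paper's actual parameters:

```python
# Sketch of the feature-fusion step: threshold ten CNN sigmoid outputs
# to binary, then append six rule-based binary results, yielding a
# sixteen-element feature vector. All numeric values are illustrative.

def fuse_features(cnn_probs, rule_results, threshold=0.5):
    """Binarize CNN probabilities and append heuristic rule outcomes."""
    if len(cnn_probs) != 10 or len(rule_results) != 6:
        raise ValueError("expected 10 CNN outputs and 6 rule results")
    binary = [1 if p >= threshold else 0 for p in cnn_probs]
    return binary + [int(bool(r)) for r in rule_results]

# Hypothetical multi-label probabilities for ten anatomical features.
probs = [0.97, 0.88, 0.12, 0.65, 0.40, 0.91, 0.55, 0.08, 0.73, 0.99]
rules = [True, True, False, True, False, True]  # six geometric/spatial checks
features = fuse_features(probs, rules)
print(features)       # sixteen binary values
print(len(features))  # 16
```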

26 pages, 14178 KB  
Article
FADiff: A Frequency-Aware Diffusion Model Based on Hybrid CNN–Transformer Network for Radar-Based Precipitation Nowcasting
by Jiandan Zhong, Wei Deng, Guanru Lyu, Jingbo Zhai, Yingxiang Li, Yajuan Xue and Zhipeng Yang
Remote Sens. 2026, 18(7), 1061; https://doi.org/10.3390/rs18071061 - 2 Apr 2026
Viewed by 317
Abstract
Precipitation nowcasting is a critical part of meteorological services and applications. Recently, mainstream research has focused on deep learning-based models for generating predictions, yet existing models face three challenges: blurry predictions that fail to capture high-frequency meteorological details, difficulty in modeling both local correlations and long-range spatial dependencies, and a fundamental signal–noise confusion within the diffusion process that degrades structural fidelity. In this paper, we propose FADiff, a novel frequency-aware diffusion model based on a hybrid CNN–Transformer network for radar-based precipitation nowcasting. A hybrid CNN–Transformer backbone is first designed to integrate CNNs with Transformers, jointly enabling local and global feature extraction from the meteorological dynamics. Subsequently, a novel Frequency-Aware Module (FAM) is proposed to mitigate signal–noise confusion. By transforming features into the frequency domain via the Discrete Cosine Transform (DCT), the FAM performs content-adaptive filtering with a learnable gating mechanism, designed to suppress noise-dominant frequency components while preserving the high-frequency components that correspond to real meteorological structures. Finally, these components are embedded within a latent diffusion model to form an end-to-end nowcasting framework. Extensive experiments on the CIKM and SEVIR datasets demonstrate that FADiff outperforms state-of-the-art methods across a comprehensive suite of evaluation metrics. Notably, under high-intensity precipitation thresholds, FADiff exhibits remarkable robustness and stability, demonstrating its superior capability to generate meteorologically critical structures with high fidelity. Full article
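The FAM's core idea, transforming features to the DCT domain, attenuating selected components with a gate, and transforming back, can be illustrated in one dimension. In the paper the gate is learnable and content-adaptive; the fixed gates below are hypothetical stand-ins, and the transform is a plain unnormalized DCT-II with its matching inverse:

```python
import math

# One-dimensional sketch of frequency-domain gating: DCT-II forward,
# per-frequency gate, matching inverse transform. The paper's gate is
# learned; the gate vectors here are fixed illustrative stand-ins.

def dct2(x):
    """Unnormalized DCT-II: X_k = sum_n x_n * cos(pi*(2n+1)*k / (2N))."""
    n_len = len(x)
    return [sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * n_len))
                for n in range(n_len)) for k in range(n_len)]

def idct2(coeffs):
    """Inverse of dct2: x_n = X_0/N + (2/N) * sum_{k>=1} X_k cos(...)."""
    n_len = len(coeffs)
    return [coeffs[0] / n_len
            + (2.0 / n_len) * sum(
                coeffs[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * n_len))
                for k in range(1, n_len))
            for n in range(n_len)]

def frequency_gate(x, gate):
    """Scale each DCT coefficient by its gate weight, then invert."""
    coeffs = dct2(x)
    gated = [c * g for c, g in zip(coeffs, gate)]
    return idct2(gated)

signal = [1.0, 3.0, 2.0, 5.0, 4.0, 6.0, 5.0, 7.0]
identity = [1.0] * 8                 # all-ones gate: exact reconstruction
lowpass = [1, 1, 1, 1, 0, 0, 0, 0]   # hypothetical gate zeroing high bands
print(frequency_gate(signal, identity))
print(frequency_gate(signal, lowpass))
```

An all-ones gate reproduces the input, while zeroing high bands smooths it; the FAM replaces these fixed vectors with weights predicted from the feature content itself.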

21 pages, 3770 KB  
Article
Wavelet Entropy and Machine Learning Analysis of Nonlinear Dynamics in Tubular Light Pipes
by Sertac Gorgulu
Electronics 2026, 15(7), 1474; https://doi.org/10.3390/electronics15071474 - 1 Apr 2026
Viewed by 235
Abstract
This study presents a hybrid framework primarily designed to predict electrical energy consumption in tubular light pipe systems while providing interpretability through wavelet-based analysis. Indoor and outdoor illuminance were continuously monitored at one-minute intervals between January and May in Istanbul, Turkey. Using the continuous wavelet transform (CWT) with predefined scale ranges, multi-scale features such as scale-wise energy, relative wavelet energy, and wavelet entropy were extracted to quantify illumination variability and stability. These features were combined with contextual parameters (e.g., month and weather) to predict electrical energy consumption and the energy-saving ratio under a threshold-based lighting control strategy. Among the evaluated models, Random Forest was selected as the primary model for its balance between prediction accuracy and interpretability, achieving lower prediction errors than the baselines (RMSE = 7.84 for RF, versus 9.39 for Linear Regression and 8.28 for ARIMA), although the observed improvements are influenced by the inherent variability in the dataset. Feature-importance and SHapley Additive exPlanations (SHAP) analyses revealed that low-frequency wavelet components and low wavelet entropy values strongly influence the predictive behavior, indicating that stable illumination leads to reduced artificial lighting demand and higher energy savings. A Lyapunov-inspired stability interpretation suggests that the system exhibits stable behavior consistent with asymptotic convergence. Unlike existing studies, the proposed framework integrates wavelet entropy with interpretable machine learning to jointly model illumination dynamics and energy demand, enabling more reliable prediction of lighting energy demand under highly variable daylight conditions. Full article
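The wavelet features named above follow directly from the coefficient magnitudes at each scale: scale-wise energy is the sum of squared coefficients, relative wavelet energy normalizes those sums, and wavelet entropy is the Shannon entropy of the resulting distribution. A minimal sketch, where the per-scale coefficients are hypothetical numbers rather than measured illuminance data:

```python
import math

# Scale-wise energy E_j = sum of squared coefficients at scale j,
# relative energy p_j = E_j / sum_j E_j, and wavelet entropy
# S = -sum_j p_j * ln(p_j). Coefficient values are illustrative.

def wavelet_features(coeffs_by_scale):
    """Return (scale energies, relative energies, wavelet entropy)."""
    energies = [sum(c * c for c in scale) for scale in coeffs_by_scale]
    total = sum(energies)
    rel = [e / total for e in energies]
    entropy = -sum(p * math.log(p) for p in rel if p > 0.0)
    return energies, rel, entropy

# Hypothetical CWT coefficients grouped into three scale bands.
scales = [
    [4.0, 3.0],   # dominant low-frequency band -> low entropy
    [1.0, 0.5],
    [0.5, 0.5],
]
energies, rel, entropy = wavelet_features(scales)
print(energies, rel, round(entropy, 3))
```

Energy concentrated in one band drives the entropy toward zero (stable illumination), while energy spread evenly across bands drives it toward ln(number of bands), which matches the study's link between low entropy and reduced artificial lighting demand.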
