Search Results (3,956)

Search Parameters:
Keywords = real-world validation

17 pages, 414 KB  
Article
DQMAF—Data Quality Modeling and Assessment Framework
by Razan Al-Toq and Abdulaziz Almaslukh
Information 2025, 16(10), 911; https://doi.org/10.3390/info16100911 (registering DOI) - 17 Oct 2025
Abstract
In today’s digital ecosystem, where millions of users interact with diverse online services and generate vast amounts of textual, transactional, and behavioral data, ensuring the trustworthiness of this information has become a critical challenge. Low-quality data—manifesting as incompleteness, inconsistency, duplication, or noise—not only undermines analytics and machine learning models but also exposes unsuspecting users to unreliable services, compromised authentication mechanisms, and biased decision-making processes. Traditional data quality assessment methods, largely based on manual inspection or rigid rule-based validation, cannot cope with the scale, heterogeneity, and velocity of modern data streams. To address this gap, we propose DQMAF (Data Quality Modeling and Assessment Framework), a generalized machine learning–driven approach that systematically profiles, evaluates, and classifies data quality to protect end-users and enhance the reliability of Internet services. DQMAF introduces an automated profiling mechanism that measures multiple dimensions of data quality—completeness, consistency, accuracy, and structural conformity—and aggregates them into interpretable quality scores. Records are then categorized into high, medium, and low quality, enabling downstream systems to filter or adapt their behavior accordingly. A distinctive strength of DQMAF lies in integrating profiling with supervised machine learning models, producing scalable and reusable quality assessments applicable across domains such as social media, healthcare, IoT, and e-commerce. The framework incorporates modular preprocessing, feature engineering, and classification components using Decision Trees, Random Forest, XGBoost, AdaBoost, and CatBoost to balance performance and interpretability. We validate DQMAF on a publicly available Airbnb dataset, showing its effectiveness in detecting and classifying data issues with high accuracy. 
The results highlight its scalability and adaptability for real-world big data pipelines, supporting user protection, document and text-based classification, and proactive data governance while improving trust in analytics and AI-driven applications. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
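The abstract above describes aggregating per-dimension quality measurements into an interpretable score and bucketing records into high/medium/low classes. A minimal sketch of that scoring step, assuming equal dimension weights and illustrative thresholds (the paper does not publish its weights or cutoffs, and its classifiers are trained models rather than fixed rules):

```python
# Sketch of DQMAF-style quality scoring: per-record dimension scores are
# aggregated into one quality score, then bucketed into high/medium/low.
# Dimension names follow the abstract; weights and thresholds here are
# illustrative assumptions, not the paper's values.

def quality_score(record_metrics, weights=None):
    """record_metrics: dict mapping dimension name -> score in [0, 1]."""
    dims = ["completeness", "consistency", "accuracy", "conformity"]
    weights = weights or {d: 1.0 / len(dims) for d in dims}
    return sum(weights[d] * record_metrics[d] for d in dims)

def quality_class(score, hi=0.8, lo=0.5):
    if score >= hi:
        return "high"
    return "medium" if score >= lo else "low"

record = {"completeness": 0.9, "consistency": 1.0,
          "accuracy": 0.85, "conformity": 0.95}
s = quality_score(record)
print(round(s, 3), quality_class(s))  # 0.925 high
```

In the framework itself, scores like this label training data for the supervised classifiers, so downstream systems can filter on the predicted class rather than recomputing every profile.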

20 pages, 6424 KB  
Article
Coherent Dynamic Clutter Suppression in Structural Health Monitoring via the Image Plane Technique
by Mattia Giovanni Polisano, Marco Manzoni, Stefano Tebaldini, Damiano Badini and Sergi Duque
Remote Sens. 2025, 17(20), 3459; https://doi.org/10.3390/rs17203459 - 16 Oct 2025
Abstract
In this work, a radar imagery-based signal processing technique to eliminate dynamic clutter interference in Structural Health Monitoring (SHM) is proposed. This can be considered an application of a joint communication and sensing telecommunication infrastructure, leveraging a base station as a ground-based radar. The dynamic clutter is taken to be a fast-moving road user, such as a car, truck, or moped. The proposed technique is suited to cases in which the clutter's Doppler contribution aliases and folds onto the 0 Hz component, where a standard low-pass filter is not a viable option: an excessively shallow low-pass filter preserves the dynamic clutter contribution, while an excessively narrow one deletes the displacement information while still preserving the clutter. The proposed approach leverages Time Domain Backprojection (TDBP), a well-known technique for producing radar imagery, to transfer the dynamic clutter from the data domain to an image plane, where it is maximally compressed. Consequently, the dynamic clutter can be suppressed more effectively than in the range-Doppler domain; the cancellation is performed by coherent subtraction. A numerical simulation is conducted, and its results are consistent with the ground truth. Further validation is performed using real-world data acquired in the C-band by Huawei Technologies, with corner reflectors placed on an infrastructure, in particular a bridge, to perform the measurements. Two case studies are presented: a bus and a truck. The validation shows consistency with the ground truth, improving both the mean error and the variance of the displacement estimate relative to the corrupted one. As a by-product, the algorithm can produce high-resolution imagery of moving targets. Full article
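The core idea above is that a clutter component spread out in the data domain focuses into a tight peak in an image domain, where it can be estimated and coherently subtracted. A toy 1D illustration, with an FFT standing in for the TDBP focusing transform and purely synthetic signal parameters (not the paper's data):

```python
import numpy as np

# Toy coherent clutter cancellation: the clutter is a strong complex
# exponential that compresses into a single peak after a focusing
# transform (FFT here, standing in for TDBP imaging). The peak is
# estimated there, re-synthesized in the data domain, and coherently
# subtracted. Amplitudes and frequencies are illustrative.
n = 256
t = np.arange(n)
signal = 0.2 * np.exp(2j * np.pi * 13 * t / n)    # slow structural response
clutter = 1.0 * np.exp(2j * np.pi * 79 * t / n)   # strong fast mover
data = signal + clutter

image = np.fft.fft(data) / n                      # "image plane": clutter focuses into one bin
k = int(np.argmax(np.abs(image)))                 # dominant peak = clutter
clutter_est = image[k] * np.exp(2j * np.pi * k * t / n)  # rebuild clutter in data domain
clean = data - clutter_est                        # coherent subtraction

residual = np.linalg.norm(clean - signal) / np.linalg.norm(signal)
print(k, residual < 1e-6)  # 79 True
```

The point of working in the focused domain is exactly what the abstract states: subtraction removes the clutter where it is maximally compressed, so the weak displacement signal survives, unlike with a low-pass filter when the clutter aliases onto 0 Hz.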

31 pages, 3812 KB  
Review
Generative Adversarial Networks in Dermatology: A Narrative Review of Current Applications, Challenges, and Future Perspectives
by Rosa Maria Izu-Belloso, Rafael Ibarrola-Altuna and Alex Rodriguez-Alonso
Bioengineering 2025, 12(10), 1113; https://doi.org/10.3390/bioengineering12101113 - 16 Oct 2025
Abstract
Generative Adversarial Networks (GANs) have emerged as powerful tools in artificial intelligence (AI) with growing relevance in medical imaging. In dermatology, GANs are revolutionizing image analysis, enabling synthetic image generation, data augmentation, color standardization, and improved diagnostic model training. This narrative review explores the landscape of GAN applications in dermatology, systematically analyzing 27 key studies and identifying 11 main clinical use cases. These range from the synthesis of under-represented skin phenotypes to segmentation, denoising, and super-resolution imaging. The review also examines the commercial implementations of GAN-based solutions relevant to practicing dermatologists. We present a comparative summary of GAN architectures, including DCGAN, cGAN, StyleGAN, CycleGAN, and advanced hybrids. We analyze technical metrics used to evaluate performance—such as Fréchet Inception Distance (FID), SSIM, Inception Score, and Dice Coefficient—and discuss challenges like data imbalance, overfitting, and the lack of clinical validation. Additionally, we review ethical concerns and regulatory limitations. Our findings highlight the transformative potential of GANs in dermatology while emphasizing the need for standardized protocols and rigorous validation. While early results are promising, few models have yet reached real-world clinical integration. The democratization of AI tools and open-access datasets are pivotal to ensure equitable dermatologic care across diverse populations. This review serves as a comprehensive resource for dermatologists, researchers, and developers interested in applying GANs in dermatological practice and research. Future directions include multimodal integration, clinical trials, and explainable GANs to facilitate adoption in daily clinical workflows. Full article
(This article belongs to the Special Issue AI-Driven Imaging and Analysis for Biomedical Applications)

25 pages, 1355 KB  
Article
Source Robust Non-Parametric Reconstruction of Epidemic-like Event-Based Network Diffusion Processes Under Online Data
by Jiajia Xie, Chen Lin, Xinyu Guo and Cassie S. Mitchell
Big Data Cogn. Comput. 2025, 9(10), 262; https://doi.org/10.3390/bdcc9100262 - 16 Oct 2025
Abstract
Temporal network diffusion models play a crucial role in healthcare, information technology, and machine learning, enabling the analysis of dynamic event-based processes such as disease spread, information propagation, and behavioral diffusion. This study addresses the challenge of reconstructing temporal network diffusion events in real time under conditions of missing and evolving data. A novel non-parametric reconstruction method based on simple weight differentiation is proposed to enhance source detection robustness with provably improved error bounds. The approach introduces adaptive cost adjustments, dynamically reducing high-risk source penalties and enabling bounded detours to mitigate errors introduced by missing edges. Theoretical analysis establishes enhanced upper bounds on false positives caused by detouring, while a stepwise evaluation of dynamic costs minimizes redundant solutions, resulting in robust Steiner tree reconstructions. Empirical validation on three real-world datasets demonstrates a 5% improvement in Matthews correlation coefficient (MCC), a twofold reduction in redundant sources, and a 50% decrease in source variance. These results confirm the effectiveness of the proposed method in accurately reconstructing temporal network diffusion while improving stability and reliability in both offline and online settings. Full article
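The adaptive cost adjustment above works by lowering the activation penalty of high-risk source candidates so that the reconstruction prefers them. A much-simplified single-source sketch of that idea, with a plain Dijkstra shortest path standing in for the paper's Steiner-tree machinery; the graph, penalties, and observed nodes are illustrative assumptions:

```python
import heapq

# Simplified penalty-adjusted source selection: each candidate source pays
# an activation penalty plus shortest-path costs to reach all observed
# infected nodes. Reducing the penalty of a high-risk candidate (the
# paper's adaptive cost adjustment, greatly simplified) changes which
# source is selected. Graph and numbers are illustrative only.

def dijkstra(graph, start):
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def best_source(graph, candidates, observed, penalty):
    def cost(s):
        d = dijkstra(graph, s)
        return penalty[s] + sum(d.get(o, float("inf")) for o in observed)
    return min(candidates, key=cost)

graph = {
    "a": [("c", 1.5), ("d", 1.5)],
    "b": [("c", 1.0), ("d", 1.0)],
    "c": [("a", 1.5), ("b", 1.0)],
    "d": [("a", 1.5), ("b", 1.0)],
}
observed = ["c", "d"]
baseline = {"a": 0.5, "b": 3.0}   # "b" heavily penalized as a source
adjusted = {"a": 0.5, "b": 0.5}   # penalty reduced for high-risk source "b"
print(best_source(graph, ["a", "b"], observed, baseline))  # a
print(best_source(graph, ["a", "b"], observed, adjusted))  # b
```

The full method additionally bounds the detours taken around missing edges, which is what yields the improved false-positive bounds claimed in the abstract.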
21 pages, 3443 KB  
Review
Artificial Intelligence in the Management of Infectious Diseases in Older Adults: Diagnostic, Prognostic, and Therapeutic Applications
by Antonio Pinto, Flavia Pennisi, Stefano Odelli, Emanuele De Ponti, Nicola Veronese, Carlo Signorelli, Vincenzo Baldo and Vincenza Gianfredi
Biomedicines 2025, 13(10), 2525; https://doi.org/10.3390/biomedicines13102525 - 16 Oct 2025
Abstract
Background: Older adults are highly vulnerable to infectious diseases due to immunosenescence, multimorbidity, and atypical presentations. Artificial intelligence (AI) offers promising opportunities to improve diagnosis, prognosis, treatment, and continuity of care in this population. This review summarizes current applications of AI in the management of infections in older adults across diagnostic, prognostic, therapeutic, and preventive domains. Methods: We conducted a narrative review of peer-reviewed studies retrieved from PubMed, Scopus, and Web of Science, focusing on AI-based tools for infection diagnosis, risk prediction, antimicrobial stewardship, prevention of healthcare-associated infections, and post-discharge care in individuals aged ≥65 years. Results: AI models, including machine learning, deep learning, and natural language processing techniques, have demonstrated high performance in detecting infections such as sepsis, pneumonia, and healthcare-associated infections (Area Under the Curve, AUC, up to 0.98). Prognostic algorithms integrating frailty and functional status enhance the prediction of mortality, complications, and readmission. AI-driven clinical decision support systems contribute to optimized antimicrobial therapy and timely interventions, while remote monitoring and telemedicine applications support safer hospital-to-home transitions and reduced 30-day readmissions. However, the implementation of these technologies is limited by the underrepresentation of frail older adults in training datasets, lack of real-world validation in geriatric settings, and the insufficient explainability of many models. Additional barriers include system interoperability issues and variable digital infrastructure, particularly in long-term care and community settings. Conclusions: AI has strong potential to support predictive and personalized infection management in older adults. 
Future research should focus on developing geriatric-specific, interpretable models, improving system integration, and fostering interdisciplinary collaboration to ensure safe and equitable implementation. Full article
(This article belongs to the Special Issue Feature Reviews in Infection and Immunity)

17 pages, 2716 KB  
Article
A Study on the Performance Comparison of Brain MRI Image-Based Abnormality Classification Models
by Jinhyoung Jeong, Sohyeon Bang, Yuyeon Jung and Jaehyun Jo
Life 2025, 15(10), 1614; https://doi.org/10.3390/life15101614 - 16 Oct 2025
Abstract
We developed a model that classifies normal and abnormal brain MRI images. This study initially referenced a small-scale real patient dataset (98 normal and 155 abnormal MRI images) provided by the National Institute on Aging (NIA) to illustrate the class imbalance challenge. However, all experiments and performance evaluations were conducted on a larger synthetic dataset (10,000 images; 5000 normal and 5000 abnormal) generated from the National Imaging System (NIS/AI Hub). Therefore, while the NIA dataset highlights the limitations of real-world data availability, the reported results are based exclusively on the synthetic dataset. In the preprocessing step, all MRI images were normalized to the same size, and data augmentation techniques such as rotation, translation, and flipping were applied to increase data diversity and reduce overfitting during training. Based on deep learning, we fine-tuned our own CNN model and a ResNet-50 transfer learning model using ImageNet pretrained weights. We also compared the performance of our models with traditional machine learning using SVM (RBF kernel) and random forest classifiers. Experimental results showed that the ResNet-50 transfer learning model achieved the best performance, with approximately 95% accuracy and a high F1 score on the test set, while our own CNN also performed well. In contrast, the SVM and random forest classifiers performed relatively poorly, as they could not sufficiently learn the complex characteristics of the images. This study confirmed that deep learning techniques, including transfer learning, achieve excellent brain abnormality detection performance even with limited real-world medical data. These results highlight methodological potential but should be interpreted with caution, as further validation with real-world clinical MRI data is required before clinical applicability can be established. Full article
(This article belongs to the Section Radiobiology and Nuclear Medicine)
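The preprocessing described above (intensity normalization plus rotation/translation/flip augmentation) can be sketched with NumPy alone. Shapes, shift ranges, and probabilities are illustrative assumptions; a real pipeline would also resample every volume to a common size:

```python
import numpy as np

# Sketch of the preprocessing steps described above: normalize a slice's
# intensities to [0, 1], then apply label-preserving augmentations
# (90-degree rotation, flip, integer-pixel translation). Parameters are
# illustrative, not the study's exact settings.
def normalize(img):
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def augment(img, rng):
    if rng.random() < 0.5:
        img = np.rot90(img, k=int(rng.integers(1, 4)))   # 90/180/270 degrees
    if rng.random() < 0.5:
        img = np.flip(img, axis=int(rng.integers(0, 2)))  # vertical or horizontal
    shift = rng.integers(-5, 6, size=2)
    img = np.roll(img, shift, axis=(0, 1))  # cheap stand-in for translation
    return img

rng = np.random.default_rng(42)
slice_ = rng.normal(size=(128, 128))
x = normalize(slice_)
y = augment(x, rng)
print(x.min() == 0.0, x.max() == 1.0, y.shape)
```

Augmentations like these only increase apparent diversity within the existing distribution; they cannot substitute for the real clinical MRI validation the abstract itself calls for.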

13 pages, 423 KB  
Article
Trastuzumab Deruxtecan-Associated Interstitial Lung Disease: Real-World Insights from a Tertiary Care Center
by Ahmed S. Alanazi, Ahmed A. Alanazi, Abdalrhman Alanizi, Ranad Babalghaith, Reema Alotaibi, Mohammed Alnuhait and Hatoon Bakhribah
Curr. Oncol. 2025, 32(10), 575; https://doi.org/10.3390/curroncol32100575 (registering DOI) - 16 Oct 2025
Abstract
Background: Trastuzumab deruxtecan (T-DXd), a HER2-directed antibody-drug conjugate, has significantly advanced the management of HER2-expressing malignancies. However, interstitial lung disease (ILD) remains a clinically significant adverse effect. Despite increasing clinical use of T-DXd, real-world data on ILD incidence, characteristics, and outcomes—particularly in Middle Eastern populations—remain limited. Methods: This retrospective study analyzed medical records of patients who received trastuzumab deruxtecan (T-DXd) at a tertiary care hospital. Data collected included demographics, tumor characteristics, prior treatments, and interstitial lung disease (ILD)-related outcomes. ILD events were identified and graded according to the Common Terminology Criteria for Adverse Events (CTCAE) version 5.0. Descriptive statistics were used to summarize baseline characteristics and ILD features. Univariate logistic regression was performed to assess potential risk factors associated with ILD development. Kaplan–Meier survival analysis was used to evaluate time-to-event outcomes, including time to ILD onset and resolution. Results: Among 65 patients with advanced stage IV cancer (90.8% with breast cancer), 16 (24.6%) developed ILD following T-DXd therapy. The median time to ILD onset was 125.5 days. The most common presenting symptoms were dyspnea and cough (50%). A history of ground-glass opacities was associated with increased odds of ILD (OR 2.7; p = 0.236), though not statistically significant. Patients with Grade ≥ 3 ILD had significantly lower oxygen saturation levels compared to those with milder grades (88.3% vs. 97.7%, p = 0.049). Median time to clinical resolution was 297 days (95% CI: 77.5–516). No significant associations were observed with smoking history, pulmonary metastases, or prior thoracic radiation. 
Conclusions: In this real-world cohort, ILD occurred in nearly one-quarter of patients receiving T-DXd, predominantly within the first six months of treatment. The findings highlight the importance of early respiratory symptom monitoring and pulse oximetry—particularly in patients with pre-existing pulmonary abnormalities. These results underscore the need for vigilant ILD surveillance strategies and further prospective studies to validate predictive risk factors and optimize management protocols. Full article
(This article belongs to the Section Thoracic Oncology)
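The study's time-to-event analysis uses the Kaplan–Meier product-limit estimator. A minimal pure-Python version is below; the small sample is hypothetical (1 = ILD event, 0 = censored), not the study's patient data:

```python
# Minimal Kaplan-Meier product-limit estimator, the method used above for
# time-to-ILD-onset and time-to-resolution. Times are in days; the sample
# below is hypothetical, not the paper's data (1 = event, 0 = censored).
def kaplan_meier(times, events):
    event_times = sorted(set(t for t, e in zip(times, events) if e))
    surv, s = [], 1.0
    for t in event_times:
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        s *= 1.0 - deaths / at_risk   # survival drops only at event times
        surv.append((t, s))
    return surv

times  = [60, 90, 90, 125, 200, 300, 300, 365]
events = [1,  1,  0,  1,   1,   0,   1,   0]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))  # 60 0.875 / 90 0.75 / 125 0.6 / 200 0.45 / 300 0.3
```

Censored patients (those who left follow-up without an event) still count in the at-risk denominator until their censoring time, which is what distinguishes this estimator from a naive cumulative fraction.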

28 pages, 8791 KB  
Article
CRSensor: A Synchronized and Impact-Aware Traceability Framework for Business Application Development
by Soojin Park
Appl. Sci. 2025, 15(20), 11083; https://doi.org/10.3390/app152011083 - 16 Oct 2025
Abstract
To enable effective change impact management in business applications, robust requirements traceability is essential. However, manual approaches are inefficient and prone to errors. While prior Model-Driven Engineering (MDE)-based research, including the author's theoretical models, established the principles of traceability, these approaches lacked decisive quantitative validation using metrics such as precision and recall, thereby limiting their real-world applicability. This paper addresses these limitations by introducing the CRSensor framework, which integrates real-time automated trace link generation with dynamic refinement of the developer model. This approach enhances the reliability and completeness of organizational impact analysis, resolving key weaknesses of conventional link recovery methods. Notably, CRSensor maintains structural consistency throughout the lifecycle, overcoming reliability limitations often found in traditional information retrieval (IR)/machine learning (ML)-based traceability solutions. Empirical evaluation demonstrates that CRSensor achieves an average trace link precision of 0.95, a recall of 0.98, and an auto-generation rate of 80%. These results validate both the industrial applicability and the quantitative rigor of the proposed framework, paving the way for broader practical adoption. Full article

22 pages, 5278 KB  
Article
Modeling and Simulation of Lower Limb Rehabilitation Exoskeletons: A Comparative Analysis for Dynamic Model Validation and Optimal Approach Selection
by Rana Sami Ullah Khan, Muhammad Tallal Saeed, Zeashan Khan, Urooj Abid, Hafiz Zia Ur Rehman, Zareena Kausar and Shiyin Qin
Robotics 2025, 14(10), 143; https://doi.org/10.3390/robotics14100143 - 16 Oct 2025
Abstract
Accurate modeling and simulation of lower limb rehabilitation exoskeletons (LLREs) enables effective control, resulting in enhanced performance and efficient rehabilitation. This study has two primary objectives: first, to validate existing models, and second, to identify the optimal modeling approach for exoskeletons. For validation, a lower limb rehabilitation exoskeleton is first modeled using three different approaches: analytical modeling, bond graph modeling, and modeling through Simscape (SS). Thereafter, the dynamic responses of the analytical and graphical models are compared with the SS model using key dynamic response parameters, including rise time, peak time, and others. The SS-based physical model can be employed for validation because SS, unlike mathematical modeling, uses unit-consistent physical domain data and therefore serves as an intermediate step between mathematical modeling and hardware validation. Secondly, to identify the most suitable modeling approach, a structured and comprehensive comparison of the approaches is carried out based on aspects such as control domain, complexity, ease of use, and other relevant factors. The results highlight the qualitative strengths and limitations of the three approaches; previous studies focus on individual methods and lack such a comparison. This work contributes to the validation of models and the identification of an efficient and effective modeling methodology for LLREs. The findings reveal that Simscape™ is the most suitable approach for modeling LLREs, as it provides multidisciplinary system modeling and supports real-time simulation. The validated model can now be employed for advancements in model-based control design. Moreover, the identified optimal approach provides researchers and engineers with insight into model selection for early-stage design and control development of complex mechatronic systems. 
Future work includes comparison of dynamic responses with actual hardware responses to experimentally validate the effectiveness of the model for real-world patient assistance and mobility restoration. Full article
(This article belongs to the Special Issue Development of Biomedical Robotics)

19 pages, 2701 KB  
Article
RFID-Enabled Electronic Voting Framework for Secure Democratic Processes
by Stella N. Arinze and Augustine O. Nwajana
Telecom 2025, 6(4), 78; https://doi.org/10.3390/telecom6040078 (registering DOI) - 16 Oct 2025
Abstract
The growing global demand for secure, transparent, and efficient electoral systems has highlighted the limitations of traditional voting methods, which remain susceptible to voter impersonation, ballot tampering, long queues, logistical challenges, and delayed result processing. To address these issues, this study presents the design and implementation of a Radio Frequency Identification (RFID)-based electronic voting framework that integrates robust voter authentication, encrypted vote processing, and decentralized real-time monitoring. The system is developed as a scalable, cost-effective solution suitable for both urban and resource-constrained environments, especially those with limited infrastructure or inconsistent internet connectivity. It employs RFID-enabled smart voter cards containing encrypted unique identifiers, with each voter authenticated via an RC522 reader that validates their UID against an encrypted whitelist stored locally. Upon successful verification, the voter selects a candidate via a digital interface, and the vote is encrypted using AES-128 before being stored either locally on an SD card or transmitted through GSM to a secure backend. To ensure operability in offline settings, the system supports batch synchronization, where encrypted votes and metadata are uploaded once connectivity is restored. A tamper-proof monitoring mechanism logs each session with device ID, timestamps, and cryptographic checksums to maintain integrity and prevent duplication or external manipulation. Simulated deployments under real-world constraints tested the system’s performance against common threats such as duplicate voting, tag cloning, and data interception. Results demonstrated reduced authentication time, improved voter throughput, and strong resistance to security breaches—validating the system’s resilience and practicality. 
This work offers a hybrid RFID-based voting framework that bridges the gap between technical feasibility and real-world deployment, contributing a secure, transparent, and credible model for modernizing democratic processes in diverse political and technological landscapes. Full article
(This article belongs to the Special Issue Digitalization, Information Technology and Social Development)
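The tamper-proof monitoring mechanism above logs each session with a device ID, timestamp, and cryptographic checksum. One standard way to make such a log tamper-evident is to chain each entry's checksum to the previous one; the sketch below does this with SHA-256 from the standard library. Field names and sample entries are hypothetical, and the vote payloads would be AES-128 ciphertext in the real system (AES itself needs a crypto library and is omitted here):

```python
import hashlib
import json

# Tamper-evident session log sketch: each entry binds the device ID,
# timestamp, and payload to the previous entry's checksum, so altering
# any stored record breaks every later checksum. Field names and values
# are hypothetical, not the paper's wire format.
def append_entry(log, device_id, timestamp, payload):
    prev = log[-1]["checksum"] if log else "0" * 64
    body = {"device_id": device_id, "timestamp": timestamp,
            "payload": payload, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "checksum": digest})

def verify(log):
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("device_id", "timestamp", "payload", "prev")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or recomputed != entry["checksum"]:
            return False
        prev = entry["checksum"]
    return True

log = []
append_entry(log, "station-07", "2025-10-16T09:00:00Z", "ciphertext-1")
append_entry(log, "station-07", "2025-10-16T09:01:30Z", "ciphertext-2")
print(verify(log))            # True: intact chain verifies
log[0]["payload"] = "tampered"
print(verify(log))            # False: any modification is detected
```

A chained log like this pairs naturally with the batch synchronization the abstract describes: entries accumulated offline can be verified end-to-end once they reach the backend.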

27 pages, 4875 KB  
Article
A Comprehensive Radar-Based Berthing-Aid Dataset (R-BAD) and Onboard System for Safe Vessel Docking
by Fotios G. Papadopoulos, Antonios-Periklis Michalopoulos, Efstratios N. Paliodimos, Ioannis K. Christopoulos, Charalampos Z. Patrikakis, Alexandros Simopoulos and Stylianos A. Mytilinaios
Electronics 2025, 14(20), 4065; https://doi.org/10.3390/electronics14204065 - 16 Oct 2025
Abstract
Ship berthing operations are inherently challenging for maritime vessels, particularly within restricted port areas and under unfavorable weather conditions. Contrary to autonomous open-sea navigation, autonomous ship berthing remains a significant technological challenge for the maritime industry. Lidar and optical camera systems have been deployed as auxiliary tools to support informed berthing decisions; however, these sensing modalities are severely affected by weather and light conditions, respectively, while cameras in particular are inherently incapable of providing direct range measurements. In this paper, we introduce a comprehensive Radar-Based Berthing-Aid Dataset (R-BAD), intended to foster the development of safe berthing systems onboard ships. The proposed R-BAD dataset includes a large collection of Frequency-Modulated Continuous Wave (FMCW) radar data in point cloud format, alongside timestamped and synchronized video footage. There are more than 69 h of recorded ship operations, and the dataset is freely accessible. We also propose an onboard support system for radar-aided vessel docking, which enables obstacle detection, clustering, tracking, and classification during ferry berthing maneuvers. The proposed dataset covers all docking/undocking scenarios (arrivals, departures, port idle, and cruising operations) and was used to train various machine/deep learning models with substantial performance, demonstrating its validity for further development of autonomous navigation systems. The berthing-aid system was tested in real-world conditions onboard an operational Ro-Ro/passenger ship, where it showed weather-resilient, repeatable, and robust performance in detection, tracking, and classification tasks, confirming its technology readiness for integration into future autonomous berthing-aid systems. Full article

13 pages, 764 KB  
Article
Super Responders in Plaque Psoriasis: A Real-World, Multi-Agent Analysis Showing Bimekizumab Associated with the Highest Odds of PASI = 0 at Week 12
by Dominika Ziolkowska-Banasik, Kamila Zawadzinska-Halat, Paulina Basta and Maciej Pastuszczak
J. Clin. Med. 2025, 14(20), 7293; https://doi.org/10.3390/jcm14207293 (registering DOI) - 16 Oct 2025
Abstract
Introduction: Super responders (SRs)—patients achieving complete skin clearance (PASI = 0) soon after biologic initiation—represent a clinically relevant but underexplored phenotype. This study is one of the first real-world, multi-agent analyses comparing SR likelihood across biologic classes in plaque psoriasis. We assessed whether biologic choice predicts SR in routine clinical practice. Methods: We performed a retrospective, single-center study of 116 adults with moderate-to-severe plaque psoriasis initiating their first biologic (adalimumab, tildrakizumab, guselkumab, risankizumab, bimekizumab, or secukinumab). SR was defined as PASI = 0 at week 12. SR proportions (exact 95% CIs) were compared using Fisher’s exact tests and odds ratios (ORs). Multivariable logistic regression estimated adjusted associations between biologic and SR, controlling for age, sex, disease duration, BMI, baseline PASI, and prior cyclosporine/acitretin. Sensitivity analyses included Firth bias-reduced regression, the exclusion of sparse drug strata, and an alternative endpoint (PASI ≤ 1 at week 12). Results: Overall, 26/116 patients (22.4%) achieved SR. SR proportions differed by agent, highest with bimekizumab (11/17; 64.7%); Fisher’s p < 0.001 vs. others; OR = 12.83 (95% CI 4.17–39.50). In adjusted models, bimekizumab remained independently associated with SR (adjusted OR = 17.30; 95% CI 4.62–64.82; p = 2.35 × 10⁻⁵), while other covariates were not significant. Conclusions: In this real-world cohort, biologic selection—particularly bimekizumab—was the main determinant of early complete clearance. These findings highlight mechanistic class as a key driver of rapid, deep responses and support prospective validation with harmonized SR definitions and extended follow-up. Full article
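The OR-with-95%-CI summaries above come from 2x2 responder tables. A sketch of the unadjusted odds ratio with a Wald confidence interval is below; the counts are hypothetical, not the study's data, since the paper's intervals come from exact tests and adjusted logistic regression rather than this simple formula:

```python
import math

# Unadjusted odds ratio with a Wald 95% CI, the kind of summary reported
# above (the study itself also uses Fisher's exact test and multivariable
# logistic regression). The 2x2 counts here are hypothetical.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: responders/non-responders on the drug; c, d: same for comparator."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(12, 6, 14, 84)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

With small strata like 11/17, the Wald interval is wide and unstable, which is why the study also ran Firth bias-reduced regression as a sensitivity analysis.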

21 pages, 1706 KB  
Article
Spatiotemporal Feature Learning for Daily-Life Cough Detection Using FMCW Radar
by Saihu Lu, Yuhan Liu, Guangqiang He, Zhongrui Bai, Zhenfeng Li, Pang Wu, Xianxiang Chen, Lidong Du, Peng Wang and Zhen Fang
Bioengineering 2025, 12(10), 1112; https://doi.org/10.3390/bioengineering12101112 - 15 Oct 2025
Abstract
Cough is a key symptom reflecting respiratory health, with its frequency and pattern providing valuable insights into disease progression and clinical management. Objective and reliable cough detection systems are therefore of broad significance for healthcare and remote monitoring. However, existing algorithms often struggle to jointly model spatial and temporal information, limiting their robustness in real-world applications. To address this issue, we propose a cough recognition framework based on frequency-modulated continuous-wave (FMCW) radar, integrating a deep convolutional neural network (CNN) with a Self-Attention mechanism. The CNN extracts spatial features from range-Doppler maps, while Self-Attention captures temporal dependencies, and effective data augmentation strategies enhance generalization by simulating position variations and masking local dependencies. To rigorously evaluate practicality, we collected a large-scale radar dataset covering diverse positions, orientations, and activities. Experimental results demonstrate that, under subject-independent five-fold cross-validation, the proposed model achieved a mean F1-score of 0.974 ± 0.016 and an accuracy of 99.05 ± 0.55%, further supported by high precision of 98.77 ± 1.05%, recall of 96.07 ± 2.16%, and specificity of 99.73 ± 0.23%. These results confirm that our method is not only robust in realistic scenarios but also provides a practical pathway toward continuous, non-invasive, and privacy-preserving respiratory health monitoring in both clinical and telehealth applications. Full article
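The temporal half of this CNN-plus-attention pipeline can be illustrated with a simplified, projection-free scaled dot-product self-attention over per-frame embeddings. This is a sketch only: in the described model the (T, d) inputs would be CNN features of successive range-Doppler maps, and learned query/key/value projections (omitted here) would precede this step:

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of frame features.

    x: (T, d) array of per-frame embeddings, e.g. CNN features of
    T consecutive range-Doppler maps. Returns (T, d) features in which
    each frame is a similarity-weighted mixture of all frames.
    """
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                   # (T, T) pairwise similarities
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w = w / w.sum(axis=1, keepdims=True)            # softmax over time
    return w @ x                                    # temporally contextualized features
```

Each output frame thus aggregates evidence from the whole window, which is what lets the model capture the multi-frame temporal signature of a cough rather than a single snapshot.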

56 pages, 732 KB  
Review
The Erosion of Cybersecurity Zero-Trust Principles Through Generative AI: A Survey on the Challenges and Future Directions
by Dan Xu, Iqbal Gondal, Xun Yi, Teo Susnjak, Paul Watters and Timothy R. McIntosh
J. Cybersecur. Priv. 2025, 5(4), 87; https://doi.org/10.3390/jcp5040087 (registering DOI) - 15 Oct 2025
Abstract
Generative artificial intelligence (AI) and persistent empirical gaps are reshaping the cyber threat landscape faster than Zero-Trust Architecture (ZTA) research can respond. We reviewed 10 recent ZTA surveys and 136 primary studies (2022–2024) and found that 98% provided only partial or no real-world validation, leaving several core controls largely untested. Our critique, therefore, proceeds on two axes: first, mainstream ZTA research is empirically under-powered and operationally unproven; second, generative-AI attacks exploit these very weaknesses, accelerating policy bypass and detection failure. To expose this compounding risk, we contribute the Cyber Fraud Kill Chain (CFKC), a seven-stage attacker model (target identification, preparation, engagement, deception, execution, monetization, and cover-up) that maps specific generative techniques to the NIST SP 800-207 components they erode. The CFKC highlights how synthetic identities, context manipulation and adversarial telemetry drive up false-negative rates, extend dwell time, and sidestep audit trails, thereby undermining the Zero-Trust principles of verify explicitly and assume breach. Existing guidance offers no systematic countermeasures for AI-scaled attacks, and compliance regimes struggle to audit content that AI can mutate on demand. Finally, we outline research directions for adaptive, evidence-driven ZTA, and we argue that incremental extensions of current ZTA are insufficient; only a generative-AI-aware redesign will sustain defensive parity in the coming threat cycle. Full article
(This article belongs to the Section Security Engineering & Applications)

24 pages, 6334 KB  
Article
Modeling of Electric Vehicle Energy Demand: A Big Data Approach to Energy Planning
by Iván Sánchez-Loor and Manuel Ayala-Chauvin
Energies 2025, 18(20), 5429; https://doi.org/10.3390/en18205429 - 15 Oct 2025
Abstract
The rapid expansion of electric vehicles in high-altitude Andean cities, such as the Metropolitan District of Quito, Ecuador’s capital, presents unique challenges for electrical infrastructure planning, necessitating advanced methodologies that capture behavioral heterogeneity and mass synchronization effects in high-penetration scenarios. This study introduces a hybrid approach that combines agent-based modelling with Monte Carlo simulation and a TimescaleDB architecture to project charging demand with quarter-hour resolution through 2040. The model was calibrated with real-world data from 764 charging points collected over 30 months, comprising 2.1 million charging sessions. A dynamic coincidence factor (FC = 0.222 + 0.036e^(−0.0003n)) was incorporated, resulting in a 52% reduction in demand overestimation compared to traditional models. Projections for 2040 show a peak demand of 255 MW (95% CI: 240–270 MW) and an annual consumption of 800 GWh. These findings reveal that non-optimized time-of-use tariffs can generate a critical “cliff effect,” increasing peak demand by 32%, whereas smart charging management with randomization reduces it by 18 ± 2.5%. Model validation yields a MAPE of 4.2 ± 0.8% and an RMSE of 12.3 MW. The TimescaleDB architecture demonstrated processing speeds of 2398.7 records/second and achieved 91% data compression. This methodology offers robust tools for urban energy planning and demand-side management policy optimization in high-altitude contexts, with the source code available to ensure reproducibility. Full article
(This article belongs to the Section E: Electric Vehicles)
