Search Results (171)

Search Parameters:
Keywords = physical annotation

25 pages, 7385 KB  
Article
Reducing Annotation Effort in Semantic Segmentation Through Conformal Risk Controlled Active Learning
by Can Erhan and Nazim Kemal Ure
AI 2025, 6(10), 270; https://doi.org/10.3390/ai6100270 - 18 Oct 2025
Viewed by 184
Abstract
Modern semantic segmentation models require extensive pixel-level annotations, creating a significant barrier to practical deployment as labeling a single image can take hours of human effort. Active learning offers a promising way to reduce annotation costs through intelligent sample selection. However, existing methods rely on poorly calibrated confidence estimates, making uncertainty quantification unreliable. We introduce Conformal Risk Controlled Active Learning (CRC-AL), a novel framework that provides statistical guarantees on uncertainty quantification for semantic segmentation, in contrast to heuristic approaches. CRC-AL calibrates class-specific thresholds via conformal risk control, transforming softmax outputs into multi-class prediction sets with formal guarantees. From these sets, our approach derives complementary uncertainty representations: risk maps highlighting uncertain regions and class co-occurrence embeddings capturing semantic confusions. A physics-inspired selection algorithm leverages these representations with a barycenter-based distance metric that balances uncertainty and diversity. Experiments on Cityscapes and PascalVOC2012 show CRC-AL consistently outperforms baseline methods, achieving 95% of fully supervised performance with only 30% of labeled data, making semantic segmentation more practical under limited annotation budgets. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
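The calibration step described in this abstract can be illustrated with a minimal split-conformal sketch in Python; the per-class quantile rule, the nonconformity score, and all names below are illustrative assumptions, not the CRC-AL procedure itself.

```python
import numpy as np

def calibrate_class_thresholds(cal_probs, cal_labels, n_classes, alpha=0.1):
    """Split-conformal calibration: one nonconformity threshold per class.

    cal_probs : (N, C) softmax outputs on held-out calibration pixels
    cal_labels: (N,)   ground-truth class indices
    """
    thresholds = np.ones(n_classes)  # fallback: include everything
    for c in range(n_classes):
        scores = 1.0 - cal_probs[cal_labels == c, c]      # nonconformity of the true class
        if len(scores) == 0:
            continue
        n = len(scores)
        q = np.ceil((n + 1) * (1 - alpha)) / n             # finite-sample corrected quantile
        thresholds[c] = np.quantile(scores, min(q, 1.0))
    return thresholds

def prediction_sets(probs, thresholds):
    """Multi-class prediction sets: class c is kept wherever 1 - p_c <= tau_c."""
    return (1.0 - probs) <= thresholds[None, :]            # boolean (N, C) membership mask

# Toy usage: pixels whose prediction set contains more than one class are "uncertain".
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(19), size=1000)              # e.g., 19 Cityscapes classes
labels = probs.argmax(axis=1)                              # placeholder calibration labels
tau = calibrate_class_thresholds(probs, labels, n_classes=19, alpha=0.1)
sets = prediction_sets(probs, tau)
risk_map_score = sets.sum(axis=1)                          # larger set -> higher uncertainty
```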

35 pages, 777 KB  
Review
Predictive Autonomy for UAV Remote Sensing: A Survey of Video Prediction
by Zhan Chen, Enze Zhu, Zile Guo, Peirong Zhang, Xiaoxuan Liu, Lei Wang and Yidan Zhang
Remote Sens. 2025, 17(20), 3423; https://doi.org/10.3390/rs17203423 - 13 Oct 2025
Viewed by 370
Abstract
The analysis of dynamic remote sensing scenes from unmanned aerial vehicles (UAVs) is shifting from reactive processing to proactive, predictive intelligence. Central to this evolution is video prediction—forecasting future imagery from past observations—which enables critical remote sensing applications like persistent environmental monitoring, occlusion-robust object tracking, and infrastructure anomaly detection under challenging aerial conditions. Yet, a systematic review of video prediction models tailored for the unique constraints of aerial remote sensing has been lacking. Existing taxonomies often obscure key design choices, especially for emerging operators like state-space models (SSMs). We address this gap by proposing a unified, multi-dimensional taxonomy with three orthogonal axes: (i) operator architecture; (ii) generative nature; and (iii) training/inference regime. Through this lens, we analyze recent methods, clarifying their trade-offs for deployment on UAV platforms that demand processing of high-resolution, long-horizon video streams under tight resource constraints. Our review assesses the utility of these models for key applications like proactive infrastructure inspection and wildlife tracking. We then identify open problems—from the scarcity of annotated aerial video data to evaluation beyond pixel-level metrics—and chart future directions. We highlight a convergence toward scalable dynamic world models for geospatial intelligence, which leverage physics-informed learning, multimodal fusion, and action-conditioning, powered by efficient operators like SSMs. Full article

15 pages, 10305 KB  
Article
Convolutional Neural Network for Automatic Detection of Segments Contaminated by Interference in ECG Signal
by Veronika Kalousková, Pavel Smrčka, Radim Kliment, Tomáš Veselý, Martin Vítězník, Adam Zach and Petr Šrotýř
AI 2025, 6(10), 250; https://doi.org/10.3390/ai6100250 - 1 Oct 2025
Viewed by 378
Abstract
Various types of interfering signals are an integral part of ECGs recorded using wearable electronics, specifically during field monitoring, outside the controlled environment of a medical doctor’s office or laboratory. The frequency spectrum of several types of interfering signals overlaps significantly with the ECG signal, making effective filtration impossible without losing clinically relevant information. In this article, we proceed from the practical assumption that it is unnecessary to analyze the entire ECG recording in real long-term recordings. Instead, in the preprocessing phase, it is necessary to detect unreadable segments of the ECG signal. This paper proposes a novel method for automatically detecting unreadable segments distorted by superimposed interference in ECG recordings. The method is based on a convolutional neural network (CNN) and is comparable in quality to annotation performed by a medical expert, but incomparably faster. In a series of controlled experiments, the ECG signal was recorded during physical activities of varying intensities, and individual segments of the recordings were manually annotated based on visual assessment by a medical expert, i.e., divided into four different classes based on the intensity of distortion to the useful ECG signal. A deep convolutional model was designed and evaluated, exhibiting an 87.62% accuracy score and the same F1-score in automatic recognition of segments distorted by superimposed interference. Furthermore, the model exhibits an accuracy and F1-score of 98.70% in correctly identifying segments with visually detectable and non-detectable heart rate. The proposed interference detection procedure appears to be sufficiently effective despite its simplicity. It facilitates subsequent automatic analysis of undisturbed ECG waveform segments, which is crucial in ECG monitoring using wearable electronics. Full article
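As a rough illustration of the segment classifier described in this abstract, here is a minimal PyTorch sketch of a 1D CNN that maps fixed-length ECG segments to the four distortion classes; the layer sizes, segment length, and the 250 Hz sampling rate are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ECGInterferenceCNN(nn.Module):
    """Toy 1D CNN: fixed-length ECG segment -> one of 4 distortion classes."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                  # length-independent pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                             # x: (batch, 1, n_samples)
        return self.classifier(self.features(x).squeeze(-1))

# Example: 2 s segments at an assumed 250 Hz sampling rate.
model = ECGInterferenceCNN()
segments = torch.randn(8, 1, 500)
logits = model(segments)                              # (8, 4) class scores
```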

29 pages, 3308 KB  
Article
A Comparative Study of BERT-Based Models for Teacher Classification in Physical Education
by Laura Martín-Hoz, Samuel Yanes-Luis, Jerónimo Huerta Cejudo, Daniel Gutiérrez-Reina and Evelia Franco Álvarez
Electronics 2025, 14(19), 3849; https://doi.org/10.3390/electronics14193849 - 28 Sep 2025
Viewed by 261
Abstract
Assessing teaching behavior is essential for improving instructional quality, particularly in Physical Education, where classroom interactions are fast-paced and complex. Traditional evaluation methods such as questionnaires, expert observations, and manual discourse analysis are often limited by subjectivity, high labor costs, and poor scalability. These challenges underscore the need for automated, objective tools to support pedagogical assessment. This study explores and compares the use of Transformer-based language models for the automatic classification of teaching behaviors from real classroom transcriptions. A dataset of over 1300 utterances was compiled and annotated according to the teaching styles proposed in the circumplex approach (Autonomy Support, Structure, Control, and Chaos), along with an additional category for messages in which no style could be identified (Unidentified Style). To address class imbalance and enhance linguistic variability, data augmentation techniques were applied. Eight pretrained BERT-based Transformer architectures were evaluated, including several pretraining strategies and architectural structures. BETO achieved the highest performance, with an accuracy of 0.78, a macro-averaged F1-score of 0.72, and a weighted F1-score of 0.77. It showed strength in identifying challenging utterances labeled as Chaos and Autonomy Support. Furthermore, other BERT-based models trained purely on a Spanish text corpus, such as DistilBERT, also achieved competitive performance, with accuracy above 0.73 and an F1-score of 0.68. These results demonstrate the potential of Transformer-based models for objective and scalable teacher behavior classification. The findings support the feasibility of using pretrained language models to develop scalable, AI-driven systems for classroom behavior classification and pedagogical feedback. Full article
(This article belongs to the Section Artificial Intelligence)
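A hedged sketch of how such a fine-tuning setup typically looks with Hugging Face transformers; the BETO checkpoint id, the label set, and the hyperparameters below are assumptions to be verified against the paper, not its reported configuration.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
import torch

LABELS = ["autonomy_support", "structure", "control", "chaos", "unidentified"]
CHECKPOINT = "dccuchile/bert-base-spanish-wwm-cased"   # assumed BETO checkpoint id

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=len(LABELS))

class UtteranceDataset(torch.utils.data.Dataset):
    """Wraps (text, label) pairs as tokenized tensors for the Trainer."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=64)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# Two toy utterances standing in for the annotated transcription dataset.
train_ds = UtteranceDataset(["Muy bien, sigue así", "Silencio todos ahora"], [0, 2])
args = TrainingArguments(output_dir="teacher-style-clf", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```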

25 pages, 562 KB  
Article
VeriFlow: A Framework for the Static Verification of Web Application Access Control via Policy-Graph Consistency
by Tao Zhang, Fuzhong Hao, Yunfan Wang, Bo Zhang and Guangwei Xie
Electronics 2025, 14(18), 3742; https://doi.org/10.3390/electronics14183742 - 22 Sep 2025
Viewed by 522
Abstract
The evolution of industrial automation toward Industry 3.0 and 4.0 has driven the emergence of Industrial Edge-Cloud Platforms, which increasingly depend on web interfaces for managing and monitoring critical operational technology. This convergence introduces significant security risks, particularly from Broken Access Control (BAC)—a vulnerability consistently ranked as the top web application risk by the Open Web Application Security Project (OWASP). BAC flaws in industrial contexts can lead not only to data breaches but also to disruptions of physical processes. To address this urgent need for robust web-layer defense, this paper presents VeriFlow, a static verification framework for access control in web applications. VeriFlow reformulates access control verification as a consistency problem between two core artifacts: (1) a Formal Access Control Policy (P), which declaratively defines intended permissions, and (2) a Navigational Graph, which models all user-driven UI state transitions. By annotating the graph with policy P, VeriFlow verifies a novel Path-Permission Safety property, ensuring that no sequence of legitimate UI interactions can lead a user from an authorized state to an unauthorized one. A key technical contribution is a static analysis method capable of extracting navigational graphs directly from the JavaScript bundles of Single-Page Applications (SPAs), circumventing the limitations of traditional dynamic crawlers. In empirical evaluations, VeriFlow outperformed baseline tools in vulnerability detection, demonstrating its potential to deliver strong security guarantees that are provable within its abstracted navigational model. By formally checking policy-graph consistency, it systematically addresses a class of vulnerabilities often missed by dynamic tools, though its effectiveness is subject to the model-reality gap inherent in static analysis. Full article
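The Path-Permission Safety property lends itself to a small sketch: starting from a role's entry state, traverse the navigational graph and flag any reachable transition into a state the policy does not grant. The graph and policy encodings below are hypothetical simplifications, not VeriFlow's actual artifacts.

```python
from collections import deque

# Hypothetical encoding: navigational graph as adjacency lists of UI states,
# and policy P as the set of states each role is permitted to reach.
NAV_GRAPH = {
    "login": ["dashboard"],
    "dashboard": ["reports", "admin_panel"],
    "reports": ["export"],
    "admin_panel": ["user_mgmt"],
    "export": [], "user_mgmt": [],
}
POLICY = {"operator": {"login", "dashboard", "reports", "export"},
          "admin": set(NAV_GRAPH)}

def path_permission_violations(graph, allowed, entry="login"):
    """Return edges (src -> dst) where legitimate navigation from an authorized
    state reaches a state the policy does not grant to this role."""
    violations, seen, queue = [], {entry}, deque([entry])
    while queue:
        state = queue.popleft()
        for nxt in graph.get(state, []):
            if nxt not in allowed:
                violations.append((state, nxt))      # reachable but unauthorized
            elif nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return violations

print(path_permission_violations(NAV_GRAPH, POLICY["operator"]))
# [('dashboard', 'admin_panel')] -> a UI path an operator can follow into an admin-only state
```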

15 pages, 473 KB  
Article
Every Step Counts—How Can We Accurately Count Steps with Wearable Sensors During Activities of Daily Living in Individuals with Neurological Conditions?
by Florence Crozat, Johannes Pohl, Chris Easthope Awai, Christoph Michael Bauer and Roman Peter Kuster
Sensors 2025, 25(18), 5657; https://doi.org/10.3390/s25185657 - 11 Sep 2025
Viewed by 772
Abstract
Wearable sensors provide objective, continuous, and non-invasive quantification of physical activity, with step count serving as one of the most intuitive measures. However, significant gait alterations in individuals with neurological conditions limit the accuracy of step-counting algorithms trained on able-bodied individuals. Therefore, this study investigates the accuracy of step counting during activities of daily living (ADL) in a neurological population. Seven individuals with neurological conditions wore seven accelerometers while performing ADL for 30 min. Step events manually annotated from video served as ground truth. An optimal sensing and analysis configuration for machine learning algorithm development (sensor location, filter range, window length, and regressor type) was identified and compared to existing algorithms developed for able-bodied individuals. The most accurate configuration includes a waist-worn sensor, a 0.5–3 Hz bandpass filter, a 5 s window, and gradient boosting regression. The corresponding algorithm showed a significantly lower error rate compared to existing algorithms trained on able-bodied data. Notably, all algorithms undercounted steps. This study identified an optimal sensing and analysis configuration for machine learning-based step counting in a neurological population and highlights the limitations of applying able-bodied-trained algorithms. Future research should focus on developing accurate and robust step-counting algorithms tailored to individuals with neurological conditions. Full article
(This article belongs to the Section Wearables)
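A compact sketch of the reported configuration (waist-worn signal, 0.5–3 Hz band-pass filter, 5 s windows, gradient boosting regression) using SciPy and scikit-learn; the sampling rate, window features, and toy data below are assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.ensemble import GradientBoostingRegressor

FS = 50                                   # assumed accelerometer sampling rate (Hz)
WIN = 5 * FS                              # 5 s windows, as reported

def bandpass(signal, low=0.5, high=3.0, fs=FS, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def window_features(acc_magnitude):
    """Filter the acceleration magnitude, split it into 5 s windows, compute simple features."""
    filtered = bandpass(acc_magnitude)
    n_win = len(filtered) // WIN
    wins = filtered[: n_win * WIN].reshape(n_win, WIN)
    zero_crossings = np.abs(np.diff(np.signbit(wins).astype(np.int8), axis=1)).sum(axis=1)
    return np.column_stack([wins.std(axis=1), zero_crossings, np.abs(wins).mean(axis=1)])

# Train a regressor to predict video-annotated steps per window (toy data here).
rng = np.random.default_rng(1)
acc = rng.standard_normal(FS * 60 * 5)    # 5 min of synthetic waist-worn data
X = window_features(acc)
y = rng.integers(0, 12, size=len(X))      # placeholder ground-truth step counts
model = GradientBoostingRegressor().fit(X, y)
predicted_steps = model.predict(X).clip(min=0).round().sum()
```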

18 pages, 2308 KB  
Article
Sit-and-Reach Pose Detection Based on Self-Train Method and Ghost-ST-GCN
by Shuheng Jiang, Haihua Cui and Liyuan Jin
Sensors 2025, 25(18), 5624; https://doi.org/10.3390/s25185624 - 9 Sep 2025
Viewed by 646
Abstract
The sit-and-reach test is a common stretching exercise suitable for adolescents, aimed at improving joint flexibility and somatic neural control, and has become a mandatory item in China’s student physical fitness assessments. However, many students tend to perform incorrect postures during their practice, which may lead to sports injuries such as muscle strains if sustained over time. To address this issue, this paper proposes a Ghost-ST-GCN model for judging the correctness of the sit-and-reach pose. The model first requires detecting seven body keypoints. Leveraging a publicly available labeled keypoint dataset and unlabeled sit-and-reach videos, these keypoints are acquired through the proposed self-train method using the BlazePose network. Subsequently, the keypoints are fed into the Ghost-ST-GCN model, which consists of nine stacked GCN-TCN blocks. Critically, each GCN-TCN layer is embedded with a ghost layer to enhance efficiency. Finally, a classification layer determines the movement’s correctness. Experimental results demonstrate that the self-train method significantly improves the annotation accuracy of the seven keypoints; the integration of ghost layers streamlines the overall detection model; and the system achieves an action detection accuracy of 85.20% for the sit-and-reach exercise, with a response latency of less than 1 s. This approach is highly suitable for guiding adolescents to standardize their movements during independent sit-and-reach practice. Full article
(This article belongs to the Special Issue AI-Based Automated Recognition and Detection in Healthcare)
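The "ghost layer" idea mentioned above can be sketched with a GhostNet-style block in PyTorch: a primary 1x1 convolution produces half of the output channels, and a cheap depthwise convolution generates the rest before concatenation. This illustrates the efficiency mechanism only and is not the authors' Ghost-ST-GCN; shapes and kernel sizes are assumptions.

```python
import torch
import torch.nn as nn

class GhostLayer(nn.Module):
    """GhostNet-style block: half the output channels come from a normal 1x1 conv,
    the other half from a cheap depthwise conv applied to those primary features."""
    def __init__(self, in_ch, out_ch, ratio=2, cheap_kernel=3):
        super().__init__()
        primary_ch = out_ch // ratio
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, out_ch - primary_ch, kernel_size=cheap_kernel,
                      padding=cheap_kernel // 2, groups=primary_ch, bias=False),
            nn.BatchNorm2d(out_ch - primary_ch), nn.ReLU(inplace=True))

    def forward(self, x):                      # x: (N, C, T, V) = batch, channels, frames, joints
        p = self.primary(x)
        return torch.cat([p, self.cheap(p)], dim=1)

# 7 keypoints (V=7), 64-frame clips, 3-channel coordinates in, 64 feature channels out.
feat = GhostLayer(3, 64)(torch.randn(2, 3, 64, 7))   # -> (2, 64, 64, 7)
```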

19 pages, 2267 KB  
Article
Comparative Analysis of Base-Width-Based Annotation Box Ratios for Vine Trunk and Support Post Detection Performance in Agricultural Autonomous Navigation Environments
by Hong-Kun Lyu, Sanghun Yun and Seung Park
Agronomy 2025, 15(9), 2107; https://doi.org/10.3390/agronomy15092107 - 31 Aug 2025
Viewed by 615
Abstract
AI-driven agricultural automation increasingly demands efficient data generation methods for training deep learning models in autonomous robotic systems. Traditional bounding box annotation methods for agricultural objects present significant challenges including subjective boundary determination, inconsistent labeling across annotators, and physical strain from extensive mouse movements required for elongated objects. This study proposes a novel base-width standardized annotation method that utilizes the base width of a vine trunk and a support post as a reference parameter for automated bounding box generation. The method requires annotators to specify only the left and right endpoints of object bases, from which the system automatically generates standardized bounding boxes with predefined aspect ratios. Performance assessment utilized Precision, Recall, F1-score, and Average Precision metrics across vine trunks and support posts. The study reveals that vertically elongated rectangular bounding boxes outperform square configurations for agricultural object detection. The proposed method is expected to reduce time consumption from subjective boundary determination and minimize physical strain during bounding box annotation for AI-based autonomous navigation models in agricultural environments. This will ultimately enhance dataset consistency and improve the efficiency of artificial intelligence learning. Full article
(This article belongs to the Collection AI, Sensors and Robotics for Smart Agriculture)
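The base-width rule is simple enough to sketch directly: from the two clicked base endpoints, the box width equals the base width and the height is a predefined multiple of it, growing upward from the base line. The aspect ratio, image size, and clipping behavior below are assumptions for illustration.

```python
def base_width_bbox(left_pt, right_pt, aspect_ratio=4.0, img_h=1080, img_w=1920):
    """Build a standardized box from the two clicked base endpoints.

    left_pt, right_pt : (x, y) pixel coordinates of the object base
    aspect_ratio      : assumed height/width ratio for the elongated object
    Returns (x_min, y_min, x_max, y_max), clipped to the image.
    """
    (x1, y1), (x2, y2) = left_pt, right_pt
    x_min, x_max = min(x1, x2), max(x1, x2)
    base_y = max(y1, y2)                      # lower of the two clicks = ground line
    width = x_max - x_min
    height = width * aspect_ratio             # box grows upward from the base
    y_min = base_y - height
    clip = lambda v, hi: max(0, min(int(round(v)), hi - 1))
    return (clip(x_min, img_w), clip(y_min, img_h), clip(x_max, img_w), clip(base_y, img_h))

# Two clicks on a trunk base ~120 px wide produce a 120 x 480 px upright box.
print(base_width_bbox((850, 900), (970, 905)))   # -> (850, 425, 970, 905)
```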

22 pages, 23041 KB  
Article
ViTrans: Inter-Frame Alignment Enhancement for Moving Vehicle Detection in Satellite Videos with Stabilization Offsets
by Tao He, Kaimin Sun, Yu Duan, Wei Cui, Ziang Wang, Song Gao, Yuan Yao and Zijie Chen
Remote Sens. 2025, 17(17), 2973; https://doi.org/10.3390/rs17172973 - 27 Aug 2025
Viewed by 601
Abstract
Satellite videos typically employ image registration techniques for video stabilization in order to achieve persistent observation. However, existing methods largely neglect the residual stabilization offsets, particularly when exceeding the physical dimensions of target vehicles, which inevitably causes performance degradation. Furthermore, the detection pipeline struggles with hard-to-discriminate samples that exhibit low contrast, motion blur, or occlusion, while conventional sample assignment strategies fail to address the inherent annotation ambiguity for extremely small objects. We propose an end-to-end method called ViTrans for detecting moving vehicles in satellite video under stabilization offsets. ViTrans consists of three core modules: (1) a feature-aligned stabilization offset correction module (SCM) that mitigates feature misalignment by aligning features between the reference frame and the current frame; (2) a feature adaptive aggregation enhancement module (AAEM) based on vehicle trajectory consistency, which leverages the motion characteristics of objects across consecutive frames to eliminate dynamic clutter and false-alarm artifacts; and (3) a Gaussian distribution-based metric that dynamically adapts to bounding box dimensions, thereby providing more accurate positive sample feedback during model training. Extensive experiments on the VISO and SDM-Car datasets under simulated stabilization offsets demonstrate that ViTrans achieves state-of-the-art performance, improving F1-score by 14.4% on VISO and 6.9% on SDM-Car over existing methods. Full article
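The abstract does not spell out the Gaussian distribution-based metric, so the following is a hedged sketch of one common choice, a normalized Wasserstein distance between 2D Gaussians fitted to the boxes, which stays smooth for the tiny boxes where IoU-based assignment breaks down; the normalization constant is an assumption.

```python
import math

def normalized_wasserstein(box_a, box_b, c=8.0):
    """NWD-style similarity between two boxes given as (cx, cy, w, h).

    Each box is treated as a 2D Gaussian N([cx, cy], diag((w/2)^2, (h/2)^2));
    the squared 2-Wasserstein distance between such Gaussians has a closed form.
    `c` is a dataset-dependent normalization constant (an assumption here).
    """
    (cxa, cya, wa, ha), (cxb, cyb, wb, hb) = box_a, box_b
    w2_sq = ((cxa - cxb) ** 2 + (cya - cyb) ** 2 +
             ((wa - wb) / 2) ** 2 + ((ha - hb) / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / c)

# A 3 px offset barely changes the similarity for a 6 x 6 px vehicle box,
# so positive-sample assignment stays stable under small stabilization offsets.
print(normalized_wasserstein((100, 100, 6, 6), (103, 100, 6, 6)))  # ~0.69
```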

26 pages, 2328 KB  
Article
Physiological State Recognition via HRV and Fractal Analysis Using AI and Unsupervised Clustering
by Galya Georgieva-Tsaneva, Krasimir Cheshmedzhiev, Yoan-Aleksandar Tsanev and Miroslav Dechev
Information 2025, 16(9), 718; https://doi.org/10.3390/info16090718 - 22 Aug 2025
Viewed by 781
Abstract
Early detection of physiological dysregulation is critical for timely intervention and effective health management. Traditional monitoring systems often rely on labeled data and predefined thresholds, limiting their adaptability and generalization to unseen conditions. To address this, we propose a framework for label-free classification of physiological states using Heart Rate Variability (HRV), combined with unsupervised machine learning techniques. This approach is particularly valuable when annotated datasets are scarce or unavailable, as is often the case in real-world wearable and IoT-based health monitoring. In this study, data were collected from participants under controlled conditions representing rest, stress, and physical exertion. Core HRV parameters such as the SDNN (Standard Deviation of all Normal-to-Normal intervals), RMSSD (Root Mean Square of the Successive Differences), and DFA (Detrended Fluctuation Analysis) were extracted. Principal Component Analysis was applied for dimensionality reduction. K-Means, hierarchical clustering, and Density-based spatial clustering of applications with noise (DBSCAN) were used to uncover natural groupings within the data. DBSCAN identified outliers associated with atypical responses, suggesting potential for early anomaly detection. The combination of HRV descriptors enabled unsupervised classification with over 90% consistency between clusters and physiological conditions. The proposed approach successfully differentiated the three physiological conditions based on HRV and fractal features, with a clear separation between clusters in terms of DFA α1, α2, LF/HF, and RMSSD, and with high agreement to the physiological labels (Purity ≈ 0.93; ARI = 0.89; NMI = 0.92). Furthermore, DBSCAN identified three outliers with atypical autonomic profiles, highlighting the potential of the method for early warning detection in real-time monitoring systems. Full article
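A compact scikit-learn sketch of the described pipeline: standardize the HRV/fractal descriptors, reduce them with PCA, then cluster with K-Means and DBSCAN. The synthetic feature values and hyperparameters are placeholders, not the study's data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, DBSCAN

# Rows = recordings, columns = HRV/fractal descriptors: SDNN, RMSSD, LF/HF, DFA a1, DFA a2.
rng = np.random.default_rng(42)
features = np.vstack([rng.normal(loc, 0.3, size=(30, 5))        # toy rest / stress / exertion groups
                      for loc in ([1.0, 1.2, 0.8, 1.1, 1.0],
                                  [0.5, 0.4, 2.0, 1.4, 0.9],
                                  [0.3, 0.3, 2.5, 0.7, 0.6])])

X = StandardScaler().fit_transform(features)
X_pca = PCA(n_components=2).fit_transform(X)

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_pca)
dbscan_labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X_pca)   # -1 marks outliers

print(np.bincount(kmeans_labels))                 # cluster sizes
print(int((dbscan_labels == -1).sum()), "outliers flagged by DBSCAN")
```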

20 pages, 5369 KB  
Article
Smart Postharvest Management of Strawberries: YOLOv8-Driven Detection of Defects, Diseases, and Maturity
by Luana dos Santos Cordeiro, Irenilza de Alencar Nääs and Marcelo Tsuguio Okano
AgriEngineering 2025, 7(8), 246; https://doi.org/10.3390/agriengineering7080246 - 1 Aug 2025
Cited by 1 | Viewed by 1066
Abstract
Strawberries are highly perishable fruits prone to postharvest losses due to defects, diseases, and uneven ripening. This study proposes a deep learning-based approach for automated quality assessment using the YOLOv8n object detection model. A custom dataset of 5663 annotated strawberry images was compiled, covering eight quality categories, including anthracnose, gray mold, powdery mildew, uneven ripening, and physical defects. Data augmentation techniques, such as rotation and Gaussian blur, were applied to enhance model generalization and robustness. The model was trained over 100 and 200 epochs, and its performance was evaluated using standard metrics: Precision, Recall, and mean Average Precision (mAP). The 200-epoch model achieved the best results, with a mAP50 of 0.79 and an inference time of 1 ms per image, demonstrating suitability for real-time applications. Classes with distinct visual features, such as anthracnose and gray mold, were accurately classified. In contrast, visually similar categories, such as ‘Good Quality’ and ‘Unripe’ strawberries, presented classification challenges. Full article
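A minimal Ultralytics sketch matching the described setup (YOLOv8n, 200 epochs, rotation augmentation); the dataset YAML path, image size, and file names are assumptions, not the authors' configuration.

```python
from ultralytics import YOLO

# Assumed dataset config listing the eight strawberry quality classes
# (anthracnose, gray mold, powdery mildew, uneven ripening, physical defects, ...).
DATA_YAML = "strawberry_quality.yaml"      # hypothetical path

model = YOLO("yolov8n.pt")                 # pretrained nano detector
model.train(data=DATA_YAML, epochs=200, imgsz=640,
            degrees=15.0,                  # rotation augmentation
            seed=0)

metrics = model.val()                      # reports precision, recall, mAP50, mAP50-95
results = model.predict("tray_image.jpg", conf=0.25)   # hypothetical test image
for box in results[0].boxes:
    print(int(box.cls), float(box.conf))   # predicted quality class and confidence
```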

13 pages, 769 KB  
Article
A Novel You Only Listen Once (YOLO) Deep Learning Model for Automatic Prominent Bowel Sounds Detection: Feasibility Study in Healthy Subjects
by Rohan Kalahasty, Gayathri Yerrapragada, Jieun Lee, Keerthy Gopalakrishnan, Avneet Kaur, Pratyusha Muddaloor, Divyanshi Sood, Charmy Parikh, Jay Gohri, Gianeshwaree Alias Rachna Panjwani, Naghmeh Asadimanesh, Rabiah Aslam Ansari, Swetha Rapolu, Poonguzhali Elangovan, Shiva Sankari Karuppiah, Vijaya M. Dasari, Scott A. Helgeson, Venkata S. Akshintala and Shivaram P. Arunachalam
Sensors 2025, 25(15), 4735; https://doi.org/10.3390/s25154735 - 31 Jul 2025
Cited by 2 | Viewed by 2848
Abstract
Accurate diagnosis of gastrointestinal (GI) diseases typically requires invasive procedures or imaging studies that pose the risk of various post-procedural complications or involve radiation exposure. Bowel sounds (BSs), though typically described during a GI-focused physical exam, are highly inaccurate and variable, with low clinical value in diagnosis. Interpretation of the acoustic characteristics of BSs, i.e., using a phonoenterogram (PEG), may aid in diagnosing various GI conditions non-invasively. Use of artificial intelligence (AI) and improvements in computational analysis can enhance the use of PEGs in different GI diseases and lead to a non-invasive, cost-effective diagnostic modality that has not been explored before. The purpose of this work was to develop an automated AI model, You Only Listen Once (YOLO), to detect prominent bowel sounds that can enable real-time analysis for future GI disease detection and diagnosis. A total of 110 2-minute PEGs sampled at 44.1 kHz were recorded using the Eko DUO® stethoscope from eight healthy volunteers at two locations, namely, left upper quadrant (LUQ) and right lower quadrant (RLQ) after IRB approval. The datasets were annotated by trained physicians, categorizing BSs as prominent or obscure using version 1.7 of Label Studio Software®. Each BS recording was split up into 375 ms segments with 200 ms overlap for real-time BS detection. Each segment was binned based on whether it contained a prominent BS, resulting in a dataset of 36,149 non-prominent segments and 6435 prominent segments. Our dataset was divided into training, validation, and test sets (60/20/20% split). A 1D-CNN augmented transformer was trained to classify these segments via the input of Mel-frequency cepstral coefficients. The developed AI model achieved area under the receiver operating curve (ROC) of 0.92, accuracy of 86.6%, precision of 86.85%, and recall of 86.08%. This shows that the 1D-CNN augmented transformer with Mel-frequency cepstral coefficients achieved creditable performance metrics, signifying the YOLO model’s capability to classify prominent bowel sounds that can be further analyzed for various GI diseases. This proof-of-concept study in healthy volunteers demonstrates that automated BS detection can pave the way for developing more intuitive and efficient AI-PEG devices that can be trained and utilized to diagnose various GI conditions. To ensure the robustness and generalizability of these findings, further investigations encompassing a broader cohort, inclusive of both healthy and disease states are needed. Full article
(This article belongs to the Special Issue Biomedical Signals, Images and Healthcare Data Analysis: 2nd Edition)
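A hedged sketch of the front end described above: slicing a 44.1 kHz phonoenterogram into 375 ms segments with 200 ms overlap and summarizing each with MFCCs (librosa is assumed); the 1D-CNN augmented transformer classifier itself is omitted.

```python
import numpy as np
import librosa

SR = 44_100
SEG = int(0.375 * SR)            # 375 ms analysis segment
HOP = SEG - int(0.200 * SR)      # 200 ms overlap between consecutive segments

def segment_mfcc(pcm, n_mfcc=13):
    """Slice a phonoenterogram into overlapping segments and return MFCCs per segment."""
    starts = range(0, max(len(pcm) - SEG, 0) + 1, HOP)
    feats = []
    for s in starts:
        seg = pcm[s: s + SEG]
        mfcc = librosa.feature.mfcc(y=seg, sr=SR, n_mfcc=n_mfcc)   # (n_mfcc, frames)
        feats.append(mfcc.mean(axis=1))                            # simple per-segment summary
    return np.stack(feats)

# Toy 2-minute recording; each row would be fed to the prominent/obscure BS classifier.
pcm = np.random.default_rng(0).standard_normal(SR * 120).astype(np.float32)
features = segment_mfcc(pcm)
print(features.shape)            # (number of 375 ms segments, 13)
```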

40 pages, 18923 KB  
Article
Twin-AI: Intelligent Barrier Eddy Current Separator with Digital Twin and AI Integration
by Shohreh Kia, Johannes B. Mayer, Erik Westphal and Benjamin Leiding
Sensors 2025, 25(15), 4731; https://doi.org/10.3390/s25154731 - 31 Jul 2025
Viewed by 666
Abstract
This paper presents a comprehensive intelligent system designed to optimize the performance of a barrier eddy current separator (BECS), comprising a conveyor belt, a vibration feeder, and a magnetic drum. This system was trained and validated on real-world industrial data gathered directly from the working separator under 81 different operational scenarios. The intelligent models were used to recommend optimal settings for drum speed, belt speed, vibration intensity, and drum angle, thereby maximizing separation quality and minimizing energy consumption. The smart separation module utilizes YOLOv11n-seg and achieves a mean average precision (mAP) of 0.838 across 7163 industrial instances from aluminum, copper, and plastic materials. For shape classification (sharp vs. smooth), the model reached 91.8% accuracy across 1105 annotated samples. Furthermore, the thermal monitoring unit can detect iron contamination by analyzing temperature anomalies. Scenarios with iron showed a maximum temperature increase of over 20 °C compared to clean materials, with a detection response time of under 2.5 s. The architecture integrates a Digital Twin using Azure Digital Twins to virtually mirror the system, enabling real-time tracking, behavior simulation, and remote updates. A full connection with the PLC has been implemented, allowing the AI-driven system to adjust physical parameters autonomously. This combination of AI, IoT, and digital twin technologies delivers a reliable and scalable solution for enhanced separation quality, improved operational safety, and predictive maintenance in industrial recycling environments. Full article
(This article belongs to the Special Issue Sensors and IoT Technologies for the Smart Industry)
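The thermal-monitoring rule reported above reduces to a simple check that can be sketched directly: flag iron contamination when the drum-area temperature rises more than 20 °C above a clean-material baseline. The sampling rate and toy temperature trace below are assumptions inferred from the reported figures.

```python
import numpy as np

def detect_iron_events(temps_c, baseline_c, threshold_c=20.0, fs_hz=2.0):
    """Return the times (s) at which temperature exceeds baseline + threshold.

    temps_c    : 1D array of drum-area temperatures sampled at fs_hz
    baseline_c : clean-material reference temperature
    """
    above = temps_c > baseline_c + threshold_c
    return np.flatnonzero(above) / fs_hz

# Toy trace: clean material at ~35 C, iron passes the drum around t = 30 s.
t = np.arange(0, 60, 0.5)
temps = 35 + np.where((t > 30) & (t < 34), 25.0, 0.0) + np.random.default_rng(1).normal(0, 0.3, t.size)
events = detect_iron_events(temps, baseline_c=35.0)
if events.size:
    print(f"iron suspected at t = {events[0]:.1f} s")   # responds within a couple of samples
```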

19 pages, 1260 KB  
Review
Structural Variants: Mechanisms, Mapping, and Interpretation in Human Genetics
by Shruti Pande, Moez Dawood and Christopher M. Grochowski
Genes 2025, 16(8), 905; https://doi.org/10.3390/genes16080905 - 29 Jul 2025
Viewed by 2166
Abstract
Structural variations (SVs) represent genomic variations that involve breakage and rejoining of DNA segments. SVs can alter normal gene dosage, lead to rearrangements of genes and regulatory elements within a topologically associated domain, and potentially contribute to physical traits, genomic disorders, or complex traits. Recent advances in sequencing technologies and bioinformatics have greatly improved SV detection and interpretation at unprecedented resolution and scale. Despite these advances, the functional impact of SVs, the underlying SV mechanism(s) contributing to complex traits, and the technical challenges associated with SV detection and annotation remain active areas of research. This review aims to provide an overview of structural variations, their mutagenesis mechanisms, and their detection in the genomics era, focusing on the biological significance, methodologies, and future directions in the field. Full article
(This article belongs to the Special Issue Detecting and Interpreting Structural Variation in the Human Genome)

42 pages, 1300 KB  
Article
A Hybrid Human-AI Model for Enhanced Automated Vulnerability Scoring in Modern Vehicle Sensor Systems
by Mohamed Sayed Farghaly, Heba Kamal Aslan and Islam Tharwat Abdel Halim
Future Internet 2025, 17(8), 339; https://doi.org/10.3390/fi17080339 - 28 Jul 2025
Viewed by 943
Abstract
Modern vehicles are rapidly transforming into interconnected cyber–physical systems that rely on advanced sensor technologies and pervasive connectivity to support autonomous functionality. Yet, despite this evolution, standardized methods for quantifying cybersecurity vulnerabilities across critical automotive components remain scarce. This paper introduces a novel hybrid model that integrates expert-driven insights with generative AI tools to adapt and extend the Common Vulnerability Scoring System (CVSS) specifically for autonomous vehicle sensor systems. Following a three-phase methodology, the study conducted a systematic review of 16 peer-reviewed sources (2018–2024), applied CVSS version 4.0 scoring to 15 representative attack types, and evaluated four freely available generative AI models (ChatGPT, DeepSeek, Gemini, and Copilot) on a dataset of 117 annotated automotive-related vulnerabilities. Expert validation from 10 domain professionals reveals that Light Detection and Ranging (LiDAR) sensors are the most vulnerable (9 distinct attack types), followed by Radio Detection And Ranging (radar) (8) and ultrasonic (6). Network-based attacks dominate (104 of 117 cases), with 92.3% of the dataset exhibiting low attack complexity and 82.9% requiring no user interaction. The most severe attack vectors, as scored by experts using CVSS, include eavesdropping (7.19), Sybil attacks (6.76), and replay attacks (6.35). Evaluation of large language models (LLMs) showed that DeepSeek achieved an F1 score of 99.07% on network-based attacks, while all models struggled with minority classes such as high complexity (e.g., ChatGPT F1 = 0%, Gemini F1 = 15.38%). The findings highlight the potential of integrating expert insight with AI efficiency to deliver more scalable and accurate vulnerability assessments for modern vehicular systems. This study offers actionable insights for vehicle manufacturers and cybersecurity practitioners, aiming to inform strategic efforts to fortify sensor integrity, optimize network resilience, and ultimately enhance the cybersecurity posture of next-generation autonomous vehicles. Full article
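The per-class LLM evaluation step can be sketched with scikit-learn: compare model-predicted vulnerability categories against expert annotations and report per-class F1, which exposes exactly the minority-class collapse mentioned above. The labels and predictions here are placeholders, not the study's data.

```python
from sklearn.metrics import classification_report, f1_score

# Placeholder expert annotations vs. one model's predictions over attack-vector classes.
expert = ["network"] * 10 + ["physical"] * 3 + ["high_complexity"] * 2
model_pred = ["network"] * 10 + ["physical", "network", "physical"] + ["network", "network"]

print(classification_report(expert, model_pred, zero_division=0))
print("macro F1:", round(f1_score(expert, model_pred, average="macro", zero_division=0), 3))
# A model can score near-perfect F1 on the dominant "network" class while
# collapsing to 0 on rare classes such as "high_complexity".
```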
