Search Results (1,353)

Search Parameters:
Keywords = Semi-automatic

15 pages, 1217 KB  
Review
Applications of Artificial Intelligence in Corneal Nerve Images in Ophthalmology
by Raul Hernan Barcelo-Canton, Mingyi Yu, Chang Liu, Aya Takahashi, Isabelle Xin Yu Lee and Yu-Chi Liu
Diagnostics 2026, 16(4), 602; https://doi.org/10.3390/diagnostics16040602 - 18 Feb 2026
Abstract
Corneal nerves (CNs) are essential to maintaining corneal epithelial integrity and ocular surface homeostasis. In vivo confocal microscopy (IVCM) enables high-resolution visualization of CNs at the microscopic level. Traditionally, CN images must be analyzed by manual examination, which is time-consuming and labor-intensive. Artificial intelligence (AI) has facilitated reliable analysis of CN parameters, allowing for automatic and semiautomatic identification, segmentation, and quantitative analysis of CNs. This review summarizes the applications of AI-driven automatic and semiautomatic models in the CN analysis of IVCM images, focusing on their diagnostic relevance in dry eye disease (DED) and neuropathic corneal pain (NCP). Recent advancements in AI have transformed IVCM image analysis by improving reproducibility and reducing operator dependency and analysis time. AI-based algorithms have demonstrated good performance and sensitivity in identifying and quantifying CN metrics. AI has also been used to improve the diagnostic accuracy of DED with IVCM scans involving multiple portions of the CNs, such as the inferior whorl region. When applied to IVCM images of patients with NCP, AI-assisted identification of microneuromas and changes in CN metrics has improved diagnostic accuracy. Despite promising advances and outcomes, widespread implementation of these AI models in CN image analysis requires large-scale validation. Future integration of multimodal AI algorithms remains a promising avenue to enhance diagnostic accuracy and disease stratification.
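
Concretely, the kind of CN metric such models quantify can be derived from a binary segmentation mask by skeletonization. A minimal sketch (not from the review; the pixel size and frame area are assumed placeholder values) computing total fiber length and a CNFL-style density with scikit-image:

    import numpy as np
    from skimage.morphology import skeletonize

    def corneal_nerve_metrics(mask, um_per_px=1.04, frame_area_mm2=0.16):
        """mask: 2D boolean array marking segmented nerve pixels.
        Pixel size and frame area are placeholder values, not IVCM specs."""
        skeleton = skeletonize(mask)                     # 1-px-wide centerlines
        length_mm = skeleton.sum() * um_per_px / 1000.0  # crude total fiber length
        return {"fiber_length_mm": float(length_mm),
                "cnfl_mm_per_mm2": float(length_mm / frame_area_mm2)}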

39 pages, 1783 KB  
Article
Safety, Acceptability, and Usability of Immersive Gamification System for Use in Rehabilitation Management of Pediatric Patients with Cerebral Palsy and with Mobility Limitations (Phase 1 Trial)
by Maria Eliza R. Aguila, Cherica A. Tee, Josiah Cyrus R. Boque, Juan Raphael M. Gonzales, Isabel Teresa O. Salido, Bryan Andrei C. Galecio, Ben Anthony A. Lopez, Christian Alfredo K. Cruz, Michael L. Tee, Veeda Michelle M. Anlacan, Roland Dominic G. Jamora and Jaime D. L. Caro
Information 2026, 17(2), 206; https://doi.org/10.3390/info17020206 - 16 Feb 2026
Abstract
Virtual reality (VR) is increasingly integrated into the rehabilitation of children with cerebral palsy (CP). However, evidence to substantiate its potential as part of standard care remains limited. This Phase 1 study aimed to evaluate a VR-based immersive gamification technology system (ImGTS) for use in CP rehabilitation based on its safety, acceptability, and usability in healthy children. The system included software and hardware designs informed by discussions with CP rehabilitation and VR development experts (e.g., developmental pediatricians, physical therapists) and tailored to the local context, tested with two setups: a head-mounted display (HMD) and a semi-cave automatic virtual environment (semi-CAVE). We describe the experience of 30 healthy children aged 6–12 years using the ImGTS (Mission to Planet Axel version 1.0) with either the HMD (n = 15) or the semi-CAVE (n = 15) setup. Safety and acceptability were assessed through semi-structured interviews based on questionnaires; usability was assessed through observations of behaviors covering effectiveness, efficiency, and satisfaction. Descriptive and thematic analyses indicated that participants were engaged and motivated by the ImGTS, with a low incidence and severity of VR-related symptoms for both setups and high acceptance based on perceptions of the environment and feelings of presence. Usability was also high. These findings suggest that the ImGTS is safe, acceptable, and usable for healthy children. This trial provides initial evidence to guide subsequent trials testing the safety, acceptability, usability, and clinical effectiveness of the ImGTS in children with cerebral palsy and, eventually, its deployment.
(This article belongs to the Special Issue Advances in Human-Centered Artificial Intelligence)
26 pages, 2554 KB  
Article
Semi-Automated Reporting from Environmental Monitoring Data Using a Large Language Model-Based Chatbot
by Angelica Lo Duca, Rosa Lo Duca, Arianna Marinelli, Donatella Occhiuto and Alessandra Scariot
ISPRS Int. J. Geo-Inf. 2026, 15(2), 80; https://doi.org/10.3390/ijgi15020080 - 14 Feb 2026
Abstract
Producing high-quality analytical reports for the environmental domain is typically time-consuming and requires significant human expertise. This paper describes MeteoChat, a semi-automatic framework for efficiently generating specialized environmental reports from heterogeneous environmental data. MeteoChat utilizes a Large Language Model (LLM) fine-tuned and integrated with Retrieval-Augmented Generation (RAG). The system’s core is its plug-and-play philosophy, which separates analytical reasoning from the data source and the report’s intended audience. The fine-tuning phase uses data-agnostic, parameterized question–context–answer triples defined by an environmental expert to teach the LLM domain-specific analytical logic and audience-appropriate communication styles. Subsequently, the RAG phase integrates the model with actual datasets, which are processed via an Extract–Transform–Load (ETL) workflow to generate statistical summaries. This architectural separation ensures that the same reporting engine can operate on different sources, such as meteorological time series, satellite imagery, or geographical data, without additional training. Users interact with the system via a web-based conversational interface, where responses are tailored for either technical experts (using explicit calculations and tables) or the general public (using simplified, narrative language). MeteoChat has been tested with real data extracted from the micrometeorological network of ARPA Lazio.
(This article belongs to the Special Issue LLM4GIS: Large Language Models for GIS)
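
As a rough illustration of the audience-conditioned RAG step described above, a minimal sketch; the retriever, prompt wording, and LLM call are placeholders, not MeteoChat's actual interfaces:

    # Hypothetical names throughout: retrieve(question, k) returns statistical
    # summaries produced by the ETL step; llm(prompt) returns generated text.
    STYLES = {
        "expert": "Answer with explicit calculations and tables.",
        "public": "Answer in simple, narrative language without jargon.",
    }

    def answer(question, retrieve, llm, audience="public", k=3):
        context = "\n".join(retrieve(question, k))   # retrieved summaries
        prompt = (f"{STYLES[audience]}\n"
                  f"Context:\n{context}\n"
                  f"Question: {question}\nAnswer:")
        return llm(prompt)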

28 pages, 14898 KB  
Article
Deep Learning for Classification of Internal Defects in Fused Filament Fabrication Using Optical Coherence Tomography
by Valentin Lang, Qichen Zhu, Malgorzata Kopycinska-Müller and Steffen Ihlenfeldt
Appl. Syst. Innov. 2026, 9(2), 42; https://doi.org/10.3390/asi9020042 - 14 Feb 2026
Abstract
Additive manufacturing is increasingly adopted for the industrial production of small series of functional components, particularly in thermoplastic strand extrusion processes such as Fused Filament Fabrication. This transition relies on technological advances addressing key process limitations, including dimensional instability, weak interlayer bonding, extrusion defects, moisture sensitivity, and insufficient melting. Process monitoring therefore focuses on early defect detection to minimize failed builds and costs, while ultimately enabling process optimization and adaptive control to mitigate defects during fabrication. For this purpose, a data processing pipeline for monitoring Optical Coherence Tomography images acquired in Fused Filament Fabrication is introduced. Convolutional neural networks are used for the automatic classification of tomographic cross-sections. A dataset of tomographic images undergoes semi-automatic labeling, preprocessing, model training, and evaluation. A sliding window detects outlier regions in the tomographic cross-sections, while masks suppress peripheral noise, enabling label generation based on outlier ratios. Data are split into training, validation, and test sets using block-based partitioning to limit leakage. The classification model employs a ResNet-V2 architecture with BottleneckV2 modules. Hyperparameters are optimized, with N = 2, K = 2, dropout 0.5, and learning rate 0.001 yielding the best performance. The model achieves 0.9446 accuracy and outperforms EfficientNet-B0 and VGG16 in accuracy and efficiency.
(This article belongs to the Special Issue AI-Driven Decision Support for Systemic Innovation)
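
The outlier-ratio labeling step lends itself to a short sketch. The window size, z-score cutoff, and ratio threshold below are illustrative assumptions, not the paper's tuned values:

    import numpy as np

    def label_cross_section(img, mask, win=32, z=3.0, ratio_thr=0.01):
        """Label one tomographic cross-section as defective (1) or not (0).
        img: 2D B-scan; mask: bool array suppressing peripheral noise."""
        vals = img[mask]
        mu, sigma = vals.mean(), vals.std()
        outliers = np.zeros_like(mask)
        step = win // 2                                   # overlapping windows
        for r in range(0, img.shape[0] - win + 1, step):
            for c in range(0, img.shape[1] - win + 1, step):
                pm = mask[r:r+win, c:c+win]
                if pm.any() and abs(img[r:r+win, c:c+win][pm].mean() - mu) > z * sigma:
                    outliers[r:r+win, c:c+win] |= pm      # flag masked pixels only
        ratio = outliers.sum() / max(int(mask.sum()), 1)
        return int(ratio > ratio_thr)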

25 pages, 12097 KB  
Article
SIDe-HBIM: Single-Image Depth Inference as a Tool for Semi-Automatic Decorative Modeling
by Fabio Bianconi, Marco Filippucci, Claudia Cerbai, Filippo Cornacchini and Andrea Migliosi
Heritage 2026, 9(2), 70; https://doi.org/10.3390/heritage9020070 - 11 Feb 2026
Abstract
This paper introduces SIDe-HBIM (Single-Image Depth inference for HBIM), a semi-automated image-to-BIM pipeline aimed at improving the integration of architectural decorative elements into HBIM environments. The research addresses the difficulty of representing geometrically complex yet information-oriented heritage components when traditional survey techniques are impractical or disproportionate. Starting from a single photographic input, the methodology combines AI-based depth estimation, quantitative computational evaluation, and parametric modeling to generate lightweight, morphologically coherent 3D elements suitable for non-photorealistic HBIM applications. Multiple image-to-depth models are processed in parallel and ranked through a weighted synthetic index based on geometric and structural indicators, after which the selected depth map is converted into a continuous NURBS surface and integrated into a BIM environment. Application to three heterogeneous case studies from the Basilica of Santa Maria degli Angeli (Assisi) demonstrates that SIDe-HBIM is particularly effective for bas-reliefs and moderate-relief decorative apparatuses, offering a reproducible and efficient alternative for HBIM-oriented documentation.
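
A weighted synthetic index for ranking competing depth maps could look roughly like the following sketch; the indicators and weights here are hypothetical, not the paper's actual criteria:

    import numpy as np

    def rank_depth_maps(candidates, indicators, weights):
        """candidates: {model_name: 2D depth map}; indicators: {name: fn(depth)}
        returning higher-is-better scores; weights: {name: float}."""
        scores = {m: sum(w * indicators[k](d) for k, w in weights.items())
                  for m, d in candidates.items()}
        return max(scores, key=scores.get), scores

    # Illustrative indicators only (assumptions, not the paper's index terms):
    indicators = {
        "smoothness": lambda d: -np.mean([np.abs(g).mean() for g in np.gradient(d)]),
        "relief_range": lambda d: float(np.ptp(d)),
    }
    weights = {"smoothness": 0.6, "relief_range": 0.4}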

13 pages, 1044 KB  
Article
Quantitative Texture Analysis of Cervical Cytology Identifies Endometrial Lesions in Atypical Glandular Cells on Liquid-Based Cytology: A Pilot Study
by Toshimichi Onuma, Akiko Shinagawa, Makoto Orisaka and Yoshio Yoshida
Diagnostics 2026, 16(4), 531; https://doi.org/10.3390/diagnostics16040531 - 10 Feb 2026
Abstract
Background/Objectives: Within human papillomavirus (HPV)-based screening, cytology remains essential for cervical cancer detection while also potentially revealing endometrial pathology. This pilot study aimed to distinguish benign (normal) cases from atypical endometrial hyperplasia (AEH) and endometrial cancer (EC) within atypical glandular cell (AGC) cytology using quantitative analysis of liquid-based cervical cytology. Methods: SurePath and ThinPrep sets included 62 (37 normal, 25 AEH/EC) and 52 (24 normal, 28 AEH/EC) AGC cases, respectively. A semi-automatic QuPath analysis workflow detected cellular clusters; extracted texture, intensity, and geometric features; and produced case-level summaries. A random forest (RF) classifier was used to discriminate AEH/EC from normal cases. Feature subset selection was performed using a beam-search wrapper with joint hyperparameter tuning. Primary performance evaluation comprised stratified 5-fold cross-validation with metrics averaged across folds. Results: Across both preparations, univariable analyses showed moderate discrimination overall, which improved post-menopause. For SurePath and ThinPrep, the 10 highest areas under the curve (AUCs) were 0.701–0.773 (improving to 0.798–0.841 post-menopause) and 0.740–0.778 (improving to 0.832–0.884 post-menopause), respectively. Machine-learning RF models improved performance beyond the univariable baselines. Cross-validated AUCs for SurePath and ThinPrep were 0.805 (95% confidence interval [CI], 0.683–0.927) and 0.887 (95% CI, 0.787–0.987), respectively. Features associated with higher AUCs differed between SurePath and ThinPrep, indicating platform-specific signals. Conclusions: Quantitative analysis of routine cervical cytology can augment expert review to help distinguish endometrial lesions among AGCs, particularly post-menopause. These software-based readouts fit within existing workflows and may improve triage when morphology is subtle, including scenarios with HPV-negative screening results.
(This article belongs to the Section Pathology and Molecular Diagnostics)
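
The primary evaluation scheme (stratified 5-fold cross-validated AUC for a random forest) can be reproduced in outline with scikit-learn; the data below are synthetic and the hyperparameters are placeholders:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(62, 20))     # case-level texture/intensity features (synthetic)
    y = rng.integers(0, 2, size=62)   # 0 = normal, 1 = AEH/EC (synthetic labels)

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"AUC = {aucs.mean():.3f} +/- {aucs.std():.3f}")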

12 pages, 2762 KB  
Article
A Two-Stage Localization and Refinement Neural Network Structure for Data-Efficient Microbleed Detection
by Lukas Rau, Oliver Granert, Nils G. Margraf, Stephan Schneider and Ulf Jensen-Kondering
Brain Sci. 2026, 16(2), 207; https://doi.org/10.3390/brainsci16020207 - 10 Feb 2026
Abstract
Background/Objectives: In medical diagnostics, (semi-)automatic detection of pathological structures in images is becoming increasingly important. In particular, detecting cerebral microbleeds (CMBs) poses a challenge in clinical practice because the process is time-consuming and prone to error. Methods: Unlike previous methods of (semi-)automatic CMB detection that rely on large training datasets, we propose a two-stage workflow that can be trained with a small dataset while still performing well. The first stage is a 3D U-Net that retrieves potential CMB locations in the SWI image volume. A 3D convolutional neural network (CNN) is then used to discriminate between real CMBs and CMB mimics. Results: Using a dataset of 15 MRI scans with 40 marked CMBs, we achieve a sensitivity of 97.5%. Conclusions: We showed that it is possible to create a workflow with high sensitivity using only a few training samples, enabling smaller radiological facilities to train networks on their own datasets. Although the workflow performs well on a small dataset, it still requires further testing on other, larger datasets.
(This article belongs to the Section Neurotechnology and Neuroimaging)
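
The two-stage inference could be organized roughly as follows; the models, patch size, and probability threshold are assumptions, not the paper's settings:

    import torch

    @torch.no_grad()
    def detect_cmbs(volume, unet, cnn, patch=16, thr=0.5):
        """Two-stage inference sketch. volume: (1, 1, D, H, W) SWI tensor;
        unet proposes candidate voxels, cnn rejects CMB mimics."""
        heat = torch.sigmoid(unet(volume))[0, 0]         # stage 1: candidate map
        h = patch // 2
        kept = []
        for z, y, x in (heat > thr).nonzero().tolist():  # candidate voxels
            if min(z, y, x) < h:                         # skip border candidates
                continue
            p = volume[:, :, z-h:z+h, y-h:y+h, x-h:x+h]  # local 3D patch
            if p.shape[2:] != (patch, patch, patch):
                continue
            if torch.sigmoid(cnn(p))[0, 0] > thr:        # stage 2: keep real CMBs
                kept.append((z, y, x))
        return kept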

18 pages, 4312 KB  
Article
Semi-Automatic Wave Mode Recognition Applied to Acoustic Emission Signals from a Spherical Storage Tank
by Ruben Büch, Benjamin Dirix, Martine Wevers and Joris Everaerts
Appl. Sci. 2026, 16(3), 1625; https://doi.org/10.3390/app16031625 - 5 Feb 2026
Abstract
Acoustic emission testing is a non-destructive inspection method in which ultrasonic waves emitted by defects in an object are detected and assessed based on their time of arrival and waveform, which strongly depends on the geometry of the object. These waves appear in different modes, each with its own velocity and dispersion, and different modes can attenuate to different degrees. In previous work, a new method for (semi-)automatic recognition of the arrival time of wave modes was presented and validated on a dataset obtained in laboratory conditions on a flat plate. This paper builds upon that research and presents a modified method that can be applied to data obtained from an industrial gas storage sphere. Two wave modes were commonly detected for this sphere: one similar to the zero-order anti-symmetric Lamb mode (A0) and the other similar to the zero-order symmetric Lamb mode (S0) in a plate. The method was adapted to solve the new challenges encountered for the sphere. The performance of the adapted automatic mode recognition method was assessed using a dataset with four different source types: Hsu–Nielsen sources, sensor pulses, impact by a metallic object, and natural sources. The resulting wave mode recognition was compared to manual recognition to determine the rates of successful recognition, which range from 97% for A0 and S0 for Hsu–Nielsen sources down to 73% for A0 in signals due to natural sources and 74% for A0 in signals due to impact by a metallic object.
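
A generic arrival-time picker in this spirit (not the paper's algorithm; the band edges and threshold are assumptions) isolates a band where one mode dominates and thresholds the signal envelope:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def pick_arrival(signal, fs, band=(100e3, 300e3), thr_ratio=0.2):
        """Return the first envelope threshold crossing, in seconds."""
        sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
        filtered = sosfiltfilt(sos, signal)      # zero-phase band isolation
        env = np.abs(hilbert(filtered))          # analytic-signal envelope
        idx = np.argmax(env > thr_ratio * env.max())
        return idx / fs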

26 pages, 48917 KB  
Article
A Low-Cost Framework for 3D Phenotyping of Sugarcane via Instance Segmentation and 3D Gaussian Splatting
by Yan Chen, Xiyao Huang, Fen Liao, Hengyi Li, Jinxin Chen and Xiangyu Lu
Agriculture 2026, 16(3), 375; https://doi.org/10.3390/agriculture16030375 - 5 Feb 2026
Abstract
Sugarcane is an important economic crop, and key phenotypic traits such as plant height and leaf area play a crucial role in yield potential assessment and breeding selection. However, the quantification of these traits currently relies mainly on inefficient and destructive manual measurements, making it difficult to achieve continuous monitoring of plant growth. To address this limitation, this study integrates a YOLOv8x-seg instance segmentation model with 3D Gaussian Splatting (3DGS) and proposes a non-contact, high-precision 3D phenotyping method based on low-cost data acquisition with a smartphone. Multi-view RGB images are first processed using YOLOv8x-seg to extract plant foreground masks, which are then used as inputs for 3DGS-based reconstruction to generate 3D models. Plant height is automatically measured from the reconstructed models, while leaf area extraction follows a semi-automatic workflow combining image processing and manual steps. Experimental results demonstrate that the proposed approach enables accurate trait estimation, achieving a coefficient of determination (R2) of 0.9644 for plant height (evaluated on a subset of 15 plants, with a mean absolute percentage error of approximately 1.5%) and an R2 of 0.8551 for leaf area (validated on 10 plants). Ground-truth plant height was measured using a telescopic measuring rod, and leaf area was determined through destructive measurement with a leaf area meter (LI-COR Model LI-3000A). This method demonstrates the feasibility of using consumer-grade devices for high-fidelity 3D phenotyping and offers an effective approach for high-throughput sugarcane breeding applications.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
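
Once a plant model is reconstructed, height extraction reduces to robust extremes along the vertical axis. A minimal sketch, assuming a point sample of the 3DGS model with z pointing up and quantile guards chosen arbitrarily:

    import numpy as np

    def plant_height(points, ground_quantile=0.02, top_quantile=0.998):
        """points: (N, 3) array sampled from the reconstructed model."""
        z = points[:, 2]
        ground = np.quantile(z, ground_quantile)   # robust ground level
        top = np.quantile(z, top_quantile)         # robust plant apex
        return top - ground                        # height in model units

A scale reference (e.g., an object of known size in the scene) would still be needed to convert model units to centimeters.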

22 pages, 1659 KB  
Article
Lightweight Depression Detection Using 3D Facial Landmark Pseudo-Images and CNN-LSTM on DAIC-WOZ and E-DAIC
by Achraf Jallaglag, My Abdelouahed Sabri, Ali Yahyaouy and Abdellah Aarab
BioMedInformatics 2026, 6(1), 8; https://doi.org/10.3390/biomedinformatics6010008 - 4 Feb 2026
Abstract
Background: Depression is a common mental disorder, and early, objective diagnosis is challenging. Advances in deep learning show promise for processing audio and video content when screening for depression. Nevertheless, most current methods rely on raw video processing or multimodal pipelines, which are computationally costly, hard to interpret, and raise privacy issues, restricting their use in actual clinical settings. Methods: To overcome these constraints, we introduce a purely visual, lightweight deep learning framework based on spatiotemporal 3D facial landmarks extracted from clinical interview videos in the DAIC-WOZ and Extended DAIC-WOZ (E-DAIC) datasets. Our method does not use raw video or any form of semi-automated multimodal fusion. Whereas raw video streams are computationally expensive and poorly suited to investigating specific variables, we take a temporal series of 3D landmarks, convert it to pseudo-images (224 × 224 × 3), and feed these into a CNN-LSTM framework, which analyzes both the spatial configuration and the temporal dynamics of facial behavior. Results: The experiments yield macro-average F1 scores of 0.74 on DAIC-WOZ and 0.762 on E-DAIC, demonstrating robust performance under heavy class imbalance, with a variability of ±0.03 across folds. Conclusion: These results indicate that landmark-based spatiotemporal modeling is a promising direction for lightweight, interpretable, and scalable automatic depression detection, and they suggest opportunities for embedding such systems within real-world mental health applications.
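
The landmark-to-pseudo-image conversion and the CNN-LSTM head can be sketched as follows; the layer sizes and resizing scheme are assumptions, not the paper's exact architecture:

    import numpy as np
    import tensorflow as tf

    def to_pseudo_images(landmarks):
        """landmarks: (T, 68, 3) per-frame 3D coordinates -> (T, 224, 224, 3)."""
        x = (landmarks - landmarks.min()) / (np.ptp(landmarks) + 1e-8)
        imgs = tf.image.resize(x[..., None], (224, 224))  # treat (68, 3) as an image
        return tf.tile(imgs, [1, 1, 1, 3])                # replicate to 3 channels

    # Small CNN-LSTM: per-frame spatial features, then temporal modeling.
    model = tf.keras.Sequential([
        tf.keras.layers.TimeDistributed(
            tf.keras.layers.Conv2D(16, 3, activation="relu"),
            input_shape=(None, 224, 224, 3)),
        tf.keras.layers.TimeDistributed(tf.keras.layers.GlobalAveragePooling2D()),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # depressed / not
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")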

25 pages, 8314 KB  
Article
Ridge Regression Modeling of Evaporation Reduction Strategies for Small-Scale Water Storage in Semi-Arid Regions
by Kishore Nalabolu, Madhusudhan Reddy Karakala, Apparao Chodisetti, Bhaskara Rao Ijjurouthu, Narayanaswamy Gutta, Nataraj Kolavanahalli Chikkamuniyappa, Murali Krishna Chitte, Arun Kumar Kondeti, Veera Prasad Godugula, Rajakumar Kommathoti Navaneetha, Mohana Rao Boyinapalli Venkata, Ratnaraju Chebrolu, Srigiri Doppalapudi and Shobhan Naik Vankanavath
AgriEngineering 2026, 8(2), 55; https://doi.org/10.3390/agriengineering8020055 - 3 Feb 2026
Abstract
In semi-arid areas, water loss from small agricultural water storage facilities is significant, owing to evaporation. A longitudinal study was conducted between 2019 and 2022 at the Agricultural Research Station, Ananthapuramu, located in the semi-arid climate of Peninsular India, comparing 12 distinct treatments designed to reduce evaporation. These treatments included bamboo sheets, agricultural residues, Azolla (Azolla pinnata), monomolecular alcohol films, and oil-based films, along with an untreated control. Evaporation rates and meteorological data were measured using the depth-loss method and an automatic weather station. Results indicated substantial treatment effects: bamboo sheets decreased evaporation by 88%, reducing daily loss from 5.2 mm to 0.8 mm, while Azolla achieved a 62% reduction (2.8 mm). Organic residues decreased evaporation by 37–47%, and chemical monolayers and oils by 21–42%. Ridge regression models demonstrated strong performance (R2 = 0.789–0.808), with bamboo sheets exhibiting the lowest Root Mean Square Error (0.127 mm day−1). Economic analysis revealed annual water savings of 4700–4800 m3 ha−1 for bamboo sheets and 2300–2500 m3 ha−1 for less effective covers. Assuming a baseline water value of 0.20 US$ m−3, annual net benefits ranged from 250 to 900 US$ ha−1, with Net Present Values spanning 7000 to 160,000 US$ ha−1 across various scenarios. Overall, bamboo sheets and Azolla were identified as the most effective and economically viable options for mitigating evaporation in semi-arid smallholder water systems. Maximum air temperature (Tmax) and wind speed were the key meteorological variables for modeling daily evaporation, followed by relative humidity and sunshine duration.
(This article belongs to the Section Agricultural Irrigation Systems)
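
The ridge-regression setup is standard and easy to sketch with scikit-learn; the data below are synthetic and the regularization strength is illustrative:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.normal(size=(365, 4))    # columns: Tmax, wind speed, RH, sunshine
    y = 2 + 0.8*X[:, 0] + 0.5*X[:, 1] - 0.3*X[:, 2] + rng.normal(0, 0.3, 365)

    model = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X, y)
    print("R^2:", model.score(X, y))  # in-sample fit; the paper reports ~0.79-0.81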

24 pages, 18419 KB  
Article
Semi-Automatic Artificial Lips Device for Brass Instruments with Real-Time Pitch Feedback Control
by Hiroaki Sonoda, Hikari Kuriyama, Kouki Tomiyoshi and Gou Koutaki
Sensors 2026, 26(3), 984; https://doi.org/10.3390/s26030984 - 3 Feb 2026
Abstract
We propose a semi-automatic artificial lips control device that allows a human performer to produce sound on a brass instrument without the need to vibrate their own lips. The device integrates position control that presses the artificial lips toward the mouthpiece and aperture control via wire traction, together with a pre-calibrated motor table and acoustic feedback for pitch stabilization. In evaluations using a euphonium, we verified timbre, pitch range, and pitch stabilization, including harmonic modes. The results showed that the harmonic structure of tones produced by a human using the device is comparable to that of tones produced by a human player in the conventional manner. Pitch-range and pitch-stabilization tests confirmed that the system can generate practical musical intervals and achieve reliable harmonic mode changes. Furthermore, real-time acoustic feedback improved pitch stability during performance. These findings demonstrate that, rather than fully automating human performance, the proposed system provides a compact and reproducible framework for controllable brass sound generation and pitch stabilization using only three actuators.
(This article belongs to the Special Issue Acoustic Sensing for Musical Instrument Study and Vocal Analysis)
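
The pitch-stabilization loop amounts to estimating f0 from the audio and feeding the error back to the lip actuators. A minimal sketch, assuming a crude autocorrelation pitch estimator and a proportional gain (both placeholders, not the paper's controller):

    import numpy as np

    def f0_autocorr(frame, fs, fmin=50.0, fmax=1000.0):
        """Crude autocorrelation pitch estimate for one audio frame."""
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[len(frame)-1:]
        lo, hi = int(fs / fmax), int(fs / fmin)
        lag = lo + int(np.argmax(ac[lo:hi]))
        return fs / lag

    def lip_correction(frame, fs, target_f0, gain=0.5):
        """One proportional feedback step: pitch error (in cents) mapped to a
        signed lip-aperture motor increment; sign convention is arbitrary."""
        cents = 1200.0 * np.log2(f0_autocorr(frame, fs) / target_f0)
        return -gain * cents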

12 pages, 1209 KB  
Article
Deep Learning-Based Semantic Segmentation and Classification of Otoscopic Images for Otitis Media Diagnosis and Health Promotion
by Chien-Yi Yang, Che-Jui Lee, Wen-Sen Lai, Kuan-Yu Chen, Chung-Feng Kuo, Chieh Hsing Liu and Shao-Cheng Liu
Diagnostics 2026, 16(3), 467; https://doi.org/10.3390/diagnostics16030467 - 2 Feb 2026
Abstract
Background/Objectives: Otitis media (OM), including acute otitis media (AOM) and chronic otitis media (COM), is a common middle ear disease that can lead to significant morbidity if not accurately diagnosed. Otoscopic interpretation remains subjective and operator-dependent, underscoring the need for objective and reproducible diagnostic support. Recent advances in artificial intelligence (AI) offer promising solutions for automated otoscopic image analysis. Methods: We developed an AI-based diagnostic framework consisting of three sequential steps: (1) semi-supervised learning for automatic recognition and semantic segmentation of tympanic membrane structures, (2) region-based feature extraction, and (3) disease classification. A total of 607 clinical otoscopic images were retrospectively collected, including normal ears (n = 220), AOM (n = 157), and COM with tympanic membrane perforation (n = 230). Among these, 485 images were used for training and 122 for independent testing. Semantic segmentation of five anatomically relevant regions was performed using multiple convolutional neural network architectures, including U-Net, PSPNet, HRNet, and DeepLabV3+. Following segmentation, color and texture features were extracted from each region and used to train a neural network-based classifier to differentiate disease states. Results: Among the evaluated segmentation models, U-Net demonstrated superior performance, achieving an overall pixel accuracy of 96.76% and a mean Dice similarity coefficient of 71.68%. The segmented regions enabled reliable extraction of discriminative chromatic and texture features. In the final classification stage, the proposed framework achieved diagnostic accuracies of 100% for normal ears, 100% for AOM, and 91.3% for COM on the independent test set, with an overall accuracy of 96.72%. Conclusions: This study demonstrates that a semi-supervised, segmentation-driven AI pipeline integrating feature extraction and classification can achieve high diagnostic accuracy for otitis media. The proposed framework offers a clinically interpretable and fully automated approach that may enhance diagnostic consistency, support clinical decision-making, and facilitate scalable otoscopic assessment in diverse healthcare screening settings for disease prevention and health education.
(This article belongs to the Special Issue AI-Assisted Diagnostics in Telemedicine and Digital Health)
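
The region-based feature extraction stage might look like the following sketch; the specific color and GLCM texture features are illustrative choices, not necessarily the paper's set:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def region_features(rgb, label_map, region_id):
        """Color + GLCM texture features for one segmented TM region.
        rgb: (H, W, 3) uint8 image; label_map: per-pixel region labels."""
        region = label_map == region_id
        feats = {f"mean_{c}": float(rgb[..., i][region].mean())
                 for i, c in enumerate("rgb")}
        gray = rgb.mean(axis=2).astype(np.uint8)
        gray[~region] = 0                         # zero out pixels outside the region
        glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        feats["contrast"] = float(graycoprops(glcm, "contrast")[0, 0])
        feats["homogeneity"] = float(graycoprops(glcm, "homogeneity")[0, 0])
        return feats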

22 pages, 4716 KB  
Article
The Prediction of Low-Level Jet Using Machine Learning Based on Turbulence Observations and Remote Sensing
by Minghao Chen, Yan Ren, Hongsheng Zhang, Wei Wei, Weiqi Tang, Jiening Liang, Xianjie Cao, Pengfei Tian and Lei Zhang
Remote Sens. 2026, 18(3), 470; https://doi.org/10.3390/rs18030470 - 2 Feb 2026
Abstract
Low-level jets (LLJs) are common strong wind structures in the atmospheric boundary layer. They have important impacts on aviation safety, wind energy utilization, and pollutant dispersion. However, the formation mechanisms of LLJs are complex. Traditional parameterization schemes and numerical models still show limitations in forecasting LLJ occurrence and resolving their structures. In this study, wind lidar, near-surface turbulence, and gradient meteorological observations from the Semi-Arid Climate and Environment Observatory of Lanzhou University are combined to construct a multi-source low-level dataset. Four processing modules are designed, covering multi-source data fusion, turbulence preprocessing, turbulence intermittency metrics, and LLJ identification, to overcome the constraints of single-platform observations. Six commonly used machine learning algorithms (LightGBM, XGBoost, CatBoost, K-nearest neighbors, Balanced Random Forest, and ExtraTrees) are compared. A two-stage classification–regression framework is then adopted: LightGBM predicts LLJ occurrence, and CatBoost predicts LLJ height and intensity, forming an LLJ-2Stage prediction system. The system performs automatic LLJ identification and predicts jet intensity and core height. For LLJ occurrence, the F1-score (the harmonic mean of precision and recall) reaches 0.820. The coefficient of determination R2 is 0.643 for height prediction and 0.794 for intensity prediction. Both the classification and regression parts show good accuracy and stability. The SHAP method is further applied to assess model interpretability and to identify the key predictors that control LLJ occurrence, height, and intensity. Results indicate that thermal variables, such as net radiation (Rn) and sensible heat flux (H), dominate LLJ occurrence and structural changes. The strength of turbulence intermittency provides valuable supplementary information for locating the LLJ core height. Two representative nocturnal LLJ cases further show a consistent near-surface evolution during the LLJ period, with enhanced TKE and reduced H, followed by a gradual recovery after decay, while Rn remains persistently low, consistent with the SHAP-indicated effects. The proposed framework predicts LLJ occurrence and structural evolution and is significant for improving understanding of boundary layer processes, air-pollution control, wind energy utilization, and low-level aviation safety.
(This article belongs to the Special Issue Advancements in Atmospheric Turbulence Remote Sensing)
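
The two-stage classification–regression design can be outlined directly with the named libraries; hyperparameters are omitted and the gating scheme below is an assumption:

    import numpy as np
    from lightgbm import LGBMClassifier
    from catboost import CatBoostRegressor

    def fit_two_stage(X, occurs, height, intensity):
        """X: (N, F) feature array; occurs: 0/1 LLJ labels."""
        clf = LGBMClassifier().fit(X, occurs)           # stage 1: occurrence
        mask = occurs == 1                              # stage 2 trained on jets only
        reg_h = CatBoostRegressor(verbose=0).fit(X[mask], height[mask])
        reg_i = CatBoostRegressor(verbose=0).fit(X[mask], intensity[mask])
        return clf, reg_h, reg_i

    def predict_two_stage(models, X):
        clf, reg_h, reg_i = models
        occ = clf.predict(X)
        h = np.where(occ == 1, reg_h.predict(X), np.nan)  # NaN when no jet predicted
        i = np.where(occ == 1, reg_i.predict(X), np.nan)
        return occ, h, i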

19 pages, 2387 KB  
Article
High-Precision Marine Radar Object Detection Using Tiled Training and SAHI Enhanced YOLOv11-OBB
by Sercan Külcü
Sensors 2026, 26(3), 942; https://doi.org/10.3390/s26030942 - 2 Feb 2026
Abstract
Reliable object detection in marine radar imagery is critical for maritime situational awareness, collision avoidance, and autonomous navigation. However, it remains challenging due to sea clutter, small targets, and interference from fixed navigational aids. This study proposes a high-precision detection pipeline that integrates tiled training, Slicing Aided Hyper Inference (SAHI), and an oriented bounding box (OBB) variant of the lightweight YOLOv11 architecture. The proposed approach effectively addresses scale variability in Plan Position Indicator (PPI) radar images. Experiments were conducted on the real-world DAAN dataset provided by the German Aerospace Center (DLR). The dataset consists of 760 full-resolution radar frames containing multiple moving vessels, a dynamic own-ship, and clutter sources. A semi-automatic contour-based annotation pipeline was developed to generate multi-format labels, including axis-aligned bounding boxes, OBBs, and instance segmentation masks, directly from radar echo characteristics. The results demonstrate that the tiled YOLOv11n-OBB model with SAHI achieves an mAP@0.5 exceeding 0.95, with a mean center localization error below 10 pixels. The proposed method outperforms standard full-image baselines and other YOLOv11 variants on small targets. Moreover, the lightweight models enable near real-time inference at 4–6 FPS on edge hardware. These findings indicate that OBBs and scale-aware strategies enhance detection precision in complex marine radar environments, providing practical advantages for tracking and navigation tasks.
(This article belongs to the Section Radar Sensors)
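
The core idea behind SAHI-style sliced inference, running the detector on overlapping tiles and mapping detections back to full-image coordinates, can be sketched generically (this is not the SAHI library API; global merging/NMS is left out):

    import numpy as np

    def sliced_detect(image, detect, tile=640, overlap=0.2):
        """detect(tile_img) -> list of (x1, y1, x2, y2, score) in tile coords.
        Returns detections shifted into full-image coordinates."""
        step = int(tile * (1 - overlap))
        h, w = image.shape[:2]
        boxes = []
        for y0 in range(0, max(h - tile, 0) + 1, step):
            for x0 in range(0, max(w - tile, 0) + 1, step):
                for (x1, y1, x2, y2, s) in detect(image[y0:y0+tile, x0:x0+tile]):
                    boxes.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, s))
        return boxes  # apply global NMS across tiles before use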
