Search Results (1,895)

Search Parameters:
Keywords = customers’ image

22 pages, 6482 KiB  
Article
Surface Damage Detection in Hydraulic Structures from UAV Images Using Lightweight Neural Networks
by Feng Han and Chongshi Gu
Remote Sens. 2025, 17(15), 2668; https://doi.org/10.3390/rs17152668 (registering DOI) - 1 Aug 2025
Abstract
Timely and accurate identification of surface damage in hydraulic structures is essential for maintaining structural integrity and ensuring operational safety. Traditional manual inspections are time-consuming, labor-intensive, and prone to subjectivity, especially for large-scale or inaccessible infrastructure. Leveraging advancements in aerial imaging, unmanned aerial vehicles (UAVs) enable efficient acquisition of high-resolution visual data across expansive hydraulic environments. However, existing deep learning (DL) models often lack architectural adaptations for the visual complexities of UAV imagery, including low-texture contrast, noise interference, and irregular crack patterns. To address these challenges, this study proposes a lightweight, robust, and high-precision segmentation framework, called LFPA-EAM-Fast-SCNN, specifically designed for pixel-level damage detection in UAV-captured images of hydraulic concrete surfaces. The developed DL-based model integrates an enhanced Fast-SCNN backbone for efficient feature extraction, a Lightweight Feature Pyramid Attention (LFPA) module for multi-scale context enhancement, and an Edge Attention Module (EAM) for refined boundary localization. The experimental results on a custom UAV-based dataset show that the proposed damage detection method achieves superior performance, with a precision of 0.949, a recall of 0.892, an F1 score of 0.906, and an IoU of 87.92%, outperforming U-Net, Attention U-Net, SegNet, DeepLab v3+, I-ST-UNet, and SegFormer. Additionally, it reaches a real-time inference speed of 56.31 FPS, significantly surpassing other models. The experimental results demonstrate the proposed framework’s strong generalization capability and robustness under varying noise levels and damage scenarios, underscoring its suitability for scalable, automated surface damage assessment in UAV-based remote sensing of civil infrastructure. Full article
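
The model described above couples a lightweight backbone with pyramid and edge attention. As a minimal, hedged sketch of the general idea only (not the authors' LFPA-EAM-Fast-SCNN), the PyTorch snippet below shows how an edge-attention gate can reweight backbone features before a pixel-level classifier; the module names, channel sizes, and two-class head are assumptions.

```python
# Hedged sketch: a generic edge-attention gate over backbone features.
# This is NOT the paper's LFPA-EAM-Fast-SCNN; names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeAttentionGate(nn.Module):
    """Predicts a soft edge map from features and uses it to reweight them."""
    def __init__(self, channels: int):
        super().__init__()
        self.edge_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
        self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        edge = torch.sigmoid(self.edge_head(feats))   # edge probability in [0, 1]
        return self.refine(feats * (1.0 + edge))      # emphasize boundary regions

class TinySegNet(nn.Module):
    """Downsample -> edge-gated features -> 1x1 classifier -> upsample to input size."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.edge_gate = EdgeAttentionGate(64)
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.edge_gate(self.backbone(x))
        logits = self.classifier(feats)
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)

if __name__ == "__main__":
    model = TinySegNet(num_classes=2)
    masks = model(torch.randn(1, 3, 256, 256))   # (1, 2, 256, 256) damage / background logits
    print(masks.shape)
```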

20 pages, 4292 KiB  
Article
A Novel Method for Analysing the Curvature of the Anterior Lens: Multi-Radial Scheimpflug Imaging and Custom Conic Fitting Algorithm
by María Arcas-Carbonell, Elvira Orduna-Hospital, María Mechó-García, Guisela Fernández-Espinosa and Ana Sanchez-Cano
J. Imaging 2025, 11(8), 257; https://doi.org/10.3390/jimaging11080257 (registering DOI) - 1 Aug 2025
Abstract
This study describes and validates a novel method for assessing anterior crystalline lens curvature along vertical and horizontal meridians using radial measurements derived from Scheimpflug imaging. The aim was to evaluate whether pupil diameter (PD), anterior lens curvature, and anterior chamber depth (ACD) change during accommodation and whether these changes are age-dependent. A cross-sectional study was conducted on 104 right eyes from healthy participants aged 21–62 years. Sixteen radial images per eye were acquired using the Galilei Dual Scheimpflug Placido Disk Topographer under four accommodative demands (0, 1, 3, and 5 dioptres (D)). Custom software analysed lens curvature by calculating eccentricity in both meridians. Participants were analysed as a total group and by age subgroups. Accommodative amplitude and monocular accommodative facility were inversely correlated with age. Both PD and ACD significantly decreased with higher accommodative demands and age. Relative eccentricity decreased under accommodation, indicating increased lens curvature, especially in younger participants. Significant curvature changes were detected in the horizontal meridian only, although no statistically significant differences between meridians were found overall. The vertical meridian showed slightly higher eccentricity values, suggesting that it remained less curved. By enabling detailed, meridionally stratified in vivo assessment of anterior lens curvature, this novel method provides a valuable non-invasive approach for characterizing age-related biomechanical changes during accommodation. The resulting insights enhance our understanding of presbyopia progression, particularly regarding the spatial remodelling of the anterior lens surface. Full article
(This article belongs to the Special Issue Current Progress in Medical Image Segmentation)
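
The curvature analysis rests on fitting a conic section to each meridian and reading eccentricity from the fitted asphericity. The sketch below illustrates that general step with SciPy on synthetic data; it assumes the standard conicoid sag equation and is not the authors' custom fitting algorithm, and the radius, asphericity, and noise values are invented.

```python
# Hedged sketch: fit a conic-section sag profile to meridian points and report the
# apical radius R and asphericity Q (eccentricity e^2 = -Q for a prolate surface, Q <= 0).
import numpy as np
from scipy.optimize import curve_fit

def conic_sag(x, R, Q):
    """Conicoid sag: z = c*x^2 / (1 + sqrt(1 - (1+Q)*c^2*x^2)), with c = 1/R."""
    c = 1.0 / R
    return c * x**2 / (1.0 + np.sqrt(1.0 - (1.0 + Q) * c**2 * x**2))

# Synthetic meridian profile (mm), standing in for Scheimpflug-derived edge points.
x = np.linspace(-3.0, 3.0, 61)
z_noisy = conic_sag(x, R=10.5, Q=-0.4) + np.random.normal(scale=0.005, size=x.size)

(R_fit, Q_fit), _ = curve_fit(conic_sag, x, z_noisy, p0=(11.0, 0.0))
ecc = np.sqrt(-Q_fit) if Q_fit < 0 else 0.0   # prolate case only
print(f"R = {R_fit:.2f} mm, Q = {Q_fit:.2f}, eccentricity ≈ {ecc:.2f}")
```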

20 pages, 5369 KiB  
Article
Smart Postharvest Management of Strawberries: YOLOv8-Driven Detection of Defects, Diseases, and Maturity
by Luana dos Santos Cordeiro, Irenilza de Alencar Nääs and Marcelo Tsuguio Okano
AgriEngineering 2025, 7(8), 246; https://doi.org/10.3390/agriengineering7080246 - 1 Aug 2025
Abstract
Strawberries are highly perishable fruits prone to postharvest losses due to defects, diseases, and uneven ripening. This study proposes a deep learning-based approach for automated quality assessment using the YOLOv8n object detection model. A custom dataset of 5663 annotated strawberry images was compiled, covering eight quality categories, including anthracnose, gray mold, powdery mildew, uneven ripening, and physical defects. Data augmentation techniques, such as rotation and Gaussian blur, were applied to enhance model generalization and robustness. The model was trained over 100 and 200 epochs, and its performance was evaluated using standard metrics: Precision, Recall, and mean Average Precision (mAP). The 200-epoch model achieved the best results, with a mAP50 of 0.79 and an inference time of 1 ms per image, demonstrating suitability for real-time applications. Classes with distinct visual features, such as anthracnose and gray mold, were accurately classified. In contrast, visually similar categories, such as ‘Good Quality’ and ‘Unripe’ strawberries, presented classification challenges. Full article
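
For readers who want to reproduce the general workflow, the snippet below shows a minimal Ultralytics YOLOv8n training and inference loop; the dataset YAML name, image name, and class handling are placeholders, not the authors' configuration.

```python
# Hedged sketch of the general YOLOv8n workflow with the Ultralytics API.
# The dataset file and image names below are placeholders, not the authors' data.
from ultralytics import YOLO

# Train a nano model on a custom dataset described by a YOLO-format YAML file.
model = YOLO("yolov8n.pt")                                    # pretrained nano weights
model.train(data="strawberry.yaml", epochs=200, imgsz=640)    # hypothetical dataset YAML

# Run inference on a new image and read back classes, confidences, and boxes.
results = model("berry_001.jpg")                              # hypothetical test image
for r in results:
    for box in r.boxes:
        print(int(box.cls), float(box.conf), box.xyxy.tolist())
```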

26 pages, 1790 KiB  
Article
A Hybrid Deep Learning Model for Aromatic and Medicinal Plant Species Classification Using a Curated Leaf Image Dataset
by Shareena E. M., D. Abraham Chandy, Shemi P. M. and Alwin Poulose
AgriEngineering 2025, 7(8), 243; https://doi.org/10.3390/agriengineering7080243 - 1 Aug 2025
Abstract
In the era of smart agriculture, accurate identification of plant species is critical for effective crop management, biodiversity monitoring, and the sustainable use of medicinal resources. However, existing deep learning approaches often underperform when applied to fine-grained plant classification tasks due to the lack of domain-specific, high-quality datasets and the limited representational capacity of traditional architectures. This study addresses these challenges by introducing a novel, well-curated leaf image dataset consisting of 39 classes of medicinal and aromatic plants collected from the Aromatic and Medicinal Plant Research Station in Odakkali, Kerala, India. To overcome performance bottlenecks observed with a baseline Convolutional Neural Network (CNN) that achieved only 44.94% accuracy, we progressively enhanced model performance through a series of architectural innovations. These included the use of a pre-trained VGG16 network, data augmentation techniques, and fine-tuning of deeper convolutional layers, followed by the integration of Squeeze-and-Excitation (SE) attention blocks. Ultimately, we propose a hybrid deep learning architecture that combines VGG16 with Batch Normalization, Gated Recurrent Units (GRUs), Transformer modules, and Dilated Convolutions. This final model achieved a peak validation accuracy of 95.24%, significantly outperforming several baseline models, such as custom CNN (44.94%), VGG-19 (59.49%), VGG-16 before augmentation (71.52%), Xception (85.44%), Inception v3 (87.97%), VGG-16 after data augmentation (89.24%), VGG-16 after fine-tuning (90.51%), MobileNetV2 (93.67%), and VGG16 with SE block (94.94%). These results demonstrate superior capability in capturing both local textures and global morphological features. The proposed solution not only advances the state of the art in plant classification but also contributes a valuable dataset to the research community. Its real-world applicability spans field-based plant identification, biodiversity conservation, and precision agriculture, offering a scalable tool for automated plant recognition in complex ecological and agricultural environments. Full article
(This article belongs to the Special Issue Implementation of Artificial Intelligence in Agriculture)
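
As a rough illustration of the CNN-plus-recurrent idea (not the full hybrid with Transformer modules and dilated convolutions), the sketch below flattens a VGG16 feature map into a sequence of spatial tokens and classifies it with a GRU; the hidden size and the 39-class head are assumptions.

```python
# Hedged sketch: a simplified CNN-to-sequence hybrid, not the authors' exact model.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class CNNGRUClassifier(nn.Module):
    def __init__(self, num_classes: int = 39):
        super().__init__()
        self.cnn = vgg16(weights=None).features      # (B, 512, 7, 7) for 224x224 input
        self.gru = nn.GRU(input_size=512, hidden_size=256, batch_first=True)
        self.head = nn.Linear(256, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.cnn(x)                              # (B, 512, H', W') feature map
        seq = f.flatten(2).transpose(1, 2)           # (B, H'*W', 512) spatial tokens
        _, h = self.gru(seq)                         # h: (1, B, 256) final hidden state
        return self.head(h.squeeze(0))               # (B, num_classes) logits

logits = CNNGRUClassifier()(torch.randn(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 39])
```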

23 pages, 4379 KiB  
Article
Large Vision Language Model: Enhanced-RSCLIP with Exemplar-Image Prompting for Uncommon Object Detection in Satellite Imagery
by Taiwo Efunogbon, Abimbola Efunogbon, Enjie Liu, Dayou Li and Renxi Qiu
Electronics 2025, 14(15), 3071; https://doi.org/10.3390/electronics14153071 (registering DOI) - 31 Jul 2025
Abstract
Large Vision Language Models (LVLMs) have shown promise in remote sensing applications, yet they struggle with “uncommon” objects that lack sufficient publicly available labeled data. This paper presents Enhanced-RSCLIP, a novel dual-prompt architecture that combines text prompting with exemplar-image processing for cattle herd detection in satellite imagery. Our approach introduces a key innovation: an exemplar-image preprocessing module, using crop-based or attention-based algorithms, extracts focused object features, which are fed as a second stream into a contrastive learning framework that fuses textual descriptions with visual exemplar embeddings. We evaluated our method on a custom dataset of 260 satellite images across UK and Nigerian regions. Enhanced-RSCLIP with crop-based exemplar processing achieved 72% accuracy in cattle detection and 56.2% overall accuracy on cross-domain transfer tasks, significantly outperforming text-only CLIP (31% overall accuracy). The dual-prompt architecture enables effective few-shot learning and cross-regional transfer from data-rich (UK) to data-sparse (Nigeria) environments, demonstrating a 41% improvement over baseline approaches for uncommon object detection in satellite imagery. Full article
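
A minimal sketch of the dual-prompt scoring idea is shown below: a text embedding and an exemplar-image embedding are fused and used to rank candidate tiles by cosine similarity. The encoders here are random stand-ins rather than RSCLIP or CLIP, and the mixing weight alpha is an assumption.

```python
# Hedged sketch of dual-prompt scoring: fuse a text embedding with an exemplar-image
# embedding and rank query tiles by cosine similarity. Encoders are stand-ins.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
dim = 512
text_emb = F.normalize(torch.randn(dim), dim=0)        # e.g. "a herd of cattle from above"
exemplar_emb = F.normalize(torch.randn(dim), dim=0)    # cropped exemplar patch embedding

alpha = 0.5                                            # text/exemplar mixing weight (assumed)
prompt = F.normalize(alpha * text_emb + (1 - alpha) * exemplar_emb, dim=0)

tiles = F.normalize(torch.randn(100, dim), dim=1)      # embeddings of 100 satellite tiles
scores = tiles @ prompt                                # cosine similarity per tile
print("candidate cattle tiles:", scores.topk(5).indices.tolist())
```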

40 pages, 3463 KiB  
Review
Machine Learning-Powered Smart Healthcare Systems in the Era of Big Data: Applications, Diagnostic Insights, Challenges, and Ethical Implications
by Sita Rani, Raman Kumar, B. S. Panda, Rajender Kumar, Nafaa Farhan Muften, Mayada Ahmed Abass and Jasmina Lozanović
Diagnostics 2025, 15(15), 1914; https://doi.org/10.3390/diagnostics15151914 - 30 Jul 2025
Abstract
Healthcare data are growing rapidly, and patients increasingly seek customized, effective healthcare services. Big data and machine learning (ML)-enabled smart healthcare systems hold revolutionary potential. Unlike previous reviews that separately address AI or big data, this work synthesizes their convergence through real-world case studies, cross-domain ML applications, and a critical discussion on ethical integration in smart diagnostics. The review focuses on the role of big data analysis and ML in improving diagnosis, operational efficiency, and individualized patient care. It explores the principal challenges of data heterogeneity, privacy, and computational complexity, as well as advanced methods such as federated learning (FL) and edge computing. Applications in real-world settings, such as disease prediction, medical imaging, drug discovery, and remote monitoring, illustrate how ML methods, such as deep learning (DL) and natural language processing (NLP), enhance clinical decision-making. A comparison of ML models highlights their value in dealing with large and heterogeneous healthcare datasets. In addition, the use of nascent technologies such as wearables and the Internet of Medical Things (IoMT) is examined for their role in supporting real-time, data-driven delivery of healthcare. The paper emphasizes the pragmatic application of intelligent systems by highlighting case studies that reflect up to 95% diagnostic accuracy and cost savings. The review ends with future directions that seek to develop scalable, ethical, and interpretable AI-powered healthcare systems. It bridges the gap between ML algorithms and smart diagnostics, offering critical perspectives for clinicians, data scientists, and policymakers. Full article
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)

19 pages, 9284 KiB  
Article
UAV-YOLO12: A Multi-Scale Road Segmentation Model for UAV Remote Sensing Imagery
by Bingyan Cui, Zhen Liu and Qifeng Yang
Drones 2025, 9(8), 533; https://doi.org/10.3390/drones9080533 - 29 Jul 2025
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used for road infrastructure inspection and monitoring. However, challenges such as scale variation, complex background interference, and the scarcity of annotated UAV datasets limit the performance of traditional segmentation models. To address these challenges, this study proposes UAV-YOLOv12, a multi-scale segmentation model specifically designed for UAV-based road imagery analysis. The proposed model builds on the YOLOv12 architecture by adding two key modules: a Selective Kernel Network (SKNet) that adjusts receptive fields dynamically, and a Partial Convolution (PConv) module that improves spatial focus and robustness in occluded regions. These enhancements help the model better detect small and irregular road features in complex aerial scenes. Experimental results on a custom UAV dataset collected from national highways in Wuxi, China, show that UAV-YOLOv12 achieves F1-scores of 0.902 for highways (road-H) and 0.825 for paths (road-P), outperforming the original YOLOv12 by 5% and 3.2%, respectively. Inference speed is maintained at 11.1 ms per image, supporting near real-time performance. Moreover, comparative evaluations show that UAV-YOLOv12 improves F1-scores over U-Net by 7.1% and 9.5% on the two road classes. The model also exhibits strong generalization ability, achieving F1-scores above 0.87 on public datasets such as VHR-10 and the Drone Vehicle dataset. These results demonstrate that the proposed UAV-YOLOv12 achieves high accuracy and robustness across diverse road environments and object scales. Full article
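
The selective-kernel idea, adjusting the receptive field per input, can be sketched as two convolution branches mixed by learned, input-dependent weights. The block below is a generic SKNet-style illustration, not the UAV-YOLOv12 module; channel counts and the reduction ratio are assumptions.

```python
# Hedged sketch of a selective-kernel style block (in the spirit of SKNet):
# two receptive-field branches fused with learned, input-dependent weights.
import torch
import torch.nn as nn

class SelectiveKernel(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)  # 5x5 field
        hidden = max(channels // reduction, 8)
        self.squeeze = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(channels, hidden, 1), nn.ReLU(inplace=True))
        self.select = nn.Conv2d(hidden, 2 * channels, 1)   # one weight per branch and channel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u3, u5 = self.branch3(x), self.branch5(x)
        z = self.squeeze(u3 + u5)                          # (B, hidden, 1, 1) global summary
        w = self.select(z).view(x.size(0), 2, x.size(1), 1, 1).softmax(dim=1)
        return w[:, 0] * u3 + w[:, 1] * u5                 # input-dependent branch mix

out = SelectiveKernel(64)(torch.randn(1, 64, 80, 80))
print(out.shape)   # torch.Size([1, 64, 80, 80])
```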

22 pages, 5706 KiB  
Article
Improved Dab-Deformable Model for Runway Foreign Object Debris Detection in Airport Optical Images
by Yang Cao, Yuming Wang, Yilin Zhu and Rui Yang
Appl. Sci. 2025, 15(15), 8284; https://doi.org/10.3390/app15158284 - 25 Jul 2025
Abstract
Foreign Object Debris (FOD) detection is paramount for airport operations. The precise identification and removal of FOD are critical for ensuring airplane flight safety. This study collected FOD images using optical imaging sensors installed at Urumqi Airport and created a custom FOD dataset based on these images. To address the challenges of small targets and complex backgrounds in the dataset, this paper proposes optimizations and improvements based on the advanced detection network Dab-Deformable. First, this paper introduces a Lightweight Deep-Shallow Feature Fusion algorithm (LDSFF), which integrates a hotspot sensing network and a spatial mapping enhancer aimed at focusing the model on significant regions. Second, we devise a Multi-Directional Deformable Channel Attention (MDDCA) module for rational feature weight allocation. Furthermore, a feedback mechanism is incorporated into the encoder structure, enhancing the model’s capacity to capture complex dependencies within sequential data. Additionally, when combined with a Threshold Selection (TS) algorithm, the model effectively mitigates the distraction caused by the serialization of multi-layer feature maps in the Transformer architecture. Experimental results on the optical small FOD dataset show that the proposed network achieves a robust performance and improved accuracy in FOD detection. Full article
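
The attention modules above are specific to the paper, but the broad idea of direction-aware channel attention can be illustrated with a generic gate that pools features along height and width separately. The sketch below is such a stand-in, not the MDDCA module; channel counts are illustrative.

```python
# Hedged sketch of a directional channel-attention gate (coordinate-attention style),
# used only to illustrate direction-aware reweighting; it is not the paper's MDDCA.
import torch
import torch.nn as nn

class DirectionalChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        hidden = max(channels // reduction, 8)
        self.mlp = nn.Sequential(nn.Conv2d(channels, hidden, 1), nn.ReLU(inplace=True),
                                 nn.Conv2d(hidden, channels, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pool_h = x.mean(dim=3, keepdim=True)        # (B, C, H, 1): pooled along width
        pool_w = x.mean(dim=2, keepdim=True)        # (B, C, 1, W): pooled along height
        attn = torch.sigmoid(self.mlp(pool_h) + self.mlp(pool_w))  # broadcasts to (B, C, H, W)
        return x * attn

out = DirectionalChannelAttention(32)(torch.randn(1, 32, 64, 64))
print(out.shape)   # torch.Size([1, 32, 64, 64])
```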

27 pages, 6456 KiB  
Article
An Open Multifunctional FPGA-Based Pulser/Receiver System for Intravascular Ultrasound (IVUS) Imaging and Therapy
by Amauri A. Assef, Paula L. S. de Moura, Joaquim M. Maia, Phuong Vu, Adeoye O. Olomodosi, Stephan Strassle Rojas and Brooks D. Lindsey
Sensors 2025, 25(15), 4599; https://doi.org/10.3390/s25154599 - 25 Jul 2025
Abstract
Coronary artery disease (CAD) is the third leading cause of disability and death globally. Intravascular ultrasound (IVUS) is the most commonly used imaging modality for the characterization of vulnerable plaques. The development of novel intravascular imaging and therapy devices requires dedicated open systems (e.g., for pulse sequences for imaging or thrombolysis), which are not currently available. This paper presents the development of a novel multifunctional FPGA-based pulser/receiver system for intravascular ultrasound imaging and therapy research. The open platform consists of a host PC with a Matlab-based software interface, an FPGA board, and a proprietary analog front-end board with state-of-the-art electronics for highly flexible transmission and reception schemes. The main features of the system include the capability to convert arbitrary waveforms into tristate bipolar pulses by using the PWM technique and by the direct acquisition of raw radiofrequency (RF) echo data. The results of a multicycle excitation pulse applied to a custom 550 kHz therapy transducer for acoustic characterization and a pulse-echo experiment conducted with a high-voltage, short-pulse excitation for a 19.48 MHz transducer are reported. Testing results show that the proposed system can be easily controlled to match the frequency and bandwidth required for different IVUS transducers across a broad class of applications. Full article
(This article belongs to the Special Issue Ultrasonic Imaging and Sensors II)
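
One of the system's key features, converting an arbitrary waveform into tri-state bipolar pulses via PWM, can be illustrated in a few lines of NumPy. The sketch below compares the waveform magnitude with a triangular carrier; the sample rate, carrier frequency, and thresholding are assumptions and do not reflect the platform's firmware.

```python
# Hedged sketch: turn an arbitrary waveform into tri-state (+1 / 0 / -1) pulses by
# pulse-width modulation against a triangular carrier. All parameters are illustrative.
import numpy as np

fs = 50e6                                  # sample rate (assumed)
f0 = 550e3                                 # target excitation frequency
f_carrier = 5e6                            # PWM carrier frequency (assumed)
t = np.arange(0, 10 / f0, 1 / fs)

target = np.sin(2 * np.pi * f0 * t)                      # waveform to approximate
carrier = 4 * np.abs((t * f_carrier) % 1.0 - 0.5) - 1    # triangle wave in [-1, 1]
# Fire +1 or -1 whenever the waveform magnitude exceeds the carrier level, else stay at 0.
tristate = np.where(np.abs(target) > (carrier + 1) / 2, np.sign(target), 0).astype(int)

print(np.unique(tristate))                 # [-1  0  1]
```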

25 pages, 3790 KiB  
Article
Studying Inverse Problem of Microscale Droplets Squeeze Flow Using Convolutional Neural Network
by Aryan Mehboudi, Shrawan Singhal and S.V. Sreenivasan
Fluids 2025, 10(8), 190; https://doi.org/10.3390/fluids10080190 - 24 Jul 2025
Abstract
We present a neural-network-based approach to solve the image-to-image translation problem in microscale droplets squeeze flow. A residual convolutional neural network is proposed to address the inverse problem: reconstructing a low-resolution (LR) droplet pattern image from a high-resolution (HR) liquid film thickness imprint. This enables the prediction of initial droplet configurations that evolve into target HR imprints after a specified spreading time. The developed neural network architecture aims at learning to tune the refinement level of its residual convolutional blocks by using function approximators that are trained to map a given film thickness to an appropriate refinement level indicator. We use multiple stacks of convolutional layers, the output of which is translated according to the refinement level indicators provided by the directly connected function approximators. Together with a non-linear activation function, the translation mechanism enables the HR imprint image to be refined sequentially in multiple steps until the target LR droplet pattern image is revealed. We believe that this work holds value for the semiconductor manufacturing and packaging industry. Specifically, it enables desired layouts to be imprinted on a surface by squeezing strategically placed droplets with a blank surface, eliminating the need for customized templates and reducing manufacturing costs. Additionally, this approach has potential applications in data compression and encryption. Full article
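
The refinement-indicator idea, conditioning residual blocks on the film thickness, can be approximated by a FiLM-style gate. The block below is a generic stand-in for illustration, not the authors' translation mechanism; layer sizes are assumptions.

```python
# Hedged sketch: a residual block whose refinement strength is conditioned on a scalar
# film thickness via a small MLP gate. A generic stand-in, not the paper's mechanism.
import torch
import torch.nn as nn

class ConditionedResBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Map a normalized film thickness to a per-channel refinement gate in (0, 1).
        self.indicator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(inplace=True),
                                       nn.Linear(32, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor, thickness: torch.Tensor) -> torch.Tensor:
        gate = self.indicator(thickness).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        return x + gate * self.conv(x)      # refinement strength depends on thickness

block = ConditionedResBlock(16)
y = block(torch.randn(2, 16, 64, 64), torch.rand(2, 1))
print(y.shape)   # torch.Size([2, 16, 64, 64])
```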

31 pages, 4937 KiB  
Article
Proximal LiDAR Sensing for Monitoring of Vegetative Growth in Rice at Different Growing Stages
by Md Rejaul Karim, Md Nasim Reza, Shahriar Ahmed, Kyu-Ho Lee, Joonjea Sung and Sun-Ok Chung
Agriculture 2025, 15(15), 1579; https://doi.org/10.3390/agriculture15151579 - 23 Jul 2025
Abstract
Precise monitoring of vegetative growth is essential for assessing crop responses to environmental changes. Conventional methods for geometric characterization of plants, such as RGB imaging, multispectral sensing, and manual measurements, often lack the precision or scalability needed for rice growth monitoring. LiDAR offers high-resolution, non-destructive 3D canopy characterization and has shown success in other crops such as vineyards, yet its application in rice cultivation across different growth stages remains underexplored. This study addresses that gap by using LiDAR for geometric characterization of rice plants at early, middle, and late growth stages. The objective of this study was to characterize rice plant geometry, including plant height, canopy volume, row distance, and plant spacing, using the proximal LiDAR sensing technique at three different growth stages. A commercial LiDAR sensor (model: VLP-16, Velodyne Lidar, San Jose, CA, USA) was mounted on a wheeled aluminum frame for data collection; preprocessing, visualization, and geometric feature characterization were performed using a commercial software solution, Python (version 3.11.5), and a custom algorithm. Manual measurements were compared with the LiDAR 3D point cloud measurements, demonstrating high precision in estimating plant geometric characteristics. LiDAR-estimated plant height, canopy volume, row distance, and spacing were 0.5 ± 0.1 m, 0.7 ± 0.05 m³, 0.3 ± 0.00 m, and 0.2 ± 0.001 m at the early stage; 0.93 ± 0.13 m, 1.30 ± 0.12 m³, 0.32 ± 0.01 m, and 0.19 ± 0.01 m at the middle stage; and 0.99 ± 0.06 m, 1.25 ± 0.13 m³, 0.38 ± 0.03 m, and 0.10 ± 0.01 m at the late growth stage. These measurements closely matched manual observations across the three stages. RMSE values ranged from 0.01 to 0.06 m and r² values ranged from 0.86 to 0.98 across parameters, confirming the high accuracy and reliability of proximal LiDAR sensing under field conditions. Although precision was achieved across growth stages, complex canopy structures under field conditions posed segmentation challenges. Further advances in point cloud filtering and classification are required to reliably capture such variability. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
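
Plant height and canopy volume can be derived from a point cloud with very simple geometry: a robust percentile for height and a convex hull for volume. The sketch below shows those generic steps on synthetic points; it is not the authors' custom algorithm, and the thresholds are assumptions.

```python
# Hedged sketch: derive plant height and canopy volume from a 3D point cloud with
# NumPy and SciPy. Percentile height and convex-hull volume are common, simple choices.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.uniform([-0.15, -0.15, 0.0], [0.15, 0.15, 0.9], size=(5000, 3))  # x, y, z in m

ground_z = np.percentile(points[:, 2], 1)            # robust ground estimate
top_z = np.percentile(points[:, 2], 99)              # robust canopy top
plant_height = top_z - ground_z

canopy = points[points[:, 2] > ground_z + 0.05]      # drop near-ground returns (assumed cutoff)
canopy_volume = ConvexHull(canopy).volume            # m^3, convex approximation

print(f"height ≈ {plant_height:.2f} m, canopy volume ≈ {canopy_volume:.3f} m^3")
```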

18 pages, 2028 KiB  
Article
Research on Single-Tree Segmentation Method for Forest 3D Reconstruction Point Cloud Based on Attention Mechanism
by Lishuo Huo, Zhao Chen, Lingnan Dai, Dianchang Wang and Xinrong Zhao
Forests 2025, 16(7), 1192; https://doi.org/10.3390/f16071192 - 19 Jul 2025
Abstract
The segmentation of individual trees holds considerable significance in the investigation and management of forest resources. Utilizing smartphone-captured imagery combined with image-based 3D reconstruction techniques to generate corresponding point cloud data can serve as a more accessible and potentially cost-efficient alternative for data acquisition compared to conventional LiDAR methods. In this study, we present a Sparse 3D U-Net framework for single-tree segmentation which is predicated on a multi-head attention mechanism. The mechanism functions by projecting the input data into multiple subspaces—referred to as “heads”—followed by independent attention computation within each subspace. Subsequently, the outputs are aggregated to form a comprehensive representation. As a result, multi-head attention facilitates the model’s ability to capture diverse contextual information, thereby enhancing performance across a wide range of applications. This framework enables efficient, intelligent, and end-to-end instance segmentation of forest point cloud data through the integration of multi-scale features and global contextual information. The introduction of an iterative mechanism at the attention layer allows the model to learn more compact feature representations, thereby significantly enhancing its convergence speed. In this study, Dongsheng Bajia Country Park and Jiufeng National Forest Park, situated in Haidian District, Beijing, China, were selected as the designated test sites. Eight representative sample plots within these areas were systematically sampled. Forest stand sequential photographs were captured using an iPhone, and these images were processed to generate corresponding point cloud data for the respective sample plots. This methodology was employed to comprehensively assess the model’s capability for single-tree segmentation. Furthermore, the generalization performance of the proposed model was validated using the publicly available dataset TreeLearn. The model’s advantages were demonstrated across multiple aspects, including data processing efficiency, training robustness, and single-tree segmentation speed. The proposed method achieved an F1 score of 91.58% on the customized dataset. On the TreeLearn dataset, the method attained an F1 score of 97.12%. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
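
The multi-head attention computation the abstract describes, projecting tokens into subspaces, attending within each, and concatenating, can be written compactly. The toy function below omits the learned projections of a full implementation and uses illustrative sizes unrelated to the Sparse 3D U-Net.

```python
# Hedged sketch of multi-head attention: split features into heads, attend within each
# head, then concatenate. A toy version without learned Q/K/V projections.
import torch

def multi_head_attention(x: torch.Tensor, num_heads: int) -> torch.Tensor:
    B, N, D = x.shape                       # batch, tokens (e.g. point features), feature dim
    d = D // num_heads
    q = x.view(B, N, num_heads, d).transpose(1, 2)    # (B, heads, N, d) per-head subspaces
    k, v = q, q                             # toy self-attention with shared, identity projections
    attn = torch.softmax(q @ k.transpose(-2, -1) / d**0.5, dim=-1)   # (B, heads, N, N)
    out = attn @ v                          # per-head context
    return out.transpose(1, 2).reshape(B, N, D)       # concatenate heads back to D

tokens = torch.randn(2, 256, 64)            # e.g. 256 down-sampled point features
print(multi_head_attention(tokens, num_heads=8).shape)   # torch.Size([2, 256, 64])
```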

28 pages, 1112 KiB  
Article
Customer Retention in the Philippine Food Sector: Health Measures, Market Access, and Strategic Adaptation After the COVID-19 Pandemic
by Ma. Janice J. Gumasing
Foods 2025, 14(14), 2535; https://doi.org/10.3390/foods14142535 - 19 Jul 2025
Abstract
This study investigates the critical determinants of customer retention in casual dining restaurants within the context of the post-pandemic “new normal.” Anchored in service quality and consumer behavior theories, the research examines the influences of food quality, health measures, perceived price, brand image, ambiance, and location on customer decision making. Using Partial Least Squares Structural Equation Modeling (PLS-SEM), data from 336 respondents in the National Capital Region, Philippines were analyzed to assess the relationships among these variables and their effects on restaurant selection and customer retention. The results reveal that food quality (β = 0.698, p < 0.05) exerts the strongest influence on restaurant selection, followed by health measures (β = 0.477, p = 0.001), perceived price (β = 0.378, p < 0.02), and brand image (β = 0.341, p < 0.035). Furthermore, health measures (β = 0.436, p = 0.002) and restaurant selection (β = 0.475, p < 0.05) significantly enhance customer retention, while ambiance and location were not found to be significant predictors. These findings offer theoretical contributions to the service quality and consumer trust literature and provide practical and policy-relevant insights for food establishments adapting to health-driven consumer expectations. The study highlights the need for the strategic integration of safety protocols, pricing value, and brand positioning to foster long-term loyalty and resilience in the evolving food service market. Full article
(This article belongs to the Section Sensory and Consumer Sciences)

33 pages, 15612 KiB  
Article
A Personalized Multimodal Federated Learning Framework for Skin Cancer Diagnosis
by Shuhuan Fan, Awais Ahmed, Xiaoyang Zeng, Rui Xi and Mengshu Hou
Electronics 2025, 14(14), 2880; https://doi.org/10.3390/electronics14142880 - 18 Jul 2025
Abstract
Skin cancer is one of the most prevalent forms of cancer worldwide, and early and accurate diagnosis critically impacts patient outcomes. Given the sensitive nature of medical data and its fragmented distribution across institutions (data silos), privacy-preserving collaborative learning is essential to enable knowledge-sharing without compromising patient confidentiality. While federated learning (FL) offers a promising solution, existing methods struggle with heterogeneous and missing modalities across institutions, which reduce diagnostic accuracy. To address these challenges, we propose an effective and flexible Personalized Multimodal Federated Learning framework (PMM-FL), which enables efficient cross-client knowledge transfer while maintaining personalized performance under heterogeneous and incomplete modality conditions. Our study makes three key contributions: (1) A hierarchical aggregation strategy that decouples multi-module aggregation from local deployment via global modular-separated aggregation and local client fine-tuning. Unlike conventional FL (which synchronizes all parameters in each round), our method adopts a frequency-adaptive synchronization mechanism, updating parameters based on their stability and functional roles. (2) A multimodal fusion approach based on multitask learning, integrating learnable modality imputation and attention-based feature fusion to handle missing modalities. (3) A custom dataset combining multi-year International Skin Imaging Collaboration (ISIC) challenge data (2018–2024) to ensure comprehensive coverage of diverse skin cancer types. We evaluate PMM-FL under diverse experimental settings, demonstrating its effectiveness with heterogeneous and incomplete modalities: it achieves 92.32% diagnostic accuracy with only a 2% drop under 30% modality missingness, together with a 32.9% reduction in communication overhead compared with baseline FL methods. Full article
(This article belongs to the Special Issue Multimodal Learning and Transfer Learning)
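
The frequency-adaptive synchronization idea, aggregating stable modules less often than volatile ones, can be sketched as FedAvg with per-module periods. The module names and periods below are assumptions for illustration, not PMM-FL's actual schedule.

```python
# Hedged sketch of frequency-adaptive federated averaging: each named module has its
# own synchronization period, so stable modules are aggregated less often.
import numpy as np

sync_period = {"image_encoder": 1, "metadata_encoder": 2, "fusion_head": 4}  # rounds (assumed)

def aggregate_round(round_idx, client_params, global_params):
    """FedAvg only the modules whose period divides the current round index."""
    for name, period in sync_period.items():
        if round_idx % period != 0:
            continue                                   # keep the stale global copy this round
        stacked = np.stack([c[name] for c in client_params])
        global_params[name] = stacked.mean(axis=0)     # uniform FedAvg over clients
    return global_params

# Toy run: 3 clients, 2 rounds, 4-parameter "modules".
clients = [{k: np.random.randn(4) for k in sync_period} for _ in range(3)]
globals_ = {k: np.zeros(4) for k in sync_period}
for r in range(1, 3):
    globals_ = aggregate_round(r, clients, globals_)
print({k: v.round(2).tolist() for k, v in globals_.items()})
```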

13 pages, 1471 KiB  
Article
Effect of X-Ray Tube Angulations and Digital Sensor Alignments on Profile Angle Distortion of CAD-CAM Abutments: A Pilot Radiographic Study
by Chang-Hun Choi, Seungwon Back and Sunjai Kim
Bioengineering 2025, 12(7), 772; https://doi.org/10.3390/bioengineering12070772 - 17 Jul 2025
Abstract
Purpose: This pilot study aimed to evaluate how deviations in X-ray tube head angulation and digital sensor alignment affect the radiographic measurement of the profile angle in CAD-CAM abutments. Materials and Methods: A mandibular model was used with five implant positions (central, buccal, and lingual offsets). Custom CAD-CAM abutments were designed with identical bucco-lingual direction contours and varying mesio-distal asymmetry for the corresponding implant positions. Periapical radiographs were acquired under controlled conditions by systematically varying vertical tube angulation, horizontal tube angulation, and horizontal sensor rotation from 0° to 20° in 5° increments for each parameter. Profile angles, interthread distances, and proximal overlaps were measured and compared with baseline STL data. Results: Profile angle measurements were significantly affected by both X-ray tube and sensor deviations. Horizontal tube angulation produced the greatest profile angle distortion, particularly in buccally positioned implants. Vertical x-ray tube angulations beyond 15° led to progressive underestimation of profile angles, while horizontal tube head rotation introduced asymmetric mesial–distal variation. Sensor rotation also caused marked interthread elongation, in some cases exceeding 100%, despite vertical projection being maintained. Profile angle deviations greater than 5° occurred in multiple conditions. Conclusions: X-ray tube angulation and sensor alignment influence the reliability of profile angle measurements. Radiographs with > 10% interthread elongation or crown overlap may be inaccurate and warrant re-acquisition. Special attention is needed when imaging buccally positioned implants. Full article
(This article belongs to the Section Biomedical Engineering and Biomaterials)
