Search Results (27,449)

Search Parameters:
Keywords = imaging system

17 pages, 1389 KiB  
Article
An Automatic Ear Temperature Monitoring Method for Group-Housed Pigs Adopting Infrared Thermography
by Changzhen Zhang, Xiaoping Wu, Deqin Xiao, Xude Zhang, Xiaopeng Lei and Sicong Lin
Animals 2025, 15(15), 2279; https://doi.org/10.3390/ani15152279 (registering DOI) - 4 Aug 2025
Abstract
The goal of this study was to develop an automated monitoring system based on infrared thermography (IRT) for the detection of group-housed pig ear temperature. The aim in the first part of the study was to recognize pigs’ ears by using neural network analysis (SwinStar-YOLO). In the second part of the study, the goal was to automatically extract the maximum and average values of the temperature in the ear region using morphological image processing and a temperature matrix. Our dataset (3600 pictures, 10,812 pig ears) was processed using 5-fold cross-validation before training the ear detection model. The model recognized pigs’ ears with a precision of 93.74% at the chosen intersection-over-union (IoU) threshold. Correlation analysis between manually extracted and algorithm-derived ear temperatures from 400 pig ear samples showed coefficients of determination (R²) of 0.97 for maximum and 0.88 for average values. This demonstrates that our proposed method is feasible and reliable for automatic pig ear temperature monitoring, serving as a powerful tool for early health warning. Full article
(This article belongs to the Special Issue Infrared Thermography in Animals)
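The reported 93.74% precision is defined against an intersection-over-union (IoU) threshold. As a reminder of how that overlap criterion is computed for a predicted and a ground-truth ear box, here is a minimal sketch; the function name, box format, and the 0.5 threshold are illustrative and not taken from the paper:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted ear box counts as a true positive only when IoU >= the threshold (e.g., 0.5).
print(iou((10, 10, 50, 40), (20, 15, 60, 45)))  # ~0.45 -> rejected at a 0.5 threshold
```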
25 pages, 1569 KiB  
Article
Real-Time Signal Quality Assessment and Power Adaptation of FSO Links Operating Under All-Weather Conditions Using Deep Learning Exploiting Eye Diagrams
by Somia A. Abd El-Mottaleb and Ahmad Atieh
Photonics 2025, 12(8), 789; https://doi.org/10.3390/photonics12080789 (registering DOI) - 4 Aug 2025
Abstract
This paper proposes an intelligent power adaptation framework for Free-Space Optics (FSO) communication systems operating under different weather conditions by exploiting a deep learning (DL) analysis of received eye diagram images. The system incorporates two Convolutional Neural Network (CNN) architectures, LeNet and Wide Residual Network (Wide ResNet), to perform regression tasks that predict received signal quality metrics such as the Quality Factor (Q-factor) and Bit Error Rate (BER) from the received eye diagram. These models are evaluated using Mean Squared Error (MSE) and the coefficient of determination (R² score) to assess prediction accuracy. Additionally, a custom CNN-based classifier is trained to determine whether the BER reading from the eye diagram exceeds a critical threshold of 10⁻⁴; this classifier achieves an overall accuracy of 99%, correctly detecting 194/195 “acceptable” and 4/5 “unacceptable” instances. Based on the predicted signal quality, the framework activates a dual-amplifier configuration comprising a pre-channel amplifier with a maximum gain of 25 dB and a post-channel amplifier with a maximum gain of 10 dB. The total gain of the amplifiers is adjusted to support the operation of the FSO system under all-weather conditions. The FSO system uses a 15 dBm laser source at 1550 nm. The DL models are tested on both internal and external datasets to validate their generalization capability. The results show that the regression models achieve strong predictive performance, and the classifier reliably detects degraded signal conditions, enabling real-time gain control of the amplifiers to maintain the required quality of transmission. The proposed solution supports robust FSO communication under challenging atmospheric conditions including dry snow, making it suitable for deployment in regions like Northern Europe, Canada, and Northern Japan. Full article
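For context on the two quality metrics regressed from the eye diagrams: for a Gaussian-noise, on-off-keyed link, Q-factor and BER are commonly related by BER ≈ ½·erfc(Q/√2). A minimal sketch of that conversion and of the 10⁻⁴ acceptability check described above follows; the function names and example Q values are illustrative, not from the paper:

```python
import math

def ber_from_q(q_factor: float) -> float:
    """Textbook approximation for Gaussian-noise OOK links: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2.0))

def link_acceptable(q_factor: float, ber_threshold: float = 1e-4) -> bool:
    """Mirrors the acceptability criterion above: is the BER below the critical threshold?"""
    return ber_from_q(q_factor) < ber_threshold

print(ber_from_q(6.0))        # ~1e-9, well within the acceptable region
print(link_acceptable(3.0))   # Q = 3 gives BER ~1.3e-3 -> False, so extra amplifier gain is needed
```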
24 pages, 6437 KiB  
Article
LEAD-YOLO: A Lightweight and Accurate Network for Small Object Detection in Autonomous Driving
by Yunchuan Yang, Shubin Yang and Qiqing Chan
Sensors 2025, 25(15), 4800; https://doi.org/10.3390/s25154800 (registering DOI) - 4 Aug 2025
Abstract
The accurate detection of small objects remains a critical challenge in autonomous driving systems, where improving detection performance typically comes at the cost of increased model complexity, conflicting with the lightweight requirements of edge deployment. To address this dilemma, this paper proposes LEAD-YOLO (Lightweight Efficient Autonomous Driving YOLO), an enhanced network architecture based on YOLOv11n that achieves superior small object detection while maintaining computational efficiency. The proposed framework incorporates three innovative components: First, the Backbone integrates a lightweight Convolutional Gated Transformer (CGF) module, which employs normalized gating mechanisms with residual connections, and a Dilated Feature Fusion (DFF) structure that enables progressive multi-scale context modeling through dilated convolutions. These components synergistically enhance small object perception and environmental context understanding without compromising network efficiency. Second, the neck features a hierarchical feature fusion module (HFFM) that establishes guided feature aggregation paths through hierarchical structuring, facilitating collaborative modeling between local structural information and global semantics for robust multi-scale object detection in complex traffic scenarios. Third, the head implements a shared feature detection head (SFDH) structure, incorporating shared convolution modules for efficient cross-scale feature sharing and detail enhancement branches for improved texture and edge modeling. Extensive experiments validate the effectiveness of LEAD-YOLO: on the nuImages dataset, the method achieves 3.8% and 5.4% improvements in mAP@0.5 and mAP@[0.5:0.95], respectively, while reducing parameters by 24.1%. On the VisDrone2019 dataset, performance gains reach 7.9% and 6.4% for corresponding metrics. These findings demonstrate that LEAD-YOLO achieves an excellent balance between detection accuracy and model efficiency, thereby showcasing substantial potential for applications in autonomous driving. Full article
(This article belongs to the Section Vehicular Sensing)
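The DFF idea described above, multi-scale context built from dilated convolutions, can be illustrated with a generic PyTorch sketch. This is not the paper's module; the class name, branch dilations, and channel count are placeholders:

```python
import torch
import torch.nn as nn

class DilatedContextBlock(nn.Module):
    """Generic multi-scale context block: parallel 3x3 convolutions with growing
    dilation (1, 2, 3) widen the receptive field, and their outputs are fused
    with a residual connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 3)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + sum(branch(x) for branch in self.branches)

feat = torch.randn(1, 64, 80, 80)            # a mid-level feature map
print(DilatedContextBlock(64)(feat).shape)   # torch.Size([1, 64, 80, 80]); spatial size is preserved
```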
30 pages, 15717 KiB  
Article
Channel Amplitude and Phase Error Estimation of Fully Polarimetric Airborne SAR with 0.1 m Resolution
by Jianmin Hu, Yanfei Wang, Jinting Xie, Guangyou Fang, Huanjun Chen, Yan Shen, Zhenyu Yang and Xinwen Zhang
Remote Sens. 2025, 17(15), 2699; https://doi.org/10.3390/rs17152699 - 4 Aug 2025
Abstract
In order to achieve 0.1 m resolution and fully polarimetric observation capabilities for airborne SAR systems, the adoption of a stepped-frequency modulation waveform combined with the polarization time-division transmit/receive (T/R) technique proves to be an effective technical approach. Considering the issue of range resolution degradation and paired echoes caused by multichannel amplitude–phase mismatch in fully polarimetric airborne SAR with 0.1 m resolution, an amplitude–phase error estimation algorithm based on echo data is proposed in this paper. Firstly, the subband amplitude spectrum correction curve is obtained by the statistical average of the subband amplitude spectrum. Secondly, the paired-echo broadening function is obtained by selecting high-quality sample points after single-band imaging, and the nonlinear phase error within the subbands is estimated via the Sinusoidal Frequency Modulation Fourier Transform (SMFT). Thirdly, based on the minimum entropy criterion of the synthesized compressed pulse image, residual linear phase errors between subbands are quickly acquired. Finally, two-dimensional cross-correlation of the image slice is utilized to estimate the positional deviation between polarization channels. This method requires only high-quality data samples from the echo data and then rapidly estimates both intra-band and inter-band amplitude/phase errors using SMFT and the minimum entropy criterion, respectively, offering low computational complexity and fast convergence. The effectiveness of this method is verified by the imaging results of the experimental data. Full article
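The residual linear phase errors are found by minimizing the entropy of the synthesized compressed-pulse image. The quantity typically minimized in such autofocus schemes is the Shannon entropy of the normalized image intensity; a minimal sketch with illustrative arrays, not the authors' processing chain:

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Shannon entropy of the normalized intensity of a (possibly complex) image;
    lower entropy corresponds to a better-focused image."""
    intensity = np.abs(img) ** 2
    p = intensity / intensity.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# A focused (peaky) image has lower entropy than a defocused (smeared) one.
focused = np.zeros((64, 64)); focused[32, 32] = 1.0
defocused = np.ones((64, 64))
print(image_entropy(focused), image_entropy(defocused))  # 0.0 vs ~8.3 (= ln 4096)
```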
25 pages, 2418 KiB  
Review
Contactless Vital Sign Monitoring: A Review Towards Multi-Modal Multi-Task Approaches
by Ahmad Hassanpour and Bian Yang
Sensors 2025, 25(15), 4792; https://doi.org/10.3390/s25154792 (registering DOI) - 4 Aug 2025
Abstract
Contactless vital sign monitoring has emerged as a transformative healthcare technology, enabling the assessment of vital signs without physical contact with the human body. This article comprehensively reviews the rapidly evolving landscape of the field, with particular emphasis on multi-modal sensing approaches and multi-task learning paradigms. We systematically categorize and analyze existing technologies based on sensing modalities (vision-based, radar-based, thermal imaging, and ambient sensing), integration strategies, and application domains. The paper examines how artificial intelligence has revolutionized this domain, transitioning from early single-modality, single-parameter approaches to sophisticated systems that combine complementary sensing technologies and simultaneously extract multiple vital sign parameters. We discuss the theoretical foundations and practical implementations of multi-modal fusion, analyzing signal-level, feature-level, decision-level, and deep learning approaches to sensor integration. Similarly, we explore multi-task learning frameworks that leverage the inherent relationships between vital sign parameters to enhance measurement accuracy and efficiency. The review also critically addresses persistent technical challenges, clinical limitations, and ethical considerations, including environmental robustness, cross-subject variability, sensor fusion complexities, and privacy concerns. Finally, we outline promising future directions, from emerging sensing technologies and advanced fusion architectures to novel application domains and privacy-preserving methodologies. This review provides a holistic perspective on contactless vital sign monitoring, serving as a reference for researchers and practitioners in this rapidly advancing field. Full article
(This article belongs to the Section Biomedical Sensors)
16 pages, 3834 KiB  
Article
Deep Learning Tongue Cancer Detection Method Based on Mueller Matrix Microscopy Imaging
by Hanyue Wei, Yingying Luo, Feiya Ma and Liyong Ren
Optics 2025, 6(3), 35; https://doi.org/10.3390/opt6030035 (registering DOI) - 4 Aug 2025
Abstract
Tongue cancer, the most aggressive subtype of oral cancer, presents critical challenges due to the limited number of specialists available and the time-consuming nature of conventional histopathological diagnosis. To address these issues, we developed an intelligent diagnostic system integrating Mueller matrix microscopy with deep learning to enhance diagnostic accuracy and efficiency. Through Mueller matrix polar decomposition and transformation, micro-polarization feature parameter images were extracted from tongue cancer tissues, and purity parameter images were generated by calculating the purity of the Mueller matrices. A multi-stage feature dataset of Mueller matrix parameter images was constructed using histopathological samples of tongue cancer tissues with varying stages. Based on this dataset, the clinical potential of Mueller matrix microscopy was preliminarily validated for histopathological diagnosis of tongue cancer. Four mainstream medical image classification networks—AlexNet, ResNet50, DenseNet121 and VGGNet16—were employed to quantitatively evaluate the classification performance for tongue cancer stages. DenseNet121 achieved the highest classification accuracy of 98.48%, demonstrating its potential as a robust framework for rapid and accurate intelligent diagnosis of tongue cancer. Full article
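The abstract does not state which purity measure is used to build the purity parameter images; a common scalar choice for the overall polarimetric purity of a Mueller matrix M = (m_ij) is Gil's depolarization index, shown here only for reference:

```latex
% Gil's depolarization index (overall polarimetric purity) of a Mueller matrix M = (m_{ij}):
P_\Delta \;=\; \sqrt{\frac{\sum_{i,j=0}^{3} m_{ij}^{2} \;-\; m_{00}^{2}}{3\, m_{00}^{2}}},
\qquad 0 \le P_\Delta \le 1,
% with P_\Delta = 1 for a non-depolarizing sample and P_\Delta = 0 for an ideal depolarizer.
```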
14 pages, 642 KiB  
Article
Cerebrospinal Fluid Volume and Other Intracranial Volumes Are Associated with Fazekas Score in Adults: A Single Center Experience
by Melike Elif Kalfaoglu, Zeliha Cosgun, Aysenur Buz Yasar, Abdullah Emre Sarioglu and Gulali Aktas
Medicina 2025, 61(8), 1411; https://doi.org/10.3390/medicina61081411 - 4 Aug 2025
Abstract
Background and Objectives: The objective of this research was to evaluate the correlation between volumetric measurements of subcortical cerebral regions and white matter hyperintensities classified according to the Fazekas scoring system. Materials and Methods: A total of 236 cases with cranial MRI studies were retrospectively analyzed. This study included patients aged over 45 years who had white matter hyperintensities and who did not have a prior stroke diagnosis. White matter hyperintensities were evaluated in axial FLAIR images according to the Fazekas grading scale. Patients with Fazekas scores of 0 or 1 formed group 1, and patients with scores of 2 or 3 formed group 2. MRI data processing and subcortical volumetric analyses were performed using the volBrain MRI brain volumetry system. Results: There were statistically significant differences between groups 1 and 2 in terms of cerebrospinal fluid (p < 0.001), total brain white and gray matter (p = 0.009), total cerebrum (p < 0.001), accumbens (p < 0.001), thalamus (p < 0.001), frontal lobe (p < 0.001), parietal lobe (p < 0.001), and lateral ventricle (p < 0.001) volumes. Conclusions: Our study finds a strong link between white matter hyperintensity burden and brain atrophy. This includes volume reductions in total brain white and gray matter, frontal and parietal lobe atrophy, increased cerebrospinal fluid (CSF), and atrophy in specific brain regions such as the accumbens and thalamus. Full article
(This article belongs to the Special Issue Magnetic Resonance in Various Diseases and Biomedical Applications)
28 pages, 6199 KiB  
Article
Dual Chaotic Diffusion Framework for Multimodal Biometric Security Using Qi Hyperchaotic System
by Tresor Lisungu Oteko and Kingsley A. Ogudo
Symmetry 2025, 17(8), 1231; https://doi.org/10.3390/sym17081231 - 4 Aug 2025
Abstract
The proliferation of biometric technology across various domains including user identification, financial services, healthcare, security, law enforcement, and border control introduces convenience in user identity verification while necessitating robust protection mechanisms for sensitive biometric data. While chaos-based encryption systems offer promising solutions, many existing chaos-based encryption schemes exhibit inherent shortcomings including deterministic randomness and constrained key spaces, often failing to balance security robustness with computational efficiency. To address this, we propose a novel dual-layer cryptographic framework leveraging a four-dimensional (4D) Qi hyperchaotic system for protecting biometric templates and facilitating secure feature matching operations. The framework implements a two-tier encryption mechanism where each layer independently utilizes a Qi hyperchaotic system to generate unique encryption parameters, ensuring template-specific encryption patterns that enhance resistance against chosen-plaintext attacks. The framework performs dimensional normalization of input biometric templates, followed by image pixel shuffling to permute pixel positions before applying dual-key encryption using the Qi hyperchaotic system and XOR diffusion operations. Templates remain encrypted in storage, with decryption occurring only during authentication processes, ensuring continuous security while enabling biometric verification. The proposed framework demonstrates exceptional randomness properties, validated through comprehensive NIST Statistical Test Suite analysis, passing all 15 tests with p-values consistently above the 0.01 threshold. Comprehensive security analysis reveals outstanding metrics: entropy values exceeding 7.99 bits, a key space of 10³²⁰, negligible correlation coefficients (<10⁻²), and robust differential attack resistance with an NPCR of 99.60% and a UACI of 33.45%. Empirical evaluation on standard CASIA Face and Iris databases demonstrates practical computational efficiency, achieving average encryption times of 0.50913 s per user template for 256 × 256 images. Comparative analysis against other state-of-the-art encryption schemes verifies the effectiveness and reliability of the proposed scheme and demonstrates our framework’s superior performance in both security metrics and computational efficiency. Our findings contribute to the advancement of biometric template protection methodologies, offering a balanced performance between security robustness and operational efficiency required in real-world deployment scenarios. Full article
(This article belongs to the Special Issue New Advances in Symmetric Cryptography)
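The NPCR and UACI figures quoted above are the standard differential-attack metrics for 8-bit cipher images; a minimal sketch of how they are computed (random test images here, not the CASIA templates):

```python
import numpy as np

def npcr_uaci(c1: np.ndarray, c2: np.ndarray):
    """Standard differential-attack metrics for two equally sized 8-bit cipher images:
    NPCR = percentage of differing pixels, UACI = mean absolute intensity change / 255."""
    npcr = 100.0 * (c1 != c2).mean()
    uaci = 100.0 * (np.abs(c1.astype(np.int16) - c2.astype(np.int16)) / 255.0).mean()
    return npcr, uaci

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (256, 256), dtype=np.uint8)
b = rng.integers(0, 256, (256, 256), dtype=np.uint8)
# Two statistically independent cipher images land near the ideal values of
# ~99.61% (NPCR) and ~33.46% (UACI), which the reported 99.60% / 33.45% approach.
print(npcr_uaci(a, b))
```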
19 pages, 455 KiB  
Article
A Quantum-Resistant FHE Framework for Privacy-Preserving Image Processing in the Cloud
by Rafik Hamza
Algorithms 2025, 18(8), 480; https://doi.org/10.3390/a18080480 (registering DOI) - 4 Aug 2025
Abstract
The advent of quantum computing poses an existential threat to the security of cloud services that handle sensitive visual data. Simultaneously, the need for computational privacy requires the ability to process data without exposing it to the cloud provider. This paper introduces and evaluates a hybrid quantum-resistant framework that addresses both challenges by integrating NIST-standardized post-quantum cryptography with optimized fully homomorphic encryption (FHE). Our solution uses CRYSTALS-Kyber for secure channel establishment and the CKKS FHE scheme with SIMD batching to perform image processing tasks on a cloud server without ever decrypting the image. This work provides a comprehensive performance analysis of the complete, end-to-end system. Our empirical evaluation demonstrates the framework’s practicality, detailing the sub-millisecond PQC setup costs and the amortized transfer of 33.83 MB of public FHE materials. The operational performance shows remarkable scalability, with server-side computations and client-side decryption completing within low single-digit milliseconds. By providing a detailed analysis of a viable and efficient architecture, this framework establishes a practical foundation for the next generation of privacy-preserving cloud applications. Full article
11 pages, 1709 KiB  
Article
Beam Profile Prediction of High-Repetition-Rate SBS Pulse Compression Using Convolutional Neural Networks
by Hongli Wang, Chaoshuai Liu, Panpan Yan and Qinglin Niu
Photonics 2025, 12(8), 784; https://doi.org/10.3390/photonics12080784 (registering DOI) - 4 Aug 2025
Abstract
Fast prediction of beam quality in SBS pulse compression for high-repetition-rate operation is urgently needed for SBS experimental parameter acquisition. In this study, a fast computational prediction model for SBS beam profiles is developed using a convolutional neural network (CNN) method, which is trained and validated using experimental data from SBS pulse compression experiments. The CNN method can predict beam spot images for experimental conditions in the range of 100–500 Hz repetition rates and 5–40 mJ injection energy. The proposed CNN-based SBS beam profile prediction model shows fast convergence of the loss function and an average error of 15% with respect to the experimental results, indicating high model accuracy. The CNN-based prediction model achieves an average error of 11.8% for beam profile prediction across various experimental conditions, demonstrating its potential for SBS beam profile characterization. The CNN method could provide a fast means of predicting the characteristic behavior of the beam intensity distribution in high-repetition-rate SBS pulse compression systems. Full article
3 pages, 468 KiB  
Interesting Images
Fatal Congenital Heart Disease in a Postpartum Woman
by Corina Cinezan, Camelia Bianca Rus, Mihaela Mirela Muresan and Ovidiu Laurean Pop
Diagnostics 2025, 15(15), 1952; https://doi.org/10.3390/diagnostics15151952 - 4 Aug 2025
Abstract
The image represents the post-mortem heart of a 28-year-old female patient, diagnosed in childhood with a complete common atrioventricular canal defect. At the time of diagnosis, the family refused surgery, as did the patient during her adulthood. Despite being advised against pregnancy, she became pregnant. On presentation to the hospital, she was cyanotic, with clubbed fingers, and hemodynamically unstable, in sinus rhythm, with Eisenmenger syndrome and respiratory failure partially responsive to oxygen. During pregnancy, owing to systemic vasodilatation, the right-to-left shunt increases, with more severe cyanosis and low cardiac output. Echocardiography revealed the complete common atrioventricular canal defect, with a single atrioventricular valve with severe regurgitation, right ventricular hypertrophy, pulmonary artery dilatation, severe pulmonary hypertension, and a hypoplastic left ventricle. The gestational age at delivery was 38 weeks. She gave birth to a healthy boy with an Apgar score of 10. Vaginal delivery was chosen by an interdisciplinary team, as cesarean delivery and the associated anesthesia were considered too risky. Three days later, the patient died. The autopsy revealed hepatomegaly, a greatly hypertrophied right ventricle with a purplish clot ascending the dilated pulmonary arteries, and a hypoplastic left ventricle with a narrowed chamber. A single valve was observed between the atria and ventricles, allowing all four heart chambers to communicate, together with an underdeveloped interventricular septum that was congenitally absent in its cranial third. These morphological changes indicate a complete common atrioventricular canal defect with right ventricular dominance, a rare and striking malformation that requires treatment in early childhood to be corrected. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
17 pages, 6471 KiB  
Article
A Deep Learning Framework for Traffic Accident Detection Based on Improved YOLO11
by Weijun Li, Liyan Huang and Xiaofeng Lai
Vehicles 2025, 7(3), 81; https://doi.org/10.3390/vehicles7030081 (registering DOI) - 4 Aug 2025
Abstract
The automatic detection of traffic accidents plays an increasingly vital role in advancing intelligent traffic monitoring systems and improving road safety. Leveraging computer vision techniques offers a promising solution, enabling rapid, reliable, and automated identification of accidents, thereby significantly reducing emergency response times. This study proposes an enhanced version of the YOLO11 architecture, termed YOLO11-AMF. The proposed model integrates a Mamba-Like Linear Attention (MLLA) mechanism, an Asymptotic Feature Pyramid Network (AFPN), and a novel Focaler-IoU loss function to optimize traffic accident detection performance under complex and diverse conditions. The MLLA module introduces efficient linear attention to improve contextual representation, while the AFPN adopts an asymptotic feature fusion strategy to enhance the expressiveness of the detection head. The Focaler-IoU further refines bounding box regression for improved localization accuracy. To evaluate the proposed model, a custom dataset of traffic accident images was constructed. Experimental results demonstrate that the enhanced model achieves precision, recall, mAP50, and mAP50–95 scores of 96.5%, 82.9%, 90.0%, and 66.0%, respectively, surpassing the baseline YOLO11n by 6.5%, 6.0%, 6.3%, and 6.3% on these metrics. These findings demonstrate the effectiveness of the proposed enhancements and suggest the model’s potential for robust and accurate traffic accident detection within real-world conditions. Full article
16 pages, 13514 KiB  
Article
Development of a High-Speed Time-Synchronized Crop Phenotyping System Based on Precision Time Protocol
by Runze Song, Haoyu Liu, Yueyang Hu, Man Zhang and Wenyi Sheng
Appl. Sci. 2025, 15(15), 8612; https://doi.org/10.3390/app15158612 (registering DOI) - 4 Aug 2025
Abstract
To address the problems of asynchronous acquisition times across multiple sensors in crop phenotyping systems and the high cost of acquisition equipment, this paper develops a low-cost synchronous crop phenotype acquisition system based on the PTP synchronization protocol, realizing synchronous acquisition of three types of crop data: visible light images, thermal infrared images, and laser point clouds. The paper proposes the Difference Structural Similarity Index Measure (DSSIM), combined with statistical indicators (average point number difference, average coordinate error), a distribution indicator (Chamfer distance), and the Hausdorff distance, to characterize the stability of the system. After 72 consecutive hours of synchronization testing on the timing boards, it was verified that the root mean square error of the synchronization time for each timing board reached the ns level. The synchronous trigger acquisition time for crop parameters under time synchronization was controlled at the microsecond level. Using pepper as the crop sample, 133 consecutive acquisitions were conducted. The acquisition success rate for the three phenotypic data types of pepper samples was 100%, with a DSSIM of approximately 0.96. The average point number difference and average coordinate error were both about 3%, while the Chamfer distance and Hausdorff distance were only 1.14 mm and 5 mm. This system can provide hardware support for multi-parameter acquisition and data registration in fast mobile crop phenotyping platforms, laying a reliable data foundation for crop growth monitoring, intelligent yield analysis, and prediction. Full article
(This article belongs to the Special Issue Smart Farming: Internet of Things (IoT)-Based Sustainable Agriculture)
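Since the synchronization layer is PTP (IEEE 1588), the basic clock-offset and path-delay computation from one Sync/Delay_Req timestamp exchange is worth recalling; the sketch below uses toy timestamps and is not the system's firmware:

```python
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """IEEE 1588 exchange: t1 = master sends Sync, t2 = slave receives it,
    t3 = slave sends Delay_Req, t4 = master receives it (seconds, each in its own clock).
    Returns (slave clock offset from master, mean one-way path delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

# Toy numbers: slave clock runs 50 us ahead of the master, symmetric 10 us path delay.
print(ptp_offset_and_delay(0.0, 60e-6, 100e-6, 60e-6))  # ~(5e-05, 1e-05)
```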
13 pages, 1283 KiB  
Communication
Clinical Performance of Analog and Digital 18F-FDG PET/CT in Pediatric Epileptogenic Zone Localization: Preliminary Results
by Oreste Bagni, Roberta Danieli, Francesco Bianconi, Barbara Palumbo and Luca Filippi
Biomedicines 2025, 13(8), 1887; https://doi.org/10.3390/biomedicines13081887 - 3 Aug 2025
Abstract
Background: Despite its central role in pediatric pre-surgical evaluation of drug-resistant focal epilepsy, conventional analog 18F-fluorodeoxyglucose (18F-FDG) PET/CT (aPET) systems often yield modest epileptogenic zone (EZ) detection rates (~50–60%). Silicon photomultiplier–based digital PET/CT (dPET) promises enhanced image quality, but its performance in pediatric epilepsy remains untested. Methods: We retrospectively analyzed 22 children (mean age 11.5 ± 2.6 years) who underwent interictal brain 18F-FDG PET/CT: 11 on an analog system (Discovery ST, 2018–2019) and 11 on a digital system (Biograph Vision 450, 2020–2021). Three blinded nuclear medicine physicians independently scored EZ localization and image quality (4-point scale); post-surgical histology and ≥1-year clinical follow-up served as reference. Results: The EZ was correctly identified in 8/11 analog scans (72.7%) versus 10/11 digital scans (90.9%). Average image quality was significantly higher with dPET (3.0 ± 0.9 vs. 2.1 ± 0.9; p < 0.05), and inter-reader agreement improved from good (ICC = 0.63) to excellent (ICC = 0.91). Conclusions: Our preliminary findings suggest that dPET enhances image clarity and reader consistency, potentially improving localization accuracy in pediatric epilepsy presurgical workups. Full article
28 pages, 3364 KiB  
Review
Principles, Applications, and Future Evolution of Agricultural Nondestructive Testing Based on Microwaves
by Ran Tao, Leijun Xu, Xue Bai and Jianfeng Chen
Sensors 2025, 25(15), 4783; https://doi.org/10.3390/s25154783 (registering DOI) - 3 Aug 2025
Abstract
Agricultural nondestructive testing technology is pivotal in safeguarding food quality assurance, safety monitoring, and supply chain transparency. While conventional optical methods such as near-infrared spectroscopy and hyperspectral imaging demonstrate proficiency in surface composition analysis, their constrained penetration depth and environmental sensitivity limit effectiveness in dynamic agricultural inspections. This review highlights the transformative potential of microwave technologies, systematically examining their operational principles, current implementations, and developmental trajectories for agricultural quality control. Microwave technology leverages dielectric response mechanisms to overcome traditional limitations, such as low-frequency penetration for grain silo moisture testing and high-frequency multi-parameter analysis, enabling simultaneous detection of moisture gradients, density variations, and foreign contaminants. Established applications span moisture quantification in cereal grains, oilseed crops, and plant tissues, while emerging implementations address storage condition monitoring, mycotoxin detection, and adulteration screening. The high-frequency branch of microwave–millimeter wave systems enhances analytical precision through molecular resonance effects and sub-millimeter spatial resolution, achieving trace-level contaminant identification. Current challenges fall into three areas: excessive absorption of low-frequency microwaves by high-moisture agricultural products, significant path loss of high-frequency microwave signals in complex environments, and the lack of a standardized dielectric database. In the future, it is essential to develop low-cost, highly sensitive, and portable systems based on solid-state microelectronics and metamaterials, and to utilize IoT and 6G communications to enable dynamic monitoring. This review not only consolidates the state-of-the-art but also identifies future innovation pathways, providing a roadmap for scalable deployment of next-generation agricultural NDT systems. Full article
(This article belongs to the Section Smart Agriculture)
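The dielectric response these microwave methods exploit is often described, to first order, by the single-relaxation Debye model; the mixture models used in practice are richer, but the textbook form is a useful reference:

```latex
% Single-relaxation Debye model of the complex permittivity:
\varepsilon^{*}(\omega) \;=\; \varepsilon_{\infty} + \frac{\varepsilon_{s}-\varepsilon_{\infty}}{1 + j\omega\tau}
\;=\; \varepsilon'(\omega) - j\,\varepsilon''(\omega)
% \varepsilon_{s}: static permittivity, \varepsilon_{\infty}: high-frequency permittivity,
% \tau: relaxation time. Moisture raises \varepsilon_{s} and the loss term \varepsilon'',
% which is what microwave moisture sensing measures.
```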