Search Results (12,459)

Search Parameters:
Keywords = AP-1

12 pages, 1391 KB  
Article
Enhancing Multiple Vehicle Collision Protections with Parallelization and Adaptive Data Compression
by Yuanzhi Zhao, Liwei Huang, Kun Hua and Xiaomin Jin
Electronics 2026, 15(6), 1322; https://doi.org/10.3390/electronics15061322 (registering DOI) - 22 Mar 2026
Abstract
Recent advancements in intelligent transportation systems have enabled smart vehicles to autonomously detect, predict, and respond to potential hazards in real time. However, achieving sub-second reaction performance remains challenging due to computational latency in sensor data processing. This paper presents an adaptive parallel processing framework that integrates multi-core concurrency and adjustable spatial down-sampling (compression) for real-time multi-vehicle collision prevention. We benchmark four operating modes (sequential/parallel × compressed/uncompressed) on a 22-thread CPU platform. Compared to the sequential uncompressed baseline, the proposed fork-compress mode reduces end-to-end pipeline latency by approximately 66%. Compared to the sequential compressed baseline, the reduction is smaller (≈24%), highlighting the importance of explicitly stating the baseline for headline claims. The scalability analysis is based on Amdahl’s Law and indicates an effective parallelizable fraction of about 25% under our implementation, with the remaining time dominated by I/O, synchronization, and coordination overhead. We define compression factor k as linear spatial down-sampling where both image width and height are divided by k (pixel area reduced to 1/k²). Empirical results show that moderate down-sampling (around k ≈ 4–6) provides the best latency–accuracy trade-off. A supporting detection study using YOLOv4-tiny on BDD100K demonstrates that down-sampling can significantly reduce mAP if the model is not retrained, and that compression-aware fine-tuning partially recovers the lost accuracy.
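The scalability claim in this abstract can be checked with a back-of-the-envelope calculation. The sketch below is illustrative, not the authors' code: it applies Amdahl's Law with the reported ≈25% parallelizable fraction and the 1/k² pixel-area reduction from down-sampling; the function names and the choice k = 4 are assumptions for illustration.

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's Law: overall speedup when a fraction p of the work
    runs perfectly in parallel on n workers."""
    return 1.0 / ((1.0 - p) + p / n)

# Reported effective parallel fraction (~25%) on a 22-thread CPU:
# parallelism alone buys only ~1.31x, so most of the reported 66%
# latency reduction must come from compression, not concurrency.
speedup = amdahl_speedup(0.25, 22)

# Down-sampling width and height by k shrinks the pixel count to 1/k**2.
k = 4
pixel_fraction = 1.0 / k**2  # 0.0625 at k = 4
```

This makes the abstract's point concrete: with a 25% parallel fraction, extra cores saturate quickly, so the compression factor k is the dominant latency lever.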

27 pages, 3171 KB  
Article
Research on Lightweight Apple Detection and 3D Accurate Yield Estimation for Complex Orchard Environments
by Bangbang Chen, Xuzhe Sun, Xiangdong Liu, Baojian Ma and Feng Ding
Horticulturae 2026, 12(3), 393; https://doi.org/10.3390/horticulturae12030393 (registering DOI) - 22 Mar 2026
Abstract
Severe foliage occlusion and dynamically changing lighting conditions in complex orchard environments pose significant challenges for visual perception systems in automated apple harvesting, including low detection accuracy, poor robustness, and insufficient real-time performance. To address these issues, this study proposes an improved lightweight detection network based on YOLOv11, named YOLO-WBL, along with a precise yield estimation algorithm based on 3D point clouds, termed CLV. The YOLO-WBL network is optimized in three aspects: (1) A C3K2_WT module integrating wavelet transform is introduced into the backbone network to enhance multi-scale feature extraction capability; (2) A weighted bidirectional feature pyramid network (BiFPN) is adopted in the neck network to improve the efficiency of multi-scale feature fusion; (3) A lightweight shared convolution separated batch normalization detection head (Detect-SCGN) is designed to significantly reduce the parameter count while maintaining accuracy. Based on this detection model, the CLV algorithm deeply integrates depth camera point cloud information through 3D coordinate mapping, irregular point cloud reconstruction, and convex hull volume calculation to achieve accurate estimation of individual fruit volume and total yield. Experimental results demonstrate that: (1) The YOLO-WBL model achieves a precision of 93.8%, recall of 79.3%, and mean average precision (mAP@0.5) of 87.2% on the apple test set; (2) The model size is only 3.72 MB, a reduction of 28.87% compared to the baseline model; (3) When deployed on an NVIDIA Jetson Xavier NX edge device, its inference speed reaches 8.7 FPS, meeting real-time requirements; (4) In scenarios with an occlusion rate below 40%, the mean absolute percentage error (MAPE) of yield estimation can be controlled within 8%. Experimental validation was conducted using apple images selected from the dataset under varying lighting intensities and fruit occlusion conditions. The results demonstrate that the CLV algorithm significantly outperforms traditional average-weight-based estimation methods. This study provides an efficient, accurate, and deployable visual solution for intelligent apple harvesting and yield estimation in complex orchard environments, offering practical reference value for advancing smart orchard production.
(This article belongs to the Special Issue AI for a Precision and Resilient Horticulture)
16 pages, 1756 KB  
Article
Evaluating Performance Limitations in Aquaponic vs. Hydroponic: Dynamics of Nutrient Release by Fish and Accumulation Rate in Plants
by Syed Ejaz Hussain Mehdi, Aparna Sharma, Suleman Shahzad, Woochang Kang, Sandesh Pandey, Byung-Jun Park, Hyuck-Soo Kim and Sang-Eun Oh
Water 2026, 18(6), 742; https://doi.org/10.3390/w18060742 (registering DOI) - 22 Mar 2026
Abstract
Aquaponics (AP) combines aquaculture and hydroponic systems, developed on the waste-to-wealth principle. This study compared the plant growth and overall productivity of an aquaponic system (AP) with a controlled hydroponic system (HP) to assess the AP system’s performance and identify its performance-limiting factors. The comparative study spanned a 35-day period, supported by batch tests of the nutrient accumulation rate in plants and the NH4+-N excretion rate by fish as a baseline for the system design. HP performed better in terms of plant growth, showing a mean plant fresh weight of 165.6 ± 3.01 g versus 147.0 ± 4.6 g for AP. Nutrient accumulation was better in HP for K and P; however, Ca2+, Mg2+, and Fe accumulation was higher in AP plants. The AP system supported better fish growth of 31.95 ± 3.21% (FCR 1.29 ± 0.1, SGR 0.79 ± 0.06, and PER 2.24 ± 0.18) and moderate plant biomass production. Further system design modifications and integrations are required to optimize the nutrient availability and sustainability of AP systems.
(This article belongs to the Special Issue Advanced Aquaculture Water Quality Management Research)

16 pages, 1437 KB  
Review
Environmental Regulation of 2-Acetyl-1-pyrroline Biosynthesis in Fragrant Rice: From Metabolic Pathways to Sustainable Quality Management
by Junjun Guo, Junyi Miao, Jin Chen, Deqian Huang, Chuyi Wang and Jiancheng Wen
Genes 2026, 17(3), 349; https://doi.org/10.3390/genes17030349 (registering DOI) - 22 Mar 2026
Abstract
The market value of fragrant rice is largely defined by the presence and intensity of its aroma, which is primarily attributed to the volatile compound 2-acetyl-1-pyrroline (2-AP). The biosynthesis of 2-AP is chiefly governed by recessive alleles of the badh2 gene. Nevertheless, 2-AP accumulation is also profoundly shaped by environmental factors and agronomic management. Field practices—such as balanced nitrogen and potassium fertilization, supplementation with trace elements, and application of plant growth regulators like methyl jasmonate—promote 2-AP synthesis by increasing precursor availability and enhancing the activity of key enzymes. Additionally, tillage systems, alternate wetting and drying irrigation, optimal planting density, and harvest timing significantly affect aroma quality. Abiotic stresses, including moderate drought, salinity, optimal temperatures around 25 °C, and low light during grain filling, can also stimulate 2-AP accumulation, often through shifts in proline metabolism and activation of stress-responsive pathways involving GABA and methylglyoxal. Despite the promise of these strategies, several challenges persist, such as the common trade-off between yield and aroma intensity, complex genotype-by-environment interactions, and incomplete elucidation of the molecular mechanisms involved. Moving forward, integrating multi-omics analyses with smart agriculture technologies will be essential to unravel the regulatory networks underlying aroma formation and to advance the breeding of high-yielding fragrant rice varieties with stable aroma traits under changing climate scenarios.
(This article belongs to the Section Genes & Environments)

20 pages, 39023 KB  
Article
Lightweight Insulator Defect Detection in High-Resolution UAV Imagery via System-Level Co-Design
by Yujie Zhu, Guanhua Chen, Linghao Zhang, Jiajun Zhou, Junwei Kuang and Jiangxiong Zhu
Remote Sens. 2026, 18(6), 953; https://doi.org/10.3390/rs18060953 (registering DOI) - 21 Mar 2026
Abstract
The inspection of minuscule insulator defects from high-resolution (HR) UAV imagery presents a significant algorithmic challenge. The severe scale mismatch between HR images and low-resolution model inputs often leads to feature distortion for sparsely distributed targets. To address these issues, this paper proposes an integrated data–model collaborative framework. At the data level, an offline label-guided optimal tiling (LGOT) strategy is introduced to alleviate scale mismatch by curating information-dense training tiles. At the model level, we design the semi-decoupled prior-driven detection head (SDPD-Head), which leverages evolutionary priors to stabilize the learning of microscopic spatial features. During inference, an online inference-time adaptive tiling (ITAT) strategy is used to match the spatial scale distribution between training and inference and to reduce feature loss caused by direct downscaling. Experiments on a real-world inspection dataset show that the proposed framework achieves an mAP@50 of 92.9% with 2.17 M parameters and 4.7 GFLOPs.

26 pages, 11062 KB  
Article
Rapid Extraction of Tea Bud Phenotypic Parameters ‘In Situ’ Combining Key Point Recognition and Depth Image Fusion
by Yang Guo, Yiyong Chen, Weihao Yao, Junshu Wang, Jianlong Li, Bo Zhou, Junhong Zhao and Jinchi Tang
Agriculture 2026, 16(6), 704; https://doi.org/10.3390/agriculture16060704 (registering DOI) - 21 Mar 2026
Abstract
Real-time measurement of tea bud phenotypes via mobile devices is constrained by model lightweighting challenges, and non-contact, key point-based measurement of tea bud phenotypes remains largely unexplored. Information on the growth posture of tea buds is an important basis for determining tea maturity grades, quality monitoring, and tea breeding. Therefore, this work develops a deep learning-enabled YOLOv8p-Tea model to estimate key point information of tea bud posture and automatically obtain three-dimensional point cloud information of tea buds by integrating depth information, thereby achieving in situ measurement of tea bud phenotypic parameters. The model is trained and validated using a tea bud (one-bud-three-leaf) image dataset, and its effectiveness is demonstrated through experiments. Compared to the YOLOv8p-pose model, the proposed model achieves a mAP50 of 98.3% and a precision of 97% with only 0.72 M parameters, improving mAP50 and precision by 1.5% and 1.9%, respectively, while reducing the parameter count by 25%. To validate the accuracy of phenotypic extraction, the model was deployed on edge devices, and 30 tea buds with one bud and three leaves were randomly selected in a tea garden. The final in situ measurements showed an MRE of 6.63%. Experimental findings indicate that the developed method not only effectively estimates tea bud posture but also accurately measures tea bud phenotypes in situ, with potential applications in smart tea garden construction and tea breeding.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
21 pages, 22338 KB  
Article
Nighttime Driver Fatigue Detection Based on Real-Time Joint Face and Facial Landmarks Detection
by Zhuofan Huang, Shangkun Liu, Jingli Huang and Jie Huang
Modelling 2026, 7(2), 60; https://doi.org/10.3390/modelling7020060 (registering DOI) - 21 Mar 2026
Abstract
Driver fatigue detection (DFD) in low-light nighttime driving environments is crucial for road safety, but it remains challenging due to degraded image quality and computational constraints. This paper proposes a real-time three-stage framework specifically designed for nighttime driver fatigue detection, integrating low-light image enhancement, joint face and facial landmark detection, and geometry-based fatigue judgment. In the initial stage, the framework utilizes the Zero-Reference Deep Curve Estimation (Zero-DCE) algorithm to improve the visual quality of input images under low-light conditions. Subsequently, a novel lightweight single-stage detector, You Only Look Once for Joint Face and Facial Landmark Detection (YOLOJFF), is introduced for efficient joint localization. Finally, fatigue judgment is performed in real-time by calculating the Eye Aspect Ratio (EAR) and Mouth Aspect Ratio (MAR) from the detected landmarks and using a sliding time window strategy. Experimental results demonstrate that the enhancement module significantly improves detection performance. The YOLOJFF model achieves a favorable balance, with 90.9% precision, 87.6% mean Average Precision (mAP), and a Normalized Mean Error (NME) of 5.2, while requiring only 3.7 million (M) parameters and running at 107.5 FPS. The proposed framework provides a robust and efficient solution for real-time DFD in nighttime scenarios.
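The EAR-based judgment this abstract describes follows the standard six-landmark formula EAR = (|p2−p6| + |p3−p5|) / (2·|p1−p4|). A minimal sketch of that computation plus a sliding-window rule is shown below; the threshold and window values are illustrative assumptions, not the paper's tuned parameters.

```python
import math


def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Standard six-landmark EAR; approaches 0 as the eye closes.
    p1/p4 are the eye corners, p2/p6 and p3/p5 are vertical pairs."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))


def is_fatigued(ear_series, thresh=0.2, window=30, closed_ratio=0.8):
    """Sliding-window rule: fatigue if the eye reads as closed (EAR
    below thresh) in at least closed_ratio of the last `window` frames."""
    recent = ear_series[-window:]
    if not recent:
        return False
    closed = sum(1 for e in recent if e < thresh)
    return closed / len(recent) >= closed_ratio
```

The MAR-based yawning check works analogously on mouth landmarks, with the inequality reversed (a yawn raises the ratio).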

18 pages, 4159 KB  
Article
Advancing Breast Cancer Lesion Analysis in Real-Time Sonography Through Multi-Layer Transfer Learning and Adaptive Tracking
by Suliman Thwib, Radwan Qasrawi, Ghada Issa, Razan AbuGhoush, Hussein AlMasri and Marah Qawasmi
Mach. Learn. Knowl. Extr. 2026, 8(3), 82; https://doi.org/10.3390/make8030082 (registering DOI) - 21 Mar 2026
Abstract
Background: Real-time and accurate analysis of breast ultrasounds is crucial for diagnosis but remains challenging due to issues like low image contrast and operator dependency. This study aims to address these challenges by developing an integrated framework for real-time lesion detection and tracking. Methods: The proposed system combines Contrast-Limited Adaptive Histogram Equalization (CLAHE) for image preprocessing, a transfer learning-enhanced YOLOv11 model following a continual learning paradigm for cross-center generalization in lesion detection, and a novel Detection-Based Tracking (DBT) approach that integrates Kernelized Correlation Filters (KCF) with periodic detection verification. The framework was evaluated on a dataset comprising 11,383 static images and 40 ultrasound video sequences, with a subset verified through biopsy and the remainder annotated by two radiologists based on radiological reports. Results: The proposed framework demonstrated high performance across all components. The transfer learning strategy (TL12) significantly improved detection outcomes, achieving a mean Average Precision (mAP) of 0.955, a sensitivity of 0.938, and an F1 score of 0.956. The DBT method (KCF + YOLO) achieved high tracking accuracy, with a success rate of 0.984, an Intersection over Union (IoU) of 0.85, and real-time operation at 54 frames per second (FPS) with a latency of 7.74 ms. The use of CLAHE preprocessing was shown to be a critical factor in improving both detection and tracking stability across diverse imaging conditions. Conclusions: This research presents a robust, fully integrated framework that bridges the gap between speed and accuracy in breast ultrasound analysis. The system’s high performance and real-time efficiency underscore its strong potential for clinical adoption to enhance diagnostic workflows, reduce operator variability, and improve breast cancer assessment.
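The tracking accuracy above is reported as Intersection over Union. The sketch below is the textbook box-IoU definition, not the authors' implementation; corner-format (x1, y1, x2, y2) boxes are assumed.

```python
def iou(box_a, box_b):
    """IoU for two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero so disjoint boxes contribute no intersection area.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

An average IoU of 0.85 over a video sequence means the tracked box and the annotated box overlap far more than they differ on a typical frame, which is why it pairs naturally with the reported 0.984 success rate.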

29 pages, 9360 KB  
Article
Spatial Relation Reasoning Based on Keypoints for Railway Intrusion Detection and Risk Assessment
by Shanping Ning, Feng Ding and Bangbang Chen
Appl. Sci. 2026, 16(6), 3026; https://doi.org/10.3390/app16063026 - 20 Mar 2026
Abstract
Foreign object intrusion in railway tracks is a major threat to train operation safety, yet current detection methods face challenges in identifying small distant targets and adapting to low-light conditions. Moreover, existing systems often lack the ability to assess intrusion risk levels, limiting real-time warning and graded response capabilities. To address these gaps, this paper proposes a novel method for intrusion detection and risk assessment based on keypoint spatial discrimination. First, an XS-BiSeNetV2-based track segmentation network is developed, incorporating cross-feature fusion and spatial feature recalibration to improve track extraction accuracy in complex scenes. Second, an enhanced STI-YOLO detection model is introduced, integrating a Shuffle attention mechanism for better feature interaction, a high-resolution Transformer detection head to improve small-target sensitivity, and the Inner-IoU loss function to refine bounding box regression. Detected targets’ bottom keypoints are then analyzed relative to track boundaries to determine intrusion direction. By combining lateral distance and motion state features, a multi-level risk classification system is established for quantitative threat assessment. Experiments on the RailSem19 and GN-rail-Object datasets show that the method achieves a track segmentation mIoU of 88.19% and a detection mAP of 82.6%. The risk assessment module effectively quantifies threats across scenarios and maintains stable performance under low-light and strong-glare conditions. This work offers a quantifiable risk assessment solution for intelligent railway safety systems.
27 pages, 6761 KB  
Article
An Approach to Crayfish Weight Estimation Based on Pose Awareness
by Xuhui Ye, Mingyang He, Jun Wang, Lilu Huang, Jing Xu, Rihui Zhang and Bo Li
Appl. Sci. 2026, 16(6), 3019; https://doi.org/10.3390/app16063019 - 20 Mar 2026
Abstract
To address the challenges of low accuracy and poor robustness in industrial crayfish weight estimation caused by variable postures, this paper proposes a lightweight method that integrates pose awareness. First, a multi-task perception model, Crayfish-YOLO, is developed based on the YOLOv8s-Seg framework. By reconstructing the backbone with MobileNetV3 and integrating Coordinate Attention (CA), CARAFE upsampling, and the Wise Intersection over Union (Wise-IoU) loss function, the model is significantly compressed while enhancing its ability to output high-fidelity pixel-level masks and pose categories. Second, a pose-adaptive weight estimation strategy is proposed, which leverages perceived pose information to dynamically invoke the optimal regression model from a pre-constructed heterogeneous model library. Using seven core geometric features extracted from the segmentation masks, the system achieves precise weight estimation. Experimental results on a self-built dataset show that Crayfish-YOLO reduces parameters by 75.2% compared to YOLOv8s-Seg, while core segmentation accuracy (mAP50–95 (Seg)) improves by 1.1%. The integrated end-to-end system achieves a Mean Absolute Error (MAE) of 2.1 g and a mean coefficient of determination (R2) of 0.92, significantly outperforming comparative algorithms. This research provides an efficient visual perception and estimation solution for the automated grading of crayfish and similar non-rigid aquatic products.
27 pages, 1393 KB  
Systematic Review
Computer Vision-Based Detection of Agonistic Behaviors in Pigs: Advances and Applications for Precision Livestock Farming
by Md Kamrul Hasan, Hong-Seok Mun, Ahsan Mehtab, Jin-Gu Kang, Md Sharifuzzaman, Eddiemar B. Lagua, Young-Hwa Kim, Hae-Rang Park and Chul-Ju Yang
Agriculture 2026, 16(6), 700; https://doi.org/10.3390/agriculture16060700 (registering DOI) - 20 Mar 2026
Abstract
Agonistic behaviors such as aggression, ear biting, and tail biting remain major challenges for pig welfare, particularly during the weaning and growing periods. Computer vision (CV) technologies are emerging as scalable tools for non-invasive monitoring of these behaviors. This systematic review summarizes recent advances in CV-based detection of agonistic behaviors in pigs and identifies factors influencing their reliability and commercial adoption. Following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines, a structured search of Scopus, Web of Science, and PubMed identified 42 eligible studies. Most studies employ deep learning approaches, including you only look once (YOLO)-based detectors and spatio-temporal models, achieving detection accuracy of up to 97% for behaviors such as head knocking, head-to-body pushing, and tail biting, typically evaluated under controlled conditions using mAP@0.5. Three key findings emerged: rapid progress in deep learning-based detection; methodological heterogeneity in behavioral definitions, validation strategies, and annotation protocols; and a gap between high detection accuracy and demonstrated improvements in welfare or productivity. Progress is limited by scarce cross-farm validation, inconsistent bout definitions, reliance on manual annotations, and weak integration with physiological and production indicators. Future research should prioritize standardized behavioral definitions, multimodal integration, predictive modeling, and rigorous external validation.
(This article belongs to the Special Issue Computer Vision Analysis Applied to Farm Animals)
24 pages, 2603 KB  
Article
Communication-Fairness Trade-Offs in Federated Learning for 6G Resource Allocation: A 200 Client Study
by Nizamuddin Maitlo, Mahmood Hussain Shah, Abdullah Maitlo, Ghulam Mustafa, Kaleem Arshid and Nooruddin Noonari
Inventions 2026, 11(2), 31; https://doi.org/10.3390/inventions11020031 - 20 Mar 2026
Abstract
Resource allocation in sixth-generation (6G) networks must meet throughput, latency, and reliability targets while network conditions keep changing. At the same time, the telemetry needed to train good models is distributed across many devices and edge nodes, so sending it to a central server can violate privacy or data-sharing constraints. Federated learning (FL) helps, but two practical concerns usually determine whether it works in practice: how much communication is needed to achieve strong performance, and whether weaker (tail) clients benefit, not only the average client. In this study, we run large-scale FL on 6G telemetry with 200 clients and quantify the communication–fairness trade-off. We evaluate FedAvg and FedProx under multiple settings and benchmark them against a strong centralized model and a local-only baseline. Results are reported as mean ± 95% confidence intervals over five random seeds. We measure the accuracy, macro-F1, AUC, and AP, and we also focus on tail behavior using the worst eligible client accuracy, p10 client accuracy, and fairness gap. By plotting the accuracy/macro-F1 against cumulative communication (bytes), we show that some configurations match the average performance while transmitting far less data. Finally, we find that the worst client performance improves early and then stabilizes, and a sensitivity study suggests that FedProx’s μ has a limited impact in this setup. These findings offer actionable guidance for 6G operators and system designers by quantifying how participation and dropout policies translate into concrete communication budgets and tail client behavior.
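The FedAvg baseline evaluated above aggregates client updates by dataset-size-weighted averaging. A minimal sketch of that aggregation step follows; the function name and toy parameter vectors are illustrative assumptions, not the paper's code.

```python
def fedavg(client_params, client_sizes):
    """FedAvg aggregation: average the clients' parameter vectors,
    weighting each client by its local dataset size."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(w[i] * n for w, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]


# Two clients: the larger client (3 samples) dominates the average,
# which is exactly why tail-client metrics deserve separate reporting.
merged = fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 3])  # [2.5, 3.5]
```

FedProx keeps the same aggregation but adds a proximal term μ/2·‖w − w_global‖² to each client's local objective, which is the μ whose sensitivity the study examines.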

21 pages, 4335 KB  
Article
Real-Time Small UAV Detection in Complex Airspace Using YOLOv11 with Residual Attention and High-Resolution Feature Enhancement
by Chuang Han, Md Redwan Ullah, Amrul Kayes, Khalid Hasan, Md Abdur Rouf, Md Rakib Hasan, Shen Tao, Guo Gengli and Mohammad Masum Billah
J. Imaging 2026, 12(3), 140; https://doi.org/10.3390/jimaging12030140 - 20 Mar 2026
Abstract
Detecting small unmanned aerial vehicles (UAVs) in complex airspace presents significant challenges due to their minimal pixel footprint, resemblance to birds, and frequent occlusion. To address these issues, we propose YOLOv11-ResCBAM, a novel real-time detection framework that integrates a Residual Convolutional Block Attention Module (ResCBAM) and a high-resolution P2 detection head into the YOLOv11 architecture. ResCBAM enhances channel and spatial feature refinement while preserving original feature contexts through residual connections, and the P2 head maintains fine spatial details crucial for small-object localization. Evaluated on a custom dataset of 4917 images (11,733 after augmentation) across three classes (drone, bird, airplane), our model achieves a mean average precision at the 0.5–0.95 IoU threshold (mAP@0.5–0.95) of 0.845, representing a 7.9% improvement over the baseline YOLOv11n, while maintaining real-time inference at 50.51 FPS. Cross-dataset validation on VisDrone2019-DET and UAVDT benchmarks demonstrates promising generalization trends. This work demonstrates the effectiveness of the proposed approach for UAV surveillance systems, balancing detection accuracy with computational efficiency for deployment in security-critical environments.
(This article belongs to the Topic Computer Vision and Image Processing, 3rd Edition)
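The mAP@0.5–0.95 metric reported above averages per-class average precision over a sweep of IoU thresholds. The core AP computation for one class at one threshold can be sketched as below, in the simple all-point (non-interpolated) form; this is an assumption for illustration, since benchmarks differ in their interpolation rules.

```python
def average_precision(ranked_hits, num_gt):
    """AP for one class: ranked_hits lists detections sorted by
    descending confidence, True where the detection matched a
    ground-truth box at the chosen IoU threshold. Sums precision
    at each true positive and normalizes by the ground-truth count."""
    tp = 0
    precision_sum = 0.0
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precision_sum += tp / rank
    return precision_sum / num_gt if num_gt else 0.0


# Ranks 1 and 3 are true positives out of 2 ground truths:
# AP = (1/1 + 2/3) / 2, about 0.833.
```

mAP@0.5–0.95 then repeats this per class at IoU thresholds 0.50, 0.55, …, 0.95 and averages everything, which is why it is a stricter score than mAP@0.5 alone.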

22 pages, 6052 KB  
Article
HSMD-YOLO: An Anti-Aliasing Feature-Enhanced Network for High-Speed Microbubble Detection
by Wenda Luo, Yongjie Li and Siguang Zong
Algorithms 2026, 19(3), 234; https://doi.org/10.3390/a19030234 - 20 Mar 2026
Abstract
Underwater micro-bubble detection entails multiple challenges, including diminutive target sizes, sparse pixel information, pronounced specular highlights and water scattering, indistinct bubble boundaries, and adhesion or overlap between instances. To address these issues, we propose HSMD-YOLO, an improved detector tailored for high-resolution micro-bubble detection [...] Read more.
Underwater micro-bubble detection entails multiple challenges, including diminutive target sizes, sparse pixel information, pronounced specular highlights and water scattering, indistinct bubble boundaries, and adhesion or overlap between instances. To address these issues, we propose HSMD-YOLO, an improved detector tailored for high-resolution micro-bubble detection and built upon YOLOv11. The model incorporates three novel components: the Scale Switch Block (SSB), a scale-transformation module that suppresses artifacts and background noise, thereby stabilizing edges in thin-walled bubble regions and enhancing sensitivity to geometric contours; the Global Local Refine Block (GLRB), which achieves efficient global relationship modeling with asymptotic linear complexity (O(N)) in the spatial dimensions while further refining local features, thereby strengthening boundary perception and improving bubble–background separability; and the Bidirectional Exponential Moving Attention Fusion (BEMAF), which accommodates the multi-scale nature of bubbles by employing a parallel multi-kernel architecture to extract spatial features across scales, coupled with a multi-stage EMA-based attention mechanism to enhance detection robustness under weak boundaries and complex backgrounds. Experiments conducted on a Side-Illuminated Light Field Bubble Database (SILB-DB) and a public gas–liquid two-phase flow dataset (GTFD) demonstrate that HSMD-YOLO achieves mAP@50 scores of 0.911 and 0.854, respectively, surpassing mainstream detection methods. Ablation studies indicate that SSB, GLRB, and BEMAF contribute performance gains of 1.3%, 2.0%, and 0.4%, respectively, thereby corroborating the effectiveness of each module for micro-scale object detection. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
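The bidirectional exponential-moving-average idea behind BEMAF can be illustrated on a 1-D feature sequence. This is an expository sketch only: the function names and the fusion-by-averaging step are assumptions for illustration, not the paper's implementation.

```python
def ema(seq, alpha):
    """Forward exponential moving average: y[t] = a*x[t] + (1-a)*y[t-1]."""
    out = []
    prev = seq[0]
    for x in seq:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out


def bidirectional_ema(seq, alpha=0.5):
    """Fuse a forward and a backward EMA pass by averaging, so each
    position is smoothed with context from both directions rather
    than only from earlier positions."""
    fwd = ema(seq, alpha)
    bwd = ema(seq[::-1], alpha)[::-1]
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]
```

A single-direction EMA biases each position toward its past; averaging a reversed pass removes that asymmetry, which is the intuition for applying the smoothing bidirectionally before fusing multi-kernel features.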

21 pages, 3564 KB  
Article
Theoretical Survey of the Intrinsic Reactivity of Functionalized (CH2=C(R)XH) Enols, Enethiols and Eneselenols: Potential Interstellar Species
by Al Mokhtar Lamsabhi, Otilia Mó, Jean-Claude Guillemin and Manuel Yáñez
Molecules 2026, 31(6), 1040; https://doi.org/10.3390/molecules31061040 - 20 Mar 2026
Abstract
The conformational properties and intrinsic reactivity of unsaturated CH2=C(R)XH systems (R = –H, –CH=CH2, –C≡CH, –C≡N, –Cl, –phenyl, –cyclopentadienyl, –pyrrole; X = O, S, Se)—namely enols, enethiols, and eneselenols—have been investigated using G4 and CCSD(T) calculations. All compounds exhibit antiperiplanar (ap) and anticlinal (ac) conformers that are nearly isoenergetic, as their relative stabilities are governed by subtle noncovalent interactions, which are analyzed in detail. Both conformers are therefore expected to coexist in the gas phase, and because the rotational barriers are very low, their interconversion is effectively barrierless under typical conditions. In contrast, the corresponding protonated species display significantly higher barriers, approximately three to five times larger. The keto–enol tautomerization involves activation barriers exceeding 180 kJ·mol−1, confirming that, as in other keto–enol rearrangements, the process is not monomolecular. Protonation generally occurs at the methylene carbon, with the exceptions of the –C≡CH and –C≡N derivatives. Strong linear correlations are found among the proton affinities of the three families studied, which follow the trend enols > enethiols > eneselenols. All systems behave as strong carbon bases; some are predicted to be 20–21 orders of magnitude more basic than ketene and 3–5 orders of magnitude more basic than vinylimine in terms of equilibrium constants. Deprotonation preferentially occurs at the X–H group in nearly all cases. The only exception is the cyclopentadienyl-substituted enol, for which deprotonation of the cyclopentadienyl moiety is favored due to enhanced aromatic stabilization of the resulting anion. Overall, acidity increases along the series O < S < Se. Full article
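The "orders of magnitude more basic" comparison follows from the standard thermodynamic relation log10(K1/K2) = ΔGB/(RT ln 10), where ΔGB is the difference in gas-phase basicity. A back-of-the-envelope sketch of that conversion (standard temperature assumed; this is not code or data from the paper):

```python
import math

R = 8.314462618e-3  # molar gas constant, kJ/(mol*K)


def orders_of_magnitude(delta_gb_kj_mol, temp_k=298.15):
    """Orders of magnitude separating two protonation equilibrium
    constants, given the difference in gas-phase basicity (kJ/mol):
    log10(K1/K2) = dGB / (R * T * ln 10)."""
    return delta_gb_kj_mol / (R * temp_k * math.log(10))
```

At 298.15 K each ~5.7 kJ/mol of basicity difference contributes one order of magnitude to the equilibrium-constant ratio, so a 20-order gap like the one quoted versus ketene corresponds to a basicity difference on the order of 110–120 kJ/mol.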
