Search Results (1,668)

Search Parameters:
Keywords = pixel size

21 pages, 3624 KB  
Article
Multi-Scale Feature Fusion and Attention-Enhanced R2U-Net for Dynamic Weight Monitoring of Chicken Carcasses
by Tian Hua, Pengfei Zou, Ao Zhang, Runhao Chen, Hao Bai, Wenming Zhao, Qian Fan and Guobin Chang
Animals 2026, 16(3), 410; https://doi.org/10.3390/ani16030410 - 28 Jan 2026
Abstract
In recent years, real-time monitoring of broiler chicken weight has become crucial for assessing growth and health status. Currently, obtaining weight data often relies on manual collection. However, this process is cumbersome, labor-intensive, and inefficient. This paper proposes a broiler carcass weight detection model based on deep learning image segmentation and regression to address these issues. The model first segments broiler carcasses and then uses the pixel area of the segmented region as a key feature for a regression model to predict weight. A custom dataset comprising 2709 images from 301 Taihu yellow chickens was established for this study. A novel segmentation network, AR2U-AtNet, derived from R2U-Net, is proposed. To mitigate the interference of background color and texture on target carcasses in slaughterhouse production lines, the Convolutional Block Attention Module (CBAM) is introduced to enable the network to focus on areas containing carcasses. Furthermore, broilers exhibit significant variations in size, morphology, and posture, which impose high demands on the model’s scale adaptability. Selective Kernel Attention (SKAttention) is therefore integrated to flexibly handle broiler images with diverse body conditions. The model achieved an average Intersection over Union (mIoU) score of 90.45%, and Dice and F1 scores of 95.18%. The regression-based weight prediction achieved an R2 value of 0.9324. The results demonstrate that the proposed method can quickly and accurately determine individual broiler carcass weights, thereby alleviating the burden of traditional weighing methods and ultimately improving the production efficiency of yellow-feather broilers. Full article
(This article belongs to the Section Poultry)
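The pipeline this abstract describes ends in a simple regression from segmented pixel area to carcass weight. A minimal sketch of that last stage, assuming a linear model and synthetic area/weight data (the paper reports R² = 0.9324 for its own model; the numbers below are illustrative stand-ins):

```python
import numpy as np

# Sketch of the area-to-weight regression stage: the segmentation network
# yields a carcass mask per image, the mask's pixel area is the regressor,
# and a fitted model maps area to weight. The linear form and all data here
# are illustrative assumptions, not the paper's model.
rng = np.random.default_rng(42)

area_px = rng.uniform(20_000, 80_000, size=200)            # mask pixel areas
weight_g = 0.025 * area_px + 150 + rng.normal(0, 30, 200)  # toy ground truth

# Fit weight = a * area + b by ordinary least squares.
a, b = np.polyfit(area_px, weight_g, deg=1)
pred = a * area_px + b

# Coefficient of determination (the R^2 metric the abstract reports).
ss_res = np.sum((weight_g - pred) ** 2)
ss_tot = np.sum((weight_g - weight_g.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

Any regressor (random forest, MLP) can replace the least-squares fit; the point is that a single scalar feature derived from the mask drives the weight prediction.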

21 pages, 1574 KB  
Article
Watershed Encoder–Decoder Neural Network for Nuclei Segmentation of Breast Cancer Histology Images
by Vincent Majanga, Ernest Mnkandla, Donatien Koulla Moulla, Sree Thotempudi and Attipoe David Sena
Bioengineering 2026, 13(2), 154; https://doi.org/10.3390/bioengineering13020154 - 28 Jan 2026
Abstract
Recently, deep learning methods have seen major advancements and are preferred for medical image analysis. Clinically, deep learning techniques for cancer image analysis are among the main applications for early diagnosis, detection, and treatment. Consequently, segmentation of breast histology images is a key step towards diagnosing breast cancer. However, the use of deep learning methods for image analysis is constrained by challenging features in the histology images. These challenges include poor image quality, complex microscopic tissue structures, topological intricacies, and boundary/edge inhomogeneity. Furthermore, these challenges limit the number of images available for analysis. The U-Net model was introduced and gained significant traction for its ability to produce high-accuracy results with very few input images. Many modifications of the U-Net architecture exist. Therefore, this study proposes the watershed encoder–decoder neural network (WEDN) to segment cancerous lesions in supervised breast histology images. Pre-processing of supervised breast histology images via augmentation is introduced to increase the dataset size. The augmented dataset is further enhanced and segmented into the region of interest. Data enhancement methods such as thresholding, opening, dilation, and distance transform are used to highlight foreground and background pixels while removing unwanted parts from the image. Consequently, further segmentation via the connected component analysis method is used to combine image pixel components with similar intensity values and assign them their respective labeled binary masks. The watershed filling method is then applied to these labeled binary mask components to separate and identify the edges/boundaries of the regions of interest (cancerous lesions). This resultant image information is sent to the WEDN model network for feature extraction and learning via training and testing. 
Residual convolutional block layers of the WEDN model are the learnable layers that extract the region of interest (ROI), which is the cancerous lesion. The method was evaluated on an augmented dataset of 3000 image–watershed mask pairs. The model was trained on 2400 training images and tested on 600 testing images. The proposed method produced significant results of 98.53% validation accuracy, a 96.98% validation Dice coefficient, and a 97.84% validation intersection over union (IoU) score. Full article
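The pre-processing chain the abstract enumerates (thresholding, opening, dilation, distance transform, connected components) can be sketched on a synthetic image of two touching "nuclei". This uses scipy.ndimage as a stand-in for whatever tooling the authors used, and it stops at distance-transform seeding plus connected-component labelling; the final watershed filling step is omitted:

```python
import numpy as np
from scipy import ndimage as ndi

# Two overlapping disks of radius 10 play the role of touching nuclei.
yy, xx = np.mgrid[0:50, 0:60]
blob1 = (xx - 20) ** 2 + (yy - 25) ** 2 <= 100
blob2 = (xx - 38) ** 2 + (yy - 25) ** 2 <= 100
image = (blob1 | blob2).astype(float)

binary = image > 0.5                                  # thresholding
binary = ndi.binary_opening(binary, iterations=1)     # remove speckle
# Dilation step: a generous band that a full watershed would treat as
# certain background (illustrative here).
sure_bg = ndi.binary_dilation(binary, iterations=2)

dist = ndi.distance_transform_edt(binary)             # distance to background
seeds = dist > 0.6 * dist.max()                       # one seed per nucleus

labels, n_objects = ndi.label(seeds)                  # connected components
```

Before seeding, the two touching disks form a single connected component; the distance-transform peaks separate them into two labelled objects, which is exactly the role the watershed markers play in the paper's pipeline.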

17 pages, 2764 KB  
Article
Radiomics as a Decision Support Tool for Detecting Occult Periapical Lesions on Intraoral Radiographs
by Barbara Obuchowicz, Joanna Zarzecka, Marzena Jakubowska, Rafał Obuchowicz, Michał Strzelecki, Adam Piórkowski, Joanna Gołda, Karolina Nurzynska and Julia Lasek
J. Clin. Med. 2026, 15(3), 971; https://doi.org/10.3390/jcm15030971 - 25 Jan 2026
Abstract
Background: Periapical lesions are common consequences of pulp necrosis but may remain undetectable on conventional intraoral radiographs, becoming evident only on cone-beam computed tomography (CBCT). Improving lesion recognition on plain radiographs is therefore of high clinical relevance. Methods: This retrospective, single-center study analyzed 56 matched pairs of intraoral periapical radiographs (RVG) and CBCT scans. A total of 109 regions of interest (ROIs) were included, which were classified as CBCT-positive/RVG-negative (onlyCBCT, n = 64) or true negative (noLesion, n = 45). Radiomic texture features were extracted from circular ROIs on RVG images using PyRadiomics. Feature distributions were compared using Mann–Whitney U tests with false discovery rate correction, and classification was performed using a logistic regression model with nested cross-validation. Results: Forty-four radiomic texture features showed statistically significant differences between onlyCBCT and noLesion ROIs, predominantly with small to medium effect sizes. For a 40-pixel ROI radius, the classifier achieved a mean area under the ROC curve of 0.71, mean accuracy of 68%, and mean sensitivity of 73%. Smaller ROIs (20–40 pixels) yielded higher AUCs and substantially better accuracy than larger sampling regions (≥60 pixels). Conclusions: Quantifiable radiomic signatures of periapical pathology are present on conventional radiographs even when lesions are visually occult. Radiomics may serve as a complementary decision support tool for identifying CBCT-only periapical lesions in routine clinical imaging. Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
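The statistical screen described here — per-feature Mann-Whitney U tests followed by Benjamini-Hochberg false-discovery-rate correction — can be sketched directly with scipy. The BH helper and the toy feature values are my own illustrative stand-ins; the study ran this over PyRadiomics features with nested cross-validation on top:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def bh_fdr(pvals, alpha=0.05):
    """Boolean rejection mask under Benjamini-Hochberg FDR control."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, p.size + 1) / p.size
    below = p[order] <= thresholds
    reject = np.zeros(p.size, dtype=bool)
    if below.any():
        k = int(np.nonzero(below)[0].max())   # largest rank passing its threshold
        reject[order[: k + 1]] = True         # reject all smaller-ranked p-values
    return reject

# One clearly separated hypothetical texture feature across the two ROI groups.
only_cbct = [5.1, 6.0, 6.3, 7.2, 7.9]
no_lesion = [1.0, 1.4, 2.1, 2.8, 3.3]
_, p_sep = mannwhitneyu(only_cbct, no_lesion, alternative="two-sided")

# FDR-correct this p-value alongside two weaker hypothetical features.
rejected = bh_fdr([p_sep, 0.04, 0.30])
```

With complete separation and n = 5 per group, the exact two-sided Mann-Whitney p-value is 2/252 ≈ 0.008, so only that feature survives the BH step at α = 0.05.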

18 pages, 14590 KB  
Article
VTC-Net: A Semantic Segmentation Network for Ore Particles Integrating Transformer and Convolutional Block Attention Module (CBAM)
by Yijing Wu, Weinong Liang, Jiandong Fang, Chunxia Zhou and Xiaolu Sun
Sensors 2026, 26(3), 787; https://doi.org/10.3390/s26030787 - 24 Jan 2026
Abstract
In mineral processing, visual-based online particle size analysis systems depend on high-precision image segmentation to accurately quantify ore particle size distribution, thereby optimizing crushing and sorting operations. However, due to multi-scale variations, severe adhesion, and occlusion within ore particle clusters, existing segmentation models often exhibit undersegmentation and misclassification, leading to blurred boundaries and limited generalization. To address these challenges, this paper proposes a novel semantic segmentation model named VTC-Net. The model employs VGG16 as the backbone encoder, integrates Transformer modules in deeper layers to capture global contextual dependencies, and incorporates a Convolutional Block Attention Module (CBAM) at the fourth stage to enhance focus on critical regions such as adhesion edges. BatchNorm layers are used to stabilize training. Experiments on ore image datasets show that VTC-Net outperforms mainstream models such as UNet and DeepLabV3 in key metrics, including MIoU (89.90%) and pixel accuracy (96.80%). Ablation studies confirm the effectiveness and complementary role of each module. Visual analysis further demonstrates that the model identifies ore contours and adhesion areas more accurately, significantly improving segmentation robustness and precision under complex operational conditions. Full article
(This article belongs to the Section Sensing and Imaging)
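CBAM, the attention module this network adds at its fourth stage, applies channel attention (pooled descriptors through a shared bottleneck MLP) followed by spatial attention (channel-wise mean/max maps mixed by a learned 7×7 conv). A heavily simplified numpy sketch of that structure — weights are random stand-ins and an averaging filter replaces the learned spatial conv, so this shows the data flow only, not VTC-Net itself:

```python
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16, 16))               # (channels, H, W) feature map

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Channel attention: average- and max-pooled descriptors through a shared
# bottleneck MLP (8 -> 2 -> 8 here), summed, squashed per channel.
w1 = rng.normal(scale=0.1, size=(8, 2))
w2 = rng.normal(scale=0.1, size=(2, 8))
avg_d = x.mean(axis=(1, 2))
max_d = x.max(axis=(1, 2))
ch_att = sigmoid(avg_d @ w1 @ w2 + max_d @ w1 @ w2)   # shape (8,)
x_ch = x * ch_att[:, None, None]

# Spatial attention: channel-wise mean and max maps, mixed by a 7x7 kernel
# (a uniform filter stands in for CBAM's learned conv).
mean_map = x_ch.mean(axis=0)
max_map = x_ch.max(axis=0)
mixed = ndi.uniform_filter(0.5 * (mean_map + max_map), size=7)
sp_att = sigmoid(mixed)                        # shape (16, 16)
y = x_ch * sp_att[None, :, :]
```

The output keeps the input shape; both attention maps are bounded in (0, 1), so the module can only reweight features, which is why it can be dropped into an existing encoder stage.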

36 pages, 4183 KB  
Article
Distinguishing a Drone from Birds Based on Trajectory Movement and Deep Learning
by Andrii Nesteruk, Valerii Nikitin, Yosyp Albrekht, Łukasz Ścisło, Damian Grela and Paweł Król
Sensors 2026, 26(3), 755; https://doi.org/10.3390/s26030755 - 23 Jan 2026
Abstract
Unmanned aerial vehicles (UAVs) increasingly share low-altitude airspace with birds, making early discrimination between drones and biological targets critical for safety and security. This work addresses long-range scenarios where objects occupy only a few pixels and appearance-based recognition becomes unreliable. We develop a model-driven simulation pipeline that generates synthetic data with a controlled camera model, atmospheric background and realistic motion of three aerial target types: multicopter, fixed-wing UAV and bird. From these sequences, each track is encoded as a time series of image-plane coordinates and apparent size, and a bidirectional long short-term memory (LSTM) network is trained to classify trajectories as drone-like or bird-like. The model learns characteristic differences in smoothness, turning behavior and velocity fluctuations, and achieves reliable separation between drone and bird motion patterns on synthetic test data. Motion-trajectory cues alone can support early discrimination of drones from birds when visual details are scarce, providing a complementary signal to conventional image-based detection. The proposed synthetic data and sequence classification pipeline forms a reproducible testbed that can be extended with real trajectories from radar or video tracking systems and used to prototype and benchmark trajectory-based recognizers for integrated surveillance solutions. The proposed method is designed to generalize naturally to real surveillance systems, as it relies on trajectory-level motion patterns rather than appearance-based features that are sensitive to sensor quality, illumination, or weather conditions. Full article
(This article belongs to the Section Industrial Sensors)
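The motion cues the BiLSTM is said to learn — smoothness, turning behavior, velocity fluctuations — can be made explicit as summary statistics over an image-plane track. The feature definitions below are my own stand-ins, not the paper's encoder; the sketch contrasts a steady drone-like track with an erratic bird-like one:

```python
import numpy as np

def motion_features(xy):
    """Mean absolute turn angle (rad) and speed coefficient of variation."""
    v = np.diff(xy, axis=0)                      # step vectors along the track
    speed = np.linalg.norm(v, axis=1)
    heading = np.arctan2(v[:, 1], v[:, 0])
    # Wrap heading differences into (-pi, pi] before taking magnitudes.
    turn = np.abs(np.angle(np.exp(1j * np.diff(heading))))
    return turn.mean(), speed.std() / speed.mean()

t = np.arange(50, dtype=float)
drone = np.stack([t, 0.2 * t], axis=1)           # straight line, constant speed
rng = np.random.default_rng(1)
bird = np.stack([t, 3.0 * np.sin(0.9 * t)], axis=1) + rng.normal(0, 0.3, (50, 2))

drone_turn, drone_cv = motion_features(drone)
bird_turn, bird_cv = motion_features(bird)
```

A sequence model sees these cues frame by frame rather than as aggregates, but the separation visible even in two scalar features is what makes trajectory-only classification plausible at ranges where targets are a few pixels wide.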

30 pages, 6863 KB  
Article
Explainable Deep Learning and Edge Inference for Chilli Thrips Severity Classification in Strawberry Canopies
by Uchechukwu Ilodibe, Daeun Choi, Sriyanka Lahiri, Changying Li, Daniel Hofstetter and Yiannis Ampatzidis
Agriculture 2026, 16(2), 252; https://doi.org/10.3390/agriculture16020252 - 19 Jan 2026
Abstract
Traditional plant scouting is often a costly and labor-intensive task that requires experienced specialists to diagnose and manage plant stresses. Artificial intelligence (AI), particularly deep learning and computer vision, offers the potential to transform scouting by enabling rapid, non-intrusive detection and classification of early stress symptoms from plant images. However, deep learning models are often opaque, relying on millions of parameters to extract complex nonlinear features that are not interpretable by growers. Recently, eXplainable AI (XAI) techniques have been used to identify key spatial regions that contribute to model predictions. This project explored the potential of convolutional neural networks (CNNs) for classifying the severity of chilli thrips damage in strawberry plants in Florida and employed XAI techniques to interpret model decisions and identify symptom-relevant canopy features. Four CNN architectures, YOLOv11, EfficientNetV2, Xception, and MobileNetV3, were trained and evaluated using 2353 square RGB canopy images of different sizes (256, 480, 640 and 1024 pixels) to classify symptoms as healthy, moderate, or severe. Trade-offs between image size, model parameter count, inference speed, and accuracy were examined in determining the best-performing model. The models achieved accuracies ranging from 77% to 85% with inference times of 5.7 to 262.3 ms, demonstrating strong potential for real-time pest severity estimation. Gradient-Weighted Class Activation Mapping (Grad-CAM) visualization revealed that model attention focused on biologically relevant regions such as fruits, stems, leaf edges, leaf surfaces, and dying leaves, areas commonly affected by chilli thrips. Subsequent analysis showed that model attention spread from localized regions in healthy plants to wide diffuse regions in severe plants. 
This alignment between model attention and expert scouting logic suggests that CNNs internalize symptom-specific visual cues and can reliably classify pest-induced plant stress. Full article
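Grad-CAM, the visualization technique used above, reduces to a short computation: weight each last-layer activation map by the spatial average of its gradient with respect to the class score, sum, and clip at zero. The activations and gradients below are synthetic stand-ins for what a trained CNN would produce:

```python
import numpy as np

rng = np.random.default_rng(7)
acts = rng.random((4, 8, 8))          # K feature maps from the last conv layer
grads = np.zeros((4, 8, 8))
grads[0] = 1.0                        # toy case: class score depends on map 0 only

alpha = grads.mean(axis=(1, 2))       # global-average-pooled gradients, shape (K,)
cam = np.maximum(np.tensordot(alpha, acts, axes=1), 0.0)  # ReLU(sum_k a_k * A_k)
cam /= cam.max()                      # normalize to [0, 1] for overlay display
```

In practice the heatmap is upsampled to the input resolution and overlaid on the image, which is how the paper inspects whether attention falls on fruits, stems, and dying leaves rather than background.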

16 pages, 1725 KB  
Article
A Lightweight Modified Adaptive UNet for Nucleus Segmentation
by Md Rahat Kader Khan, Tamador Mohaidat and Kasem Khalil
Sensors 2026, 26(2), 665; https://doi.org/10.3390/s26020665 - 19 Jan 2026
Abstract
Cell nucleus segmentation in microscopy images is an initial step in the quantitative analysis of imaging data, which is crucial for diverse biological and biomedical applications. While traditional machine learning methodologies have demonstrated limitations, recent advances in U-Net models have yielded promising improvements. However, it is noteworthy that these models perform well on balanced datasets, where background and foreground pixels occur in roughly equal proportion. Within the realm of microscopy image segmentation, state-of-the-art models often encounter challenges in accurately predicting small foreground entities such as nuclei. Moreover, the majority of these models exhibit large parameter sizes, predisposing them to overfitting issues. To overcome these challenges, this study introduces a novel architecture, called mA-UNet, designed to excel in predicting small foreground elements. Additionally, a data preprocessing strategy inspired by road segmentation approaches is employed to address dataset imbalance issues. The experimental results show that the MIoU score attained by the mA-UNet model stands at 95.50%, surpassing the nearest competitor, UNet++, on the 2018 Data Science Bowl dataset. Ultimately, our proposed methodology surpasses all other state-of-the-art models in terms of both quantitative and qualitative evaluations. The mA-UNet model is also implemented in VHDL on the Zynq UltraScale+ FPGA, demonstrating its ability to perform complex computations with minimal hardware resources, as well as its efficiency and scalability on advanced FPGA platforms. Full article
(This article belongs to the Special Issue Sensing and Processing for Medical Imaging: Methods and Applications)
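The MIoU figure quoted for mA-UNet (and for several other entries on this page) is the per-class Intersection over Union averaged over classes. A compact reference implementation for flat label arrays, binary nucleus/background here but general over class counts:

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Mean Intersection over Union for integer label arrays of equal shape."""
    pred, gt = np.ravel(pred), np.ravel(gt)
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union:                        # skip classes absent from both arrays
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: class 0 gets IoU 1/2, class 1 gets IoU 2/3, so MIoU = 7/12.
miou = mean_iou(np.array([0, 0, 1, 1]), np.array([0, 1, 1, 1]), n_classes=2)
```

Skipping classes absent from both prediction and ground truth avoids dividing by an empty union, a common pitfall when evaluating on tiles that contain no nuclei.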

23 pages, 40663 KB  
Article
Time Series Analysis of Fucheng-1 Interferometric SAR for Potential Landslide Monitoring and Synergistic Evaluation with Sentinel-1 and ALOS-2
by Guangmin Tang, Keren Dai, Feng Yang, Weijia Ren, Yakun Han, Chenwen Guo, Tianxiang Liu, Shumin Feng, Chen Liu, Hao Wang, Chenwei Zhang and Rui Zhang
Remote Sens. 2026, 18(2), 304; https://doi.org/10.3390/rs18020304 - 16 Jan 2026
Abstract
Fucheng-1 is China’s first commercial synthetic aperture radar (SAR) satellite equipped with interferometric capabilities. Since its launch in 2023, it has demonstrated strong potential across a range of application domains. However, a comprehensive and systematic evaluation of its overall performance, including its time-series monitoring capability, is still lacking. This study applies the Small Baseline Subset (SBAS-InSAR) method to conduct the first systematic processing and evaluation of 22 Fucheng-1 images acquired between 2023 and 2024. A total of 45 potential landslides were identified and subsequently validated through field investigations and UAV-based LiDAR data. Comparative analysis with Sentinel-1 and ALOS-2 indicates that Fucheng-1 demonstrates superior performance in small-scale deformation identification, temporal-variation characterization, and maintaining a high density of coherent pixels. Specifically, in the time-series InSAR-based potential landslide identification, Fucheng-1 identified 13 small-scale potential landslides, whereas Sentinel-1 identified none; the number of identifications is approximately 2.17 times that of ALOS-2. For time-series subsidence monitoring, the deformation magnitudes retrieved from Fucheng-1 are generally larger than those from Sentinel-1, mainly attributable to finer spatial sampling enabled by its higher spatial resolution and a higher maximum detectable deformation gradient. Moreover, as landslide size decreases, the advantages of Fucheng-1 in deformation identification and subsidence estimation become increasingly evident. Interferometric results further show that the number of high-coherence pixels for Fucheng-1 is 7–8 times that of co-temporal Sentinel-1 and 1.1–1.4 times that of ALOS-2, providing more high-quality observations for time-series inversion and thereby supporting a more detailed and spatially continuous reconstruction of deformation fields. 
Meanwhile, the orbital stability of Fucheng-1 is comparable to that of Sentinel-1, and its maximum detectable deformation gradient in mountainous terrain reaches twice that of Sentinel-1. Overall, this study provides the first systematic validation of the time-series InSAR capability of Fucheng-1 under complex terrain conditions, offering essential support and a solid foundation for the operational deployment of InSAR technologies based on China’s domestic SAR satellite constellation. Full article

27 pages, 9362 KB  
Article
A Multi-Task EfficientNetV2S Approach with Hierarchical Hybrid Attention for MRI Enhancing Brain Tumor Segmentation and Classification
by Nawal Benzorgat, Kewen Xia, Mustapha Noure Eddine Benzorgat and Malek Nasser Ali Algabri
Brain Sci. 2026, 16(1), 37; https://doi.org/10.3390/brainsci16010037 - 27 Dec 2025
Abstract
Background: Brain tumors present a significant clinical problem due to high mortality and strong heterogeneity in size, shape, location, and tissue characteristics, complicating reliable MRI analysis. Existing automated methods are limited by non-selective skip connections that propagate noise, axis-separable attention modules that poorly integrate channel and spatial cues, shallow encoders with insufficiently discriminative features, and isolated optimization of segmentation or classification tasks. Methods: We propose a model using an EfficientNetV2S backbone with a Hierarchical Hybrid Attention (HHA) mechanism. The HHA couples a global-context pathway with a local-spatial pathway, employing a correlation-driven, per-pixel fusion gate to explicitly model interactions between them. Multi-scale dilated blocks are incorporated to enlarge the effective receptive field. The model is applied to a multiclass brain tumor MRI dataset, leveraging shared representation learning for joint segmentation and classification. Results: The design attains a Dice score of 92.25% and a Jaccard index of 86% for segmentation. For classification, it achieves an accuracy of 99.53%, with precision, recall, and F1 scores all close to 99%. These results indicate sharper tumor boundaries, stronger noise suppression in segmentation, and more robust discrimination in classification. Conclusions: The proposed framework effectively overcomes key limitations in brain tumor MRI analysis. The integrated HHA mechanism and shared representation learning yield superior segmentation quality with enhanced boundary delineation and noise suppression, alongside highly accurate tumor classification, demonstrating strong clinical utility. Full article
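The "correlation-driven, per-pixel fusion gate" of the HHA block suggests a common gating pattern: a sigmoid of the per-pixel agreement between the two pathways decides how much global versus local context to keep at each position. The sketch below is a generic single-channel illustration of that pattern under my own assumptions, not the paper's exact formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
g = rng.normal(size=(16, 16))          # global-context pathway (one channel)
l = rng.normal(size=(16, 16))          # local-spatial pathway

gate = sigmoid(g * l)                  # per-pixel agreement drives the gate
fused = gate * g + (1.0 - gate) * l    # convex combination at every pixel
```

Because the gate lies strictly in (0, 1), the fused response at each pixel stays between the two pathway responses, so neither context can be silently discarded, only reweighted.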

16 pages, 15460 KB  
Article
A Parallel Algorithm for Background Subtraction: Modeling Lognormal Pixel Intensity Distributions on GPUs
by Sotirios Diamantas, Ethan Reaves and Bryant Wyatt
Mathematics 2026, 14(1), 43; https://doi.org/10.3390/math14010043 - 22 Dec 2025
Abstract
Background subtraction is a core preprocessing step for video analytics, enabling downstream tasks such as detection, tracking, and scene understanding in applications ranging from surveillance to transportation. However, real-time deployment remains challenging when illumination changes, shadows, and dynamic backgrounds produce heavy-tailed pixel variations that are difficult to capture with simple Gaussian assumptions. In this work, we propose a fully parallel GPU implementation of a per-pixel background model that represents temporal pixel deviations with lognormal distributions. During a short training phase, a circular buffer of n frames (as small as n=3) is used to estimate, for every pixel, robust log-domain parameters (μ,σ). During testing, each incoming frame is compared against a robust reference (per-pixel median), and a lognormal cumulative distribution function yields a probabilistic foreground score that is thresholded to produce a binary mask. We evaluate the method on multiple videos under varying illumination and motion conditions and compare qualitatively with widely used mixture of Gaussians baselines (MOG and MOG2). Our method achieves, on average, 87 fps with a buffer size of 10, and reaches about 188 fps with a buffer size of 3, on an NVIDIA 3080 Ti. Finally, we discuss the accuracy–latency trade-off with larger buffers. Full article
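The per-pixel model described above is concrete enough to sketch deterministically: fit log-domain (μ, σ) to absolute deviations from the per-pixel median over a short buffer, then score a new frame with the lognormal CDF and threshold. The buffer contents, epsilon, and the 0.99 threshold below are illustrative choices, and this CPU sketch ignores the paper's GPU parallelization entirely:

```python
import numpy as np
from scipy.special import erf

EPS = 1e-3                                       # avoid log(0) on exact matches

base = np.full((4, 4), 100.0)
buffer = np.stack([base + o for o in (-0.8, -0.3, 0.4, 0.9)])  # n = 4 frames

ref = np.median(buffer, axis=0)                  # robust per-pixel reference
dev = np.abs(buffer - ref) + EPS                 # training deviations
mu = np.log(dev).mean(axis=0)                    # per-pixel log-domain mean
sigma = np.log(dev).std(axis=0)                  # per-pixel log-domain spread

def foreground_score(frame):
    """Lognormal CDF of the current deviation: P(deviation <= observed)."""
    d = np.abs(frame - ref) + EPS
    return 0.5 * (1.0 + erf((np.log(d) - mu) / (sigma * np.sqrt(2.0))))

test_frame = base + 0.5                          # background-like everywhere...
test_frame[1, 1] = base[1, 1] + 200.0            # ...except one bright pixel
mask = foreground_score(test_frame) > 0.99       # binary foreground mask
```

Because every pixel is scored independently from its own (μ, σ), the whole test phase is embarrassingly parallel, which is what makes the per-pixel GPU mapping in the paper natural.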

24 pages, 5060 KB  
Article
Enhancing Machine Learning-Based GPP Upscaling Error Correction: An Equidistant Sampling Method with Optimized Step Size and Intervals
by Zegen Wang, Jiaqi Zuo, Zhiwei Yong and Xinyao Xie
Remote Sens. 2026, 18(1), 23; https://doi.org/10.3390/rs18010023 - 22 Dec 2025
Abstract
Current machine learning-based gross primary productivity (GPP) upscaling error correction approaches exhibit two critical limitations: (1) failure to account for nonuniform density distributions of sub-pixel heterogeneity factors during upscaling and (2) dependence on subjective classification thresholds for characterizing factor variations. These shortcomings reduce accuracy and limit transferability. To address these issues, we propose an equidistant sampling method with optimized step size and intervals that precisely quantifies nonuniform density distributions and enhances correction precision. We validate our approach by applying it to correct 480 m resolution GPP simulations generated from an eco-hydrological model, with performance evaluation against 30 m resolution benchmarks using determination coefficient (R2) and root mean square error (RMSE). The proposed method demonstrates a significant improvement over previous elevation-based correction research (baseline R2 = 0.48, RMSE = 285 gCm−2yr−1), achieving a 0.27 increase in R2 and 91.22 gCm−2yr−1 reduction in RMSE. For comparative analysis, we implement k-means clustering as an alternative geostatistical method, which shows lesser improvements (ΔR2 = 0.21, ΔRMSE = −63.54 gCm−2yr−1). Crucially, when using identical statistical interval counts, our optimized-step equidistant sampling method consistently surpasses k-means clustering in performance metrics. The optimal-step equidistant sampling method, paired with appropriate interval selection, offers an efficient solution that maintains high correction accuracy while minimizing computational costs. Controlled variable experiments further revealed that the most significant factors affecting GPP upscaling error correction are land cover, altitude, slope, and TNI, trailed by LAI, whereas slope orientation, SVF, and TWI hold equal relevance. Full article
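The core idea of the correction can be sketched in a few lines: bin a heterogeneity factor (elevation here) into equal-width intervals, estimate the mean coarse-versus-fine GPP error per interval, and subtract it. The data are synthetic and deterministic; step size and interval count are the tunables the paper optimizes:

```python
import numpy as np

elev = np.linspace(500.0, 3500.0, 300)            # heterogeneity factor (m)
error = 0.05 * elev - 40.0                        # toy coarse-pixel GPP bias

n_bins = 10                                       # equidistant intervals
edges = np.linspace(elev.min(), elev.max(), n_bins + 1)
bins = np.clip(np.digitize(elev, edges) - 1, 0, n_bins - 1)

# Per-interval mean error, subtracted back out as the correction.
bin_mean = np.array([error[bins == b].mean() for b in range(n_bins)])
corrected = error - bin_mean[bins]

rmse_before = np.sqrt(np.mean(error ** 2))
rmse_after = np.sqrt(np.mean(corrected ** 2))
```

Equal-width binning needs no threshold choices beyond the step size, which is the transferability argument the abstract makes against subjective classification thresholds; k-means clustering over the same factor would play the role of the comparison method.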

25 pages, 12611 KB  
Article
Crop Row Line Detection for Rapeseed Seedlings in Complex Environments Based on Improved BiSeNetV2 and Dynamic Sliding Window Fitting
by Wanjing Dong, Rui Wang, Fanguo Zeng, Youming Jiang, Yang Zhang, Qingyang Shi, Zhendong Liu and Wei Xu
Agriculture 2026, 16(1), 23; https://doi.org/10.3390/agriculture16010023 - 21 Dec 2025
Abstract
Crop row line detection is essential for precision agriculture, supporting autonomous navigation, field management, and growth monitoring. To address the low detection accuracy of rapeseed seedling rows under complex field conditions, this study proposes a detection framework that integrates an improved BiSeNetV2 with a dynamic sliding-window fitting strategy. The improved BiSeNetV2 incorporates the Efficient Channel Attention (ECA) mechanism to strengthen crop-specific feature representation, an Atrous Spatial Pyramid Pooling (ASPP) decoder to improve multi-scale perception, and Depthwise Separable Convolutions (DS Conv) in the Detail Branch to reduce model complexity while preserving accuracy. After semantic segmentation, a Gaussian-filtered vertical projection method is applied to identify crop-row regions by locating density peaks. A dynamic sliding-window algorithm is then used to extract row trajectories, with the window size adaptively determined by the row width and the sliding process incorporating both a lateral inertial-drift strategy and a dynamically adjusted longitudinal step size. Finally, variable-order polynomial fitting is performed within each crop-row region to achieve precise extraction of the crop-row lines. Experimental results indicate that the improved BiSeNetV2 model achieved a Mean Pixel Accuracy (mPA) of 87.73% and a Mean Intersection over Union (MIoU) of 79.40% on the rapeseed seedling dataset, marking improvements of 9.98% and 8.56%, respectively, compared to the original BiSeNetV2. The crop row detection performance for rapeseed seedlings under different environmental conditions demonstrated that the Curve Fitting Coefficient (CFC), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE) were 0.85, 1.57, and 1.27 pixels on sunny days; 0.86, 2.05 and 1.63 pixels on cloudy days; 0.74, 2.89, and 2.22 pixels on foggy days; and 0.76, 1.38, and 1.11 pixels during the evening, respectively. 
The results reveal that the improved BiSeNetV2 can effectively identify rapeseed seedlings, and the detection algorithm can identify crop row lines in various complex environments. This research provides methodological support for crop row line detection in precision agriculture. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

23 pages, 2527 KB  
Article
Super Encryption Standard (SES): A Key-Dependent Block Cipher for Image Encryption
by Mohammed Abbas Fadhil Al-Husainy, Bassam Al-Shargabi and Omar Sabri
Information 2026, 17(1), 2; https://doi.org/10.3390/info17010002 - 19 Dec 2025
Viewed by 583
Abstract
Data encryption is a core mechanism in modern security services for protecting confidential data at rest and in transit. This work introduces the Super Encryption Standard (SES), a symmetric block cipher that follows the overall workflow of the Advanced Encryption Standard (AES) but [...] Read more.
Data encryption is a core mechanism in modern security services for protecting confidential data at rest and in transit. This work introduces the Super Encryption Standard (SES), a symmetric block cipher that follows the overall workflow of the Advanced Encryption Standard (AES) but adopts a key-dependent design to enlarge the effective key space and improve execution efficiency. The SES accepts a user-supplied key file and a selectable block dimension, from which it derives per-block round material and a dynamic substitution box generated using SHA-512. Each round relies only on XOR and a conditional half-byte swap driven by key-derived row and column vectors, enabling lightweight diffusion and confusion with low implementation cost. Experimental evaluation using multiple color images of different sizes shows that the proposed SES algorithm achieves faster encryption than the AES baseline and produces a ciphertext that behaves statistically like random noise. The encrypted images exhibit very low correlation between adjacent pixels, strong sensitivity to even minor changes in the plaintext and in the key, and resistance to standard statistical and differential attacks. Analysis of the SES substitution box also indicates favorable differential and linear properties that are comparable to those of the AES. The SES further supports a very wide key range, scaling well beyond typical fixed-length keys, which substantially increases brute-force difficulty. Therefore, the SES is a promising cipher for image encryption and related data-protection applications. Full article
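One way a key-dependent substitution box can be derived from SHA-512, as the abstract describes, is to use hash output bytes to shuffle the identity permutation. This is a hypothetical sketch under assumed details (counter-based hashing, Fisher–Yates shuffle); the SES paper's exact derivation may differ.

```python
import hashlib

def derive_sbox(key: bytes) -> list:
    """Derive a key-dependent 256-entry substitution box (illustrative only).

    Repeatedly hashes key || counter with SHA-512 and uses the resulting
    byte stream to drive a Fisher-Yates shuffle of the identity box.
    """
    sbox = list(range(256))
    stream = b""
    counter = 0
    while len(stream) < 256:  # 4 SHA-512 digests give 256 bytes
        stream += hashlib.sha512(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    for i in range(255, 0, -1):  # Fisher-Yates driven by hash bytes
        j = stream[255 - i] % (i + 1)
        sbox[i], sbox[j] = sbox[j], sbox[i]
    return sbox

sbox = derive_sbox(b"example key file contents")
assert sorted(sbox) == list(range(256))  # a permutation, hence invertible
```

Because the box is a permutation of 0–255 for any key, decryption can invert it; different keys yield different boxes, which is what enlarges the effective key space relative to a fixed S-box.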
(This article belongs to the Special Issue Internet of Things and Cloud-Fog-Edge Computing, 2nd Edition)

16 pages, 1560 KB  
Article
Performance Comparison of U-Net and Its Variants for Carotid Intima–Media Segmentation in Ultrasound Images
by Seungju Jeong, Minjeong Park, Sumin Jeong and Dong Chan Park
Diagnostics 2026, 16(1), 2; https://doi.org/10.3390/diagnostics16010002 - 19 Dec 2025
Viewed by 463
Abstract
Background/Objectives: This study systematically compared the performance of U-Net and its variants for automatic analysis of carotid intima-media thickness (CIMT) in ultrasound images, focusing on segmentation accuracy and real-time efficiency. Methods: Ten models were trained and evaluated using a publicly available Carotid [...] Read more.
Background/Objectives: This study systematically compared the performance of U-Net and its variants for automatic analysis of carotid intima-media thickness (CIMT) in ultrasound images, focusing on segmentation accuracy and real-time efficiency. Methods: Ten models were trained and evaluated using a publicly available Carotid Ultrasound Boundary Study (CUBS) dataset (2176 images from 1088 subjects). Images were preprocessed using histogram-based smoothing and resized to a resolution of 256 × 256 pixels. Model training was conducted using identical hyperparameters (50 epochs, batch size 8, Adam optimizer with a learning rate of 1 × 10−4, and binary cross-entropy loss). Segmentation accuracy was assessed using Dice, Intersection over Union (IoU), Precision, Recall, and Accuracy metrics, while real-time performance was evaluated based on training/inference times and the model parameter counts. Results: All models achieved high accuracy, with Dice/IoU scores above 0.80/0.67. Attention U-Net achieved the highest segmentation accuracy, while UNeXt demonstrated the fastest training/inference speeds (approximately 420,000 parameters). Qualitatively, UNet++ produced smooth and natural boundaries, highlighting its strength in boundary reconstruction. Additionally, the relationship between the model parameter count and Dice performance was visualized to illustrate the tradeoff between accuracy and efficiency. Conclusions: This study provides a quantitative/qualitative evaluation of the accuracy, efficiency, and boundary reconstruction characteristics of U-Net-based models for CIMT segmentation, offering guidance for model selection according to clinical requirements (accuracy vs. real-time performance). Full article
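The Dice and IoU metrics used to compare the ten models are standard overlap measures on binary masks; a minimal sketch (not the study's evaluation code, and the epsilon smoothing term is an assumption):

```python
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    """Dice coefficient and Intersection over Union for binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return float(dice), float(iou)

pred = [[1, 1, 0], [0, 1, 0]]
target = [[1, 0, 0], [0, 1, 1]]
d, i = dice_iou(pred, target)
# intersection = 2, union = 4 -> Dice = 4/6 ~ 0.667, IoU = 0.5
```

Note that Dice = 2·IoU/(1 + IoU), so the two metrics rank models identically; reporting both mainly aids comparison across papers.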
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)

17 pages, 1509 KB  
Article
Full Characterization of Corpus Luteum Morphological Dynamics, Echotexture, and Blood Flow During Different Stages of the Follicular Wave in Spontaneously Non-Mated Female Camels (Camelus dromedarius)
by Abdulrhman K. Alhaider, Ibrahim A. Emam and Elshymaa A. Abdelnaby
Vet. Sci. 2025, 12(12), 1212; https://doi.org/10.3390/vetsci12121212 - 18 Dec 2025
Viewed by 472
Abstract
This study was designed, for the first time, to fully characterize the corpus luteum’s (CL) dynamics, echotexture, and ovarian blood flow on the ipsilateral side of the CL during different stages of the follicular wave in spontaneously non-mated camels (Camelus dromedarius) [...] Read more.
This study was designed, for the first time, to fully characterize the corpus luteum’s (CL) dynamics, echotexture, and ovarian blood flow on the ipsilateral side of the CL during different stages of the follicular wave in spontaneously non-mated camels (Camelus dromedarius) and to correlate the CL’s size and echotexture with Doppler parameters. Of 20 non-mated camels, only 7 exhibited spontaneous ovulation. B- and color-mode analyses of the CL were estimated; CL frozen image echotextures [CL echogenicity (CLE) and CL heterogeneity (CLH)] and ovarian artery (OV. A.) dynamics were recorded, and ultrasound scanning was performed. Blood sampling and progesterone (P4) levels were measured after ovulation. CL diameter and echotexture were elevated (p = 0.025 and p = 0.037) at the mid-maturation stage compared with the early growth and late regression stages (1.03 ± 0.45 cm and 82.65 ± 2.87 for CLE and 33.65 ± 1.83 for CLH vs. 1.98 ± 0.88 cm; 66.52 ± 4.32 for CLE and 15.66 ± 0.25 for CLH vs. 1.02 ± 0.02 cm, 65.12 ± 2.66 for CLE, and 19.32 ± 1.33 for CLH), as those parameters are critical in the determination of CL activity. Ipsilateral OV. A. diameter increased (p = 0.021) in the mid-maturation and regression stages, with a significant elevation in Doppler velocities (p = 0.025) in the maturation stage and a decline in Doppler indices (p = 0.013), while the contralateral side was not affected. Ipsilateral mean velocity (Vm; cm/s) and blood flow volume (BFV; mL/min) were increased in the mid-maturation stage (23.55 ± 0.66 cm/s and 25.62 ± 0.32 mL/min). CL diameter was positively correlated with the CL’s total colored area/pixels (r = 0.81; p = 0.001), total colored area % (r = 0.93; p = 0.001), and OV. A. velocities (r = 0.96; p = 0.001). In addition, there was a positive correlation between CLH and OV. A. BFV (r = 0.89; p = 0.001).
After spontaneous ovulation, the CL increases in diameter and reaches its peak on day 12, with an elevation in the P4 level at day 10, and the total colored area of the CL continues to elevate until day 14. Ipsilateral OV. A. blood flow is elevated and linked to changes that occur in the CL’s total coloration %. Evaluating luteal function in camels presents several challenges due to the species’ unique reproductive physiology. Full article
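The r values reported above are Pearson correlation coefficients between paired ultrasound measurements. A minimal sketch of the computation, on hypothetical paired values (the numbers below are invented for illustration, not study data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()  # centre both series
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Hypothetical pairs: CL diameter (cm) vs. total colored area (%).
diam = [1.0, 1.4, 1.8, 2.0, 1.2]
area = [20.0, 32.0, 41.0, 47.0, 26.0]
r = pearson_r(diam, area)  # close to 1 for this near-linear toy data
```

An r near 1 with p = 0.001, as reported, indicates that larger CLs in this cohort were almost always accompanied by proportionally greater luteal blood flow.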
(This article belongs to the Special Issue Advances in Morphology and Histopathology in Veterinary Medicine)
