Search Results (5,594)

Search Parameters:
Keywords = similarity map

25 pages, 30787 KB  
Article
Cluster Analysis for Different Physiognomies and Spatiotemporal Patterns from Vegetation Indices in São Paulo State
by Francisco Javier Tipan Salazar, Carla Rodrigues Santos, Fernanda Beatriz Jordan Rojas Dallaqua and Bruno Schultz
Geographies 2026, 6(2), 46; https://doi.org/10.3390/geographies6020046 (registering DOI) - 2 May 2026
Abstract
Multi-temporal orbital satellite imagery is an alternative for measuring behavioral patterns or trends in different physiognomies through vegetation indices (VIs) and Spectral Linear Mixture Models (SLMMs). In this study, time series of Landsat 7/8/9 and Sentinel-2 have been used to classify a considerable number of areas spread across São Paulo state from 2021 to 2024. Because of the large number of samples considered in our analysis, self-organizing maps (SOMs) have been applied as a convenient method to group similar satellite image time series samples with respect to a certain vegetation index or green vegetation fraction (VEG). Since every dataset area belongs to different types of physiognomies, each cluster has been labeled according to the plurality technique. Additionally, we obtained the mean spectral behavior of the VIs and VEG in the 2021–2024 seasonal cycle of all samples. The results showed similar variations from the rainy to the dry season for most of the physiognomies. Moreover, this research indicates that the proposed method for classifying the Brazilian areas spread across São Paulo state is consistently good, obtaining the best performance (quantization error) with Normalized Difference Vegetation Index (NDVI) time series samples.
(This article belongs to the Special Issue Geography as a Transdisciplinary Science in a Changing World)
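As a concrete aside, the two quantities this abstract leans on — the NDVI and the SOM quantization error used to compare clusterings — are both short to state. A minimal NumPy sketch (illustrative only; the array shapes and codebook format are assumptions, not the authors' pipeline):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def quantization_error(samples, codebook):
    """SOM quality measure: mean Euclidean distance from each sample
    (row of `samples`) to its best-matching unit (closest codebook row)."""
    d = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

A lower quantization error means the map's units summarize the time series more faithfully, which is the sense in which the abstract ranks NDVI above the other indices.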
22 pages, 5905 KB  
Article
Towards Balanced Supervision: Cumulative Quality-Based Dynamic Assignment for Fine-Grained Remote Sensing Object Detection
by Yida Pan, Haoran Zhu, Zijuan Chen, Guangyou Yang and Wen Yang
Remote Sens. 2026, 18(9), 1406; https://doi.org/10.3390/rs18091406 (registering DOI) - 2 May 2026
Abstract
Fine-grained object detection (FGOD) is crucial for identifying visually similar sub-categories in remote sensing imagery. However, existing detectors suffer from severe supervision imbalance because static label assignment strategies assign a fixed number of positive samples to all sub-categories and targets. To address this challenge, this paper presents Cumulative Quality-based Dynamic Assignment (CQDA), a fine-grained-aware label assignment algorithm that dynamically calculates the optimal positive budget for each instance based on its cumulative alignment quality. Moreover, to further resolve feature-space confusion, this paper introduces two modules: a frequency-decoupled enhancement algorithm to sharpen discriminative features, and an orthogonal classification head to maximize inter-class separability. When integrated into the KFIoU framework, the proposed method consistently achieves performance improvements of 4.2, 15.8, and 35.3 in mAP@0.5 on the fine-grained oriented object detection datasets FAIR1M-v2, MAR20, and ShipRSImageNet, respectively, in extensive experiments.
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
29 pages, 1779 KB  
Article
BWT-Enhanced Compression for GIS Raster Data: A Hybrid AV1-Inspired Approach with Burrows–Wheeler Transform
by Yair Wiseman
Big Data Cogn. Comput. 2026, 10(5), 140; https://doi.org/10.3390/bdcc10050140 - 1 May 2026
Abstract
The AVIF (AV1 Image File Format) is a modern, royalty-free image format that leverages the AV1 video codec for superior compression efficiency, supporting both lossy and lossless modes. Its entropy encoding relies on a multi-symbol context-adaptive arithmetic coder (range coding with adaptive cumulative distribution functions (CDFs)), which is effective for general imagery but may not optimally exploit the repetitive structures common in Geographic Information System (GIS) maps/data. This paper proposes replacing AVIF's entropy encoder with the Burrows–Wheeler Transform (BWT), a reversible preprocessing algorithm that rearranges data to create runs of similar symbols, enhancing subsequent compression. We detail the technical steps for modification, drawing from AV1's open-source implementation, and explain why BWT is advantageous for GIS raster maps/data, which often feature large uniform areas, limited color palettes, and spatial redundancies. Empirical evidence from related studies on BWT-based image compression shows improvements in lossless scenarios, potentially considerably reducing file sizes over standard methods while preserving data integrity critical for geospatial analysis. This swap could improve storage, transmission, and processing efficiency in GIS applications, such as remote sensing and cartography. The discussion includes challenges like computational overhead and compatibility, with recommendations for implementations. The resulting BWT-AVIF hybrid produces a non-standard AV1 bit-stream that is not compliant with the AV1 or AVIF specifications and therefore requires custom decoders. It is presented here as a research prototype for GIS-specific compression rather than a compliant AVIF extension.
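The transform itself is small enough to show. A naive sketch (O(n²) rotation sort with a NUL sentinel; production codecs use suffix-array constructions instead):

```python
def bwt(s, sentinel="\0"):
    """Burrows-Wheeler Transform: append a sentinel smaller than every
    symbol, sort all rotations, and return the last column."""
    s = s + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(last, sentinel="\0"):
    """Invert by repeatedly prepending the last column and re-sorting;
    the row ending in the sentinel is the original string."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    row = next(r for r in table if r.endswith(sentinel))
    return row.rstrip(sentinel)
```

The point for GIS rasters is visible even on toy input: the transform groups equal symbols into runs that a subsequent run-length or entropy stage compresses well, and it is fully reversible.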
39 pages, 963 KB  
Article
Complex-Valued Unitary Superposition–Driven Multi-Qubit Encoding for Quantum Video Transmission
by Udara Jayasinghe and Anil Fernando
Electronics 2026, 15(9), 1906; https://doi.org/10.3390/electronics15091906 - 30 Apr 2026
Viewed by 20
Abstract
Reliable high-fidelity video transmission over noisy quantum channels remains challenging, especially due to temporal dependencies introduced by modern video compression standards. These codecs, such as versatile video coding (VVC), employ inter-frame prediction and group-of-pictures (GOP) structures, which are highly sensitive to channel noise and can lead to error propagation across frames. Conventional quantum encoding schemes, such as Hadamard-based superposition encoding, use fixed real-valued basis transformations that provide limited phase diversity and underutilize the multi-qubit state-space, reducing robustness under noisy quantum channels. To overcome these limitations, this study proposes a multi-qubit complex-valued orthogonal unitary superposition (COUS) encoding framework for quantum video transmission. In the proposed system, VVC-compressed video bitstreams are first protected using classical channel encoding, then segmented and mapped onto multi-qubit COUS quantum states, enabling joint amplitude and phase representation with improved resilience to quantum noise. At the receiver, transmitted quantum states undergo sequential COUS decoding, channel decoding, and VVC bitstream reconstruction to recover the original video frames. The simulation results show that the COUS-based multi-qubit system outperforms the Hadamard encoding-based multi-qubit system, achieving a peak signal-to-noise ratio (PSNR) of up to 47.22 dB, a structural similarity index measure (SSIM) of up to 0.9905, and a video multi-method assessment fusion (VMAF) score of up to 96.49. Even single-qubit COUS encoding achieves a 3–4 dB channel SNR gain, while higher-qubit configurations further enhance robustness and reconstructed video quality. These results confirm that the proposed framework is scalable, noise-resilient, and provides high-fidelity quantum video transmission over noisy channels.
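For reference, the PSNR figure quoted here is 10·log10(peak²/MSE) per frame. A minimal version, assuming 8-bit video (peak = 255):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and a
    reconstructed frame; identical frames give +infinity."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```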
16 pages, 2473 KB  
Article
Incorporating Crop-Centric Segmentation and Enhanced YOLOv10 for Indirect Weed Detection in Bok Choy Fields
by Weili Li, Wenpeng Zhu, Qianyu Wang, Feng Gao, Kang Han and Xiaojun Jin
Agronomy 2026, 16(9), 907; https://doi.org/10.3390/agronomy16090907 - 30 Apr 2026
Viewed by 57
Abstract
Weed infestation poses a significant threat to bok choy (Brassica rapa subsp. chinensis) cultivation, reducing crop yield and quality through resource competition and pest facilitation. Traditional weed detection methods face two major bottlenecks: one is data annotation, arising from the need for extensive, species-diverse datasets, and the other is visual discrimination, due to the high morphological similarity between crops and weeds at certain growth stages. To address these challenges, this study proposed an indirect weed detection framework that combines an optimized You Only Look Once version 10 (YOLOv10) model for crop detection with Excess Green (ExG)-based segmentation of residual vegetation. The model incorporates RFD and C2f-WDBB modules to improve feature preservation and multi-scale fusion. Compared with the baseline YOLOv10, the final proposed RCW-YOLOv10 reduced the number of parameters by 1.04 million and improved detection performance, achieving increases of 3.5, 1.5, and 1.1 percentage points in Precision, Recall, and mAP50, respectively, under field conditions. The system initially detected bok choy plants, subsequently localizing weeds by masking crop regions and thresholding residual ExG signals in the uncovered areas. The detected weed coordinates were used to construct a distribution map that may support targeted control in precision agriculture. This approach simplifies weed identification under the tested bok choy field conditions and may be adaptable to other crops after further validation.
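The indirect-detection idea — detect the crop, mask it out, and treat the remaining vegetation as weeds — can be sketched with the ExG index alone. The box format and threshold below are assumptions for illustration, not the paper's tuned values:

```python
import numpy as np

def excess_green(rgb):
    """ExG = 2G - R - B on a float RGB image of shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2.0 * g - r - b

def residual_weed_mask(rgb, crop_boxes, thresh=0.1):
    """Vegetation (ExG above threshold) outside detected crop boxes is
    labeled weed; boxes are assumed to be (x0, y0, x1, y1) pixel corners."""
    exg = excess_green(np.asarray(rgb, dtype=float))
    veg = exg > thresh
    for x0, y0, x1, y1 in crop_boxes:
        veg[y0:y1, x0:x1] = False  # zero out the crop regions
    return veg
```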
16 pages, 3451 KB  
Article
Air Knives: Going Beyond the Classical Midspan Pressure Distributions
by Celia Miguel-González, Aitor Vega-Valladares, Manuel García-Díaz, Alejandro Rodrígurez de Castro, José González Pérez and Bruno Pereiras
Fluids 2026, 11(5), 113; https://doi.org/10.3390/fluids11050113 - 30 Apr 2026
Viewed by 11
Abstract
Air knives are extensively employed in many cold rolling or tin plate production lines for drying purposes. Generally, these systems are oversized, resulting in excessive energy consumption, a consequence of insufficient understanding of their performance. Considering this deficiency, an empirical exploration was initiated to analyze the functionality of an air knife oriented perpendicularly to a given surface. Given the scarcity of information within the current body of literature, particular emphasis was placed on the regions affected by the finite dimensions of the device. Impingement pressure distributions were measured at the midspan plane and planes parallel to the midspan but extending beyond the projection of the air knife. The midspan impingement pressure profile aligned with the established bell-shaped distribution, whereas the outcomes beyond the air knife's projection conformed to an analytically fitted similarity principle. Consequently, the mathematical formulations introduced in this study facilitate the mapping of the impingement pressure within the whole impingement plane, encompassing areas influenced by the finite length of the air knife, thereby representing the innovative contribution of this research.
17 pages, 5249 KB  
Article
An Indoor Mapping Algorithm Fusing LiDAR-IMU Tightly Coupled Fusion and Scan Context: IS-LEGO-LOAM
by Junying Yun, Zhoufeng Liu, Xintong Wan, Gefei Duan, Bowen Tian and Yajing Gao
Sensors 2026, 26(9), 2789; https://doi.org/10.3390/s26092789 - 30 Apr 2026
Viewed by 234
Abstract
Indoor environments often contain numerous areas with sparse structural features, such as long corridors, large atriums, and glass curtain walls. These conditions can lead to difficulties in loop closure detection and accumulated positioning errors, resulting in localization drift or even mapping failure during map construction. This paper proposes an indoor mapping algorithm called IS-LEGO-LOAM that integrates tightly coupled LiDAR-IMU fusion and Scan Context. A tightly coupled LiDAR-IMU odometry is constructed, and an adaptive covariance matrix is designed to solve the problems of abnormal LiDAR echoes and insufficient effective feature extraction caused by sparse indoor feature points. By introducing the Scan Context global descriptor and adopting the strategies of vector nearest neighbor search and similarity score matching, the drift problem in large-scale scenes is alleviated. Finally, validation is performed on the KITTI dataset and in real-world scenarios. Experiments show that the improved IS-LEGO-LOAM achieves superior mapping performance.
(This article belongs to the Section Radar Sensors)
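A stripped-down Scan Context descriptor, as an illustration of the loop-closure idea: bin each LiDAR point into a (ring, sector) polar grid keyed on maximum height, then compare two scans by the best cosine score over sector rotations. Grid sizes and non-negative point heights are assumptions of this sketch, not the paper's configuration:

```python
import numpy as np

def scan_context(points, n_rings=20, n_sectors=60, max_range=80.0):
    """Simplified Scan Context: max point height per (ring, sector) bin.
    `points` is an (N, 3) array of x, y, z; heights assumed >= 0."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    r = np.hypot(x, y)
    keep = r < max_range
    ring = (r[keep] / max_range * n_rings).astype(int)
    sector = ((np.arctan2(y[keep], x[keep]) + np.pi)
              / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    desc = np.zeros((n_rings, n_sectors))
    np.maximum.at(desc, (ring, sector), z[keep])
    return desc

def sc_similarity(d1, d2):
    """Rotation-invariant similarity: best cosine score over sector shifts,
    so the same place revisited at a different heading still matches."""
    best = -1.0
    for shift in range(d2.shape[1]):
        d2s = np.roll(d2, shift, axis=1)
        den = np.linalg.norm(d1) * np.linalg.norm(d2s)
        best = max(best, float((d1 * d2s).sum() / den) if den else 0.0)
    return best
```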
26 pages, 54080 KB  
Article
MPES-YOLO: A Multi-Scale Lightweight Framework with Selective Edge Enhancement for Loess Landslide Detection
by Hanyu Cheng, Jiali Su, Jiangbo Xi, Haixing Shang, Zhen Zhang, Bingkun Wang and Pan Li
Remote Sens. 2026, 18(9), 1374; https://doi.org/10.3390/rs18091374 - 29 Apr 2026
Viewed by 179
Abstract
Loess landslides in northwestern China are highly unstable and difficult to distinguish due to sparse vegetation and their spectral and morphological similarity to the surrounding terrain. These landslides demonstrate considerable diversity in manifestation, encompassing shallow translational slides, small-scale features, partially obscured formations, and instances with irregular or poorly defined boundaries. To address the above issues, we propose MPES-YOLO, a multi-scale lightweight YOLO-based framework with selective edge enhancement to detect loess landslides. This model is based on the YOLOv8 architecture and incorporates a multi-scale partial convolution and exponential moving average (MPCE) module to improve multi-scale feature representation while reducing computational cost and enhancing small-target sensitivity. Additionally, to address ambiguous boundaries, a selective edge enhancement (SEE) module is introduced to extract authentic object edges from original images and inject them into key training layers, improving boundary perception. Finally, SIoU is adopted to improve geometric consistency for irregular landslide boundary localization. This paper first verified the basic detection performance of MPES-YOLO on the publicly available Bijie landslide dataset. Then, an experimental study was conducted on the loess landslides of Yan'an City, Shaanxi Province. The mAP@0.5 was 91.9%, and the number of parameters was reduced by 23.3% compared with the baseline model. A generalization experiment was also carried out on landslides in the Ningxia region, with the mAP@0.5 reaching 97.4%. The results show that MPES-YOLO achieves a strong balance between detection accuracy and computational efficiency, providing an effective and scalable solution for automated loess landslide detection and geological disaster early warning.
21 pages, 41291 KB  
Article
Unraveling the Spectral–Spatial Mechanisms of Mineral Identification: A Case Study on CASI Data Using SpectralFormer and Traditional Classifiers
by Huilin Yang, Kai Qin, Yuxi Hao, Ming Li, Ling Zhu, Yuechao Yang and Yingjun Zhao
Remote Sens. 2026, 18(9), 1365; https://doi.org/10.3390/rs18091365 - 29 Apr 2026
Viewed by 182
Abstract
Traditional diagnostic spectroscopy provides a physically interpretable basis for mineral identification. However, how modern classifiers balance spectral and spatial information remains insufficiently understood. This study investigates this issue using CASI airborne hyperspectral data from the Liuyuan area, China. A geologically constrained ground-truth dataset was constructed based on expert knowledge and a semi-automatic Spectral Hourglass workflow. We evaluated representative shallow machine learning methods and deep learning models, including a three-dimensional convolutional neural network (3D-CNN), Vision Transformer (ViT), and SpectralFormer. The Support Vector Machine (SVM) achieved the highest overall accuracy but showed a strong bias toward dominant background classes and failed to reliably detect rare minerals such as jarosite. Deep learning models improved class balance by incorporating broader spectral features. However, excessive spatial aggregation reduced their sensitivity to small and fragmented alteration zones. SpectralFormer models hyperspectral data as ordered spectral sequences and showed more stable performance for spectrally similar and rare minerals. Multi-scale experiments reveal a spectral-dominant discrimination mechanism. Increasing the spectral receptive field improves classification up to an optimal level. In contrast, overly large spatial patches introduce background interference and obscure diagnostic absorption features. These findings highlight the fundamental role of spectral continuity in airborne hyperspectral alteration mineral mapping and clarify the trade-offs involved in integrating spatial context.
(This article belongs to the Special Issue Advanced Hyperspectral Imaging and AI for Geological Applications)
29 pages, 10384 KB  
Article
OShipNet: Occlusion Ship Detection Based on Multidomain Fusion and Multiscale Refinement
by Shengying Yang, Haowei Luo, Zhenyu Xu, Jing Yang and Wei Zhang
J. Mar. Sci. Eng. 2026, 14(9), 804; https://doi.org/10.3390/jmse14090804 - 28 Apr 2026
Viewed by 189
Abstract
The growth in international trade has precipitated operational demands on port facilities, mandating the development of advanced intelligent monitoring systems. Existing ship detection algorithms struggle with feature confusion and difficulty in extracting contextual features under occlusion, which reduces the discriminability between object features and background noise. This leads to positional misalignment and mismatching of similar targets, which reduce the detection accuracy. To resolve this, we propose OShipNet, an architecture engineered to optimize feature fusion and refinement for occluded ship detection. First, we design the OShipNeXt backbone network, which provides complementary feature representation in frequency and spatial domains. This approach enables the reconstruction of global–local semantic associations for occluded objects, enhancing feature representation and improving detection accuracy. Secondly, to further refine target boundaries, we develop a Multiscale Pooling Attention Module (MSPAM) to enhance contextual awareness and better capture occluded edge features. Furthermore, we propose a dual-path cooperative loss function that mitigates the effects of low-quality bounding boxes. Comprehensive evaluations on the MVDD13 dataset demonstrate the robustness of OShipNet, which achieved 94.98% mAP@50 and 84.37% mAP@50-95, demonstrating advantages over existing object detection methods and establishing an effective framework for intelligent port monitoring.
21 pages, 2785 KB  
Article
Comparative Evaluation of Deep Learning Object Detectors for Embedded Weed Detection on Resource-Constrained Platforms
by Nurtay Albanbay, Yerik Nugman, Mukhagali Sagyntay, Azamat Mustafa, Ramona Blanes, Algazy Zhauyt, Rustem Kaiyrov and Nurgali Nurgozhayev
Technologies 2026, 14(5), 265; https://doi.org/10.3390/technologies14050265 - 27 Apr 2026
Viewed by 140
Abstract
Computer vision–based weed detection plays a critical role in agricultural robotics, enabling accurate, selective weeding. These systems operate on resource-constrained embedded platforms, which introduces a significant trade-off between accuracy and efficiency. This study presents a comparative evaluation of six detection models (YOLOv11n, YOLOv11s, SSD-Lite, NanoDet, Faster R-CNN, RT-DETR) for agro-robotic applications, measuring precision, recall, mAP@0.5, and runtime on low-power hardware. NanoDet achieved the highest detection accuracy (precision 98.6%, recall 94.2%, mAP@0.5 97.7%). YOLOv11s demonstrated similar performance (mAP@0.5: 96.1%) but required more computation. YOLOv11n provides the most favourable balance between accuracy and throughput (mAP@0.5: 94.6%, 207 FPS on a workstation). On Raspberry Pi 5, lightweight models achieved 3–5 FPS. RT-DETR and Faster R-CNN exhibited high latency (3112–6500 ms/frame), which prevents real-time operation. NanoDet excelled in detection, while YOLOv11n provides the best balance between accuracy and efficiency for limited devices.
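All of the mAP@0.5 figures in this comparison bottom out in a 50% box-overlap test. Intersection-over-union for corner-format boxes (a sketch; the evaluated frameworks each ship their own implementation):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x0, y0, x1, y1) corner coordinates."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0
```

A detection counts toward mAP@0.5 only when its IoU with a ground-truth box of the same class reaches 0.5.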
25 pages, 5188 KB  
Article
MonoCrown for Crown-Level Tree Species Semantic Segmentation in Heterogeneous Forests Using UAV RGB Imagery
by Linzhi Wen and Guangsheng Chen
Remote Sens. 2026, 18(9), 1338; https://doi.org/10.3390/rs18091338 - 27 Apr 2026
Viewed by 201
Abstract
Crown-level tree species semantic segmentation enables fine-grained forest inventory and management. Current high-precision tree species classification typically relies on multi-source remote sensing data, the acquisition and processing of which remain costly for large-area applications, making low-cost unmanned aerial vehicle (UAV) RGB imagery an attractive option for large-scale forest mapping. However, in heterogeneous forests, complex canopy structures and the limited spectral discriminability of low-cost UAV RGB imagery make 2D appearance cues alone insufficient for reliable species discrimination, crown delineation, and accurate separation of adjacent crowns. This often leads to inter-class confusion, blurred crown boundaries, and poor recognition of small crowns. To address these limitations, this paper proposes MonoCrown (MCrown), which strengthens geometric and contextual representation for distinguishing visually similar species and delineating crowns from single-temporal UAV RGB imagery. To compensate for the insufficiency of appearance cues, MCrown introduces monocular depth inferred offline from the same RGB image as a frozen geometric prior, and integrates cross-window global–local attention (CW-GLA), bidirectional cross-modal attention (BiCoAttn), and depth-adaptive injection (DAI) to capture long-range dependencies and promote complementary use of appearance and geometric features, especially for small crowns with similar visual patterns in complex scenes. To validate the method's effectiveness, a crown-level UAV RGB dataset covering approximately 40 km² was constructed. Systematic comparative experiments were conducted on the proposed dataset and on public benchmarks, supporting the effectiveness of the proposed approach across ten dominant classes, especially for small crowns and visually similar categories. Its mean Intersection over Union (mIoU) and overall accuracy (OA) reached 74.1% and 87.3%, respectively. The method achieves high-precision crown-level tree species semantic segmentation using single-temporal UAV RGB as the sole acquired modality, while monocular depth inferred from the same RGB image serves only as a frozen geometric prior, without requiring multispectral, multi-temporal, or active-sensor acquisitions. This offers a practical solution for crown-level tree species mapping in heterogeneous forests.
(This article belongs to the Section Remote Sensing Image Processing)
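The mIoU metric reported above averages per-class intersection-over-union over the classes present. A minimal version on flattened per-pixel label arrays (a sketch; the paper's exact evaluation protocol may differ):

```python
import numpy as np

def miou(pred, target, n_classes):
    """Mean Intersection-over-Union: per-class IoU averaged over every
    class that appears in either the prediction or the ground truth."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))
```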
20 pages, 5788 KB  
Article
YOLO-ESO: A Lightweight YOLOv10-Based Model for Individual Pig Identification in Complex Farming Environments
by Juanhua Zhu, Lele Song, Tong Fu, Yan Wang, Miao Wang and Ang Wu
Information 2026, 17(5), 421; https://doi.org/10.3390/info17050421 - 27 Apr 2026
Viewed by 184
Abstract
In intensive farming, contactless individual pig identification is crucial for precision feeding and health monitoring. However, real-world barn conditions—such as fluctuating illumination, severe occlusions, non-rigid poses, and high inter-individual similarity—pose significant challenges. Existing models struggle to balance high accuracy with lightweight deployment. To address this, we propose YOLO-ESO, an optimized detection framework based on YOLOv10n. YOLO-ESO introduces three core innovations: (1) integrating the C2f_ODConv module into the backbone to strengthen feature learning under complex poses via dynamic convolution; (2) redesigning the neck with a Semantics and Detail Infusion (SDI) module to improve multi-scale fusion while suppressing background noise; and (3) embedding an Efficient Multi-Scale Attention (EMA) mechanism before the detection head to capture fine-grained identity cues like texture and contours. Evaluated on a real-world pig dataset, YOLO-ESO achieves an mAP@0.5 of 96.6%, an mAP@0.5:0.95 of 71.1%, and an F1 of 92.0%. YOLO-ESO surpasses state-of-the-art detectors including YOLOv8, YOLOv11, and RT-DETR, while introducing only 8.7 GFLOPs and 3.48 million parameters. Overall, the proposed YOLO-ESO provides an accurate and lightweight solution for robust individual pig identification in complex farming environments, showing strong potential for practical deployment in precision livestock farming.
23 pages, 1845 KB  
Article
Dynamics and Engagement Mechanisms of the Intangible Cultural Heritage Knowledge Ecosystem: An Integration of Topic Characteristics and User Demands on Social Q&A Platforms
by Liuxing Lu, Xiaoyang Lin, Jiaqi Zhang and Ning Zhang
Systems 2026, 14(5), 468; https://doi.org/10.3390/systems14050468 - 26 Apr 2026
Viewed by 122
Abstract
Despite the rapid digitization of intangible cultural heritage (ICH), the complex mechanisms governing how users interact and co-create knowledge in digital spaces remain underexplored. Understanding the internal dynamics and engagement logic of these interactive environments is therefore essential to developing sustainable heritage knowledge ecosystems. Conceptualizing the Zhihu community as such an ecosystem, this study investigates ICH thematic structures, knowledge demands, and user participation. By employing an LLM-refined BERTopic framework, this study identified 36 core topics and mapped them onto a four-layer architecture (Cultural Resource Layer, Action Subject Layer, Social Support Layer, and External Interaction Layer) and five knowledge demand dimensions (Basic Knowledge, Cultural Experience, Professional Development, Protection and Inheritance, and Modern Application) through weighted semantic similarity and Spearman correlation analysis. The results reveal a structural configuration dominated by the External Interaction Layer. A dual-track demand mechanism was identified, comprising a professionalized ability-oriented pathway and an affective experience-driven mode. Furthermore, deep engagement was primarily catalyzed by topics that integrate technology, action, and narrative, rather than structural prominence alone. The ICH knowledge ecosystem was characterized by an outward-looking and emotion-driven orientation. This study contributes an ecosystem framework to heritage information research while providing insights for practitioners to optimize digital ICH information services through multi-dimensional semantic integration and public co-creation.
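The "weighted semantic similarity and Spearman correlation analysis" step rests on two textbook quantities. Minimal tie-free versions (the study's weighting scheme and embeddings are not reproduced; `cosine` and `spearman` are illustrative helper names):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Double argsort yields ranks; assumes no tied values."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))
```

For data with tied ranks, a tie-corrected implementation (e.g. `scipy.stats.spearmanr`) should be used instead.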
19 pages, 8343 KB  
Article
TAHRNet: An Improved HRNet-Based Semantic Segmentation Model for Mangrove Remote Sensing Imagery
by Haonan Lin, Dongyang Fu, Chuhong Wang, Jinjun Huang, Hanrui Wu, Yu Huang and Litian Xiong
Forests 2026, 17(5), 525; https://doi.org/10.3390/f17050525 - 25 Apr 2026
Viewed by 109
Abstract
Mangroves represent vital coastal ecosystems that contribute to shoreline stabilization, ecological balance, and environmental management. Nevertheless, the precise delineation of mangrove regions using remote sensing data is often impeded by spectral similarities with intertidal mudflats and aquatic features, alongside the irregular spatial patterns and intricate margins of mangrove stands. This research utilizes high-resolution Gaofen-6 (GF-6) satellite observations as the foundational data to develop the Triplet Axial High-Resolution Network (TAHRNet), a semantic segmentation architecture derived from the High-Resolution Network with Object-Contextual Representations (HRNet-OCR) framework for mangrove identification. The model integrates a Triplet Attention module to facilitate cross-dimensional feature dependencies and an improved Multi-Head Sequential Axial Attention mechanism to capture long-range spatial context while maintaining structural consistency. Based on evaluations using the test dataset, TAHRNet yielded a Mean Intersection over Union (MIoU) of 92.01% and an Overall Accuracy of 96.38%. Relative to U-Net and SegFormer, the proposed approach showed MIoU improvements of 5.25% and 1.88%, with corresponding Accuracy gains of 2.68% and 0.94%. Further application to coastal mapping in Zhanjiang produced results that align with manual visual interpretation. These findings suggest that TAHRNet is a viable tool for mangrove extraction and can provide technical support for coastal monitoring and ecological analysis.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)