Search Results (999)

Search Parameters:
Keywords = contour features

21 pages, 1320 KB  
Article
Adaptive Decision Fusion in Probability Space for Pedestrian Gender Recognition
by Lei Cai, Huijie Zheng, Fang Ruan, Feng Chen, Wenjie Xiang, Qi Lin and Yifan Shi
Appl. Sci. 2026, 16(8), 3640; https://doi.org/10.3390/app16083640 - 8 Apr 2026
Viewed by 114
Abstract
Pedestrian gender recognition plays an important role in pedestrian analysis and intelligent video applications, for example, in demographic statistics, soft biometric analysis, and context-aware person retrieval. However, it remains a challenging task owing to viewpoint variations, illumination changes, occlusions, and low image quality in real-world imagery. To address these issues, an effective adaptive decision fusion framework, termed the Decision Fusion Learning Network (DFLN), is proposed in this paper. The key novel aspect of DFLN is that it effectively explores both an appearance-centered view that emphasizes detailed texture and clothing information and a structure-centered view that captures rich contour and structural information for pedestrian gender recognition. To realize DFLN, a Parallel CNN Prediction Probability Learning Module (PCNNM) is first constructed to independently learn modality-specific probabilities from color image and edge maps. Subsequently, a learnable Decision Fusion Module (DFM) is designed to fuse the modality-specific probabilities and explore their complementary merits for realizing accurate pedestrian gender recognition. The DFM can be easily coupled with the PCNNM, forming an end-to-end decision fusion learning framework that simultaneously learns the feature representations and carries out adaptive decision fusion. Experiments on two pedestrian benchmark datasets, named PETA and PA-100K, show that DFLN achieves competitive or superior performance compared with several state-of-the-art pedestrian gender recognition methods. Extensive experimental analysis further confirms the effectiveness of the proposed decision fusion strategy and its favorable generalization ability under domain shift. Full article
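
The fusion rule is described only at a high level; as a hedged sketch (hypothetical fixed weight, not the authors' implementation), decision fusion in probability space can be as simple as a convex combination of the two modality-specific class-probability vectors, with the weight learned end-to-end in the actual DFM:

```python
# Hedged sketch of decision-level fusion in probability space: a convex
# combination of modality-specific class probabilities. The weight alpha
# would be learned by the DFM in DFLN; here it is fixed for illustration.

def fuse_probabilities(p_appearance, p_structure, alpha=0.6):
    """Fuse two class-probability vectors with a scalar fusion weight."""
    assert len(p_appearance) == len(p_structure)
    fused = [alpha * a + (1 - alpha) * s
             for a, s in zip(p_appearance, p_structure)]
    total = sum(fused)                 # renormalize (guards rounding drift)
    return [f / total for f in fused]

# Color-image branch is fairly confident; edge-map branch is less sure.
p_rgb  = [0.80, 0.20]   # [P(female), P(male)] from the appearance view
p_edge = [0.55, 0.45]   # from the structure (edge-map) view

fused = fuse_probabilities(p_rgb, p_edge, alpha=0.6)
label = "female" if fused[0] > fused[1] else "male"
```

In the paper the PCNNM branches and the fusion weights are trained jointly, so the fused decision adapts per input rather than using a global constant.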

29 pages, 10248 KB  
Article
Fs2PA: A Full-Scale Feature Synergistic Perception Architecture for Vehicular Infrared Object Detection via Physical Priors and Semantic Constraints
by Boxuan Pei, Leyuan Wu, Xiaoyan Zheng, Chao Zhou and Dingxiang Wang
Sensors 2026, 26(7), 2257; https://doi.org/10.3390/s26072257 - 6 Apr 2026
Viewed by 178
Abstract
Vehicular infrared object detection is a key technology supporting autonomous driving systems to achieve all-weather environmental perception. However, infrared images inherently lack texture, resulting in blurred object contours. Additionally, deep network propagation severely erodes and loses feature information of distant tiny objects. To address the above issues, this study proposes a Full-Scale Feature Synergistic Perception Architecture for vehicular infrared object detection. This architecture first designs a Gradient-Informed Attention module, which initializes convolution kernels through physical gradient operators to inject geometric prior information into the network, enhancing the model’s perception capability of blurred object boundaries. Secondly, it constructs a Full-Scale Feature Pyramid containing a P2 high-resolution feature layer to effectively recover the geometric detail features of distant tiny objects. Finally, it proposes a Scale-Aware Shared Head, which relies on a cross-scale parameter sharing mechanism to achieve extreme parameter compression, and simultaneously introduces deep semantic information to form strong constraints, suppressing noise interference in shallow features. Experimental results on the FLIR v2 and M3FD datasets show that the proposed architecture exhibits excellent detection performance. On FLIR v2, it raises mAP@50 to 64.06% (6.51% relative gain vs. YOLOv11) while maintaining 547 FPS inference speed, achieving an optimal accuracy–efficiency balance. Full article

34 pages, 56063 KB  
Article
Deep Learning-Based Intelligent Analysis of Rock Thin Sections: From Cross-Scale Lithology Classification to Grain Segmentation for Quantitative Fabric Characterization
by Wenhao Yang, Ang Li, Liyan Zhang and Xiaoyao Qin
Electronics 2026, 15(7), 1509; https://doi.org/10.3390/electronics15071509 - 3 Apr 2026
Viewed by 264
Abstract
Quantitative microstructure evaluation of sedimentary rock thin sections is essential for revealing reservoir flow mechanisms and assessing reservoir quality. However, traditional manual identification is inefficient and prone to subjectivity. Although current deep learning approaches have improved efficiency, most remain confined to single tasks and lack a pathway to translate image recognition into quantifiable geological parameters. Moreover, these methods struggle with cross-scale feature extraction and accurate grain boundary localization in complex textures. To overcome these limitations, this study proposes a three-stage automated analysis framework integrating intelligent lithology identification, sandstone grain segmentation, and quantitative analysis of fabric parameters. To address scale discrepancies in lithology discrimination, Rock-PLionNet integrates a Partial-to-Whole Context Fusion (PWC-Fusion) module and the Lion optimizer, which mitigates cross-scale feature inconsistencies and enables accurate screening of target sandstone samples. Subsequently, to correct boundary deviations caused by low contrast and grain adhesion, the PetroSAM-CRF strategy integrates polarization-aware enhancement with dense conditional random field (DenseCRF)-based probabilistic refinement to extract precise grain contours. Based on these outputs, the framework automatically calculates key fabric parameters, including grain size and roundness. Experiments on 3290 original multi-source thin-section images show that Rock-PLionNet achieves a classification accuracy of 96.57% on the test set. Furthermore, PetroSAM-CRF reduces segmentation bias observed in general-purpose models under complex texture conditions, enabling accurate parameter estimation with a roundness error of 2.83%. 
Overall, this study presents an intelligent workflow linking microscopic image recognition with quantitative analysis of geological fabric parameters, providing a practical pathway for digital petrographic evaluation in hydrocarbon exploration. Full article
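
The abstract reports grain size and roundness but does not give the formulas; a common convention, assumed here purely for illustration, measures roundness as the circularity 4πA/P² and grain size as the equivalent-circle diameter:

```python
import math

# Hedged sketch: fabric parameters from a segmented grain contour.
# The paper does not define its metrics; circularity = 4*pi*A/P**2
# (1.0 for a perfect circle) is a standard roundness proxy.

def circularity(area, perimeter):
    return 4.0 * math.pi * area / perimeter ** 2

def equivalent_diameter(area):
    """Grain size as the diameter of a circle with the same area."""
    return 2.0 * math.sqrt(area / math.pi)

r = 10.0
circle_area, circle_perim = math.pi * r * r, 2.0 * math.pi * r
side = 10.0
square_area, square_perim = side * side, 4.0 * side

round_circle = circularity(circle_area, circle_perim)   # 1.0
round_square = circularity(square_area, square_perim)   # pi/4, ~0.785
```

Area and perimeter would come from the PetroSAM-CRF grain masks; the formulas above only turn those masks into the quantitative parameters the workflow reports.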

18 pages, 8172 KB  
Article
Dual-Flow Driver Distraction Driving Detection Model Based on Sobel Edge Detection
by Binbin Qin and Bolin Zhang
Vehicles 2026, 8(4), 74; https://doi.org/10.3390/vehicles8040074 - 1 Apr 2026
Viewed by 316
Abstract
Cognitive or visual distraction caused by drivers using mobile phones, operating the central console, or conversing with passengers while driving is a significant contributing factor to road traffic accidents. Aiming to solve the problem that existing driving behavior monitoring systems exhibit insufficient recognition accuracy and low real-time detection performance in complex driving environments, this study proposes a dual-flow driver distraction detection model based on Sobel edge detection (DFSED-Model). The model is designed with a collaborative learning framework: the first flow adopts a lightweight pre-trained backbone network to achieve efficient semantic feature extraction. The second flow utilizes Sobel edge detection to extract the driver’s driving contours and enhances the model’s spatial sensitivity to driving movements and hand movements. Through the feature learning process of the first-flow-guided auxiliary branch, collaborative optimization of knowledge transfer and attention focusing is realized, thereby improving the model’s convergence speed and discriminative performance. The proposed model is evaluated on three widely used public datasets: the State Farm Distracted Driver Detection (SFD) dataset, the 100-Driver dataset, and the American University in Cairo Distracted Driver Dataset (AUCDD-V1). Under the premise of maintaining low computational overhead, the accuracy of the DFSED-Model reaches 99.87%, 99.86%, and 95.71%, respectively, which is significantly superior to that of many mainstream models. The results demonstrate that the proposed method achieves a favorable balance between accuracy, parameter count, and efficiency, and possesses strong practical value and deployment potential. Full article
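
The second flow's edge input uses the standard Sobel operators; this toy sketch (plain Python on a 4×4 synthetic image rather than a driver frame) shows the gradient-magnitude map that would feed the contour branch:

```python
# Sobel edge extraction on a tiny synthetic image: convolve with the
# horizontal/vertical Sobel kernels and take the gradient magnitude.
# A 4x4 list of lists stands in for the camera frame.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv3x3(img, k, y, x):
    return sum(img[y + i - 1][x + j - 1] * k[i][j]
               for i in range(3) for j in range(3))

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):          # borders left at zero
        for x in range(1, w - 1):
            gx = conv3x3(img, SOBEL_X, y, x)
            gy = conv3x3(img, SOBEL_Y, y, x)
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Vertical step edge: left half dark (0), right half bright (1).
img = [[0, 0, 1, 1]] * 4
mag = sobel_magnitude(img)
```

The strong responses line up along the intensity step, which is exactly the contour cue the DFSED-Model's second flow hands to the classifier.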
(This article belongs to the Special Issue Computer Vision Applications in Autonomous Vehicles)

27 pages, 3514 KB  
Article
ECAB-SegFormer: A Boundary-Aware and Efficient Channel Attention Network for Ulva prolifera Semantic Segmentation in Remote Sensing Imagery
by Yue Liang, Danyang Cao, Zice Ji, Hao Yang, Maohua Guo, Xiaoya Liu, Xutong Guo, Jiahao Wu, Yulong Song and Shanzhe Zhang
Sensors 2026, 26(7), 2166; https://doi.org/10.3390/s26072166 - 31 Mar 2026
Viewed by 205
Abstract
To achieve high-precision Ulva prolifera semantic segmentation from remote sensing imagery and address issues such as boundary fragmentation, contour dilation, and missed segmentation of scattered patches under complex marine backgrounds, this paper proposes an improved SegFormer-based network termed ECAB-SegFormer. The proposed method enhances near-infrared feature representation and boundary perception by embedding an Efficient Channel Attention (ECA) module into shallow features and introducing a boundary supervision branch. Experimental results on the HYU dataset demonstrate that the proposed method achieves consistent improvements over classical baseline models and further outperforms several representative modern strong segmentation baselines. Compared with advanced methods such as DeepLabV3+, Swin-Unet, and Gated-SCNN, the proposed model achieves maximum improvements of 2.77%, 5.80%, and 4.26 pixels in mIoU, BFScore, and Hausdorff Distance (HD), respectively, while also obtaining superior Precision and F1 scores. These results demonstrate significant advantages in both regional segmentation accuracy and boundary localization quality, validating the effectiveness, robustness, and practical potential of the proposed method for Ulva prolifera semantic segmentation in remote sensing applications. Full article
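
The ECA block itself is a published, standard component; a minimal sketch with placeholder (untrained) 1-D kernel weights is global average pooling per channel, a small 1-D convolution across the channel descriptors, and a sigmoid gate:

```python
import math

# Hedged sketch of Efficient Channel Attention (ECA): global-average-pool
# each channel, run a size-k 1-D convolution across the channel
# descriptors (no dimensionality reduction), apply a sigmoid, and rescale
# the channels. Kernel weights here are placeholders, not trained values.

def eca(feature_maps, kernel=(0.25, 0.5, 0.25)):
    # feature_maps: list of channels, each a 2-D list (H x W)
    gap = [sum(map(sum, ch)) / (len(ch) * len(ch[0])) for ch in feature_maps]
    pad = len(kernel) // 2
    padded = [gap[0]] * pad + gap + [gap[-1]] * pad   # replicate padding
    conv = [sum(kernel[j] * padded[i + j] for j in range(len(kernel)))
            for i in range(len(gap))]
    weights = [1.0 / (1.0 + math.exp(-c)) for c in conv]
    return [[[w * v for v in row] for row in ch]
            for ch, w in zip(feature_maps, weights)], weights

chans = [[[1.0, 1.0], [1.0, 1.0]],     # strongly activated channel
         [[0.0, 0.0], [0.0, 0.0]]]     # silent channel
scaled, w = eca(chans)
```

The gate gives the activated channel a larger weight than the silent one, which is how embedding ECA into shallow features can emphasize the near-infrared channels most informative for Ulva prolifera.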
(This article belongs to the Section Sensing and Imaging)

23 pages, 4933 KB  
Article
Research on Angle-Adaptive Look-Ahead Compensation Method for Five-Degree-of-Freedom Additive Manufacturing Based on Sech Attenuation Curve
by Xingguo Han, Wenquan Li, Shizheng Chen, Xuan Liu and Lixiu Cui
Micromachines 2026, 17(4), 423; https://doi.org/10.3390/mi17040423 - 30 Mar 2026
Viewed by 268
Abstract
To address over-extrusion and forming defects at path corners caused by path overlap in additive manufacturing, this paper proposes an angle-adaptive look-ahead compensation algorithm based on a Sech attenuation curve. This method establishes a mapping model between the path angle and the adaptive look-ahead distance of the overlapping area, aiming to eliminate the material accumulation at the corner by precisely identifying the overlapping area and modulating the flow rate. By building a Beckhoff five-axis 3D-printing device and relying on the TwinCAT control platform, the compensation triggering logic based on PLC real-time Euclidean distance calculation was realized, and slicing software with dynamic bias compensation was also developed. Experiments were conducted on triangular samples with extreme acute angles of 5°, common acute angles of 85°, and right angles of 90° for printing verification. The results show that this algorithm can effectively suppress material over-extrusion and accumulation at the path overlap across multiple angles, achieving a smooth transition of the sharp corners in the printed contour. The research confirms that the algorithm proposed in this study, together with the integrated software and hardware system, can ensure the forming accuracy of extreme and conventional geometric features while also maintaining printing efficiency, providing an important reference for quality coordination control of the formation process of extreme geometric features in additive manufacturing. Full article
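
The exact parameterization of the Sech attenuation curve is not given in the abstract; one plausible, purely illustrative form (all constants hypothetical) maps the path angle to a look-ahead distance that decays from a maximum at sharp corners toward a floor at wide angles:

```python
import math

# Hedged sketch: the paper maps path corner angle to an adaptive
# look-ahead distance via a sech attenuation curve, but the exact form
# is not given. This hypothetical mapping decays from d_max (sharp
# corners overlap heavily and need long look-ahead) toward d_min.

def sech(x):
    return 1.0 / math.cosh(x)

def lookahead_distance(angle_deg, d_min=0.5, d_max=5.0, k=0.02):
    """Look-ahead distance (mm) as a sech attenuation of path angle."""
    return d_min + (d_max - d_min) * sech(k * angle_deg)

d_sharp    = lookahead_distance(5.0)     # extreme acute corner
d_ordinary = lookahead_distance(85.0)    # common acute corner
d_right    = lookahead_distance(90.0)    # right angle
```

Whatever the paper's actual constants, the qualitative behavior tested below (monotonically shorter look-ahead as the corner opens up) is the property the angle-adaptive compensation relies on.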

17 pages, 2659 KB  
Article
Estimation of Fingertip Contact Angle from Tactile Pressure Contours
by Qianqian Tian, Jixiao Liu, Funing Hou and Shijie Guo
Appl. Sci. 2026, 16(7), 3172; https://doi.org/10.3390/app16073172 - 25 Mar 2026
Viewed by 288
Abstract
Tactile sensing is an important perceptual modality that enables robots to understand human contact behaviors. Estimating the fingertip contact angle based on tactile pressure distribution provides a simplified representation of the finger’s contact configuration and supports tactile-based perception in human–robot interaction. However, the relationship between tactile pressure distributions and fingertip contact configuration remains insufficiently understood. In this study, a simplified contact mechanics model was employed to investigate the relationship between tactile pressure characteristics and fingertip contact conditions. Theoretical analysis indicates that both the contact area and the contour dimensions of the pressure distribution are influenced by the contact angle and contact force, with varying sensitivities in different directions to these factors. Based on this theory, simplified finite element modeling of the fingertip and multi-subject experiments were conducted. The deformation behavior of the contact region under different contact angles and contact forces was analyzed. The experimental results were generally consistent with the theoretical analysis. Furthermore, contour descriptors were extracted from the tactile pressure distribution to establish a relationship model for estimating the fingertip contact angle, and the model’s accuracy was analyzed. The experimental results indicate that the extracted contour features exhibit systematic variations with contact angle, and the proposed method achieves a mean absolute error (MAE) of 2.73° and a root mean square error (RMSE) of 7.25°. These results demonstrate that tactile pressure contours provide an effective and computationally efficient cue for estimating fingertip contact configuration. This approach may help robots understand human behavior and has potential applications in human–robot interaction and robotic grasping. Full article
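
The relationship model is not specified beyond its error metrics; as a hedged sketch on synthetic numbers (not the paper's measurements, and with a hypothetical "contour elongation" descriptor), a one-descriptor linear fit with MAE/RMSE evaluation looks like this:

```python
import math

# Hedged sketch of the estimation step: fit a simple linear model from a
# contour descriptor (a hypothetical elongation ratio of the pressure
# contour) to the contact angle, then report MAE and RMSE. All data here
# are synthetic stand-ins for the paper's multi-subject measurements.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

elongation = [1.0, 1.2, 1.4, 1.6, 1.8]        # descriptor per trial
angle_deg  = [10.0, 21.0, 29.0, 41.0, 50.0]   # measured contact angles

a, b = fit_line(elongation, angle_deg)
pred = [a * x + b for x in elongation]
errs = [p - y for p, y in zip(pred, angle_deg)]
mae  = sum(abs(e) for e in errs) / len(errs)
rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
```

The paper's reported MAE (2.73°) being much smaller than its RMSE (7.25°) suggests a few large outlier errors; RMSE penalizes those quadratically, as the two formulas above make explicit.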

20 pages, 2605 KB  
Article
Spatial-Frequency Decoupling Alignment Encoding for Remote Sensing Change Detection
by Xu Zhang, Yue Du, Weiran Zhou and Kaihua Zhang
Sensors 2026, 26(6), 1979; https://doi.org/10.3390/s26061979 - 21 Mar 2026
Viewed by 425
Abstract
Existing remote sensing change detection methods often struggle to accurately capture the contours of complex change targets and subtle textural differences. This makes it difficult to effectively distinguish between the boundaries of change targets and the background. To address this challenge, we propose a novel method called spatial-frequency decoupling alignment encoding (SDA-Encoding), which is designed to fully leverage information from both the spatial and frequency domains. Specifically, we first use a Transformer encoder to extract bi-temporal features. Next, we apply wavelet transform to decouple these features into low-frequency and high-frequency components. In the multi-scale high-frequency interaction (MHI) module, we combine local spatial enhancement using spatial pyramid pooling with cross-scale dependency supplementation via the dual-domain alignment fusion (DAF) module. Meanwhile, in the position-aware low-frequency enhancement (PLE) module, spatial position sensitivity is restored using coordinate attention, and region-level contextual dependencies are captured through the selective fusion attention (SFA) module. Finally, the two frequency-domain branches are complementarily fused within the spatial domain to achieve unified detection of both fine-grained and structural changes. Experimental results on three benchmark datasets demonstrate the significant performance improvements of SDA-Encoding. Full article
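
The decoupling step relies on a standard wavelet transform; a single-level Haar transform, shown here in 1-D for brevity, splits features into the low-frequency (average) band the PLE module would handle and the high-frequency (detail) band the MHI module would handle:

```python
import math

# Single-level 1-D Haar wavelet transform: splits a feature sequence
# into a low-frequency band (pairwise averages) and a high-frequency
# band (pairwise differences), with perfect reconstruction. The paper
# applies the same idea to 2-D bi-temporal feature maps.

def haar_1d(signal):
    s = 1.0 / math.sqrt(2.0)
    low  = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    high = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return low, high

def inverse_haar_1d(low, high):
    s = 1.0 / math.sqrt(2.0)
    out = []
    for l, h in zip(low, high):
        out += [(l + h) * s, (l - h) * s]
    return out

feat = [4.0, 4.0, 4.0, 8.0]          # flat region, then an "edge"
low, high = haar_1d(feat)
recon = inverse_haar_1d(low, high)   # reconstructs feat exactly
```

Note the high band is zero over the flat region and nonzero only at the step: that is why the high-frequency branch is the natural place to sharpen change-target contours while the low band carries region-level context.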
(This article belongs to the Special Issue Image Processing and Analysis for Object Detection: 3rd Edition)

22 pages, 6052 KB  
Article
HSMD-YOLO: An Anti-Aliasing Feature-Enhanced Network for High-Speed Microbubble Detection
by Wenda Luo, Yongjie Li and Siguang Zong
Algorithms 2026, 19(3), 234; https://doi.org/10.3390/a19030234 - 20 Mar 2026
Viewed by 229
Abstract
Underwater micro-bubble detection entails multiple challenges, including diminutive target sizes, sparse pixel information, pronounced specular highlights and water scattering, indistinct bubble boundaries, and adhesion or overlap between instances. To address these issues, we propose HSMD-YOLO, an improved detector tailored for high-resolution micro-bubble detection and built upon YOLOv11. The model incorporates three novel components: the Scale Switch Block (SSB), a scale-transformation module that suppresses artifacts and background noise, thereby stabilizing edges in thin-walled bubble regions and enhancing sensitivity to geometric contours; the Global Local Refine Block (GLRB), which achieves efficient global relationship modeling with an asymptotic linear complexity (O(N)) in spatial dimensions while further refining local features, thereby strengthening boundary perception and improving bubble–background separability; and the Bidirectional Exponential Moving Attention Fusion (BEMAF), which accommodates the multi-scale nature of bubbles by employing a parallel multi-kernel architecture to extract spatial features across scales, coupled with a multi-stage EMA-based attention mechanism to enhance detection robustness under weak boundaries and complex backgrounds. Experiments conducted on the Side-Illuminated Light Field Bubble Database (SILB-DB) and a public gas–liquid two-phase flow dataset (GTFD) demonstrate that HSMD-YOLO achieves mAP@50 scores of 0.911 and 0.854, respectively, surpassing mainstream detection methods. Ablation studies indicate that SSB, GLRB, and BEMAF contribute performance gains of 1.3%, 2.0%, and 0.4%, respectively, thereby corroborating the effectiveness of each module for micro-scale object detection. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

20 pages, 3218 KB  
Article
MIP-YOLO11: An Underwater Object Detection Model Based on Improved YOLO11
by Xinyu Qu, Ying Shao, Zheng Wang and Man Chang
J. Mar. Sci. Eng. 2026, 14(6), 572; https://doi.org/10.3390/jmse14060572 - 19 Mar 2026
Viewed by 280
Abstract
Due to challenges such as inadequate lighting, water scattering, high density of small objects, and complex object morphology in underwater environments, traditional YOLO11 models face difficulties including interference from complex backgrounds, weak perception of small objects, and insufficient feature extraction when applied underwater. This paper proposes an improved MIP-YOLO11 model for underwater object detection based on the YOLO11 framework. First, an MCEA module is designed in the backbone network to replace the basic CBS convolution module. Through a lightweight multi-branch convolutional structure, the perception ability for small objects, object edges, contours, and morphological features in underwater scenes is enhanced without significantly increasing computational overhead. Second, an IMCA module based on the coordinate attention mechanism is introduced at the end of the backbone network to replace the C2PSA module, reducing the number of model parameters while maintaining detection accuracy. Finally, the Bottleneck module in C3k2 is improved by incorporating a PConv and a dual residual connection mechanism, thereby expanding the receptive field and enhancing the efficiency of complex feature extraction. Experimental results demonstrate that MIP-YOLO11 significantly outperforms the traditional YOLO11 in underwater environments. P and R are improved by 2.5% and 4.1%, respectively. Moreover, the mAP0.5 and mAP0.5:0.95 metrics are increased by 4.2% and 7.5%, respectively. The improved model achieves a good balance between high accuracy and light weight, and can provide a more reliable underwater object detection scheme for AUV underwater detection and other application scenarios. Full article
(This article belongs to the Section Ocean Engineering)

25 pages, 6368 KB  
Article
Comfort-Oriented Pothole Traversal Using Multi-Sensor Perception and Fuzzy Control
by Chaochun Yuan, Shiqi Hang, Youguo He, Jie Shen, Long Chen, Yingfeng Cai, Shuofeng Weng and Junxian Wang
Sensors 2026, 26(6), 1925; https://doi.org/10.3390/s26061925 - 19 Mar 2026
Viewed by 208
Abstract
Potholes are typical negative road obstacles that can significantly compromise vehicle safety and ride comfort when traversed at inappropriate speeds. To address this issue, this paper proposes a pothole-detection-based, comfort-oriented pothole traversal algorithm that integrates multi-sensor fusion perception, comfort-constrained speed planning, and fuzzy control. A camera and a single-point ranging LiDAR are first fused to extract key geometric features of potholes, including contour, area, and depth. Based on these features, a vehicle–pothole dynamic model is developed in ADAMS to quantify the influence of pothole area and depth on vehicle vertical vibration. The vertical frequency-weighted root-mean-square (RMS) acceleration is adopted as the ride comfort indicator, based on which the maximum allowable traversal speed under different pothole geometries is determined. Furthermore, a longitudinal pothole traversal control strategy based on fuzzy theory is designed to regulate vehicle acceleration, enabling the vehicle to reach the comfort-constrained limiting speed within a finite preview distance while ensuring braking safety. The proposed method is validated through multi-scenario co-simulations using MATLAB/Simulink and CarSim, as well as real-vehicle experiments. Results demonstrate that the proposed strategy can effectively adjust vehicle speed before pothole traversal, satisfying comfort constraints and improving ride comfort without sacrificing driving safety. Full article
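
The speed-planning step can be sketched as follows; the acceleration traces and the 0.315 m/s² comfort limit (the "not uncomfortable" boundary in ISO 2631-1) are illustrative stand-ins for the paper's ADAMS-derived values:

```python
import math

# Hedged sketch of comfort-constrained speed selection: compute the RMS
# of a (frequency-weighted) vertical acceleration trace per candidate
# speed and pick the fastest speed whose RMS stays under a comfort
# limit. Traces here are toy numbers, not the paper's simulation data.

def rms(samples):
    return math.sqrt(sum(a * a for a in samples) / len(samples))

def max_comfortable_speed(traces_by_speed, limit=0.315):
    best = None
    for speed in sorted(traces_by_speed):
        if rms(traces_by_speed[speed]) <= limit:
            best = speed
    return best

traces = {                       # speed (km/h) -> accel samples (m/s^2)
    10: [0.10, -0.12, 0.08, -0.09],
    20: [0.25, -0.30, 0.28, -0.22],
    30: [0.55, -0.60, 0.50, -0.52],
}
v_max = max_comfortable_speed(traces)   # fastest speed under the limit
```

In the paper, the trace for each speed additionally depends on pothole area and depth from the fused camera/LiDAR features, and the fuzzy controller then regulates deceleration down to this limiting speed within the preview distance.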
(This article belongs to the Section Vehicular Sensing)

21 pages, 6855 KB  
Article
Hierarchical Multi-Scale Feature Fusion Network with Implicit Neural Representation and Mamba for Cross-Modality MRI Synthesis
by Zhihao Luo and Jun Lyu
Sensors 2026, 26(6), 1901; https://doi.org/10.3390/s26061901 - 18 Mar 2026
Viewed by 284
Abstract
Magnetic resonance imaging (MRI), a widely adopted modality in clinical practice, enables the acquisition of multi-contrast images from the same anatomical structure, commonly referred to as multimodal images. Integrating these diverse modalities is crucial for enhancing model performance across a variety of medical image analysis tasks. However, in real-world clinical scenarios, it is often impractical to acquire all MRI modalities simultaneously due to factors such as patient discomfort, time constraints, and scanning costs. As a result, synthesizing missing modalities from available ones has emerged as an effective solution. To address these challenges, we propose HMF-MambaINR, a hierarchical multi-scale feature fusion network for cross-modality MRI synthesis. The model integrates Mamba-based Selective State Space Modeling (SSM) and implicit neural representation (INR) to capture long-range dependencies and enable continuous spatial reconstruction. A Multi-Feature Extraction Block (MFEB) captures local and global representations via multi-scale receptive fields, while a Modulation Fusion Module (MFM) adaptively fuses multi-modal features with dynamic weighting. Extensive experiments show that HMF-MambaINR surpasses state-of-the-art CNN-, Transformer-, and Mamba-based methods in synthesizing missing MRI modalities. Notably, the synthesized MRI images received positive feedback from radiologists in terms of image quality, contrast, and structural contour accuracy, highlighting the potential of the proposed method as a practical tool for clinical applications. Full article
(This article belongs to the Special Issue Medical Imaging and Sensing Technologies)

16 pages, 23439 KB  
Case Report
Transmission Electron Microscopy Corneal Ultrastructure Study in Hematocornea of Corneal Transplant Graft
by Paul Filip Curcă, Laura Macovei, Ovidiu Mușat, Mihail Zemba, Valentin Dinu, Mihaela Gherghiceanu, Cătălina Ioana Tătaru and Călin Petru Tătaru
Diagnostics 2026, 16(6), 890; https://doi.org/10.3390/diagnostics16060890 - 17 Mar 2026
Viewed by 313
Abstract
Background and Clinical Significance: To our knowledge, there is a lack of electron microscopy studies in hematocornea since 1985, and more so for graft hematocornea after deep anterior lamellar keratoplasty (DALK). This study provides an ultrastructural characterization of hematocornea occurring in a DALK graft. Our study presents several limitations: single-case design and lack of control tissue. Case Presentation: The DALK graft with hematocornea was excised and, inside the operating room, placed into a container of glutaraldehyde solution. The graft was quickly cold-transported for light and transmission electron microscopy. Hematocornea in the DALK transplant graft resulted in features of stromal alteration and a dysfunctional cellular clean-up response. The collagen lamellae ultrastructure was affected near electron-dense hem deposits. Two cellular aspects were observed: adaptation and degeneration. Electron-dense granules were found in keratocytes, which may exhibit cellular adaptations, such as vacuoles and phagosomes. Macropinocytosis may mechanistically explain ingestion of electron-dense granules, and dysfunctions in the macropinocytosis process may have led to cell degeneration. Cellular degeneration was marked by loss of organelle contour and loss of cellular membrane integrity (burst-cell aspect). Microscopic corneal alteration corresponded to macroscopic total loss of corneal transparency and elasticity. Conclusions: This study described lamellar ultrastructure alterations and dysfunctional cellular response in hematocornea of a DALK corneal transplant graft. Full article
(This article belongs to the Special Issue Diagnostic Imaging in Ocular Surface)

15 pages, 3813 KB  
Article
Real-Time Detection of Small Liquid Drip in Pipeline in Complex Industrial Scenes Based on Machine Vision
by Jingcan Zeng and Biao Cai
Appl. Sci. 2026, 16(6), 2823; https://doi.org/10.3390/app16062823 - 15 Mar 2026
Viewed by 221
Abstract
Pipeline leakage can lead to catastrophic consequences, and traditional sensor-based detection methods often struggle to identify changes caused by slow or minor leaks. This paper proposes a real-time machine vision-based method for detecting liquid leakage in pipelines, suitable for complex industrial scenarios. By extracting droplet foreground regions and constructing a detection model based on the contour and motion features of droplets, the proposed method effectively filters out interference from lighting variations, equipment vibrations, and personnel movement in industrial environments, while accurately identifying the vertical motion characteristics of dripping liquids. An experimental platform was established to validate the effectiveness of the proposed approach. The results demonstrate that the proposed method achieves a detection rate of 98.04%, a false alarm rate of 5.26%, and a processing speed of 90.71 fps. Comparative experiments show that this method significantly outperforms traditional approaches, such as the dense optical flow method, which yields a higher false alarm rate and a processing speed of only 2.2 fps under the same test conditions. These findings confirm that our approach offers a more accurate and efficient solution for real-time pipeline liquid leakage detection. Full article
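The drip decision the abstract describes — extract a foreground region, track it, and accept only predominantly vertical downward motion — can be sketched in a few lines. This is a minimal NumPy-only illustration, not the authors' pipeline: simple frame differencing stands in for their foreground extractor, and `foreground_mask`, `centroid`, and `is_vertical_drip` are illustrative names.

```python
import numpy as np

def foreground_mask(frame, background, thresh=25):
    """Binary foreground mask via absolute frame differencing."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh

def centroid(mask):
    """Centroid (row, col) of the foreground pixels, or None if empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.mean(), xs.mean()

def is_vertical_drip(centroids, min_ratio=3.0):
    """Accept a centroid track as a drip only if its net downward motion
    dominates horizontal motion by at least min_ratio (rows grow downward),
    which rejects laterally moving interference such as passing personnel."""
    pts = np.array(centroids, dtype=float)
    dy = pts[-1, 0] - pts[0, 0]        # net vertical displacement (down is +)
    dx = abs(pts[-1, 1] - pts[0, 1])   # net horizontal displacement
    return dy > 0 and dy >= min_ratio * max(dx, 1e-6)
```

In the actual system, the foreground regions and contour features would come from the vision pipeline described in the paper; the point here is only the final decision on a centroid track.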
(This article belongs to the Section Applied Industrial Technologies)

21 pages, 4869 KB  
Article
Integrating Computer Vision and GIS for Large-Scale Morphological Mapping and Driving Force Analysis of Vernacular Courtyard Dwellings
by Lihua Liang, Xianda Li, Shutong Liu, Zhenhao Guo, Shuo Tang and Baohua Wen
Buildings 2026, 16(6), 1118; https://doi.org/10.3390/buildings16061118 - 11 Mar 2026
Cited by 1 | Viewed by 267
Abstract
This study develops and applies an integrated methodology that combines deep learning-based computer vision and spatial statistics to automate the large-scale identification and analysis of morphological features in vernacular courtyard dwellings. Focusing on Liangshuaixiu dwellings in Wu’an, southern Hebei, we trained an HRNetV2 semantic segmentation model on high-resolution satellite imagery to identify and extract contours for 134,280 courtyard spaces. Core morphological parameters (area, orientation) were calculated and analyzed using GIS spatial statistics and the geographic detector model. The results show that (1) the computer vision pipeline achieved efficient recognition with satisfactory accuracy (~10% mean error); (2) spatial autocorrelation and hotspot analysis revealed distinct regional patterns, including a west–east increase in average courtyard area; and (3) geographic detector analysis demonstrated that courtyard morphology is shaped by complex interactions between natural and socio-economic factors. While average area and orientation were primarily governed by climate (air pressure, wind, temperature) and topography (elevation), diversity and internal variation were strongly influenced by nonlinear interactions, particularly between natural factors (e.g., wind–aspect) and between natural and human factors (e.g., population–climate). This work provides a scalable, data-driven framework for the quantitative spatial analysis of vernacular architectural heritage, advancing the understanding of building morphology as an outcome of coupled human–environment systems. Full article
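The geographic detector step above rests on the standard q-statistic, which measures the share of the variance of a morphological indicator (e.g., courtyard area) explained by stratifying on a factor (e.g., elevation class). A minimal sketch under that standard definition — the function name is illustrative and this is not the authors' implementation:

```python
import numpy as np

def geodetector_q(values, strata):
    """Geographical detector q-statistic:
    q = 1 - sum_h(N_h * var_h) / (N * var),
    i.e., one minus the ratio of within-stratum variance to total variance.
    q = 1 means the factor fully explains the indicator; q = 0 means none."""
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    total = values.size * values.var()          # N * overall variance
    within = 0.0
    for s in np.unique(strata):
        grp = values[strata == s]
        within += grp.size * grp.var()          # N_h * within-stratum variance
    return 1.0 - within / total
```

Interaction detection, as used for the wind–aspect and population–climate pairs, compares q of the cross-classified strata of two factors against each factor's individual q.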
(This article belongs to the Special Issue Artificial Intelligence in Architecture and Interior Design)
