Search Results (483)

Search Parameters:
Keywords = rotational object detection

27 pages, 10703 KB  
Article
WE-KAN: SAR Image Rotated Object Detection Method Based on Wavelet Domain Feature Enhancement and KAN Prediction Head
by Mingchun Li, Yang Liu, Qiang Wang and Dali Chen
Sensors 2026, 26(7), 2011; https://doi.org/10.3390/s26072011 - 24 Mar 2026
Abstract
Synthetic aperture radar (SAR) imagery plays a vital role in critical applications such as military reconnaissance and disaster monitoring. These applications require high detection accuracy. Therefore, rotated object detection has gained increasing attention. By predicting an object's orientation angle, it offers advantages over horizontal bounding boxes, especially for elongated structures such as ships and bridges in SAR scenes. However, challenges such as speckle noise and complex backgrounds in SAR imagery still hinder high-precision detection. To address this, we propose WE-KAN, a novel rotated object detection framework based on wavelet features and Kolmogorov–Arnold network (KAN) prediction. First, we enhance the backbone by incorporating wavelet domain features from SAR grayscale images. The extracted wavelet domain features and image features are fused by a proposed attention module. Second, considering the sensitivity to angle prediction, we design an angle predictor based on KAN. This architecture provides a powerful and dedicated solution for accurate angle regression. Finally, for precise rotated bounding box regression, we employ a joint loss function combining a rotated intersection over union (RIoU) with a Gaussian distance loss function. These designs improve the model’s robustness to noise and its perception of fine object structures. When evaluated on the large-scale public RSAR dataset, our method achieves an AP50 of 70.1 and a mAP of 35.9 under the same training schedule and backbone network, significantly outperforming existing baselines. This demonstrates the effectiveness and robustness of our method for dense, small, and highly oriented objects in complex SAR scenes.
(This article belongs to the Section Sensing and Imaging)

17 pages, 7636 KB  
Article
Deformable 1D Directional Convolution with Bidirectional Offsets for Oriented Object Detection
by Ying Li, Xuemei Li and Caiming Zhang
Remote Sens. 2026, 18(6), 934; https://doi.org/10.3390/rs18060934 - 19 Mar 2026
Viewed by 20
Abstract
Oriented object detection is an important and challenging task in the field of image processing and computer vision. The main challenge in detecting oriented objects comes from their high aspect ratio and being distributed with arbitrary orientations. Various methods have been developed to handle this issue. However, most existing works rely on time-consuming rotation and interpolation operations to align the feature representations of oriented objects. To avoid these operations, in this paper, we first introduce a simple yet effective deformable 1D directional convolution (D1DD-Conv), which implements a rotated convolution by deforming the 1D convolution kernel with horizontal and vertical offsets. Based upon this directional convolution, we then design a tri-branch convolution layer and integrate D1DD-Conv into the feature pyramid network for extracting the directional features of objects. Furthermore, we present a deep model to deal with the oriented object detection task. By allowing the offsets only along the horizontal and vertical directions, D1DD-Conv essentially corresponds to a rotated 1D convolution but without any rotation operations. This simple design is beneficial for efficiently capturing the orientation features of different oriented objects, leading to accurate prediction of the oriented bounding box of each oriented object. Experiments on three popular datasets show that our model can achieve superior detection performance.
(This article belongs to the Section Remote Sensing Image Processing)
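To make the 1D directional convolution idea concrete, here is a tiny numpy sketch of plain horizontal and vertical 1D convolution branches combined in a tri-branch layer. The deformable part of D1DD-Conv (the learned bidirectional offsets) is deliberately omitted; function names and the identity third branch are illustrative assumptions, not the paper's design.

```python
import numpy as np

def directional_conv1d(img, kernel, axis):
    """Apply a 1D kernel along one spatial axis (0 = vertical, 1 = horizontal)."""
    return np.apply_along_axis(
        lambda line: np.convolve(line, kernel, mode="same"), axis, img)

def tri_branch(img, kernel):
    """Sum of horizontal, vertical, and pointwise (identity) branches."""
    return (directional_conv1d(img, kernel, axis=1)
            + directional_conv1d(img, kernel, axis=0)
            + img)
```

Because each branch is a 1D pass, the cost grows linearly in kernel length, which is the efficiency argument the abstract makes against full rotation-and-interpolation alignment.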

10 pages, 279 KB  
Article
Determining the Level to Affect of Physical Findings and Outcome Measures on Functional Status in Partial-Thickness Rotator Cuff Tears Using a Multiple Linear Regression Model
by Ezgi Türkmen, İpek Yeldan, Nezih Ziroğlu and Süleyman Altun
Medicina 2026, 62(3), 574; https://doi.org/10.3390/medicina62030574 - 19 Mar 2026
Viewed by 30
Abstract
Background and Objectives: It is crucial to determine physical findings and outcome measures that affect functional status of the patients, and the impact levels of these parameters on patients. Therefore, the aim of this study was to investigate the determinant and predictive effect of pain levels, shoulder range of motion (ROM) values, disability and health-related quality of life factors on functional status in individuals with partial-thickness rotator cuff tears (PRCT). Materials and Methods: Firstly, the functional status of 45 patients (mean age: 50.78 ± 5.28 years; 29 female) with PRCT, then activity and night pain levels with Numeric Pain Rating Scale, active flexion, abduction and external rotation of the shoulder ROM values with goniometer, disability level with Quick Disabilities of Arm, Shoulder & Hand Questionnaire, and health-related quality of life levels with Short Form-12 were evaluated and recorded. Results: It was detected that all determinants whose effect on functionality was evaluated with a multiple regression model explained 76% of the variance, and this effect level was statistically significant (R square = 0.760, adjusted R square = 0.707, F = 14.272, p < 0.001). Detailed evaluation showed that flexion and external rotation ROM values (respectively; β = 0.54, p < 0.001; β = 0.38, p = 0.001) and disability level (β = 0.44, p < 0.001) had statistically significant determinant effects on functional status. No statistically significant results which could be correlated with functional status were found for activity and night pain, abduction ROM value, and health-related quality of life domains (p > 0.05). Conclusions: Shoulder flexion and external rotation ROMs and disability level were found to have a predictive effect on the functional status in individuals with PRCT. 
It is noteworthy that more subjective and patient-reported findings and outcome measures such as pain and health-related quality of life had no predictive effect on functionality. By determining the level of these effects, results were reached that can shed light on the literature by guiding the development of reliable assessment algorithms.
(This article belongs to the Section Orthopedics)
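The study above reports a multiple linear regression with R² and adjusted R². A minimal numpy sketch of that computation (ordinary least squares with an intercept) is shown below; the function name and synthetic data are illustrative, not the study's analysis code.

```python
import numpy as np

def fit_multiple_regression(X, y):
    """Ordinary least squares with intercept; returns coefficients, R^2, adjusted R^2."""
    n, p = X.shape
    A = np.column_stack([np.ones(n), X])          # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    return beta, r2, adj_r2
```

The adjusted R² term penalizes the plain R² by the ratio (n−1)/(n−p−1), which is why the paper's adjusted value (0.707) is lower than its raw value (0.760).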
27 pages, 15300 KB  
Article
Axial X-Ray Microscopy in Nanotomography
by Konstantin P. Gaikovich, Ilya V. Malyshev, Dmitry G. Reunov and Nikolay I. Chkhalo
Tomography 2026, 12(3), 41; https://doi.org/10.3390/tomography12030041 - 18 Mar 2026
Viewed by 70
Abstract
Background/Objectives: This article develops theory and methods for 3D tomographic imaging of absorption coefficient distributions using axial scanning with EUV microscopes at 46× and 345× magnification. Unlike conventional CT that requires sample rotation, axial scanning moves cells through the microscope focus. The aim is tomographic reconstruction of living cell fine structure without the organelle staining used in optical fluorescence microscopy or ultra-thin cell slicing as in electron microscopy. Methods: By generalizing the geometric-optical approximation for small absorption coefficient inhomogeneities in absorbing media, we derived a new explicit tomography equation and solution algorithm validated through numerical simulation. The approach was applied to Convallaria cell analysis using the 46× microscope. For the 345× microscope, we developed an alternative method where the kernel of the tomography integral equation was determined experimentally using gold nanospheres with known absorption coefficient, shape, and position. This method was tested through modeling and applied to diagnostics of Convallaria and mouse cerebellar granule cells. Results: The developed methods resolve subcellular features down to 140 nm using the 46× microscope and 50 nm using the 345× microscope. Thin low-contrast intracellular structures and individual 50–100 nm organelles were detected. Conclusions: Methods for retrieving absorption coefficient distributions in cone-beam geometry based on geometric-optical theory generalization and on calibration by gold nanoparticles have been developed and validated through numerical simulation and cell analysis. These methods demonstrate for the first time the effectiveness of axial nanotomography using multilayer mirror microscopes for cell diagnostics.

12 pages, 3478 KB  
Case Report
Diagnosis and Treatment of Ectopic Pregnancy in a Cesarean Section Scar—Case Report
by Polina V. Kulabukhova, Tatyana V. Fokina, Maria N. Babaeva, Aleksandra V. Asaturova and Natalia V. Nizyaeva
J. Clin. Med. 2026, 15(6), 2302; https://doi.org/10.3390/jcm15062302 - 17 Mar 2026
Viewed by 196
Abstract
Background/Objectives: Post-cesarean section scar niche pregnancy is one of the rarest forms. It is characterized by implantation of the gestation sac within the scar niche and is often associated with chorionic villi adhesion into the thinned cesarean section scar. The increasing incidence of this condition is associated with the increasing frequency of cesarean sections and the widespread use of ultrasound in early pregnancy. The most significant clinical findings are the detection of chorionic villus invasion and uterine wall insufficiency, which may be detected using magnetic resonance imaging, including contrast, and are crucial for determining patient management. This pathology may be considered life-threatening due to complications such as early uterine rupture with bleeding, which, if not diagnosed promptly, can lead to hysterectomy and loss of the woman’s reproductive health. Early diagnosis allows for the use of conservative treatment methods, preserving the uterus. The aim of the study is to clarify the clinical practices to follow in cases where an MRI examination with contrast agent is indicated to be performed on a pregnant patient. Methods: Ultrasound and MRI examination with contrast, as well as histological and immunohistochemical examination of the remnants of the gestational sac were performed. Results: A 36-year-old pregnant woman was hospitalized in her eighth week of pregnancy with complaints of vaginal bleeding and persistent abdominal pain. An ultrasound scan revealed a pregnancy of 8 weeks and 5 days, and a low-lying chorion in the isthmus of the uterus, along with thinning of the cesarean scar and the formation of a scar niche resembling a hernia. Early signs of chorionic invasion were not treated. An MRI revealed signs of superficial chorionic adhesion to the cesarean scar, both to the isthmus and the internal os.
Given that the woman did not wish to continue the pregnancy, uterine artery embolization was performed to reduce potential blood loss. Subsequently, laparoscopy, adhesiolysis, vacuum aspiration of the gestational sac, uterine curettage, hysteroresectoscopy, and coagulation of the fetal bed were performed. Histological and immunohistochemical examination revealed signs of inflammation in the area of the suspected lesion. Conclusions: This case report shows the potential value of MRI in complex cases of ultrasound detection of a gestational sac within scar tissue. MRI was used to assess the location of the gestational sac and evaluate the thickness of the cesarean scar to detect its dysfunction. Furthermore, contrast enhancement of the MRI may be useful in the most complex cases but requires an informed consent discussion with the patient. However, the latter issue requires discussion and proof of its safety for the fetus.
(This article belongs to the Section Nuclear Medicine & Radiology)

11 pages, 3758 KB  
Article
Does Resident Rotation Affect the Learning Curve of Active Robotic TKA? A Study of Surgical Efficiency and Radiographic Precision
by Yong-Beom Park, Jin-Woong Jeon, Seong Hwan Kim and Han-Jun Lee
Medicina 2026, 62(3), 533; https://doi.org/10.3390/medicina62030533 - 13 Mar 2026
Viewed by 157
Abstract
Background and Objectives: Learning curves for robotic arm-assisted total knee arthroplasty (TKA) are well-documented for semi-active systems, but evidence for advanced fully active robotic systems remains scarce. This study aimed to characterize the learning curve for operative time, implant positioning, and lower-limb alignment using a fully active robotic TKA system, specifically accounting for the impact of rotating resident involvement in a tertiary center. Materials and Methods: Sixty consecutive primary TKAs were performed using the advanced active robotic system (CUVIS-Joint®). The learning curve for operative time was evaluated using cumulative summation (CUSUM) analysis. To identify independent predictors of surgical duration and radiographic precision, a multivariate linear regression model was constructed, including case number, implant type, and resident rotation period as variables. Results: CUSUM analysis identified a statistically significant inflection point at the 39th case. Beyond this point, mean operative time decreased approximately 20 min (133.3 ± 13.5 vs. 113.8 ± 7.9 min, p < 0.001). Multivariate regression confirmed that case number was the sole independent predictor of operative time (p < 0.001). Notably, implant positioning and lower-limb alignment showed no detectable difference across the sequential cases (p > 0.05), maintaining high precision from the outset. Conclusions: Active robotic TKA demonstrated a learning curve for operative time that stabilized after 39 cases within a clinical setting of rotational resident participation. Radiographic accuracy remained consistent despite these educational requirements, supporting the technical feasibility and reliability of this advanced system for the management of end-stage knee osteoarthritis.
(This article belongs to the Special Issue Recent Advances and Future Prospects in Knee Surgery)
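The CUSUM learning-curve analysis used above can be sketched in a few lines of numpy: accumulate deviations of each case's operative time from the overall mean, and read the inflection point off the peak of the curve. This is a simplified, unadjusted version; published CUSUM analyses often add significance boundaries or risk adjustment, and the function name is an assumption.

```python
import numpy as np

def cusum_learning_curve(times):
    """Cumulative sum of deviations from the mean operative time.
    The peak of the curve marks the inflection (learning) point."""
    times = np.asarray(times, dtype=float)
    cusum = np.cumsum(times - times.mean())
    return cusum, int(np.argmax(cusum)) + 1   # 1-based case number
```

For a surgeon whose times drop from a slow plateau to a fast one, the curve rises while times are above average and falls afterward, so the peak sits exactly at the last slow case.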

20 pages, 21647 KB  
Article
Spatial Orthogonal and Boundary-Aware Network for Rotated and Elongated-Target Detection
by Yong Liu, Zhengbiao Jing, Yinghong Chang and Donglin Jing
Algorithms 2026, 19(3), 206; https://doi.org/10.3390/a19030206 - 9 Mar 2026
Viewed by 155
Abstract
In recent years, the refinement of bounding box representations has emerged as a major research focus in remote sensing. Nevertheless, mainstream detection algorithms typically ignore the disruptive impacts induced by the diverse morphologies and arbitrary orientations of high-aspect-ratio aerial objects throughout model training, thereby giving rise to several critical technical challenges: (1) Anisotropic information distribution: Target features are highly concentrated in one spatial dimension but sparse in the other, with significant feature differences across bounding box parameters, breaking the symmetry of feature distribution. (2) Missing high-quality positive samples: IoU-based assignment strategies fail to adequately capture the symmetric structural characteristics of elongated targets, resulting in incomplete coverage of critical features. (3) Loss function gradient instability: Small deviations in large-aspect-ratio bounding boxes cause drastic loss value fluctuations, as the asymmetric gradient changes hinder stable optimization directions during training. To address the challenges, we propose a Spatial Orthogonal and Boundary-Aware Network (SOBA-Net) for rotated and elongated target detection, leveraging symmetry-aware designs to enhance feature representation. Specifically, spatial staggered convolutions are constructed to fuse local and directional contextual features, effectively modeling long-range symmetric information across multiple spatial scales and reducing background noise interference. Secondly, the designed Symmetric-Constrained Label Assignment (SC-LA) introduces an IoU-weighted function, ensuring high-quality samples with symmetric structural features are classified as positive samples. Ultimately, the designed Gradient Dynamic Equilibrium Loss Function mitigates the problem of unstable gradients associated with high-aspect-ratio objects by enforcing symmetrical gradient regulation across samples with negligible localization deviations. 
Comprehensive evaluations across three representative remote sensing benchmarks—DOTA, UCAS-AOD, and HRSC2016—sufficiently corroborate the superiority of symmetry-aware enhancement schemes, which boast straightforward implementation and efficient inference deployment.
(This article belongs to the Special Issue Advances in Deep Learning-Based Data Analysis)
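Challenge (3) above — that small localization deviations cost high-aspect-ratio boxes disproportionate loss — is easy to demonstrate numerically. The sketch below compares axis-aligned IoU (a stand-in for the rotated case, used here only to illustrate the aspect-ratio effect) for a square box and an elongated box under the same 0.5-pixel center shift; names and numbers are illustrative.

```python
import numpy as np

def iou_axis_aligned(b1, b2):
    """IoU of two axis-aligned boxes given as (cx, cy, w, h)."""
    def corners(b):
        cx, cy, w, h = b
        return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
    x1a, y1a, x2a, y2a = corners(b1)
    x1b, y1b, x2b, y2b = corners(b2)
    iw = max(0.0, min(x2a, x2b) - max(x1a, x1b))
    ih = max(0.0, min(y2a, y2b) - max(y1a, y1b))
    inter = iw * ih
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    return inter / union

# The same 0.5-pixel shift costs far more IoU for a 100x1 box than a 10x10 box.
square = iou_axis_aligned((0, 0, 10, 10), (0, 0.5, 10, 10))
elongated = iou_axis_aligned((0, 0, 100, 1), (0, 0.5, 100, 1))
```

Here the square box keeps an IoU of about 0.90 while the elongated box drops to about 0.33, which is exactly the gradient-instability regime the SC-LA assignment and the equilibrium loss are designed to handle.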

10 pages, 2909 KB  
Proceeding Paper
Sea Turtle Recognition with Multiple Data Augmentation Methods Suitable for Marine Scenarios
by Yi-Chieh Hung, Jhih-Ya Chan, Wei-Cheng Lien, Yan-Tsung Peng and Li-Shu Chen
Eng. Proc. 2026, 128(1), 11; https://doi.org/10.3390/engproc2026128011 - 9 Mar 2026
Viewed by 208
Abstract
The sea turtle is an indicator organism used in marine conservation to identify the health status of ecosystems in various marine regions. In the past, researchers had to review an 8 h underwater video every day to monitor and count sea turtle appearances. However, since sea turtles often appear for only short periods, traditional approaches of manual searching and counting require significant labor and time to ensure accurate periods of their appearance. To address this issue, we adopted the You Only Look Once (YOLO) model for object detection, utilizing real underwater videos captured from three different areas in the Taiwan Keelung City Chaojing Bay Aquatic Plants and Animals Conservation Area for training and testing. To overcome limitations, such as underwater blur, sediment interference, obstructions from other fish, and distant targets that are challenging to identify, we applied data augmentation techniques, including scaling, rotation, and depth blur, with labeled data of different fish species to improve generalization capability. The experimental results of this study showed that this method achieves a 99.4% accuracy in sea turtle detection. After 60 days of deployment across the three areas, the model reduced search time by over 99%, significantly improving efficiency and reducing workload.
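As a toy illustration of the geometric augmentations mentioned above, here is a minimal numpy sketch covering only right-angle rotations and flips; the paper's pipeline (arbitrary-angle rotation, scaling, depth blur) is richer, and the function name and fixed seed are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Randomly rotate by a multiple of 90 degrees and randomly flip.
    A cheap stand-in for the rotation/scaling/blur pipeline described above."""
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return img
```

These transforms only rearrange pixels, so the augmented image has the same shape and the same multiset of values as the input.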

22 pages, 19634 KB  
Article
SGFNet: Semantic-Guided Fusion Network with Closed-Loop Feedback for RGB-Infrared Oriented Object Detection
by Liang Zhang, Yueqiu Jiang, Wei Yang and Bo Liu
Electronics 2026, 15(5), 1003; https://doi.org/10.3390/electronics15051003 - 28 Feb 2026
Viewed by 250
Abstract
In oriented object detection from drone imagery, many existing RGB-infrared (RGB-IR) fusion methods derive modality weights from input statistics alone, without regard for downstream detection objectives. We present SGFNet, a Semantic-Guided Fusion Network that feeds detection-level semantics back into the fusion stage through learned importance masks. SGFNet comprises three modules: (1) a Frequency-aware Disentanglement Module (FDM) that separates high-frequency textures from low-frequency thermal structures through Laplacian and Gaussian filtering; (2) a Semantic-Guided Module (SGM) that generates P5-level semantic masks to steer fusion toward detection-critical regions; and (3) an Adaptive Geometric Convolution (AGC) whose rotation-aware sampling matches receptive fields to arbitrarily oriented objects. On the DroneVehicle benchmark (28,439 RGB-IR pairs, five vehicle categories), SGFNet achieves 82.0% mAP@0.5, surpassing the runner-up DMM by 3.2 percentage points while lowering mean angular error from 7.4° to 6.2° (−16%). Ablation analysis attributes the largest single-module gain (+1.7 pp) to the semantic feedback path.
(This article belongs to the Section Artificial Intelligence)
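The frequency-aware disentanglement described in module (1) can be sketched as a low-pass/high-pass split: blur the image for the low-frequency structure and keep the residual as the high-frequency detail. The sketch below substitutes a separable box blur for the paper's Gaussian filtering (a crude but structurally similar stand-in); function names are assumptions.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable uniform blur -- a crude stand-in for Gaussian low-pass filtering."""
    kern = np.ones(k) / k
    smooth = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, smooth)

def disentangle(img):
    """Split an image into low-frequency structure and high-frequency detail."""
    low = box_blur(img.astype(float))
    high = img - low          # Laplacian-style residual
    return low, high
```

By construction the two bands sum back to the original image, so the split loses no information and each band can be fused with its own modality weight.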

14 pages, 1772 KB  
Article
Accuracy of Deep Learning-Driven MR Arthrography of the Shoulder: Compressed 3D in Comparison to Standard FSE Sequences
by Gianluca Tripodi, Flavio Spoto, Giuseppe Ocello, Leonardo Monterubbiano, Paolo Avanzi and Giovanni Foti
Osteology 2026, 6(1), 4; https://doi.org/10.3390/osteology6010004 - 27 Feb 2026
Viewed by 239
Abstract
Background/Objectives: Magnetic resonance arthrography is the reference standard for evaluating glenoid labral lesions. Deep learning (DL) reconstruction algorithms may accelerate 3D acquisitions while maintaining image quality. This study assesses the diagnostic accuracy of DL-based isotropic 3D MR imaging for detecting glenoid labral lesions. Methods: This prospective study included 128 consecutive patients (79 men, 49 women; mean age 38.4 years) undergoing shoulder MR arthrography between June 2023 and April 2025. DL-based 3D sequences (acquisition time: 3:26) were compared with conventional multiplanar TSE and PD-FS sequences (acquisition time: 24–28 min). Two independent radiologists assessed glenoid labral lesions, bone marrow edema, and rotator cuff abnormalities using a four-point Likert scale. Sensitivity, specificity, and interobserver agreement were calculated. Results: DL-based 3D sequences demonstrated 94.7–95.1% sensitivity and 100% specificity for glenoid labral lesions, with excellent interobserver agreement (κ = 0.812). The area under the ROC curve was 0.894. Combined 3D protocols (T1 + PD-FS) showed superior accuracy (97.8%) compared to single sequences (90.5%, p = 0.012). For bone marrow edema, sensitivity was 82.9% with 100% specificity. Rotator cuff evaluation achieved 75% sensitivity with 100% specificity. Conclusions: DL-based isotropic 3D sequences provide high diagnostic accuracy for glenoid labral pathology while reducing scan time by 75%. Combined T1 and PD-FS protocols optimize performance. These findings support selective implementation of DL-accelerated 3D protocols in shoulder MR arthrography, particularly for labral assessment, while acknowledging that conventional protocols may remain preferable in specific clinical scenarios. Full article
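The sensitivity, specificity, and κ (Cohen's kappa) figures reported above all come from a 2×2 confusion table. A minimal sketch of those computations, with an illustrative table (the counts below are made up, not the study's data):

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, and Cohen's kappa from a 2x2 confusion table."""
    n = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    po = (tp + tn) / n                     # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sensitivity, specificity, kappa
```

For example, 40 true positives, 10 false negatives, 0 false positives, and 50 true negatives give 80% sensitivity, 100% specificity, and κ = 0.8, in the "excellent agreement" band cited for the study's κ = 0.812.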

19 pages, 5229 KB  
Article
Automated Metrics for the Diagnosis of Instability Between the 2nd and 7th Cervical Vertebrae
by John Hipp, Charles Reitman, Christopher Chaput, Mathew Gornet and Trevor Grieco
Bioengineering 2026, 13(3), 258; https://doi.org/10.3390/bioengineering13030258 - 24 Feb 2026
Viewed by 423
Abstract
Diagnosing cervical spine instability with flexion-extension radiographs is challenging, as current guidelines are based on limited cadaver studies and do not adequately account for level, vertebral size, or patient effort. There is a need for automated cervical instability metrics anchored to normative reference data, accompanied by evidence on how often abnormal findings occur in real clinical populations and which soft-tissue injury patterns they can detect. We developed and evaluated fully automated, radiographic-based cervical intervertebral motion (IVM) metrics—adapted from prior lumbar methods—using an FDA-cleared analysis pipeline that segments C2–C7 and derives rotation, translation, disc heights, and regression-based instability indices. Normative reference data were first established from flexion-extension radiographs of 341 asymptomatic volunteers after excluding radiographically degenerated levels. Abnormality prevalence was then estimated in two symptomatic cohorts: pooled preoperative clinical-trial radiographs and 881 patients with symptoms attributed to motor-vehicle accidents, excluding levels with <5° rotation to reduce unreliable data due to insufficiently stressed spines. Finally, potential diagnostic performance was assessed in a controlled cadaveric ligament-sectioning model (12 cadavers) using ROC analysis and Youden’s J thresholds. Across clinical cohorts, objective IVM abnormalities were uncommon. Prevalence increased when studies demonstrated adequate total C2–C7 motion, emphasizing the importance of patient effort. In cadavers, vertical instability metrics were most discriminative (AUC 0.96–0.97) with high sensitivity (0.89) and perfect specificity at optimal thresholds, whereas translation changed minimally with sectioning. These results support regression-based instability indices as promising candidates for standardized, physiology-guided cervical instability assessment.
(This article belongs to the Special Issue Advancing Spinal Instability Diagnosis with Artificial Intelligence)
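Youden's J threshold selection, used above to pick the operating points on the ROC curve, can be sketched directly: sweep candidate thresholds and keep the one maximizing J = sensitivity + specificity − 1. The function name and toy data are illustrative assumptions.

```python
import numpy as np

def youden_threshold(scores, labels):
    """Pick the score threshold maximizing Youden's J = sensitivity + specificity - 1."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    best_j, best_t = -1.0, None
    for t in np.unique(scores):
        pred = scores >= t
        sens = np.mean(pred[labels])       # true-positive rate
        spec = np.mean(~pred[~labels])     # true-negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, float(t)
    return best_t, best_j
```

On perfectly separable scores J reaches 1.0; the study's "high sensitivity (0.89) and perfect specificity" corresponds to a J of 0.89 at the optimal threshold.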

19 pages, 2606 KB  
Article
Composite Fault Feature Index-Guided Variational Mode Decomposition with Dynamic Weighted Central Clustering for Bearing Fault Detection
by Bangcheng Zhang, Boyu Shen, Zhi Gao, Yubo Shao, Zaixiang Pang and Xiaojing Yin
Sensors 2026, 26(4), 1394; https://doi.org/10.3390/s26041394 - 23 Feb 2026
Viewed by 378
Abstract
To address the periodic impacts and amplitude-modulated high-frequency resonance phenomena caused by bearing faults in rotating machinery, this paper proposes a detection method. The core innovation lies in: firstly, constructing a composite fault feature index (CFFI) that integrates normalized kurtosis and fuzzy entropy, which synchronously quantifies the fault impact intensity and periodic structure, and serves as an optimization objective; secondly, defining a spectral energy retention rate (SERR) that includes both the full spectrum and characteristic frequency bands to evaluate the denoising effect and fault feature retention, respectively. Based on this, the method adaptively determines the Variational Mode Decomposition (VMD) parameters through the Triangular Topology Aggregation Optimizer (TTAO), and uses Dynamic Weighted Center Clustering (DWCC) to screen key IMFs containing fault-envelope information. On the IMS bearing dataset, the SERR of the reconstructed signal is 0.21356, close to the value of 0.22465 for the actually collected signal, a relative error of 4.9%, indicating high reconstruction accuracy. These quantitative results indicate that CFFI-guided optimization enhances impulsive and periodic fault components while maintaining stable feature-band retention. This approach is suitable for real-world equipment monitoring and exhibits strong engineering applicability.
(This article belongs to the Special Issue Sensing Technologies in Industrial Defect Detection)
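The abstract does not give the SERR formula, but one plausible FFT-based reading — the fraction of spectral energy retained after denoising, computed over the full spectrum or over a characteristic frequency band — can be sketched as follows. The function name, the band convention (FFT bin indices), and this interpretation are all assumptions.

```python
import numpy as np

def spectral_energy_retention(raw, denoised, band=None):
    """Ratio of spectral energy kept after denoising, optionally within a
    characteristic frequency band given as (low_bin, high_bin)."""
    R = np.abs(np.fft.rfft(raw)) ** 2
    D = np.abs(np.fft.rfft(denoised)) ** 2
    if band is not None:
        lo, hi = band
        R, D = R[lo:hi], D[lo:hi]
    return float(D.sum() / R.sum())
```

A ratio near 1 within the fault characteristic band means the denoising step preserved the envelope components the clustering stage later screens for.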

19 pages, 56435 KB  
Article
Deep-Guided Dual-Task Collaborative Learning for Oriented Object Detection in Remote Sensing Images
by Jing Bai, Caizhi Gu, Haiyang Hu, Congcong Li, Yuqi Jiang, Yanran Dai, Zhengyou Wang and Shanna Zhuang
Electronics 2026, 15(4), 887; https://doi.org/10.3390/electronics15040887 - 21 Feb 2026
Viewed by 324
Abstract
Object detection, as a fundamental task, forms the cornerstone of intelligent applications in both UAV surveillance and satellite remote sensing. While most prior works concentrate on solving object scale and rotation angle variance caused by altitude changes, the spatial misalignment stemming from the differing demands of classification subtask and regression subtask also plays a critical role. To tackle these problems, a novel deep-guided dual-task collaborative learning framework is proposed. This framework integrates two key modules: deep-guided collaborative feature fusion (DGC-FF) and dual-task collaborative feature alignment (DTC-FA). DGC-FF effectively integrates fine-grained spatial and semantic information to enhance the network’s multi-scale perception capability. DTC-FA alleviates spatial misalignment between classification and regression branches through collaborative feature alignment and incorporates a rotation-aware detection branch to adapt to varying object orientations. Experimental results show that the proposed method achieves mAP@0.5 of 79.3% on the DroneVehicle dataset and mAP@0.5 of 81.6% on the DIOR-R dataset. The proposed method not only outperforms all compared methods in accuracy but also strikes a favorable efficiency–accuracy balance with an inference rate of 55–58 FPS.
(This article belongs to the Section Computer Science & Engineering)
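Rotation-aware branches like the one described above regress an oriented bounding box, typically parameterised as centre, size, and angle (cx, cy, w, h, θ) rather than an axis-aligned box. As a minimal illustrative sketch (the function name and layout are ours, not from the paper), the four corners of such a box can be recovered with a 2-D rotation:

```python
import numpy as np

def obb_corners(cx, cy, w, h, theta):
    """Return the 4 corners of an oriented bounding box.

    (cx, cy) is the box centre, (w, h) its width/height, and theta the
    rotation angle in radians, counter-clockwise.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])            # 2-D rotation matrix
    # Half-extent offsets of the axis-aligned box, one row per corner.
    half = np.array([[-w, -h], [w, -h], [w, h], [-w, h]]) / 2.0
    return half @ R.T + np.array([cx, cy])     # rotate, then translate

# A 4x2 box rotated by 90 degrees swaps its extents.
corners = obb_corners(0.0, 0.0, 4.0, 2.0, np.pi / 2)
```

Note that angle conventions differ between detectors (e.g. long-edge vs. OpenCV definitions), so the sign and range of θ must match whichever convention the regression head is trained with.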
39 pages, 9763 KB  
Article
SAR-DRBNet: Adaptive Feature Weaving and Algebraically Equivalent Aggregation for High-Precision Rotated SAR Detection
by Lanfang Lei, Sheng Chang, Zhongzhen Sun, Xinli Zheng, Changyu Liao, Wenjun Wei, Long Ma and Ping Zhong
Remote Sens. 2026, 18(4), 619; https://doi.org/10.3390/rs18040619 - 16 Feb 2026
Abstract
Synthetic aperture radar (SAR) imagery is widely used for target detection in complex backgrounds and adverse weather conditions. However, high-precision detection of rotated small targets remains challenging due to severe speckle noise, significant scale variations, and the need for robust rotation-aware representations. To address these issues, we propose SAR-DRBNet, a high-precision rotated small-target detection framework built upon YOLOv13. First, we introduce a Detail-Enhanced Oriented Bounding Box detection head (DEOBB), which leverages multi-branch enhanced convolutions to strengthen fine-grained feature extraction and improve oriented bounding box regression, thereby enhancing rotation sensitivity and localization accuracy for small targets. Second, we design a Ck-MultiDilated Reparameterization Block (CkDRB) that captures multi-scale contextual cues and suppresses speckle interference via multi-branch dilated convolutions and an efficient reparameterization strategy. Third, we propose a Dynamic Feature Weaving module (DynWeave) that integrates global–local dual attention with dynamic large-kernel convolutions to adaptively fuse features across scales and orientations, improving robustness in cluttered SAR scenes. Extensive experiments on three widely used SAR rotated object detection benchmarks (HRSID, RSDD-SAR, and DSSDD) demonstrate that SAR-DRBNet achieves a strong balance between detection accuracy and computational efficiency compared with state-of-the-art oriented bounding box detectors, while exhibiting superior cross-dataset generalization. These results indicate that SAR-DRBNet provides an effective and reliable solution for rotated small-target detection in SAR imagery. Full article
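The multi-branch dilated convolutions used by blocks such as CkDRB enlarge the receptive field at several rates in parallel, capturing multi-scale context without extra downsampling. A toy 1-D sketch of the idea (our own simplified illustration, not the paper's CkDRB):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D cross-correlation with a dilated kernel."""
    k = len(kernel)
    span = (k - 1) * dilation + 1              # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

def multi_branch(x, kernel, dilations=(1, 2, 3)):
    """Sum same-length crops of several dilated branches (multi-scale context)."""
    outs = [dilated_conv1d(x, kernel, d) for d in dilations]
    n = min(len(o) for o in outs)              # align branch output lengths
    return sum(o[:n] for o in outs)
```

Reparameterization strategies of the kind the abstract mentions then fold compatible parallel branches into a single equivalent kernel at inference time, keeping training-time capacity without the multi-branch runtime cost.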
26 pages, 3435 KB  
Article
Young White Pine Detection Using UAV Imagery and Deep Learning Object Detection Models
by Abishek Poudel and Eddie Bevilacqua
Sensors 2026, 26(4), 1284; https://doi.org/10.3390/s26041284 - 16 Feb 2026
Abstract
This study demonstrates the power of combining unmanned aerial vehicle (UAV) imagery and deep learning (DL) for monitoring forest regeneration, specifically focusing on young white pine (Pinus strobus). Using high-resolution three-band RGB and five-band multispectral orthomosaics derived from UAV flights, 20 DL object-detection models were evaluated within ArcGIS Pro 3.4 software (Esri Inc., Redlands, CA, USA). The models were tested across study sites in St. Lawrence County, NY, to assess performance on three distinct size classes of white pine, each stratified into low-, medium-, and high-density areas. The Faster R-CNN (F-RCNN) model, particularly when trained with image rotation and no augmentation, significantly outperformed the others, achieving an average precision of 0.88 across both imagery types. Subsequent confusion matrix analysis yielded 91% and 90% overall accuracy in medium- and high-density white pine blocks, respectively. These findings validate the use of UAV-DL systems as an accurate and efficient tool for operational white pine regeneration assessment, reducing the need for labor-intensive fieldwork. Full article
(This article belongs to the Special Issue Remote Sensing Image Fusion and Object Tracking)
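The overall accuracy figures quoted above come from a confusion matrix: the fraction of all samples that fall on the diagonal (correct class). A minimal sketch, with illustrative counts of our own rather than the study's data:

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy = trace / total of a confusion matrix (rows = truth)."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

# Hypothetical 2-class matrix: detected vs. missed white pine counts.
cm = np.array([[90, 10],
               [ 8, 92]])
acc = overall_accuracy(cm)  # 0.91
```

Overall accuracy alone can mask class imbalance, which is why such studies usually report it alongside per-class precision metrics such as average precision.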
