Search Results (338)

Search Parameters:
Keywords = framing detector

25 pages, 681 KB  
Systematic Review
Systematic Review of HPLC Methods Using UV Detection for Quantification of Vitamin E in Human Plasma
by Miriam Demtschuk and Priska Heinz
LabMed 2026, 3(1), 4; https://doi.org/10.3390/labmed3010004 - 30 Jan 2026
Viewed by 84
Abstract
Measurement of vitamin E levels is used to evaluate the health status in humans. For routine analytics in clinical laboratories, an accurate, quick, and simple determination method is required. One option for the quantification of vitamin E (α-tocopherol) in human blood samples is the use of high-performance liquid chromatography (HPLC) in combination with a UV detector. Several sample preparation methods for this purpose have been reported in the literature. Our aim was to generate an overview and comparison of the different methods. The online database PubMed was searched for published HPLC methods. Of 77 reports screened, 16 methods were selected and summarized in tables. These present the parameters of the sample preparation procedure, the HPLC settings, and selected validation criteria (limit of detection (LOD), limit of quantification (LOQ), intra- and inter-assay values, and recovery rates) of the reported methods. Our review identified several extraction approaches: single or double liquid–liquid extraction with hexane was used most often. This systematic review highlights the similarities and differences among the methods and can therefore be used to develop and establish methods in a laboratory. Full article

27 pages, 49730 KB  
Article
AMSRDet: An Adaptive Multi-Scale UAV Infrared-Visible Remote Sensing Vehicle Detection Network
by Zekai Yan and Yuheng Li
Sensors 2026, 26(3), 817; https://doi.org/10.3390/s26030817 - 26 Jan 2026
Viewed by 204
Abstract
Unmanned Aerial Vehicle (UAV) platforms enable flexible and cost-effective vehicle detection for intelligent transportation systems, yet small-scale vehicles in complex aerial scenes pose substantial challenges from extreme scale variations, environmental interference, and single-sensor limitations. We present AMSRDet (Adaptive Multi-Scale Remote Sensing Detector), an adaptive multi-scale detection network fusing infrared (IR) and visible (RGB) modalities for robust UAV-based vehicle detection. Our framework comprises four novel components: (1) a MobileMamba-based dual-stream encoder extracting complementary features via Selective State-Space 2D (SS2D) blocks with linear complexity O(HWC), achieving 2.1× efficiency improvement over standard Transformers; (2) a Cross-Modal Global Fusion (CMGF) module capturing global dependencies through spatial-channel attention while suppressing modality-specific noise via adaptive gating; (3) a Scale-Coordinate Attention Fusion (SCAF) module integrating multi-scale features via coordinate attention and learned scale-aware weighting, improving small object detection by 2.5 percentage points; and (4) a Separable Dynamic Decoder generating scale-adaptive predictions through content-aware dynamic convolution, reducing computational cost by 48.9% compared to standard DETR decoders. On the DroneVehicle dataset, AMSRDet achieves 45.8% mAP@0.5:0.95 (81.2% mAP@0.5) at 68.3 Frames Per Second (FPS) with 28.6 million (M) parameters and 47.2 Giga Floating Point Operations (GFLOPs), outperforming twenty state-of-the-art detectors including YOLOv12 (+0.7% mAP), DEIM (+0.8% mAP), and Mamba-YOLO (+1.5% mAP). Cross-dataset evaluation on Camera-vehicle yields 52.3% mAP without fine-tuning, demonstrating strong generalization across viewpoints and scenarios. Full article
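The adaptive-gating idea behind the CMGF module can be illustrated with a toy scalar sketch. This is an assumed form for illustration only: the actual module uses spatial-channel attention over feature maps, and the function names and weights below are invented. A gate g in (0, 1), computed from both modalities, blends IR and RGB features so the noisier modality is down-weighted.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def gated_fuse(f_ir, f_rgb, w_ir=1.0, w_rgb=1.0, bias=0.0):
    """Per-element convex blend of two modality features via a learned-style gate.

    The gate weights (w_ir, w_rgb, bias) are hypothetical stand-ins for
    learned parameters.
    """
    fused = []
    for ir, rgb in zip(f_ir, f_rgb):
        g = sigmoid(w_ir * ir - w_rgb * rgb + bias)  # gate depends on both inputs
        fused.append(g * ir + (1.0 - g) * rgb)       # convex combination
    return fused

# Two feature positions: the gate favors IR at the first, RGB at the second.
out = gated_fuse([1.0, -2.0], [0.0, 3.0])
```

Because the blend is convex, each fused value stays between the two modality values, which is what makes the gating a noise-suppression mechanism rather than an amplifier.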
(This article belongs to the Special Issue AI and Smart Sensors for Intelligent Transportation Systems)

22 pages, 6609 KB  
Article
CAMS-AI: A Coarse-to-Fine Framework for Efficient Small Object Detection in High-Resolution Images
by Zhanqi Chen, Zhao Chen, Baohui Yang, Qian Guo, Haoran Wang and Xiangquan Zeng
Remote Sens. 2026, 18(2), 259; https://doi.org/10.3390/rs18020259 - 14 Jan 2026
Viewed by 211
Abstract
Automated livestock monitoring in wide-area grasslands is a critical component of smart agriculture development. Devices such as Unmanned Aerial Vehicles (UAVs), remote sensing, and high-mounted cameras provide unique monitoring perspectives for this purpose. The high-resolution images they capture cover vast grassland backgrounds, where targets often appear as small, distant objects and are extremely unevenly distributed. Applying standard detectors directly to such images yields poor results and extremely high miss rates. To improve the detection accuracy of small targets in high-resolution images, methods represented by Slicing Aided Hyper Inference (SAHI) have been widely adopted. However, in specific scenarios, SAHI’s drawbacks are dramatically amplified. Its strategy of uniform global slicing divides each original image into a fixed number of sub-images, many of which may be pure background (negative samples) containing no targets. This results in a significant waste of computational resources and a precipitous drop in inference speed, falling far short of practical application requirements. To resolve this conflict between accuracy and efficiency, this paper proposes an efficient detection framework named CAMS-AI (Clustering and Adaptive Multi-level Slicing for Aided Inference). CAMS-AI adopts a “coarse-to-fine” intelligent focusing strategy: First, a Region Proposal Network (RPN) is used to rapidly locate all potential target areas. Next, a clustering algorithm is employed to generate precise Regions of Interest (ROIs), effectively focusing computational resources on target-dense areas. Finally, an innovative multi-level slicing strategy and a high-precision model are applied only to these high-quality ROIs for fine-grained detection. Experimental results demonstrate that the CAMS-AI framework achieves a mean Average Precision (mAP) comparable to SAHI while significantly increasing inference speed. 
Taking the RT-DETR detector as an example, while achieving 96% of the mAP50–95 accuracy level of the SAHI method, CAMS-AI’s end-to-end frames per second (FPS) is 10.3 times that of SAHI, showcasing its immense application potential in real-world, high-resolution monitoring scenarios. Full article
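The "coarse-to-fine" focusing step, clustering coarse detections and then slicing only the resulting ROIs, can be sketched as follows. This is a minimal illustration with invented names and thresholds, not the paper's RPN-plus-clustering implementation:

```python
def cluster_rois(centers, eps=100.0):
    """Greedy single-linkage clustering of coarse detection centers (x, y).

    `eps` is a hypothetical pixel-distance threshold.
    """
    clusters = []
    for cx, cy in centers:
        placed = False
        for cl in clusters:
            if any(abs(cx - px) <= eps and abs(cy - py) <= eps for px, py in cl):
                cl.append((cx, cy))
                placed = True
                break
        if not placed:
            clusters.append([(cx, cy)])
    return clusters

def roi_boxes(clusters, margin=50):
    """One padded bounding box (x0, y0, x1, y1) per cluster for fine slicing."""
    boxes = []
    for cl in clusters:
        xs = [p[0] for p in cl]
        ys = [p[1] for p in cl]
        boxes.append((min(xs) - margin, min(ys) - margin,
                      max(xs) + margin, max(ys) + margin))
    return boxes

# Two dense groups far apart in a high-resolution frame -> two compact ROIs,
# instead of uniformly slicing the whole (mostly empty) image.
dets = [(120, 130), (150, 160), (3000, 2900), (3050, 2950)]
rois = roi_boxes(cluster_rois(dets))
```

Only these ROIs would then be passed to the expensive high-precision detector, which is where the framework's speed advantage over uniform global slicing comes from.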
(This article belongs to the Section Remote Sensing Image Processing)

23 pages, 17044 KB  
Article
BEHAVE-UAV: A Behaviour-Aware Synthetic Data Pipeline for Wildlife Detection from UAV Imagery
by Larisa Taskina, Kirill Vorobyev, Leonid Abakumov and Timofey Kazarkin
Drones 2026, 10(1), 29; https://doi.org/10.3390/drones10010029 - 4 Jan 2026
Viewed by 285
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used to monitor wildlife, but training robust detectors still requires large, consistently annotated datasets collected across seasons, habitats and flight altitudes. In practice, such data are scarce and expensive to label, especially when animals occupy only a few pixels in high-altitude imagery. We present a behaviour-aware synthetic data pipeline, implemented in Unreal Engine 5, that combines parameterised animal agents, procedurally varied environments and UAV-accurate camera trajectories to generate large volumes of labelled UAV imagery without manual annotation. Each frame is exported with instance masks, YOLO-format bounding boxes and tracking metadata, enabling both object detection and downstream behavioural analysis. Using this pipeline, we study YOLOv8s trained under six regimes that vary by data source (synthetic versus real) and input resolution, including a fractional fine-tuning sweep on a public deer dataset. High-resolution synthetic pre-training at 1280 px substantially improves small-object detection and, after fine-tuning on only 50% of the real images, recovers nearly all performance achieved with the fully labelled real set. At lower resolution (640 px), synthetic initialisation matches real-only training after fine-tuning, indicating that synthetic data do not harm and can accelerate convergence. These results show that behaviour-aware synthetic data can make UAV wildlife monitoring more sample-efficient while reducing annotation cost. Full article
(This article belongs to the Section Drones in Ecology)

32 pages, 28708 KB  
Article
Adaptive Thermal Imaging Signal Analysis for Real-Time Non-Invasive Respiratory Rate Monitoring
by Riska Analia, Anne Forster, Sheng-Quan Xie and Zhiqiang Zhang
Sensors 2026, 26(1), 278; https://doi.org/10.3390/s26010278 - 1 Jan 2026
Viewed by 531
Abstract
(1) Background: This study presents an adaptive, contactless, and privacy-preserving respiratory-rate monitoring system based on thermal imaging, designed for real-time operation on embedded edge hardware. The system continuously processes temperature data from a compact thermal camera without external computation, enabling practical deployment for home or clinical vital-sign monitoring. (2) Methods: Thermal frames are captured using a 256×192 TOPDON TC001 camera and processed entirely on an NVIDIA Jetson Orin Nano. A YOLO-based detector localizes the nostril region in every even frame (stride = 2) to reduce the computation load, while a Kalman filter predicts the ROI position on skipped frames to maintain spatial continuity and suppress motion jitter. From the stabilized ROI, a temperature-based breathing signal is extracted and analyzed through an adaptive median–MAD hysteresis algorithm that dynamically adjusts to signal amplitude and noise variations for breathing phase detection. Respiratory rate (RR) is computed from inter-breath intervals (IBI) validated within physiological constraints. (3) Results: Ten healthy subjects participated in six experimental conditions including resting, paced breathing, speech, off-axis yaw, posture (supine), and distance variations up to 2.0 m. Across these conditions, the system attained a MAE of 0.57±0.36 BPM and an RMSE of 0.64±0.42 BPM, demonstrating stable accuracy under motion and thermal drift. Compared with peak-based and FFT spectral baselines, the proposed method reduced errors by a large margin across all conditions. (4) Conclusions: The findings confirm that accurate and robust respiratory-rate estimation can be achieved using a low-resolution thermal sensor running entirely on an embedded edge device. 
The combination of YOLO-based nostril detector, Kalman ROI prediction, and adaptive MAD–hysteresis phase that self-adjusts to signal variability provides a compact, efficient, and privacy-preserving solution for non-invasive vital-sign monitoring in real-world environments. Full article
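The median–MAD hysteresis idea can be sketched in a few lines. This is an illustrative reconstruction, not the paper's algorithm: the adaptive per-window re-estimation and the physiological IBI validation are simplified away, and the hysteresis gain `k` is an invented parameter.

```python
from statistics import median
import math

def mad(xs, m):
    """Median absolute deviation about a given median m."""
    return median(abs(x - m) for x in xs)

def count_breaths(signal, k=1.0):
    """Count inhale onsets via hysteresis thresholds around the median."""
    m = median(signal)
    d = mad(signal, m)
    hi, lo = m + k * d, m - k * d        # hysteresis band scales with noise
    state, breaths = "low", 0
    for x in signal:
        if state == "low" and x > hi:    # rising crossing: new breath
            state = "high"
            breaths += 1
        elif state == "high" and x < lo:
            state = "low"
    return breaths

def rr_bpm(signal, fs):
    """Respiratory rate in breaths/min over a window of len(signal)/fs seconds."""
    return count_breaths(signal) * 60.0 * fs / len(signal)

# Synthetic 60 s window at 1 Hz containing 12 breathing cycles.
sig = [math.sin(2 * math.pi * 12 * t / 60) for t in range(60)]
```

Because the band is derived from the median and MAD rather than fixed thresholds, it tracks amplitude and noise changes, which is the core of the adaptive behaviour described above.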

23 pages, 4261 KB  
Article
Efficient Drone Detection Using Temporal Anomalies and Small Spatio-Temporal Networks
by Abhijit Mahalanobis and Amadou Tall
Sensors 2026, 26(1), 170; https://doi.org/10.3390/s26010170 - 26 Dec 2025
Viewed by 411
Abstract
Detecting small drones in Infrared (IR) sequences poses significant challenges due to their low visibility, low resolution, and complex cluttered backgrounds. These factors often lead to high false alarm and missed detection rates. This paper frames drone detection as a spatio-temporal anomaly detection problem and proposes a remarkably lightweight pipeline solution (well-suited for edge applications), by employing a statistical temporal anomaly detector (known as the temporal Reed Xiaoli (TRX) algorithm), in parallel with a light-weight convolutional neural network known as the TCRNet. While the TRX detector is unsupervised, the TCRNet is trained to discriminate between drones and clutter using spatio-temporal patches (or chips). The confidence maps from both modules are additively fused to localize drones in video imagery. We compare our method, dubbed TRX-TCRnet, to other state-of-the-art drone detection techniques using the Detection of Aircraft Under Background (DAUB) dataset. Our approach achieves exceptional computational efficiency with only 0.17 GFLOPs with 0.83 M parameters, outperforming methods that require 145–795 times more computational resources. At the same time, the TRX–TCRNet achieves one of the highest detection accuracies (mAP50 of 97.40) while requiring orders of magnitude fewer computational resources than competing methods, demonstrating unprecedented efficiency–performance trade-offs for real-time applications. Experimental results, including ROC and PR curves, confirm the framework’s exceptional suitability for resource-constrained environments and embedded systems. Full article
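The RX statistic underlying TRX is a Mahalanobis distance to background statistics. A per-pixel scalar sketch is given below; it is illustrative only, since the real TRX operates on temporal pixel vectors with a full covariance matrix, and the threshold is an invented constant.

```python
from statistics import mean, pvariance

def rx_score(history, x, eps=1e-9):
    """Squared Mahalanobis distance of sample x to a pixel's temporal background."""
    mu = mean(history)
    var = pvariance(history, mu) + eps   # eps guards perfectly constant pixels
    return (x - mu) ** 2 / var

def anomaly_map(frames, current, thresh=9.0):
    """Flag pixels whose current value deviates strongly from their history."""
    h, w = len(current), len(current[0])
    hist = [[[f[i][j] for f in frames] for j in range(w)] for i in range(h)]
    return [[rx_score(hist[i][j], current[i][j]) > thresh for j in range(w)]
            for i in range(h)]

# Static 2x2 background with slight sensor noise; a small target appears
# at pixel (0, 1) in the current frame.
frames = [[[10, 10], [10, 10]], [[11, 9], [10, 11]], [[10, 11], [9, 10]]]
current = [[10, 40], [10, 10]]
flags = anomaly_map(frames, current)
```

The unsupervised statistic needs no training; in the paper it is fused with a learned network's confidence map, so each component covers the other's failure modes.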
(This article belongs to the Special Issue Signal Processing and Machine Learning for Sensor Systems)

24 pages, 11407 KB  
Article
An Autonomous UAV Power Inspection Framework with Vision-Based Waypoint Generation
by Qi Wang, Zixuan Zhang and Wei Wang
Appl. Sci. 2026, 16(1), 76; https://doi.org/10.3390/app16010076 - 21 Dec 2025
Viewed by 351
Abstract
With the rapid development of Unmanned Aerial Vehicle (UAV) technology, UAVs play an increasingly important role in electrical power inspection. Automated approaches that generate inspection waypoints based on tower features have emerged in recent years. However, these solutions commonly rely on tower coordinates, which can be difficult to obtain. To address this issue, this study presents an autonomous inspection waypoint generation method based on object detection. The main contributions are as follows: (1) After acquiring and constructing the distribution tower dataset, we propose a lightweight object detector based on You Only Look Once (YOLOv8). The model integrates the Generalized Efficient Layer Aggregation Network (GELAN) module in the backbone to reduce model parameters and incorporates Powerful Intersection over Union (PIoU) to enhance the accuracy of bounding box regression. (2) Based on detection results, a three-stage waypoint generator is designed: Stage 1 estimates the initial tower’s coordinates and altitude; Stage 2 refines these estimates; and Stage 3 determines the positions of subsequent towers. The generator ultimately provides the target’s position and heading information, enabling the UAV to perform inspection maneuvers. Compared to classic models, the proposed model runs at 56 Frames Per Second (FPS) and achieves an approximate 2.1% improvement in mAP50:95. In addition, the proposed waypoint estimator achieves tower position estimation errors within 0.8 m and azimuth angle errors within 0.01 rad. The proposed method’s effectiveness is further validated through actual flight tests involving multiple consecutive distribution towers. Full article

17 pages, 3109 KB  
Article
Enhanced YOLOv8n-Based Three-Module Lightweight Helmet Detection System
by Xinyu Zuo, Yiqing Dai, Chao Yu and Wang Gang
Sensors 2025, 25(24), 7664; https://doi.org/10.3390/s25247664 - 17 Dec 2025
Viewed by 482
Abstract
Maintaining a safe working environment for construction workers is critical to the improvement of urban areas. Present safety helmet detection technologies used on construction sites suffer from several issues, including low accuracy, expensive edge-device deployment, and complex backgrounds. To overcome these obstacles, this paper introduces an efficient detection method based on an improved version of YOLOv8n. The improved algorithm comprises three components: the C2f-SCConv architecture, the Partial Convolutional Detector (PCD), and Coordinate Attention (CA). Coordinate attention improves detection, reduces redundancy, and sharpens feature localization. A Partial Convolution detector is then constructed to further enhance feature quality, decrease computing cost, and make corrections more effective. Replacing the bottleneck C2f module with C2f-SCConv makes feature refinement and representation more effective. Compared with the baseline, the upgraded YOLOv8n reduces model size by 2.21 MB, increases frame rate by 12.6 percent, decreases FLOPs by 49.9 percent, and achieves an average accuracy of 94.4 percent. This method is more efficient, faster, and cheaper to deploy on-site than conventional helmet-detection algorithms. Full article
(This article belongs to the Special Issue Intelligent Sensors and Artificial Intelligence in Building)

15 pages, 1730 KB  
Article
Research on Printed Circuit Board (PCB) Defect Detection Algorithm Based on Convolutional Neural Networks (CNN)
by Zhiduan Ni and Yeonhee Kim
Appl. Sci. 2025, 15(24), 13115; https://doi.org/10.3390/app152413115 - 12 Dec 2025
Viewed by 1196
Abstract
Printed Circuit Board (PCB) defect detection is critical for quality control in electronics manufacturing. Traditional manual inspection and classical Automated Optical Inspection (AOI) methods face challenges in speed, consistency, and flexibility. This paper proposes a CNN-based approach for automatic PCB defect detection using the YOLOv5 model. The method leverages a Convolutional Neural Network to identify various PCB defect types (e.g., open circuits, short circuits, and missing holes) from board images. In this study, a model was trained on a PCB image dataset with detailed annotations. Data augmentation techniques, such as sharpening and noise filtering, were applied to improve robustness. The experimental results showed that the proposed approach could locate and classify multiple defect types on PCBs, with overall detection precision and recall above 90% and 91%, respectively, enabling reliable automated inspection. A brief comparison with the latest YOLOv8 model is also presented, showing that the proposed CNN-based detector offers competitive performance. This study shows that deep learning-based defect detection can improve the PCB inspection efficiency and accuracy significantly, paving the way for intelligent manufacturing and quality assurance in PCB production. From a sensing perspective, we frame the system around an industrial RGB camera and controlled illumination, emphasizing how imaging-sensor choices and settings shape defect visibility and model robustness, and sketching future sensor-fusion directions. Full article
(This article belongs to the Special Issue Applications in Computer Vision and Image Processing)

19 pages, 2659 KB  
Article
A Structure-Aware Masked Autoencoder for Sparse Character Image Recognition
by Cheng Luo, Wenhong Wang, Junhang Mai, Tianwei Mu, Shuo Guo and Mingzhe Yuan
Electronics 2025, 14(24), 4886; https://doi.org/10.3390/electronics14244886 - 12 Dec 2025
Viewed by 500
Abstract
Conventional vehicle character recognition methods often treat detection and recognition as separate processes, resulting in limited feature interaction and potential error propagation. To address this issue, this paper proposes a structure-aware self-supervised Masked Autoencoder (CharSAM-MAE) framework, combined with an independent region extraction preprocessing stage. A YOLOv8n detector is employed solely to crop the region of interest (ROI) from full-frame vehicle images using 50 single bounding-box annotated samples. After cropping, the detector is discarded, and subsequent self-supervised pre-training and recognition are fully executed using MAE without any involvement of YOLO model parameters or labeled data. CharSAM-MAE incorporates a structure-aware masking strategy and a region-weighted reconstruction loss during pre-training to improve both local structural representation and global feature modeling. During fine-tuning, a multi-head attention-enhanced CTC decoder (A-CTC) is applied to mitigate issues such as sparse characters, adhesion, and long-sequence instability. The framework is trained on 13,544 ROI images, with only 5% of labeled data used for supervised fine-tuning. Experimental results demonstrate that the proposed method achieves 99.25% character accuracy, 88.6% sequence accuracy, and 0.85% character error rate, outperforming the PaddleOCR v5 baseline (98.92%, 85.2%, and 1.15%, respectively). These results verify the effectiveness of structure-aware self-supervised learning and highlight the applicability of the proposed method for industrial character recognition with minimal annotation requirements. Full article
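The CTC output stage referenced above follows the standard collapse-then-remove-blanks rule; a minimal greedy decode is shown below. The attention enhancement of A-CTC is not reproduced here, and the toy alphabet is invented for illustration.

```python
def ctc_greedy_decode(frame_argmax, blank=0):
    """Standard greedy CTC decoding: collapse repeats, then drop blank tokens."""
    out, prev = [], None
    for s in frame_argmax:
        if s != prev and s != blank:   # keep only new, non-blank symbols
            out.append(s)
        prev = s
    return out

# Per-frame argmax over a toy alphabet {0: blank, 1: 'A', 2: 'B'}.
# The blank at index 2 separates the two 'A's; trailing blanks separate the 'B's.
seq = ctc_greedy_decode([1, 1, 0, 1, 2, 2, 0, 0, 2])
```

The blank token is what lets CTC emit repeated characters in a row, which matters for the adhesion and long-sequence issues the abstract mentions.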
(This article belongs to the Section Electrical and Autonomous Vehicles)

18 pages, 1070 KB  
Article
Advancing Real-Time Polyp Detection in Colonoscopy Imaging: An Anchor-Free Deep Learning Framework with Adaptive Multi-Scale Perception
by Wanyu Qiu, Xiao Yang, Zirui Liu and Chen Qiu
Sensors 2025, 25(24), 7524; https://doi.org/10.3390/s25247524 - 11 Dec 2025
Viewed by 534
Abstract
Accurate and real-time detection of polyps in colonoscopy is a critical task for the early prevention of colorectal cancer. The primary difficulties include insufficient extraction of multi-scale contextual cues for polyps of different sizes, inefficient fusion of multi-level features, and a reliance on hand-crafted anchor priors that require extensive tuning and compromise generalization performance. Therefore, we introduce a one-stage anchor-free detector that achieves state-of-the-art accuracy whilst running in real-time on a GTX 1080-Ti GPU workstation. Specifically, to enrich contextual information across a wide spectrum, our Cross-Stage Pyramid Pooling module efficiently aggregates multi-scale contexts through cascaded pooling and cross-stage partial connections. Subsequently, to achieve a robust equilibrium between low-level spatial details and high-level semantics, our Weighted Bidirectional Feature Pyramid Network adaptively integrates features across all scales using learnable channel-wise weights. Furthermore, by reconceptualizing detection as a direct point-to-boundary regression task, our anchor-free head obviates the dependency on hand-tuned priors. This regression is supervised by a Scale-invariant Distance with Aspect-ratio IoU loss, substantially improving localization accuracy for polyps of diverse morphologies. Comprehensive experiments on a large dataset comprising 103,469 colonoscopy frames substantiate the superiority of our method, achieving 98.8% mAP@0.5 and 82.5% mAP@0.5:0.95 at 35.8 FPS. Our method outperforms widely used CNN-based models (e.g., EfficientDet, YOLO series) and recent Transformer-based competitors (e.g., Adamixer, HDETR), demonstrating its potential for clinical application. Full article
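The "direct point-to-boundary regression" of an anchor-free head can be made concrete with the standard FCOS-style encoding, in which each feature-map location predicts four distances (l, t, r, b) to the box edges so that no anchor priors are needed. This is a generic sketch of that encoding, not the paper's exact head:

```python
def decode_box(px, py, l, t, r, b):
    """Map a point (px, py) plus (left, top, right, bottom) distances
    to a box (x0, y0, x1, y1)."""
    return (px - l, py - t, px + r, py + b)

def encode_box(px, py, box):
    """Inverse mapping used to build regression targets during training:
    distances from the point to each box edge."""
    x0, y0, x1, y1 = box
    return (px - x0, py - y0, x1 - px, y1 - py)

# A ground-truth box and an interior point that is assigned to it.
box = (30.0, 40.0, 90.0, 120.0)
dists = encode_box(50.0, 70.0, box)
```

Because encode and decode are exact inverses for any interior point, the head never needs hand-tuned anchor shapes, which is the generalization benefit the abstract claims.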
(This article belongs to the Special Issue Advanced Biomedical Imaging and Signal Processing)

26 pages, 4166 KB  
Article
A Family of Fundamental Positive Sequence Detectors Based on Repetitive Schemes
by Glendy Anyali Catzin-Contreras, Gerardo Escobar, Luis Ibarra and Andres Alejandro Valdez-Fernandez
Energies 2025, 18(23), 6283; https://doi.org/10.3390/en18236283 - 29 Nov 2025
Viewed by 366
Abstract
In electrical power systems, the extraction of the fundamental positive sequence (FPS) is paramount for synchronization, power calculation, and a wide variety of metering and control tasks. This work shows that a moving average filter (MAF) used in the synchronous reference frame to extract the FPS from electrical systems is equivalent to the cascade connection of a comb filter (CF) with a second-order harmonic oscillator (SOHO), with all its variables expressed in fixed reference frame coordinates. On the one hand, the CF introduces an infinite number of notches tuned at all integer harmonics of the fundamental frequency ω0, thus suppressing harmonic distortion in the incoming signal and acting as a repetitive-based pre-filter (RPF). On the other hand, the SOHO is responsible for delivering the fundamental component of the input signal with a unitary gain, while additionally reducing the effect of harmonic distortion. Then, it is shown that other RPFs built from previously reported repetitive schemes (all-harmonics, odd-harmonics, and the 6±1 harmonics) can be placed instead of the CF, giving rise to a family of FPS detectors. In particular, this work also shows that the CF-SOHO is a special case of the FPS detector based on the all-harmonics RPF. This work provides the mathematical derivation of the FPS detector structure, tuning rules for the SOHO gain associated with each FPS detector, as well as experimental results under a reference signal subject to perturbations such as unbalance, harmonic distortion, phase, and amplitude jumps, exhibiting convergence in only half the fundamental period in most carried out tests. Full article
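The harmonic-rejection property that makes the synchronous-frame MAF work can be checked numerically: averaging over exactly one fundamental period cancels every integer harmonic and leaves only the DC term, which is where the FPS appears in synchronous-frame coordinates. This is a numeric illustration of that property, not the paper's fixed-frame CF–SOHO derivation; the sample count and harmonic amplitudes are arbitrary.

```python
import math

def maf(samples):
    """Moving average filter output for one full window."""
    return sum(samples) / len(samples)

N = 200       # samples per fundamental period (arbitrary choice)
dc = 1.0      # stand-in for the FPS component, which is DC in the dq frame

# One fundamental period of DC plus 5th and 7th harmonics (typical
# distortion components in three-phase systems).
window = [dc + 0.3 * math.sin(2 * math.pi * 5 * n / N)
             + 0.2 * math.sin(2 * math.pi * 7 * n / N)
          for n in range(N)]

out = maf(window)   # harmonics average to ~0 over whole cycles; DC survives
```

Each harmonic completes a whole number of cycles inside the window, so its samples sum to zero; this is the discrete analogue of the notches the comb filter places at every integer multiple of ω0.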
(This article belongs to the Section F1: Electrical Power System)

6 pages, 2324 KB  
Interesting Images
Diagnosing Dysphagia in Forestier Syndrome: A Dynamic Digital Radiology Application
by Michaela Cellina, Daniele Bongetta, Carlo Martinenghi and Giancarlo Oliva
Diagnostics 2025, 15(23), 3020; https://doi.org/10.3390/diagnostics15233020 - 27 Nov 2025
Viewed by 412
Abstract
Diffuse idiopathic skeletal hyperostosis (DISH), or Forestier’s disease, is a non-inflammatory condition characterized by the calcification and ossification of spinal ligaments and entheses, especially the anterior longitudinal ligament. Its prevalence increases with age and it is more common in males. The term DISH usually refers to the imaging aspects of this condition, while “Forestier’s disease” is used for the clinical correlates of the condition, especially the development of dysphagia. Diagnosis is usually made with conventional radiography, based on the Resnick and Niwayama criteria: flowing osteophytes over at least four contiguous vertebral bodies, the preservation of intervertebral disk space, absent facet and costovertebral joint ankylosis, and absent sacroiliac joint abnormalities. A “melted candle wax” appearance along the spine is typical of the advanced disease. Large anterior osteophytes in the cervical spine lead not only to stiffness and chronic neck pain, but also to compressive symptoms such as dysphagia, dysphonia, and even airway compromise. Digital Dynamic Radiography (DDR), thanks to a flat-panel detector system, captures high-temporal resolution sequential low-dose radiographs at high frame rates in dynamic motion studies to provide functional information. We report the case of a 50-year-old female patient diagnosed with Forestier’s disease. Cervical radiography showed coarse anterior osteophytes and calcifications typical of DISH. The patient complained about persistent cervical pain and significant dysphagia. To investigate the underlying mechanism, a DDR with barium oral administration was performed. The examination confirmed the mechanical narrowing of the pharyngeal lumen caused by bulky anterior osteophytes. Given the severity of the symptoms, the patient underwent a surgical resection of the osteophytic and calcified components, with a subsequent improvement of swallowing function. 
This case highlights how DDR provides functional and morphological information in patients with dysphagia related to cervical DISH. Full article
(This article belongs to the Section Medical Imaging and Theranostics)

17 pages, 8567 KB  
Article
Multi-Object Tracking with Confidence-Based Trajectory Prediction Scheme
by Kai Yi, Jiarong Li and Yi Zhang
Sensors 2025, 25(23), 7221; https://doi.org/10.3390/s25237221 - 26 Nov 2025
Viewed by 1568
Abstract
Multi-Object Tracking (MOT) aims to associate multiple objects across consecutive video frames and maintain continuous, stable trajectories. Much recent attention has been paid to data association, where many methods filter detection boxes for object matching based on the confidence scores (CS) of the detectors without fully exploiting the detection results. The Kalman filter (KF) is a traditional tool for sequential frame processing that has been widely adopted in MOT: it matches and updates a predicted trajectory with a detection box in each frame. In crowded scenes, however, noise produces low-confidence detection boxes, causing identity switches (IDS) and tracking failures. In this paper, we thoroughly investigate the limitations of existing trajectory prediction schemes in MOT and show that the KF can still achieve competitive results in video sequence processing if proper care is taken to handle the noise. We propose a confidence-based trajectory prediction scheme (dubbed ConfMOT) built on the KF. The CS of the detection results is used to adjust the noise when updating the KF and to predict the trajectories of the tracked objects. A cost matrix (CM) is constructed to measure the cost of successfully matching unreliable objects. Each trajectory is labeled with its own CS, and lost trajectories that have not been updated for a long time are removed. Our tracker is simple yet efficient. Extensive experiments on mainstream datasets show that it outperforms other advanced competitors. Full article
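The confidence-weighted updating described in the abstract can be illustrated with a minimal Kalman measurement update in which the measurement noise is scaled by the detection confidence, so low-confidence boxes pull the estimate less. This is a sketch under assumptions, not the paper's exact scheme: the scaling rule `R_base / conf`, the function name `kf_update`, and the state layout are all illustrative.

```python
import numpy as np

def kf_update(x, P, z, conf, R_base=1.0):
    """One Kalman measurement update with confidence-scaled noise.

    x, P : state mean and covariance
    z    : observed position components
    conf : detection confidence in (0, 1]; lower confidence inflates
           the measurement noise R, so the box is trusted less.
    """
    H = np.eye(len(z), len(x))             # observe position components only
    R = (R_base / conf) * np.eye(len(z))   # low confidence -> large noise
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
    return x_new, P_new
```

With a high-confidence detection the posterior moves most of the way toward the measurement; with a low-confidence one it barely moves, which is the behavior the paper exploits to suppress noisy boxes in crowded scenes.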
(This article belongs to the Section Sensing and Imaging)

18 pages, 2805 KB  
Article
An Improved YOLOv11 Recognition Algorithm for Heavy-Duty Trucks on Highways
by Junkai Guo and Mingjiang Zhang
Electronics 2025, 14(23), 4621; https://doi.org/10.3390/electronics14234621 - 25 Nov 2025
Viewed by 389
Abstract
This paper presents an enhanced YOLOv11-based algorithm for highway freight truck tarpaulin recognition, improving real-time performance and accuracy in identifying truck axle types and tarpaulin materials. The proposed method incorporates four key innovations. First, the lightweight Spatial and Channel Reconstruction Convolution (SCConv) module replaces standard convolutional layers in the YOLOv11 backbone feature extraction network, maintaining strong feature extraction capability while reducing model parameters and computational complexity. Second, a Channel-Spatial Multi-scale Attention Module (CSMAM) is integrated with the C3k2 module of the YOLOv11 feature fusion network, strengthening the network’s capacity to learn both truck body features and tarpaulin coverage characteristics. Third, a novel Dual-Enhanced Channel Detection Head (DEC-Head) detector is designed to improve recognition under ambiguous conditions and reduce the parameter count. Finally, the SIoU loss function replaces the conventional bounding box loss, substantially improving prediction box accuracy. Comprehensive experiments demonstrate that, compared to the baseline YOLOv11, our method achieves an approximate 4.4% increase in precision, a 5.2% improvement in recall, and a 7.2% higher mean Average Precision (mAP), while also significantly improving inference speed (Frames Per Second, FPS), establishing superior recognition performance for truck tarpaulin detection tasks. Full article
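The SIoU loss adopted in this paper extends the plain IoU bounding-box loss with angle, distance, and shape penalty terms. As a baseline for comparison, a minimal IoU-only loss for axis-aligned boxes can be sketched as follows (the function name and the `(x1, y1, x2, y2)` corner convention are illustrative assumptions, and the SIoU penalty terms are omitted):

```python
def iou_loss(box_a, box_b):
    """1 - IoU for two axis-aligned boxes given as (x1, y1, x2, y2).

    This is the plain IoU term only; SIoU additionally penalizes the
    angle, distance, and shape mismatch between the boxes.
    """
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    # Intersection width/height, clamped at zero for disjoint boxes.
    iw = max(0.0, min(xa2, xb2) - max(xa1, xb1))
    ih = max(0.0, min(ya2, yb2) - max(ya1, yb1))
    inter = iw * ih
    union = (xa2 - xa1) * (ya2 - ya1) + (xb2 - xb1) * (yb2 - yb1) - inter
    return 1.0 - inter / union if union > 0 else 1.0
```

Identical boxes give a loss of 0 and disjoint boxes a loss of 1; SIoU's extra terms mainly help by providing a useful gradient direction even when overlap is small.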
