Search Results (1,792)

Search Parameters:
Keywords = time-domain imaging

25 pages, 6030 KiB  
Article
Sparse Transform and Compressed Sensing Methods to Improve Efficiency and Quality in Magnetic Resonance Medical Imaging
by Santiago Villota and Esteban Inga
Sensors 2025, 25(16), 5137; https://doi.org/10.3390/s25165137 - 19 Aug 2025
Abstract
This paper explores the application of transform-domain sparsification and compressed sensing (CS) techniques to improve the efficiency and quality of magnetic resonance imaging (MRI). We implement and evaluate three sparsifying methods—the discrete wavelet transform (DWT), fast Fourier transform (FFT), and discrete cosine transform (DCT)—which are used to simulate subsampled reconstruction via inverse transforms. Additionally, an exact CS reconstruction algorithm, basis pursuit (BP), is implemented with the L1-MAGIC toolbox as a benchmark based on convex optimization with L1-norm minimization. Emphasis is placed on BP, which satisfies the formal requirements of CS theory, including incoherent sampling and sparse recovery via nonlinear reconstruction. Each method is assessed in MATLAB R2024b using standardized DICOM images and varying sampling rates. The evaluation metrics include peak signal-to-noise ratio (PSNR), root mean square error (RMSE), structural similarity index measure (SSIM), execution time, memory usage, and compression efficiency. The results show that although DCT outperforms the other transforms in simulation in terms of PSNR and SSIM, it is inconsistent with the physics of MRI acquisition. Conversely, BP offers a theoretically grounded reconstruction approach with acceptable accuracy and clinical relevance. Despite the limitations of a controlled experimental setup, this study establishes a reproducible benchmarking framework and highlights the trade-offs between the quality of transform-based reconstruction and computational complexity. Future work will extend this study by incorporating clinically validated CS algorithms with L0 and nonconvex Lp (0 < p < 1) regularization to align with state-of-the-art MRI reconstruction practices.
(This article belongs to the Section Industrial Sensors)
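As a rough illustration of the transform-domain experiment described above, the sketch below (Python rather than the paper's MATLAB, with a synthetic phantom standing in for DICOM data) sparsifies an image in the DCT domain, reconstructs it via the inverse transform, and scores the result with PSNR. All names are illustrative; the BP/L1-MAGIC benchmark is not reproduced here.

```python
# Hedged sketch: DCT-domain sparsification + inverse-transform reconstruction.
import numpy as np
from scipy.fft import dctn, idctn

def dct_sparsify(img, keep_ratio=0.1):
    """Keep only the largest `keep_ratio` fraction of DCT coefficients."""
    coeffs = dctn(img, norm="ortho")
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep_ratio)
    coeffs[np.abs(coeffs) < thresh] = 0.0          # discard small coefficients
    return idctn(coeffs, norm="ortho")

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

x = np.linspace(0.0, 1.0, 128)
phantom = 255.0 * np.outer(np.sin(np.pi * x), np.sin(np.pi * x))  # smooth test image
recon = dct_sparsify(phantom, keep_ratio=0.1)
print(f"PSNR at 10% of coefficients: {psnr(phantom, recon):.1f} dB")
```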

21 pages, 4332 KiB  
Article
A Comparative Study of Time–Frequency Representations for Bearing and Rotating Fault Diagnosis Using Vision Transformer
by Ahmet Orhan, Nikolay Yordanov, Merve Ertarğın, Marin Zhilevski and Mikho Mikhov
Machines 2025, 13(8), 737; https://doi.org/10.3390/machines13080737 - 19 Aug 2025
Abstract
This paper presents a comparative analysis of bearing and rotating component fault classification based on different time–frequency representations using a vision transformer (ViT). Four time–frequency transformation techniques—the short-time Fourier transform (STFT), continuous wavelet transform (CWT), Hilbert–Huang transform (HHT), and Wigner–Ville distribution (WVD)—were applied to convert the signals into 2D images. A pretrained ViT-Base architecture was fine-tuned on the resulting images for classification tasks. The model was evaluated in two separate scenarios: (i) eight-class rotating component fault classification and (ii) four-class bearing fault classification. Importantly, in each task the samples were collected under varying conditions of the other component (i.e., different rotating conditions in bearing classification and vice versa). This design allowed for an independent assessment of the model's ability to generalize across fault domains. The experimental results demonstrate that the ViT-based approach achieves high classification performance across the time–frequency representations, highlighting its potential for mechanical fault diagnosis in rotating machinery. Notably, the model achieved higher accuracy in bearing fault classification than in rotating component fault classification, suggesting higher sensitivity to bearing-related anomalies.
(This article belongs to the Section Machines Testing and Maintenance)
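A hedged sketch of the signal-to-image step described above: converting a 1-D vibration signal into a normalized STFT magnitude image that a ViT could consume. The sampling rate, window sizes, and the synthetic signal are assumptions, not the paper's settings.

```python
# Hedged sketch: 1-D signal -> STFT magnitude image for a ViT classifier.
import numpy as np
from scipy.signal import stft

fs = 12_000                                        # assumed sampling rate, Hz
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 157 * t)               # stand-in for a bearing signal
signal += 0.5 * np.random.default_rng(1).standard_normal(fs)

f, frames, Zxx = stft(signal, fs=fs, nperseg=256, noverlap=192)
spec_db = 20 * np.log10(np.abs(Zxx) + 1e-12)       # magnitude in dB

# Min-max normalize to [0, 255] so it can be saved or fed as a grayscale image.
img = (spec_db - spec_db.min()) / (spec_db.max() - spec_db.min()) * 255
print(img.shape)                                   # (freq bins, time frames)
```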

23 pages, 1938 KiB  
Article
Algorithmic Silver Trading via Fine-Tuned CNN-Based Image Classification and Relative Strength Index-Guided Price Direction Prediction
by Yahya Altuntaş, Fatih Okumuş and Adnan Fatih Kocamaz
Symmetry 2025, 17(8), 1338; https://doi.org/10.3390/sym17081338 - 16 Aug 2025
Abstract
Predicting short-term buy and sell signals in financial markets remains a significant challenge for algorithmic trading. This difficulty stems from the data's inherent volatility and noise, which often lead to spurious signals and poor trading performance. This paper presents a novel algorithmic trading model for silver that combines fine-tuned Convolutional Neural Networks (CNNs) with a decision filter based on the Relative Strength Index (RSI). The technique allows for the prediction of buy and sell points by turning time series data into chart images. Daily silver price per ounce data were turned into chart images using technical analysis indicators. Four pre-trained CNNs, namely AlexNet, VGG16, GoogLeNet, and ResNet-50, were fine-tuned on the generated image dataset to find the best architecture in terms of classification and financial performance. The models were evaluated using walk-forward validation with an expanding window, which made the tests more realistic and the performance evaluation more robust under different market conditions. Fine-tuned VGG16 with the RSI filter had the best cost-adjusted profitability, with a cumulative return of 115.03% over five years, nearly double the 61.62% return of a buy-and-hold strategy. This outperformance is especially notable because the evaluation period trended mostly upward, which makes passive benchmarks harder to beat. Adding the RSI filter also led models to make more disciplined decisions, reducing low-confidence transactions. In general, the results show that pre-trained CNNs fine-tuned on visual representations, when supplemented with domain-specific heuristics, can provide strong and cost-effective solutions for algorithmic trading, even under realistic cost assumptions.
(This article belongs to the Section Computer)
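The RSI decision filter lends itself to a compact sketch. The 14-period window and 30/70 thresholds are common conventions assumed here; the paper's exact filter rules may differ.

```python
# Hedged sketch: RSI as a veto on CNN-predicted buy/sell signals.
import numpy as np

def rsi(prices, period=14):
    """Simple-average RSI over the most recent `period` price changes."""
    deltas = np.diff(prices[-(period + 1):])
    gains = deltas[deltas > 0].sum() / period
    losses = -deltas[deltas < 0].sum() / period
    if losses == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + gains / losses)

def filtered_signal(cnn_signal, prices):
    r = rsi(prices)
    if cnn_signal == "buy" and r > 70:     # overbought: distrust the buy
        return "hold"
    if cnn_signal == "sell" and r < 30:    # oversold: distrust the sell
        return "hold"
    return cnn_signal
```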

20 pages, 8759 KiB  
Article
Small Sample Palmprint Recognition Based on Image Augmentation and Dynamic Model-Agnostic Meta-Learning
by Xiancheng Zhou, Huihui Bai, Zhixu Dong, Kaijun Zhou and Yehui Liu
Electronics 2025, 14(16), 3236; https://doi.org/10.3390/electronics14163236 - 14 Aug 2025
Abstract
Palmprint recognition is becoming increasingly common in security authentication, mobile payment, and crime detection. To address the small sample sizes and low recognition rates typical of palmprint data, a small-sample palmprint recognition method based on image expansion and Dynamic Model-Agnostic Meta-Learning (DMAML) is proposed. For data augmentation, a multi-connected conditional generative network is designed to generate palmprints; the network is trained using a gradient-penalized hybrid loss function and a dual time-scale update rule to help the model converge stably, and the trained network is used to generate an expanded palmprint dataset. On this basis, a palmprint feature extraction network incorporating frequency-domain and residual components is designed to extract palmprint feature information. The DMAML training method establishes a multistep loss list for the query-set loss in the inner loop and dynamically adjusts the outer-loop learning rate by combining gradient warmup with a cosine annealing strategy. The experimental results show that the palmprint dataset expansion method in this paper can effectively improve the training efficiency of the palmprint recognition model. Evaluated on the Tongji dataset in an N-way K-shot setting, the proposed method achieves an accuracy of 94.62% ± 0.06% on the 5-way 1-shot task and 87.52% ± 0.29% on the 10-way 1-shot task, significantly outperforming ProtoNets (90.57% ± 0.65% and 81.15% ± 0.50%, respectively); these improvements of 4.05% and 6.37% demonstrate the effectiveness of the method.
(This article belongs to the Section Artificial Intelligence)
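The outer-loop schedule described above (warmup followed by cosine annealing) can be sketched in a few lines; the constants are illustrative, not the paper's settings.

```python
# Hedged sketch: outer-loop learning rate = linear warmup + cosine annealing.
import math

def outer_lr(step, total_steps, base_lr=1e-3, warmup_steps=100, min_lr=1e-5):
    if step < warmup_steps:                        # linear warmup phase
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))
```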

23 pages, 18349 KiB  
Article
Estimating Radicle Length of Germinating Elm Seeds via Deep Learning
by Dantong Li, Yang Luo, Hua Xue and Guodong Sun
Sensors 2025, 25(16), 5024; https://doi.org/10.3390/s25165024 - 13 Aug 2025
Abstract
Accurate measurement of seedling traits is essential for plant phenotyping, particularly in understanding growth dynamics and stress responses. Elm trees (Ulmus spp.), ecologically and economically significant, pose unique challenges due to their curved seedling morphology. Traditional manual measurement methods are time-consuming, prone to human error, and often lack consistency. Moreover, automated approaches remain limited and often fail to accurately process seedlings with nonlinear or curved morphologies. In this study, we introduce GLEN, a deep learning-based model for detecting germinating elm seeds and accurately estimating the lengths of their germinating structures. It leverages a dual-path architecture that combines pixel-level spatial features with instance-level semantic information, enabling robust measurement of curved radicles. To support training, we construct GermElmData, a curated dataset of annotated elm seedling images, and introduce a novel synthetic data generation pipeline that produces high-fidelity, morphologically diverse germination images. This reduces the dependence on extensive manual annotations and improves model generalization. Experimental results demonstrate that GLEN achieves an estimation error on the order of millimeters, outperforming existing models. Beyond quantifying germinating elm seeds, the architectural design and data augmentation strategies in GLEN offer a scalable framework for morphological quantification in plant phenotyping and broader biomedical imaging domains.
(This article belongs to the Section Intelligent Sensors)
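As a classical point of comparison for the measurement GLEN automates, the sketch below estimates a radicle's length from a binary mask by skeletonizing it and summing inter-pixel steps (diagonals count as sqrt(2)). This is a hypothetical baseline, not the paper's dual-path network.

```python
# Hedged sketch: curve length of a radicle mask via skeletonization.
import numpy as np
from skimage.morphology import skeletonize

def radicle_length_px(mask):
    """Approximate path length (in pixels) of the mask's medial skeleton."""
    skel = skeletonize(mask.astype(bool))
    ys, xs = np.nonzero(skel)
    pts = set(zip(ys.tolist(), xs.tolist()))
    straight = diagonal = 0
    for y, x in pts:                               # count each 8-neighbour link once
        if (y, x + 1) in pts: straight += 1
        if (y + 1, x) in pts: straight += 1
        if (y + 1, x + 1) in pts: diagonal += 1
        if (y + 1, x - 1) in pts: diagonal += 1
    return straight + diagonal * np.sqrt(2.0)

# Multiply by the mm-per-pixel calibration factor for a physical length.
```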

45 pages, 5794 KiB  
Review
Nanophotonic Materials and Devices: Recent Advances and Emerging Applications
by Yuan-Fong Chou Chau
Micromachines 2025, 16(8), 933; https://doi.org/10.3390/mi16080933 - 13 Aug 2025
Abstract
Nanophotonics, the study of light–matter interactions at the nanometer scale, has emerged as a transformative field that bridges photonics and nanotechnology. Using engineered nanomaterials—including plasmonic metals, high-index dielectrics, two-dimensional (2D) materials, and hybrid systems—nanophotonics enables light manipulation beyond the diffraction limit, unlocking novel applications in sensing, imaging, and quantum technologies. This review provides a comprehensive overview of recent advances (post-2020) in nanophotonic materials, fabrication methods, and their cutting-edge applications. We first discuss the fundamental principles governing nanophotonic phenomena, such as localized surface plasmon resonances (LSPRs), Mie resonances, and exciton–polariton coupling, highlighting their roles in enhancing light–matter interactions. Next, we examine state-of-the-art fabrication techniques, including top-down (e.g., electron beam lithography and nanoimprinting) and bottom-up (e.g., chemical vapor deposition and colloidal synthesis) approaches, as well as hybrid strategies that combine scalability with nanoscale precision. We then explore emerging applications across diverse domains: quantum photonics (single-photon sources, entangled light generation), biosensing (ultrasensitive detection of viruses and biomarkers), nonlinear optics (high-harmonic generation and wave mixing), and integrated photonic circuits. Special attention is given to active and tunable nanophotonic systems, such as reconfigurable metasurfaces and hybrid graphene–dielectric devices. Despite rapid progress, challenges remain, including optical losses, thermal management, and scalable integration. We conclude by outlining future directions, such as machine learning-assisted design, programmable photonics, and quantum-enhanced sensing, and offering insights into the next generation of nanophotonic technologies. This review serves as a timely resource for researchers in photonics, materials science, and nanotechnology.

25 pages, 28917 KiB  
Article
Synthetic Data-Driven Methods to Accelerate the Deployment of Deep Learning Models: A Case Study on Pest and Disease Detection in Precision Viticulture
by Telmo Adão, Agnieszka Chojka, David Pascoal, Nuno Silva, Raul Morais and Emanuel Peres
Computers 2025, 14(8), 327; https://doi.org/10.3390/computers14080327 - 13 Aug 2025
Abstract
The development of reliable visual inference models is often constrained by the burdensome and time-consuming processes involved in collecting and annotating high-quality datasets. This challenge becomes more acute in domains where key phenomena are time-dependent or event-driven, narrowing the opportunity window to capture representative observations. Yet, accelerating the deployment of deep learning (DL) models is crucial to support timely, data-driven decision-making in operational settings. To tackle this issue, this paper explores the use of 2D synthetic data grounded in real-world patterns to train initial DL models in contexts where annotated datasets are scarce or can only be acquired within restrictive time windows. Two complementary approaches to synthetic data generation are investigated: rule-based digital image processing and advanced text-to-image generative diffusion models. These methods can operate independently or be combined to enhance flexibility and coverage. A proof-of-concept is presented through two case studies in precision viticulture, a domain often constrained by seasonal dependencies and environmental variability. Specifically, the detection of Lobesia botrana in sticky traps and the classification of grapevine foliar symptoms associated with black rot, ESCA, and leaf blight are addressed. The results suggest that the proposed approach can accelerate the deployment of preliminary DL models by comprehensively automating the production of context-aware datasets inspired by specific challenge-driven operational settings, thereby mitigating the need for time-consuming and labor-intensive processes, from image acquisition to annotation. Although models trained on such synthetic datasets require further refinement—for example, through active learning—the approach offers a scalable and functional solution that reduces human involvement, even in scenarios of data scarcity, and supports the effective transition of laboratory-developed AI to real-world deployment environments.
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
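The rule-based branch of such a synthetic-data pipeline can be sketched as simple compositing: paste a patch (e.g., an insect crop) onto a clean background at a random pose and emit the bounding-box label. File names, canvas size, and colors below are placeholders, not the paper's assets.

```python
# Hedged sketch: rule-based synthetic image + bounding-box label generation.
import random
from PIL import Image

def composite(background, patch):
    """Paste a rotated patch at a random position; return image and bbox."""
    patch = patch.rotate(random.uniform(0, 360), expand=True)
    x = random.randint(0, background.width - patch.width)
    y = random.randint(0, background.height - patch.height)
    background.paste(patch, (x, y), patch)         # patch alpha channel as mask
    return background, (x, y, x + patch.width, y + patch.height)

bg = Image.new("RGBA", (640, 640), "lightyellow")        # stand-in sticky trap
moth = Image.open("lobesia_patch.png").convert("RGBA")   # placeholder asset
img, bbox = composite(bg, moth)
```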

22 pages, 17156 KiB  
Article
Adaptive Clustering-Guided Multi-Scale Integration for Traffic Density Estimation in Remote Sensing Images
by Xin Liu, Qiao Meng, Xiangqing Zhang, Xinli Li and Shihao Li
Remote Sens. 2025, 17(16), 2796; https://doi.org/10.3390/rs17162796 - 12 Aug 2025
Abstract
Grading and providing early warning of traffic congestion density is crucial for the timely coordination and optimization of traffic management. However, current traffic density detection methods primarily rely on historical traffic flow data, resulting in ambiguous thresholds for congestion classification. To overcome these challenges, this paper proposes a traffic density grading algorithm for remote sensing images that integrates adaptive clustering and multi-scale fusion. A dynamic neighborhood radius adjustment mechanism guided by spatial distribution characteristics is introduced to ensure consistency between the density clustering parameter space and the decision domain for image cropping, thereby addressing the issues of large errors and low efficiency in existing cropping techniques. Furthermore, a hierarchical detection framework is developed by incorporating a dynamic background suppression strategy to fuse multi-scale spatiotemporal features, thereby enhancing the detection accuracy of small objects in remote sensing imagery. Additionally, we propose a novel method that combines density analysis with pixel-level gradient quantification to construct a traffic state evaluation model featuring a dual optimization strategy. This enables precise detection and grading of traffic congestion areas while maintaining low computational overhead. Experimental results demonstrate that the proposed approach achieves average precision (AP) scores of 32.6% on the VisDrone dataset and 16.2% on the UAVDT dataset.
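A minimal sketch of the adaptive-radius idea: derive the density-clustering neighborhood radius from the data's own spatial distribution (here, the mean k-th nearest-neighbor distance) rather than a fixed constant. This follows the abstract in spirit; the paper's exact adjustment rule is not reproduced.

```python
# Hedged sketch: data-driven DBSCAN radius from k-nearest-neighbor distances.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def adaptive_dbscan(points, k=4):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    dists, _ = nn.kneighbors(points)               # column 0 is the point itself
    eps = float(dists[:, -1].mean())               # adaptive neighborhood radius
    return DBSCAN(eps=eps, min_samples=k).fit_predict(points)

rng = np.random.default_rng(2)
centroids = rng.random((200, 2)) * 1000            # stand-in vehicle positions
labels = adaptive_dbscan(centroids)                # -1 marks sparse/noise points
```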

24 pages, 14557 KiB  
Article
A Tailored Deep Learning Network with Embedded Space Physical Knowledge for Auroral Substorm Recognition: Validation Through Special Case Studies
by Yiyuan Han, Bing Han and Zejun Hu
Universe 2025, 11(8), 265; https://doi.org/10.3390/universe11080265 - 12 Aug 2025
Abstract
The dynamic morphological characteristics of the auroral oval serve as critical diagnostic indicators for auroral substorm recognition, with each pixel in ultraviolet imager (UVI) data carrying different physical implications. Existing deep learning approaches often overlook the physical properties of auroral images by directly transplanting generic models into space physics applications without adaptation. In this study, we propose a visual–physical interactive deep learning model specifically designed and optimized for accurate auroral substorm recognition. The model leverages the significant variation in auroral morphology across different substorm phases to guide feature extraction. It integrates magnetospheric domain knowledge from space physics through magnetic local time (MLT) and magnetic latitude (MLAT) embeddings and incorporates cognitive features derived from expert eye-tracking data to enhance spatial attention. Experimental results on substorm sequence recognition demonstrate satisfactory performance, achieving an accuracy of 92.64%, precision of 90.29%, recall of 93%, and F1-score of 91.63%. Furthermore, several case studies are presented to illustrate how both visual and physical characteristics contribute to model performance, offering further insight into the spatiotemporal complexity of auroral substorm recognition.
(This article belongs to the Special Issue Universe: Feature Papers 2025—Space Science)
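One plausible reading of the MLT/MLAT embedding is sketched below: MLT is cyclic over 24 h, so it is encoded with sine and cosine, while MLAT is rescaled. The normalization constants are assumptions; the paper's embedding layers are not specified in the abstract.

```python
# Hedged sketch: cyclic encoding of magnetic local time plus scaled latitude.
import numpy as np

def magnetic_embedding(mlt_hours, mlat_deg):
    """Return [sin(MLT), cos(MLT), scaled MLAT] per pixel or keypoint."""
    theta = 2 * np.pi * np.asarray(mlt_hours) / 24.0   # wrap MLT onto the circle
    mlat = (np.abs(np.asarray(mlat_deg)) - 50.0) / 40.0  # auroral oval ~50-90 deg
    return np.stack([np.sin(theta), np.cos(theta), mlat], axis=-1)
```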

21 pages, 17026 KiB  
Article
Multi-Scale Time-Frequency Representation Fusion Network for Target Recognition in SAR Imagery
by Huiping Lin, Zixuan Xie, Liang Zeng and Junjun Yin
Remote Sens. 2025, 17(16), 2786; https://doi.org/10.3390/rs17162786 - 11 Aug 2025
Abstract
This paper proposes a multi-scale time-frequency representation fusion network (MTRFN) for target recognition in synthetic aperture radar (SAR) imagery. Leveraging the spectral characteristics of six radar sub-views, the model incorporates a multi-scale representation fusion (MRF) module to extract discriminative frequency-domain features from two types of radar sub-views with high learnability. Additionally, physical scattering characteristics in SAR images are captured via time-frequency domain analysis. To enhance feature integration, a gated fusion network performs adaptive feature concatenation. The MRF module integrates a lightweight residual block to reduce network complexity and employs a coordinate attention mechanism to prioritize salient targets in the frequency spectrum over background noise, aligning the model's focus with physical scattering principles. Furthermore, the model introduces an angular additive margin loss function during classification to enhance intra-class compactness and inter-class separability while reducing computational overhead. Compared with existing interpretable methods, the proposed approach combines architectural transparency with physical interpretability, thereby lowering the risk of recognition errors. Extensive experiments conducted on four public datasets demonstrate that the proposed MTRFN significantly outperforms existing benchmark methods. Comparative experiments using heat maps further confirm that the proposed physical feature-guided module effectively directs the model's attention toward the target rather than the background.
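The angular additive margin loss the abstract names is well known from metric learning (ArcFace-style); a sketch under typical defaults follows. The scale s and margin m are assumed values, not the paper's.

```python
# Hedged sketch: additive angular margin loss on normalized features.
import torch
import torch.nn.functional as F

def angular_margin_loss(features, weights, labels, s=30.0, m=0.35):
    """features: (N, D); weights: (C, D) class prototypes; labels: (N,)."""
    cos = F.normalize(features) @ F.normalize(weights).t()    # cosine logits (N, C)
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, weights.shape[0]).bool()
    logits = torch.where(target, torch.cos(theta + m), cos)   # margin on target class
    return F.cross_entropy(s * logits, labels)
```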

28 pages, 24868 KiB  
Article
Deep Meta-Connectivity Representation for Optically-Active Water Quality Parameters Estimation Through Remote Sensing
by Fangling Pu, Ziang Luo, Yiming Yang, Hongjia Chen, Yue Dai and Xin Xu
Remote Sens. 2025, 17(16), 2782; https://doi.org/10.3390/rs17162782 - 11 Aug 2025
Abstract
Monitoring optically-active water quality (OAWQ) parameters faces key challenges, primarily due to limited in situ measurements and the restricted availability of high-resolution multispectral remote sensing imagery. While deep learning has shown promise for OAWQ estimation, existing approaches such as GeoTile2Vec, which relies on geographic proximity, and SimCLR, a domain-agnostic contrastive learning method, fail to capture land cover-driven water quality patterns, limiting their generalizability. To address this, we present deep meta-connectivity representation (DMCR), which integrates multispectral remote sensing imagery with limited in situ measurements to estimate OAWQ parameters. Our approach constructs meta-feature vectors from land cover images to represent the water quality characteristics of each multispectral remote sensing image tile. We introduce the meta-connectivity concept to quantify the OAWQ similarity between different tiles. Building on this concept, we design a contrastive self-supervised learning framework that uses sets of quadruple tiles extracted from Sentinel-2 imagery based on their meta-connectivity to learn DMCR vectors. After the core neural network is trained, we apply a random forest model to estimate parameters such as chlorophyll-a (Chl-a) and turbidity using matched in situ measurements and DMCR vectors across time and space. We evaluate DMCR on Lake Erie and Lake Ontario, generating a series of Chl-a and turbidity distribution maps. Performance is assessed using the R2 and RMSE metrics. Results show that meta-connectivity captures water quality similarities between tiles more effectively than widely used geographic proximity approaches such as those in GeoTile2Vec. Furthermore, DMCR outperforms baseline models such as SimCLR with randomly cropped tiles. The resulting distribution maps align well with known factors influencing Chl-a and turbidity levels, confirming the method's reliability. Overall, DMCR demonstrates strong potential for large-scale OAWQ estimation and contributes to improved monitoring of inland water bodies with limited in situ measurements through meta-connectivity-informed deep learning. The temporal-spatial water quality maps can support large-scale inland water monitoring and early warning of harmful algal blooms.
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
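A hedged sketch of the meta-connectivity idea: summarize each tile by the class-frequency histogram of its co-located land-cover map and score tile pairs with cosine similarity. The paper's exact meta-feature construction may differ.

```python
# Hedged sketch: land-cover histogram meta-features + cosine "meta-connectivity".
import numpy as np

def meta_feature(landcover_tile, n_classes):
    """Class-frequency vector of a tile's integer-labeled land-cover map."""
    hist = np.bincount(landcover_tile.ravel(), minlength=n_classes)
    return hist / hist.sum()

def meta_connectivity(f1, f2):
    """Cosine similarity between two meta-feature vectors."""
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))
```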

14 pages, 31941 KiB  
Article
PriKMet: Prior-Guided Pointer Meter Reading for Automated Substation Inspections
by Haidong Chu, Jun Feng, Yidan Wang, Weizhen He, Yunfeng Yan and Donglian Qi
Electronics 2025, 14(16), 3194; https://doi.org/10.3390/electronics14163194 - 11 Aug 2025
Abstract
Despite the rapid advancement of smart-grid technologies, automated pointer meter reading in power substations remains a persistent challenge due to complex electromagnetic interference and dynamic field conditions. Traditional computer vision methods, typically designed for ideal imaging environments, exhibit limited robustness against real-world perturbations such as illumination fluctuations, partial occlusions, and motion artifacts. To address this gap, we propose PriKMet (Prior-Guided Pointer Meter Reader), a novel meter reading algorithm that integrates deep learning with domain-specific priors through three key contributions: (1) a unified hierarchical framework for joint meter detection and keypoint localization, (2) an intelligent meter reading method that fuses predefined inspection route information with perception results, and (3) an adaptive offset correction mechanism for UAV-based inspections. Extensive experiments on a comprehensive dataset of 3237 substation meter images demonstrate the superior performance of PriKMet, which achieves state-of-the-art results: 99.4% AP50 for meter detection and 85.5% meter reading accuracy. The method's real-time processing capability offers a practical solution for modernizing power infrastructure monitoring, effectively reducing reliance on manual inspections in complex operational environments while enhancing the intelligence of power maintenance operations.
(This article belongs to the Special Issue Advances in Condition Monitoring and Fault Diagnosis)
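The final geometric step of pointer-meter reading reduces to mapping the pointer angle linearly onto the scale range; a sketch follows, with keypoint detection (PriKMet's actual contribution) assumed already done and the scale limits as example values.

```python
# Hedged sketch: keypoints -> dial angle -> scale value (image coordinates).
import math

def angle(center, point):
    return math.atan2(point[1] - center[1], point[0] - center[0])

def meter_reading(center, zero_kp, full_kp, tip_kp, v_min=0.0, v_max=10.0):
    """Linear map of pointer angle onto [v_min, v_max] along the dial arc."""
    a0, a1, at = (angle(center, p) for p in (zero_kp, full_kp, tip_kp))
    sweep = (a1 - a0) % (2 * math.pi)              # arc from zero to full scale
    frac = ((at - a0) % (2 * math.pi)) / sweep     # pointer position on that arc
    return v_min + frac * (v_max - v_min)
```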

34 pages, 1448 KiB  
Article
High-Fidelity Image Transmission in Quantum Communication with Frequency Domain Multi-Qubit Techniques
by Udara Jayasinghe, Thanuj Fernando and Anil Fernando
Algorithms 2025, 18(8), 501; https://doi.org/10.3390/a18080501 - 11 Aug 2025
Abstract
This paper proposes a novel quantum image transmission framework to address the limitations of existing single-qubit time domain systems, which struggle with noise resilience and scalability. The framework integrates frequency domain processing with multi-qubit (1 to 8 qubits) encoding to enhance robustness against quantum noise. Initially, images are source-coded using JPEG and HEIF formats with rate adjustment to ensure consistent bandwidth usage. The resulting bitstreams are channel-encoded and mapped to multi-qubit quantum states. These states are transformed into the frequency domain via the quantum Fourier transform (QFT) for transmission. At the receiver, the inverse QFT recovers the time domain states, followed by multi-qubit decoding, channel decoding, and source decoding to reconstruct the image. Performance is evaluated using bit error rate (BER), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and universal quality index (UQI). Results show that increasing the number of qubits enhances image quality and noise robustness, albeit at the cost of increased system complexity. Compared to time domain processing, the frequency domain approach achieves superior performance across all qubit configurations, with the eight-qubit system delivering up to a 4 dB maximum channel SNR gain for both JPEG and HEIF images. Although single-qubit systems benefit less from frequency domain encoding due to limited representational capacity, the overall framework demonstrates strong potential for scalable and noise-robust quantum image transmission in future quantum communication networks.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
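The QFT step can be checked numerically: on an n-qubit register, the QFT is the unitary DFT with kernel exp(+2*pi*i*j*k/N)/sqrt(N), which coincides with numpy's inverse FFT under the "ortho" normalization. The sketch below applies it to a classical amplitude vector; quantum noise and the multi-qubit encoding are out of scope here.

```python
# Hedged sketch: QFT / inverse QFT on an n-qubit state vector via FFT.
import numpy as np

n_qubits = 3
N = 2 ** n_qubits
state = np.zeros(N, dtype=complex)
state[5] = 1.0                                     # basis state |101>

freq_state = np.fft.ifft(state, norm="ortho")      # QFT (kernel e^{+2*pi*i*jk/N}/sqrt(N))
back = np.fft.fft(freq_state, norm="ortho")        # inverse QFT at the receiver
assert np.allclose(back, state)                    # round trip recovers the state
print(np.round(np.abs(freq_state) ** 2, 3))        # uniform probabilities, 1/N each
```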

36 pages, 13404 KiB  
Article
A Multi-Task Deep Learning Framework for Road Quality Analysis with Scene Mapping via Sim-to-Real Adaptation
by Rahul Soans, Ryuichi Masuda and Yohei Fukumizu
Appl. Sci. 2025, 15(16), 8849; https://doi.org/10.3390/app15168849 - 11 Aug 2025
Abstract
Robust perception of road surface conditions is a critical challenge for the safe deployment of autonomous vehicles and the efficient management of transportation infrastructure. This paper introduces a synthetic data-driven deep learning framework designed to address this challenge. We present a large-scale, procedurally generated 3D synthetic dataset created in Blender, featuring a diverse range of road defects—including cracks, potholes, and puddles—alongside crucial road features like manhole covers and patches. Crucially, our dataset provides dense, pixel-perfect annotations for segmentation masks, depth maps, and camera parameters (intrinsic and extrinsic). Our proposed model leverages these rich annotations in a multi-task learning framework that jointly performs road defect segmentation and depth estimation, enabling a comprehensive geometric and semantic understanding of the road environment. A core contribution is a two-stage domain adaptation strategy to bridge the synthetic-to-real gap. First, we employ a modified CycleGAN with a segmentation-aware loss to translate synthetic images into a realistic domain while preserving defect fidelity. Second, during model training, we utilize a dual-discriminator adversarial approach, applying alignment at both the feature and output levels to minimize domain shift. Benchmarking experiments validate our approach, demonstrating high accuracy and computational efficiency. Our model excels in detecting subtle or occluded defects, attributed to an occlusion-aware loss formulation. The proposed system shows significant promise for real-time deployment in autonomous navigation, automated infrastructure assessment, and Advanced Driver-Assistance Systems (ADAS).
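The joint objective can be sketched as a weighted sum of a per-pixel segmentation term and a depth-regression term; the cross-entropy/L1 pairing and the weight are common choices assumed here, and the paper's occlusion-aware formulation is not reproduced.

```python
# Hedged sketch: joint segmentation + depth multi-task loss.
import torch
import torch.nn.functional as F

def multitask_loss(seg_logits, seg_target, depth_pred, depth_target, w_depth=0.5):
    """seg_logits: (N, C, H, W); seg_target: (N, H, W) class indices;
    depth_pred/depth_target: (N, 1, H, W) dense depth maps."""
    seg_loss = F.cross_entropy(seg_logits, seg_target)    # per-pixel classification
    depth_loss = F.l1_loss(depth_pred, depth_target)      # dense depth regression
    return seg_loss + w_depth * depth_loss
```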

48 pages, 15203 KiB  
Article
MRBMO: An Enhanced Red-Billed Blue Magpie Optimization Algorithm for Solving Numerical Optimization Challenges
by Baili Lu, Zhanxi Xie, Junhao Wei, Yanzhao Gu, Yuzheng Yan, Zikun Li, Shirou Pan, Ngai Cheong, Ying Chen and Ruishen Zhou
Symmetry 2025, 17(8), 1295; https://doi.org/10.3390/sym17081295 - 11 Aug 2025
Abstract
To address the limitations of the Red-billed Blue Magpie Optimization algorithm (RBMO), such as its tendency to become trapped in local optima and its slow convergence rate, an enhanced version called MRBMO was proposed. MRBMO integrates Good Nodes Set Initialization, an Enhanced Search-for-food Strategy, a newly designed Siege-style Attacking-prey Strategy, and Lens-Imaging Opposition-Based Learning (LIOBL). The experimental results showed that MRBMO is highly competitive on the CEC2005 benchmark. Among a series of advanced metaheuristic algorithms, MRBMO exhibited significant advantages in convergence speed and solution accuracy. On benchmark functions with 30, 50, and 100 dimensions, the average Friedman values of MRBMO were 1.6029, 1.6601, and 1.8775, respectively, significantly outperforming the other algorithms. The overall effectiveness of MRBMO on benchmark functions with 30, 50, and 100 dimensions was 95.65%, confirming its effectiveness on problems of different dimensionality. Two types of simulation experiments were designed to test the practicability of MRBMO. First, MRBMO and other heuristic algorithms were used to solve four engineering design optimization problems, verifying the applicability of MRBMO to engineering design optimization. Then, to overcome the shortcomings of metaheuristic algorithms in antenna S-parameter optimization problems—such as time-consuming verification processes, cumbersome operations, and complex modes—this paper adopted a test suite specifically designed for antenna S-parameter optimization, with the goal of efficiently validating the effectiveness of metaheuristic algorithms in this domain. The results demonstrated that MRBMO has significant advantages in both engineering design optimization and antenna S-parameter optimization.
(This article belongs to the Section Engineering and Materials)
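Of the four ingredients, LIOBL has a standard closed form: each candidate x in bounds [a, b] is mapped through a convex-lens analogy to x* = (a + b)/2 + (a + b)/(2k) - x/k, and the better of x and x* is kept. The sketch below assumes minimization and an illustrative k; with k = 1 the rule reduces to classical opposition, x* = a + b - x.

```python
# Hedged sketch: lens-imaging opposition-based learning (LIOBL) step.
import numpy as np

def liobl(x, a, b, k=1.4):
    """Lens-imaging opposite of candidate x within bounds [a, b]."""
    x_opp = (a + b) / 2 + (a + b) / (2 * k) - x / k
    return np.clip(x_opp, a, b)                    # keep the image inside bounds

def keep_better(x, x_opp, fitness):
    """Greedy selection between a candidate and its opposite (minimization)."""
    return x_opp if fitness(x_opp) < fitness(x) else x
```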
