Search Results (65)

Search Parameters:
Keywords = sliding window matching

20 pages, 3710 KiB  
Article
An Accurate LiDAR-Inertial SLAM Based on Multi-Category Feature Extraction and Matching
by Nuo Li, Yiqing Yao, Xiaosu Xu, Shuai Zhou and Taihong Yang
Remote Sens. 2025, 17(14), 2425; https://doi.org/10.3390/rs17142425 - 12 Jul 2025
Viewed by 192
Abstract
Light Detection and Ranging (LiDAR)-inertial simultaneous localization and mapping (SLAM) is a critical component in multi-sensor autonomous navigation systems, providing both accurate pose estimation and detailed environmental understanding. Despite its importance, existing optimization-based LiDAR-inertial SLAM methods often face key limitations: unreliable feature extraction, sensitivity to noise and sparsity, and the inclusion of redundant or low-quality feature correspondences. These weaknesses hinder their performance in complex or dynamic environments and fail to meet the reliability requirements of autonomous systems. To overcome these challenges, we propose a novel and accurate LiDAR-inertial SLAM framework with three major contributions. First, we employ a robust multi-category feature extraction method based on principal component analysis (PCA), which effectively filters out noisy and weakly structured points, ensuring stable feature representation. Second, to suppress outlier correspondences and enhance pose estimation reliability, we introduce a coarse-to-fine two-stage feature correspondence selection strategy that evaluates geometric consistency and structural contribution. Third, we develop an adaptive weighted pose estimation scheme that considers both distance and directional consistency, improving the robustness of feature matching under varying scene conditions. These components are jointly optimized within a sliding-window-based factor graph, integrating LiDAR feature factors, IMU pre-integration, and loop closure constraints. Extensive experiments on public datasets (KITTI, M2DGR) and a custom-collected dataset validate the proposed method’s effectiveness. Results show that our system consistently outperforms state-of-the-art approaches in accuracy and robustness, particularly in scenes with sparse structure, motion distortion, and dynamic interference, demonstrating its suitability for reliable real-world deployment. Full article
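The PCA-based multi-category feature extraction described in this abstract follows a well-known pattern: classify each LiDAR point's local neighborhood by the eigenvalue spectrum of its covariance matrix, keeping only clearly planar or edge-like points. A minimal illustrative sketch — the function name and thresholds are hypothetical, not the authors' implementation:

```python
import numpy as np

def classify_neighborhood(points, planar_thresh=0.1, edge_thresh=10.0):
    """Classify a local point neighborhood as 'planar', 'edge', or 'noisy'
    from the eigenvalues of its covariance matrix (PCA).
    `points` is an (N, 3) array of neighbors of a query point."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))  # ascending: l1 <= l2 <= l3
    if l1 / (l3 + 1e-12) < planar_thresh and l2 / (l3 + 1e-12) > planar_thresh:
        return "planar"   # two dominant directions -> locally flat surface
    if l3 / (l2 + 1e-12) > edge_thresh:
        return "edge"     # one dominant direction -> linear structure
    return "noisy"        # no clear structure; discard before matching
```

Points labeled "noisy" would simply be excluded from correspondence search, which is the filtering role the abstract attributes to this step.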
(This article belongs to the Special Issue LiDAR Technology for Autonomous Navigation and Mapping)

20 pages, 547 KiB  
Article
Fine-Grained Semantics-Enhanced Graph Neural Network Model for Person-Job Fit
by Xia Xue, Jingwen Wang, Bo Ma, Jing Ren, Wujie Zhang, Shuling Gao, Miao Tian, Yue Chang, Chunhong Wang and Hongyu Wang
Entropy 2025, 27(7), 703; https://doi.org/10.3390/e27070703 - 30 Jun 2025
Viewed by 290
Abstract
Online recruitment platforms are transforming talent acquisition paradigms, where a precise person-job fit plays a pivotal role in intelligent recruitment systems. However, current methodologies predominantly rely on coarse-grained semantic analysis, failing to address the textual structural dependencies and noise inherent in resumes and job descriptions. To bridge this gap, the novel fine-grained semantics-enhanced graph neural network for person-job fit (FSEGNN-PJF) framework is proposed. First, graph topologies are constructed by modeling word co-occurrence relationships through pointwise mutual information and sliding windows, followed by graph attention networks to learn graph structural semantics. Second, to mitigate textual noise and focus on critical features, a differential transformer and self-attention mechanism are introduced to semantically encode resumes and job requirements. Then, a novel fine-grained semantic matching strategy is designed, using the enhanced feature fusion strategy to fuse the semantic features of resumes and job positions. Extensive experiments on real-world recruitment datasets demonstrate the effectiveness and robustness of FSEGNN-PJF. Full article
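The graph-construction step this abstract describes — word co-occurrence over sliding windows scored by pointwise mutual information (PMI) — can be sketched as follows. This is a simplified single-document version; `pmi_edges` and its defaults are illustrative, not the FSEGNN-PJF code:

```python
from collections import Counter
from itertools import combinations
import math

def pmi_edges(tokens, window=3, threshold=0.0):
    """Build word-word edges weighted by pointwise mutual information,
    computed over sliding windows of `window` tokens. Edges with
    PMI <= threshold are dropped, as in TextGCN-style graphs."""
    windows = [tokens[i:i + window] for i in range(max(1, len(tokens) - window + 1))]
    n = len(windows)
    word_count, pair_count = Counter(), Counter()
    for w in windows:
        uniq = set(w)
        word_count.update(uniq)                       # windows containing each word
        pair_count.update(frozenset(p) for p in combinations(sorted(uniq), 2))
    edges = {}
    for pair, c in pair_count.items():
        a, b = sorted(pair)
        pmi = math.log(c * n / (word_count[a] * word_count[b]))
        if pmi > threshold:
            edges[(a, b)] = pmi
    return edges
```

The resulting weighted edges would feed a graph attention network in the paper's pipeline.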
(This article belongs to the Section Multidisciplinary Applications)

25 pages, 10128 KiB  
Article
Jitter Error Correction for the HaiYang-3A Satellite Based on Multi-Source Attitude Fusion
by Yanli Wang, Ronghao Zhang, Yizhang Xu, Xiangyu Zhang, Rongfan Dai and Shuying Jin
Remote Sens. 2025, 17(9), 1489; https://doi.org/10.3390/rs17091489 - 23 Apr 2025
Viewed by 426
Abstract
The periodic rotation of the Ocean Color and Temperature Scanner (OCTS) introduces jitter errors in the HaiYang-3A (HY-3A) satellite, leading to internal geometric distortion in optical imagery and significant registration errors in multispectral images. These issues severely influence the application value of the optical data. To achieve near real-time compensation, a novel jitter error estimation and correction method based on multi-source attitude data fusion is proposed in this paper. By fusing the measurement data from star sensors and gyroscopes, satellite attitude parameters containing jitter errors are precisely resolved. The jitter component of the attitude parameter is extracted using the fitting method with the optimal sliding window. Then, the jitter error model is established using the least square solution and spectral characteristics. Subsequently, using the imaging geometric model and stable resampling, the optical remote sensing image with jitter distortion is corrected. Experimental results reveal a jitter frequency of 0.187 Hz, matching the OCTS rotation period, with yaw, roll, and pitch amplitudes quantified as 0.905”, 0.468”, and 1.668”, respectively. The registration accuracy of the multispectral images from the Coastal Zone Imager improved from 0.568 to 0.350 pixels. The time complexity is low with the single-layer linear traversal structure. The proposed method can achieve on-orbit near real-time processing and provide accurate attitude parameters for on-orbit geometric processing of optical satellite image data. Full article
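The core idea — isolate the periodic jitter component with a sliding-window fit, then read the jitter frequency off the residual's spectrum — can be illustrated with a toy version. The window size, polynomial order, and FFT-based peak search here are assumptions for illustration, not the authors' optimal-window method:

```python
import numpy as np

def extract_jitter(t, attitude, win=101, order=2):
    """Remove the slowly varying attitude trend with a sliding-window
    polynomial fit, leaving the periodic jitter component, then estimate
    the dominant jitter frequency from the residual's spectrum.
    `t` must be uniformly sampled."""
    half = win // 2
    trend = np.empty_like(attitude)
    for i in range(len(attitude)):
        lo, hi = max(0, i - half), min(len(attitude), i + half + 1)
        coeffs = np.polyfit(t[lo:hi], attitude[lo:hi], order)
        trend[i] = np.polyval(coeffs, t[i])
    jitter = attitude - trend
    spectrum = np.abs(np.fft.rfft(jitter))
    freqs = np.fft.rfftfreq(len(jitter), t[1] - t[0])
    f_peak = freqs[1:][np.argmax(spectrum[1:])]  # skip the DC bin
    return jitter, f_peak
```

In the paper this frequency estimate (0.187 Hz) is then turned into a parametric jitter model used to resample the distorted imagery.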
(This article belongs to the Special Issue Near Real-Time Remote Sensing Data and Its Geoscience Applications)

25 pages, 12729 KiB  
Article
A Robust InSAR-DEM Block Adjustment Method Based on Affine and Polynomial Models for Geometric Distortion
by Zhonghua Hong, Ziyuan He, Haiyan Pan, Zhihao Tang, Ruyan Zhou, Yun Zhang, Yanling Han and Jiang Tao
Remote Sens. 2025, 17(8), 1346; https://doi.org/10.3390/rs17081346 - 10 Apr 2025
Viewed by 429
Abstract
DEMs derived from Interferometric Synthetic Aperture Radar (InSAR) imagery are frequently influenced by multiple factors, resulting in systematic horizontal and elevation inaccuracies that affect their applicability in large-scale scenarios. To mitigate this problem, this study employs affine models and polynomial function models to refine the relative planar precision and elevation accuracy of the DEM. To acquire high-quality control data for the adjustment model, this study introduces a DEM feature matching method that maintains invariance to geometric distortions, utilizing filtered ICESat-2 ATL08 data as elevation control to enhance accuracy. We first validate the effectiveness and features of the proposed InSAR-DEM matching algorithm and select 45 ALOS high-resolution DEM scenes with different terrain features for large-scale DEM block adjustment experiments. Additionally, we select additional Sentinel-1 and Copernicus DEM data to verify the reliability of multi-source DEM matching and adjustment. The experimental results indicate that elevation errors across different study areas were reduced by approximately 50% to 5%, while the relative planar accuracy improved by around 93% to 17%. The TPs extraction method for InSAR-DEM proposed in this paper is more accurate at the sub-pixel level compared to traditional sliding window matching methods and is more robust in the case of non-uniform geometric deformations. Full article

25 pages, 974 KiB  
Article
Thompson Sampling for Non-Stationary Bandit Problems
by Han Qi, Fei Guo and Li Zhu
Entropy 2025, 27(1), 51; https://doi.org/10.3390/e27010051 - 9 Jan 2025
Viewed by 1513
Abstract
Non-stationary multi-armed bandit (MAB) problems have recently attracted extensive attention. We focus on the abruptly changing scenario where reward distributions remain constant for a certain period and change at unknown time steps. Although Thompson sampling (TS) has shown success in non-stationary settings, there is currently no regret bound analysis for TS with uninformative priors. To address this, we propose two algorithms, discounted TS and sliding-window TS, designed for sub-Gaussian reward distributions. For these algorithms, we establish an upper bound for the expected regret by bounding the expected number of times a suboptimal arm is played. We show that the regret upper bounds of both algorithms are Õ(√(T·B_T)), where T is the time horizon and B_T is the number of breakpoints. This upper bound matches the lower bound for abruptly changing problems up to a logarithmic factor. Empirical comparisons with other non-stationary bandit algorithms highlight the competitive performance of our proposed methods. Full article
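A sliding-window Thompson sampling scheme of the kind analyzed here can be sketched for Gaussian rewards: posteriors are built only from the most recent plays, so pre-changepoint rewards are forgotten. The class name, flat-prior posterior, and parameters are illustrative, not the paper's exact algorithm:

```python
import random
from collections import deque

class SlidingWindowTS:
    """Sliding-window Thompson sampling: each arm's Gaussian posterior
    uses only the rewards inside the last `window` plays overall, so
    the policy re-adapts after an abrupt change in reward means."""
    def __init__(self, n_arms, window=200, sigma=1.0):
        self.history = deque(maxlen=window)   # (arm, reward) pairs
        self.n_arms, self.sigma = n_arms, sigma

    def select_arm(self):
        best, best_sample = 0, float("-inf")
        for a in range(self.n_arms):
            rewards = [r for arm, r in self.history if arm == a]
            n = len(rewards)
            mean = sum(rewards) / n if n else 0.0
            # posterior sample under a flat (uninformative) prior
            sample = random.gauss(mean, self.sigma / (n + 1) ** 0.5)
            if sample > best_sample:
                best, best_sample = a, sample
        return best

    def update(self, arm, reward):
        self.history.append((arm, reward))
```

After a breakpoint, the stale rewards leave the deque within one window's worth of plays, which is what drives the Õ(√(T·B_T))-style adaptivity the abstract refers to.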
(This article belongs to the Section Information Theory, Probability and Statistics)

12 pages, 693 KiB  
Article
Haralick Texture Analysis for Differentiating Suspicious Prostate Lesions from Normal Tissue in Low-Field MRI
by Dang Bich Thuy Le, Ram Narayanan, Meredith Sadinski, Aleksandar Nacev, Yuling Yan and Srirama S. Venkataraman
Bioengineering 2025, 12(1), 47; https://doi.org/10.3390/bioengineering12010047 - 9 Jan 2025
Viewed by 904
Abstract
This study evaluates the feasibility of using Haralick texture analysis on low-field, T2-weighted MRI images for detecting prostate cancer, extending current research from high-field MRI to the more accessible and cost-effective low-field MRI. A total of twenty-one patients with biopsy-proven prostate cancer (Gleason score 4+3 or higher) were included. Before transperineal biopsy guided by low-field (58–74 mT) MRI, a radiologist annotated suspicious regions of interest (ROIs) on high-field (3T) MRI. Rigid image registration was performed to align corresponding regions on both high- and low-field images, ensuring an accurate propagation of annotations to the co-registered low-field images for texture feature calculations. For each cancerous ROI, a matching ROI of identical size was drawn in a non-suspicious region presumed to be normal tissue. Four Haralick texture features (Energy, Correlation, Contrast, and Homogeneity) were extracted and compared between cancerous and non-suspicious ROIs. Two extraction methods were used: the direct computation of texture measures within the ROIs and a sliding window technique generating texture maps across the prostate from which average values were derived. The results demonstrated statistically significant differences in texture features between cancerous and non-suspicious regions. Specifically, Energy and Homogeneity were elevated (p-values: <0.00001–0.004), while Contrast and Correlation were reduced (p-values: <0.00001–0.03) in cancerous ROIs. These findings suggest that Haralick texture features are both feasible and informative for differentiating abnormalities, offering promise in assisting prostate cancer detection on low-field MRI. Full article
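Haralick features are derived from a gray-level co-occurrence matrix (GLCM). A small self-contained sketch of the four features used in this study, for a single pixel offset — the quantization level and offset are illustrative; the paper's texture-map variant would apply this in a sliding window across the prostate:

```python
import numpy as np

def haralick_features(img, levels=8, d=(0, 1)):
    """Compute energy, contrast, homogeneity, and correlation from a
    gray-level co-occurrence matrix (GLCM) for one pixel offset `d`,
    after quantizing the image to `levels` gray levels."""
    q = np.minimum((img.astype(float) / (img.max() + 1e-12) * levels).astype(int),
                   levels - 1)
    glcm = np.zeros((levels, levels))
    dr, dc = d
    h, w = q.shape
    for r in range(h - dr):
        for c in range(w - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1   # count co-occurring pairs
    p = glcm / glcm.sum()
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return {
        "energy": (p ** 2).sum(),
        "contrast": ((i - j) ** 2 * p).sum(),
        "homogeneity": (p / (1 + (i - j) ** 2)).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + 1e-12),
    }
```

A homogeneous region yields high energy/homogeneity and zero contrast, matching the direction of the differences the study reports between normal and cancerous tissue.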
(This article belongs to the Special Issue Advancements in Medical Imaging Technology)

18 pages, 7697 KiB  
Article
GNSS/IMU/ODO Integrated Navigation Method Based on Adaptive Sliding Window Factor Graph
by Xinchun Ji, Chenjun Long, Liuyin Ju, Hang Zhao and Dongyan Wei
Electronics 2025, 14(1), 124; https://doi.org/10.3390/electronics14010124 - 31 Dec 2024
Viewed by 1215
Abstract
One of the predominant technologies for multi-source navigation in vehicles involves the fusion of GNSS/IMU/ODO through a factor graph. To address issues such as the asynchronous sampling frequencies between the IMU and ODO, as well as diminished accuracy during GNSS signal loss, we propose a GNSS/IMU/ODO integrated navigation method based on an adaptive sliding window factor graph. The measurements from the ODO are utilized as observation factors to mitigate prediction interpolation errors associated with traditional ODO pre-integration methods. Additionally, online estimation and compensation for both installation angle deviations and scale factors of the ODO further enhance its ability to constrain pose errors during GNSS signal loss. A multi-state marginalization algorithm is proposed and then utilized to adaptively adjust the sliding window size based on the quality of GNSS observations, enhancing pose optimization accuracy in multi-source fusion while prioritizing computational efficiency. Tests conducted in typical urban environments and mountainous regions demonstrate that our proposed method significantly enhances fusion navigation accuracy under complex GNSS conditions. In a complex city environment, our method achieves a 55.3% and 29.8% improvement in position and velocity accuracy and enhancements of 32.0% and 61.6% in pitch and heading angle accuracy, respectively. These results match the precision of long sliding windows, with a 75.8% gain in computational efficiency. In mountainous regions, our method enhances the position accuracy in the three dimensions by factors of 89.5%, 83.7%, and 43.4%, the velocity accuracy in the three dimensions by factors of 65.4%, 32.6%, and 53.1%, and reduces the attitude errors in roll, pitch, and yaw by 70.5%, 60.8%, and 26.0%, respectively, demonstrating strong engineering applicability through an optimal balance of precision and efficiency. Full article
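The adaptive sliding-window idea — keep the window short when GNSS quality is good to save computation, grow it when quality degrades so older constraints still anchor the pose — can be caricatured as follows. This is a deliberately simplified sketch: a real factor graph marginalizes the removed states into a prior rather than discarding them, and the quality threshold here is invented:

```python
from collections import deque

class AdaptiveWindow:
    """Toy adaptive sliding window over estimator states: the target
    window length switches between `short` and `long` based on a GNSS
    quality score in [0, 1]. States pushed out of the window are
    returned so the caller can marginalize them."""
    def __init__(self, short=5, long=20):
        self.short, self.long = short, long
        self.states = deque()

    def add_state(self, state, gnss_quality):
        self.states.append(state)
        target = self.short if gnss_quality > 0.5 else self.long
        marginalized = []
        while len(self.states) > target:
            marginalized.append(self.states.popleft())  # would be marginalized, not dropped
        return marginalized
```

The efficiency gain the abstract reports comes from running the short window most of the time and paying for the long window only during GNSS outages.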

17 pages, 13882 KiB  
Article
Accurate Needle Localization in the Image Frames of Ultrasound Videos
by Mohammad I. Daoud, Samira Khraiwesh, Rami Alazrai, Mostafa Z. Ali, Adnan Zayadeen, Sahar Qaadan and Rafiq Ibrahim Alhaddad
Appl. Sci. 2025, 15(1), 207; https://doi.org/10.3390/app15010207 - 29 Dec 2024
Viewed by 1283
Abstract
Ultrasound imaging provides real-time guidance during needle interventions, but localizing the needle in ultrasound videos remains a challenging task. This paper introduces a novel machine learning-based method to localize the needle in ultrasound videos. The method comprises three phases for analyzing the image frames of the ultrasound video and localizing the needle in each image frame. The first phase aims to extract features that quantify the speckle variations associated with needle insertion, the edges that match the needle orientation, and the pixel intensity statistics of the ultrasound image. The features are analyzed using a machine learning classifier to generate a quantitative image that characterizes the pixels associated with the needle. In the second phase, the quantitative image is processed to identify the region of interest (ROI) that contains the needle. In the third phase, the ROI is processed using a custom-made Ranklet transform to accurately estimate the needle trajectory. Moreover, the needle tip is identified using a sliding window approach that analyzes the speckle variations along the needle trajectory. The performance of the proposed method was evaluated by localizing the needle in ex vivo and in vivo ultrasound videos. The results show that the proposed method was able to localize the needle with failure rates of 0%. The angular, axis, and tip errors computed for the ex vivo ultrasound videos are within the ranges of 0.3–0.7°, 0.2–0.7 mm, and 0.4–0.8 mm, respectively. Additionally, the angular, axis, and tip errors computed for the in vivo ultrasound videos are within the ranges of 0.2–1.0°, 0.3–1.0 mm, and 0.3–1.1 mm, respectively. A key advantage of the proposed method is the ability to achieve accurate localization of the needle without altering the clinical workflow of the intervention. Full article
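The tip-identification step — a sliding window scanning intensity statistics along the estimated needle trajectory — can be illustrated with a 1-D sketch. The drop-in-mean criterion and window size are assumptions for illustration, not the paper's exact speckle statistic:

```python
import numpy as np

def locate_tip(profile, win=5):
    """Find the needle tip index along the estimated trajectory as the
    position with the largest drop between the mean intensity of the
    window behind it (bright shaft) and the window ahead of it (tissue)."""
    profile = np.asarray(profile, dtype=float)
    best_idx, best_drop = 0, -np.inf
    for i in range(win, len(profile) - win):
        drop = profile[i - win:i].mean() - profile[i:i + win].mean()
        if drop > best_drop:
            best_idx, best_drop = i, drop
    return best_idx
```

In the paper, the profile would be sampled along the trajectory recovered by the Ranklet-transform stage, and the statistic would characterize speckle variation rather than raw mean intensity.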

18 pages, 8489 KiB  
Article
Tightly Coupled SLAM Algorithm Based on Similarity Detection Using LiDAR-IMU Sensor Fusion for Autonomous Navigation
by Jiahui Zheng, Yi Wang and Yadong Men
World Electr. Veh. J. 2024, 15(12), 558; https://doi.org/10.3390/wevj15120558 - 2 Dec 2024
Viewed by 1414
Abstract
In recent years, the rise of unmanned technology has made Simultaneous Localization and Mapping (SLAM) algorithms a focal point of research in the field of robotics. SLAM algorithms are primarily categorized into visual SLAM and laser SLAM, based on the type of external sensors employed. Laser SLAM algorithms have become essential in robotics and autonomous driving due to their insensitivity to lighting conditions, precise distance measurements, and ease of generating navigation maps. Throughout the development of SLAM technology, numerous effective algorithms have been introduced. However, existing algorithms still encounter challenges, such as localization errors and suboptimal utilization of sensor data. To address these issues, this paper proposes a tightly coupled SLAM algorithm based on similarity detection. The algorithm integrates Inertial Measurement Unit (IMU) and LiDAR odometry modules, employs a tightly coupled processing approach for sensor data, and utilizes curvature feature optimization extraction methods to enhance the accuracy and robustness of inter-frame matching. Additionally, the algorithm incorporates a local keyframe sliding window method and introduces a similarity detection mechanism, which reduces the real-time computational load and improves efficiency. Experimental results demonstrate that the algorithm achieves superior performance, with reduced positioning errors and enhanced global consistency, in tests conducted on the KITTI dataset. The accuracy of the real trajectory data compared to the ground truth is evaluated using metrics such as ATE (absolute trajectory error) and RMSE (root mean square error). Full article
(This article belongs to the Special Issue Motion Planning and Control of Autonomous Vehicles)

22 pages, 12893 KiB  
Article
Research on Visual–Inertial Measurement Unit Fusion Simultaneous Localization and Mapping Algorithm for Complex Terrain in Open-Pit Mines
by Yuanbin Xiao, Wubin Xu, Bing Li, Hanwen Zhang, Bo Xu and Weixin Zhou
Sensors 2024, 24(22), 7360; https://doi.org/10.3390/s24227360 - 18 Nov 2024
Viewed by 1313
Abstract
As mining technology advances, intelligent robots in open-pit mining require precise localization and digital maps. Nonetheless, significant pitch variations, uneven highways, and rocky surfaces with minimal texture present substantial challenges to the precision of feature extraction and positioning in traditional visual SLAM systems, owing to the intricate terrain features of open-pit mines. This study proposes an improved SLAM technique that integrates visual and Inertial Measurement Unit (IMU) data to address these challenges. The method incorporates a point–line feature fusion matching strategy to enhance the quality and stability of line feature extraction. It integrates an enhanced Line Segment Detection (LSD) algorithm with short segment culling and approximate line merging techniques. The combination of IMU pre-integration and visual feature restrictions is executed inside a tightly coupled visual–inertial framework utilizing a sliding window approach for back-end optimization, enhancing system robustness and precision. Experimental results demonstrate that the suggested method improves RMSE accuracy by 36.62% and 26.88% on the MH and VR sequences of the EuRoC dataset, respectively, compared to ORB-SLAM3. The improved SLAM system significantly reduces trajectory drift in the simulated open-pit mining tests, improving localization accuracy by 40.62% and 61.32%. These results indicate that the proposed method is effective and practically applicable. Full article
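The short-segment culling and approximate line merging applied to the LSD output can be sketched as follows. The tolerances and the greedy pairwise strategy are illustrative, not the authors' exact algorithm:

```python
import math

def merge_collinear(segments, angle_tol=math.radians(5), gap_tol=10.0, min_len=15.0):
    """Cull segments shorter than `min_len`, then greedily merge pairs
    that are nearly parallel (within `angle_tol`) and nearly end-to-start
    (within `gap_tol`). Segments are ((x1, y1), (x2, y2)) tuples."""
    def length(s):
        (x1, y1), (x2, y2) = s
        return math.hypot(x2 - x1, y2 - y1)
    def angle(s):
        (x1, y1), (x2, y2) = s
        return math.atan2(y2 - y1, x2 - x1)
    segs = [s for s in segments if length(s) >= min_len]   # short-segment culling
    merged, used = [], [False] * len(segs)
    for i, s in enumerate(segs):
        if used[i]:
            continue
        cur = s
        for j in range(i + 1, len(segs)):
            if used[j]:
                continue
            t = segs[j]
            da = abs(angle(cur) - angle(t))
            gap = math.hypot(t[0][0] - cur[1][0], t[0][1] - cur[1][1])
            if da < angle_tol and gap < gap_tol:
                cur = (cur[0], t[1])   # extend to the far endpoint
                used[j] = True
        merged.append(cur)
    return merged
```

Merging fragmented detections into longer lines is what stabilizes the line features before they enter the point–line matching stage.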
(This article belongs to the Section Sensors and Robotics)

25 pages, 7789 KiB  
Article
Mix-VIO: A Visual Inertial Odometry Based on a Hybrid Tracking Strategy
by Huayu Yuan, Ke Han and Boyang Lou
Sensors 2024, 24(16), 5218; https://doi.org/10.3390/s24165218 - 12 Aug 2024
Cited by 2 | Viewed by 2662
Abstract
In this paper, we propose Mix-VIO, a monocular and binocular visual-inertial odometry, to address the issue where conventional visual front-end tracking often fails under dynamic lighting and image blur conditions. Mix-VIO adopts a hybrid tracking approach, combining traditional handcrafted tracking techniques with Deep Neural Network (DNN)-based feature extraction and matching pipelines. The system employs deep learning methods for rapid feature point detection, while integrating traditional optical flow methods and deep learning-based sparse feature matching methods to enhance front-end tracking performance under rapid camera motion and environmental illumination changes. In the back-end, we utilize sliding window and bundle adjustment (BA) techniques for local map optimization and pose estimation. We conduct extensive experimental validations of the hybrid feature extraction and matching methods, demonstrating the system’s capability to maintain optimal tracking results under illumination changes and image blur. Full article

23 pages, 8556 KiB  
Article
Vision-Based Algorithm for Precise Traffic Sign and Lane Line Matching in Multi-Lane Scenarios
by Kerui Xia, Jiqing Hu, Zhongnan Wang, Zijian Wang, Zhuo Huang and Zhongchao Liang
Electronics 2024, 13(14), 2773; https://doi.org/10.3390/electronics13142773 - 15 Jul 2024
Cited by 1 | Viewed by 2064
Abstract
With the rapid development of intelligent transportation systems, lane detection and traffic sign recognition have become critical technologies for achieving full autonomous driving. These technologies offer crucial real-time insights into road conditions, with their precision and resilience being paramount to the safety and dependability of autonomous vehicles. This paper introduces an innovative method for detecting and recognizing multi-lane lines and intersection stop lines using computer vision technology, which is integrated with traffic signs. In the image preprocessing phase, the Sobel edge detection algorithm and weighted filtering are employed to eliminate noise and interference information in the image. For multi-lane lines and intersection stop lines, detection and recognition are implemented using a multi-directional and unilateral sliding window search, as well as polynomial fitting methods, from a bird’s-eye view. This approach enables the determination of both the lateral and longitudinal positioning on the current road, as well as the sequencing of the lane number for each lane. This paper utilizes convolutional neural networks to recognize multi-lane traffic signs. The required dataset of multi-lane traffic signs is created following specific experimental parameters, and the YOLO single-stage target detection algorithm is used for training the weights. In consideration of the impact of inadequate lighting conditions, the V channel within the HSV color space is employed to assess the intensity of light, and the SSR algorithm is utilized to process images that fail to meet the threshold criteria. In the detection and recognition stage, each lane sign on the traffic signal is identified and then matched with the corresponding lane on the ground. Finally, a visual module joint experiment is conducted to verify the effectiveness of the algorithm. Full article
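The unilateral sliding-window search with polynomial fitting described here is a classic bird's-eye-view lane-detection recipe: start at a column-histogram peak, step upward window by window, re-center each window on the pixels it captures, then fit x = f(y). A single-lane sketch — the window counts, margins, and synthetic setup are illustrative:

```python
import numpy as np

def sliding_window_lane(binary, n_windows=8, margin=10, minpix=5):
    """Sliding-window lane search on a bird's-eye binary image.
    Returns 2nd-order polynomial coefficients for x = a*y^2 + b*y + c."""
    ys, xs = binary.nonzero()
    h = binary.shape[0]
    x_cur = int(np.argmax(binary[h // 2:].sum(axis=0)))   # histogram peak, bottom half
    win_h = h // n_windows
    lane_x, lane_y = [], []
    for w in range(n_windows):                            # march bottom -> top
        y_lo, y_hi = h - (w + 1) * win_h, h - w * win_h
        sel = (ys >= y_lo) & (ys < y_hi) & (np.abs(xs - x_cur) < margin)
        lane_x.extend(xs[sel]); lane_y.extend(ys[sel])
        if sel.sum() >= minpix:
            x_cur = int(xs[sel].mean())                   # re-center on found pixels
    return np.polyfit(lane_y, lane_x, 2)
```

The paper runs this kind of search in multiple directions (and for stop lines) and then matches the recovered lane ordering against the lanes identified on traffic signs.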
(This article belongs to the Special Issue Control Systems for Autonomous Vehicles)

38 pages, 14898 KiB  
Article
Audio Steganalysis Estimation with the Goertzel Algorithm
by Blanca E. Carvajal-Gámez, Miguel A. Castillo-Martínez, Luis A. Castañeda-Briones, Francisco J. Gallegos-Funes and Manuel A. Díaz-Casco
Appl. Sci. 2024, 14(14), 6000; https://doi.org/10.3390/app14146000 - 10 Jul 2024
Cited by 2 | Viewed by 1375
Abstract
Audio steganalysis has been little explored due to its complexity and randomness, which complicate the analysis. Audio files generate marks in the frequency domain; these marks are known as fingerprints and make the files unique, allowing audio vectors to be distinguished from one another. In this work, the Goertzel algorithm is used as a frequency-domain steganalyzer, combined with the proposed sliding window adaptation, so that the analyzed audio vectors can be compared and the differences between them identified. We then apply linear prediction to the vectors to detect any modifications in the acoustic signatures. The implemented Goertzel algorithm is computationally less complex than other proposed steganalyzers based on convolutional neural networks or lower-complexity classifiers such as support vector machines (SVMs). Those methods require an extensive audio database to train the classifier before possible stegoaudio can be detected through the matches it finds, whereas the proposed Goertzel algorithm works on each audio vector individually, locating the difference in tone and generating an alert for possible stegoaudio. We apply the classic Goertzel algorithm to detect frequencies that have possibly been modified by insertions or alterations of the audio vectors, and the final vectors are plotted to visualize the alteration zones. The obtained results are evaluated qualitatively and quantitatively. To double-check the fingerprint of the audio vectors, we compute a linear prediction error to establish the percentage of statistical dependence between the processed audio signals. To validate the proposed method, we evaluate the audio quality metrics (AQMs) of the obtained result, and finally implement the AQM-oriented steganalyzer to corroborate the results. The performance evaluation of the proposed steganalyzer demonstrates a success rate of 100%. Full article
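The Goertzel recursion this paper builds on evaluates the power of a single DFT bin with one second-order filter pass, which is why it is cheaper than a full FFT when only a few target frequencies matter. A standard textbook implementation (not the paper's full steganalyzer):

```python
import math

def goertzel_power(samples, target_freq, sample_rate):
    """Goertzel algorithm: squared magnitude of the DFT bin nearest to
    `target_freq`, computed with a single second-order recursion."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2       # the Goertzel state update
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2
```

A steganalyzer in the spirit of this paper would slide such a detector over audio frames and flag bins whose power deviates from the expected fingerprint.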
(This article belongs to the Special Issue Advances in Security, Trust and Privacy in Internet of Things)

14 pages, 6445 KiB  
Article
Multi-Sensor-Assisted Low-Cost Indoor Non-Visual Semantic Map Construction and Localization for Modern Vehicles
by Guangxiao Shao, Fanyu Lin, Chao Li, Wei Shao, Wennan Chai, Xiaorui Xu, Mingyue Zhang, Zhen Sun and Qingdang Li
Sensors 2024, 24(13), 4263; https://doi.org/10.3390/s24134263 - 30 Jun 2024
Viewed by 1714
Abstract
With the transformation and development of the automotive industry, low-cost and seamless indoor and outdoor positioning has become a research hotspot for modern vehicles equipped with in-vehicle infotainment systems, Internet of Vehicles, or other intelligent systems (such as Telematics Box, Autopilot, etc.). This paper analyzes modern vehicles in different configurations and proposes a low-cost, versatile indoor non-visual semantic mapping and localization solution based on low-cost sensors. Firstly, the sliding window-based semantic landmark detection method is designed to identify non-visual semantic landmarks (e.g., entrance/exit, ramp entrance/exit, road node). Then, we construct an indoor non-visual semantic map that includes the vehicle trajectory waypoints, non-visual semantic landmarks, and Wi-Fi fingerprints of RSS features. Furthermore, to estimate the position of modern vehicles in the constructed semantic maps, we propose a graph-optimized localization method based on landmark matching that exploits the correlation between non-visual semantic landmarks. Finally, field experiments are conducted in two shopping mall scenes with different underground parking layouts to verify the proposed non-visual semantic mapping and localization method. The results show that the proposed method achieves a high accuracy of 98.1% in non-visual semantic landmark detection and a low localization error of 1.31 m. Full article

14 pages, 6104 KiB  
Article
Adaptive Sliding Window–Dynamic Time Warping-Based Fluctuation Series Prediction for the Capacity of Lithium-Ion Batteries
by Sihan Sun, Minming Gu and Tuoqi Liu
Electronics 2024, 13(13), 2501; https://doi.org/10.3390/electronics13132501 - 26 Jun 2024
Cited by 2 | Viewed by 2459
Abstract
Accurately predicting the capacity of lithium-ion batteries is crucial for improving battery reliability and preventing potential incidents. Current prediction models for predicting lithium-ion battery capacity fluctuations encounter challenges like inadequate fitting and suboptimal computational efficiency. This study presents a new approach for fluctuation prediction termed ASW-DTW, which integrates Adaptive Sliding Window (ASW) and Dynamic Time Warping (DTW). Initially, this approach leverages Empirical Mode Decomposition (EMD) to preprocess the raw battery capacity data and extract local fluctuation components. Subsequent to this, DTW is employed to forecast the fluctuation sequence through pattern-matching methods. Additionally, to boost model precision and versatility, a feature recognition-based ASW technique is used to determine the optimal window size for the current segment and assist in DTW-based predictions. The study concludes with capacity fluctuation prediction experiments carried out across various lithium-ion battery models. The results demonstrate the efficacy and extensive applicability of the proposed method. Full article
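The DTW pattern-matching step at the heart of ASW-DTW can be illustrated with the textbook dynamic-programming recurrence. This is the plain O(nm) version; the paper's adaptive window selection is not reproduced here:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D series.
    D[i, j] is the minimal cumulative cost of aligning a[:i] with b[:j];
    each cell extends the cheapest of the three predecessor alignments."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

In a pattern-matching forecaster of this kind, the recent fluctuation segment (here the EMD residual component) is compared against historical segments, and the continuation of the best-matching segment supplies the prediction.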
(This article belongs to the Topic Energy Storage and Conversion Systems, 2nd Edition)
