Search Results (160)

Search Parameters:
Keywords = a priori uncertainty

27 pages, 5938 KiB  
Article
Noise-Adaptive GNSS/INS Fusion Positioning for Autonomous Driving in Complex Environments
by Xingyang Feng, Mianhao Qiu, Tao Wang, Xinmin Yao, Hua Cong and Yu Zhang
Vehicles 2025, 7(3), 77; https://doi.org/10.3390/vehicles7030077 - 22 Jul 2025
Cited by 1 | Viewed by 404
Abstract
Accurate and reliable multi-scene positioning remains a critical challenge in autonomous driving systems, as conventional fixed-noise fusion strategies struggle to handle the dynamic error characteristics of heterogeneous sensors in complex operational environments. This paper proposes a novel noise-adaptive fusion framework integrating Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS) measurements. Our key innovation lies in developing a dual noise estimation model that synergizes a priori weighting with posterior variance compensation. Specifically, we establish an a priori weighting model for satellite pseudorange errors based on elevation angles and signal-to-noise ratios (SNRs), complemented by Helmert variance component estimation for posterior refinement. For INS error modeling, we derive a bias instability noise accumulation model through Allan variance analysis. These adaptive noise estimates dynamically update both the process and observation noise covariance matrices in our Error-State Kalman Filter (ESKF) implementation, enabling real-time calibration of the GNSS and INS contributions. Comprehensive field experiments demonstrate two key advantages: (1) the proposed noise estimation model achieves 37.7% higher accuracy in quantifying GNSS single-point positioning uncertainties compared to conventional elevation-based weighting; (2) in unstructured environments with intermittent signal outages, the fusion system maintains an average absolute trajectory error (ATE) of less than 0.6 m, outperforming state-of-the-art fixed-weight fusion methods by 36.71% in positioning consistency. These results validate the framework’s capability to autonomously balance sensor reliability under dynamic environmental conditions, significantly enhancing positioning robustness for autonomous vehicles.
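The elevation- and SNR-based a priori weighting idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weighting coefficients, the C/N0 down-weighting factor, and the example satellite geometry are all assumed, and the sketch simply builds a diagonal observation-noise matrix R and feeds it to a generic Kalman-style measurement update of the kind an ESKF would perform.

```python
import numpy as np

def pseudorange_sigma(elev_rad, cn0_dbhz, a=0.3, b=0.5, cn0_ref=45.0, k=30.0):
    """Illustrative a priori pseudorange std-dev model (metres).

    Combines a standard elevation-dependent term, a^2 + b^2/sin^2(E), with a
    simple C/N0 down-weighting factor.  The coefficients are placeholders,
    not values from the paper.
    """
    elev_var = a**2 + (b**2) / np.sin(elev_rad)**2
    snr_factor = 10.0 ** (-(cn0_dbhz - cn0_ref) / k)   # > 1 when C/N0 is poor
    return np.sqrt(elev_var * snr_factor)

def kalman_measurement_update(x, P, z, H, R):
    """Generic Kalman/ESKF measurement update with an adaptive R."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Example: four satellites with different elevations and C/N0 values.
elev = np.radians([15.0, 30.0, 55.0, 80.0])
cn0 = np.array([32.0, 38.0, 44.0, 49.0])
R = np.diag(pseudorange_sigma(elev, cn0) ** 2)   # adaptive observation-noise matrix
print(np.sqrt(np.diag(R)))   # low-elevation / low-C/N0 channels get larger sigma
```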

27 pages, 1553 KiB  
Article
Dynamic Edge Loading Balancing with Edge Node Activity Prediction and Accelerating the Model Convergence
by Wen Chen, Sibin Liu, Yuxiao Yang, Wenjing Hu and Jinming Yu
Sensors 2025, 25(5), 1491; https://doi.org/10.3390/s25051491 - 28 Feb 2025
Viewed by 939
Abstract
In mobile edge computing networks, achieving effective load balancing across edge server nodes is essential for minimizing task processing latency. However, the lack of a priori knowledge regarding the current load state of edge nodes for user devices presents a significant challenge in multi-user, multi-edge-node scenarios. This challenge is exacerbated by the inherent dynamics and uncertainty of edge node load variations. To tackle these issues, we propose a deep reinforcement learning-based approach for task offloading and resource allocation, aiming to balance the load on edge nodes while reducing the long-term average cost. Specifically, we decompose the optimization problem into two subproblems: task offloading and resource allocation. The Karush–Kuhn–Tucker (KKT) conditions are employed to derive the optimal strategy for communication bandwidth and computational resource allocation for edge nodes. We utilize Long Short-Term Memory (LSTM) networks to forecast the real-time activity of edge nodes. Additionally, we integrate deep compression techniques to expedite model convergence, facilitating faster execution on user devices. Our simulation results demonstrate that the proposed scheme achieves a 47% reduction in the task drop rate, a 14% decrease in the total system cost, and a 7.6% improvement in runtime compared to the baseline schemes.
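As a toy illustration of the KKT-based allocation step (not the paper's full formulation), consider splitting a total bandwidth B among users to minimise the sum of transmission delays d_i/b_i, with the data rate taken as proportional to the allocated bandwidth; the stationarity conditions then give b_i proportional to sqrt(d_i).

```python
import numpy as np

def kkt_bandwidth_allocation(data_bits, total_bandwidth):
    """Allocate bandwidth b_i to minimise  sum_i d_i / b_i  s.t.  sum_i b_i = B.

    Stationarity of the Lagrangian  d_i/b_i + lam*b_i  gives b_i = sqrt(d_i/lam),
    so the optimum is b_i proportional to sqrt(d_i).  This is a simplified
    stand-in for the paper's KKT-based resource allocation.
    """
    d = np.asarray(data_bits, dtype=float)
    w = np.sqrt(d)
    return total_bandwidth * w / w.sum()

# Example: three offloading users with different task sizes (bits) share 20 MHz.
tasks = [2e6, 8e6, 18e6]
b = kkt_bandwidth_allocation(tasks, 20e6)
print(b, (np.array(tasks) / b).sum())   # allocation and resulting total delay proxy
```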

21 pages, 6954 KiB  
Article
Disturbance Observer-Based Dynamic Surface Control for Servomechanisms with Prescribed Tracking Performance
by Xingfa Zhao, Wenhe Liao, Tingting Liu, Dongyang Zhang and Yumin Tao
Mathematics 2025, 13(1), 172; https://doi.org/10.3390/math13010172 - 6 Jan 2025
Viewed by 879
Abstract
The critical design challenge for a class of servomechanisms is to reject unknown dynamics (including internal uncertainties and external disturbances) and achieve the prescribed performance of the tracking error. To eliminate the influence of unknown dynamics, an extended state observer (ESO) is employed to estimate the system states and the total unknown dynamics, and it does not require a priori information about the system dynamics. Meanwhile, an improved prescribed performance function is presented to guarantee the transient performance of the tracking error (e.g., the overshoot, convergence rate, and steady-state error). Consequently, a modified dynamic surface control strategy is designed based on the ESO estimates and the error constraints. The stability of the proposed control strategy is demonstrated using Lyapunov theory. Finally, simulation results based on a turntable servomechanism show that the proposed method is effective and that it achieves better control performance and stronger anti-disturbance ability than the traditional control method.
(This article belongs to the Section C2: Dynamical Systems)
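A minimal sketch of the ESO idea described in the abstract above, assuming a second-order plant with a lumped unknown disturbance and the common bandwidth parameterisation of the observer gains; all numbers are illustrative rather than taken from the paper.

```python
import numpy as np

def simulate_eso(dt=1e-3, t_end=2.0, b0=10.0, wo=60.0):
    """Minimal linear extended state observer (ESO) for a 2nd-order plant.

    Plant:  x1' = x2,  x2' = f(t, x) + b0*u, with f the lumped unknown
    dynamics/disturbance.  The ESO estimates (x1, x2, f) from y = x1 only,
    with gains (3*wo, 3*wo^2, wo^3) from the bandwidth parameterisation.
    """
    n = int(t_end / dt)
    x = np.array([0.0, 0.0])            # true plant state
    z = np.zeros(3)                     # ESO estimates [x1, x2, f]
    l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3
    hist = np.zeros((n, 2))             # [true disturbance, estimated disturbance]
    for k in range(n):
        t = k * dt
        u = 1.0                                        # constant test input
        f = 5.0 * np.sin(2 * np.pi * t) - 2.0 * x[1]   # "unknown" dynamics
        x = x + dt * np.array([x[1], f + b0 * u])      # plant (explicit Euler)
        e = x[0] - z[0]                                # output estimation error
        z = z + dt * np.array([z[1] + l1 * e,
                               z[2] + b0 * u + l2 * e,
                               l3 * e])                # observer integration
        hist[k] = [f, z[2]]
    return hist

est = simulate_eso()
print(np.abs(est[-500:, 0] - est[-500:, 1]).mean())   # small steady disturbance-tracking error
```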

27 pages, 1888 KiB  
Article
On the Game-Based Approach to Optimal Design
by Vladimir Kobelev
Eng 2024, 5(4), 3212-3238; https://doi.org/10.3390/eng5040169 - 4 Dec 2024
Viewed by 775
Abstract
A game problem of structural design is defined as a problem of playing against external circumstances. There are two classes of players, namely the “ordinal” and “cardinal” players. The ordinal players, designated as the “operator” and “nature”, endeavor to, respectively, minimize or maximize the payoff function while operating within the constraints of limited resources. The fundamental premise of this study is that the action of the player “nature” is a priori unknown. Statistical decision theory addresses decision-making scenarios in which the probabilities of nature’s actions, whether or not they are known, must be considered. The solution to the substratum game is expressed as the value of the game “against nature”. The structural optimization extension of the game considers the value of the game “against nature” as a function of certain parameters. Thus, the value of the game is contingent upon the design parameters. The cardinal players, the “designers”, choose the design parameters. There are two formulations of optimization. For a single cardinal player, the pursuit of the maximum and minimum values of the game reduces to a problem of optimal design. In the second formulation, there are multiple cardinal players with conflicting objectives. Accordingly, the superstratum game emerges, which addresses the interests of the superstratum players. Finally, optimal design problems for games with closed-form solutions are presented. The game formulations could be applied to optimal design under uncertain loading, considering “nature” as the source of uncertainty.
(This article belongs to the Special Issue Feature Papers in Eng 2024)
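A minimal numerical sketch of the "game against nature" structure described above, under assumed payoffs: a 2-by-2 zero-sum payoff matrix parameterised by a single design variable, the game value computed by a grid over the operator's mixed strategy, and the cardinal player choosing the design that minimises that value. The payoff function is hypothetical and only illustrates the layered structure, not the paper's closed-form games.

```python
import numpy as np

def game_value(payoff):
    """Approximate value of a zero-sum game in which the row player (operator)
    minimises and the column player (nature) maximises, via a grid over the
    operator's mixed strategy (two rows only)."""
    p = np.linspace(0.0, 1.0, 2001)
    mixed = np.outer(p, payoff[0]) + np.outer(1 - p, payoff[1])  # expected payoff per column
    worst_case = mixed.max(axis=1)      # nature picks the worst column
    return worst_case.min()             # operator minimises the worst case

def payoff_matrix(design):
    """Hypothetical payoff (e.g. a compliance measure) as a function of one
    design parameter; the functional form is an assumption for the sketch."""
    return np.array([[1.0 + design**2, 3.0 - design],
                     [2.0 + design,    1.0 + (1.0 - design)**2]])

designs = np.linspace(0.0, 1.0, 101)
values = [game_value(payoff_matrix(s)) for s in designs]
best = designs[int(np.argmin(values))]
print(best, min(values))   # design parameter minimising the value of the substratum game
```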

20 pages, 5658 KiB  
Article
Microelectromechanical System Resonant Devices: A Guide for Design, Modeling and Testing
by Carolina Viola, Davide Pavesi, Lichen Weng, Giorgio Gobat, Federico Maspero and Valentina Zega
Micromachines 2024, 15(12), 1461; https://doi.org/10.3390/mi15121461 - 30 Nov 2024
Cited by 1 | Viewed by 3940
Abstract
Microelectromechanical systems (MEMSs) are attracting increasing interest from the scientific community for the large variety of possible applications and for the continuous market demand to improve performance while keeping dimensions small and costs low. The ability to simulate, a priori and in real time, the dynamic response of resonant devices is therefore crucial to guide mechanical design and to support the MEMS industry. In this work, we propose a simplified modeling procedure able to reproduce the nonlinear dynamics of MEMS resonant devices of arbitrary geometry. We validate it through the fabrication and testing of a cantilever beam resonator operating in the nonlinear regime, and we employ it to design a ring resonator working in the linear regime. Despite the uncertainties of a fabrication process available in the university facility, we demonstrate the predictability of the model and the effectiveness of the proposed design procedure. The satisfactory agreement between numerical predictions and experimental data indeed validates the proposed a priori design tool based on reduced-order numerical models and opens the way to its practical application in the MEMS industry.
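A sketch of the kind of reduced-order nonlinear resonator model referred to above: a single-degree-of-freedom Duffing oscillator whose amplitude-frequency curve is obtained from first-order harmonic balance, showing the hardening resonance typical of clamped beams. The parameter values are illustrative and not tied to the fabricated devices in the paper.

```python
import numpy as np

def duffing_response(omegas, w0=1.0, q=200.0, gamma=0.05, force=0.002):
    """Amplitude-frequency curve of  x'' + (w0/q) x' + w0^2 x + gamma x^3 = F cos(w t)
    from first-order harmonic balance:
        A^2 * [ (w0^2 - w^2 + 0.75*gamma*A^2)^2 + (w0*w/q)^2 ] = F^2,
    solved as a cubic in s = A^2 at each drive frequency."""
    c = w0 / q                                 # viscous damping coefficient
    amps = []
    for w in omegas:
        detune = w0**2 - w**2
        poly = [(0.75 * gamma) ** 2,           # cubic coefficients, highest degree first
                2.0 * 0.75 * gamma * detune,
                detune**2 + (c * w) ** 2,
                -force**2]
        s = np.roots(poly)
        s = s[np.isreal(s)].real
        s = s[s > 0.0]
        amps.append(np.sqrt(s.max()))          # largest-amplitude (upper) branch
    return np.array(amps)

omegas = np.linspace(0.97, 1.03, 601)
amp = duffing_response(omegas)
print(omegas[int(np.argmax(amp))])             # peak pulled above w0: hardening behaviour
```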

15 pages, 3908 KiB  
Article
Efficient Trans-Dimensional Bayesian Inversion of C-Response Data from Geomagnetic Observatory and Satellite Magnetic Data
by Rongwen Guo, Shengqi Tian, Jianxin Liu, Yi-an Cui and Chuanghua Cao
Appl. Sci. 2024, 14(23), 10944; https://doi.org/10.3390/app142310944 - 25 Nov 2024
Viewed by 1004
Abstract
To investigate deep Earth information, researchers often utilize geomagnetic observatory and satellite data to obtain the transfer function of geomagnetic sounding, the C-response, and employ traditional inversion techniques to reconstruct subsurface structures. However, traditional gradient-based inversion produces geophysical models with artificial structural constraints enforced subjectively to guarantee a unique solution. This approach typically requires the model parameterization to be specified a priori (e.g., based on personal preference) and provides no uncertainty estimation. In this paper, we apply an efficient trans-dimensional (trans-D) Bayesian algorithm to invert C-response data from observatory and satellite geomagnetic data for the electrical conductivity structure of the Earth’s mantle, with the model parameterization treated as unknown and determined by the data. In trans-D Bayesian inversion, the posterior probability density (PPD) represents the complete inversion solution, from which useful inferences about the model can be made; this requires high-dimensional integration of the PPD and is realized by an efficient reversible-jump Markov-chain Monte Carlo (rjMcMC) sampling algorithm based on a birth/death scheme. Within the trans-D Bayesian algorithm, the model parameters are perturbed in the principal-component parameter space to minimize the effect of inter-parameter correlations and improve sampling efficiency. A parallel tempering scheme is applied to guarantee complete sampling of the multimodal model space. First, the trans-D Bayesian inversion is applied to C-response data from two synthetic models to examine the resolution of the model structure constrained by the data. Then, C-response data from geomagnetic satellites and observatories are inverted to recover the globally averaged mantle conductivity structure and the local mantle structure with quantitative uncertainty estimation, consistent with the data.
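The forward problem behind the synthetic tests can be sketched with the simplest possible case, the C-response of a uniform conducting half-space, C(omega) = 1/sqrt(i*omega*mu0*sigma) (standard geomagnetic depth-sounding theory, not a formula from the paper). A layered-mantle forward solver would replace this inside the actual rjMcMC sampler.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, H/m

def c_response_halfspace(periods_s, sigma):
    """Complex C-response (metres) of a uniform conducting half-space,
    C = 1 / sqrt(i*omega*mu0*sigma); a minimal stand-in for the layered
    forward model used in the inversion."""
    omega = 2.0 * np.pi / np.asarray(periods_s, dtype=float)
    return 1.0 / np.sqrt(1j * omega * MU0 * sigma)

periods = np.array([1e5, 1e6, 1e7])            # seconds (roughly days to months)
c = c_response_halfspace(periods, sigma=0.1)   # 0.1 S/m, a mid-mantle-like value
print(np.real(c) / 1e3)    # Re(C) in km grows with period: deeper penetration
```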

17 pages, 1379 KiB  
Article
Range-Domain Subspace Detector in the Presence of Direct Blast for Forward Scattering Detection in Shallow-Water Environments
by Jiahui Luo, Chao Sun and Mingyang Li
J. Mar. Sci. Eng. 2024, 12(10), 1864; https://doi.org/10.3390/jmse12101864 - 17 Oct 2024
Viewed by 858
Abstract
This paper aims to detect a target that crosses the baseline connecting the source and the receiver in shallow-water environments, a special scenario for a bistatic sonar system. In such a detection scenario, an intense sound wave, known as the direct blast, propagates directly from the source to the receiver without target scattering. This direct blast usually arrives at the receiver simultaneously with the forward-scattered signal and exhibits a larger intensity than the signal, posing a significant challenge for target detection. In this paper, a range-domain subspace is constructed from the horizontal distances between the source/target and each element of a horizontal linear array (HLA) when the ranges of the environmental parameters are known a priori. Meanwhile, a range-domain subspace detector based on direct blast suppression (RSD-DS) is proposed for forward scattering detection. The source and the target are located at different positions, so the direct blast and the scattered signal lie in different range-domain subspaces. By projecting the received data onto the orthogonal complement of the direct-blast subspace, the direct blast can be suppressed, and the signal component lying outside the direct-blast subspace is used for target detection. The simulation results indicate that the proposed RSD-DS performs close to the generalized likelihood ratio detector (GLRD) while requiring less prior knowledge of the environment (only the ranges of the sediment sound speed and the bottom sound speed need to be known), and its robustness to environmental uncertainties is better than that of the latter. Moreover, the proposed RSD-DS exhibits better immunity to the direct blast than the GLRD, since it can still work effectively at a signal-to-direct-blast ratio (SDR) of −30 dB, whereas the GLRD fails in this case.
(This article belongs to the Special Issue Applications of Underwater Acoustics in Ocean Engineering)
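The direct-blast suppression step reduces to projecting the array data onto the orthogonal complement of the direct-blast subspace. The sketch below uses random placeholder matrices for the range-domain subspaces rather than the physical construction from source/target ranges, and applies a simple energy statistic to the projected residual.

```python
import numpy as np

rng = np.random.default_rng(0)

def orth_complement_projector(H):
    """P_perp = I - H (H^H H)^(-1) H^H : projects out the direct-blast subspace."""
    n = H.shape[0]
    return np.eye(n) - H @ np.linalg.solve(H.conj().T @ H, H.conj().T)

n_sensors, r_blast, r_sig = 32, 3, 3
H_blast = rng.standard_normal((n_sensors, r_blast))   # placeholder direct-blast subspace
H_sig = rng.standard_normal((n_sensors, r_sig))       # placeholder target subspace

P_perp = orth_complement_projector(H_blast)

# Received snapshot: strong direct blast + weak scattered signal + noise.
x = 50.0 * H_blast @ rng.standard_normal(r_blast) \
    + 1.0 * H_sig @ rng.standard_normal(r_sig) \
    + 0.5 * rng.standard_normal(n_sensors)

residual = P_perp @ x                          # direct blast suppressed here
test_statistic = np.linalg.norm(residual) ** 2
print(np.linalg.norm(x) ** 2, test_statistic)  # energy drops sharply after projection
```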

12 pages, 4126 KiB  
Article
Hybrid Modeling and Simulation of the Grinding and Classification Process Driven by Multi-Source Compensation
by Jiawei Yang, Guobin Zou, Junwu Zhou, Qingkai Wang, Tao Song and Kang Li
Minerals 2024, 14(10), 1019; https://doi.org/10.3390/min14101019 - 10 Oct 2024
Cited by 1 | Viewed by 1261
Abstract
The grinding process is a key link in mineral processing production and a typical complex controlled process. The steady-state model is limited by its structure and is thus difficult to apply in a control system. A hybrid modeling method driven by multi-source compensation is proposed in this paper, built on mechanism models of the key equipment in the grinding and classification process and addressing the uncertainties that affect the stability of the control system. This method combines the relevant multi-source signals with uncertainties by using a priori knowledge, extracts nonlinear feature vectors from the signals through an unsupervised deep network, and constructs a compensation model based on a dynamic radial basis function network to realize the integration of mechanism modeling and data-driven compensation modeling. The simulation results show that the model fits the real physical system closely. Industrial validation was conducted at a gold concentrator, where the grinding product quality was predicted and controlled with the model.
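A minimal sketch of the hybrid idea, with hypothetical functions standing in for the grinding mechanism model and the true process: the mechanism model gives the first-principles prediction, and a radial basis function (RBF) network fitted to its residuals provides the data-driven compensation (a static least-squares fit here rather than the dynamic RBF network of the paper).

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_features(x, centers, width):
    """Gaussian RBF features for 1-D inputs."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

# Hypothetical "true" process and a simplified mechanism model of it.
def true_process(feed):         # e.g. fraction of product passing a target size
    return 0.6 + 0.3 * np.tanh(1.5 * (feed - 1.0)) + 0.05 * np.sin(4.0 * feed)

def mechanism_model(feed):      # mechanism model misses the periodic term
    return 0.6 + 0.3 * np.tanh(1.5 * (feed - 1.0))

# Fit an RBF compensation model to the mechanism-model residuals.
feed = rng.uniform(0.2, 2.0, 200)
residual = true_process(feed) - mechanism_model(feed)
centers = np.linspace(0.2, 2.0, 12)
Phi = rbf_features(feed, centers, width=0.15)
weights, *_ = np.linalg.lstsq(Phi, residual, rcond=None)

def hybrid_model(feed_new):
    return mechanism_model(feed_new) + rbf_features(feed_new, centers, 0.15) @ weights

test = np.linspace(0.3, 1.9, 9)
print(np.abs(hybrid_model(test) - true_process(test)).max())  # small hybrid-model error
```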

21 pages, 387 KiB  
Article
New Method to Recover Activation Energy: Application to Copper Oxidation
by Dominique Barchiesi and Thomas Grosges
Metals 2024, 14(9), 1066; https://doi.org/10.3390/met14091066 - 18 Sep 2024
Viewed by 1465
Abstract
The calculation of the activation energy helps to understand and identify the underlying phenomenon of oxidation. We propose a new method, free of any a priori hypothesis on the oxidation law, to retrieve the activation energy of partially and totally oxidized samples subjected to successive annealing. The method handles the uncertainties in the measurements of the metal and oxide thicknesses at the beginning and at the end of the annealing process. A possible change in the oxidation law during annealing is included in the model. By using an adapted Particle Swarm Optimization method to solve the inverse problem, we also calculate the time of final oxidation during the last annealing. We apply the method to successive annealings of three samples with initial nanometric layers of copper, at ambient pressure, in the open air. One, two and three successive laws are recovered from the experimental data. We found activation energy values of about 105–108 kJ mol⁻¹ at the beginning of the oxidation, 76–87 kJ mol⁻¹ in the second step, and finally 47–59 kJ mol⁻¹ in a third step. We also show that the time evolution of the copper and oxide thicknesses can be retrieved together with their uncertainties.
(This article belongs to the Special Issue Metallic Nanostructured Materials and Thin Films)
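For orientation, the underlying Arrhenius relation k = A exp(-Ea/(R T)) yields a textbook two-temperature estimate of the activation energy; the rates and temperatures below are made-up numbers, and the paper's inverse-problem method (which avoids assuming a specific oxidation law and uses Particle Swarm Optimization) is not reproduced here.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def activation_energy_two_point(k1, t1_kelvin, k2, t2_kelvin):
    """Ea from an Arrhenius rate ratio:  k = A exp(-Ea/(R T))  implies
    Ea = R * ln(k1/k2) / (1/T2 - 1/T1).  A textbook two-temperature estimate,
    not the uncertainty-aware inverse method of the paper."""
    return R * np.log(k1 / k2) / (1.0 / t2_kelvin - 1.0 / t1_kelvin)

# Hypothetical oxidation rates (e.g. nm of oxide per minute) at two anneal temperatures.
ea = activation_energy_two_point(k1=0.8, t1_kelvin=473.0, k2=0.05, t2_kelvin=423.0)
print(ea / 1e3, "kJ/mol")   # on the order of the values reported in the abstract
```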

26 pages, 5057 KiB  
Review
Artificial Intelligence Advancements for Accurate Groundwater Level Modelling: An Updated Synthesis and Review
by Saeid Pourmorad, Mostafa Kabolizade and Luca Antonio Dimuccio
Appl. Sci. 2024, 14(16), 7358; https://doi.org/10.3390/app14167358 - 21 Aug 2024
Cited by 2 | Viewed by 4889
Abstract
Artificial Intelligence (AI) methods, including Artificial Neural Networks (ANNs), Adaptive Neuro-Fuzzy Inference Systems (ANFISs), Support Vector Machines (SVMs), Deep Learning (DL), Genetic Programming (GP) and Hybrid Algorithms, have proven to be important tools for accurate groundwater level (GWL) modelling. Through an analysis of the results obtained in numerous articles published in high-impact journals during 2001–2023, this comprehensive review examines each method’s capabilities, their combinations, and critical considerations about selecting appropriate input parameters, using optimisation algorithms, and considering the natural physical conditions of the territories under investigation to improve the models’ accuracy. For example, ANN takes advantage of its ability to recognise complex patterns and non-linear relationships between input and output variables. In addition, ANFIS shows potential in processing diverse environmental data and offers higher accuracy than alternative methods such as ANN, SVM, and GP. SVM excels at efficiently modelling complex relationships and heterogeneous data. Meanwhile, DL methods, such as Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNNs), are crucial in improving prediction accuracy at different temporal and spatial scales. GP methods have also shown promise in modelling complex and nonlinear relationships in groundwater data, providing more accurate and reliable predictions when combined with optimisation techniques and uncertainty analysis. Therefore, integrating these methods and optimisation techniques (Hybrid Algorithms), tailored to specific hydrological and hydrogeological conditions, can significantly increase the predictive capability of GWL models and improve the planning and management of water resources. These findings emphasise the importance of thoroughly understanding (a priori) the functionalities and capabilities of each potentially beneficial AI-based methodology, along with the knowledge of the physical characteristics of the territory under investigation, to optimise GWL predictive models.
(This article belongs to the Special Issue Feature Review Papers in "Earth Sciences and Geography" Section)

25 pages, 8503 KiB  
Article
A Deep Learning Quantile Regression Photovoltaic Power-Forecasting Method under a Priori Knowledge Injection
by Xiaoying Ren, Yongqian Liu, Fei Zhang and Lingfeng Li
Energies 2024, 17(16), 4026; https://doi.org/10.3390/en17164026 - 14 Aug 2024
Cited by 5 | Viewed by 1358
Abstract
Accurate and reliable PV power probabilistic-forecasting results can help grid operators and market participants better understand and cope with the volatility and uncertainty of PV energy and improve the efficiency of energy dispatch and operation, which plays an important role in application scenarios such as power market trading, risk management, and grid scheduling. In this paper, an innovative deep learning quantile regression method for ultra-short-term PV power forecasting is proposed. This method employs a two-branch deep learning architecture to forecast the conditional quantiles of PV power; one branch is a QR-based stacked conventional convolutional neural network (QR_CNN), and the other is a QR-based temporal convolutional network (QR_TCN). The stacked CNN focuses on learning short-term local dependencies in PV power sequences, and the TCN learns long-term temporal constraints between multi-feature data. The two branches extract different features from input data with different prior knowledge. By jointly training the two branches, the model is able to learn the probability distribution of PV power and obtain discrete conditional quantile forecasts of PV power in the ultra-short term. Then, based on these conditional quantile forecasts, a kernel density estimation method is used to estimate the PV power probability density function. The proposed method employs a priori knowledge injection in two ways: a differential sequence of the historical power is constructed as an input feature to provide more information about the ultra-short-term dynamics of the PV power and is then divided, together with all the other features, into two sets of inputs containing different a priori features according to the demands of the forecasting task; and the dual-branch model architecture is designed to match the data of the two input-feature sets to the computational mechanisms of the corresponding branches. These two forms of a priori knowledge injection provide more effective features for the model and improve its forecasting performance and interpretability. The performance of the proposed model in point forecasting, interval forecasting, and probabilistic forecasting is comprehensively evaluated on a case from a real PV plant. The experimental results show that the proposed model performs well on ultra-short-term PV power probabilistic forecasting and outperforms other state-of-the-art deep learning models combined with QR in this field. The proposed method can provide technical support for application scenarios such as energy scheduling, market trading, and risk management on the ultra-short-term time scale of the power system.
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
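The quantile-regression core of such models is the pinball loss. The sketch below implements it in NumPy and checks numerically that minimising it over a constant predictor recovers the corresponding quantile of the data; the CNN/TCN branches and the kernel density step are not reproduced.

```python
import numpy as np

def pinball_loss(y_true, y_pred, quantile):
    """Quantile (pinball) loss used to train each quantile output:
    mean( max(q*(y - yhat), (q - 1)*(y - yhat)) )."""
    err = y_true - y_pred
    return np.mean(np.maximum(quantile * err, (quantile - 1.0) * err))

# Sanity check: for data ~ N(0, 1), the loss over candidate constant predictors
# is minimised near the corresponding Gaussian quantile.
rng = np.random.default_rng(0)
y = rng.standard_normal(100_000)
for q in (0.1, 0.5, 0.9):
    candidates = np.linspace(-3.0, 3.0, 601)
    losses = [pinball_loss(y, c, q) for c in candidates]
    print(q, candidates[int(np.argmin(losses))])   # approx. -1.28, 0.00, +1.28
```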

23 pages, 8620 KiB  
Article
Emission Rate Estimation of Industrial Air Pollutant Emissions Based on Mobile Observation
by Xinlei Cui, Qi Yu, Weichun Ma and Yan Zhang
Atmosphere 2024, 15(8), 969; https://doi.org/10.3390/atmos15080969 - 13 Aug 2024
Viewed by 1432
Abstract
Mobile observation has been widely used in the monitoring of air pollution. However, studies on pollution sources and emission characteristics based on mobile navigational observation are rarely reported in the literature. A method for the quantitative source analysis of industrial air pollutant emissions based on mobile observations is introduced in this paper, with NOx pollution identified in the mobile observations used as an example in developing the method. A dispersion modeling scheme that fine-tunes the meteorological parameters according to the actual meteorological conditions was adopted to minimize the impact of uncertainties in the meteorological conditions on the accuracy of small-scale dispersion modeling, and the agreement between simulated and observed concentrations was effectively improved through this optimization search. To meet the efficiency requirements of source resolution for multiple sources, a random search algorithm was first used to generate candidate solution samples, and the solution samples were then evaluated and optimized. Meanwhile, a new index, Smatch, was established to evaluate the quality of candidate samples, considering both the numerical error and the spatial distribution error of the concentration, in order to address the non-uniqueness of the solution in the multi-source problem. The necessity of considering the spatial distribution error of the concentration is then analyzed with the case study. The average NOx emission rates for the two study cases were calculated as 69.8 g/s and 70.8 g/s, with Smatch scores of 0.92–0.97 and 0.92–0.99. The results were close to the online monitoring data, indicating that this kind of pollutant emission monitoring based on mobile observation is preliminarily feasible. Additional analysis and clarifications are provided in the discussion section on the impact of uncertainties in meteorological conditions, the establishment of a priori emission inventories, and the interpretation of the inverse calculation results.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
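Because concentration is linear in the emission rate, a single-source rate can be recovered from mobile observations by least squares against a unit-rate dispersion run. The sketch below uses a textbook Gaussian plume with fixed, assumed dispersion parameters as a stand-in for the fine-tuned dispersion modeling scheme; the random-search multi-source step and the Smatch index are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def plume_unit_concentration(y, z, u=3.0, stack_h=30.0, sy=25.0, sz=12.0):
    """Concentration per unit emission rate (Q = 1) from a point source,
    standard Gaussian plume form with ground reflection.  sy and sz are fixed
    here, whereas in practice they depend on downwind distance and stability."""
    return (1.0 / (2.0 * np.pi * u * sy * sz)) * np.exp(-y**2 / (2.0 * sy**2)) * (
        np.exp(-(z - stack_h)**2 / (2.0 * sz**2))
        + np.exp(-(z + stack_h)**2 / (2.0 * sz**2)))

# Mobile-observation transect: crosswind offsets at a receptor height of 2 m.
y_obs = np.array([-60.0, -30.0, 0.0, 30.0, 60.0])
c_unit = plume_unit_concentration(y_obs, z=2.0)

# Synthetic "observed" concentrations for a true emission rate of 70 g/s plus noise.
c_obs = 70.0 * c_unit + rng.normal(0.0, 0.02 * c_unit.max(), y_obs.size)

# Least-squares emission rate (concentration is linear in Q).
q_hat = float(c_obs @ c_unit / (c_unit @ c_unit))
print(q_hat)   # close to 70 g/s
```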

20 pages, 4362 KiB  
Article
Study on the Fast Search Planning Problem of Lost Targets for Maritime Emergency Response Based on an Improved Adaptive Immunogenetic Algorithm
by Tianyue Yu, Yasheng Zhang and Jie Yang
Sensors 2024, 24(12), 3904; https://doi.org/10.3390/s24123904 - 17 Jun 2024
Cited by 2 | Viewed by 1146
Abstract
This study investigates the problem of rapid search planning for moving targets in maritime emergencies using an improved adaptive immune genetic algorithm. Given the complexity and uncertainty inherent in searching for moving targets in maritime emergency situations, a task planning method based on the improved adaptive immunogenetic algorithm (IAIGA) is proposed to enhance search efficiency and accuracy. The method utilizes a priori information to construct the potential regions of the target and the distribution probability within each region, and it establishes a “prediction-scheduling” search strategy model, planning a rapid search task for lost targets based on the overlapping probability through the IAIGA. By incorporating an immune mechanism, the algorithm enhances its global search capability and robustness. Additionally, the adaptive strategy enables dynamic adjustment of the algorithm’s parameters to accommodate varying search scenarios. The experimental results demonstrate that the proposed IAIGA significantly outperforms traditional methods, providing higher search speeds and more accurate search results in the context of maritime emergency response. These findings offer effective technical support for maritime emergency operations.
(This article belongs to the Section Intelligent Sensors)
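A compact sketch of an adaptive-mutation genetic algorithm applied to a classical search-effort allocation objective (regional priors times an exponential detection law); the prior probabilities, effort budget, and adaptive-mutation schedule are illustrative assumptions, not the IAIGA or the "prediction-scheduling" model of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# A priori probability that the lost target lies in each of 6 candidate regions.
prior = np.array([0.30, 0.25, 0.15, 0.12, 0.10, 0.08])
TOTAL_EFFORT = 6.0          # assumed total search-effort budget

def detection_prob(effort):
    """Exponential detection law: P = sum_j prior_j * (1 - exp(-effort_j))."""
    return float(np.sum(prior * (1.0 - np.exp(-effort))))

def normalise(effort):
    effort = np.clip(effort, 1e-6, None)
    return effort * TOTAL_EFFORT / effort.sum()

def adaptive_ga(pop_size=40, generations=120, pm_max=0.3, pm_min=0.05):
    pop = np.array([normalise(ind) for ind in rng.random((pop_size, prior.size))])
    for _ in range(generations):
        fit = np.array([detection_prob(ind) for ind in pop])
        f_max, f_avg = fit.max(), fit.mean()
        new_pop = [pop[int(np.argmax(fit))].copy()]              # elitism
        while len(new_pop) < pop_size:
            a, b = rng.integers(pop_size, size=2), rng.integers(pop_size, size=2)
            p1 = pop[a[0]] if fit[a[0]] > fit[a[1]] else pop[a[1]]   # tournament selection
            p2 = pop[b[0]] if fit[b[0]] > fit[b[1]] else pop[b[1]]
            w = rng.random()
            child = normalise(w * p1 + (1.0 - w) * p2)               # arithmetic crossover
            # Adaptive mutation: fitter-than-average children mutate less.
            f_child = detection_prob(child)
            if f_max > f_avg and f_child >= f_avg:
                pm = pm_min + (pm_max - pm_min) * (f_max - f_child) / (f_max - f_avg)
            else:
                pm = pm_max
            mask = rng.random(child.size) < pm
            child[mask] += rng.normal(0.0, 0.5, int(mask.sum()))
            new_pop.append(normalise(child))
        pop = np.array(new_pop)
    fit = np.array([detection_prob(ind) for ind in pop])
    return pop[int(np.argmax(fit))], float(fit.max())

best_alloc, best_p = adaptive_ga()
print(best_alloc.round(2), round(best_p, 3))   # more effort assigned to high-prior regions
```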

13 pages, 675 KiB  
Article
SVD-Based Parameter Identification of Discrete-Time Stochastic Systems with Unknown Exogenous Inputs
by Andrey Tsyganov and Yulia Tsyganova
Mathematics 2024, 12(7), 1006; https://doi.org/10.3390/math12071006 - 28 Mar 2024
Cited by 1 | Viewed by 1363
Abstract
This paper addresses the problem of parameter identification for discrete-time stochastic systems with unknown exogenous inputs. These systems form an important class of dynamic stochastic system models used to describe objects and processes under a high level of a priori uncertainty, when it is not possible to make any assumptions about the evolution of the unknown input signal or its statistical properties. The main purpose of this paper is to construct a new SVD-based modification of the existing Gillijns and De Moor filtering algorithm for linear discrete-time stochastic systems with unknown exogenous inputs. Using the theoretical results obtained, we demonstrate how this modified algorithm can be applied to solve the parameter identification problem. The results of our numerical experiments conducted in MATLAB confirm the effectiveness of the developed SVD-based parameter identification method under conditions of unknown exogenous inputs, compared with maximum likelihood parameter identification when the exogenous inputs are known.
(This article belongs to the Special Issue New Trends on Identification of Dynamic Systems)
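The key unknown-input estimation step in filters of the Gillijns-De Moor type can be sketched as weighted least squares on the innovation. The code below shows only that step under the usual rank condition on C G, with an arbitrary toy system; it is not the complete Gillijns-De Moor recursion, nor its SVD-based reformulation.

```python
import numpy as np

def estimate_unknown_input(y, x_pred, P_pred, C, G, R):
    """One-step unknown-input estimate for  x_k = A x_{k-1} + G d_{k-1} + w,
    y_k = C x_k + v.  The innovation  y - C x_pred  is regressed on F = C G by
    weighted least squares (Gauss-Markov), the core idea behind unknown-input
    Kalman filtering; a simplified sketch, not the full algorithm."""
    F = C @ G
    S = C @ P_pred @ C.T + R                  # innovation covariance
    S_inv = np.linalg.inv(S)
    innov = y - C @ x_pred
    M = np.linalg.inv(F.T @ S_inv @ F) @ F.T @ S_inv
    d_hat = M @ innov                         # unknown-input estimate
    return d_hat, x_pred + G @ d_hat          # input estimate and corrected prediction

# Tiny example: 2-state system, scalar unknown input entering the 2nd state.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
G = np.array([[0.0], [0.1]])
C = np.eye(2)
R = 0.01 * np.eye(2)
P_pred = 0.1 * np.eye(2)

x_prev = np.array([1.0, 0.5])
d_true = np.array([4.0])                                  # unknown exogenous input
y = A @ x_prev + G @ d_true + np.array([0.005, -0.003])   # noisy measurement

d_hat, x_corr = estimate_unknown_input(y, A @ x_prev, P_pred, C, G, R)
print(d_hat)   # close to the true unknown input of 4.0
```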

18 pages, 8692 KiB  
Article
Object Detection and Tracking with YOLO and the Sliding Innovation Filter
by Alexander Moksyakov, Yuandi Wu, Stephen Andrew Gadsden, John Yawney and Mohammad AlShabi
Sensors 2024, 24(7), 2107; https://doi.org/10.3390/s24072107 - 26 Mar 2024
Cited by 9 | Viewed by 4487
Abstract
Object detection and tracking are pivotal tasks in machine learning, particularly within the domain of computer vision technologies. Despite significant advancements in object detection frameworks, challenges persist in real-world tracking scenarios, including object interactions, occlusions, and background interference. Many algorithms have been proposed to carry out such tasks; however, most struggle to perform well in the face of disturbances and uncertain environments. This research proposes a novel approach that integrates the You Only Look Once (YOLO) architecture for object detection with a robust filter for target tracking, addressing issues of disturbances and uncertainties. The YOLO architecture, known for its real-time object detection capabilities, is employed for initial object detection and centroid location. In combination with the detection framework, the sliding innovation filter, a novel robust filter, is implemented to improve tracking reliability in the face of disturbances. Specifically, the sliding innovation filter estimates the optimal centroid location in each frame and updates the object’s trajectory. Target tracking traditionally relies on estimation theory techniques such as the Kalman filter, and the sliding innovation filter is introduced as a robust alternative that is particularly suitable for scenarios where a priori information about the system dynamics and noise is limited. Experimental simulations in a surveillance scenario demonstrate that the sliding innovation filter-based tracking approach outperforms existing Kalman-based methods, especially in the presence of disturbances. Overall, this research contributes a practical and effective approach to object detection and tracking, addressing challenges in real-world, dynamic environments. The comparative analysis with traditional filters provides practical insights, laying the groundwork for future work aimed at advancing multi-object detection and tracking capabilities in diverse applications.
(This article belongs to the Section Intelligent Sensors)
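A minimal sketch of the sliding innovation filter (SIF) measurement update applied to centroid tracking, using a gain of the form pinv(C) times a saturated innovation term; the boundary-layer width, the random-walk motion model, and the synthetic detections are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(5)

def sif_update(x_pred, z, C, delta):
    """Sliding innovation filter (SIF) measurement update,
    K = pinv(C) @ diag(sat(|innovation| / delta)): corrections are proportional
    to the innovation inside the boundary layer of width delta and switch to
    full corrections outside it.  A minimal sketch with an assumed delta."""
    innov = z - C @ x_pred
    sat = np.clip(np.abs(innov) / delta, 0.0, 1.0)
    K = np.linalg.pinv(C) @ np.diag(sat)
    return x_pred + K @ innov

# Track a detected centroid (e.g. from YOLO) with a simple random-walk state model.
C = np.eye(2)
delta = np.array([4.0, 4.0])          # boundary-layer widths in pixels (assumed)
true = np.cumsum(np.full((80, 2), [2.0, 1.0]), axis=0) + 100.0   # drifting target
detections = true + rng.normal(0.0, 1.5, true.shape)             # noisy centroids

x = detections[0].copy()
track = []
for z in detections:
    x_pred = x                        # random-walk prediction (no motion model)
    x = sif_update(x_pred, z, C, delta)
    track.append(x.copy())
track = np.array(track)
print(np.abs(track[-20:] - true[-20:]).mean())   # bounded tracking error (a few pixels)
```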
