Search Results (1,372)

Search Parameters:
Keywords = probabilistic distributions

36 pages, 7335 KiB  
Article
COLREGs-Compliant Distributed Stochastic Search Algorithm for Multi-Ship Collision Avoidance
by Bohan Zhang, Jinichi Koue, Tenda Okimoto and Katsutoshi Hirayama
J. Mar. Sci. Eng. 2025, 13(8), 1402; https://doi.org/10.3390/jmse13081402 - 23 Jul 2025
Abstract
The increasing complexity of maritime traffic imposes growing demands on the safety and rationality of ship-collision-avoidance decisions. While most existing research focuses on simple encounter scenarios, autonomous collision-avoidance strategies that comply with the International Regulations for Preventing Collisions at Sea (COLREGs) in complex multi-ship environments remain insufficiently investigated. To address this gap, this study proposes a novel collision-avoidance framework that integrates a quantitative COLREGs analysis with a distributed stochastic search mechanism. The framework consists of three core components: encounter identification, safety assessment, and stage classification. A cost function is employed to balance safety, COLREGs compliance, and navigational efficiency, incorporating a distance-based weighting factor to modulate the influence of each target vessel. The use of a distributed stochastic search algorithm enables decentralized decision-making through localized information sharing and probabilistic updates. Extensive simulations conducted across a variety of scenarios demonstrate that the proposed method can rapidly generate effective collision-avoidance strategies that fully comply with COLREGs. Comprehensive evaluations in terms of safety, navigational efficiency, COLREGs adherence, and real-time computational performance further validate the method’s strong adaptability and its promising potential for practical application in complex multi-ship environments.
(This article belongs to the Special Issue Maritime Security and Risk Assessments—2nd Edition)
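
A minimal sketch can make the distributed stochastic search idea concrete. The Python toy below runs synchronous DSA-style rounds in which each ship best-responds to its neighbors' current choices with some probability; the cost function, heading candidates, and distance weights are invented placeholders, not the paper's COLREGs-based formulation.

```python
import random

# Toy DSA-style round for multi-ship collision avoidance. The cost
# function, heading candidates, and weights below are invented
# placeholders, NOT the paper's COLREGs-based cost.

HEADING_CHANGES = [-30, -15, 0, 15, 30]   # candidate course alterations (deg)
P_UPDATE = 0.7                            # probability of adopting a better move

def local_cost(own_change, neighbor_changes, weights):
    """Invented local cost: penalize maneuvers similar to those of
    heavily weighted (i.e., nearby) neighbors, plus course deviation."""
    risk = sum(w * max(0.0, 30.0 - abs(own_change - nc))
               for nc, w in zip(neighbor_changes, weights))
    return risk + 0.5 * abs(own_change)

def dsa_round(decisions, neighbors, weights):
    """One synchronous round: every ship best-responds with prob. P_UPDATE."""
    new_decisions = dict(decisions)
    for ship, current in decisions.items():
        nbr = [decisions[n] for n in neighbors[ship]]
        best = min(HEADING_CHANGES, key=lambda c: local_cost(c, nbr, weights[ship]))
        if best != current and random.random() < P_UPDATE:
            new_decisions[ship] = best
    return new_decisions

# Three ships in a fully connected encounter; distance weights are made up.
decisions = {"A": 0, "B": 0, "C": 0}
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
weights = {s: [1.0, 0.6] for s in decisions}
for _ in range(20):
    decisions = dsa_round(decisions, neighbors, weights)
print(decisions)
```

The probabilistic update is the point: accepting the best response only with probability P_UPDATE keeps neighboring ships from switching maneuvers in lockstep and oscillating.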

18 pages, 1412 KiB  
Article
Graph-Regularized Orthogonal Non-Negative Matrix Factorization with Itakura–Saito (IS) Divergence for Fault Detection
by Yabing Liu, Juncheng Wu, Jin Zhang and Man-Fai Leung
Mathematics 2025, 13(15), 2343; https://doi.org/10.3390/math13152343 - 23 Jul 2025
Abstract
In modern industrial environments, quickly and accurately identifying faults is crucial for ensuring the smooth operation of production processes. Non-negative Matrix Factorization (NMF)-based fault detection technology has garnered attention due to its wide application in industrial process monitoring and machinery fault diagnosis. As an effective dimensionality reduction tool, NMF can decompose complex datasets into non-negative matrices with practical and physical significance, thereby extracting key features of the process. This paper presents a novel approach to fault detection in industrial processes, called Graph-Regularized Orthogonal Non-negative Matrix Factorization with Itakura–Saito Divergence (GONMF-IS). The proposed method addresses the challenges of fault detection in complex, non-Gaussian industrial environments. By using Itakura–Saito divergence, GONMF-IS effectively handles data with probabilistic distribution characteristics, improving the model’s ability to process non-Gaussian data. Additionally, graph regularization leverages the structural relationships among data points to refine the matrix factorization process, enhancing the robustness and adaptability of the algorithm. The incorporation of orthogonality constraints further enhances the independence and interpretability of the resulting factors. Through extensive experiments, the GONMF-IS method demonstrates superior performance in fault detection tasks, providing an effective and reliable tool for industrial applications. The results suggest that GONMF-IS offers significant improvements over traditional methods, offering a more robust and accurate solution for fault diagnosis in complex industrial settings.
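
For readers who want the computational core, the standard multiplicative updates for NMF under Itakura–Saito divergence (the β-divergence with β = 0) fit in a few lines. This sketch omits the graph-regularization and orthogonality terms that distinguish GONMF-IS, and runs on a random stand-in for process data.

```python
import numpy as np

# Minimal NMF with Itakura-Saito divergence via the standard multiplicative
# updates (beta-divergence, beta = 0). GONMF-IS additionally imposes graph
# regularization and orthogonality constraints, omitted here.

rng = np.random.default_rng(0)

def is_nmf(V, rank, n_iter=200, eps=1e-10):
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH**2)) / (W.T @ (1.0 / WH) + eps)
        WH = W @ H + eps
        W *= ((V / WH**2) @ H.T) / ((1.0 / WH) @ H.T + eps)
    return W, H

V = rng.random((40, 100)) + 0.1   # surrogate for nonnegative process data
W, H = is_nmf(V, rank=5)
WH = W @ H
is_div = np.sum(V / WH - np.log(V / WH) - 1)   # Itakura-Saito divergence
print(f"IS divergence after fitting: {is_div:.3f}")
```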

18 pages, 4165 KiB  
Article
Localization and Pixel-Confidence Network for Surface Defect Segmentation
by Yueyou Wang, Zixuan Xu, Li Mei, Ruiqing Guo, Jing Zhang, Tingbo Zhang and Hongqi Liu
Sensors 2025, 25(15), 4548; https://doi.org/10.3390/s25154548 - 23 Jul 2025
Abstract
Surface defect segmentation based on deep learning has been widely applied in industrial inspection. However, two major challenges persist in specific application scenarios: first, the imbalanced area distribution between defects and the background leads to degraded segmentation performance; second, fine gaps within defects are prone to over-segmentation. To address these issues, this study proposes a two-stage image segmentation network that integrates a Defect Localization Module and a Pixel Confidence Module. In the first stage, the Defect Localization Module performs a coarse localization of defect regions and embeds the resulting feature vectors into the backbone of the second stage. In the second stage, the Pixel Confidence Module captures the probabilistic distribution of neighboring pixels, thereby refining the initial predictions. Experimental results demonstrate that the improved network achieves gains of 1.58% ± 0.80% in mPA and 1.35% ± 0.77% in mIoU on the self-built Carbon Fabric Defect Dataset, and 2.66% ± 1.12% in mPA and 1.44% ± 0.79% in mIoU on the public Magnetic Tile Defect Dataset, compared to the baseline network. These enhancements translate to more reliable automated quality assurance in industrial production environments.
(This article belongs to the Section Fault Diagnosis & Sensors)
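
The second-stage idea, refining uncertain predictions using the probability distributions of neighboring pixels, can be caricatured with a simple NumPy sketch. The paper's module is learned; the 3×3 box blend, confidence threshold, and blend weight here are invented for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's learned module): blend low-confidence
# pixels toward the average class distribution of their 3x3 neighborhood,
# i.e., use neighboring pixels' probabilistic information to clean up a
# segmentation probability map. Threshold and blend weight are invented.

def refine(probs, conf_thresh=0.7, blend=0.5):
    """probs: (H, W, C) softmax output; returns a refined copy."""
    H, W, C = probs.shape
    padded = np.pad(probs, ((1, 1), (1, 1), (0, 0)), mode="edge")
    # 3x3 box average of the class distribution around every pixel.
    neigh = sum(padded[i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0
    conf = probs.max(axis=-1, keepdims=True)           # per-pixel confidence
    alpha = np.where(conf < conf_thresh, blend, 0.0)   # touch only uncertain pixels
    out = (1 - alpha) * probs + alpha * neigh
    return out / out.sum(axis=-1, keepdims=True)       # renormalize to sum to 1

rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(2), size=(8, 8))         # fake 2-class probability map
print(refine(probs).shape)                             # (8, 8, 2)
```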

18 pages, 5079 KiB  
Article
Graph Representation Learning on Street Networks
by Mateo Neira and Roberto Murcio
ISPRS Int. J. Geo-Inf. 2025, 14(8), 284; https://doi.org/10.3390/ijgi14080284 - 22 Jul 2025
Abstract
Street networks provide an invaluable source of information about the different temporal and spatial patterns emerging in our cities. These streets are often represented as graphs where intersections are modeled as nodes and streets as edges between them. Previous work has shown that low-dimensional representations of the original data can be created through a learning algorithm on raster representations of the street networks; models trained this way through convolutional neural networks can capture high-level urban network metrics. However, the detailed topological data is lost through the rasterization of the street network, and the models cannot recover this information from the image alone, failing to capture complex street network features. This paper proposes a model capable of inferring good representations directly from the street network. Specifically, we use a variational autoencoder with graph convolutional layers and a decoder that generates a probabilistic, fully connected graph to learn latent representations that encode both local network structure and the spatial distribution of nodes. We train the model on thousands of street network segments and use the learned representations to generate synthetic street configurations. Finally, we propose a possible application: classifying the urban morphology of different network segments and investigating their common characteristics in the learned space.
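
The architecture described here follows the variational graph autoencoder pattern, which can be sketched compactly in PyTorch: a graph-convolutional encoder produces per-node Gaussian latents, and an inner-product decoder scores every node pair, yielding a probabilistic, fully connected adjacency. The layer sizes, normalization, and toy 4-node segment below are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

# Minimal variational graph autoencoder sketch: GCN encoder, inner-product
# decoder over all node pairs (a probabilistic, fully connected adjacency).
# Node features here are intersection (x, y) coordinates; all sizes invented.

class GCNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, a_norm):          # a_norm: normalized adjacency
        return torch.relu(self.lin(a_norm @ x))

class GraphVAE(nn.Module):
    def __init__(self, d_in, d_hid, d_lat):
        super().__init__()
        self.gc = GCNLayer(d_in, d_hid)
        self.mu = nn.Linear(d_hid, d_lat)
        self.logvar = nn.Linear(d_hid, d_lat)

    def forward(self, x, a_norm):
        h = self.gc(x, a_norm)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        edge_logits = z @ z.T               # dense pairwise edge scores
        return edge_logits, mu, logvar

def loss_fn(edge_logits, adj, mu, logvar):
    rec = nn.functional.binary_cross_entropy_with_logits(edge_logits, adj)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# Toy 4-node street segment: features = intersection coordinates.
adj = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 0], [0, 1, 0, 0]])
a_hat = adj + torch.eye(4)
deg = a_hat.sum(1)
a_norm = a_hat / torch.outer(deg.sqrt(), deg.sqrt())   # symmetric normalization
x = torch.rand(4, 2)
model = GraphVAE(2, 16, 8)
logits, mu, logvar = model(x, a_norm)
print(loss_fn(logits, adj, mu, logvar).item())
```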

18 pages, 774 KiB  
Article
Bayesian Inertia Estimation via Parallel MCMC Hammer in Power Systems
by Weidong Zhong, Chun Li, Minghua Chu, Yuanhong Che, Shuyang Zhou, Zhi Wu and Kai Liu
Energies 2025, 18(15), 3905; https://doi.org/10.3390/en18153905 - 22 Jul 2025
Abstract
The stability of modern power systems has become critically dependent on precise inertia estimation of synchronous generators, particularly as renewable energy integration fundamentally transforms grid dynamics. Increasing penetration of converter-interfaced renewable resources reduces system inertia, heightening the grid’s susceptibility to transient disturbances and creating significant technical challenges in maintaining operational reliability. This paper addresses these challenges through a novel Bayesian inference framework that integrates PMU data with an advanced MCMC sampling technique, specifically the Affine-Invariant Ensemble Sampler. The proposed methodology establishes a probabilistic estimation paradigm that systematically combines prior engineering knowledge with real-time measurements, while the Affine-Invariant Ensemble Sampler overcomes high-dimensional computational barriers through its ensemble-based exploration strategy featuring stretch moves and parallel walker coordination. The framework’s ability to provide full posterior distributions of inertia parameters, rather than single-point estimates, supports stability assessment in renewable-dominated grids. Simulation results on the IEEE 39-bus and 68-bus benchmark systems validate the effectiveness and scalability of the proposed method, with inertia estimation errors consistently maintained below 1% across all generators. Moreover, the parallelized implementation of the algorithm significantly outperforms the conventional M-H method in computational efficiency. Specifically, the proposed approach reduces execution time by approximately 52% in the 39-bus system and by 57% in the 68-bus system, demonstrating its suitability for real-time and large-scale power system applications.
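
The Affine-Invariant Ensemble Sampler is available off the shelf as the emcee package (the "MCMC Hammer" of the title). A toy version of the inference, recovering a single inertia constant from noisy rate-of-change-of-frequency measurements through the swing-equation relation rocof = ΔP/(2H), might look like the following; every number (ΔP, noise level, prior range) is invented.

```python
import numpy as np
import emcee  # affine-invariant ensemble sampler with stretch moves

# Toy Bayesian inertia estimation: infer one inertia constant H from noisy
# RoCoF measurements after a known power imbalance, via rocof = dP / (2 H).
# The paper works with full multi-machine systems and PMU data; all numeric
# values here (dP, noise, prior bounds) are invented for illustration.

rng = np.random.default_rng(42)
H_TRUE, DP, SIGMA = 4.0, 0.1, 0.002          # p.u. values, made up
rocof_meas = DP / (2 * H_TRUE) + SIGMA * rng.standard_normal(50)

def log_prob(theta):
    (h,) = theta
    if not 0.5 < h < 20.0:                   # flat prior on a plausible range
        return -np.inf
    resid = rocof_meas - DP / (2 * h)
    return -0.5 * np.sum((resid / SIGMA) ** 2)

nwalkers, ndim = 16, 1
p0 = 3.0 + 0.5 * rng.random((nwalkers, ndim))   # walkers start in the prior
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print(f"posterior H = {samples.mean():.3f} +/- {samples.std():.3f}")
```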

27 pages, 532 KiB  
Article
Bayesian Binary Search
by Vikash Singh, Matthew Khanzadeh, Vincent Davis, Harrison Rush, Emanuele Rossi, Jesse Shrader and Pietro Liò
Algorithms 2025, 18(8), 452; https://doi.org/10.3390/a18080452 - 22 Jul 2025
Abstract
We present Bayesian Binary Search (BBS), a novel framework that bridges statistical learning theory/probabilistic machine learning and binary search. BBS utilizes probabilistic methods to learn the underlying probability density of the search space. This learned distribution then informs a modified bisection strategy, where the split point is determined by probability density rather than the conventional midpoint. This learning process for search space density estimation can be achieved through various supervised probabilistic machine learning techniques (e.g., Gaussian Process Regression, Bayesian Neural Networks, and Quantile Regression) or unsupervised statistical learning algorithms (e.g., Gaussian Mixture Models, Kernel Density Estimation (KDE), and Maximum Likelihood Estimation (MLE)). Our results demonstrate substantial efficiency improvements using BBS on both synthetic data with diverse distributions and in a real-world scenario involving Bitcoin Lightning Network channel balance probing (3–6% efficiency gain), where BBS is currently in production.
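
The core of BBS is a one-line change to bisection: split at the median of the learned density's mass restricted to the current interval, rather than at the midpoint. A sketch, with a fitted normal standing in for whatever density estimator (GP, KDE, MLE, ...) is used in practice:

```python
import numpy as np
from scipy import stats

# Density-informed bisection: each probe removes half of the remaining
# probability mass of the learned search-space density, not half of the
# interval. The normal density below is an assumed stand-in.

def bbs_search(is_left_of, lo, hi, dist, tol=1e-6, max_iter=200):
    """Find the hidden point x* with oracle is_left_of(q) = (x* < q)."""
    for _ in range(max_iter):
        p_lo, p_hi = dist.cdf(lo), dist.cdf(hi)
        if hi - lo < tol or p_hi - p_lo <= 0:
            break
        split = dist.ppf(0.5 * (p_lo + p_hi))   # median of mass in [lo, hi]
        if is_left_of(split):
            hi = split
        else:
            lo = split
    return 0.5 * (lo + hi)

rng = np.random.default_rng(7)
density = stats.norm(loc=0.3, scale=0.05)   # learned density (assumed known)
target = density.rvs(random_state=rng)      # hidden point drawn from it
found = bbs_search(lambda q: target < q, 0.0, 1.0, density)
print(f"target={target:.6f} found={found:.6f}")
```

When the density is sharply concentrated, each probe halves the remaining probability mass rather than the interval length, which is where the measured efficiency gain comes from.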

13 pages, 272 KiB  
Article
Asymptotic Behavior of the Bayes Estimator of a Regression Curve
by Agustín G. Nogales
Mathematics 2025, 13(14), 2319; https://doi.org/10.3390/math13142319 - 21 Jul 2025
Abstract
In this work, we prove that the L¹ and L² estimation errors of the Bayes estimator of a regression curve (i.e., the conditional expectation of the response variable given the regressor) converge to 0. The strong consistency of the estimator is also derived. The Bayes estimator of a regression curve is the regression curve with respect to the posterior predictive distribution. The result is general enough to cover discrete and continuous cases, parametric or nonparametric, and no specific supposition is made about the prior distribution. Some examples, two of them of a nonparametric nature, are given to illustrate the main result; one of the nonparametric examples exhibits a situation where the estimation of the regression curve has an optimal solution, although the problem of estimating the density is meaningless. An important step in the proof of these results is the establishment of a probability space as an adequate framework for addressing the problem of estimating regression curves from the Bayesian point of view, which puts powerful probabilistic tools at our disposal.
(This article belongs to the Section D1: Probability and Statistics)
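
A worked toy example helps pin down the object being studied. For the conjugate model Y | x, θ ~ N(θx, 1) with prior θ ~ N(0, 1), the posterior predictive mean of Y given x is E[θ | data]·x, so the Bayes estimator of the regression curve is the line whose slope is the posterior mean; the model choice is ours, purely for illustration.

```python
import numpy as np

# Toy conjugate example of the Bayes estimator of a regression curve: for
# Y | x, theta ~ N(theta * x, 1) with prior theta ~ N(0, 1), the posterior
# is N(m, v) with v = 1 / (1 + sum x_i^2) and m = v * sum x_i y_i, and the
# posterior predictive regression curve is x -> m * x.

rng = np.random.default_rng(3)
theta_true = 1.7
x = rng.uniform(-2, 2, size=200)
y = theta_true * x + rng.standard_normal(200)

v = 1.0 / (1.0 + np.sum(x ** 2))     # posterior variance of theta
m = v * np.sum(x * y)                # posterior mean of theta

def bayes_regression_curve(x_new):
    """Posterior predictive mean of Y at x_new: the Bayes estimate."""
    return m * x_new

grid = np.linspace(-2, 2, 5)
print("posterior mean slope:", round(m, 4))
print("estimated curve on grid:", np.round(bayes_regression_curve(grid), 3))
```
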
32 pages, 1444 KiB  
Article
Enhancing Airport Resource Efficiency Through Statistical Modeling of Heavy-Tailed Service Durations: A Case Study on Potable Water Trucks
by Changcheng Li, Minghua Hu, Yuxin Hu, Zheng Zhao and Yanjun Wang
Aerospace 2025, 12(7), 643; https://doi.org/10.3390/aerospace12070643 - 21 Jul 2025
Abstract
In airport operations management, accurately estimating the service durations of ground support equipment such as Potable Water Trucks (PWTs) is essential for improving resource allocation efficiency and ensuring timely aircraft turnaround. Traditional estimation methods often use fixed averages or assume normal distributions, failing to capture real-world variability and extreme scenarios effectively. To address these limitations, this study performs a comprehensive statistical analysis of PWT service durations using operational data from Beijing Daxing International Airport (ZBAD) and Shanghai Pudong International Airport (ZSPD). Employing chi-square goodness-of-fit tests, twenty probability distributions—including several heavy-tailed candidates—were rigorously evaluated under segmented scenarios, such as peak versus non-peak periods, varying temperature conditions, and different aircraft sizes. Results reveal that heavy-tailed distributions offer context-dependent advantages: the stable distribution exhibits superior modeling performance during peak operational periods, whereas the Burr distribution excels under non-peak conditions. Interestingly, contrary to existing operational assumptions, service durations at extremely high and low temperatures showed no significant statistical differences, prompting a reconsideration of temperature-dependent planning practices. Additionally, analysis by aircraft category showed that the Burr distribution best described service durations for large aircraft, while stable and log-logistic distributions were optimal for medium-sized aircraft. Numerical simulations confirmed these findings, demonstrating that the proposed heavy-tailed probabilistic models significantly improved resource prediction accuracy, reducing estimation errors by 13% to 25% compared to conventional methods. This research uniquely demonstrates the practical effectiveness of employing context-sensitive heavy-tailed distributions, substantially enhancing resource efficiency and operational reliability in airport ground handling management.
(This article belongs to the Section Air Traffic and Transportation)
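
The model-selection machinery is standard enough to sketch: fit each candidate with scipy.stats and compare chi-square statistics on common bins. The synthetic sample below stands in for real service-duration records, and the candidate set is a small subset of the twenty distributions evaluated in the paper.

```python
import numpy as np
from scipy import stats

# Sketch of the model-selection step: fit candidate distributions
# (including heavy-tailed ones) to service durations and compare
# chi-square goodness-of-fit statistics on equal-count bins. The
# synthetic durations stand in for real potable-water-truck records.

rng = np.random.default_rng(11)
durations = stats.burr(c=3.0, d=1.2, scale=12.0).rvs(500, random_state=rng)

candidates = {
    "burr": stats.burr,
    "stable": stats.levy_stable,    # note: stable MLE fitting is slow
    "log-logistic": stats.fisk,
    "normal": stats.norm,
}

edges = np.quantile(durations, np.linspace(0, 1, 11))   # 10 equal-count bins
observed, _ = np.histogram(durations, bins=edges)

for name, dist in candidates.items():
    params = dist.fit(durations)
    expected = len(durations) * np.diff(dist.cdf(edges, *params))
    chi2 = np.sum((observed - expected) ** 2 / expected)
    print(f"{name:>12}: chi-square = {chi2:9.1f}")
```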

19 pages, 1167 KiB  
Article
A Reservoir Group Flood Control Operation Decision-Making Risk Analysis Model Considering Indicator and Weight Uncertainties
by Tangsong Luo, Xiaofeng Sun, Hailong Zhou, Yueping Xu and Yu Zhang
Water 2025, 17(14), 2145; https://doi.org/10.3390/w17142145 - 18 Jul 2025
Abstract
Reservoir group flood control scheduling decision-making faces multiple uncertainties, such as dynamic fluctuations of evaluation indicators and conflicts in weight assignment. This study proposes a risk analysis model for the decision-making process: capturing the temporal uncertainties of flood control indicators (such as reservoir maximum water level and downstream control section flow) through the Long Short-Term Memory (LSTM) network, constructing a feasible weight space including four scenarios (unique fixed value, uniform distribution, etc.), resolving conflicts among the weight results from four methods (Analytic Hierarchy Process (AHP), Entropy Weight, Criteria Importance Through Intercriteria Correlation (CRITIC), Principal Component Analysis (PCA)) using game theory, defining decision-making risk as the probability that the actual safety level fails to reach the evaluation threshold, and quantifying risks based on the First-Order Second-Moment (FOSM) method. Case verification in the cascade reservoirs of the Qiantang River Basin of China shows that the model provides a risk assessment framework integrating multi-source uncertainties for flood control scheduling decisions through probabilistic description of indicator uncertainties (e.g., Zmax1 with μ = 65.3 and σ = 8.5) and definition of weight feasible regions (99% weight distribution covered by the 3σ criterion), filling the methodological gap in risk quantification during the decision-making process in existing research.
(This article belongs to the Special Issue Flood Risk Identification and Management, 2nd Edition)
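
The FOSM risk definition reduces to a normal tail probability once the indicator's mean and standard deviation are in hand. A minimal sketch, reusing the abstract's example moments (μ = 65.3, σ = 8.5) with an invented safety threshold:

```python
from statistics import NormalDist

# First-order second-moment (FOSM) sketch of the risk definition used here:
# decision-making risk = probability that the safety indicator fails to
# reach the evaluation threshold. The indicator is treated as normal with
# the abstract's example moments; the threshold itself is invented.

def fosm_risk(mu, sigma, threshold, failure_above=True):
    """P(indicator exceeds threshold) if failure_above, else P(below)."""
    beta = (threshold - mu) / sigma          # reliability index
    p_below = NormalDist().cdf(beta)
    return 1.0 - p_below if failure_above else p_below

mu, sigma = 65.3, 8.5                        # e.g., max reservoir level (m)
threshold = 75.0                             # assumed safety threshold (m)
print(f"reliability index beta = {(threshold - mu) / sigma:.2f}")
print(f"decision risk P(Z > {threshold}) = {fosm_risk(mu, sigma, threshold):.4f}")
```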

21 pages, 2594 KiB  
Article
Extraction of Basic Features and Typical Operating Conditions of Wind Power Generation for Sustainable Energy Systems
by Yongtao Sun, Qihui Yu, Xinhao Wang, Shengyu Gao and Guoxin Sun
Sustainability 2025, 17(14), 6577; https://doi.org/10.3390/su17146577 - 18 Jul 2025
Abstract
Accurate extraction of representative operating conditions is crucial for optimizing systems in renewable energy applications. This study proposes a novel framework that combines the Parzen window estimation method, ideal for nonparametric modeling of wind, solar, and load datasets, with a game theory-based time scale selection mechanism. The novelty of this work lies in integrating probabilistic density modeling with multi-indicator evaluation to derive realistic operational profiles. We first validate the superiority of the Parzen window approach over traditional Weibull and Beta distributions in estimating wind and solar probability density functions. In addition, we analyze the influence of key meteorological parameters such as wind direction, temperature, and solar irradiance on energy production. Using three evaluation metrics, the main result shows that a 3-day representative time scale offers optimal accuracy when determined through game theory methods. Validation with real-world data from Inner Mongolia confirms the robustness of the proposed method, yielding low errors in wind, solar, and load profiles. This study contributes a novel 3-day typical profile extraction method validated on real meteorological data, providing a data-driven foundation for optimizing energy storage systems under renewable uncertainty. This framework supports energy sustainability by ensuring realistic modeling under renewable intermittency.
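
The comparison driving the first result, a Parzen (kernel) window estimate against a fitted parametric law, is easy to reproduce in miniature. The bimodal synthetic sample below is invented to show the kind of data where the nonparametric estimate wins on held-out log-likelihood:

```python
import numpy as np
from scipy import stats

# Parzen window (kernel density) estimate vs. a fitted Weibull, scored by
# held-out log-likelihood. The synthetic bimodal "wind speed" sample is an
# invented stand-in for real measurements.

rng = np.random.default_rng(5)
wind = np.concatenate([
    stats.weibull_min(2.0, scale=6.0).rvs(700, random_state=rng),
    stats.weibull_min(4.0, scale=13.0).rvs(300, random_state=rng),
])
rng.shuffle(wind)
train, test = wind[:800], wind[800:]

kde = stats.gaussian_kde(train)                        # Parzen window estimate
shape, loc, scale = stats.weibull_min.fit(train, floc=0)
weibull = stats.weibull_min(shape, loc=loc, scale=scale)

ll_kde = np.sum(np.log(kde(test) + 1e-300))
ll_weibull = weibull.logpdf(test).sum()
print(f"held-out log-likelihood  Parzen/KDE: {ll_kde:.1f}  Weibull: {ll_weibull:.1f}")
```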

41 pages, 1006 KiB  
Article
A Max-Flow Approach to Random Tensor Networks
by Khurshed Fitter, Faedi Loulidi and Ion Nechita
Entropy 2025, 27(7), 756; https://doi.org/10.3390/e27070756 - 15 Jul 2025
Abstract
The entanglement entropy of a random tensor network (RTN) is studied using tools from free probability theory. Random tensor networks are simple toy models that help in understanding the entanglement behavior of a boundary region in the anti-de Sitter/conformal field theory (AdS/CFT) context. These can be regarded as specific probabilistic models for tensors with particular geometry dictated by a graph (or network) structure. First, we introduce a model of RTN obtained by contracting maximally entangled states (corresponding to the edges of the graph) on the tensor product of Gaussian tensors (corresponding to the vertices of the graph). The entanglement spectrum of the resulting random state is analyzed along a given bipartition of the local Hilbert spaces. The limiting eigenvalue distribution of the reduced density operator of the RTN state is provided in the limit of large local dimension. This limiting value is described through a maximum flow optimization problem in a new graph corresponding to the geometry of the RTN and the given bipartition. In the case of series-parallel graphs, an explicit formula for the limiting eigenvalue distribution is provided using classical and free multiplicative convolutions. The physical implications of these results are discussed, allowing the analysis to move beyond the semiclassical regime without any cut assumption, specifically in terms of finite corrections to the average entanglement entropy of the RTN.
(This article belongs to the Section Quantum Information)
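
The max-flow correspondence is simple to demonstrate numerically: with unit capacity per edge (one unit per log of bond dimension), the leading-order entropy of a boundary vertex set equals the value of a minimum cut, which max-flow algorithms compute directly. A toy graph, with invented geometry:

```python
import networkx as nx

# Toy version of the entropy-to-max-flow correspondence: to leading order,
# the entanglement entropy of a boundary region of a random tensor network
# scales with the minimal cut separating it from its complement, computed
# here as a max flow with unit capacities on a small invented network.

G = nx.DiGraph()
for u, v in [("A", "v1"), ("A", "v2"), ("v1", "v2"), ("v1", "B"),
             ("v2", "B"), ("v2", "v3"), ("v3", "B")]:
    G.add_edge(u, v, capacity=1)   # each leg carries log(bond dim) = 1 unit
    G.add_edge(v, u, capacity=1)   # undirected edge -> arcs both ways

flow_value, _ = nx.maximum_flow(G, "A", "B")
cut_value, (side_a, side_b) = nx.minimum_cut(G, "A", "B")
print(f"max flow = {flow_value}, min cut = {cut_value}")   # equal, by duality
print(f"cut partition: {sorted(side_a)} | {sorted(side_b)}")
```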

15 pages, 3145 KiB  
Article
Probabilistic Prediction of Spudcan Bearing Capacity in Stiff-over-Soft Clay Based on Bayes’ Theorem
by Zhaoyu Sun, Pan Gao, Yanling Gao, Jianze Bi and Qiang Gao
J. Mar. Sci. Eng. 2025, 13(7), 1344; https://doi.org/10.3390/jmse13071344 - 14 Jul 2025
Abstract
During offshore operations of jack-up platforms, the spudcan may experience sudden punch-through failure when penetrating from an overlying stiff clay layer into the underlying soft clay, posing significant risks to platform safety. Conventional punch-through prediction methods, which rely on predetermined soil parameters, exhibit limited accuracy as they fail to account for uncertainties in seabed stratigraphy and soil properties. To address this limitation, based on a database of centrifuge model tests, a probabilistic prediction framework for the peak resistance and corresponding depth is developed by integrating empirical prediction formulas based on Bayes’ theorem. The proposed Bayesian methodology effectively refines prediction accuracy by quantifying uncertainties in soil parameters, spudcan geometry, and computational models. Specifically, it establishes prior probability distributions of peak resistance and depth through Monte Carlo simulations, then updates these distributions in real time using field monitoring data during spudcan penetration. The results demonstrate that both the recommended method specified in ISO 19905-1 and an existing deterministic model tend to yield conservative estimates. This approach can significantly improve the prediction accuracy of the peak resistance compared with deterministic methods. Additionally, it shows that the most probable failure zone converges toward the actual punch-through point as more monitoring data is incorporated. The enhanced prediction capability provides critical decision support for mitigating punch-through potential during offshore jack-up operations, thereby advancing the safety and reliability of marine engineering practices.
(This article belongs to the Section Ocean Engineering)
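
The updating scheme can be sketched as importance reweighting of Monte Carlo prior samples: draw peak resistances from the prior implied by an empirical formula with uncertain soil inputs, then weight each sample by the likelihood of the resistance measured during penetration. Every distribution and the stand-in "empirical formula" below are invented, not the paper's calibrated models.

```python
import numpy as np

# Sketch of Bayesian updating as importance reweighting of Monte Carlo
# prior samples. All distributions and the "empirical formula" are
# invented placeholders, not the paper's calibrated models.

rng = np.random.default_rng(8)
N = 50_000

# Prior over soil inputs: undrained shear strengths of the two layers (kPa).
su_stiff = rng.normal(60.0, 10.0, N)
su_soft = rng.normal(15.0, 3.0, N)

# Invented stand-in for an empirical peak-resistance formula (MN),
# including a model-error term.
q_peak = 0.08 * su_stiff + 0.25 * su_soft + rng.normal(0.0, 0.3, N)

# Field monitoring: early penetration resistance, assumed informative
# about the stiff layer, observed with Gaussian noise (invented values).
obs, obs_sigma = 5.1, 0.4
predicted_obs = 0.085 * su_stiff
w = np.exp(-0.5 * ((obs - predicted_obs) / obs_sigma) ** 2)
w /= w.sum()

post_mean = np.sum(w * q_peak)
post_std = np.sqrt(np.sum(w * (q_peak - post_mean) ** 2))
print(f"prior mean peak resistance : {q_peak.mean():.2f} MN")
print(f"posterior mean +/- std     : {post_mean:.2f} +/- {post_std:.2f} MN")
```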

28 pages, 1051 KiB  
Article
Probabilistic Load-Shedding Strategy for Frequency Regulation in Microgrids Under Uncertainties
by Wesley Peres, Raphael Paulo Braga Poubel and Rafael Alipio
Symmetry 2025, 17(7), 1125; https://doi.org/10.3390/sym17071125 - 14 Jul 2025
Abstract
This paper proposes a novel integer-mixed probabilistic optimal power flow (IM-POPF) strategy for frequency regulation in islanded microgrids under uncertain operating conditions. Existing load-shedding approaches face critical limitations: continuous frameworks fail to reflect the discrete nature of actual load disconnections, while deterministic models inadequately capture the stochastic behavior of renewable generation and load variations. The proposed approach formulates load shedding as an integer optimization problem where variables are categorized as integer (load disconnection decisions at specific nodes) and continuous (voltages, power generation, and steady-state frequency), better reflecting practical power system operations. The key innovation combines integer load-shedding optimization with efficient uncertainty propagation through Unscented Transformation, eliminating the computational burden of Monte Carlo simulations while maintaining accuracy. Load and renewable uncertainties are modeled as normally distributed variables, and probabilistic constraints ensure operational limits compliance with predefined confidence levels. The methodology integrates Differential Evolution metaheuristics with Unscented Transformation for uncertainty propagation, requiring only 137 deterministic evaluations compared to 5000 for Monte Carlo methods. Validation on an IEEE 33-bus radial distribution system configured as an islanded microgrid demonstrates significant advantages over conventional approaches. Results show a 36.5-fold computational efficiency improvement while achieving 95.28% confidence level compliance for frequency limits, compared to only 50% for deterministic methods. The integer formulation requires minimal additional load shedding (21.265%) compared to continuous approaches (20.682%), while better aligning with the discrete nature of real-world operational decisions. The proposed IM-POPF framework successfully minimizes total load shedding while maintaining frequency stability under uncertain conditions, providing a computationally efficient solution for real-time microgrid operation.
(This article belongs to the Special Issue Symmetry and Distributed Power System)
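
The unscented transformation that replaces Monte Carlo here is generic and worth seeing in isolation: 2n+1 deterministically chosen sigma points propagate a Gaussian's mean and variance through a nonlinearity. The toy "power flow" below is an arbitrary function, not a network solution; for n = 2 the UT needs 5 evaluations against 5000 for plain Monte Carlo, mirroring the 137-versus-5000 count reported above.

```python
import numpy as np

# Minimal unscented-transformation sketch of the uncertainty-propagation
# step: push a Gaussian input through a nonlinear mapping using 2n+1 sigma
# points instead of thousands of Monte Carlo draws. The "power flow" is an
# arbitrary nonlinear function standing in for a real network solution.

def unscented_transform(mu, cov, f, alpha=1.0, beta=2.0, kappa=1.0):
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mu, mu + S.T, mu - S.T])          # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])
    mean = wm @ y
    var = wc @ (y - mean) ** 2
    return mean, var

def toy_power_flow(x):
    """Invented scalar response, e.g., steady-state frequency deviation."""
    return 0.05 * x[0] ** 2 - 0.2 * x[1] + 0.1 * np.sin(x[0] * x[1])

mu = np.array([1.0, 0.5])                 # uncertain load / renewable output
cov = np.diag([0.04, 0.09])
m_ut, v_ut = unscented_transform(mu, cov, toy_power_flow)

rng = np.random.default_rng(9)
y_mc = np.array([toy_power_flow(s) for s in rng.multivariate_normal(mu, cov, 5000)])
print(f"UT : mean={m_ut:.4f} var={v_ut:.4f}  (5 function evaluations)")
print(f"MC : mean={y_mc.mean():.4f} var={y_mc.var():.4f}  (5000 evaluations)")
```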

19 pages, 2183 KiB  
Systematic Review
Mercury Scenario in Fish from the Amazon Basin: Exploring the Interplay of Social Groups and Environmental Diversity
by Thaís de Castro Paiva, Inácio Abreu Pestana, Lorena Nascimento Leite Miranda, Gabriel Oliveira de Carvalho, Wanderley Rodrigues Bastos and Daniele Kasper
Toxics 2025, 13(7), 580; https://doi.org/10.3390/toxics13070580 - 10 Jul 2025
Abstract
The Amazon faces significant challenges related to mercury contamination, including naturally elevated concentrations and gold mining activities. Due to mercury’s toxicity and the importance of fish as a protein source for local populations, assessing mercury levels in regional fish is crucial. However, there are gaps in knowledge regarding mercury concentrations in many areas of the Amazon basin. This study aims to synthesize the existing literature on mercury concentrations in fish and the exposure of urban and traditional social groups through fish consumption. A systematic review (1990–2022) was conducted for six fish genera (Cichla spp., Hoplias spp., Plagioscion spp., Leporinus spp., Semaprochilodus spp., and Schizodon spp.) in the Web of Science (Clarivate Analytics) and Scopus (Elsevier) databases. The database consisted of a total of 46 studies and 455 reports. The distribution of studies in the region was not homogeneous: the most studied area was the Madeira River sub-basin, while the Paru–Jari basin had no studies. Deterministic and probabilistic risk assessments based on Joint FAO/WHO Expert Committee on Food Additives (JECFA, 2007) guidelines showed high exposure risk, especially for traditional communities. Carnivorous fish from lakes and hydroelectric reservoirs, as well as fish from black-water ecosystems, exhibited higher mercury concentrations. In the Amazon region, even if mercury levels in fish muscle do not exceed regulatory limits, the high fish consumption can still elevate health risks for local populations. Monitoring mercury levels across a broader range of fish species, including both carnivorous and non-carnivorous species, especially in communities heavily reliant on fish for their diet, will enable a more accurate risk assessment and provide an opportunity to recommend fish species with lower mercury exposure risk for human consumption. The present study emphasizes the need to protect regions that already exhibit higher levels of mercury—such as lakes, hydroelectric reservoirs, and black-water ecosystems—to ensure food safety and safeguard public health.
(This article belongs to the Special Issue Mercury Cycling and Health Effects—2nd Edition)
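
The probabilistic exposure assessments the review draws on typically Monte Carlo the estimated weekly intake, EWI = C·IR/BW, against the JECFA provisional tolerable weekly intake (1.6 µg/kg bw/week for methylmercury). A hedged sketch with wholly invented input distributions:

```python
import numpy as np

# Hedged sketch of a probabilistic exposure assessment of the kind the
# review aggregates: Monte Carlo the estimated weekly intake
# EWI = C_fish * intake / body_weight against the JECFA (2007) provisional
# tolerable weekly intake for methylmercury. All input distributions below
# are invented for illustration, not values from the reviewed studies.

rng = np.random.default_rng(13)
N = 100_000
PTWI = 1.6                                # ug MeHg per kg bw per week (JECFA)

hg = rng.lognormal(mean=np.log(0.4), sigma=0.6, size=N)   # ug/g in muscle
fish_g_week = rng.normal(1500, 400, N).clip(min=0)        # g fish per week
body_kg = rng.normal(65, 12, N).clip(min=30)              # body weight (kg)

ewi = hg * fish_g_week / body_kg          # ug per kg bw per week
print(f"median EWI    : {np.median(ewi):.2f} ug/kg bw/week")
print(f"P(EWI > PTWI) : {np.mean(ewi > PTWI):.1%}")
```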

12 pages, 1072 KiB  
Article
Performance Evaluation of IM/DD FSO Communication System Under Dust Storm Conditions
by Maged Abdullah Esmail
Technologies 2025, 13(7), 288; https://doi.org/10.3390/technologies13070288 - 7 Jul 2025
Abstract
Free-space optical (FSO) communication is a promising high-capacity solution for future wireless networks, particularly for backhaul and fronthaul links in 5G and emerging 6G systems. However, it remains highly vulnerable to environmental impairment, especially in arid regions prone to dust storms. While prior studies have addressed atmospheric effects such as fog and turbulence, the specific impact of dust on signal performance remains insufficiently explored. This work presents a probabilistic modeling framework for evaluating the performance of an intensity modulation/direct detection (IM/DD) FSO system under dust storm conditions. Using a controlled laboratory environment, we conducted measurements of the optical signal under dust-induced channel conditions using real-world dust samples collected from an actual dust storm. We identified the Beta distribution as the most accurate model for the measured signal fluctuations. Closed-form expressions were derived for average bit error rate (BER), outage probability, and channel capacity. The close agreement between the analytical, approximate, and simulated results validates the proposed model as a reliable tool for evaluating FSO system performance. The results show that the forward error correction (FEC) BER threshold of 10⁻³ is achieved at approximately 10.5 dB, and the outage probability drops below 10⁻³ at 10 dB average SNR.
(This article belongs to the Section Information and Communication Technologies)
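
The Beta-channel model is easy to exercise numerically: draw normalized intensities from the fitted Beta law and average a simplified IM/DD OOK error rate, BER = E_h[Q(h·√SNR)]. The shape parameters and SNR grid below are invented, not the paper's fitted values:

```python
import numpy as np
from scipy import stats

# Monte Carlo cross-check of the modeling approach: treat the normalized
# received intensity under dust as Beta-distributed and average a standard
# simplified IM/DD OOK error rate, BER = E_h[Q(h * sqrt(SNR))]. The Beta
# shape parameters and SNR grid are invented, not the paper's fitted values.

rng = np.random.default_rng(17)
a, b = 5.0, 2.0                          # assumed fitted Beta shape parameters
h = stats.beta(a, b).rvs(200_000, random_state=rng)

for snr_db in (5, 10, 15, 20):
    snr = 10 ** (snr_db / 10)
    ber = np.mean(stats.norm.sf(h * np.sqrt(snr)))   # Q-function average
    print(f"avg SNR {snr_db:2d} dB -> average BER = {ber:.3e}")
```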
