Search Results (135)

Search Parameters:
Keywords = poisson point process

15 pages, 1572 KiB  
Article
AI-Driven Optimization Framework for Smart EV Charging Systems Integrated with Solar PV and BESS in High-Density Residential Environments
by Md Tanjil Sarker, Marran Al Qwaid, Siow Jat Shern and Gobbi Ramasamy
World Electr. Veh. J. 2025, 16(7), 385; https://doi.org/10.3390/wevj16070385 - 9 Jul 2025
Viewed by 429
Abstract
The rapid growth of electric vehicle (EV) adoption necessitates advanced energy management strategies to ensure sustainable, reliable, and efficient operation of charging infrastructure. This study proposes a hybrid AI-based framework for optimizing residential EV charging systems through the integration of Reinforcement Learning (RL), Linear Programming (LP), and real-time grid-aware scheduling. The system architecture includes smart wall-mounted chargers, a 120 kWp rooftop solar photovoltaic (PV) array, and a 60 kWh lithium-ion battery energy storage system (BESS), simulated under realistic load conditions for 800 residential units and 50 charging points rated at 7.4 kW each. Simulation results, validated through SCADA-based performance monitoring using MATLAB/Simulink and OpenDSS, reveal substantial technical improvements: a 31.5% reduction in peak transformer load, voltage deviation minimized from ±5.8% to ±2.3%, and solar utilization increased from 48% to 66%. The AI framework dynamically predicts user demand using a non-homogeneous Poisson process and optimizes charging schedules based on a cost-voltage-user satisfaction reward function. The study underscores the critical role of intelligent optimization in improving grid reliability, minimizing operational costs, and enhancing renewable energy self-consumption. The proposed system demonstrates scalability, resilience, and cost-effectiveness, offering a practical solution for next-generation urban EV charging networks. Full article
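
The demand-prediction step above rests on a non-homogeneous Poisson process for charging-session arrivals. As a rough illustration of how such arrivals can be simulated, the sketch below uses the standard Lewis-Shedler thinning method; the evening-peaked intensity profile, time horizon, and random seed are invented for the example and are not taken from the paper.

```python
import numpy as np

def simulate_nhpp_arrivals(intensity, t_end, rng):
    """Simulate arrival times of a non-homogeneous Poisson process on [0, t_end]
    by thinning a homogeneous process with rate lambda_max (Lewis-Shedler)."""
    grid = np.linspace(0.0, t_end, 1000)
    lambda_max = intensity(grid).max()
    t, arrivals = 0.0, []
    while True:
        t += rng.exponential(1.0 / lambda_max)            # candidate inter-arrival
        if t > t_end:
            break
        if rng.random() < intensity(t) / lambda_max:      # accept with prob lambda(t)/lambda_max
            arrivals.append(t)
    return np.array(arrivals)

# Hypothetical evening-peaked arrival intensity (sessions per hour) over a 24 h day.
def ev_intensity(t):
    t = np.asarray(t, dtype=float)
    return 2.0 + 10.0 * np.exp(-0.5 * ((t - 19.0) / 2.0) ** 2)  # peak around 19:00

rng = np.random.default_rng(42)
arrivals = simulate_nhpp_arrivals(ev_intensity, t_end=24.0, rng=rng)
print(f"{arrivals.size} charging sessions requested; first five start times (h): {np.round(arrivals[:5], 2)}")
```

The resulting arrival times are the kind of input a downstream charging scheduler would consume.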

18 pages, 4203 KiB  
Article
Long-Term Anisotropic Mechanical Characterization of Layered Shale—An Experimental Study for the BaoKang Tunnel of the Zhengwan Railway, China
by Jun Zhao, Changming Li and Wei Huang
Processes 2025, 13(6), 1900; https://doi.org/10.3390/pr13061900 - 16 Jun 2025
Viewed by 407
Abstract
With the further implementation and development of the Western Development Strategy, studying the mechanical behavior and deformation characteristics of deep-buried tunnels in layered hard rock under high ground stress conditions holds considerable engineering significance. To study the mechanical properties and the long-term deformation and failure characteristics of rocks with different bedding orientations, this research employed an MTS815 electro-hydraulic servo rock testing system and a French TOP rheometer. Triaxial compression tests, rheological property tests, and long-term cyclic loading and unloading tests were conducted on shale samples under varying confining pressures and bedding angles. The results indicate that (1) under triaxial compression, shale demonstrates pronounced anisotropic behavior. When the confining pressure is constant, the peak strength of the rock sample exhibits a “U”-shaped variation with the bedding angle (reaching its minimum value at 60°). For a fixed bedding angle, the peak strength of the rock sample progressively increases as the confining pressure rises. (2) The mode of shale failure varies with the bedding angle: at 0°, shale exhibits conjugate shear failure; at 30°, shear slip failure along the bedding is controlled by the bedding weak plane; at 60° and 90°, failure occurs through the bedding. (3) During the creep process of layered shale, brittle failure characteristics are evident, with microcracks within the sample gradually failing at stress concentration points. The decelerated and stable creep stages are prominent, while the accelerated creep stage is less noticeable; the creep rate increases with increasing stress level. (4) Under low confining pressure, the peak strength during cyclic loading and unloading creep processes is lower than that of conventional triaxial tests when the bedding plane dip angles are 0° and 30°, whereas the opposite holds at 60° and 90°. (5) In the cyclic loading and unloading process, Poisson’s ratio gradually increases, whereas the elastic modulus, shear modulus, and bulk modulus gradually decrease. Full article

35 pages, 8248 KiB  
Article
Pre-Failure Deformation Response and Dilatancy Damage Characteristics of Beishan Granite Under Different Stress Paths
by Yang Han, Dengke Zhang, Zheng Zhou, Shikun Pu, Jianli Duan, Lei Gao and Erbing Li
Processes 2025, 13(6), 1892; https://doi.org/10.3390/pr13061892 - 15 Jun 2025
Viewed by 332
Abstract
Different from general underground engineering, the micro-damage prior to failure of the surrounding rock has a significant influence on the geological disposal of high-level radioactive waste. However, quantitative research on the pre-failure dilatancy damage characteristics and stress-path influence of hard brittle rocks under high stress levels is currently insufficient; in particular, the stress path involving simultaneous unloading of axial and confining pressures is rarely discussed. Therefore, three representative mechanical experimental studies were conducted on the Beishan granite in the pre-selected area for high-level radioactive waste (HLW) geological disposal in China, including increasing axial pressure with constant confining pressure (path I), increasing axial pressure with unloading confining pressure (path II), and simultaneous unloading of axial and confining pressures (path III). Using the deviatoric stress ratio as a reference, the evolution laws and characteristics of stress–strain relationships, deformation modulus, generalized Poisson’s ratio, dilatancy index, and dilation angle during the path bifurcation stage were quantitatively analyzed and compared. The results indicate that macro-deformation and the plastic dilatancy process exhibit strong path dependency. The critical value and growth gradient of the dilatancy parameter for path I are both the smallest, and the suppressive effect of the initial confining pressure is the most significant. The dilation gradient of path II is the largest, but the degree of dilatancy before the critical point is the smallest due to its susceptibility to fracture. The critical values of the dilatancy parameters for path III are the highest and are minimally affected by the initial confining pressure, indicating the most significant dilatancy properties. The relationship between the deformation parameters and the crack-induced volumetric strain was established, and the damage variable was defined accordingly. The critical damage state and the damage accumulation process under various stress paths were examined in detail. The results show that the damage evolution is obviously differentiated with the bifurcation of the stress paths, and three different types of damage curve clusters are formed, indicating that the damage accumulation path is highly dependent on the stress path. The research findings quantitatively reveal the differences in deformation response and damage characteristics of Beishan granite under varying stress paths, providing a foundation for studying the nonlinear mechanical behavior and damage failure mechanisms of hard brittle rock under complex loading conditions. Full article

20 pages, 1240 KiB  
Article
Modelling Insurance Claims During Financial Crises: A Systemic Approach
by Francis Agana and Eben Maré
J. Risk Financial Manag. 2025, 18(6), 307; https://doi.org/10.3390/jrfm18060307 - 5 Jun 2025
Viewed by 541
Abstract
In this paper, we introduce a generalised mutually exciting Hawkes process with random and independent jump intensities. This model provides a robust theoretical framework for modelling complex point processes and appropriately characterises the financial system, especially during periods of crisis. Based on this extended Hawkes process, we propose an insurance claim process and demonstrate that claim processes modelled as an aggregated process enable early detection of crises and inform optimal investment strategies in a financial system. Full article
(This article belongs to the Section Mathematics and Finance)
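
For readers unfamiliar with self-exciting point processes, the sketch below simulates a one-dimensional Hawkes process with an exponential kernel and i.i.d. random jump sizes standing in for the random jump intensities, using Ogata's thinning algorithm; the kernel shape, parameter values, and horizon are illustrative assumptions rather than the paper's generalised mutually exciting model.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, t_end, rng):
    """Hawkes process with exponential kernel and i.i.d. random jump sizes Y_i:
    lambda(t) = mu + sum_{t_i < t} Y_i * alpha * exp(-beta * (t - t_i)).
    Simulated with Ogata's thinning algorithm."""
    events, jumps = [], []
    t = 0.0
    while t < t_end:
        # Intensity at the current time bounds the intensity until the next event.
        lam_bar = mu + sum(y * alpha * np.exp(-beta * (t - ti)) for ti, y in zip(events, jumps))
        t += rng.exponential(1.0 / lam_bar)
        if t >= t_end:
            break
        lam_t = mu + sum(y * alpha * np.exp(-beta * (t - ti)) for ti, y in zip(events, jumps))
        if rng.random() < lam_t / lam_bar:            # accept the candidate event
            events.append(t)
            jumps.append(rng.exponential(1.0))        # random jump size (illustrative mark)
    return np.array(events)

rng = np.random.default_rng(0)
claims = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, t_end=100.0, rng=rng)
print(f"{claims.size} simulated claim arrivals over 100 time units")
```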

18 pages, 15380 KiB  
Article
A High-Precision Method for Warehouse Material Level Monitoring Using Millimeter-Wave Radar and 3D Surface Reconstruction
by Wenxin Zhang and Yi Gu
Sensors 2025, 25(9), 2716; https://doi.org/10.3390/s25092716 - 25 Apr 2025
Viewed by 409
Abstract
This study presents a high-precision warehouse material level monitoring method that integrates millimeter-wave radar with 3D surface reconstruction to address the limitations of LiDAR, which is highly susceptible to dust and haze interference in complex storage environments. The proposed method employs Chirp-Z Transform (CZT) super-resolution processing to enhance spectral resolution and measurement accuracy. To improve grain surface identification, an anomalous signal correction method based on angle–range feature fusion is introduced, mitigating errors caused by weak reflections and multipath effects. The point cloud data acquired by the radar undergo denoising, smoothing, and enhancement using statistical filtering, Moving Least Squares (MLS) smoothing, and bicubic spline interpolation to ensure data continuity and accuracy. A Poisson Surface Reconstruction algorithm is then applied to generate a continuous 3D model of the grain heap. The vector triple product method is used to estimate grain volume. Experimental results show a reconstruction volume error within 3%, demonstrating the method’s accuracy, robustness, and adaptability. The reconstructed surface accurately represents grain heap geometry, making this approach well suited for real-time warehouse monitoring and providing reliable support for material balance and intelligent storage management. Full article
(This article belongs to the Section Industrial Sensors)
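
The Chirp-Z Transform step can be pictured as zooming the spectrum onto a narrow band around a coarse FFT peak to refine a beat-frequency (and hence range) estimate. The sketch below does this for a single synthetic tone; it assumes SciPy's czt function (available from SciPy 1.8) and invented FMCW-style parameters, and it is not the authors' processing chain.

```python
import numpy as np
from scipy.signal import czt

# Hypothetical FMCW beat signal: a single target whose beat frequency encodes range.
fs, n = 1.0e6, 1024                      # sample rate (Hz) and samples per chirp
f_beat = 12_340.0                        # "true" beat frequency (Hz), invented
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f_beat * t) + 0.05 * np.random.default_rng(1).standard_normal(n)

# Coarse FFT estimate: resolution fs/n, roughly 977 Hz per bin here.
k = np.argmax(np.abs(np.fft.rfft(x)))
f_coarse = k * fs / n

# CZT zoom: re-evaluate the spectrum on m finely spaced points around the coarse peak.
m, span = 512, 2 * fs / n                # zoom window of +/- one FFT bin
f0 = f_coarse - span / 2
w = np.exp(-2j * np.pi * (span / m) / fs)   # ratio between successive zoom bins
a = np.exp(2j * np.pi * f0 / fs)            # starting point on the unit circle
spectrum = czt(x, m=m, w=w, a=a)
f_fine = f0 + np.argmax(np.abs(spectrum)) * span / m

print(f"coarse estimate {f_coarse:.1f} Hz, CZT-refined estimate {f_fine:.1f} Hz")
```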

26 pages, 7054 KiB  
Article
Propagation Characteristics of Multi-Cluster Hydraulic Fracturing in Shale Reservoirs with Natural Fractures
by Lianzhi Yang, Xinyue Wang and Tong Niu
Appl. Sci. 2025, 15(8), 4418; https://doi.org/10.3390/app15084418 - 17 Apr 2025
Cited by 1 | Viewed by 421
Abstract
Hydraulic fracturing of gas and oil reservoirs is the primary stimulation method for enhancing production in the field of petroleum engineering. The hydraulic fracturing technology plays a crucial role in increasing shale gas production from shale reservoirs. Understanding the effects of reservoir and fracturing conditions on fracture propagation is of great significance for optimizing the hydraulic fracturing process, yet it has not been adequately explored in the current literature. In the context of shale reservoirs in Yibin, Sichuan Province, China, the study selects outcrops to prepare samples for uniaxial compression and Brazilian splitting tests. These tests measure the compressive and tensile strengths of shale in parallel bedding and vertical bedding directions, obtaining the shale’s anisotropic elastic modulus and Poisson’s ratio. These parameters are crucial for simulating reservoir hydraulic fracturing. This paper presents a numerical model utilizing a finite element (FE) analysis to simulate the process of multi-cluster hydraulic fracturing in a shale reservoir with natural fractures in three dimensions. A numerical simulation of the intersection of multiple clusters of 3D hydraulic fractures and natural fractures was performed, and the complex 3D fracture morphologies after the interaction between any two fractures were revealed. The influences of natural fractures, reservoir ground stress, fracturing conditions, and fracture interference on the propagation of hydraulic fractures were analyzed. The results highlight several key points: (1) Shale samples exhibit distinct layering with significant anisotropy. The elastic compressive modulus and Poisson’s ratio of parallel bedding shale samples are similar to those of vertical bedding shale samples, while the compressive strength of parallel bedding shale samples is significantly greater than that of vertical bedding shale samples. The elastic compressive modulus of shale is 6 to 10 times its tensile modulus. (2) The anisotropy of shale’s tensile properties is pronounced. The ultimate load capacity of vertical bedding shale samples is 2 to 4 times that of parallel bedding shale samples. The tensile strength of vertical bedding shale samples is 2 to 5 times that of parallel bedding shale samples. (3) The hydraulic fractures induced by the injection well closest to the natural fractures expanded the fastest, and the natural fractures opened when they intersected the hydraulic fractures. When the difference in the horizontal ground stress was significant, natural fractures were more inclined to open after the intersection between the hydraulic and natural fractures. (4) The higher the injection rate and viscosity of the fracturing fluid, the faster the fracture propagation. The research findings could improve the fracturing process through a better understanding of the fracture propagation process and provide practical guidance for hydraulic fracturing design in shale gas reservoirs. Full article

15 pages, 567 KiB  
Article
Low-Complexity Relay Selection for Full-Duplex Random Relay Networks
by Jonghyun Bang and Taehyoung Kim
Mathematics 2025, 13(6), 971; https://doi.org/10.3390/math13060971 - 14 Mar 2025
Viewed by 404
Abstract
Full-duplex relay networks have been studied to enhance network performance under the assumption that the number and positions of relay nodes are fixed. To account for the practical randomness in the number and locations of relays, this paper investigates full-duplex random relay networks (FDRRNs) where all nodes are randomly distributed following a Poisson point process (PPP) model. In addition, we propose a low-complexity relay selection algorithm that constructs the candidate relay set while considering the selection diversity gain. Our simulation results demonstrate that, rather than simply increasing the number of candidate relay nodes, selecting an appropriate candidate relay set can achieve significant performance enhancement without unnecessarily increasing system complexity. Full article
(This article belongs to the Special Issue Computational Methods in Wireless Communication)
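
One way to picture the FDRRN setup is to drop relays according to a homogeneous PPP and then pre-select a small candidate set before the final choice. The sketch below does exactly that, with a distance-to-midpoint pre-selection and a min-max hop rule; the intensity, region size, and both selection rules are placeholders for illustration, not the paper's low-complexity algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

# Homogeneous PPP on a square region: Poisson-distributed count, then uniform positions.
lam, side = 0.02, 100.0                     # intensity (relays per m^2) and region side (m)
n_relays = rng.poisson(lam * side**2)
relays = rng.uniform(0.0, side, size=(n_relays, 2))

src, dst = np.array([10.0, 50.0]), np.array([90.0, 50.0])
midpoint = (src + dst) / 2

# Candidate set: the k relays nearest the midpoint (a stand-in for a low-complexity pre-selection).
k = 5
dist_mid = np.linalg.norm(relays - midpoint, axis=1)
candidates = relays[np.argsort(dist_mid)[:k]]

# Final choice: the candidate minimizing the worse of its two hop distances (min-max rule).
hop = np.maximum(np.linalg.norm(candidates - src, axis=1),
                 np.linalg.norm(candidates - dst, axis=1))
best = candidates[np.argmin(hop)]
print(f"{n_relays} relays drawn; selected relay at {np.round(best, 1)}")
```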

28 pages, 1423 KiB  
Article
Directional Handover Analysis with Stochastic Petri Net and Poisson Point Process in Heterogeneous Networks
by Zhiyi Zhu, Junjun Zheng, Eiji Takimoto, Patrick Finnerty and Chikara Ohta
Mathematics 2025, 13(3), 349; https://doi.org/10.3390/math13030349 - 22 Jan 2025
Viewed by 972
Abstract
Handover is crucial for ensuring seamless connectivity in heterogeneous networks (HetNet) by enabling user equipment (UE) to switch its connection link between cells based on signal conditions. However, conventional analytical approaches ignored the distinctions between macro-cell to small-cell (M2S) and small-cell to macro-cell (S2M) scenarios during the handover decision-making process, which resulted in handover failures (HoF) or ping-pong handovers. Therefore, this paper proposes a novel framework, Do-SPN-PPP, that combines the stochastic Petri net (SPN) and the Poisson point process (PPP) to quantitatively analyze M2S and S2M handover performance differences. The proposed framework also reveals and predicts how handover parameters affect UE residence time in a cell within the HetNet, and it exhibits higher predictive accuracy than the conventional analytical approach. In addition, Monte Carlo simulation verified the Do-SPN-PPP framework; compared with the simulation, the proposed framework achieves a 96% reduction in computation time while maintaining a 95% confidence interval and a 0.5% error tolerance. Full article
(This article belongs to the Special Issue Mathematics in Advanced Reliability and Maintenance Modeling)
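
A rough numerical picture of the M2S/S2M asymmetry can be obtained by dropping macro and small cells as two independent PPPs and walking a UE through the layout under a strongest-received-power association. The sketch below counts the two handover types along a straight track; the densities, transmit powers, path-loss exponent, and association rule are invented, and the stochastic Petri net part of the Do-SPN-PPP framework is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
side, alpha = 2000.0, 3.5                       # region size (m) and path-loss exponent (assumed)

def draw_ppp(intensity):
    """Sample a homogeneous PPP on the square region."""
    n = rng.poisson(intensity * side**2)
    return rng.uniform(-side / 2, side / 2, size=(n, 2))

macro = draw_ppp(1e-6)                           # sparse macro tier
small = draw_ppp(1e-5)                           # dense small-cell tier
p_tx = np.concatenate([np.full(len(macro), 46.0),    # macro Tx power (dBm)
                       np.full(len(small), 30.0)])   # small-cell Tx power (dBm)
cells = np.vstack([macro, small])
is_small = np.concatenate([np.zeros(len(macro), bool), np.ones(len(small), bool)])

# The UE moves along a straight line and attaches to the strongest average received power.
track = np.column_stack([np.linspace(-800, 800, 400), np.zeros(400)])
m2s = s2m = 0
prev = None
for pos in track:
    d = np.linalg.norm(cells - pos, axis=1).clip(min=1.0)
    rx = p_tx - 10 * alpha * np.log10(d)         # received power in dB, no fading
    cur = int(np.argmax(rx))
    if prev is not None and cur != prev:
        if is_small[cur] and not is_small[prev]:
            m2s += 1
        elif is_small[prev] and not is_small[cur]:
            s2m += 1
    prev = cur
print(f"M2S handovers: {m2s}, S2M handovers: {s2m}")
```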

38 pages, 487 KiB  
Article
Probability via Expectation Measures
by Peter Harremoës
Entropy 2025, 27(2), 102; https://doi.org/10.3390/e27020102 - 22 Jan 2025
Viewed by 1033
Abstract
Since the seminal work of Kolmogorov, probability theory has been based on measure theory, where the central components are so-called probability measures, defined as measures with total mass equal to 1. In Kolmogorov’s theory, a probability measure is used to model an experiment with a single outcome that will belong to exactly one out of several disjoint sets. In this paper, we present a different basic model where an experiment results in a multiset, i.e., for each of the disjoint sets we obtain the number of observations in the set. This new framework is consistent with Kolmogorov’s theory, but the theory focuses on expected values rather than probabilities. We present examples from testing goodness-of-fit, Bayesian statistics, and quantum theory, where the shifted focus gives new insight or better performance. We also provide several new theorems that address some problems related to the change in focus. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
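
To make the shift of focus concrete: an experiment now returns a multiset of observations, and the natural summary is the expected number of observations in each disjoint set, a measure whose total mass need not equal 1. The toy sketch below illustrates only this bookkeeping with an invented Poisson-count model; it is not a rendering of the paper's theory.

```python
import numpy as np

rng = np.random.default_rng(5)

# Three disjoint sets A1, A2, A3 with an (invented) expectation measure mu(Aj).
mu = np.array([0.3, 1.2, 2.5])       # expected observations per set; total mass 4.0, not 1

# One experiment returns a multiset: the number of observations landing in each set.
trials = rng.poisson(mu, size=(100_000, 3))

print("empirical mean counts per set :", trials.mean(axis=0).round(3))
print("expectation measure mu        :", mu)
# A Kolmogorov-style probability only appears for derived events, e.g. 'no observation in A1':
print("P(N(A1) = 0) ~", (trials[:, 0] == 0).mean().round(3), " vs exp(-mu1) =", np.exp(-mu[0]).round(3))
```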

23 pages, 35676 KiB  
Article
Multimodal Data Fusion System for Accurate Identification of Impact Points on Rocks in Mining Comminution Tasks
by John Kern, Daniel Fernando Quintero Bernal and Claudio Urrea
Processes 2025, 13(1), 87; https://doi.org/10.3390/pr13010087 - 2 Jan 2025
Viewed by 1188
Abstract
This study presents a multimodal data fusion system to identify and impact rocks in mining comminution tasks, specifically during the crushing stage. The system integrates information from various sensory modalities to enhance data accuracy, even under challenging environmental conditions such as dust and lighting variations. For the strategy selected in this study, 15 rock characteristics are extracted at neighborhood radii of 5 mm, 10 mm, 15 mm, 20 mm, and 25 mm to determine the suitable impact points. Through processes like the Ball-Pivoting Algorithm (BPA) and Poisson Surface Reconstruction techniques, the study achieves a detailed reconstruction of filtered points based on the selected characteristics. Unlike related studies focused on controlled conditions or limited analysis of specific rock shapes, this study examines all rock faces, ensuring the more accurate identification of impact points under adverse conditions. Results show that rock faces with the largest support areas are most suitable for receiving impacts, enhancing the efficiency and stability of the crushing process. This approach addresses the limitations of manual operations and provides a pathway for reducing operational costs and energy consumption. Furthermore, it establishes a robust foundation for future research to develop fully autonomous systems capable of maintaining reliable performance in extreme mining environments. Full article
(This article belongs to the Special Issue Process Systems Engineering for Complex Industrial Systems)
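
The reconstruction step names two standard mesh builders, the Ball-Pivoting Algorithm and Poisson Surface Reconstruction. The sketch below runs both on a synthetic point cloud using the open-source Open3D library, which is an assumption for illustration; the paper does not state which implementation it uses.

```python
import numpy as np
import open3d as o3d

# Synthetic stand-in for a scanned rock face: noisy points on a hemisphere.
rng = np.random.default_rng(11)
phi = rng.uniform(0, 2 * np.pi, 5000)
theta = rng.uniform(0, np.pi / 2, 5000)
pts = np.column_stack([np.sin(theta) * np.cos(phi),
                       np.sin(theta) * np.sin(phi),
                       np.cos(theta)]) + rng.normal(0, 0.005, (5000, 3))

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# Ball-Pivoting Algorithm: rolls balls of the given radii over the points to form triangles.
radii = o3d.utility.DoubleVector([0.02, 0.04, 0.08])
bpa_mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)

# Poisson Surface Reconstruction: solves for a watertight implicit surface.
poisson_mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)

print(f"BPA mesh: {len(bpa_mesh.triangles)} triangles; Poisson mesh: {len(poisson_mesh.triangles)} triangles")
```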

16 pages, 15762 KiB  
Article
A LiDAR-Based Backfill Monitoring System
by Xingliang Xu, Pengli Huang, Zhengxiang He, Ziyu Zhao and Lin Bi
Appl. Sci. 2024, 14(24), 12073; https://doi.org/10.3390/app142412073 - 23 Dec 2024
Viewed by 990
Abstract
A backfill system in underground mines supports the walls and roofs of mined-out areas and improves the structural integrity of mines. However, there has been a significant gap in the visualization and monitoring of the backfill progress. To better observe the process of the paste backfill material filling the tunnels, a LiDAR-based backfill monitoring system is proposed. As long as the rising top surface of the backfill material enters the LiDAR range, the proposed system can compute the plane coefficient of this surface. The intersection boundary of the tunnel and the backfill material can be obtained by substituting the plane coefficient into the space where the initial tunnel is located. A surface point generation algorithm and a slurry point determination algorithm are proposed to obtain the point cloud of the backfill body based on the intersection boundary. After Poisson surface reconstruction and volume computation, the point cloud model is reconstructed into a 3D mesh, and the backfill progress is digitized as the ratio of the backfill body volume to the initial tunnel volume. The volumes of the meshes are compared with the results computed by two other algorithms; the error is less than 1%. The time to compute a set of data increases with the amount of data, ranging from 8 to 20 s, which is fast enough to update the data for each small increase in progress. As the digitized results are updated, the visualization of progress is transmitted to the mining control center, allowing unexpected problems inside the tunnel to be monitored and addressed based on the messages provided by the proposed system. Full article
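
The geometric core of the method, fitting a plane to the rising top surface and converting it into a fill percentage, can be sketched with an ordinary least-squares plane fit. In the sketch below, the box-shaped tunnel, the flat floor, and the noise level are invented simplifications, not the paper's surface point generation or Poisson reconstruction pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic LiDAR returns from the backfill top surface inside a box-shaped tunnel.
length, width, height = 30.0, 4.0, 4.0           # tunnel dimensions (m), invented
fill_height_true = 2.3                            # current slurry level (m)
xy = rng.uniform([0, 0], [length, width], size=(3000, 2))
z = fill_height_true + rng.normal(0, 0.03, 3000)  # noisy top-surface points
pts = np.column_stack([xy, z])

# Least-squares plane z = a*x + b*y + c through the surface points.
A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
coeff, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
a, b, c = coeff

# Backfill volume under the fitted plane (tunnel floor at z = 0), and progress ratio.
mean_surface_height = a * length / 2 + b * width / 2 + c
backfill_volume = mean_surface_height * length * width
progress = backfill_volume / (length * width * height)
print(f"fitted plane: z = {a:.4f}x + {b:.4f}y + {c:.3f}; backfill progress ~ {progress:.1%}")
```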

32 pages, 382 KiB  
Article
Classical Gasses with Singular Densities
by Luca Di Persio, Yuri Kondratiev and Viktorya Vardanyan
Mathematics 2024, 12(24), 4035; https://doi.org/10.3390/math12244035 - 23 Dec 2024
Viewed by 653
Abstract
We investigate classical continuous systems characterized by singular velocity distributions, where the corresponding Radon measures are defined over the entire space with infinite mass. These singular distributions are used to model particle velocities in systems where traditional velocity distributions do not apply. As a result, the particle positions in such systems no longer conform to conventional configurations in physical space. This necessitates the development of novel analytical tools to understand the underlying models. To address this, we introduce a new conceptual framework that redefines particle configurations in phase space, where each particle is represented by its spatial position and a velocity vector. The key idea is the construction of the Plato space, which is designed to represent idealized particle configurations where the total velocity remains bounded within any compact subset of phase space. This space serves as a crucial bridge to the space of vector-valued discrete Radon measures, where each measure captures the velocity distribution over the entire system. Given the inherent complexity of analyzing infinite-dimensional spaces, we tackle the problem by reformulating it onto a finite-dimensional configuration space. This is achieved by decomposing the infinite space into smaller, more manageable components. A central tool in this reformulation is the K-transform, which is pivotal in enabling harmonic analysis of the space. The K-transform allows us to represent the system in terms of components that are more amenable to analysis, thus simplifying the study of the system’s dynamics. Furthermore, we extend previous results in the study of correlation functions by developing correlation measures tailored for these vector-valued Radon measures. These generalized functions provide deeper insights into the correlations between particle positions and velocities, expanding the range of analysis to systems with singular velocity distributions. Through this approach, we develop a robust mathematical framework that sheds light on the structure and dynamics of complex particle systems, especially those characterized by singular velocity distributions. Our results offer a new perspective on systems with non-traditional velocity distributions, advancing the theory and methodology of particle systems in both classical and modern contexts. Full article

29 pages, 1608 KiB  
Article
Performance Analysis and Design Principles of Wireless Mutual Broadcast Using Heterogeneous Transmit Power for Proximity-Aware Services
by Taesoo Kwon and HyeonWoo Lee
Sensors 2024, 24(24), 8045; https://doi.org/10.3390/s24248045 - 17 Dec 2024
Viewed by 814
Abstract
As proximity-aware services among devices such as sensors, IoT devices, and user equipment are expected to facilitate a wide range of new applications in the beyond 5G and 6G era, managing heterogeneous environments with diverse node capabilities becomes essential. This paper analytically models and characterizes the performance of heterogeneous random access-based wireless mutual broadcast (RA-WMB) with distinct transmit (Tx) power levels, leveraging a marked Poisson point process to account for nodes’ various Tx power. In particular, this study enables the performance of RA-WMB with heterogeneous Tx power to be represented in terms of the performance of RA-WMB with a common Tx power by deriving an equivalent Tx power based on the probability distribution of heterogeneous Tx power and the path loss exponent. This approach allows for an analytical and quantitative comparison of heterogeneous RA-WMB performance with the common Tx power configuration. Further, the study derives performance ratios among node groups with distinct Tx power levels and formulates an optimization problem to design a heterogeneous Tx power configuration that balances individual node group performance improvements with overall network performance, yielding the optimal Tx power configuration. A closed-form suboptimal transmission probability (TxPr) is also proposed to improve heterogeneous RA-WMB performance, providing an efficient alternative to iterative methods for the optimal TxPr. Numerical results demonstrate the accuracy of performance analysis and highlight the effectiveness of the proposed designs. Full article
(This article belongs to the Special Issue Advances in Wireless Sensor and Mobile Networks)
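
The marked-PPP model gives each node its own Tx power mark. Under a power-law path loss P·r^(-α), a node at distance r with power P delivers exactly the same received power as a unit-power node at distance r·P^(-1/α), which is the intuition behind folding heterogeneous powers into an equivalent common configuration. The sketch below only checks this numerically for aggregate received power at the origin; the density, power levels, and exponent are invented, and the paper's equivalent-power derivation and optimization are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)
lam, r_min, r_max, alpha = 1e-3, 1.0, 500.0, 4.0   # node density (/m^2), annulus bounds (m), path-loss exponent

def aggregate_power(remap):
    """Aggregate received power at the origin from a marked PPP in an annulus.
    Each node carries a random Tx power mark; if remap=True, powers are folded into
    displaced distances r * P**(-1/alpha) and unit power is used instead."""
    totals = []
    for _ in range(2000):
        n = rng.poisson(lam * np.pi * (r_max**2 - r_min**2))
        r = np.sqrt(rng.uniform(r_min**2, r_max**2, n))    # uniform in area
        p = rng.choice([0.1, 1.0, 10.0], size=n)            # heterogeneous Tx power marks
        if remap:
            r, p = r * p ** (-1.0 / alpha), np.ones(n)
        totals.append(np.sum(p * r ** (-alpha)))
    return np.array(totals)

het, mapped = aggregate_power(False), aggregate_power(True)
print(f"mean aggregate power  heterogeneous: {het.mean():.3e}   remapped: {mapped.mean():.3e}")
print(f"95th percentile       heterogeneous: {np.percentile(het, 95):.3e}   remapped: {np.percentile(mapped, 95):.3e}")
```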

24 pages, 47033 KiB  
Article
Hybrid Denoising Algorithm for Architectural Point Clouds Acquired with SLAM Systems
by Antonella Ambrosino, Alessandro Di Benedetto and Margherita Fiani
Remote Sens. 2024, 16(23), 4559; https://doi.org/10.3390/rs16234559 - 5 Dec 2024
Cited by 1 | Viewed by 1693
Abstract
The sudden development of systems capable of rapidly acquiring dense point clouds has underscored the importance of data processing and pre-processing prior to modeling. This work presents the implementation of a denoising algorithm for point clouds acquired with LiDAR SLAM systems, aimed at optimizing data processing and the reconstruction of surveyed object geometries for graphical rendering and modeling. Implemented in a MATLAB environment, the algorithm utilizes an approximate modeling of a reference surface with Poisson’s model and a statistical analysis of the distances between the original point cloud and the reconstructed surface. Tested on point clouds from historically significant buildings with complex geometries scanned with three different SLAM systems, the results demonstrate a satisfactory reduction in point density to approximately one third of the original. The filtering process effectively removed about 50% of the points while preserving essential details, facilitating improved restitution and modeling of architectural and structural elements. This approach serves as a valuable tool for noise removal in SLAM-derived datasets, enhancing the accuracy of architectural surveying and heritage documentation. Full article
(This article belongs to the Special Issue 3D Scene Reconstruction, Modeling and Analysis Using Remote Sensing)
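
The denoising recipe, model a reference surface, measure each point's distance to it, and discard statistical outliers, can be sketched with a simple polynomial surface standing in for the Poisson model. In the sketch below, the synthetic facade, the quadratic reference, and the 2-sigma cut-off are assumptions for illustration, not the authors' MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic SLAM-like scan of a gently curved wall, with 5% gross outliers injected.
n = 20_000
xy = rng.uniform(0, 10, size=(n, 2))
z = 0.05 * xy[:, 0] ** 2 - 0.1 * xy[:, 1] + rng.normal(0, 0.01, n)
outliers = rng.random(n) < 0.05
z[outliers] += rng.normal(0, 0.5, outliers.sum())           # noisy returns off the surface
pts = np.column_stack([xy, z])

# Reference surface: least-squares quadratic z = f(x, y), standing in for the Poisson model.
X = np.column_stack([np.ones(n), xy, xy ** 2, (xy[:, 0] * xy[:, 1])[:, None]])
coeff, *_ = np.linalg.lstsq(X, pts[:, 2], rcond=None)
residual = pts[:, 2] - X @ coeff

# Statistical filter: keep points within 2 standard deviations of the reference surface.
sigma = residual.std()
keep = np.abs(residual) < 2.0 * sigma
print(f"kept {keep.sum()} of {n} points ({keep.mean():.1%}); "
      f"removed {(~keep & outliers).sum()} of {outliers.sum()} injected outliers")
```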

20 pages, 889 KiB  
Article
Slotted ALOHA Based Practical Byzantine Fault Tolerance (PBFT) Blockchain Networks: Performance Analysis and Optimization
by Ziyi Zhou, Oluwakayode Onireti, Lei Zhang and Muhammad Ali Imran
Sensors 2024, 24(23), 7688; https://doi.org/10.3390/s24237688 - 30 Nov 2024
Viewed by 1145
Abstract
Practical Byzantine Fault Tolerance (PBFT) is one of the most popular consensus mechanisms for consortium and private blockchain technology. It has been recognized as a candidate consensus mechanism for Internet of Things networks, as it offers lower resource requirements and high performance when compared with other consensus mechanisms such as proof of work. In this paper, by considering that the blockchain nodes are wirelessly connected, we model the network node distribution and the transaction arrival rate as Poisson point processes, and we develop a framework for evaluating the performance of the wireless PBFT network. The framework utilizes slotted ALOHA as its multiple access technique. We derive the end-to-end success probability of the wireless PBFT network, which serves as the basis for obtaining other key performance indicators, namely the optimal transmission interval, the transaction throughput and delay, and the viable area. The viable area represents the minimum PBFT coverage area that guarantees the liveness, safety, and resilience of the PBFT protocol while satisfying a predefined end-to-end success probability. Results show that the transmission interval required to make the wireless PBFT network viable can be reduced if either the end-to-end success probability requirement or the number of faulty nodes is lowered. Full article
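
To give a feel for how a slotted-ALOHA access layer couples to PBFT's message pattern, the Monte Carlo sketch below treats a node's broadcast as successful when it transmits alone in at least one slot of a phase, and calls a consensus round successful when the pre-prepare and at least 2f+1 prepare and commit broadcasts get through. This per-phase accounting, along with the node count, transmit probability, and slot budget, is a deliberate simplification and not the paper's analytical end-to-end success probability.

```python
import numpy as np

rng = np.random.default_rng(8)

f = 2                            # tolerated Byzantine faults
n_nodes = 3 * f + 1              # PBFT replicas (primary included)
p_tx, n_slots = 0.15, 40         # per-slot transmit probability and slots per phase (assumed)

def phase_broadcast_success(n_active):
    """Each of n_active nodes contends over n_slots slotted-ALOHA slots; a node's
    broadcast succeeds if it transmits alone in at least one slot."""
    tx = rng.random((n_slots, n_active)) < p_tx           # who transmits in each slot
    alone = tx & (tx.sum(axis=1, keepdims=True) == 1)     # slots with exactly one transmitter
    return alone.any(axis=0)                              # per-node success flags

rounds, success = 5000, 0
for _ in range(rounds):
    preprepare_ok = phase_broadcast_success(1)[0]         # only the primary contends (simplification)
    prepare_ok = phase_broadcast_success(n_nodes).sum() >= 2 * f + 1
    commit_ok = phase_broadcast_success(n_nodes).sum() >= 2 * f + 1
    success += preprepare_ok and prepare_ok and commit_ok
print(f"estimated end-to-end consensus success probability: {success / rounds:.3f}")
```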