
Search Results (76)

Search Parameters:
Keywords = sub-exponential distribution

36 pages, 952 KB  
Article
On Minimum Bregman Divergence Inference
by Soumik Purkayastha and Ayanendranath Basu
Mathematics 2026, 14(4), 670; https://doi.org/10.3390/math14040670 - 13 Feb 2026
Viewed by 214
Abstract
The density power divergence (DPD) is a well-studied member of the Bregman divergence family and forms the basis of widely used minimum divergence estimators that balance efficiency and robustness. In this paper, we introduce and study a new sub-class of Bregman divergences, termed the exponentially weighted divergence (EWD), designed to generate competitive and practically interpretable inference procedures. The EWD is constructed so that its associated weight function remains bounded within the interval [0, 1], which facilitates a transparent interpretation of robustness through controlled downweighting of low-density observations and avoids excessive influence from high-density points. We develop minimum EWD estimators (MEWDEs) within a general framework accommodating independent but non-homogeneous data, thereby extending classical minimum divergence theory beyond the i.i.d. setting. Under standard regularity conditions, we establish Fisher consistency and asymptotic normality, and we analyze robustness properties through influence function calculations. The EWD framework is further extended to parametric hypothesis testing, for which we derive the asymptotic null distribution of a Bregman divergence-based test statistic. Extensive simulation studies and real-data applications demonstrate that the proposed estimators perform comparably to, and often more robustly than, existing DPD-based procedures, particularly under moderate to heavy contamination, while retaining high efficiency under clean data. Overall, the EWD provides a tractable and interpretable alternative within the Bregman divergence class for robust parametric estimation and testing. Full article
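The EWD weight function and the MEWDE estimating equations are specified in the article itself; as context, here is a minimal sketch of the classical minimum DPD estimator that the EWD family generalizes, for a normal model with a hypothetical 5% contamination and tuning constant α = 0.5 (all numeric choices are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(params, x, alpha=0.5):
    # Minimum density power divergence objective (Basu et al., 1998):
    #   H_n(theta) = ∫ f^(1+α) dμ − (1 + 1/α) (1/n) Σ f(X_i)^α
    # For N(mu, sigma) the integral term has the closed form
    #   σ^(−α) (1 + α)^(−1/2) (2π)^(−α/2).
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # parametrize on the log scale to keep σ > 0
    integral = sigma**(-alpha) / (np.sqrt(1.0 + alpha) * (2.0 * np.pi)**(alpha / 2.0))
    empirical = np.mean(norm.pdf(x, mu, sigma)**alpha)
    return integral - (1.0 + 1.0 / alpha) * empirical

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 10.0)])  # 5% gross outliers
res = minimize(dpd_objective, x0=[np.median(x), 0.0], args=(x,), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])  # robust estimates, close to (0, 1)
```

Unlike maximum likelihood, the `f^α` weighting automatically downweights the outlying points, which is the same robustness mechanism the bounded EWD weight function is designed to make interpretable.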
13 pages, 279 KB  
Article
Accelerating Propagation Induced by Slowly Decaying Initial Data for Nonlocal Reaction-Diffusion Equations in Cylinder Domains
by Ru Hou and Yu Lu
Axioms 2025, 14(12), 925; https://doi.org/10.3390/axioms14120925 - 16 Dec 2025
Viewed by 269
Abstract
This paper investigates the phenomenon of accelerating propagation for nonlocal reaction-diffusion models with spatial and trait structure in a cylinder domain R×Ω. Unlike previous studies focusing on exponentially decaying or compactly supported initial data, we consider initial functions that decay more slowly than any exponential function—such as algebraic or sub-exponential decay. By constructing a pair of super- and sub-solutions via the principal eigenfunction ψ0 of the trait operator, we prove that the solution propagates with infinitely increasing speed in the spatial direction. Explicit upper and lower bounds for the locations of level sets are derived, illustrating how the decay rate of the initial data determines the acceleration profile. The results are extended to a more general model with space- and trait-dependent competition kernels under a boundedness assumption (H3). This work highlights the crucial role of slowly decaying tails in the initial distribution in driving accelerated invasion fronts, providing a theoretical foundation for assessing propagation risks in ecology and population dynamics. Full article
38 pages, 6756 KB  
Article
Generator of Aperiodic Pseudorandom Pulse Trains with Variable Parameters Based on Arduino
by Nebojša Andrijević, Zoran Lovreković, Marina Milovanović, Dragana Božilović Đokić and Vladimir Tomašević
Electronics 2025, 14(23), 4577; https://doi.org/10.3390/electronics14234577 - 22 Nov 2025
Viewed by 620
Abstract
Aperiodic pseudo-random impulse (APPI) trains represent deterministic yet reproducible sequences that mimic the irregularity of natural processes. They allow complete control over inter-spike intervals (ISIs) and pulse widths (PWs). Such signals are increasingly relevant for low-probability-of-intercept (LPI) communications, radar testing, and biomedical applications, where controlled variability mitigates adaptation and enhances stimulation efficiency. This paper presents a modular APPI generator implemented on an Arduino Mega platform, featuring programmable statistical models for ISI (exponential distribution) and PW (uniform distribution), dual-timing mechanisms (baseline loop and Timer/ISR, clear-timer on compare (CTC)), a real-time telemetry and software interface, and a safe output chain with opto-isolation and current limitation. The generator provides both reproducibility and tunable stochastic dynamics. Experimental validation includes jitter analysis, Kolmogorov–Smirnov tests, Q–Q plots, spectral and autocorrelation analysis, and load integration using a constant-current source with compliance margins. The results demonstrate that the Timer/ISR (CTC) implementation achieves significantly reduced jitter compared to the baseline loop, while maintaining the statistical fidelity of ISI and PW distributions, broad spectral characteristics, and fast decorrelation. Experimental verification was extended across a wider parameter space (λ = 0.1–100 Hz, PW = 10 µs–100 ms, 10 repetitions per condition), confirming robustness and repeatability. Experimental validation confirmed accurate Poisson/Uniform ISI generation, sub-millisecond jitter stability in the timer-controlled mode, robustness across λ = 0.1–100 Hz and PW = 10 µs–100 ms, and preliminary compliance with isolation and leakage limits. The accompanying Python GUI provides real-time control, telemetry, and data-logging capabilities. 
This work establishes a reproducible, low-cost, and open-source framework for APPI generation, with direct applicability in laboratory and field environments. Full article
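The statistical model the generator implements — exponentially distributed ISIs and uniformly distributed PWs — together with the Kolmogorov–Smirnov validation step can be sketched on the host side; the rate λ and PW bounds below are drawn from the parameter space reported in the abstract, while the sample size and seed are illustrative:

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(42)
lam = 10.0        # event rate in Hz (within the reported 0.1–100 Hz range)
n_pulses = 2000

isi = rng.exponential(scale=1.0 / lam, size=n_pulses)   # inter-spike intervals
pw = rng.uniform(10e-6, 100e-3, size=n_pulses)          # pulse widths, 10 µs–100 ms
onsets = np.cumsum(isi)                                 # pulse onset times in seconds

# Kolmogorov–Smirnov check that the generated ISIs match Exponential(λ)
stat, p_value = kstest(isi, "expon", args=(0.0, 1.0 / lam))
```

On the microcontroller the same draws would be produced by a seeded PRNG with inverse-transform sampling, which is what makes the "pseudo-random yet reproducible" property possible.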
36 pages, 4983 KB  
Article
Application of Multivariate Exponential Random Graph Models in Small Multilayer Networks: Latin America, Tariffs, and Importation
by Oralia Nolasco-Jáuregui, Luis Alberto Quezada-Téllez, Yuri Salazar-Flores and Adán Díaz-Hernández
Mathematics 2025, 13(19), 3078; https://doi.org/10.3390/math13193078 - 25 Sep 2025
Viewed by 1341
Abstract
This work is framed as an application of static and small exponential random graph models for complex networks in multiple layers. This document revisits the small network and exhibits its potential. Examining the bibliography reveals considerable interest in large and dynamic complex networks. This research examines the application of small networks (50,000 population) for analyzing global commerce, conducting a comparative graph-structure analysis of the tariff and importation multilayer networks. The authors created and described the scenario where readers can compare the graph models visually, at a glance. The proposed methodology represents a significant contribution, providing detailed descriptions and instructions, thereby ensuring the operational effectiveness of the application. The method is organized into five distinct blocks (Bn) and an accompanying appendix containing reproduction notes. Each block encompasses a primary task and associated sub-tasks, articulated through a hierarchical series of steps. The most challenging mathematical aspects of a small network analysis pertain to modeling and sample selection (sel_p). This document describes several modeling tasks that confirm that sel_p = 10 is the best option, including modeling the edges and the convergence and covariance model parameters, modeling the node factor by vertex names, Pearson residual distributions, goodness of fit, and more. This method establishes a foundation for addressing the intricate questions derived from the established hypotheses. It provides eight model specifications and a detailed description. Given the scope of this investigation, a historical examination of the relationships between different network actors is deemed essential, providing context for the study of actors engaged in global trade. Various analytical perspectives (six), encompassing degree analyses, diameter and edges, hubs and authority, co-citation and cliques in mutual and collapse approaches, k-core, and clustering, facilitate the identification of the specific roles played by actors within the importation network in comparison to the tariff network. This study focuses on the Latin American and Caribbean region. Full article
23 pages, 3558 KB  
Article
Research on High-Reliability Energy-Aware Scheduling Strategy for Heterogeneous Distributed Systems
by Ziyu Chen, Jing Wu, Lin Cheng and Tao Tao
Big Data Cogn. Comput. 2025, 9(6), 160; https://doi.org/10.3390/bdcc9060160 - 17 Jun 2025
Cited by 2 | Viewed by 2765
Abstract
With the demand for workflow processing driven by edge computing in the Internet of Things (IoT) and cloud computing growing at an exponential rate, task scheduling in heterogeneous distributed systems has become a key challenge to meet real-time constraints in resource-constrained environments. Existing studies attempt to achieve the best balance among time constraints, energy efficiency, and system reliability in Dynamic Voltage and Frequency Scaling (DVFS) environments. This study proposes a two-stage collaborative optimization strategy. With the help of an innovative algorithm design and theoretical analysis, the multi-objective optimization challenges mentioned above are systematically solved. First, based on a reliability-constrained model, we propose a topology-aware dynamic priority scheduling algorithm (EAWRS). This algorithm constructs a node priority function by incorporating in-degree/out-degree weighting factors and critical path analysis to enable multi-objective optimization. Second, to address the time-varying reliability characteristics introduced by DVFS, we propose a Fibonacci search-based dynamic frequency scaling algorithm (SEFFA). This algorithm effectively reduces energy consumption while ensuring task reliability, achieving sub-optimal processor energy adjustment. The collaborative mechanism of EAWRS and SEFFA addresses the challenge of DAG-based dynamic scheduling in heterogeneous multi-core processor systems in IoT environments. Experimental evaluations conducted at various scales show that, compared with the three most advanced scheduling algorithms, the proposed strategy reduces energy consumption by an average of 14.56% (up to 58.44% under high-reliability constraints) and shortens the makespan by 2.58–56.44% while strictly meeting reliability requirements. Full article
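The details of SEFFA are in the article; the Fibonacci search it builds on — bracketing the minimizer of a unimodal cost by shrinking the interval at Fibonacci ratios, so each iteration needs only one new function evaluation — can be sketched generically (the energy-vs-frequency curve and frequency range below are hypothetical stand-ins):

```python
def fibonacci_search(f, lo, hi, n=25):
    """Minimize a unimodal function f on [lo, hi] using n Fibonacci-ratio
    interval reductions; returns the midpoint of the final bracket."""
    fib = [1, 1]
    while len(fib) < n + 2:
        fib.append(fib[-1] + fib[-2])
    x1 = lo + fib[n - 1] / fib[n + 1] * (hi - lo)
    x2 = lo + fib[n] / fib[n + 1] * (hi - lo)
    f1, f2 = f(x1), f(x2)
    for i in range(1, n):
        if f1 < f2:             # minimum lies in [lo, x2]; reuse x1 as new x2
            hi, x2, f2 = x2, x1, f1
            x1 = lo + fib[n - i - 1] / fib[n - i + 1] * (hi - lo)
            f1 = f(x1)
        else:                   # minimum lies in [x1, hi]; reuse x2 as new x1
            lo, x1, f1 = x1, x2, f2
            x2 = lo + fib[n - i] / fib[n - i + 1] * (hi - lo)
            f2 = f(x2)
    return (lo + hi) / 2

# hypothetical convex energy model over a processor frequency range (GHz)
f_opt = fibonacci_search(lambda f: (f - 1.2)**2 + 0.3, 0.4, 2.0)
```

After n reductions the bracket has shrunk by a factor of roughly F(n+1)/2, which is why the method suits tight real-time scheduling loops: the cost per decision is a fixed, small number of evaluations.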
23 pages, 11228 KB  
Article
R-MLGTI: A Grid- and R-Tree-Based Hybrid Index for Unevenly Distributed Spatial Data
by Yuqin Li, Jining Yan, Xiaohui Huang, Xiangyou He, Ze Deng and Yunliang Chen
ISPRS Int. J. Geo-Inf. 2025, 14(6), 231; https://doi.org/10.3390/ijgi14060231 - 12 Jun 2025
Cited by 1 | Viewed by 1666
Abstract
In recent years, with the development of sensor technology, the volume of spatial data has grown exponentially. However, these data are often unevenly distributed, and traditional indexing methods cannot predict the overall data distribution when data are continuously inserted into the database. This makes them inefficient for indexing large-scale, unevenly distributed spatial data. This paper proposes a hybrid indexing method based on the grid-indexing and R-tree methods, called R-MLGTI (R-Multi-Level Grid–Tree Index). The method first divides the two-dimensional space using the Z-curve to form multiple sub-grid regions. When incrementally inserting data, R-MLGTI calculates the grid encoding of the data and computes a density metric c(G) for the corresponding grid G, which quantifies the sparsity or density of that grid region. All data in sparse grids are indexed by R-trees associated with grid encodings. In dense grid areas, a finer-grained space-filling curve is recursively applied for further spatial division. This process forms multiple sub-grids until the data within all sub-grids become sparse, at which point the original data are re-indexed according to the sparse grids. Finally, this paper presents a prototype system of the in-memory R-MLGTI and conducts benchmark tests for incremental data import and range queries. The incremental data insertion performance of R-MLGTI is lower than that of the grid-indexing and R-tree methods; however, on various unevenly distributed simulated datasets, the average query time for different query regions in R-MLGTI is about 6.49% faster than that of the grid-indexing method and about 51.78% faster than that of the R-tree method. On a real dataset, Landsat 7 ETM+, which contains 2,585,203 records, the average query time for various query ranges is 61.39% faster than that of the grid-indexing method and 17.01% faster than that of the R-tree method.
Experiments show that R-MLGTI performs better than the traditional R-tree and grid-indexing methods in large-scale, unevenly distributed spatial data query requests. Full article
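The Z-curve step the index relies on is standard Morton encoding: a cell's code is formed by interleaving the bits of its column and row indices, so spatially adjacent cells tend to receive nearby codes. A minimal sketch (the 4×4 grid resolution is chosen only for illustration):

```python
def morton_code(col: int, row: int, bits: int = 16) -> int:
    """Z-order (Morton) code of a 2-D grid cell: the bits of the two
    coordinates are interleaved, so nearby cells get nearby codes."""
    code = 0
    for i in range(bits):
        code |= ((col >> i) & 1) << (2 * i)        # column bits at even positions
        code |= ((row >> i) & 1) << (2 * i + 1)    # row bits at odd positions
    return code

# Z-order traversal of a 4×4 grid follows the characteristic "Z" pattern
cells = sorted(((c, r) for c in range(4) for r in range(4)),
               key=lambda cr: morton_code(*cr))
```

Recursing into a dense cell simply appends finer interleaved bits to its code, which is how the multi-level sub-grids keep a consistent one-dimensional ordering.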
25 pages, 4101 KB  
Review
Digital Transformation in the Shipping Industry: A Network-Based Bibliometric Analysis
by Luca Ferrarini, Yannes Filippopoulos and Zoran Lajic
J. Mar. Sci. Eng. 2025, 13(5), 894; https://doi.org/10.3390/jmse13050894 - 30 Apr 2025
Cited by 4 | Viewed by 4019
Abstract
This paper presents a network-based bibliometric analysis of digital transformation in the shipping industry, a sector undergoing rapid change due to advancements in automation, artificial intelligence, blockchain, and the Internet of Things. The study synthesizes existing knowledge to identify trends, challenges, and opportunities for industry stakeholders and researchers. Unlike previous literature reviews, this work adopts a graph theory approach applied to a large dataset of scientific publications, without predefined technological or industrial sub-domains. Data were collected from EBSCO, ProQuest, and IEEE Xplore, then refined using OpenAlex to comprise 2293 scientific publications. The analysis includes descriptive statistics, co-authorship network analysis, co-citation network analysis, and thematic analysis. The findings reveal a significant increase in publications since 2005, with exponential growth after 2015. They also suggest a potential inflection point after 2024. A small percentage of authors and institutions account for a disproportionate share of publications, suggesting a skewed distribution of research efforts and encouraging funding agencies to broaden maritime research worldwide. The co-authorship network exhibits a heavy-tail distribution and interconnected communities, indicating extensive national and international collaborations. The co-citation analysis identifies key research areas such as fuel consumption optimization, safety and risk management, and smart port development. Thematic analysis highlights the growing importance of artificial intelligence and cybersecurity. Full article
19 pages, 306 KB  
Article
Asymptotic Tail Moments of the Time Dependent Aggregate Risk Model
by Dechen Gao and Jiandong Ren
Mathematics 2025, 13(7), 1153; https://doi.org/10.3390/math13071153 - 31 Mar 2025
Viewed by 432
Abstract
In this paper, we study an extension of the classical compound Poisson risk model with a dependence structure between the inter-claim times and the subsequent claim sizes. Under a flexible dependence structure, and assuming that the claim amounts are heavy-tail distributed, we derive asymptotic tail moments for the aggregate claims. Numerical examples and simulation studies are provided to validate the results. Full article
(This article belongs to the Section D1: Probability and Statistics)
30 pages, 2840 KB  
Article
Development and Engineering Applications of a Novel Mixture Distribution: Exponentiated and New Topp–Leone-G Families
by Hebatalla H. Mohammad, Sulafah M. S. Binhimd, Abeer A. EL-Helbawy, Gannat R. AL-Dayian, Fatma G. Abd EL-Maksoud and Mervat K. Abd Elaal
Symmetry 2025, 17(3), 399; https://doi.org/10.3390/sym17030399 - 7 Mar 2025
Viewed by 912
Abstract
In this paper, two different families are mixed: the exponentiated and new Topp–Leone-G families. This yields a new family, which we named the mixture of the exponentiated and new Topp–Leone-G family. Some statistical properties of the proposed family are obtained. Then, the mixture of two exponentiated new Topp–Leone inverse Weibull distributions is introduced as a sub-model of the mixture of exponentiated and new Topp–Leone-G family. Some related properties are studied, such as the quantile function, moments, moment generating function, and order statistics. Furthermore, the maximum likelihood and Bayes approaches are employed to estimate the unknown parameters, reliability, and hazard rate functions of the mixture of exponentiated and new Topp–Leone inverse Weibull (MENTL-IW) distribution. Bayes estimators are derived under both the symmetric squared error loss function and the asymmetric linear exponential loss function. The performance of the maximum likelihood and Bayes estimators is evaluated through a Monte Carlo simulation. The applicability and flexibility of the MENTL-IW distribution are demonstrated by well-fitting two real-world engineering datasets. The results demonstrate the superior performance of the MENTL-IW distribution compared to other competing models. Full article
(This article belongs to the Section Engineering and Materials)
19 pages, 768 KB  
Article
A New Lomax-G Family: Properties, Estimation and Applications
by Hanan Baaqeel, Hibah Alnashri and Lamya Baharith
Entropy 2025, 27(2), 125; https://doi.org/10.3390/e27020125 - 25 Jan 2025
Cited by 3 | Viewed by 1512
Abstract
Given the increasing number of phenomena that demand interpretation and investigation, developing new distributions and families of distributions has become increasingly essential. This article introduces a novel family of distributions based on the exponentiated reciprocal of the hazard rate function, named the new Lomax-G family of distributions. We demonstrate the family’s flexibility in modeling a wide range of lifetime events by deriving its cumulative and probability density functions. The new Lomax–Weibull (NLW) distribution is studied as a sub-model, with analytical and graphical evidence indicating its efficiency for reliability analysis and complex data modeling. The NLW density encompasses a variety of shapes, such as symmetrical, semi-symmetrical, right-skewed, left-skewed, and inverted J shapes. Furthermore, its hazard function exhibits a broad range of asymmetric forms. Five estimation techniques are used to determine the parameters of the proposed NLW distribution: maximum likelihood, percentile, least squares, weighted least squares, and Cramér–von Mises. The performance of the estimators under the studied inferential methods is investigated through a comparative Monte Carlo simulation study and numerical demonstration. Additionally, the effectiveness of the NLW is validated by means of four real-world datasets. The results indicate that the NLW distribution provides a more accurate fit than several competing models. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
32 pages, 3452 KB  
Review
Assessment of Reliability Allocation Methods for Electronic Systems: A Systematic and Bibliometric Analysis
by Rajkumar B. Patil, San Kyeong, Michael Pecht, Rahul A. Gujar and Sandip Mane
Stats 2025, 8(1), 11; https://doi.org/10.3390/stats8010011 - 24 Jan 2025
Cited by 4 | Viewed by 4188
Abstract
Reliability allocation is the process of assigning reliability targets to sub-systems within a system to meet the overall reliability requirements. However, many traditional reliability allocation methods rely on assumptions that are often unrealistic, leading to misleading, unachievable, and costly outcomes. This paper provides a historical review of reliability allocation methods, focusing on the Weighing Factor Method (WFM), with a detailed analysis of its main findings, assumptions, and limitations. Additionally, the review covers methods for reliability optimization, redundancy optimization, and multi-state system optimization, highlighting their strengths and shortcomings. A case study is presented to demonstrate how the assumption of an exponential distribution impacts the reliability allocation process, showing the limitations it imposes on practical implementations. Furthermore, a bibliometric analysis is conducted to assess publication trends in the field of reliability allocation. Through examples, particularly in the context of electronic systems using commercial off-the-shelf (COTS) components, the challenges are discussed, and recommendations for alternative approaches to improve the reliability allocation process are provided. Full article
(This article belongs to the Section Reliability Engineering)
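The exponential assumption examined in the case study enters allocation as follows: for a series system, R_sys(t) = Π_i exp(−λ_i t), so a system reliability target converts directly into a total failure-rate budget that weighting-factor methods then split among sub-systems. A minimal sketch, with hypothetical weights and target values:

```python
import math

def allocate_failure_rates(r_target: float, t: float, weights):
    """Split the system failure-rate budget among series sub-systems in
    proportion to weighting factors, assuming exponential lifetimes:
    R_sys(t) = exp(-lam_sys * t)  =>  lam_sys = -ln(r_target) / t."""
    lam_sys = -math.log(r_target) / t
    total = sum(weights)
    return [lam_sys * w / total for w in weights]

# hypothetical example: target R = 0.95 at t = 1000 h, three sub-systems
lams = allocate_failure_rates(0.95, 1000.0, [1.0, 2.0, 2.0])
r_sys = math.prod(math.exp(-l * 1000.0) for l in lams)  # recovers the 0.95 target
```

The simplicity is exactly the limitation the paper discusses: once sub-system lifetimes are not exponential (e.g., wear-out with increasing hazard), failure rates are time-varying and this additive budget no longer holds.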
20 pages, 752 KB  
Article
DUS Topp–Leone-G Family of Distributions: Baseline Extension, Properties, Estimation, Simulation and Useful Applications
by Divine-Favour N. Ekemezie, Kizito E. Anyiam, Mohammed Kayid, Oluwafemi Samson Balogun and Okechukwu J. Obulezi
Entropy 2024, 26(11), 973; https://doi.org/10.3390/e26110973 - 13 Nov 2024
Cited by 5 | Viewed by 1937
Abstract
This study introduces the DUS Topp–Leone family of distributions, a novel extension of the Topp–Leone distribution enhanced by the DUS transformer. We derive the cumulative distribution function (CDF) and probability density function (PDF), demonstrating the distribution’s flexibility in modeling various lifetime phenomena. The DUS-TL exponential distribution was studied as a sub-model, with analytical and graphical evidence revealing that it exhibits a unique unimodal shape, along with fat-tail characteristics, making it suitable for time-to-event data analysis. We evaluate parameter estimation methods, revealing that non-Bayesian approaches, particularly Maximum Likelihood and Least Squares, outperform Bayesian techniques in terms of bias and root mean square error. Additionally, the distribution effectively models datasets with varying skewness and kurtosis values, as illustrated by its application to total factor productivity data across African countries and the mortality rate of people who injected drugs. Overall, the DUS Topp–Leone family represents a significant advancement in statistical modeling, offering robust tools for researchers in diverse fields. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
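The DUS transform that names the family maps a baseline CDF G(x) to F(x) = (e^{G(x)} − 1)/(e − 1). A minimal sketch of the transform, shown here with a plain exponential baseline for brevity rather than the paper's Topp–Leone-G construction:

```python
import math

def dus_cdf(g: float) -> float:
    """DUS transform of a baseline CDF value g = G(x) in [0, 1]:
    F(x) = (exp(G(x)) - 1) / (e - 1)."""
    return (math.exp(g) - 1.0) / (math.e - 1.0)

def dus_exponential_cdf(x: float, lam: float) -> float:
    # illustrative baseline only: exponential, G(x) = 1 - exp(-lam * x)
    return dus_cdf(1.0 - math.exp(-lam * x))
```

Because the transform is parameter-free and strictly increasing on [0, 1], it reshapes the baseline's hazard behavior without adding parameters, which is why it is attractive as a family-generating device.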
17 pages, 292 KB  
Article
Arbitrary Random Variables and Wiman’s Inequality
by Andriy Kuryliak, Oleh Skaskiv and Andriy Bandura
Axioms 2024, 13(11), 739; https://doi.org/10.3390/axioms13110739 - 29 Oct 2024
Cited by 1 | Viewed by 1118
Abstract
We study the class of random entire functions given by power series, in which the coefficients are formed as the product of an arbitrary sequence of complex numbers and two sequences of random variables. One of them is the Rademacher sequence, and the other is an arbitrary complex-valued sequence from the class of sequences of random variables, determined by a certain restriction on the growth of absolute moments of a fixed degree of the maximum of the modulus over each finite subset of random variables. In the paper we prove a sharp Wiman–Valiron-type inequality for such random entire functions, which for a given p ∈ (0, 1) holds with probability p outside some set of finite logarithmic measure. We also consider another class of random entire functions given by power series with coefficients that, as above, are pairwise products of the elements of an arbitrary sequence of complex numbers and a sequence of complex-valued random variables described above. In this case, similar new non-improvable (sharp) inequalities are also obtained. Full article
(This article belongs to the Special Issue Recent Advances in Complex Analysis and Related Topics)
44 pages, 786 KB  
Article
New Statistical Residuals for Regression Models in the Exponential Family: Characterization, Simulation, Computation, and Applications
by Raydonal Ospina, Patrícia L. Espinheira, Leilo A. Arias, Cleber M. Xavier, Víctor Leiva and Cecilia Castro
Mathematics 2024, 12(20), 3196; https://doi.org/10.3390/math12203196 - 12 Oct 2024
Cited by 1 | Viewed by 3045
Abstract
Residuals are essential in regression analysis for evaluating model adequacy, validating assumptions, and detecting outliers or influential data. While traditional residuals perform well in linear regression, they face limitations in exponential family models, such as those based on the binomial and Poisson distributions, due to heteroscedasticity and dependence among observations. This article introduces a novel standardized combined residual for linear and nonlinear regression models within the exponential family. By integrating information from both the mean and dispersion sub-models, the new residual provides a unified diagnostic tool that enhances computational efficiency and eliminates the need for projection matrices. Simulation studies and real-world applications demonstrate its advantages in efficiency and interpretability over traditional residuals. Full article
(This article belongs to the Special Issue Statistical Simulation and Computation: 3rd Edition)
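The combined residual itself is defined in the paper; the classical Pearson residual it improves upon standardizes by the exponential-family variance function, e.g. V(μ) = μ for Poisson responses:

```python
import numpy as np

def pearson_residuals_poisson(y, mu):
    """Classical Pearson residuals for a Poisson regression fit:
    r_i = (y_i - mu_i) / sqrt(V(mu_i)), with variance function V(mu) = mu."""
    y = np.asarray(y, dtype=float)
    mu = np.asarray(mu, dtype=float)
    return (y - mu) / np.sqrt(mu)

r = pearson_residuals_poisson([4, 9, 1], [4.0, 4.0, 4.0])
```

As the abstract notes, such residuals only approximately standardize under heteroscedasticity and dependence, which is the gap the proposed combined residual addresses by also using the dispersion sub-model.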
17 pages, 3617 KB  
Article
Investigations on Millimeter-Wave Indoor Channel Simulations for 5G Networks
by Huthaifa Obeidat
Appl. Sci. 2024, 14(19), 8972; https://doi.org/10.3390/app14198972 - 5 Oct 2024
Cited by 5 | Viewed by 3648
Abstract
Due to the extensively accessible bandwidth of many tens of GHz, millimeter-wave (mmWave) and sub-terahertz (THz) frequencies are anticipated to play a significant role in 5G and 6G wireless networks and beyond. This paper presents investigations on mmWave bands within the indoor environment based on extensive simulations; the study considers the behavior of the omnidirectional and directional propagation characteristics, including path loss exponents (PLE), delay spread (DS), the number of clusters, and the number of rays per cluster at different frequencies (28 GHz, 39 GHz, 60 GHz, and 73 GHz) in both line-of-sight (LOS) and non-LOS (NLOS) propagation scenarios. This study finds that the PLE and DS depend on frequency; it was also found that, in NLOS scenarios, the number of clusters follows a Poisson distribution, while, in LOS, it follows a decaying exponential distribution. This study enhances understanding of indoor channel behavior at different frequency bands within the same environment; many research papers focus on one or two bands, whereas this paper considers four. The simulation is important as it provides insights into omnidirectional channel behavior at different frequencies, essential for indoor channel planning. Full article
(This article belongs to the Special Issue 5G and Beyond: Technologies and Communications)
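In such studies the path-loss exponent typically enters through the close-in free-space reference model PL(d) = PL(d₀) + 10·n·log₁₀(d/d₀). A minimal sketch — the 61.4 dB default is the free-space loss at d₀ = 1 m for 28 GHz, while the exponent and distance below are illustrative, not values from this paper:

```python
import math

def path_loss_db(d_m: float, ple: float, pl_d0_db: float = 61.4,
                 d0_m: float = 1.0) -> float:
    """Close-in (CI) reference path-loss model common in mmWave work:
    PL(d) = PL(d0) + 10 * n * log10(d / d0), with path-loss exponent n."""
    return pl_d0_db + 10.0 * ple * math.log10(d_m / d0_m)

# e.g. an indoor LOS-like exponent n = 2 at 10 m from the transmitter
pl10 = path_loss_db(10.0, 2.0)
```

Fitting n per band to the simulated data is what makes the reported frequency dependence of the PLE directly comparable across the four bands.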