Search Results (2,839)

Search Parameters:
Keywords = gaussian distribution

21 pages, 6222 KB  
Article
Weighted, Mixed p Norm Regularization for Gaussian Noise-Based Denoising Method Extension
by Yuanmin Wang and Jinsong Leng
Mathematics 2026, 14(2), 298; https://doi.org/10.3390/math14020298 - 14 Jan 2026
Abstract
Many denoising methods model noise as Gaussian. However, the real-world noise captured by camera devices does not follow a Gaussian distribution, so these methods perform poorly when applied to real-world image denoising tasks. In this work, we show that spatial correlation in the noise and variation in noise intensity are the main factors limiting the performance of Gaussian noise-based methods, and we accordingly propose an extension of such methods based on a weighted, mixed non-convex p norm. The proposed method first strengthens the intensity of the noise pattern in the original denoising result through the Guided Filter, then removes the over-amplified frequencies in local areas with the proposed regularization term. We prove that the optimal solution can be reached through a sub-gradient-based iterative optimization scheme, and we further reduce the computational cost by optimizing the initial values. Numerical experiments show that the proposed extension balances texture preservation and noise removal well, and the PSNR of its results is greatly improved, even outperforming recently proposed realistic noise removal methods, including deep learning-based ones. Full article
(This article belongs to the Special Issue Mathematical Methods for Image Processing and Computer Vision)

15 pages, 3927 KB  
Article
Leaflet Lengths and Commissural Dimensions as the Primary Determinants of Orifice Area in Mitral Regurgitation: A Sobol Sensitivity Analysis
by Ashkan Bagherzadeh, Vahid Keshavarzzadeh, Patrick Hoang, Steve Kreuzer, Jiang Yao, Lik Chuan Lee, Ghassan S. Kassab and Julius Guccione
Bioengineering 2026, 13(1), 97; https://doi.org/10.3390/bioengineering13010097 - 14 Jan 2026
Abstract
Mitral valve orifice area is a key functional metric that depends on complex geometric features, motivating a systematic assessment of the relative influence of these parameters. In this study, the mitral valve geometry is parameterized using twelve geometric variables, and a global sensitivity analysis based on Sobol indices is performed to quantify their relative importance. Because global sensitivity analysis requires many simulations, a Gaussian Process regressor is developed to efficiently predict the orifice area from the geometric inputs. Structural simulations of the mitral valve are carried out in Abaqus, focusing exclusively on the valve mechanics. The predicted distribution of orifice areas obtained from the Gaussian Process shows strong agreement with the ground-truth simulation results, and similar agreement is observed when only the most influential geometric parameters are varied. The analysis identifies a subset of geometric parameters that dominantly govern the mitral valve orifice area and can be reliably extracted from medical imaging modalities such as echocardiography. These findings establish a direct link between echocardiographic measurements and physics-based simulations and provide a framework for patient-specific assessment of mitral valve mechanics, with potential applications in guiding interventional strategies such as MitraClip placement. Full article
(This article belongs to the Section Biomedical Engineering and Biomaterials)
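The Saltelli-style Sobol estimation at the heart of the pipeline above can be sketched without the GP surrogate or the valve model; the toy additive function and all names below are illustrative, assuming independent uniform inputs:

```python
import numpy as np

def first_order_sobol(f, dim, n=50000, seed=0):
    """Saltelli-style estimator of first-order Sobol indices S_i."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))      # two independent sample matrices
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]       # A with column i taken from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# toy additive model: analytic first-order indices are a_i**2 / sum(a_j**2)
a = np.array([4.0, 2.0, 1.0])
S = first_order_sobol(lambda X: X @ a, dim=3)
```

For Y = Σ aᵢXᵢ with Xᵢ ~ U(0,1), the analytic first-order indices are aᵢ²/Σ aⱼ², which the estimator recovers to sampling error.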

34 pages, 7282 KB  
Article
Investigating the Uncertainty Quantification of Failure of Shallow Foundation of Cohesionless Soils Through Drucker–Prager Constitutive Model and Probabilistic FEM
by Ambrosios-Antonios Savvides
Geotechnics 2026, 6(1), 6; https://doi.org/10.3390/geotechnics6010006 - 14 Jan 2026
Abstract
Uncertainty quantification in science and engineering has become increasingly important due to advances in computational mechanics and numerical simulation techniques. In this work, the relationship between uncertainty in soil material parameters and the variability of failure loads and displacements of a shallow foundation is investigated. A Drucker–Prager constitutive law is implemented within a stochastic finite element framework. The random material variables considered are the critical state line slope c, the unload–reload path slope κ, and the hydraulic permeability k defined by Darcy’s law. The novelty of this work lies in the integrated stochastic u–p finite element framework, which combines Drucker–Prager plasticity, spatially varying material properties, and Latin Hypercube Sampling. This approach enables probabilistic prediction of failure loads, displacements, stresses, strains, and limit-state initiation points at reduced computational cost compared to conventional Monte Carlo simulations. Statistical post-processing of the output parameters is performed using the Kolmogorov–Smirnov test. The results indicate that, for the investigated configurations, the distributions of failure loads and displacements can be adequately approximated by Gaussian distributions, despite the presence of material nonlinearity. Furthermore, the influence of soil depth and load eccentricity on the limit-state response is quantified within the proposed probabilistic framework. Full article
(This article belongs to the Special Issue Recent Advances in Geotechnical Engineering (3rd Edition))
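Latin Hypercube Sampling, used above to reduce the number of finite element runs relative to plain Monte Carlo, is straightforward to sketch in plain NumPy (a minimal version; production codes usually add correlation control):

```python
import numpy as np

def latin_hypercube(n, dim, seed=0):
    """n points in [0, 1)^dim with exactly one point in each of the n
    equal-width strata along every axis."""
    rng = np.random.default_rng(seed)
    jitter = rng.random((n, dim))                       # position inside each stratum
    strata = np.column_stack([rng.permutation(n) for _ in range(dim)])
    return (strata + jitter) / n

samples = latin_hypercube(100, 3)
```

Each axis is cut into n equal strata and each stratum receives exactly one sample, which is what gives LHS its variance reduction over independent sampling.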

25 pages, 4730 KB  
Article
Process Capability Assessment and Surface Quality Monitoring in Cathodic Electrodeposition of S235JRC+N Electric-Charging Station
by Martin Piroh, Damián Peti, Patrik Fejko, Miroslav Gombár and Michal Hatala
Materials 2026, 19(2), 330; https://doi.org/10.3390/ma19020330 - 14 Jan 2026
Abstract
This study presents a statistically robust quality-engineering evaluation of an industrial cathodic electrodeposition (CED) process applied to large electric-charging station components. In contrast to predominantly laboratory-scale studies, the analysis is based on 1250 thickness measurements, enabling reliable assessment of process uniformity, positional effects, and long-term stability under real production conditions. The mean coating thickness was 21.84 µm with a standard deviation of 3.14 µm, fully within the specified tolerance window of 15–30 µm. One-way ANOVA revealed statistically significant but technologically small inter-station differences (F(49, 1200) = 3.49, p < 0.001), with an effect size of η2 ≈ 12.5%, indicating that most variability originates from inherent within-station common causes. Shewhart X¯–R–S control charts confirmed process stability, with all subgroup means and dispersions well inside the control limits and no evidence of special-cause variation. Distribution tests (χ2, Kolmogorov–Smirnov, Shapiro–Wilk, Anderson–Darling) detected deviations from perfect normality, primarily in the tails, attributable to the superposition of slightly heterogeneous station-specific distributions rather than fundamental non-Gaussian behaviour. Capability and performance indices were evaluated using Statistica and PalstatCAQ according to ISO 22514; the results (Cp = 0.878, Cpk = 0.808, Pp = 0.797, Ppk = 0.726) classify the process as conditionally capable, with improvement potential mainly linked to reducing positional effects and centering the mean closer to the target thickness. To complement the statistical findings, an AIAG–VDA FMEA was conducted across the entire value stream. The highest-risk failure modes—surface contamination, incorrect bath chemistry, and improper hanging—corresponded to the same mechanisms identified by SPC and ANOVA as contributors to thickness variability. Proposed corrective actions reduced RPN values by 50–62.5%, demonstrating strong potential for capability improvement. A predictive machine-learning model was implemented to estimate layer thickness and successfully reproduced the global trend while filtering process-related noise, offering a practical tool for future predictive quality control. Full article
(This article belongs to the Section Electronic Materials)
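The overall performance indices quoted above follow directly from the reported mean, standard deviation, and tolerance window; a minimal sketch (Cp/Cpk would additionally require the within-subgroup standard deviation, which the abstract does not give):

```python
def performance_indices(mean, sigma, lsl, usl):
    """Overall (long-term) indices: Pp compares tolerance width to 6*sigma,
    Ppk additionally penalizes an off-center mean."""
    pp = (usl - lsl) / (6 * sigma)
    ppk = min(usl - mean, mean - lsl) / (3 * sigma)
    return pp, ppk

# values reported in the abstract: mean 21.84 um, sigma 3.14 um, tolerance 15-30 um
pp, ppk = performance_indices(21.84, 3.14, 15.0, 30.0)
```

With the reported values this yields roughly 0.80 and 0.73, consistent (to rounding) with the abstract's Pp = 0.797 and Ppk = 0.726.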

18 pages, 3035 KB  
Article
FedENLC: An End-to-End Noisy Label Correction Framework in Federated Learning
by Yeji Cho and Junghyun Kim
Mathematics 2026, 14(2), 290; https://doi.org/10.3390/math14020290 - 13 Jan 2026
Abstract
In this paper, we propose FedENLC, an end-to-end noisy label correction model that performs model training and label correction simultaneously to fundamentally mitigate the label noise problem of federated learning (FL). FedENLC consists of two stages. In the first stage, the proposed model employs Symmetric Cross Entropy (SCE), a robust loss function for noisy labels, and label smoothing to prevent the model from being biased by incorrect information in noisy environments. Subsequently, a Bayesian Gaussian Mixture Model (BGMM) is utilized to detect noisy clients. BGMM mitigates extreme parameter bias through its prior distribution, enabling stable and reliable detection in FL environments where data heterogeneity and noisy labels coexist. In the second stage, only the top noisy clients with high noise ratios are selectively included in the label correction process. The selection of top noisy clients is determined dynamically by considering the number of classes, posterior probabilities, and the degree of data heterogeneity. Through this approach, the proposed model prevents performance degradation caused by incorrect detection, while improving both computational efficiency and training stability. Experimental results show that FedENLC achieves significantly improved performance over existing models on the CIFAR-10 and CIFAR-100 datasets under data heterogeneity settings along with four noise settings. Full article
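The noisy-client detection step can be illustrated with a plain two-component EM fit in one dimension, a simplified stand-in for the paper's Bayesian GMM; the per-client loss values below are invented:

```python
import numpy as np

def detect_noisy_clients(client_losses, n_iter=200):
    """Flag clients assigned to the higher-mean component of a two-component
    1-D Gaussian mixture fitted by EM to per-client loss values."""
    x = np.asarray(client_losses, float)
    mu = np.array([x.min(), x.max()])                   # spread initial means
    var = np.full(2, x.var() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)   # E-step responsibilities
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk       # M-step updates
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    return resp[:, int(np.argmax(mu))] > 0.5

# invented per-client mean losses: four clean clients, three noisy ones
flags = detect_noisy_clients([0.10, 0.12, 0.11, 0.09, 0.90, 0.95, 0.88])
```

The Bayesian variant additionally places priors on the mixture parameters, which is what the paper credits with avoiding extreme parameter bias under data heterogeneity.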
18 pages, 8082 KB  
Article
Application of Attention Mechanism Models in the Identification of Oil–Water Two-Phase Flow Patterns
by Qiang Chen, Haimin Guo, Xiaodong Wang, Yuqing Guo, Jie Liu, Ao Li, Yongtuo Sun and Dudu Wang
Processes 2026, 14(2), 265; https://doi.org/10.3390/pr14020265 - 12 Jan 2026
Abstract
Accurate identification of oil–water two-phase flow patterns is essential for ensuring the safety and operational efficiency of oil and gas extraction systems. While traditional methods using empirical models and sensor technologies have provided basic insights, they often struggle to capture the nonlinear features of complex operational conditions. To address the challenge of data scarcity commonly found in experimental settings, this study employs a data augmentation strategy that combines the Synthetic Minority Over-sampling Technique (SMOTE) with Gaussian noise injection, effectively expanding the feature space from 60 original experimental nodes. Next, a physics-constrained attention mechanism model was developed that incorporates a physical constraint matrix to effectively mask irrelevant feature interactions. Experimental results show that while the standard attention model (83.88%) and the baseline BP neural network (84.25%) have limitations in generalizing to complex regimes, the proposed physics-constrained model achieves a peak test accuracy of 96.62%. Importantly, the model demonstrates exceptional robustness in identifying complex transition regions—specifically Dispersed Oil-in-Water (DO/W) flows—where it improved recall rates by about 24.6% compared to baselines. Additionally, visualization of attention scores confirms that the distribution of attention weights aligns closely with fluid-dynamic mechanisms—favoring inclination for stratified flows and flow rate for turbulence-dominated dispersions—thus validating the model’s interpretability. This research offers a novel, interpretable approach for modeling dynamic feature interactions in multiphase flows and provides valuable insights for intelligent oilfield development. Full article
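The augmentation strategy described above — SMOTE-style interpolation plus Gaussian noise injection — can be sketched in a few lines; this is a class-agnostic toy version, and the 60-node feature matrix is random stand-in data:

```python
import numpy as np

def augment(X, n_new, k=3, noise_std=0.05, seed=0):
    """SMOTE-style synthesis: interpolate between a sample and one of its k
    nearest neighbours, then add Gaussian noise."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)                      # a point is not its own neighbour
    nn = np.argsort(dist, axis=1)[:, :k]
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        j = nn[i, rng.integers(k)]
        lam = rng.random()                              # interpolation factor in [0, 1)
        synth.append(X[i] + lam * (X[j] - X[i]))
    return np.asarray(synth) + rng.normal(0.0, noise_std, (n_new, X.shape[1]))

X = np.random.default_rng(1).random((60, 4))            # stand-in for 60 experimental nodes
X_aug = augment(X, n_new=240)
```

Interpolation keeps synthetic points inside the convex hull of the data, while the Gaussian jitter spreads them slightly beyond it, which is the "expanded feature space" effect the abstract describes.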

24 pages, 7954 KB  
Article
Machine Learning-Based Prediction of Maximum Stress in Observation Windows of HOV
by Dewei Li, Zhijie Wang, Zhongjun Ding and Xi An
J. Mar. Sci. Eng. 2026, 14(2), 151; https://doi.org/10.3390/jmse14020151 - 10 Jan 2026
Abstract
With advances in deep-sea exploration technologies, the use of human-occupied vehicles (HOVs) in marine science has become widespread. The observation window is a critical component, as its structural strength affects submersible safety and performance. Under load, it can experience stress concentration, deformation, cracking, and even catastrophic failure, and it is subjected to varying stress distributions in high-pressure environments. The maximum principal stress is the key quantity that determines the most likely failure of the window material. This study proposes an artificial intelligence-based method to predict the maximum principal stress of observation windows in HOVs for rapid safety assessment. Samples were designed, and strain data with corresponding maximum principal stress values were collected under different loading conditions. Three machine learning algorithms—transformer–CNN-BiLSTM, CNN-LSTM, and Gaussian process regression (GPR)—were employed for the analysis. Results show that the transformer–CNN-BiLSTM model achieved the highest accuracy, particularly at the point exhibiting the maximum principal stress value. Evaluation metrics, including mean squared error (MSE), mean absolute error (MAE), and root squared residual (RSR), confirmed its superior performance. The proposed hybrid model incorporates a positional encoding layer to enrich the input data with locational information and combines the strengths of bidirectional long short-term memory (BiLSTM), one-dimensional CNN, and Transformer encoders. This approach effectively captures local and global stress features, offering a reliable predictive tool for health monitoring of submersible observation windows. Full article
(This article belongs to the Section Ocean Engineering)

19 pages, 2464 KB  
Article
Research on Formation Path Planning Method and Obstacle Avoidance Strategy for Deep-Sea Mining Vehicles Based on Improved RRT*
by Jiancheng Liu, Yujia Wang, Hao Li, Pengjie Huang, Bingchen Liang, Haotian Wu and Shimin Yu
J. Mar. Sci. Eng. 2026, 14(2), 138; https://doi.org/10.3390/jmse14020138 - 9 Jan 2026
Abstract
To enhance the autonomous operation capability of deep-sea mining vehicle formations, this study addresses the issues of slow convergence in formation path planning and insufficient obstacle avoidance flexibility in complex environments by investigating a global path planning and local obstacle avoidance strategy based on an improved RRT* algorithm. Through dynamic elliptical sampling, adaptive goal-biased sampling, safe distance detection, and path smoothing optimization, the efficiency and passability of path planning are improved. For the obstacle avoidance of formation members, a priority determination model incorporating local obstacle avoidance, formation contraction, and transformation is designed, and methods such as Gaussian distribution fan-shaped sampling and trajectory backtracking are proposed to optimize the local planning effect. Simulation results show that this method can effectively improve the path planning quality and obstacle avoidance performance of mining vehicle formations in complex environments. Specifically, in a longitudinal formation, the maximum inter-vehicle error is approximately 15.1% and the average error is controlled within 3.5%; in a triangular formation, the maximum inter-vehicle error is approximately 20% and the average error is controlled within 4.2%, indicating promising application prospects. Full article
(This article belongs to the Section Ocean Engineering)
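The Gaussian-distribution fan-shaped sampling mentioned above can be sketched as follows; the parameter names and values are illustrative, not taken from the paper:

```python
import numpy as np

def gaussian_fan_samples(pos, heading, n=200, half_angle=np.pi / 4,
                         r_max=5.0, angle_std=np.pi / 8, seed=0):
    """Candidate points in a fan ahead of the vehicle: bearings Gaussian
    around the heading (clipped to the fan), ranges uniform over the area."""
    rng = np.random.default_rng(seed)
    theta = np.clip(rng.normal(heading, angle_std, n),
                    heading - half_angle, heading + half_angle)
    r = r_max * np.sqrt(rng.random(n))                  # sqrt => uniform over the sector
    return np.asarray(pos) + np.column_stack([r * np.cos(theta), r * np.sin(theta)])

pts = gaussian_fan_samples(pos=(0.0, 0.0), heading=0.0)
```

Concentrating the bearing distribution around the current heading biases candidates toward the direction of travel while the fan limits keep them inside the admissible avoidance sector.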

20 pages, 11036 KB  
Article
GMF-Net: A Gaussian-Matched Fusion Network for Weak Small Object Detection in Satellite Laser Ranging Imagery
by Wei Zhu, Weiming Gong, Yong Wang, Yi Zhang and Jinlong Hu
Sensors 2026, 26(2), 407; https://doi.org/10.3390/s26020407 - 8 Jan 2026
Abstract
Detecting small objects in Satellite Laser Ranging (SLR) CCD images is critical yet challenging due to low signal-to-noise ratios and complex backgrounds. Existing frameworks often suffer from high computational costs and insufficient feature extraction capabilities for such tiny targets. To address these issues, we propose the Gaussian-Matched Fusion Network (GMF-Net), a lightweight and high-precision detector tailored for SLR scenarios. The core scientific innovation lies in the Gaussian-Matched Convolution (GMConv) module. Unlike standard convolutions, GMConv is theoretically grounded in the physical Gaussian energy distribution of SLR targets. It employs multi-directional heterogeneous sampling to precisely match target energy decay, enhancing central feature response while suppressing background noise. Additionally, we incorporate a Cross-Stage Partial Pyramidal Convolution (CSPPC) to reduce parameter redundancy and a Cross-Feature Attention (CFA) module to bridge multi-scale features. To validate the method, we constructed the first dedicated SLR-CCD dataset. Experimental results show that GMF-Net achieves an mAP@50 of 93.1% and mAP@50–95 of 52.4%. Compared to baseline models, parameters are reduced by 26.6% (to 2.2 M) with a 27.4% reduction in computational load, demonstrating a superior balance between accuracy and efficiency for automated SLR systems. Full article
(This article belongs to the Section Remote Sensors)
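GMConv itself is not specified in the abstract, but the physical idea it builds on — matching a 2-D Gaussian energy profile to boost central response while suppressing background noise — can be sketched as plain matched filtering; the image, spot location, and sizes below are invented:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def matched_response(img, kernel):
    """Valid-mode cross-correlation of the image with the Gaussian template."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
img = 0.3 * rng.random((32, 32))                        # background clutter
k = gaussian_kernel()
img[17:24, 9:16] += 5.0 * k                             # weak Gaussian spot centred at (20, 12)
resp = matched_response(img, k)
peak = np.unravel_index(np.argmax(resp), resp.shape)    # top-left corner of best window
```

A template shaped like the target's energy distribution maximizes response exactly where the target sits, which is the matching principle GMConv generalizes with learned, multi-directional sampling.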

26 pages, 4009 KB  
Article
A Hybrid Simulation–Physical Data-Driven Framework for Occupant Injury Prediction in Vehicle Underbody Structures
by Xinge Si, Changan Di, Peng Peng, Yongjian Zhang, Tao Lin and Cong Xu
Sensors 2026, 26(2), 380; https://doi.org/10.3390/s26020380 - 7 Jan 2026
Abstract
One major challenge in optimizing vehicle underbody structures for blast protection is the trade-off between the high cost of physical tests and the limited accuracy of simulations. We introduce a predictive framework that is co-driven by limited physical measurements and systematically augmented simulation datasets. The main problem arises from the complex composition of blast impact signals, which makes it difficult to augment the load signals for finite element simulations when only extremely small sample sets are available. Specifically, a small-scale data-augmentation model in the wavelet domain, based on a conditional generative adversarial network (CGAN), was designed. Real-time perturbations, governed by cumulative distribution functions, were introduced to expand and diversify the data representations for enhanced dataset enrichment. A predictive model based on Gaussian process regression (GPR), which integrates physical experimental data with the wavelet characteristics of the augmented data, is employed to estimate injury indices, using wavelet scale energies reduced via principal component analysis (PCA) as inputs. Cross-validation shows that this hybrid model achieves higher accuracy than using simulations alone. Through a case study, the model demonstrates that increased hull angle and depth can effectively reduce occupant injury. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
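The PCA-to-GPR pipeline can be sketched in pure NumPy; this is a toy stand-in in which random features replace the wavelet scale energies, and the response, sizes, and hyperparameters are invented:

```python
import numpy as np

def pca_scores(X, k):
    """Project rows of X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def gpr_fit_predict(X, y, Xq, length=1.0, noise=1e-4):
    """GP regression with an RBF kernel; returns the posterior mean at Xq."""
    def rbf(A, B):
        d2 = ((A[:, None] - B[None, :]) ** 2).sum(axis=-1)
        return np.exp(-0.5 * d2 / length ** 2)
    alpha = np.linalg.solve(rbf(X, X) + noise * np.eye(len(X)), y)
    return rbf(Xq, X) @ alpha

rng = np.random.default_rng(0)
F = rng.random((40, 12))                    # stand-in for wavelet scale energies
Z = pca_scores(F, 4)                        # PCA-reduced inputs
y = np.sin(3 * Z[:, 0]) + 0.5 * Z[:, 1]     # stand-in injury index
pred = gpr_fit_predict(Z, y, Z)
```

Reducing the inputs first keeps the GP's kernel matrix well-behaved when the raw feature dimension is large relative to the number of tests, which matters in exactly the small-sample regime the paper targets.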

15 pages, 18761 KB  
Article
GAOC: A Gaussian Adaptive Ochiai Loss for Bounding Box Regression
by Binbin Han, Qiang Tang, Jiuxu Song, Zheng Wang and Yi Yang
Sensors 2026, 26(2), 368; https://doi.org/10.3390/s26020368 - 6 Jan 2026
Abstract
Bounding box regression (BBR) loss plays a critical role in object detection within computer vision. Existing BBR loss functions are typically based on the Intersection over Union (IoU) between predicted and ground truth boxes. However, these methods neither account for the effect of predicted box scale on regression nor effectively address the drift problem inherent in BBR. To overcome these limitations, this paper introduces a novel BBR loss function, termed Gaussian Adaptive Ochiai BBR loss (GAOC), which combines the Ochiai Coefficient (OC) with a Gaussian Adaptive (GA) distribution. The OC component normalizes by the square root of the product of bounding box dimensions, ensuring scale invariance. Meanwhile, the GA distribution models the distances between the top-left and bottom-right (TL/BR) corner coordinates of the predicted and ground truth boxes, enabling a similarity measure that reduces sensitivity to positional deviations. This design enhances detection robustness and accuracy. GAOC was integrated into YOLOv5 and RT-DETR and evaluated on the PASCAL VOC and MS COCO 2017 benchmarks. Experimental results demonstrate that GAOC consistently outperforms existing BBR loss functions, offering a more effective solution. Full article
(This article belongs to the Special Issue Advanced Deep Learning Techniques for Intelligent Sensor Systems)
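The abstract does not give the full GAOC formula, but the Ochiai coefficient applied to two boxes — intersection area normalized by the square root of the product of the areas — is simple to sketch:

```python
def ochiai_box(a, b):
    """Ochiai coefficient of two axis-aligned boxes (x1, y1, x2, y2):
    intersection area / sqrt(area_a * area_b)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a * area_b) ** 0.5

oc = ochiai_box((0, 0, 2, 2), (1, 1, 3, 3))   # unit overlap between two 2x2 boxes
```

Identical boxes score 1 and disjoint boxes 0, and the geometric-mean normalization is what gives the scale-related behaviour the abstract attributes to the OC term.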

46 pages, 5566 KB  
Article
Classifying with the Fine Structure of Distributions: Leveraging Distributional Information for Robust and Plausible Naïve Bayes
by Quirin Stier, Jörg Hoffmann and Michael C. Thrun
Mach. Learn. Knowl. Extr. 2026, 8(1), 13; https://doi.org/10.3390/make8010013 - 5 Jan 2026
Abstract
In machine learning, the Bayes classifier represents the theoretical optimum for minimizing classification errors. Since estimating high-dimensional probability densities is impractical, simplified approximations such as naïve Bayes and k-nearest neighbor are widely used as baseline classifiers. Despite their simplicity, these methods require design choices—such as the distance measures in kNN, or the feature independence in naïve Bayes. In particular, naïve Bayes relies on implicit assumptions by using Gaussian mixtures or univariate kernel density estimators. Such design choices, however, often fail to capture heterogeneous distributional structures across features. We propose a flexible naïve Bayes classifier that leverages Pareto Density Estimation (PDE), a parameter-free, non-parametric approach shown to outperform standard kernel methods in exploratory statistics. PDE avoids prior distributional assumptions and supports interpretability through visualization of class-conditional likelihoods. In addition, we address a recently described pitfall of Bayes’ theorem: the misclassification of observations with low evidence. Building on the concept of plausible Bayes, we introduce a safeguard to handle uncertain cases more reliably. While not aiming to surpass state-of-the-art classifiers, our results show that PDE-flexible naïve Bayes with uncertainty handling provides a robust, scalable, and interpretable baseline that can be applied across diverse data scenarios. Full article
(This article belongs to the Section Learning)
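The scaffold of a naive Bayes classifier with a pluggable per-feature density estimator can be sketched as follows; a Gaussian KDE stands in where the paper would use Pareto Density Estimation, and the two-class data are synthetic:

```python
import numpy as np

def gauss_kde(train, x):
    """1-D Gaussian KDE with Silverman bandwidth; the paper would plug in
    Pareto Density Estimation here instead."""
    bw = 1.06 * train.std() * len(train) ** -0.2 + 1e-12
    z = (x[:, None] - train[None, :]) / bw
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (bw * np.sqrt(2 * np.pi))

def nb_predict(X_train, y_train, X_test, density=gauss_kde):
    """Naive Bayes: sum per-feature log likelihoods plus the class log prior."""
    classes = np.unique(y_train)
    scores = np.zeros((len(X_test), len(classes)))
    for ci, c in enumerate(classes):
        Xc = X_train[y_train == c]
        scores[:, ci] = np.log(len(Xc) / len(X_train))      # log prior
        for f in range(X_train.shape[1]):
            scores[:, ci] += np.log(density(Xc[:, f], X_test[:, f]) + 1e-300)
    return classes[np.argmax(scores, axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
pred = nb_predict(X, y, X)
```

Because the density estimator enters only through the `density` argument, swapping the Gaussian KDE for a parameter-free estimator such as PDE leaves the Bayes machinery untouched, which is the flexibility the paper exploits.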

12 pages, 440 KB  
Article
Symmetrized Extrinsic Information Transfer Chart Analysis for Joint Decoding Between LDPC Codes and CCDMs
by Gang Yang, Fei Yang and Yanan Luo
Electronics 2026, 15(1), 228; https://doi.org/10.3390/electronics15010228 - 4 Jan 2026
Abstract
This paper proposes a symmetrized extrinsic information transfer (S-EXIT) chart analysis for probabilistic shaped (PS) systems to optimize the joint decoding of low-density parity-check (LDPC) codes and constant composition distribution matchers (CCDMs). A major challenge in analyzing PS systems is the non-uniform channel input caused by shaping, which invalidates the all-zero assumption of traditional EXIT charts, coupled with the three-node structure of the joint decoder (variable nodes, check nodes, and shaping nodes) that exceeds the two-decoder framework of conventional EXIT analysis. To resolve these issues, we first prove the symmetry of the joint decoder and introduce a “symmetrized density” transformation to render the channel output symmetric, thereby enabling the extension of EXIT chart analysis to PS systems. We then approximate the EXIT function of the shaping node decoder via polynomial fitting and integrate it with the variable node decoder into a unified model (VSND) for threshold analysis. On one hand, the proposed S-EXIT chart provides a theoretical threshold for the joint decoder, which is crucial for guiding system design. On the other hand, it enables the joint optimization of LDPC code rates and CCDM rates, unlocking additional performance gains. Simulations over additive white Gaussian noise (AWGN) channels demonstrate that short-blocklength CCDMs (e.g., blocklength 20) achieve up to 1.2 dB gain over uniform systems via S-EXIT-based rate optimization. This work addresses the performance limitations of short-blocklength CCDMs in high-speed optical transmissions, offering a practical and efficient analytical tool for PS system design. Full article
(This article belongs to the Special Issue Advances in Optical Communications and Optical Networks)

22 pages, 46825 KB  
Article
Delineating the Distribution Outline of Populus euphratica in the Mainstream Area of the Tarim River Using Multi-Source Thematic Classification Data
by Hao Li, Jiawei Zou, Qinyu Zhao, Jiacong Hu, Suhong Liu, Qingdong Shi and Weiming Cheng
Remote Sens. 2026, 18(1), 157; https://doi.org/10.3390/rs18010157 - 3 Jan 2026
Abstract
Populus euphratica is a key constructive species in desert ecosystems and plays a vital role in maintaining their stability. However, effective automated methods for accurately delineating its distribution outlines are currently lacking. This study used the mainstream area of the Tarim River as a case study and proposed a technical solution for identifying the distribution outline of Populus euphratica using multi-source thematic classification data. First, cropland thematic data were used to optimize the accuracy of the Populus euphratica classification raster data. Discrete points were removed based on density to reduce their impact on boundary identification. Then, a hierarchical identification scheme was constructed using the alpha-shape algorithm to identify the boundaries of high- and low-density Populus euphratica distribution areas separately. Finally, the outlines of the Populus euphratica distribution polygons were smoothed, and the final distribution outline data were obtained after spatial merging. The results showed the following: (1) Applying a closing operation to the cropland thematic classification data to obtain the distribution range of shelterbelts effectively eliminated misclassified pixels. Using the kd-tree algorithm to remove sparse discrete points based on density, with a removal ratio of 5%, helped suppress the interference of outlier point sets on the Populus euphratica outline identification. (2) Constructing a hierarchical identification scheme based on differences in Populus euphratica density is critical for accurately delineating its distribution contours. Using the alpha-shape algorithm with parameters set to α = 0.02 and α = 0.006, the reconstructed geometries effectively covered both densely and sparsely distributed Populus euphratica areas. (3) In the morphological processing stage, a combination of three methods—Gaussian filtering, equidistant expansion, and gap filling—effectively ensured the accuracy of the Populus euphratica outline. Among the various smoothing algorithms, Gaussian filtering yielded the best results. The equidistant expansion method reduced the impact of elongated cavities, thereby contributing to boundary accuracy. This study enhances the automation of Populus euphratica vector data mapping and holds significant value for the scientific management and research of desert vegetation. Full article
(This article belongs to the Special Issue Vegetation Mapping through Multiscale Remote Sensing)
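Gaussian filtering of a closed outline needs circular (wrap-around) handling so the smoothed contour has no seam; a minimal sketch on a synthetic noisy circle (the σ and point counts are illustrative, not the paper's settings):

```python
import numpy as np

def smooth_closed_outline(pts, sigma=2.0):
    """Zero-phase Gaussian filtering of a closed polygon's vertex list,
    with circular wrap-around so the contour has no seam."""
    n = len(pts)
    k = np.arange(n)
    d = np.minimum(k, n - k)                            # circular vertex distance
    w = np.exp(-0.5 * (d / sigma) ** 2)
    w /= w.sum()
    # circular convolution of each coordinate with the Gaussian weights
    return np.real(np.fft.ifft(np.fft.fft(pts, axis=0) * np.fft.fft(w)[:, None], axis=0))

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
noisy = np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(0, 0.05, (200, 2))
smooth = smooth_closed_outline(noisy)                   # jagged circle, smoothed
```

Smoothing shortens the jagged perimeter while leaving the overall shape (here, a unit circle) essentially unchanged, which is the behaviour the comparison of smoothing algorithms above rewards.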

17 pages, 3476 KB  
Article
Integer-Valued Time Series Model via Copula-Based Bivariate Skellam Distribution
by Mohammed Alqawba, Norou Diawara and Mame Mor Sene
J. Risk Financial Manag. 2026, 19(1), 27; https://doi.org/10.3390/jrfm19010027 - 2 Jan 2026
Abstract
Time series analysis is crucial for modeling and forecasting diverse real-world phenomena. Traditional models typically assume continuous-valued data; however, many applications involve integer-valued series, often including negative integers. This paper introduces an approach that combines copula theory with the bivariate Skellam distribution to handle such integer-valued data effectively. Copulas are widely recognized for capturing complex dependencies among variables. By integrating copulas, our proposed method respects integer constraints while modeling positive, negative, and temporal dependencies accurately. Through simulation and an empirical study on a real-life example, we demonstrate that our class of models performs well. This approach has broad applicability in areas such as finance, epidemiology, and environmental science, where modeling series with integer values, both positive and negative, is essential. Full article
(This article belongs to the Special Issue Mathematical Modelling in Economics and Finance)
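The Skellam distribution's suitability for integer series with negative values follows from its definition as the difference of two independent Poisson variates, which also gives a direct sampler:

```python
import numpy as np

def skellam_sample(mu1, mu2, size, seed=0):
    """Skellam variates as the difference of two independent Poissons;
    they take negative as well as positive integer values."""
    rng = np.random.default_rng(seed)
    return rng.poisson(mu1, size) - rng.poisson(mu2, size)

x = skellam_sample(3.0, 2.0, 100_000)
# theory: mean = mu1 - mu2 = 1, variance = mu1 + mu2 = 5
```

The copula layer in the paper then ties successive Skellam margins together to capture the temporal dependence that this independent sampler deliberately omits.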
