Search Results (132)

Search Parameters:
Keywords = entropy-information criterion

21 pages, 3698 KiB  
Article
Research on Bearing Fault Diagnosis Method Based on MESO-TCN
by Ruibin Gao, Jing Zhu, Yifan Wu, Kaiwen Xiao and Yang Shen
Machines 2025, 13(7), 558; https://doi.org/10.3390/machines13070558 - 27 Jun 2025
Viewed by 250
Abstract
To address the issues of information redundancy, limited feature representation, and empirically set parameters in rolling bearing fault diagnosis, this paper proposes a Multi-Entropy Screening and Optimization Temporal Convolutional Network (MESO-TCN). The method integrates feature filtering, network modeling, and parameter optimization into a unified diagnostic framework. Specifically, ensemble empirical mode decomposition (EEMD) is combined with a hybrid entropy criterion to preprocess the raw vibration signals and suppress redundant noise. A kernel-extended temporal convolutional network (ETCN) is designed with multi-scale dilated convolution to extract diverse temporal fault patterns. Furthermore, an improved whale optimization algorithm incorporating a firefly-inspired mechanism is introduced to adaptively optimize key hyperparameters. Experimental results on datasets from Xi’an Jiaotong University and Southeast University demonstrate that MESO-TCN achieves average accuracies of 99.78% and 95.82%, respectively, outperforming mainstream baseline methods. These findings indicate the method’s strong generalization ability, feature discriminability, and engineering applicability in intelligent fault diagnosis of rotating machinery. Full article
(This article belongs to the Section Machines Testing and Maintenance)
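The entropy-based screening of EEMD modes described above can be pictured with a short sketch. This is a minimal Python illustration under assumed choices, not the paper's implementation: the hybrid entropy criterion is stood in for by permutation entropy plus a correlation check, and `h_max`, `r_min`, and the `imfs` input (e.g., from a PyEMD-style EEMD) are all hypothetical.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of a 1-D signal (0 = regular, 1 = noise-like)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    patterns = np.array([np.argsort(x[i:i + m * tau:tau]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)) / np.log(factorial(m)))

def screen_imfs(imfs, signal, h_max=0.85, r_min=0.2):
    """Keep IMFs that are not noise-dominated (entropy below h_max) yet still
    correlated with the raw signal (|r| above r_min); thresholds are illustrative."""
    kept = [imf for imf in imfs
            if permutation_entropy(imf) < h_max
            and abs(np.corrcoef(imf, signal)[0, 1]) > r_min]
    return np.sum(kept, axis=0) if kept else signal
```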

25 pages, 820 KiB  
Article
Method for Analyzing the Importance of Quality and Safety Influencing Factors in Automotive Body Manufacturing Process—A Comprehensive Weight Evaluation Method to Reduce Subjective Influence
by Ying Xiang, Long Guo, Shaoqian Ji, Shengchao Zhu, Zhiming Guo and Hu Qiao
Mathematics 2025, 13(12), 1944; https://doi.org/10.3390/math13121944 - 11 Jun 2025
Viewed by 548
Abstract
The automotive industry is a key pillar of many national economies, and automotive body manufacturing is among the most complex production processes. In the automotive body manufacturing process, quality control and safety assurance are of paramount importance, directly influencing the overall safety performance, structural reliability, and comfort of vehicles. Therefore, it is crucial to analyze the primary factors that influence quality and safety during the car body manufacturing process. The study first focuses on four key processes of car body manufacturing—stamping, welding, painting, and assembly—using the man, machine, material, method, environment (4M1E) framework to analyze the factors affecting quality and safety. Subsequently, a quality and safety early-warning indicator system is established for the automotive body manufacturing process, followed by a comprehensive analysis of the constructed system. To address the issue of subjectivity in traditional technique for order of preference by similarity to an ideal solution (TOPSIS) evaluation methods, this paper employs the coefficient of variation method for objective analysis of criterion-level indicators, the trapezoidal fuzzy number method for subjective analysis of criterion-level indicators, and establishes a model for optimizing target weight that balances subjective and objective approaches. Furthermore, a relative entropy-based method is applied to comprehensively evaluate criterion-level indicators. This approach reduces the information loss associated with separate weighting schemes and overcomes a known limitation of traditional TOPSIS—its inability to distinguish alternatives that lie equidistant from ideal solutions. Finally, an evaluation model for quality and safety influencing factors in body manufacturing is developed and validated through a case study, demonstrating its feasibility. The results show that the proposed model can effectively identify the key quality and safety influencing factors in the automobile body manufacturing process, guarantee quality control and safety assurance in the body manufacturing process, and thus ensure that the automobile production process meets the quality and safety requirements. Full article
(This article belongs to the Special Issue Mathematical Techniques and New ITs for Smart Manufacturing Systems)
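A small illustration of the coefficient-of-variation step used above for the objective weighting of criterion-level indicators; the function is a generic sketch and the matrix values are invented, not the study's data.

```python
import numpy as np

def cv_weights(X):
    """Objective criterion weights via the coefficient-of-variation method.
    X: (n_alternatives, n_criteria) matrix of positive indicator values."""
    X = np.asarray(X, dtype=float)
    cv = X.std(axis=0, ddof=1) / X.mean(axis=0)   # dispersion relative to the mean
    return cv / cv.sum()                          # normalize to sum to 1

scores = np.array([[0.82, 0.40, 7.1],
                   [0.75, 0.55, 6.4],
                   [0.90, 0.35, 6.9]])
print(cv_weights(scores))
```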

40 pages, 794 KiB  
Article
An Automated Decision Support System for Portfolio Allocation Based on Mutual Information and Financial Criteria
by Massimiliano Kaucic, Renato Pelessoni and Filippo Piccotto
Entropy 2025, 27(5), 480; https://doi.org/10.3390/e27050480 - 29 Apr 2025
Viewed by 593
Abstract
This paper introduces a two-phase decision support system based on information theory and financial practices to assist investors in solving cardinality-constrained portfolio optimization problems. Firstly, the approach employs a stock-picking procedure based on an interactive multi-criteria decision-making method (the so-called TODIM method). More precisely, the best-performing assets from the investable universe are identified using three financial criteria. The first criterion is based on mutual information, and it is employed to capture the microstructure of the stock market. The second one is the momentum, and the third is the upside-to-downside beta ratio. To calculate the preference weights used in the chosen multi-criteria decision-making procedure, two methods are compared, namely equal and entropy weighting. In the second stage, this work considers a portfolio optimization model where the objective function is a modified version of the Sharpe ratio, consistent with the choices of a rational agent even when faced with negative risk premiums. Additionally, the portfolio design incorporates a set of bound, budget, and cardinality constraints, together with a set of risk budgeting restrictions. To solve the resulting non-smooth programming problem with non-convex constraints, this paper proposes a variant of the distance-based parameter adaptation for success-history-based differential evolution with double crossover (DISH-XX) algorithm equipped with a hybrid constraint-handling approach. Numerical experiments on the US and European stock markets over the past ten years are conducted, and the results show that the flexibility of the proposed portfolio model allows the better control of losses, particularly during market downturns, thereby providing superior or at least comparable ex post performance with respect to several benchmark investment strategies. Full article
(This article belongs to the Special Issue Entropy, Econophysics, and Complexity)
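The mutual-information criterion used above for stock picking can be illustrated with a simple plug-in estimator. This is a generic histogram-based sketch on synthetic returns, not the paper's estimator; the bin count and data are assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in mutual information (nats) between two return series via a joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
r_a = rng.normal(size=1000)
r_b = 0.6 * r_a + 0.8 * rng.normal(size=1000)   # a correlated asset
print(mutual_information(r_a, r_b))
```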

24 pages, 1735 KiB  
Article
Interpretable Evaluation of Sparse Time–Frequency Distributions: 2D Metric Based on Instantaneous Frequency and Group Delay Analysis
by Vedran Jurdana
Mathematics 2025, 13(6), 898; https://doi.org/10.3390/math13060898 - 7 Mar 2025
Viewed by 571
Abstract
Compressive sensing in the ambiguity domain offers an efficient method for reconstructing high-quality time–frequency distributions (TFDs) across diverse signals. However, evaluating the quality of these reconstructions presents a significant challenge due to the potential loss of auto-terms when a regularization parameter is inappropriate. Traditional global metrics have inherent limitations, while the state-of-the-art local Rényi entropy (LRE) metric provides a single-value assessment but lacks interpretability and positional information of auto-terms. This paper introduces a novel performance criterion that leverages instantaneous frequency and group delay estimations directly in the 2D time–frequency plane, offering a more nuanced evaluation by individually assessing the preservation of auto-terms, resolution quality, and interference suppression in TFDs. Experimental results on noisy synthetic and real-world gravitational signals demonstrate the effectiveness of this measure in assessing reconstructed TFDs, with a focus on auto-term preservation. The proposed metric offers advantages in interpretability and memory efficiency, while its application to meta-heuristic optimization yields high-performing reconstructed TFDs significantly quicker than the existing LRE-based metric. These benefits highlight its usability in advanced methods and machine-related applications. Full article
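For context on the entropy-based baseline discussed above, a global Rényi entropy of a normalized TFD can be computed as below; this is a textbook-style sketch (alpha = 3 assumed), not the proposed 2D metric or the LRE implementation.

```python
import numpy as np

def renyi_entropy_tfd(tfd, alpha=3):
    """Global Rényi entropy of a time-frequency distribution (lower ~ better concentration)."""
    P = np.abs(tfd)          # take magnitude in case the TFD has negative cross-terms
    P = P / P.sum()          # normalize to a unit-sum "distribution"
    return float(np.log2(np.sum(P ** alpha)) / (1.0 - alpha))
```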

13 pages, 862 KiB  
Article
An Entropy-Based Approach to Model Selection with Application to Single-Cell Time-Stamped Snapshot Data
by William C. L. Stewart, Ciriyam Jayaprakash and Jayajit Das
Entropy 2025, 27(3), 274; https://doi.org/10.3390/e27030274 - 6 Mar 2025
Viewed by 791
Abstract
Recent single-cell experiments that measure copy numbers of over 40 proteins in thousands of individual cells at different time points [time-stamped snapshot (TSS) data] exhibit cell-to-cell variability. Because the same cells cannot be tracked over time, TSS data provide key information about the statistical time-evolution of protein abundances in single cells, information that could yield insights into the mechanisms influencing the biochemical signaling kinetics of a cell. However, when multiple candidate models (i.e., mechanistic models applied to initial protein abundances) can potentially explain the same TSS data, selecting the best model (i.e., model selection) is often challenging. For example, popular approaches like Kullback–Leibler divergence and Akaike’s Information Criterion are often difficult to implement largely because mathematical expressions for the likelihoods of candidate models are typically not available. To perform model selection, we introduce an entropy-based approach that uses split-sample techniques to exploit the availability of large data sets and uses (1) existing generalized method of moments (GMM) software to estimate model parameters, and (2) standard kernel density estimators and a Gaussian copula to estimate candidate models. Using simulated data, we show that our approach can select the “ground truth” from a set of competing mechanistic models. Then, to assess the relative support for a candidate model, we compute model selection probabilities using a bootstrap procedure. Full article
(This article belongs to the Section Entropy and Biology)
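The split-sample idea can be pictured with a kernel-density cross-entropy score: simulate each candidate model, fit a KDE, and evaluate held-out data under it. This sketch omits the GMM parameter-estimation and Gaussian-copula steps described above and uses synthetic Gaussian data purely as a stand-in.

```python
import numpy as np
from scipy.stats import gaussian_kde

def cross_entropy_score(model_samples, held_out_data):
    """Estimate E_data[-log q_model] with a Gaussian KDE fitted to model simulations.
    Lower scores mean the candidate model explains the held-out snapshot better."""
    kde = gaussian_kde(model_samples.T)        # gaussian_kde expects shape (n_dims, n_samples)
    return float(-np.mean(kde.logpdf(held_out_data.T)))

# Toy example: two candidate "models" compared on the same held-out snapshot
rng = np.random.default_rng(1)
data = rng.normal(loc=1.0, size=(500, 2))
model_a = rng.normal(loc=1.0, size=(2000, 2))   # close to the data-generating process
model_b = rng.normal(loc=3.0, size=(2000, 2))   # misspecified candidate
print(cross_entropy_score(model_a, data), cross_entropy_score(model_b, data))
```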

20 pages, 4409 KiB  
Article
A Method for Reducing White Noise in Partial Discharge Signals of Underground Power Cables
by Jifang Li and Qilong Zhang
Electronics 2025, 14(4), 780; https://doi.org/10.3390/electronics14040780 - 17 Feb 2025
Cited by 2 | Viewed by 718
Abstract
Online partial discharge (PD) detection for power cables is one reliable means of monitoring their health. However, strong interference from white noise poses a major challenge when collecting partial discharge signals. To address the problems that sample-entropy-based wavelet threshold estimation can fall into a local optimum and that wavelet denoising struggles to preserve detailed information, we propose a partial discharge signal noise reduction method that combines improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and the discrete wavelet transform (DWT) with multiscale sample entropy (MSE). Firstly, the ICEEMDAN method was used to decompose the original sequence into multiple intrinsic mode components. The intrinsic mode function (IMF) components were grouped using the mutual information method, and high-frequency noise was eliminated using the kurtosis criterion. Next, an MSE model was established to optimize the wavelet threshold, and wavelet noise reduction was applied to the effective components. The ICEEMDAN-MSE-DWT method retains effective information while achieving complete denoising, alleviating the information loss that occurs after denoising with the wavelet method alone. Lastly, simulation and experimental results show that the proposed method effectively reduces noise in power cable partial discharge signals. Full article
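A rough sketch of two ingredients mentioned above, kurtosis-based mode screening and wavelet denoising, is given below. It uses a generic universal threshold rather than the paper's MSE-optimized threshold, and the kurtosis cutoff and wavelet choice are assumptions.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def screen_by_kurtosis(imfs, k_min=3.0):
    """Drop smooth or noise-dominated IMFs; PD pulses are impulsive, so keep high-kurtosis modes."""
    return [imf for imf in imfs if kurtosis(imf, fisher=False) > k_min]

def denoise_component(x, wavelet="db4", level=4):
    """Soft-threshold DWT denoising of one signal component using the universal threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from the finest detail band
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]
```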

19 pages, 8294 KiB  
Article
Variable-Step-Size Generalized Maximum Correntropy Affine Projection Algorithm with Sparse Regularization Term
by Haorui Li, Ying Gao, Xinyu Guo and Shifeng Ou
Electronics 2025, 14(2), 291; https://doi.org/10.3390/electronics14020291 - 13 Jan 2025
Viewed by 698
Abstract
Adaptive filtering plays a pivotal role in modern electronic information and communication systems, particularly in dynamic and complex environments. While traditional adaptive algorithms work well in many scenarios, they do not fully exploit the sparsity of the system, which restricts their performance under varying noise conditions. To overcome these limitations, this paper proposes a variable-step-size generalized maximum correntropy affine projection algorithm (C-APGMC) with a sparse regularization term. The algorithm leverages the system’s sparsity by using the correntropy-induced metric (CIM), a smooth approximation of the l0 norm that assigns stronger zero-attraction to smaller coefficients at each iteration. Moreover, the algorithm employs a variable-step-size approach guided by the mean square deviation (MSD) criterion. This design seeks to optimize both convergence speed and steady-state performance, improving adaptability in dynamic environments. The simulation results demonstrate that the algorithm outperforms others in echo cancellation tasks, even in the presence of various noise disturbances. Full article
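The CIM-based zero attraction can be written down compactly. The sketch below shows only that penalty and its gradient with an assumed kernel width sigma; it is not the full C-APGMC update.

```python
import numpy as np

def cim_penalty(w, sigma=0.05):
    """Correntropy-induced metric (CIM^2) of a weight vector: a smooth surrogate for the l0 norm."""
    n = len(w)
    k0 = 1.0 / (np.sqrt(2 * np.pi) * sigma)
    return float(k0 / n * np.sum(1.0 - np.exp(-w**2 / (2 * sigma**2))))

def cim_zero_attractor(w, sigma=0.05):
    """Gradient of cim_penalty: pulls small coefficients toward zero, leaves large ones nearly untouched."""
    n = len(w)
    k0 = 1.0 / (np.sqrt(2 * np.pi) * sigma)
    return k0 / (n * sigma**2) * w * np.exp(-w**2 / (2 * sigma**2))
```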

27 pages, 24008 KiB  
Article
Adaptive Feature Extraction Using Sparrow Search Algorithm-Variational Mode Decomposition for Low-Speed Bearing Fault Diagnosis
by Bing Wang, Haihong Tang, Xiaojia Zu and Peng Chen
Sensors 2024, 24(21), 6801; https://doi.org/10.3390/s24216801 - 23 Oct 2024
Cited by 4 | Viewed by 1548
Abstract
To address the challenge of extracting effective fault features at low speeds, where fault information is weak and heavily influenced by environmental noise, a parameter-adaptive variational mode decomposition (VMD) method is proposed. This method aims to overcome the limitations of traditional VMD, which relies on manually set parameters. The sparrow search algorithm is used to calculate the fitness function based on mean envelope entropy, enabling the adaptive determination of the number of mode decompositions and the penalty factor in VMD. Afterward, the optimised parameters are used to enhance traditional VMD, enabling the decomposition of the raw signal to obtain intrinsic mode function components. The kurtosis criterion is then used to select relevant intrinsic mode functions for signal reconstruction. Finally, envelope analysis is applied to the reconstructed signal, and the results reveal the relationship between fault characteristic frequencies and their harmonics. The experimental results demonstrate that compared with other advanced methods, the proposed approach effectively reduces noise interference and extracts fault features for diagnosing low-speed bearing faults. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
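The mean-envelope-entropy fitness that the sparrow search minimizes can be sketched as below; the functions are illustrative, and the imfs would come from a VMD implementation run with the candidate number of modes and penalty factor.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_entropy(x):
    """Shannon entropy of the normalized Hilbert envelope; low values indicate clear impulsive content."""
    env = np.abs(hilbert(x))
    p = env / env.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

def fitness(imfs):
    """Fitness for the optimizer: mean envelope entropy over the VMD modes (to be minimized)."""
    return float(np.mean([envelope_entropy(imf) for imf in imfs]))
```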

24 pages, 1545 KiB  
Article
The Representative Points of Generalized Alpha Skew-t Distribution and Applications
by Yong-Feng Zhou, Yu-Xuan Lin, Kai-Tai Fang and Hong Yin
Entropy 2024, 26(11), 889; https://doi.org/10.3390/e26110889 - 22 Oct 2024
Cited by 1 | Viewed by 976
Abstract
Assuming the underlying statistical distribution of data is critical in information theory, as it impacts the accuracy and efficiency of communication and the definition of entropy. The real-world data are widely assumed to follow the normal distribution. To better comprehend the skewness of the data, many models more flexible than the normal distribution have been proposed, such as the generalized alpha skew-t (GAST) distribution. This paper studies some properties of the GAST distribution, including the calculation of the moments, and the relationship between the number of peaks and the GAST parameters with some proofs. For complex probability distributions, representative points (RPs) are useful due to the convenience of manipulation, computation and analysis. The relative entropy of two probability distributions could have been a good criterion for the purpose of generating RPs of a specific distribution but is not popularly used due to computational complexity. Hence, this paper only provides three ways to obtain RPs of the GAST distribution, Monte Carlo (MC), quasi-Monte Carlo (QMC), and mean square error (MSE). The three types of RPs are utilized in estimating moments and densities of the GAST distribution with known and unknown parameters. The MSE representative points perform the best among all case studies. For unknown parameter cases, a revised maximum likelihood estimation (MLE) method of parameter estimation is compared with the plain MLE method. It indicates that the revised MLE method is suitable for the GAST distribution having a unimodal or unobvious bimodal pattern. This paper includes two real-data applications in which the GAST model appears adaptable to various types of data. Full article
(This article belongs to the Special Issue Number Theoretic Methods in Statistics: Theory and Applications)
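MSE representative points are closely related to principal points, which can be approximated by running k-means on a large sample of the distribution. The sketch below uses a skew-normal sample as a stand-in for draws from a fitted GAST density; it is an approximation, not the paper's MSE procedure.

```python
import numpy as np
from scipy.stats import skewnorm
from sklearn.cluster import KMeans

def mse_representative_points(sample, k=8, seed=0):
    """Approximate MSE representative points of a 1-D distribution by k-means on a Monte Carlo sample."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(sample.reshape(-1, 1))
    order = np.argsort(km.cluster_centers_.ravel())
    pts = km.cluster_centers_.ravel()[order]                       # support points
    probs = np.bincount(km.labels_, minlength=k)[order] / len(sample)  # assigned probabilities
    return pts, probs

stand_in = skewnorm.rvs(a=4, size=20000, random_state=0)   # skewed stand-in for a GAST sample
print(mse_representative_points(stand_in, k=8))
```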

14 pages, 489 KiB  
Article
Research on Evaluation Method of Green Suppliers Under Pythagorean Fuzzy Environment
by Jianhua Wang and Nan An
Sustainability 2024, 16(20), 9124; https://doi.org/10.3390/su16209124 - 21 Oct 2024
Cited by 1 | Viewed by 1230
Abstract
The evaluation and selection of green suppliers, as an important part of creating a green supply chain, have received attention from enterprises and scholars. However, green supplier evaluation and selection is a complex multi-criteria decision-making problem, and the evaluation information provided by experts is often ambiguous, so it is difficult to obtain reasonable and accurate assessment results. Therefore, this paper proposes a green supplier evaluation model based on the Pythagorean fuzzy Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The model uses Pythagorean fuzzy sets to handle fuzzy expert opinions and the TOPSIS method to rank the alternative suppliers. In addition, the model calculates the criterion weights using the entropy weighting method in the fuzzy environment. Finally, the proposed model is used to help Company A select the optimal green supplier, and its effectiveness and superiority are verified through a comparative analysis with existing green supplier evaluation models. Full article
(This article belongs to the Section Sustainable Engineering and Science)
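The entropy weighting step has a simple crisp analogue, shown below for an ordinary decision matrix; the paper applies it to Pythagorean fuzzy evaluations, which this sketch does not model, and the scores are invented.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting of criteria for an (n_alternatives, n_criteria) matrix of positive scores."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                                     # column-wise proportions
    E = -np.sum(P * np.log(P + 1e-12), axis=0) / np.log(X.shape[0])  # normalized entropy per criterion
    d = 1.0 - E                                               # degree of divergence
    return d / d.sum()

decision = np.array([[7, 5, 9],
                     [6, 8, 7],
                     [9, 6, 6]], dtype=float)
print(entropy_weights(decision))
```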

36 pages, 11684 KiB  
Article
Investigating the Satisfaction of Residents in the Historic Center of Macau and the Characteristics of the Townscape: A Decision Tree Approach to Machine Learning
by Shuai Yang, Yile Chen, Yuhao Huang, Liang Zheng and Yue Huang
Buildings 2024, 14(9), 2925; https://doi.org/10.3390/buildings14092925 - 15 Sep 2024
Cited by 1 | Viewed by 1892
Abstract
The historic city of Macau is China’s 31st world heritage site, and its residents have actively contributed to preserving its heritage and will continue to reside there for the foreseeable future. Residents’ satisfaction with the current urban environment is closely related to the landscape characteristics of the towns surrounding the historic center of Macau. This study aims to analyze the relationship between landscape characteristics and residents’ satisfaction, determine the key factors affecting their satisfaction and how they are combined, and provide a scientific basis for urban planning. This study used a decision tree machine learning model to analyze 524 questionnaire survey responses that addressed five aspects of the historic town’s landscape: the architectural, Largo Square, street, mountain and sea, and commercial landscapes. The data-driven approach helped find the best decision path. The results indicate that (1) the layout of Largo Square, the commercial colors and materials, the location of the former humanities and religion center, and the commercial signage system are the primary factors influencing residents’ satisfaction. (2) Incorporating decision tree parameters with information entropy as the splitting criterion and a minimum sample split number of two (with no maximum depth) led to the best performance when investigating residents’ satisfaction with Macau’s historic town landscape characteristics. (3) A reasonable layout for Largo Square (satisfaction > 3.50), prominent and harmonious commercial colors and materials (satisfaction > 3.50), rich cultural and religious elements (satisfaction > 4.50), and an excellent commercial signage system (satisfaction > 4.00) can significantly improve residents’ satisfaction. This provides important empirical support and a reference for urban planning and landscape design in Macau and other historical and cultural cities. Full article
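The decision-tree configuration reported in point (2) maps directly onto a standard scikit-learn call, sketched below with stand-in data in place of the 524 questionnaire responses.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in data: in the study this would be the questionnaire responses
# (townscape feature ratings as X, satisfaction class as y).
X, y = make_classification(n_samples=524, n_features=20, n_informative=6, random_state=0)

clf = DecisionTreeClassifier(criterion="entropy",   # information entropy as the splitting criterion
                             min_samples_split=2,   # minimum sample split number of two
                             max_depth=None,        # no maximum depth
                             random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```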

22 pages, 14082 KiB  
Article
A Robust SAR-Optical Heterologous Image Registration Method Based on Region-Adaptive Keypoint Selection
by Keke Zhang, Anxi Yu, Wenhao Tong and Zhen Dong
Remote Sens. 2024, 16(17), 3289; https://doi.org/10.3390/rs16173289 - 4 Sep 2024
Cited by 2 | Viewed by 1651
Abstract
The differences in sensor imaging mechanisms, observation angles, and scattering characteristics of terrestrial objects significantly limit the registration performance of synthetic aperture radar (SAR) and optical heterologous images. Traditional methods particularly struggle in weak feature regions, such as harbors and islands with substantial water coverage, as well as in desolate areas like deserts. This paper introduces a robust heterologous image registration technique based on region-adaptive keypoint selection that integrates image texture features, targeting two pivotal aspects: feature point extraction and matching point screening. Initially, a dual threshold criterion based on block region information entropy and variance products effectively identifies weak feature regions. Subsequently, it constructs feature descriptors to generate similarity maps, combining histogram parameter skewness with non-maximum suppression (NMS) to enhance matching point accuracy. Extensive experiments have been conducted on conventional SAR-optical datasets and typical SAR-optical images with different weak feature regions to assess the method’s performance. The findings indicate that this method successfully removes outliers in weak feature regions and completes the registration task of SAR and optical images with weak feature regions. Full article
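The block-wise entropy and variance statistics behind the weak-feature-region detection can be sketched as follows; the block size and thresholds are illustrative, and the paper's actual dual-threshold rule on entropy and variance products may differ.

```python
import numpy as np

def block_stats(img, bs=64):
    """Per-block Shannon entropy (8-bit histogram) and variance for a grayscale uint8 image."""
    H, W = img.shape
    ent = np.zeros((H // bs, W // bs))
    var = np.zeros_like(ent)
    for i in range(H // bs):
        for j in range(W // bs):
            blk = img[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs]
            hist, _ = np.histogram(blk, bins=256, range=(0, 256))
            p = hist / hist.sum()
            nz = p > 0
            ent[i, j] = -np.sum(p[nz] * np.log2(p[nz]))
            var[i, j] = blk.var()
    return ent, var

def weak_feature_mask(img, t_ent=4.0, t_var=150.0):
    """Flag blocks whose entropy and variance are both low as weak feature regions (thresholds assumed)."""
    ent, var = block_stats(img)
    return (ent < t_ent) & (var < t_var)
```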

25 pages, 19977 KiB  
Article
Different Vegetation Covers Leading to the Uncertainty and Consistency of ET Estimation: A Case Study Assessment with Extended Triple Collocation
by Xiaoxiao Li, Huaiwei Sun, Yong Yang, Xunlai Sun, Ming Xiong, Shuo Ouyang, Haichen Li, Hui Qin and Wenxin Zhang
Remote Sens. 2024, 16(13), 2484; https://doi.org/10.3390/rs16132484 - 6 Jul 2024
Viewed by 1620
Abstract
Accurate and reliable estimation of actual evapotranspiration (AET) is essential for various hydrological studies, including drought prediction, water resource management, and the analysis of atmospheric–terrestrial carbon exchanges. Gridded AET products offer potential for application in ungauged areas, but their uncertainties may be significant, making it difficult to identify the best products for specific regions. While in situ data directly estimate gridded ET products, their applicability is limited in ungauged areas that require FLUXNET data. This paper employs an Extended Triple Collocation (ETC) method to estimate the uncertainty of Global Land Evaporation Amsterdam Model (GLEAM), Famine Early Warning Systems Network (FLDAS), and Maximum Entropy Production (MEP) AET product without requiring prior information. Subsequently, a merged ET product is generated by combining ET estimates from three original products. Furthermore, the study quantifies the uncertainty of each individual product across different vegetation covers and then compares three original products and the Merged ET with data from 645 in situ sites. The results indicate that GLEAM covers the largest area, accounting for 39.1% based on the correlation coefficient criterion and 39.9% based on the error variation criterion. Meanwhile, FLDAS and MEP exhibit similar performance characteristics. The merged ET derived from the ETC method demonstrates the ability to mitigate uncertainty in ET estimates in North American (NA) and European (EU) regions, as well as tundra, forest, grassland, and shrubland areas. This merged ET could be effectively utilized to reduce uncertainty in AET estimates from multiple products for ungauged areas. Full article
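The (extended) triple collocation error estimates rest on simple covariance algebra among the three collocated products, sketched below under the usual assumption of independent, zero-mean errors; this is the generic TC/ETC formula on synthetic series, not the study's full workflow.

```python
import numpy as np

def etc_error_variances(x, y, z):
    """Triple-collocation error variance estimates for three collocated AET series."""
    Q = np.cov(np.vstack([x, y, z]))
    var_x = Q[0, 0] - Q[0, 1] * Q[0, 2] / Q[1, 2]
    var_y = Q[1, 1] - Q[0, 1] * Q[1, 2] / Q[0, 2]
    var_z = Q[2, 2] - Q[0, 2] * Q[1, 2] / Q[0, 1]
    return var_x, var_y, var_z

# Synthetic stand-in for three ET products observing the same unknown truth
rng = np.random.default_rng(0)
truth = rng.normal(size=365)
p1, p2, p3 = (truth + rng.normal(scale=s, size=365) for s in (0.3, 0.5, 0.4))
print(etc_error_variances(p1, p2, p3))
```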

16 pages, 471 KiB  
Article
A Metric Based on the Efficient Determination Criterion
by Jesús E. García, Verónica A. González-López and Johsac I. Gomez Sanchez
Entropy 2024, 26(6), 526; https://doi.org/10.3390/e26060526 - 19 Jun 2024
Viewed by 1003
Abstract
This paper extends the concept of metrics based on the Bayesian information criterion (BIC) to achieve strongly consistent estimation of partition Markov models (PMMs). We introduce a set of metrics drawn from the family of model selection criteria known as efficient determination criteria (EDC). This generalization extends the range of options available in BIC for penalizing the number of model parameters. We formally specify the relationship that determines how EDC works when selecting a model based on a threshold associated with the metric. Furthermore, we improve the penalty options within EDC, identifying the penalty ln(ln(n)) as a viable choice that maintains the strongly consistent estimation of a PMM. To demonstrate the utility of these new metrics, we apply them to the modeling of three DNA sequences of dengue virus type 3, endemic in Brazil in 2023. Full article
(This article belongs to the Special Issue Bayesianism)
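An EDC-style score differs from BIC only in the penalty sequence; a minimal sketch comparing the ln(n) and ln(ln(n)) penalties is given below (the log-likelihood values are invented, and the paper's PMM-specific metric is not reproduced).

```python
import numpy as np

def edc_score(log_likelihood, n_params, n, penalty="loglog"):
    """EDC-style model selection score: -2 log L + (number of parameters) * c_n.
    penalty='log' gives the BIC choice c_n = ln(n); 'loglog' gives c_n = ln(ln(n))."""
    c_n = np.log(np.log(n)) if penalty == "loglog" else np.log(n)
    return -2.0 * log_likelihood + n_params * c_n

# The candidate with the smaller score is preferred (toy numbers):
print(edc_score(-1520.3, n_params=12, n=5000, penalty="loglog"),
      edc_score(-1498.7, n_params=40, n=5000, penalty="loglog"))
```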

25 pages, 18237 KiB  
Article
An Assessment of Urban Residential Environment Quality Based on Multi-Source Geospatial Data: A Case Study of Beijing, China
by Shijia Zhang, Yang Xia, Zijuan Li, Xue Li, Yufei Wu, Peiyi Liu and Shouhang Du
Land 2024, 13(6), 823; https://doi.org/10.3390/land13060823 - 7 Jun 2024
Cited by 2 | Viewed by 1949
Abstract
Assessing the urban residential environment quality (REQ) is essential for advancing sustainable urban development and enhancing urban residents’ living standards. Traditional REQ assessments rely on statistical data, prone to delays and lacking holistic insight. This study takes residential blocks as the analysis units and is conducted within the area of the Sixth Ring Road in Beijing. It synthesizes multi-source geospatial data to devise a comprehensive framework for assessing urban REQ, incorporating facets of environmental health and comfort, housing comfort, transportation convenience, city security, and life convenience. Utilizing the principle of minimal relative informational entropy, this study integrates the Analytic Hierarchy Process (AHP) with the entropy method to determine the weight of each evaluative criterion. Subsequently, a linear weighting technique is employed to ascertain the scores for each evaluative criterion, thus facilitating a detailed examination of the REQ. Finally, the research probes into the complex interrelation between the assessed REQ and the city’s Gross Domestic Product (GDP) and carbon emissions across varying scales. Findings reveal that (1) the overall REQ within Beijing’s Sixth Ring Road is superior at the center and diminishes towards the periphery. (2) The dispersion of environmental health and comfort and city security metrics is relatively uniform, showing minor variations; however, a marked disparity is observed in the distribution of housing comfort metrics. (3) Regions characterized by higher GDP tend to demonstrate relatively higher levels of the REQ. Conversely, areas boasting higher-quality urban REQ are more inclined to exhibit increased levels of carbon emissions. Full article
(This article belongs to the Special Issue A Livable City: Rational Land Use and Sustainable Urban Space)
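The minimal-relative-information-entropy combination of AHP and entropy weights has a closed form, a normalized geometric mean, sketched below with invented weight vectors.

```python
import numpy as np

def combined_weights(w_ahp, w_entropy):
    """Combine subjective (AHP) and objective (entropy) weights by minimizing the relative
    information entropy to both, which yields a normalized geometric mean of the two."""
    w = np.sqrt(np.asarray(w_ahp, dtype=float) * np.asarray(w_entropy, dtype=float))
    return w / w.sum()

print(combined_weights([0.40, 0.35, 0.25], [0.20, 0.50, 0.30]))
```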
