Search Results (90)

Search Parameters:
Keywords = Pareto distribution type I

14 pages, 2428 KiB  
Article
Machine Learning Models for Pancreatic Cancer Survival Prediction: A Multi-Model Analysis Across Stages and Treatments Using the Surveillance, Epidemiology, and End Results (SEER) Database
by Aditya Chakraborty and Mohan D. Pant
J. Clin. Med. 2025, 14(13), 4686; https://doi.org/10.3390/jcm14134686 - 2 Jul 2025
Viewed by 479
Abstract
Background: Pancreatic cancer is among the most lethal malignancies, with poor prognosis and limited survival despite treatment advances. Accurate survival modeling is critical for prognostication and clinical decision-making. This study had three primary aims: (1) to determine the best-fitting survival distribution among patients diagnosed with and deceased from pancreatic cancer across stages and treatment types; (2) to construct and compare predictive risk classification models; and (3) to evaluate survival probabilities using parametric, semi-parametric, non-parametric, machine learning, and deep learning methods for Stage IV patients receiving both chemotherapy and radiation. Methods: Using data from the SEER database, parametric (Generalized Extreme Value, Generalized Pareto, Log-Pearson 3), semi-parametric (Cox), and non-parametric (Kaplan–Meier) methods were compared with four machine learning models (gradient boosting, neural network, elastic net, and random forest). Survival probability heatmaps were constructed, and six classification models were developed for risk stratification. ROC curves, accuracy, and goodness-of-fit tests were used for model validation. Statistical tests included Kruskal–Wallis, pairwise Wilcoxon, and chi-square. Results: The Generalized Extreme Value (GEV) distribution was the best-fitting distribution in most scenarios. Stage-specific survival differences were statistically significant. The highest predictive accuracy (AUC: 0.947; accuracy: 56.8%) was observed in patients receiving both chemotherapy and radiation. The gradient boosting model predicted the most optimistic survival, while random forest showed a sharp decline after 15 months. Conclusions: This study emphasizes the importance of selecting appropriate analytical models for survival prediction and risk classification. Adopting these innovations, with the help of advanced machine learning and deep learning models, can enhance patient outcomes and advance precision medicine initiatives.
(This article belongs to the Section Epidemiology & Public Health)
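
The distribution-comparison step the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic `times` array stands in for SEER survival months, and log-Pearson 3 would be handled by fitting Pearson 3 to log-times.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
times = rng.weibull(1.2, 500) * 12.0           # placeholder survival months

candidates = {
    "genextreme": stats.genextreme,            # Generalized Extreme Value
    "genpareto": stats.genpareto,              # Generalized Pareto
    "pearson3": stats.pearson3,                # Pearson Type III
}

for name, dist in candidates.items():
    params = dist.fit(times)                   # maximum likelihood fit
    loglik = np.sum(dist.logpdf(times, *params))
    aic = 2 * len(params) - 2 * loglik         # penalize parameter count
    ks = stats.kstest(times, dist.cdf, args=params)
    print(f"{name:10s} AIC={aic:9.1f}  KS p={ks.pvalue:.3f}")
```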

26 pages, 2742 KiB  
Article
Power Dispatch Stability Technology Based on Multi-Energy Complementary Alliances
by Yiming Zhao, Chengjun Zhang, Changsheng Wan, Dong Du, Jing Huang and Weite Li
Mathematics 2025, 13(13), 2091; https://doi.org/10.3390/math13132091 - 25 Jun 2025
Viewed by 262
Abstract
In the context of growing global energy demand and increasingly severe environmental pollution, ensuring the stable dispatch of new energy sources and the effective management of power resources has become particularly important. This study focuses on the reliability and stability issues of new energy dispatch, considering the complementary advantages of multiple energy types. It aims to enhance dispatch stability and energy utilization through an innovative Distributed Overlapping Coalition Formation (DOCF) model. A distributed algorithm utilizing tabu search is proposed to solve the complex optimization problem in power resource allocation. The overlapping coalitions account for synergies between different types of resources and allocate them intelligently based on the heterogeneous demands of power loads and the supply capabilities of power stations. Simulation results demonstrate that DOCF can significantly improve power grid resource utilization efficiency and dispatch stability. Particularly in handling intermittent power resources such as solar and wind energy, the proposed model effectively reduces peak-shaving time and improves overall network energy efficiency. Compared with preference relations based on the selfish and Pareto orders, the BMBT-based PGG-TS algorithm improves average load utility by 10.2% and 25.3%, respectively. The methodology and findings of this study have important theoretical and practical value for guiding actual energy management practice and promoting the wider utilization of renewable energy.
(This article belongs to the Special Issue Artificial Intelligence and Game Theory)
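
As a rough illustration of the tabu-search component: the abstract gives no algorithmic detail, so the move encoding and the `neighbors` and `utility` callables below are placeholders, not the paper's PGG-TS.

```python
def tabu_search(initial, neighbors, utility, iters=200, tenure=10):
    best = current = initial
    tabu = []                                   # recently visited states
    for _ in range(iters):
        # best non-tabu neighbor; the aspiration rule lets a tabu move
        # through if it beats the global best
        cands = [s for s in neighbors(current)
                 if s not in tabu or utility(s) > utility(best)]
        if not cands:
            break
        current = max(cands, key=utility)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                         # expire the oldest entry
        if utility(current) > utility(best):
            best = current
    return best

# toy run: maximize -(x - 3)^2 over the integers with +/-1 moves
print(tabu_search(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2))
```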

23 pages, 1136 KiB  
Article
Objective Framework for Bayesian Inference in Multicomponent Pareto Stress–Strength Model Under an Adaptive Progressive Type-II Censoring Scheme
by Young Eun Jeon, Yongku Kim and Jung-In Seo
Mathematics 2025, 13(9), 1379; https://doi.org/10.3390/math13091379 - 23 Apr 2025
Viewed by 277
Abstract
This study introduces an objective Bayesian approach for estimating the reliability of a multicomponent stress–strength model based on the Pareto distribution under an adaptive progressive Type-II censoring scheme. The proposed method is developed within a Bayesian framework, utilizing a reference prior with partial information to improve the accuracy of point estimation and to ensure the construction of a credible interval for uncertainty assessment. This approach is particularly useful for addressing several limitations of a widely used likelihood-based approach in estimating the multicomponent stress–strength reliability under the Pareto distribution. For instance, in the likelihood-based method, the asymptotic variance–covariance matrix may not exist due to certain constraints. This limitation hinders the construction of an approximate confidence interval for assessing the uncertainty. Moreover, even when an approximate confidence interval is obtained, it may fail to achieve nominal coverage levels in small sample scenarios. Unlike the likelihood-based method, the proposed method provides an efficient estimator across various criteria and constructs a valid credible interval, even with small sample sizes. Extensive simulation studies confirm that the proposed method yields reliable and accurate inference across various censoring scenarios, and a real data application validates its practical utility. These results demonstrate that the proposed method is an effective alternative to the likelihood-based method for reliability inference in the multicomponent stress–strength model based on the Pareto distribution under an adaptive progressive Type-II censoring scheme.
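
For context, the multicomponent (s-out-of-k) stress–strength reliability estimated here is conventionally defined as follows, with k i.i.d. strengths $X_1,\dots,X_k \sim F_X$ and a common stress $Y \sim F_Y$:

```latex
R_{s,k} = \Pr\bigl(\text{at least } s \text{ of } X_1,\dots,X_k \text{ exceed } Y\bigr)
        = \sum_{p=s}^{k} \binom{k}{p} \int_{0}^{\infty}
          \bigl[1 - F_X(y)\bigr]^{p} \bigl[F_X(y)\bigr]^{k-p} \,\mathrm{d}F_Y(y).
```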

26 pages, 4000 KiB  
Article
Collaborative Optimization of Shore Power and Berth Allocation Based on Economic, Environmental, and Operational Efficiency
by Zhiqiang Zhang, Yuhua Zhu, Jian Zhu, Daozheng Huang, Chuanzhong Yin and Jinyang Li
J. Mar. Sci. Eng. 2025, 13(4), 776; https://doi.org/10.3390/jmse13040776 - 14 Apr 2025
Cited by 3 | Viewed by 989
Abstract
When vessels are docked at ports, traditional auxiliary engines produce substantial pollutants and noise, exerting pressure on the port environment. Shore power technology, as a green, energy-efficient, and emission-reducing solution, can effectively mitigate ship emissions. However, its widespread adoption is hindered by challenges such as high costs, compatibility issues, and connection complexity. This study develops a multi-objective optimization model for the coordinated allocation of shore power and berth scheduling, integrating economic benefits, environmental benefits, and operational efficiency. The NSGA-III algorithm is employed to solve the model and generate a Pareto-optimal solution set, with the final optimal solution identified using the TOPSIS method. The results demonstrate that the optimized shore power distribution and berth scheduling strategy can significantly reduce ship emissions and port operating costs while enhancing overall port resource utilization efficiency. Additionally, an economically feasible shore power allocation scheme, based on 80% of berth capacity, is proposed. By accounting for variations in ship types, this study provides more targeted and practical optimization strategies. These findings offer valuable decision support for port management and contribute to the intelligent and sustainable development of green ports.
(This article belongs to the Section Marine Environmental Science)
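
The final selection step, TOPSIS over the NSGA-III Pareto front, can be sketched as below. This is an assumed minimal implementation, not the paper's code: rows of `front` are solutions, columns are cost-type objectives (lower is better), and the solution with the largest relative closeness to the ideal point is returned as the compromise.

```python
import numpy as np

def topsis(front, weights):
    m = front / np.linalg.norm(front, axis=0)    # vector-normalize columns
    v = m * weights                              # weighted normalized matrix
    ideal, anti = v.min(axis=0), v.max(axis=0)   # best/worst for cost criteria
    d_pos = np.linalg.norm(v - ideal, axis=1)    # distance to ideal point
    d_neg = np.linalg.norm(v - anti, axis=1)     # distance to anti-ideal point
    closeness = d_neg / (d_pos + d_neg)          # relative closeness in [0, 1]
    return np.argmax(closeness)                  # index of compromise solution

# toy front: [cost, emissions, delay] for three non-dominated schedules
front = np.array([[1.0, 0.9, 0.30], [1.2, 0.6, 0.20], [0.8, 1.1, 0.45]])
print(topsis(front, weights=np.array([0.4, 0.4, 0.2])))
```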

27 pages, 11614 KiB  
Article
Multi-Objective Optimization for Resource Allocation in Space–Air–Ground Network with Diverse IoT Devices
by Yongnan Xu, Xiangrong Tang, Linyu Huang, Hamid Ullah and Qian Ning
Sensors 2025, 25(1), 274; https://doi.org/10.3390/s25010274 - 6 Jan 2025
Cited by 2 | Viewed by 1358
Abstract
As the Internet of Things (IoT) expands globally, the challenge of signal transmission in remote regions without traditional communication infrastructure becomes prominent. An effective solution involves integrating aerial, terrestrial, and space components to form a Space–Air–Ground Integrated Network (SAGIN). This paper discusses an uplink signal scenario in which various types of data-collection sensors, acting as IoT devices, use Unmanned Aerial Vehicles (UAVs) as relays to forward signals to low-Earth-orbit satellites. Considering the fairness of resource allocation among IoT devices of the same category, our goal is to maximize the minimum uplink channel capacity for each category of IoT devices, which is a multi-objective optimization problem. Specifically, the variables include the deployment locations of UAVs, bandwidth allocation ratios, and the association between UAVs and IoT devices. To address this problem, we propose a multi-objective evolutionary algorithm that ensures fair resource distribution among multiple parties. The algorithm is validated in eight different scenario settings and compared with various traditional multi-objective optimization algorithms. The experimental results demonstrate that the proposed algorithm achieves higher-quality Pareto fronts (PFs) and better convergence, indicating more equitable resource allocation and improved effectiveness in addressing this issue. Moreover, these pre-prepared, high-quality solutions from the PFs provide adaptability to varying requirements in signal collection scenarios.
(This article belongs to the Section Internet of Things)
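
The max–min objective stated in the abstract can be written as follows; the notation is assumed (UAV positions $\mathbf{q}$, bandwidth ratios $\boldsymbol{\rho}$, associations $\mathbf{a}$, device set $\mathcal{D}_c$ of category $c$), with the standard channel-capacity form:

```latex
\max_{\mathbf{q},\,\boldsymbol{\rho},\,\mathbf{a}} \;
\min_{i \in \mathcal{D}_c} \; C_i ,
\qquad
C_i = \rho_i B \log_2\!\left(1 + \mathrm{SNR}_i\right),
\quad c = 1,\dots,C ,
```

one such objective per device category, which makes the problem multi-objective.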

26 pages, 621 KiB  
Article
A Bivariate Extension of Type-II Generalized Crack Distribution for Modeling Heavy-Tailed Losses
by Taehan Bae and Hanson Quarshie
Mathematics 2024, 12(23), 3718; https://doi.org/10.3390/math12233718 - 27 Nov 2024
Viewed by 682
Abstract
As an extension of the (univariate) Birnbaum–Saunders distribution, the Type-II generalized crack (GCR2) distribution, built on an appropriate base density, provides a sufficient level of flexibility to fit various distributional shapes, including heavy-tailed ones. In this paper, we develop a bivariate extension of the Type-II generalized crack distribution and study its dependency structure. For practical applications, three specific distributions, GCR2-Generalized Gaussian, GCR2-Student's t, and GCR2-Logistic, are considered for marginals. The expectation-maximization algorithm is implemented to estimate the parameters in the bivariate GCR2 models. The model fitting results on a catastrophic loss dataset show that the bivariate GCR2 distribution based on the generalized Gaussian density fits the data significantly better than other alternative models, such as the bivariate lognormal distribution and some Archimedean copula models with lognormal or Pareto marginals.
(This article belongs to the Special Issue Actuarial Statistical Modeling and Applications)

17 pages, 892 KiB  
Article
Bivariate Pareto–Feller Distribution Based on Appell Hypergeometric Function
by Christian Caamaño-Carrillo, Moreno Bevilacqua, Michael Zamudio-Monserratt and Javier E. Contreras-Reyes
Axioms 2024, 13(10), 701; https://doi.org/10.3390/axioms13100701 - 9 Oct 2024
Cited by 1 | Viewed by 1142
Abstract
The Pareto–Feller distribution has been widely used across various disciplines to model "heavy-tailed" phenomena, where extreme events such as high incomes or large losses are of interest. In this paper, we present a new bivariate distribution based on the Appell hypergeometric function with marginal Pareto–Feller distributions obtained from two independent gamma random variables. The proposed distribution has the beta prime marginal distributions as a special case, obtained using a Kibble-type bivariate gamma distribution, and its stochastic representation is the quotient of a scale mixture of two gamma random variables. This result can be viewed as a generalization of the standard bivariate beta I (or inverted bivariate beta) distribution. Moreover, the obtained bivariate density is based on two confluent hypergeometric functions. We then derive the probability density function, the cumulative distribution function, the moment-generating function, the characteristic function, the approximated differential entropy, and the approximated mutual information index. Numerical examples illustrate the exact and approximated expressions.
(This article belongs to the Special Issue Advances in Statistical Simulation and Computing)
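
The beta prime special case rests on a standard fact about gamma quotients: if $X \sim \mathrm{Gamma}(a,1)$ and $Y \sim \mathrm{Gamma}(b,1)$ are independent, then their ratio follows a beta prime law,

```latex
T = \frac{X}{Y} \sim \beta'(a, b), \qquad
f_T(t) = \frac{t^{\,a-1}(1+t)^{-(a+b)}}{B(a,b)}, \quad t > 0 .
```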

17 pages, 3707 KiB  
Article
Extreme Value Index Estimation for Pareto-Type Tails under Random Censorship and via Generalized Means
by M. Ivette Gomes, Lígia Henriques-Rodrigues, M. Manuela Neves and Helena Penalva
Appl. Sci. 2024, 14(19), 8671; https://doi.org/10.3390/app14198671 - 26 Sep 2024
Cited by 1 | Viewed by 1168
Abstract
The field of statistical extreme value theory (EVT) focuses on estimating parameters associated with extreme events, such as the probability of exceeding a high threshold or determining a high quantile that lies at or beyond the observed data range. Typically, univariate data analysis assumes the sample is complete and either independent and identically distributed or weakly dependent and stationary, drawn from an unknown distribution F. However, in the context of lifetime data, censoring is a common issue. In this work, we consider the case of random censoring for data with a heavy-tailed, Pareto-type distribution. As is common in applications of EVT, the estimation of the extreme value index (EVI) is critical, as it quantifies the tail heaviness of the distribution. The EVI has been extensively studied in the literature. Here, we discuss several classical EVI-estimators and reduced-bias (RB) EVI-estimators within a semi-parametric framework, with a focus on RB EVI-estimators derived from generalized means, which are applied to both simulated and real survival data.
(This article belongs to the Special Issue Applied Biostatistics: Challenges and Opportunities)
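
For reference, the classical estimator that the discussed RB variants correct is the Hill estimator, built from the k largest order statistics $X_{n-k:n} \le \dots \le X_{n:n}$:

```latex
\hat{\xi}^{H}(k) = \frac{1}{k} \sum_{i=1}^{k}
\ln X_{n-i+1:n} \;-\; \ln X_{n-k:n},
\qquad 1 \le k < n ,
```

a consistent EVI estimator for Pareto-type tails $\bar F(x) = x^{-1/\xi} L(x)$ with $\xi > 0$ and $L$ slowly varying.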

18 pages, 321 KiB  
Article
Estimation and Bayesian Prediction of the Generalized Pareto Distribution in the Context of a Progressive Type-II Censoring Scheme
by Tianrui Ye and Wenhao Gui
Appl. Sci. 2024, 14(18), 8433; https://doi.org/10.3390/app14188433 - 19 Sep 2024
Cited by 1 | Viewed by 1351
Abstract
The generalized Pareto distribution plays a significant role in reliability research. This study concentrates on statistical inference for the generalized Pareto distribution utilizing progressively Type-II censored data. Maximum likelihood estimation is performed via the expectation–maximization algorithm, and asymptotic confidence intervals are derived. Bayesian estimation is conducted using the Tierney and Kadane method alongside the Metropolis–Hastings algorithm, and highest posterior density credible intervals are obtained. Furthermore, Bayesian predictive intervals and future sample estimation are explored. To illustrate these inference techniques, a simulation study and a practical example are presented.
(This article belongs to the Special Issue Novel Applications of Machine Learning and Bayesian Optimization)
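
The Metropolis–Hastings step can be sketched as below. This assumed minimal version samples the GPD shape/scale posterior under flat priors from a complete sample, whereas the paper works with the progressively Type-II censored likelihood; `data` is a placeholder.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.genpareto.rvs(c=0.3, scale=2.0, size=100, random_state=rng)

def log_post(c, scale):
    if scale <= 0:
        return -np.inf
    ll = stats.genpareto.logpdf(data, c=c, scale=scale).sum()
    return ll if np.isfinite(ll) else -np.inf   # reject out-of-support proposals

theta = np.array([0.1, 1.0])                    # initial (shape, scale)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.05, size=2)   # random-walk proposal
    if np.log(rng.uniform()) < log_post(*prop) - log_post(*theta):
        theta = prop                                # accept the proposal
    samples.append(theta)

print(np.mean(samples[1000:], axis=0))              # posterior means after burn-in
```
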
18 pages, 1037 KiB  
Article
A New Class of Reduced-Bias Generalized Hill Estimators
by Lígia Henriques-Rodrigues, Frederico Caeiro and M. Ivette Gomes
Mathematics 2024, 12(18), 2866; https://doi.org/10.3390/math12182866 - 14 Sep 2024
Cited by 1 | Viewed by 1013
Abstract
The estimation of the extreme value index (EVI) is a crucial task in the field of statistics of extremes, as it provides valuable insights into the tail behavior of a distribution. For models with a Pareto-type tail, the Hill estimator is a popular choice. However, this estimator is susceptible to bias, which can lead to inaccurate estimations of the EVI, impacting the reliability of risk assessments and decision-making processes. This paper introduces a novel reduced-bias generalized Hill estimator, which aims to enhance the accuracy of EVI estimation by mitigating the bias.
(This article belongs to the Special Issue Computational Statistical Methods and Extreme Value Theory)
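
One classical reduced-bias (corrected-Hill) form, of the family the proposed generalized estimator extends, rescales the Hill estimator $H(k)$ using estimates $(\hat\beta, \hat\rho)$ of the second-order tail parameters:

```latex
\overline{H}(k) = H(k)\left(1 - \frac{\hat{\beta}}{1-\hat{\rho}}
\left(\frac{n}{k}\right)^{\hat{\rho}}\right).
```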

10 pages, 994 KiB  
Article
Multi-UAV Reconnaissance Task Assignment for Heterogeneous Targets with ACD-NSGA-II Algorithm
by Hong Zhang, Kunzhong Miao, Huangzhi Yu and Yifeng Niu
Electronics 2024, 13(18), 3609; https://doi.org/10.3390/electronics13183609 - 11 Sep 2024
Viewed by 1060
Abstract
Existing task assignment algorithms usually solve only a point-based model. This paper proposes a novel algorithm for task assignment in detection search tasks. First, the optimal reconnaissance path is generated by considering the drone's position and attitude information, as well as the types of heterogeneous targets present in the actual scene. Subsequently, an NSGA-II variant with an adaptive crowding-distance calculation (ACD-NSGA-II) is proposed that exploits the relative positions of solutions in space, taking into account the spatial distribution of parent solutions and the constraints imposed by uncertain targets and terrain. Finally, comparative digital-simulation experiments are conducted under two different target probability scenarios. Moreover, the improved algorithm is further evaluated across 100 cases, and its Pareto solution set is compared with those of other algorithms to demonstrate the algorithm's overall adaptability.
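
For orientation, the quantity that ACD-NSGA-II adapts is NSGA-II's standard crowding distance; a minimal sketch, where rows of `objs` are the solutions of one non-dominated front:

```python
import numpy as np

def crowding_distance(objs):
    n, m = objs.shape
    dist = np.zeros(n)
    for j in range(m):                          # accumulate per objective
        order = np.argsort(objs[:, j])
        span = objs[order[-1], j] - objs[order[0], j]
        dist[order[0]] = dist[order[-1]] = np.inf   # boundary points always kept
        if span == 0:
            continue
        # interior points: normalized gap between their two sorted neighbors
        dist[order[1:-1]] += (objs[order[2:], j] - objs[order[:-2], j]) / span
    return dist

objs = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 2.0], [5.0, 1.0]])
print(crowding_distance(objs))                  # boundaries inf, interior finite
```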

32 pages, 3170 KiB  
Article
Inequality in the Distribution of Wealth and Income as a Natural Consequence of the Equal Opportunity of All Members in the Economic System Represented by a Scale-Free Network
by John G. Ingersoll
Economies 2024, 12(9), 232; https://doi.org/10.3390/economies12090232 - 29 Aug 2024
Cited by 1 | Viewed by 2102
Abstract
The purpose of this work is to examine the nature of the inequality in the distribution of wealth and income in an economic system that has been observed historically and is described empirically by the Pareto law. This inequality is presumed to be the result of unequal opportunity among the system's members. An analytical model of the economic system, consisting of a large number of actors all having equal access to its total wealth (or income), has been developed; it is formally represented by a scale-free network comprised of nodes (actors) and links (states of wealth or income). The dynamic evolution of the complex network can in turn be mapped, as is known, into a system of quantum particles (links) distributed among various energy levels (nodes) in thermodynamic equilibrium. The distribution of quantum particles (photons) at different energy levels in the physical system is then derived from statistical thermodynamics, with entropy maximized for the system to be in dynamic equilibrium. The resulting Planck-type distribution of the physical system, mapped into a scale-free network, leads naturally to the Pareto law distribution of the economic system. The conclusions of the scale-free complex network model leading to the analytical derivation of the empirical Pareto law are multifold. First, any complex economic system behaves akin to a scale-free complex network. Second, equal access or opportunity leads to unequal outcomes. Third, an optimal value of the Pareto index is obtained that ensures the optimal, albeit unequal, outcome of wealth and income distribution. Fourth, the optimal value of the Gini coefficient can then be calculated and compared to the empirical values of that coefficient for wealth and income to ascertain how close an economic system is to its optimal distribution of income and wealth among its members. Fifth, in an economic system with equal opportunity for all its members, there should be no difference between the resulting income and wealth distributions. Examination of the wealth and income distributions of national economies, as described by the Gini coefficient, suggests that income and particularly wealth are far from their optimal values. We conclude that equality of opportunity should be the fundamental guiding principle of any economic system for the optimal distribution of wealth and income. The practical implication is that societies ought to shift focus from policies such as taxation and payment transfers purporting to produce equal outcomes for all, a goal that is unattainable and wasteful, to policies advancing, among others, education, health care, and affordable housing for all, as well as the re-evaluation of rules and institutions, so that all members of the economic system have equal opportunity for the optimal utilization of resources and the distribution of wealth and income. Future research efforts should develop the scale-free complex network model of the economy as a complement to the current standard models.
(This article belongs to the Special Issue Innovation, Reallocation and Economy Growth)
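
The link between the Pareto index and the Gini coefficient invoked in the abstract is, for a Type I Pareto tail, the standard closed form

```latex
\bar{F}(x) = \left(\frac{x}{x_m}\right)^{-\alpha}, \quad x \ge x_m,
\qquad
G = \frac{1}{2\alpha - 1}, \quad \alpha > 1 ,
```

so an optimal value of $\alpha$ fixes an optimal value of $G$ directly.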

15 pages, 29814 KiB  
Case Report
Assessment of Extracellular Matrix Fibrous Elements in Male Dermal Aging: A Ten-Year Follow-Up Preliminary Case Study
by Bogusław Machaliński, Dorota Oszutowska-Mazurek, Przemyslaw Mazurek, Mirosław Parafiniuk, Paweł Szumilas, Alicja Zawiślak, Małgorzata Zaremba, Iwona Stecewicz, Piotr Zawodny and Barbara Wiszniewska
Biology 2024, 13(8), 636; https://doi.org/10.3390/biology13080636 - 20 Aug 2024
Cited by 1 | Viewed by 1559
Abstract
Skin aging is a complex phenomenon influenced by multiple internal and external factors that can lead to significant changes in skin structure, particularly the degradation of key extracellular matrix (ECM) components such as collagen and elastic fibers in the dermis. In this study, we aimed to meticulously assess the morphological changes within these critical fibrous ECM elements in the dermis of the same volunteer at age 47 and 10 years later (2012 to 2022). Using advanced histological staining techniques, we examined the distribution and characteristics of ECM components, including type I collagen, type III collagen, and elastic fibers. Morphological analysis, facilitated by hematoxylin and eosin staining, allowed for an accurate assessment of fiber bundle thickness and a quantification of collagen and elastic fiber areas. In addition, we used the generalized Pareto distribution for histogram modeling to refine our statistical analyses. This research represents a pioneering effort to examine changes in ECM fiber material, specifically within the male dermis over a decade-long period. Our findings reveal substantial changes in the organization of type I collagen within the ECM, providing insight into the dynamic processes underlying skin aging.
(This article belongs to the Section Cell Biology)
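
The histogram-modeling step can be sketched as below; this assumed example, not the authors' pipeline, fits a generalized Pareto density to synthetic fiber-area measurements (`areas` is a placeholder).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
areas = stats.genpareto.rvs(c=0.2, scale=1.5, size=1000, random_state=rng)

c, loc, scale = stats.genpareto.fit(areas, floc=0)   # fix location at zero
counts, edges = np.histogram(areas, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
model = stats.genpareto.pdf(centers, c, loc=loc, scale=scale)

# compare the empirical histogram with the fitted GPD density, bin by bin
for h, m in zip(counts[:5], model[:5]):
    print(f"empirical={h:.3f}  fitted={m:.3f}")
```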

32 pages, 11808 KiB  
Article
A Multi-Objective Non-Dominated Sorting Gravitational Search Algorithm for Assembly Flow-Shop Scheduling of Marine Prefabricated Cabins
by Ruipu Dong, Jinghua Li, Dening Song, Boxin Yang and Lei Zhou
Mathematics 2024, 12(14), 2288; https://doi.org/10.3390/math12142288 - 22 Jul 2024
Cited by 1 | Viewed by 1226
Abstract
Prefabricated cabin modular units (PMCUs) are a widespread type of intermediate product used during ship or offshore platform construction. This paper focuses on the scheduling problem of PMCU assembly flow shops, which is formulated as a multi-objective, fuzzy-blocking hybrid flow-shop-scheduling problem based on learning and fatigue effects (FB-HFSP-LF), with the objectives of minimizing the maximum fuzzy makespan and maximizing the average fuzzy due-date agreement index. This paper proposes a multi-objective non-dominated sorting gravitational search algorithm (MONSGSA) to solve it. In the proposed MONSGSA, the ranked-order value is used to convert continuous solutions to discrete solutions, and multi-dimensional Latin hypercube sampling is used to enhance initial population diversity. An external archive is maintained to store non-dominated solutions, while an adaptive inertia factor and a trap-avoidance operator are introduced to guide individual position updates. The results of multiple sets of experiments show that the Pareto solutions of MONSGSA have better distribution and convergence than those of other competitors. Finally, an instance from a PMCU manufacturer is used for validation, and the results show that MONSGSA has better applicability to practical problems.
(This article belongs to the Special Issue Mathematical Programming, Optimization and Operations Research)
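
The ranked-order-value decoding mentioned in the abstract maps a continuous position vector to a job permutation by ranking its components; a minimal sketch:

```python
import numpy as np

def rov_decode(position):
    # the job with the smallest component value is scheduled first
    return np.argsort(position)

position = np.array([0.62, 0.11, 0.48, 0.93, 0.27])
print(rov_decode(position))    # -> [1 4 2 0 3]
```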

37 pages, 10996 KiB  
Article
A Cost-Effective Approach for the Integrated Optimization of Line Planning and Timetabling in an Urban Rail Transit Line
by Yi Gao, Chuanjun Jia, Zhipeng Wang and Zhiyuan Hu
Appl. Sci. 2024, 14(14), 6273; https://doi.org/10.3390/app14146273 - 18 Jul 2024
Cited by 1 | Viewed by 1243
Abstract
Line planning and timetabling play important roles in the design of urban rail transportation services. Due to the complexity of the integrated optimization of entire transportation plans, previous studies have generally considered line planning and timetabling design independently, which cannot ensure the global optimality of transportation services. In this study, the integrated design problem of line planning and timetabling was characterized as an equilibrium space–time network design problem and solved with a bi-objective nonlinear integer programming model. The model, in which train overtaking and passenger path choice behavior were considered, adjusted the network topology and link attributes (time and capacity) of the travel space–time network by optimizing the train service frequency, operation zone, stopping pattern, train formation, and train order to minimize the system life cycle cost and total passenger travel time perception. An algorithm was constructed using the non-dominated sorting genetic algorithm II combined with the self-adaptive gradient projection algorithm to solve the model. A real-world case was considered to evaluate the effectiveness of the proposed model and algorithm. The results showed that the model not only performed well in the trade-off between system cost and passenger travel efficiency, but it could also reduce the imbalance of train and station loads. Pareto front analysis of the model with different parameters showed that more types of trains did not correlate with a better performance, some line-planning strategies had a combination effect, and multi-strategy line planning was more suitable for scenarios with a high imbalance in the temporal and spatial distributions of passenger flow.