Search Results (33)

Search Parameters:
Keywords = unreliable variances

16 pages, 995 KiB  
Article
An Upper Partial Moment Framework for Pathfinding Problem Under Travel Time Uncertainty
by Xu Zhang and Mei Chen
Systems 2025, 13(7), 600; https://doi.org/10.3390/systems13070600 - 17 Jul 2025
Viewed by 128
Abstract
Route planning under uncertain traffic conditions requires accounting for not only expected travel times but also the risk of late arrivals. This study proposes a mean-upper partial moment (MUPM) framework for pathfinding that explicitly considers travel time unreliability. The framework incorporates a benchmark travel time to measure the upper partial moment (UPM), capturing both the probability and severity of delays. By adjusting a risk parameter (θ), the model reflects different traveler risk preferences and unifies several existing reliability measures, including on-time arrival probability, late arrival penalty, and semi-variance. A bi-objective model is formulated to simultaneously minimize mean travel time and UPM. Theoretical analysis shows that the MUPM framework is consistent with the expected utility theory (EUT) and stochastic dominance theory (SDT), providing a behavioral foundation for the model. To efficiently solve the model, an SDT-based label-correcting algorithm is adapted, with a pre-screening step to reduce unnecessary pairwise path comparisons. Numerical experiments using GPS probe vehicle data from Louisville, Kentucky, USA, demonstrate that varying θ values lead to different non-dominated paths. Lower θ values emphasize frequent small delays but may overlook excessive delays, while higher θ values effectively capture the tail risk, aligning with the behavior of risk-averse travelers. The MUPM framework provides a flexible, behaviorally grounded, and computationally scalable approach to pathfinding under uncertainty. It holds strong potential for applications in traveler information systems, transportation planning, and network resilience analysis. Full article
(This article belongs to the Special Issue Data-Driven Urban Mobility Modeling)
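The UPM measure described in the abstract lends itself to a short sketch. The following is a minimal illustration of an order-θ upper partial moment as defined there, not the paper's implementation; the travel-time samples and the 35-minute benchmark are hypothetical.

```python
import numpy as np

def upper_partial_moment(times, benchmark, theta):
    """UPM of order theta: E[max(T - b, 0)^theta] over travel-time samples.

    theta = 0 recovers the probability of a late arrival,
    theta = 1 the expected delay beyond the benchmark, and
    theta = 2 a semi-variance-style measure that weights severe delays.
    """
    excess = np.maximum(np.asarray(times, dtype=float) - benchmark, 0.0)
    if theta == 0:
        # 0**0 evaluates to 1 in Python, so handle the indicator case explicitly
        return float(np.mean(excess > 0))
    return float(np.mean(excess ** theta))

# Hypothetical travel-time samples (minutes) for one candidate path
samples = [28, 30, 31, 29, 45, 33, 30, 52, 29, 31]
late_probability = upper_partial_moment(samples, benchmark=35, theta=0)  # share of late trips
expected_delay = upper_partial_moment(samples, benchmark=35, theta=1)    # mean minutes late
```

A bi-objective comparison in the spirit of the abstract would then trade off the mean of `samples` against the UPM at the traveller's chosen θ.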

26 pages, 1556 KiB  
Article
Modified Two-Parameter Ridge Estimators for Enhanced Regression Performance in the Presence of Multicollinearity: Simulations and Medical Data Applications
by Muteb Faraj Alharthi and Nadeem Akhtar
Axioms 2025, 14(7), 527; https://doi.org/10.3390/axioms14070527 - 10 Jul 2025
Viewed by 197
Abstract
Predictive regression models often face a common challenge known as multicollinearity. This phenomenon can distort the results, causing models to overfit and produce unreliable coefficient estimates. Ridge regression is a widely used approach that incorporates a regularization term to stabilize parameter estimates and improve the prediction accuracy. In this study, we introduce four newly modified ridge estimators, referred to as RIRE1, RIRE2, RIRE3, and RIRE4, that are aimed at tackling severe multicollinearity more effectively than ordinary least squares (OLS) and other existing estimators under both normal and non-normal error distributions. The ridge estimators are biased, so their efficiency cannot be judged by variance alone; instead, we use the mean squared error (MSE) to compare their performance. Each new estimator depends on two shrinkage parameters, k and d, making the theoretical analysis complex. To address this, we employ Monte Carlo simulations to rigorously evaluate and compare these new estimators with OLS and other existing ridge estimators. Our simulations show that the proposed estimators consistently minimize the MSE better than OLS and other ridge estimators, particularly in datasets with strong multicollinearity and large error variances. We further validate their practical value through applications using two real-world datasets, demonstrating both their robustness and theoretical alignment. Full article
(This article belongs to the Special Issue Applied Mathematics and Mathematical Modeling)
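As a rough sketch of the two-shrinkage-parameter idea (not the paper's RIRE1–RIRE4, whose exact forms are not given here), one widely used two-parameter ridge family is β(k, d) = (X′X + kI)⁻¹(X′X + kdI)β_OLS, which reduces to OLS when k = 0 and to ordinary ridge when d = 0:

```python
import numpy as np

def two_parameter_ridge(X, y, k, d):
    """A common two-parameter ridge family (a sketch, not the paper's
    RIRE estimators): beta(k, d) = (X'X + kI)^-1 (X'X + k*d*I) beta_OLS.
    k = 0 gives OLS; d = 0 gives the ordinary one-parameter ridge."""
    XtX = X.T @ X
    p = XtX.shape[0]
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    return np.linalg.solve(XtX + k * np.eye(p), (XtX + k * d * np.eye(p)) @ beta_ols)

# Small collinear design with hypothetical numbers
X = np.array([[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9]])
y = np.array([2.1, 3.9, 6.2, 8.1])
beta_kd = two_parameter_ridge(X, y, k=2.0, d=0.5)
```

Because such estimators are biased, candidate (k, d) pairs are compared by estimated MSE rather than variance alone, as the abstract notes.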

27 pages, 3332 KiB  
Article
Wind Speed Forecasting with Differentially Evolved Minimum-Bandwidth Filters and Gated Recurrent Units
by Khathutshelo Steven Sivhugwana and Edmore Ranganai
Forecasting 2025, 7(2), 27; https://doi.org/10.3390/forecast7020027 - 10 Jun 2025
Viewed by 994
Abstract
Wind data are often cyclostationary due to cyclic variations, non-constant variance resulting from fluctuating weather conditions, and structural breaks caused by transient behaviour (wind gusts and turbulence), resulting in unreliable wind power supply. In wavelet hybrid forecasting, wind prediction accuracy depends heavily on the decomposition level (L) and the wavelet filter technique selected. Hence, we examined the efficacy of wind predictions as a function of L and wavelet filters. In the proposed hybrid approach, differential evolution (DE) optimises the decomposition level of various wavelet filters (i.e., least asymmetric (LA), Daubechies (DB), and Morris minimum-bandwidth (MB)) using the maximal overlap discrete wavelet transform (MODWT), allowing for the decomposition of wind data into more statistically sound sub-signals. These sub-signals are used as inputs into the gated recurrent unit (GRU) to accurately capture wind speed. The final predicted values are obtained by reconciling the sub-signal predictions using multiresolution analysis (MRA) to form wavelet-MODWT-GRUs. Using wind data from three Wind Atlas South Africa (WASA) locations, Alexander Bay, Humansdorp, and Jozini, the root mean square error, mean absolute error, coefficient of determination, probability integral transform, pinball loss, and Dawid-Sebastiani score indicated that the MB-MODWT-GRU at L = 3 was best across the three locations. Full article
(This article belongs to the Special Issue Feature Papers of Forecasting 2025)

19 pages, 750 KiB  
Article
Evaluating Estimator Performance Under Multicollinearity: A Trade-Off Between MSE and Accuracy in Logistic, Lasso, Elastic Net, and Ridge Regression with Varying Penalty Parameters
by H. M. Nayem, Sinha Aziz and B. M. Golam Kibria
Stats 2025, 8(2), 45; https://doi.org/10.3390/stats8020045 - 31 May 2025
Viewed by 474
Abstract
Multicollinearity in logistic regression models can result in inflated variances and yield unreliable estimates of parameters. Ridge regression, a regularized estimation technique, is frequently employed to address this issue. This study conducts a comparative evaluation of the performance of 23 established ridge regression estimators alongside Logistic Regression, Elastic-Net, Lasso, and Generalized Ridge Regression (GRR), considering various levels of multicollinearity within the context of logistic regression settings. Simulated datasets with high correlations (0.80, 0.90, 0.95, and 0.99) and real-world data (municipal and cancer remission) were analyzed. Both results show that ridge estimators, such as kAL1, kAL2, kKL1, and kKL2, exhibit strong performance in terms of Mean Squared Error (MSE) and accuracy, particularly in smaller samples, while GRR demonstrates superior performance in large samples. Real-world data further confirm that GRR achieves the lowest MSE in highly collinear municipal data, while ridge estimators and GRR help prevent overfitting in small-sample cancer remission data. The results underscore the efficacy of ridge estimators and GRR in handling multicollinearity, offering reliable alternatives to traditional regression techniques, especially for datasets with high correlations and varying sample sizes. Full article

10 pages, 201 KiB  
Article
Novel Use of Generalizability Theory to Optimize Countermovement Jump Data Collection
by Alan Huebner, Jonathon R. Lever, Thomas W. Clark, Timothy J. Suchomel, Casey J. Metoyer, Jonathan D. Hauenstein and John P. Wagle
Sports 2025, 13(3), 85; https://doi.org/10.3390/sports13030085 - 12 Mar 2025
Viewed by 1099
Abstract
This study aimed to evaluate the reliability of countermovement jump (CMJ) performance metrics across five NCAA Division I varsity sports using Generalizability Theory (G-Theory). Three hundred male athletes from football, hockey, baseball, soccer, and lacrosse performed three or more CMJs on dual-force platforms. G-Theory was applied to identify variance components and determine reliability coefficients (Φ) for 14 key metrics. Metrics requiring more than three jumps to achieve Φ ≥ 0.80 were deemed unreliable. Metric reliability varied by sport and phase of movement. Metrics associated with the eccentric phase (e.g., Eccentric Duration, Deceleration Rate of Force Development Asymmetry) demonstrated lower reliability, often requiring >3 jumps. Reliable metrics across sports included Phase 1 Concentric Impulse and Scaled Power, requiring three trials or fewer. CMJ reliability is sport- and metric-specific. Practitioners should prioritize reliable metrics and adjust protocols to balance data quality and practicality, particularly when monitoring eccentric characteristics. Full article
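The "more than three jumps for Φ ≥ 0.80" criterion can be sketched with the standard dependability projection for averaged trials, Φ(k) = σ²_athlete / (σ²_athlete + σ²_error / k). The variance components below are hypothetical, not the study's estimates.

```python
def trials_needed(var_athlete, var_error, target=0.80, max_trials=10):
    """Smallest number of trials k whose average reaches the target
    dependability phi(k) = var_a / (var_a + var_e / k); returns None
    if max_trials is insufficient (the study's 'unreliable' case)."""
    for k in range(1, max_trials + 1):
        phi = var_athlete / (var_athlete + var_error / k)
        if phi >= target:
            return k
    return None

# Hypothetical variance components for two CMJ metrics
stable_metric = trials_needed(var_athlete=4.0, var_error=1.0)  # reliable with few trials
noisy_metric = trials_needed(var_athlete=1.0, var_error=2.0)   # needs many more trials
```

Averaging more trials shrinks the error term by 1/k, which is why eccentric-phase metrics with large error variance need more than three jumps.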
16 pages, 903 KiB  
Article
Newly Improved Two-Parameter Ridge Estimators: A Better Approach for Mitigating Multicollinearity in Regression Analysis
by Muteb Faraj Alharthi and Nadeem Akhtar
Axioms 2025, 14(3), 186; https://doi.org/10.3390/axioms14030186 - 2 Mar 2025
Cited by 1 | Viewed by 738
Abstract
This study tackles the common issue of multicollinearity arising in regression models due to high correlations among predictor variables, which leads to unreliable coefficient estimates and inflated variances, ultimately affecting the model’s accuracy. To address this issue, we introduce four improved two-parameter ridge estimators, named MIRE1, MIRE2, MIRE3, and MIRE4, which incorporate innovative adjustments such as logarithmic transformations and customized penalization strategies to enhance estimation efficiency. These biased estimators are evaluated through a comprehensive Monte Carlo simulation using the minimum estimated mean square error (MSE) criterion. Although no single ridge estimator performs optimally under all conditions, our proposed estimators consistently outperform existing estimators in most scenarios. Notably, MIRE2 and MIRE3 emerge as the best-performing estimators across a variety of conditions. Their practical utility is further demonstrated through applications to two real-world datasets. The results of the analysis confirm that the proposed ridge estimators offer a reliable and effective approach for improving estimation precision in regression models, as they consistently yield the lowest MSE compared to other estimators. Full article
(This article belongs to the Special Issue Computational Statistics and Its Applications)
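The evaluation criterion used here, minimum estimated MSE under Monte Carlo simulation, can be sketched as follows. This compares OLS against an ordinary one-parameter ridge estimator on strongly collinear data as a stand-in for the MIRE estimators; all settings (n, correlation, noise level, k) are hypothetical.

```python
import numpy as np

def simulate_mse(k, n=50, rho=0.99, sigma=5.0, reps=500, seed=1):
    """Monte Carlo estimate of MSE(beta_hat) = E||beta_hat - beta||^2
    for the ridge estimator (X'X + kI)^-1 X'y; k = 0 gives OLS."""
    rng = np.random.default_rng(seed)
    beta = np.array([1.0, 1.0])
    total = 0.0
    for _ in range(reps):
        z = rng.normal(size=n)
        # Two predictors with correlation ~rho (severe multicollinearity)
        X = np.column_stack([z, rho * z + np.sqrt(1 - rho**2) * rng.normal(size=n)])
        y = X @ beta + rng.normal(scale=sigma, size=n)
        beta_hat = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)
        total += np.sum((beta_hat - beta) ** 2)
    return total / reps

mse_ols = simulate_mse(k=0.0)
mse_ridge = simulate_mse(k=5.0)  # ridge typically wins under severe collinearity
```

Under these settings the variance explosion of OLS along the weak eigendirection of X′X dominates, so the biased ridge estimator attains a much smaller estimated MSE, mirroring the paper's selection criterion.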

18 pages, 2556 KiB  
Article
Soil Salinity Mapping of Plowed Agriculture Lands Combining Radar Sentinel-1 and Optical Sentinel-2 with Topographic Data in Machine Learning Models
by Diego Tola, Frédéric Satgé, Ramiro Pillco Zolá, Humberto Sainz, Bruno Condori, Roberto Miranda, Elizabeth Yujra, Jorge Molina-Carpio, Renaud Hostache and Raúl Espinoza-Villar
Remote Sens. 2024, 16(18), 3456; https://doi.org/10.3390/rs16183456 - 18 Sep 2024
Cited by 4 | Viewed by 4225
Abstract
This study assesses the relative performance of Sentinel-1 and -2 and their combination with topographic information for soil salinity mapping of plowed agricultural land. A learning database made of 255 soil samples’ electrical conductivity (EC), along with corresponding radar (R), optical (O), and topographic (T) information derived from Sentinel-1 (S1), Sentinel-2 (S2), and the SRTM digital elevation model, respectively, was used to train four machine learning models (Decision Tree—DT, Random Forest—RF, Gradient Boosting—GB, Extreme Gradient Boosting—XGB). Each model was separately trained/validated for four scenarios based on four combinations of R, O, and T (R, O, R+O, R+O+T), with and without feature selection. Recursive Feature Elimination with k-fold cross-validation (RFEcv, 10-fold) and the Variance Inflation Factor (VIF) were used in the feature selection process to minimize multicollinearity by selecting the most relevant features. The most reliable salinity estimates are obtained for the R+O+T scenario with feature selection, with R² of 0.73, 0.74, 0.75, and 0.76 for DT, GB, RF, and XGB, respectively. Conversely, models based on R information alone led to unreliable soil salinity estimates due to the saturation of the C-band signal in plowed lands. Full article
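The VIF screen used in the feature-selection step can be sketched in a few lines (NumPy only; statsmodels offers an equivalent `variance_inflation_factor`). The predictors below are synthetic, not the Sentinel-derived features.

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R^2_j), where R^2_j
    comes from regressing column j of X on the remaining columns
    (with an intercept). Values above ~10 usually flag severe
    multicollinearity."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    factors = []
    for j in range(p):
        target = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, target, rcond=None)
        resid = target - others @ coef
        ss_tot = np.sum((target - target.mean()) ** 2)
        r2 = 1.0 - np.sum(resid**2) / ss_tot
        factors.append(1.0 / (1.0 - r2))
    return np.array(factors)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x3 = rng.normal(size=200)
# Two nearly duplicate predictors plus one independent predictor
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=200), x3])
factors = vif(X)
```

Dropping one column of each high-VIF pair before model training is the multicollinearity-minimizing step the abstract describes.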

35 pages, 4495 KiB  
Article
Low-Level Visual Features of Window Views Contribute to Perceived Naturalness and Mental Health Outcomes
by Larissa Samaan, Leonie Klock, Sandra Weber, Mirjam Reidick, Leonie Ascone and Simone Kühn
Int. J. Environ. Res. Public Health 2024, 21(5), 598; https://doi.org/10.3390/ijerph21050598 - 6 May 2024
Cited by 3 | Viewed by 2933
Abstract
Previous studies have shown that natural window views are beneficial for mental health, but it is still unclear which specific features constitute a ‘natural’ window view. On the other hand, studies on image analysis have found that low-level visual features (LLVFs) are associated with perceived naturalness, but these mainly used brief stimulus presentations. In this study, research on the effects of window views on mental health was combined with detailed analysis of LLVFs. Healthy adults rated window views from their home and sent in photographs of those views for analysis. Content validity of the ‘ecological’ view assessment was evaluated by checking correlations of LLVFs with window view ratings. Afterwards, it was explored which of the LLVFs best explained variance in the perceived percentage of nature and man-made elements, and in ratings of view quality. Criterion validity was tested by investigating which variables were associated with negative affect and impulsive decision-making. The objective and subjective assessments of nature/sky in the view were aligned, but objective brightness was unreliable. The perceived percentage of nature was significantly explained by green pixel ratio, while view quality was associated with fractals, saturation, sky pixel ratio, and straight edge density. Higher subjective brightness of rooms was associated with lower negative affect, whereas results for impulsive decision-making were inconsistent. The research highlights the validity of applying LLVF analysis to ecological window views. For affect, subjective brightness seemed to be more relevant than LLVFs. For impulsive decision-making, performance context needs to be controlled in future studies. Full article
(This article belongs to the Section Behavioral and Mental Health)

20 pages, 1837 KiB  
Article
Detection of Forged Images Using a Combination of Passive Methods Based on Neural Networks
by Ancilon Leuch Alencar, Marcelo Dornbusch Lopes, Anita Maria da Rocha Fernandes, Julio Cesar Santos dos Anjos, Juan Francisco De Paz Santana and Valderi Reis Quietinho Leithardt
Future Internet 2024, 16(3), 97; https://doi.org/10.3390/fi16030097 - 14 Mar 2024
Cited by 3 | Viewed by 2610
Abstract
In the current era of social media, the proliferation of images sourced from unreliable origins underscores the pressing need for robust methods to detect forged content, particularly amidst the rapid evolution of image manipulation technologies. Existing literature delineates two primary approaches to image manipulation detection: active and passive. Active techniques intervene preemptively, embedding structures into images to facilitate subsequent authenticity verification, whereas passive methods analyze image content for traces of manipulation. This study presents a novel solution to image manipulation detection by leveraging a multi-stream neural network architecture. Our approach harnesses three convolutional neural networks (CNNs) operating on distinct data streams extracted from the original image. We have developed a solution based on two passive detection methodologies. The system utilizes two separate streams to extract specific data subsets, while a third stream processes the unaltered image. Each net independently processes its respective data stream, capturing diverse facets of the image. The outputs from these nets are then fused through concatenation to ascertain whether the image has undergone manipulation, yielding a comprehensive detection framework surpassing the efficacy of its constituent methods. Our work introduces a unique dataset derived from the fusion of four publicly available datasets, featuring organically manipulated images that closely resemble real-world scenarios. This dataset offers a more authentic representation than other state-of-the-art methods that use algorithmically generated datasets based on image patches. By encompassing genuine manipulation scenarios, our dataset enhances the model’s ability to generalize across varied manipulation techniques, thereby improving its performance in real-world settings. 
After training, the merged approach obtained an accuracy of 89.59% on the set of validation images, significantly higher than the model trained with only unaltered images, which obtained 78.64%, and the two other models trained using images with a feature selection method applied to enhance inconsistencies, which obtained 68.02% for Error-Level Analysis images and 50.70% for the method using the Discrete Wavelet Transform. Moreover, our proposed approach exhibits reduced accuracy variance compared to alternative models, underscoring its stability and robustness across diverse datasets. The approach outlined in this work does not provide information about the specific location or type of tampering, which limits its practical applications. Full article
(This article belongs to the Special Issue Secure Communication Protocols for Future Computing)

13 pages, 2216 KiB  
Article
Problematic Smartphone Usage in Singaporean University Students: An Analysis of Self-Reported Versus Objectively Measured Smartphone Usage Patterns
by James Keng Hong Teo, Iris Yue Ling Chionh, Nasharuddin Akmal Bin Shaul Hamed and Christopher Lai
Healthcare 2023, 11(23), 3033; https://doi.org/10.3390/healthcare11233033 - 24 Nov 2023
Cited by 3 | Viewed by 3642
Abstract
Introduction: Problematic smartphone usage is the excessive use of the smartphone, leading to addiction symptoms that impair one’s functional status. Self-administered surveys developed to describe the symptoms and measure the risk of problematic smartphone usage have been associated with depressive symptoms, symptoms of anxiety disorder, and perceived stress. However, self-reported smartphone usage can be unreliable, and previous studies have identified a better association between objectively measured smartphone usage and problematic smartphone usage. Methodology: A self-administered survey was used to investigate the relationships between the risk of problematic smartphone usage (SAS–SV) and depressive symptoms (PHQ–9), anxiety disorder symptoms (GAD–7), and perceived stress (PSS) in Singaporean full-time university students. Self-reported screentime and objectively measured screentime were collected to determine if there is any difference between perceived and objective smartphone usage. Results: There was no statistical difference between self-reported and app-measured screentime in the study population. However, there were significant positive correlations of SAS–SV with PHQ–9, GAD–7, and PSS. In the logistic regression model, PHQ–9 was found to be the sole predictor of variance in SAS–SV scores in the study population. Conclusion: This study suggests that problematic smartphone usage may be related to depressive symptoms, symptoms of anxiety disorder, and greater perceived stress in university students. Full article

18 pages, 697 KiB  
Article
Russo-Ukrainian War and Trust or Mistrust in Information: A Snapshot of Individuals’ Perceptions in Greece
by Paraskevi El. Skarpa, Konstantinos B. Simoglou and Emmanouel Garoufallou
Journal. Media 2023, 4(3), 835-852; https://doi.org/10.3390/journalmedia4030052 - 27 Jul 2023
Cited by 5 | Viewed by 5844
Abstract
The purpose of this study was to assess the Greek public’s perceptions of the reliability of information received about the Russo-Ukrainian war in the spring of 2022. The study was conducted through an online questionnaire survey consisting of closed-ended statements on a five-point Likert scale. Principal components analysis was performed on the collected data. The retained principal components (PCs) were subjected to non-hierarchical k-means cluster analysis to group respondents into clusters based on the similarity of perceived outcomes. A total of 840 responses were obtained. Twenty-eight original variables from the questionnaire were summarised into five PCs, explaining 63.0% of the total variance. The majority of respondents felt that the information they had received about the Russo-Ukrainian war was unreliable. Older, educated, professional people with exposure to fake news were sceptical about the reliability of information related to the war. Young adults who were active on social networks and had no detailed knowledge of the events considered information about the war to be reliable. The study found that the greater an individual’s ability to spot fake news, the lower their trust in social media and their information habits on social networks. Full article

9 pages, 1097 KiB  
Article
Perfusion-Weighted Imaging: The Use of a Novel Perfusion Scoring Criteria to Improve the Assessment of Brain Tumor Recurrence versus Treatment Effects
by Sneha Sai Mannam, Chibueze D. Nwagwu, Christina Sumner, Brent D. Weinberg and Kimberly B. Hoang
Tomography 2023, 9(3), 1062-1070; https://doi.org/10.3390/tomography9030087 - 23 May 2023
Cited by 1 | Viewed by 2933
Abstract
Introduction: Imaging surveillance of contrast-enhancing lesions after the treatment of malignant brain tumors with radiation is plagued by an inability to reliably distinguish between tumor recurrence and treatment effects. Magnetic resonance perfusion-weighted imaging (PWI)—among other advanced brain tumor imaging modalities—is a useful adjunctive tool for distinguishing between these two entities but can be clinically unreliable, leading to the need for tissue sampling to confirm diagnosis. This may be partially because clinical PWI interpretation is non-standardized and no grading criteria are used for assessment, leading to interpretation discrepancies. This variance in the interpretation of PWI and its subsequent effect on the predictive value has not been studied. Our objective is to propose structured perfusion scoring criteria and determine their effect on the clinical value of PWI. Methods: Patients treated at a single institution between 2012 and 2022 who had prior irradiated malignant brain tumors and subsequent progression of contrast-enhancing lesions determined by PWI were retrospectively studied from CTORE (CNS Tumor Outcomes Registry at Emory). PWI was given two separate qualitative scores (high, intermediate, or low perfusion). The first (control) was assigned by a neuroradiologist in the radiology report in the course of interpretation with no additional instruction. The second (experimental) was assigned by a neuroradiologist with additional experience in brain tumor interpretation using a novel perfusion scoring rubric. The perfusion assessments were divided into three categories, each directly corresponding to the pathology-reported classification of residual tumor content. The interpretation accuracy in predicting the true tumor percentage, our primary outcome, was assessed through Chi-squared analysis, and inter-rater reliability was assessed using Cohen’s Kappa. Results: Our 55-patient cohort had a mean age of 53.5 ± 12.2 years. 
The percentage agreement between the two scores was 57.4% (κ: 0.271). Upon conducting the Chi-squared analysis, we found an association with the experimental group reads (p-value: 0.014) but no association with the control group reads (p-value: 0.734) in predicting tumor recurrence versus treatment effects. Conclusions: With our study, we showed that having an objective perfusion scoring rubric aids in improved PWI interpretation. Although PWI is a powerful tool for CNS lesion diagnosis, methodological radiology evaluation greatly improves the accurate assessment and characterization of tumor recurrence versus treatment effects by all neuroradiologists. Further work should focus on standardizing and validating scoring rubrics for PWI evaluation in tumor patients to improve diagnostic accuracy. Full article
(This article belongs to the Special Issue Current Trends in Diagnostic and Therapeutic Imaging of Brain Tumors)
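The agreement statistics reported above, percent agreement and Cohen's κ, can be computed directly. The two readers' perfusion scores below are hypothetical, not the study's data.

```python
import numpy as np

def cohens_kappa(rater1, rater2, labels):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance),
    where chance agreement sums the product of each rater's label frequencies."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    p_obs = np.mean(r1 == r2)
    p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in labels)
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical high/intermediate/low perfusion reads from two neuroradiologists
control = ["high", "low", "low", "int", "high", "low"]
experimental = ["high", "low", "int", "int", "high", "high"]
kappa = cohens_kappa(control, experimental, labels=["high", "int", "low"])
```

A κ of 0.271 with 57.4% raw agreement, as in the abstract, illustrates why κ is reported alongside percent agreement: much of the raw agreement on a three-level scale can arise by chance.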

17 pages, 4447 KiB  
Article
Probabilistic Slope Seepage Analysis under Rainfall Considering Spatial Variability of Hydraulic Conductivity and Method Comparison
by Hao Zou, Jing-Sen Cai, E-Chuan Yan, Rui-Xuan Tang, Lin Jia and Kun Song
Water 2023, 15(4), 810; https://doi.org/10.3390/w15040810 - 19 Feb 2023
Cited by 1 | Viewed by 2624
Abstract
Due to the spatial variability of hydraulic properties, probabilistic slope seepage analysis becomes necessary. This study conducts a probabilistic analysis of slope seepage under rainfall, considering the spatial variability of saturated hydraulic conductivity. Through this, both the commonly used Monte Carlo simulation method and the proposed first-order stochastic moment approach are tested and compared. The results indicate that the first-order analysis approach is effective and applicable to the study of flow processes in a slope scenario. It is also capable of obtaining statistics such as the mean and variance with sufficient accuracy. Using this approach, higher variability in the pressure head and in the fluctuation of the phreatic surface in the slope is found for larger correlation lengths of the saturated hydraulic conductivity. The Monte Carlo simulation is found to be time-consuming: at least 10,000 realizations are required to reach convergence, and the number of realizations needed is sensitive to the grid density, with a coarser grid requiring more realizations. If the number of realizations is insufficient, the results are unreliable. Compared with Monte Carlo simulation, the accuracy of the first-order stochastic moment analysis is generally satisfactory when the variance and the correlation length of the saturated hydraulic conductivity are not too large. This study highlights the applicability of the proposed first-order stochastic moment analysis approach in the slope scenario. Full article
(This article belongs to the Special Issue Water-Related Geoenvironmental Issues)
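The convergence behaviour described, at least 10,000 realizations with unreliable results below that, reflects the Monte Carlo standard error, which shrinks only as 1/√N. A minimal sketch with hypothetical lognormal conductivity realizations:

```python
import numpy as np

def mc_standard_error(x):
    """Standard error of the Monte Carlo mean: s / sqrt(N).
    Halving the error bar requires four times as many realizations."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / np.sqrt(len(x))

rng = np.random.default_rng(42)
# Hypothetical lognormal saturated-hydraulic-conductivity realizations
realizations = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

few = mc_standard_error(realizations[:100])
many = mc_standard_error(realizations[:10_000])  # roughly 10x smaller error bar
```

The 1/√N decay is why "not enough realizations" translates directly into unreliable mean and variance estimates, and why the cheaper first-order moment approach is attractive when its accuracy conditions hold.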

18 pages, 511 KiB  
Article
Smoothing County-Level Sampling Variances to Improve Small Area Models’ Outputs
by Lu Chen, Luca Sartore, Habtamu Benecha, Valbona Bejleri and Balgobin Nandram
Stats 2022, 5(3), 898-915; https://doi.org/10.3390/stats5030052 - 11 Sep 2022
Cited by 1 | Viewed by 1868
Abstract
The use of hierarchical Bayesian small area models, which take survey estimates along with auxiliary data as input to produce official statistics, has increased in recent years. Survey estimates for small domains are usually unreliable due to small sample sizes, and the corresponding sampling variances can also be imprecise and unreliable. This affects the performance of the model (i.e., the model will not produce an estimate or will produce a low-quality modeled estimate), which results in a reduced number of official statistics published by a government agency. To mitigate the unreliable sampling variances, these survey-estimated variances are typically modeled against the direct estimates wherever a relationship between the two is present. However, such a relationship is not always present. This paper explores different alternatives for mitigating sampling variances that are unreliable beyond some threshold. A Bayesian approach under the area-level model setup and a distribution-free technique based on bootstrap sampling are proposed to update the survey data. An application to the county-level corn yield data from the County Agricultural Production Survey of the United States Department of Agriculture’s (USDA’s) National Agricultural Statistics Service (NASS) is used to illustrate the proposed approaches. The final county-level model-based estimates for small area domains, produced based on updated survey data from each method, are compared with county-level model-based estimates produced based on the original survey data and the official statistics published in 2016. Full article
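The idea of smoothing unreliable sampling variances against the direct estimates can be sketched as follows. The data, the log-log regression (a simple generalized-variance-function-style model), and the 3x threshold for flagging extreme variances are all illustrative assumptions, not the paper's actual Bayesian or bootstrap procedures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical county-level direct estimates and noisy sampling variances.
est = rng.uniform(50, 200, size=40)                  # direct survey estimates
true_var = 0.5 * est                                  # assume variance grows with level
var = true_var * rng.chisquare(df=4, size=40) / 4     # unreliable estimated variances

# Smooth by regressing log-variance on log-estimate, exploiting the
# relationship between the two mentioned in the abstract.
b, a = np.polyfit(np.log(est), np.log(var), 1)
fitted = np.exp(a + b * np.log(est))

# Replace only variances that deviate from the fit beyond a threshold
# (here, a factor of 3, chosen arbitrarily for illustration).
ratio = var / fitted
smoothed = np.where((ratio > 3) | (ratio < 1 / 3), fitted, var)
```

After smoothing, every variance lies within the threshold band around the fitted curve, so downstream area-level models no longer receive the extreme, unreliable values.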
(This article belongs to the Special Issue Small Area Estimation: Theories, Methods and Applications)

12 pages, 2513 KiB  
Article
Machine Learning Model Based on Radiomic Features for Differentiation between COVID-19 and Pneumonia on Chest X-ray
by Young Jae Kim
Sensors 2022, 22(17), 6709; https://doi.org/10.3390/s22176709 - 5 Sep 2022
Cited by 15 | Viewed by 3311
Abstract
Machine learning approaches are employed to analyze differences in real-time reverse transcription polymerase chain reaction scans to differentiate between COVID-19 and pneumonia. However, these methods suffer from large training data requirements, unreliable images, and uncertain clinical diagnosis. Thus, in this paper, we used a machine learning model to differentiate between COVID-19 and pneumonia via radiomic features using a bias-minimized dataset of chest X-ray scans. We used logistic regression (LR), naive Bayes (NB), support vector machine (SVM), k-nearest neighbor (KNN), bagging, random forest (RF), extreme gradient boosting (XGB), and light gradient boosting machine (LGBM) to differentiate between COVID-19 and pneumonia based on training data. Further, we used a grid search to determine optimal hyperparameters for each machine learning model and 5-fold cross-validation to prevent overfitting. The identification performances for COVID-19 and pneumonia were compared on separately constructed test data for four machine learning models trained using the maximum probability, contrast, and difference variance of the gray level co-occurrence matrix (GLCM), and the skewness, as input variables. The LGBM and bagging models showed the highest and lowest performances, respectively; the GLCM difference variance had a strong overall effect in all models. Thus, we confirmed that the radiomic features in chest X-rays can be used as indicators to differentiate between COVID-19 and pneumonia using machine learning. Full article
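The tuning protocol described in the abstract, a grid search over hyperparameters with 5-fold cross-validation, can be sketched with scikit-learn. The synthetic feature matrix below merely stands in for the four radiomic inputs named (GLCM maximum probability, contrast, difference variance, and skewness); the class labels, feature values, and logistic-regression grid are fabricated for illustration and are not the paper's data or models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)

# Synthetic stand-ins for four radiomic features per image.
n = 200
X = rng.normal(size=(n, 4))
# Labels driven mostly by feature 2, echoing the abstract's observation
# that GLCM difference variance had a strong effect (toy construction).
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Grid search over the regularization strength with 5-fold cross-validation.
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
    scoring="roc_auc",
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

The same `GridSearchCV` wrapper applies unchanged to any of the other seven classifiers listed in the abstract; only the estimator and its parameter grid need to be swapped.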
(This article belongs to the Section Sensing and Imaging)
