Search Results (79)

Search Parameters:
Keywords = Monte Carlo feature selection

29 pages, 5215 KiB  
Article
Supply Chain Cost Analysis for Interior Lighting Systems Based on Polymer Optical Fibres Compared to Optical Injection Moulding
by Jan Kallweit, Fabian Köntges and Thomas Gries
Textiles 2025, 5(3), 29; https://doi.org/10.3390/textiles5030029 - 24 Jul 2025
Viewed by 186
Abstract
Car interior design should evoke emotions, offer comfort, convey safety and at the same time project the brand identity of the car manufacturer. Lighting is used to address these functions. Modules required for automotive interior lighting often feature injection-moulded (IM) light guides, whereas woven fabrics with polymer optical fibres (POFs) offer certain technological advantages and have already appeared in first series applications in cars. In the future, car interior illumination will become even more important in the wake of megatrends such as autonomous driving. Since the growing deployment of these technologies creates a need for an economic comparison, this paper delivers a cost-driven comparison of the two approaches. The cost structures of the supply chains for an IM-based and a POF-based illumination module are analysed using an activity-based costing approach, with data collected via document analysis and guideline-based expert interviews. To account for data uncertainty, Monte Carlo simulations are conducted. POF-based lighting modules have lower initial costs, owing to continuous fibre production and weaving processes, but higher unit costs, caused by the discontinuous assembly of the rolled woven fabric, which in turn enables postponement strategies. The development costs of the mould generate high initial costs for IM light guides, making them beneficial only at high production quantities. For the selected scenario, the POF-based module's self-costs are 11.05 EUR/unit, whereas the IM module's self-costs are 14.19 EUR/unit. While the cost structures are relatively independent of the selected scenario, the actual self-costs depend strongly on boundary conditions such as production volume.
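The comparison rests on propagating uncertain activity-based cost inputs through the model by Monte Carlo simulation. A minimal Python sketch of that idea, with entirely hypothetical cost distributions and production volume (none of the paper's actual inputs are reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000          # number of Monte Carlo draws
volume = 50_000      # assumed production volume (hypothetical)

# Hypothetical triangular distributions for uncertain cost inputs (EUR)
im_tooling  = rng.triangular(180_000, 220_000, 280_000, N)  # mould development
im_unit     = rng.triangular(8.0, 9.5, 11.0, N)             # per-unit moulding cost
pof_tooling = rng.triangular(30_000, 40_000, 55_000, N)     # weaving setup
pof_unit    = rng.triangular(9.0, 10.5, 12.5, N)            # per-unit assembly cost

im_self_cost  = im_tooling / volume + im_unit
pof_self_cost = pof_tooling / volume + pof_unit

for name, cost in [("IM", im_self_cost), ("POF", pof_self_cost)]:
    lo, hi = np.percentile(cost, [5, 95])
    print(f"{name}: mean {cost.mean():.2f} EUR/unit, 90% interval [{lo:.2f}, {hi:.2f}]")
print(f"P(POF cheaper) = {(pof_self_cost < im_self_cost).mean():.2%}")
```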

27 pages, 2617 KiB  
Article
Monte Carlo Gradient Boosted Trees for Cancer Staging: A Machine Learning Approach
by Audrey Eley, Thu Thu Hlaing, Daniel Breininger, Zarindokht Helforoush and Nezamoddin N. Kachouie
Cancers 2025, 17(15), 2452; https://doi.org/10.3390/cancers17152452 - 24 Jul 2025
Viewed by 270
Abstract
Machine learning algorithms are commonly employed for the classification and interpretation of high-dimensional data. The classification task is often broken down into two separate procedures, with different methods applied to achieve accurate results and produce interpretable outcomes. First, an effective subset of the high-dimensional features must be extracted; the selected subset is then used to train a classifier. Gradient Boosted Trees (GBT) are ensemble models that, owing to their robustness, ability to model complex nonlinear interactions, and feature interpretability, are well suited for complex applications. XGBoost (eXtreme Gradient Boosting) is a high-performance implementation of GBT that incorporates regularization, parallel computation, and efficient tree pruning, making it an efficient, interpretable, and scalable classifier with potential applications to medical data analysis. In this study, a Monte Carlo Gradient Boosted Trees (MCGBT) model is proposed for both feature reduction and classification. The proposed MCGBT method was applied to a lung cancer dataset for feature identification and classification. The dataset contains 107 radiomic features, quantitative imaging biomarkers extracted from CT scans. A reduced set of 12 radiomic features was identified, and patients were classified into different cancer stages. A cancer staging accuracy of 90.3% across 100 independent runs was achieved, on par with that obtained using the full set of 107 features, enabling lean and deployable classifiers.
(This article belongs to the Section Cancer Informatics and Big Data)
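The abstract does not spell out the Monte Carlo procedure, but a plausible reading is repeated random train/test splits with feature importances accumulated across runs, after which a final classifier is trained on the recurrent top features. A hedged sketch on synthetic data, with sklearn's GradientBoostingClassifier standing in for XGBoost:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 107 radiomic features (hypothetical data)
X, y = make_classification(n_samples=400, n_features=107, n_informative=12,
                           random_state=0)

n_runs = 25                              # Monte Carlo repetitions
importances = np.zeros(X.shape[1])
for seed in range(n_runs):
    Xtr, _, ytr, _ = train_test_split(X, y, test_size=0.3, random_state=seed)
    gbt = GradientBoostingClassifier(random_state=seed).fit(Xtr, ytr)
    importances += gbt.feature_importances_

top12 = np.argsort(importances)[::-1][:12]     # reduced radiomic subset
clf = GradientBoostingClassifier(random_state=0).fit(X[:, top12], y)
print("selected feature indices:", sorted(top12.tolist()))
```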

21 pages, 2817 KiB  
Article
A Handheld IoT Vis/NIR Spectroscopic System to Assess the Soluble Solids Content of Wine Grapes
by Xu Zhang, Ziquan Qin, Ruijie Zhao, Zhuojun Xie and Xuebing Bai
Sensors 2025, 25(14), 4523; https://doi.org/10.3390/s25144523 - 21 Jul 2025
Viewed by 276
Abstract
The quality of wine largely depends on the quality of the wine grapes, which is determined by their chemical composition. Measuring parameters related to grape ripeness, such as soluble solids content (SSC), is therefore crucial for harvesting high-quality grapes. Visible–near-infrared (Vis/NIR) spectroscopy enables effective, non-destructive detection of SSC in grapes. However, commercial Vis/NIR spectrometers are often expensive, bulky, and power-hungry, making them unsuitable for on-site applications. This study integrates the AS7265X sensor into a low-cost handheld IoT multispectral detection device that collects 18 spectral variables in the 410–940 nm wavelength range. The data are sent in real time to a cloud platform, where they are backed up and visualized. After simultaneously removing the outliers detected by both Monte Carlo (MC) and principal component analysis (PCA) methods from the raw spectra, an SSC prediction model was established, yielding a validation R² (R²V) of 0.697. Eight preprocessing methods were compared, among which moving average smoothing (MAS) and Savitzky–Golay smoothing (SGS) improved the R²V to 0.756 and 0.766, respectively. Subsequently, feature wavelengths were selected using uninformative variable elimination (UVE) and the successive projections algorithm (SPA), reducing the number of variables from 18 to 5 and 6, respectively, and further increasing the R²V to 0.809 and 0.795. The results indicate that spectral data optimization methods are effective and essential for improving the performance of SSC prediction models. The IoT Vis/NIR spectroscopic system proposed in this study offers a miniaturized, low-cost, and practical solution for SSC detection in wine grapes.
(This article belongs to the Section Chemical Sensors)
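A sketch of the consensus outlier screen described above: samples are flagged only when both a Monte Carlo resampling check (high held-out residuals across random splits) and a PCA-based Hotelling T² check agree. The thresholds and the linear calibration model are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, p = 120, 18                      # samples x spectral variables (AS7265X bands)
X = rng.normal(size=(n, p))
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.2, size=n)
X[:5] += 4                          # inject a few synthetic outliers

# Monte Carlo: track each sample's mean held-out residual over random splits
err, cnt = np.zeros(n), np.zeros(n)
for _ in range(500):
    idx = rng.permutation(n); tr, te = idx[:80], idx[80:]
    m = LinearRegression().fit(X[tr], y[tr])
    err[te] += np.abs(m.predict(X[te]) - y[te]); cnt[te] += 1
mc_flag = (err / cnt) > np.percentile(err / cnt, 90)

# PCA: flag samples with extreme Hotelling T^2 scores
scores = PCA(n_components=5).fit_transform(X)
t2 = (scores**2 / scores.var(axis=0)).sum(axis=1)
pca_flag = t2 > np.percentile(t2, 90)

keep = ~(mc_flag & pca_flag)        # drop only consensus outliers
print("removed samples:", np.where(~keep)[0])
```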

21 pages, 1057 KiB  
Article
Hybrid Sensor Placement Framework Using Criterion-Guided Candidate Selection and Optimization
by Se-Hee Kim, JungHyun Kyung, Jae-Hyoung An and Hee-Chang Eun
Sensors 2025, 25(14), 4513; https://doi.org/10.3390/s25144513 - 21 Jul 2025
Viewed by 210
Abstract
This study presents a hybrid sensor placement methodology that combines criterion-based candidate selection with advanced optimization algorithms. Four established selection criteria—modal kinetic energy (MKE), modal strain energy (MSE), modal assurance criterion (MAC) sensitivity, and mutual information (MI)—are used to evaluate DOF sensitivity and generate candidate pools. One of four optimization algorithms—greedy search, genetic algorithm (GA), particle swarm optimization (PSO), or simulated annealing (SA)—then identifies the optimal subset of sensor locations. A key feature of the proposed approach is the incorporation of constraint dynamics via the Udwadia–Kalaba (U–K) generalized inverse formulation, which enables the accurate expansion of structural responses from sparse sensor data. Unlike conventional Moore–Penrose pseudo-inverses, which yield purely algebraic solutions without physical insight, the U–K method incorporates physical constraints derived from partial mode shapes directly into the reconstruction process. The resulting constrained dynamic solution reflects known structural behaviour, preserves dynamic compatibility, and improves numerical conditioning, particularly in underdetermined or ill-posed cases, thereby reducing artifacts caused by sparse measurements or noise and improving stability and interpretability over unconstrained least-squares solutions in practical SHM scenarios. The framework assumes a noise-free environment during the initial sensor design phase, but robustness is verified through extensive Monte Carlo simulations under multiple noise levels in a numerical experiment. This combined methodology offers an effective and flexible solution for data-driven sensor deployment in structural health monitoring.
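A minimal sketch of the two-stage idea: a modal-kinetic-energy ranking narrows the candidate pool, then a greedy search (the simplest of the four optimizers) grows the sensor set by maximizing the determinant of the Fisher information matrix. The mode shapes, pool size, and objective are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_modes, n_sensors = 60, 6, 8
Phi = rng.normal(size=(n_dof, n_modes))      # mode shape matrix (synthetic)
mass = np.ones(n_dof)                        # lumped unit masses (assumption)

# Stage 1: modal-kinetic-energy ranking narrows the candidate pool
mke = (mass[:, None] * Phi**2).sum(axis=1)
pool = list(np.argsort(mke)[::-1][:20])

# Stage 2: greedy growth maximizing det of the Fisher information matrix
chosen = []
for _ in range(n_sensors):
    best = max(pool, key=lambda d: np.linalg.det(
        Phi[chosen + [d]].T @ Phi[chosen + [d]] + 1e-9 * np.eye(n_modes)))
    chosen.append(best)
    pool.remove(best)
print("sensor DOFs:", sorted(chosen))
```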

14 pages, 555 KiB  
Article
A Novel Hyper-Heuristic Algorithm for Bayesian Network Structure Learning Based on Feature Selection
by Yinglong Dang, Xiaoguang Gao and Zidong Wang
Axioms 2025, 14(7), 538; https://doi.org/10.3390/axioms14070538 - 17 Jul 2025
Viewed by 220
Abstract
Bayesian networks (BNs) are effective and universal tools for representing uncertain knowledge. BN learning comprises structure learning and parameter learning, with structure learning at its core. The topology of a BN can be determined from expert domain knowledge or obtained through data analysis. However, when a BN contains many variables, relying on expert knowledge alone is infeasible, so current research focuses on building BNs from data. Since purely data-driven learning methods also have limitations, this work combines expert knowledge with data learning. In our algorithm, hard constraints are derived from highly reliable expert knowledge, and conditional independence information mined by feature selection serves as a soft constraint. These structural constraints are integrated into an exponential Monte Carlo with counter (EMCQ) hyper-heuristic algorithm. A comprehensive experimental study demonstrates that the proposed method is more robust and accurate than alternative algorithms.
(This article belongs to the Special Issue Advances in Mathematical Optimization Algorithms and Its Applications)
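For readers unfamiliar with EMCQ, the sketch below shows its acceptance rule inside a generic local search: improving moves are always accepted, while worsening moves are accepted with a probability that decays with both the fitness degradation δ and a counter q of consecutive non-improving iterations. The exact schedule varies across published EMCQ variants; exp(−δ·q) here is one common formulation, not necessarily the paper's:

```python
import math, random

def emcq_search(fitness, neighbors, x0, iters=2000, seed=0):
    """Local search with Exponential Monte Carlo with counter (EMCQ)
    acceptance: worsening moves accepted with probability exp(-delta * q),
    where q counts consecutive non-improving iterations (assumed variant)."""
    rng = random.Random(seed)
    x, fx, q = x0, fitness(x0), 1
    best, fbest = x, fx
    for _ in range(iters):
        cand = rng.choice(neighbors(x))
        fc = fitness(cand)
        delta = fc - fx                      # minimization: positive = worse
        if delta <= 0 or rng.random() < math.exp(-delta * q):
            x, fx = cand, fc
        q = 1 if delta < 0 else q + 1        # reset counter on improvement
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# Toy usage: minimize a 1-D integer function
f = lambda x: (x - 7) ** 2
nbrs = lambda x: [x - 1, x + 1]
print(emcq_search(f, nbrs, x0=50))
```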

32 pages, 8958 KiB  
Article
A Monte Carlo Simulation Framework for Evaluating the Robustness and Applicability of Settlement Prediction Models in High-Speed Railway Soft Foundations
by Zhenyu Liu, Liyang Wang, Taifeng Li, Huiqin Guo, Feng Chen, Youming Zhao, Qianli Zhang and Tengfei Wang
Symmetry 2025, 17(7), 1113; https://doi.org/10.3390/sym17071113 - 10 Jul 2025
Viewed by 203
Abstract
Accurate settlement prediction for high-speed railway (HSR) soft foundations remains challenging due to the irregular and dynamic nature of real-world monitoring data, often represented as non-equidistant and non-stationary time series (NENSTS). Existing empirical models lack clear applicability criteria under such conditions, resulting in subjective model selection. This study introduces a Monte Carlo-based evaluation framework that integrates data-driven simulation with geotechnical principles, embedding the concept of symmetry across both the modeling and assessment stages. Equivalent permeability coefficients (EPCs) are used to normalize soil consolidation behavior, enabling the generation of a large, statistically robust dataset. Four empirical settlement prediction models—Hyperbolic, Exponential, Asaoka, and Hoshino—are systematically analyzed for sensitivity to temporal features and resistance to stochastic noise. A symmetry-aware comprehensive evaluation index (CEI), constructed via a robust entropy weight method (REWM), balances multiple performance metrics to ensure objective comparison. Results reveal that while settlement behavior evolves asymmetrically with respect to EPCs over time, a symmetrical structure emerges in model suitability across distinct EPC intervals: the Asaoka method performs best under low-permeability conditions (EPC ≤ 0.03 m/d), Hoshino excels in intermediate ranges (0.03 < EPC ≤ 0.7 m/d), and the Exponential model dominates in highly permeable soils (EPC > 0.7 m/d). This framework not only quantifies model robustness under complex data conditions but also formalizes the notion of symmetrical applicability, offering a structured path toward intelligent, adaptive settlement prediction in HSR subgrade engineering.
(This article belongs to the Section Engineering and Materials)
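The entropy weight method behind the CEI is standard: metrics that spread the models apart (low entropy) receive larger weights, and the CEI is the weighted sum of normalized metrics. A sketch with a hypothetical metric matrix (the paper's robust variant, REWM, adds safeguards not shown here):

```python
import numpy as np

def entropy_weights(M):
    """Entropy weight method: rows = alternatives, columns = metrics
    (oriented so higher is better). Returns one weight per metric."""
    Z = (M - M.min(axis=0)) / (np.ptp(M, axis=0) + 1e-12)   # min-max normalize
    P = (Z + 1e-12) / (Z + 1e-12).sum(axis=0)               # column proportions
    e = -(P * np.log(P)).sum(axis=0) / np.log(M.shape[0])   # entropy per metric
    d = 1.0 - e                                             # divergence degree
    return d / d.sum()

# Hypothetical metric matrix: 4 prediction models x 3 performance metrics
M = np.array([[0.91, 0.80, 0.75],
              [0.88, 0.85, 0.70],
              [0.93, 0.78, 0.82],
              [0.90, 0.83, 0.79]])
w = entropy_weights(M)
Z = (M - M.min(axis=0)) / (np.ptp(M, axis=0) + 1e-12)
cei = Z @ w                       # comprehensive evaluation index per model
print("weights:", w.round(3), "| CEI:", cei.round(3))
```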

30 pages, 5294 KiB  
Article
Non-Invasive Bioelectrical Characterization of Strawberry Peduncles for Post-Harvest Physiological Maturity Classification
by Jonnel Alejandrino, Ronnie Concepcion, Elmer Dadios, Ryan Rhay Vicerra, Argel Bandala, Edwin Sybingco, Laurence Gan Lim and Raouf Naguib
AgriEngineering 2025, 7(7), 223; https://doi.org/10.3390/agriengineering7070223 - 8 Jul 2025
Viewed by 309
Abstract
Strawberry post-harvest losses are estimated at 50%, due to improper handling and harvest timing, necessitating non-invasive assessment methods. This study develops a non-invasive in situ bioelectrical impedance spectroscopy method for strawberry peduncles. Based on traditional assessments and invasive metrics, 100 physiologically ripe (PR) and 100 commercially mature (CM) strawberries were distinguished. Spectra from their peduncles were measured from 1 kHz to 1 MHz, collecting four parameters (magnitude Z(f), phase angle θ(f), resistance R(f), and reactance X(f)) and yielding 80,000 raw data points. After systematic spectral preprocessing, Bode and Cole–Cole plots revealed a clear distinction between PR and CM strawberries. Frequency selection identified seven key frequencies (1, 5, 50, 75, 100, 250, and 500 kHz) for deriving 37 engineered features from spectral, extrema, and derivative parameters. Feature selection reduced these to six parameters: phase angle at 50 kHz (θ(50 kHz)), relaxation time (τ), impedance ratio (|Z1k/Z250k|), dispersion coefficient (α), membrane capacitance (Cm), and intracellular resistivity (ρi). Four algorithms (TabPFN, CatBoost, GPC, and EBM) were evaluated with five-iteration Monte Carlo cross-validation, ensuring robust assessment. CatBoost achieved the highest accuracy at 93.3% ± 2.4%. Invasive reference metrics showed strong correlations with the bioelectrical parameters (r = 0.74 for firmness, r = −0.71 for soluble solids). These results demonstrate a solution for precise harvest classification, reducing post-harvest losses without compromising marketability.
(This article belongs to the Section Pre and Post-Harvest Engineering in Agriculture)
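Monte Carlo cross-validation here just means repeated random train/test splits rather than fixed folds. A compact sketch using sklearn's ShuffleSplit, with a gradient-boosting stand-in for CatBoost and synthetic data in place of the six bioelectrical features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import ShuffleSplit, cross_val_score

# Stand-in for the 200-berry, 6-feature dataset (hypothetical data;
# sklearn's gradient boosting replaces CatBoost to avoid a dependency)
X, y = make_classification(n_samples=200, n_features=6, n_informative=4,
                           random_state=0)

mccv = ShuffleSplit(n_splits=5, test_size=0.3, random_state=0)  # 5 MC iterations
scores = cross_val_score(HistGradientBoostingClassifier(random_state=0),
                         X, y, cv=mccv)
print(f"accuracy: {scores.mean():.1%} ± {scores.std():.1%}")
```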

39 pages, 8177 KiB  
Article
Unveiling Epigenetic Regulatory Elements Associated with Breast Cancer Development
by Marta Jardanowska-Kotuniak, Michał Dramiński, Michal Wlasnowolski, Marcin Łapiński, Kaustav Sengupta, Abhishek Agarwal, Adam Filip, Nimisha Ghosh, Vera Pancaldi, Marcin Grynberg, Indrajit Saha, Dariusz Plewczynski and Michał J. Dąbrowski
Int. J. Mol. Sci. 2025, 26(14), 6558; https://doi.org/10.3390/ijms26146558 - 8 Jul 2025
Viewed by 591
Abstract
Breast cancer affects over 2 million women annually and results in 650,000 deaths. This study aimed to identify epigenetic mechanisms impacting breast cancer-related gene expression, discover potential biomarkers, and present a novel approach integrating feature selection, Natural Language Processing, and 3D chromatin structure analysis. Using The Cancer Genome Atlas database, with over 800 samples and multi-omics datasets (mRNA, miRNA, DNA methylation), we selected 2701 features statistically significant in cancer versus control samples, from an initial 417,486, using the Monte Carlo Feature Selection and Interdependency Discovery (MCFS-ID) algorithm. Classification of cancer vs. control samples on the selected features returned very high accuracy, varying with feature type and classifier. The cancer samples generally showed lower expression of differentially expressed genes (DEGs) and increased β-values of differentially methylated sites (DMSs). We identified mRNAs whose expression is explained by miRNA expression and the β-values of DMSs. We recognized DMSs affecting the binding of the NRF1 and MXI1 transcription factors, disturbing NKAPL and PITX1 expression, respectively. Our 3D models showed more loosely packed chromatin in cancer. This study highlights numerous possible regulatory dependencies, and the presented bioinformatic approach provides a robust framework for data dimensionality reduction, enabling the identification of key features for further experimental validation.
(This article belongs to the Section Molecular Oncology)
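The MCFS-ID algorithm, in simplified form, trains many decision trees on random feature subsets and accumulates a relative importance score weighted by each tree's predictive quality; the interdependency-discovery step is omitted in this hedged sketch:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Small synthetic stand-in for the 417,486-feature multi-omics matrix
X, y = make_classification(n_samples=300, n_features=500, n_informative=10,
                           random_state=0)

m = 50                                   # features drawn per random subset
ri = np.zeros(X.shape[1])                # accumulated relative importance
for seed in range(300):                  # subsets x trees, flattened
    cols = rng.choice(X.shape[1], size=m, replace=False)
    Xtr, Xte, ytr, yte = train_test_split(X[:, cols], y, test_size=0.3,
                                          random_state=seed)
    tree = DecisionTreeClassifier(random_state=seed).fit(Xtr, ytr)
    # Weight each feature's importance by the tree's out-of-sample accuracy
    ri[cols] += tree.score(Xte, yte) * tree.feature_importances_

selected = np.argsort(ri)[::-1][:20]
print("top features:", sorted(selected.tolist()))
```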

14 pages, 1816 KiB  
Article
On Optimally Selecting Candidate Detectors with High Predicted Radio Signals from Energetic Cosmic Ray-Induced Extensive Air Showers
by Tudor Alexandru Calafeteanu, Paula Gina Isar and Emil Ioan Slușanschi
Universe 2025, 11(6), 192; https://doi.org/10.3390/universe11060192 - 18 Jun 2025
Viewed by 238
Abstract
Monte Carlo simulations of extensive air showers (EASs) induced by ultra-high-energy cosmic rays are widely compared with events measured at experiments to estimate the main cosmic ray characteristics, such as mass, energy, and arrival direction. However, these simulations are computationally expensive, with running time scaling proportionally with the number of radio antennas included. The AugerPrime upgrade of the Pierre Auger Observatory will feature an array of 1660 radio antennas; as a result, simulating a single EAS using the full detector array would take weeks on a single CPU thread. To reduce the simulation time, detectors are commonly pre-selected based on their proximity to the shower core, using a selection ellipse derived from the Cherenkov radiation footprint scaled by a fixed constant factor. While effective, this approach often includes many noisy antennas at high zenith angles, reducing computational efficiency. In this paper, we introduce an optimal method for selecting candidate detectors with a high predicted signal-to-noise ratio for proton and iron primary cosmic rays, replacing the constant scaling factor with a function of the zenith angle. This approach significantly reduces simulation time—by more than 50% per CPU thread for the heaviest, most inclined showers—without compromising signal quality.
(This article belongs to the Special Issue Ultra-High-Energy Cosmic Rays)
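The geometric core of the method is simple: keep only antennas inside the footprint ellipse, with the scale factor a function of zenith angle rather than a constant. The quadratic form of f(θ) below is a hypothetical stand-in for the paper's fitted function:

```python
import numpy as np

def select_antennas(positions, core, semi_axes, azimuth, zenith_deg):
    """Keep antennas inside the Cherenkov-footprint ellipse, scaled by a
    zenith-dependent factor f(theta) instead of a fixed constant.
    The quadratic form of f below is an assumed, illustrative law."""
    f = 1.2 + 0.8 * (zenith_deg / 90.0) ** 2         # hypothetical scaling
    a, b = semi_axes[0] * f, semi_axes[1] * f
    c, s = np.cos(azimuth), np.sin(azimuth)
    d = positions - core                             # shift to shower core
    u = d[:, 0] * c + d[:, 1] * s                    # rotate into shower frame
    v = -d[:, 0] * s + d[:, 1] * c
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

rng = np.random.default_rng(3)
antennas = rng.uniform(-2000, 2000, size=(1660, 2))  # AugerPrime-sized array
mask = select_antennas(antennas, core=np.zeros(2), semi_axes=(300, 200),
                       azimuth=0.4, zenith_deg=65)
print(f"simulating {mask.sum()} of {len(antennas)} antennas")
```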

19 pages, 1617 KiB  
Article
A Short-Term Risk Prediction Method Based on In-Vehicle Perception Data
by Xinpeng Yao, Nengchao Lyu and Mengfei Liu
Sensors 2025, 25(10), 3213; https://doi.org/10.3390/s25103213 - 20 May 2025
Viewed by 367
Abstract
Advanced driving assistance systems (ADASs) provide rich data on vehicles and their surroundings, enabling early detection and warning of driving risks. This study proposes a short-term risk prediction method based on in-vehicle perception data, aiming to support real-time risk identification in ADAS environments. A variable sliding window approach is employed to determine the optimal prediction lead time and window duration. The method incorporates Monte Carlo simulation for threshold calibration, Boruta-based feature selection, and multiple machine learning models, including the light gradient-boosting machine (LGBM), with performance interpreted via SHAP analysis. Validation is conducted on data from 90 real-world driving sessions. Results show that the optimal prediction lead time and window length are 1.6 s and 1.2 s, respectively, with LGBM achieving the best predictive performance. Risk prediction is more effective when information is integrated across the human–vehicle–road environment system. Key features influencing prediction include vehicle speed, accelerator operation, braking deceleration, and the reciprocal of time to collision (TTCi). The proposed approach provides an effective solution for short-term risk prediction and offers algorithmic support for future ADAS applications.
(This article belongs to the Special Issue Intelligent Traffic Safety and Security)
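A sketch of the sliding-window layout: features are aggregated over a window of the reported optimal length (1.2 s) ending one lead time (1.6 s) before the labeled risk event. The signal names, sampling rate, and feature set are illustrative assumptions:

```python
import numpy as np

def window_features(signals, t_end, lead=1.6, length=1.2, hz=10):
    """Aggregate in-vehicle perception signals over a window that ends
    `lead` seconds before the risk label at t_end (hypothetical layout:
    columns = [speed, acceleration, TTCi] sampled at `hz`)."""
    stop = int((t_end - lead) * hz)
    start = stop - int(length * hz)
    w = signals[start:stop]
    return {"speed_mean": w[:, 0].mean(), "speed_std": w[:, 0].std(),
            "accel_max": w[:, 1].max(), "ttci_max": w[:, 2].max()}

rng = np.random.default_rng(0)
session = rng.normal(size=(600, 3))      # 60 s of 10 Hz [speed, accel, TTCi]
print(window_features(session, t_end=30.0))
```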

15 pages, 2023 KiB  
Article
Improved Prediction Accuracy for Late-Onset Preeclampsia Using cfRNA Profiles: A Comparative Study of Marker Selection Strategies
by Akiha Nakano, Kohei Uno and Yusuke Matsui
Healthcare 2025, 13(10), 1162; https://doi.org/10.3390/healthcare13101162 - 16 May 2025
Viewed by 490
Abstract
Background: Late-onset pre-eclampsia (LO-PE) remains difficult to predict because placental angiogenic markers perform poorly once maternal cardiometabolic factors dominate. Methods: We reanalyzed a publicly available cell-free RNA (cfRNA) cohort (12 EO-PE, 12 LO-PE, and 24 matched controls). After RNA-seq normalization, we derived LO-PE candidate genes using (i) differential expression and (ii) elastic-net feature selection. Predictive accuracy was assessed with nested Monte Carlo cross-validation (10 × 70/30 outer splits; 5-fold inner grid search for λ). Results: The best LO-PE elastic-net model achieved a mean ± SD AUROC of 0.88 ± 0.08 and an F1 of 0.73 ± 0.17—substantially higher than an EO-derived baseline applied to the same samples (AUROC ≈ 0.69). Enrichment analysis highlighted immune-tolerance and metabolic pathways; three genes (HLA-G, IL17RB, and KLRC4) recurred across >50% of cross-validation repeats. Conclusions: Plasma cfRNA signatures can outperform existing EO-based screens for LO-PE and nominate biologically plausible markers of immune and metabolic dysregulation. Because the present dataset is small (n = 48) and underpowered for single-gene claims, external validation in larger, multicenter cohorts is essential before clinical translation.
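The nested Monte Carlo cross-validation scheme maps directly onto sklearn primitives: ShuffleSplit for the 10 × 70/30 outer splits and a 5-fold inner grid search over the elastic-net regularization strength (C ≈ 1/λ). A sketch on synthetic data of roughly the stated cohort size:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, ShuffleSplit

# Synthetic stand-in for the 48-sample cfRNA cohort (hypothetical data)
X, y = make_classification(n_samples=48, n_features=500, n_informative=15,
                           random_state=0)

outer = ShuffleSplit(n_splits=10, test_size=0.3, random_state=0)  # 10 x 70/30
aucs = []
for tr, te in outer.split(X):
    inner = GridSearchCV(
        LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, max_iter=5000),
        {"C": [0.01, 0.1, 1.0]},          # inner 5-fold grid over C ~ 1/lambda
        cv=5, scoring="roc_auc")
    inner.fit(X[tr], y[tr])
    aucs.append(roc_auc_score(y[te], inner.predict_proba(X[te])[:, 1]))
print(f"AUROC: {np.mean(aucs):.2f} ± {np.std(aucs):.2f}")
```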

29 pages, 752 KiB  
Article
A Lightweight Intrusion Detection System for Internet of Things: Clustering and Monte Carlo Cross-Entropy Approach
by Abdulmohsen Almalawi
Sensors 2025, 25(7), 2235; https://doi.org/10.3390/s25072235 - 2 Apr 2025
Viewed by 1082
Abstract
Our modern lives are increasingly shaped by the Internet of Things (IoT), as IoT devices monitor and manage everything from our homes to our workplaces, becoming an essential part of health systems and daily infrastructure. However, this rapid growth in IoT has introduced significant security challenges, leading to increased vulnerability to cyber attacks. To address these challenges, machine learning-based intrusion detection systems (IDSs)—traditionally considered a primary line of defense—have been deployed to monitor and detect malicious activities in IoT networks. Despite this, such IDS solutions often struggle with the inherent resource constraints of IoT devices, including limited computational power and memory. To overcome these limitations, we propose an approach to enhance intrusion detection efficiency. First, we introduce a recursive clustering method for data condensation, integrating compactness and entropy-driven sampling to select a highly representative subset from the larger dataset. Second, we adopt a Monte Carlo Cross-Entropy approach combined with a feature stability metric to consistently select the most stable and relevant features, resulting in a lightweight, efficient, and high-accuracy IoT-based IDS. Evaluation of the proposed approach on three IoT datasets collected from real devices (N-BaIoT, Edge-IIoTset, CICIoT2023) demonstrates comparable classification accuracy while reducing training and testing times by 45× and 15×, respectively, and lowering memory usage by 18×, compared to competitor approaches.
(This article belongs to the Section Internet of Things)
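A simplified view of stability-oriented feature selection: score features on many Monte Carlo subsamples, record how often each lands in the top k, and keep those selected consistently. Mutual information stands in here for the paper's cross-entropy scoring, and the thresholds are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=40, n_informative=8,
                           random_state=0)

k, runs = 10, 50
hits = np.zeros(X.shape[1])
for seed in range(runs):
    idx = rng.choice(len(y), size=400, replace=False)       # MC subsample
    mi = mutual_info_classif(X[idx], y[idx], random_state=seed)
    hits[np.argsort(mi)[::-1][:k]] += 1                     # top-k membership

stability = hits / runs
stable = np.where(stability >= 0.8)[0]     # keep consistently selected features
print("stable features:", stable, stability[stable].round(2))
```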

14 pages, 17234 KiB  
Article
A Grid-Based Long Short-Term Memory Framework for Runoff Projection and Uncertainty in the Yellow River Source Area Under CMIP6 Climate Change
by Haibo Chu, Yulin Jiang and Zhuoqi Wang
Water 2025, 17(5), 750; https://doi.org/10.3390/w17050750 - 4 Mar 2025
Cited by 1 | Viewed by 905
Abstract
Long-term runoff projection and uncertainty estimates can provide both the changing trends and confidence intervals of water resources, supplying basic information for decision makers and reducing risks in water resource management. In this paper, a grid-based runoff projection and uncertainty framework is proposed, combining input selection and long short-term memory (LSTM) modelling with uncertainty analysis. Dynamic and static variables were considered simultaneously in the candidate input combinations, and different combinations were compared. We employed LSTM to model the relationship between monthly runoff and the selected variables and demonstrated the improvement in forecast accuracy through comparison with MLR, RBFNN, and RNN models. The LSTM model achieved the highest mean Kling–Gupta Efficiency (KGE) score of 0.80, representing improvements of 45.45%, 33.33%, and 2.56% over the other three models, respectively. Uncertainty originating in the parameters of the LSTM models was considered, with a Monte Carlo approach providing the uncertainty estimates. The framework was applied to the Yellow River Source Area (YRSR) at the 0.25° grid scale to better capture temporal and spatial features. The results showed that extra information from static variables can improve the accuracy of runoff projections. Annual runoff tended to increase, with projection ranges of 148.44–296.16 mm at the 95% confidence level under various climate scenarios.
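The paper samples LSTM parameters to obtain uncertainty estimates; a commonly used approximation with the same flavour is Monte Carlo dropout, shown below as a stand-in (untrained toy model, hypothetical input layout):

```python
import torch
import torch.nn as nn

class RunoffLSTM(nn.Module):
    """Small LSTM with dropout kept active at inference (MC dropout),
    a stand-in for the paper's parameter-uncertainty sampling."""
    def __init__(self, n_in, hidden=32, p=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_in, hidden, batch_first=True)
        self.drop = nn.Dropout(p)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))   # last time step -> runoff

torch.manual_seed(0)
model = RunoffLSTM(n_in=6)                   # dynamic + static input variables
model.train()                                # keep dropout stochastic
x = torch.randn(1, 12, 6)                    # 12 months of 6 gridded inputs
with torch.no_grad():
    draws = torch.cat([model(x) for _ in range(200)])
lo, hi = draws.quantile(0.025), draws.quantile(0.975)
print(f"runoff 95% interval: [{lo:.2f}, {hi:.2f}] (untrained demo)")
```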

12 pages, 6129 KiB  
Article
Effect of OSEM Reconstruction Iteration Number and Monte Carlo Collimator Modeling on 166Ho Activity Quantification in SPECT/CT
by Rita Albergueiro, Vera Antunes and João Santos
Appl. Sci. 2025, 15(3), 1589; https://doi.org/10.3390/app15031589 - 5 Feb 2025
Cited by 1 | Viewed by 1100
Abstract
Background: Accurate reconstruction and quantification in post-therapy SPECT/CT imaging of 166Ho microspheres for hepatic malignancies is crucial for treatment evaluation. This study explored the impact of the OSEM reconstruction parameters on SPECT/CT image features for dose distribution determination, using Hybrid Recon™ (Hermes Medical Solutions AB) and full Monte Carlo (MC) collimator modeling. Methods: Image quality and activity quantification were assessed through two acquisitions of a Jaszczak phantom on a Siemens Symbia Intevo Bold SPECT/CT system. The datasets were reconstructed using the OSEM method, varying the number of iterations for 15 and 8 subsets, both with and without full MC collimator modeling. The contrast recovery coefficient (QH), coefficient of variation (CV), contrast-to-noise ratio (CNR), calibration factor (CF), and activity recovery coefficient (ARC) were calculated to evaluate image quality and activity quantification. Results: Reconstructions with 5 iterations and 15 subsets, as well as 10 iterations and 8 subsets, were selected as the most suitable for 166Ho imaging, as they provided the highest QH and ARCs. Incorporating full MC collimator modeling in both reconstructions led to significant improvements in image quality and activity recovery. The CFs remained consistent for fixed values of 15 and 8 subsets, at (14.9 ± 0.5) cps/MBq and (14.6 ± 0.5) cps/MBq, respectively. However, with full collimator modeling, the CF values decreased to between 10.9 and 12.1 cps/MBq. Conclusions: For 166Ho SPECT/CT imaging, OSEM (with either 5 iterations and 15 subsets or 10 iterations and 8 subsets) combined with full MC collimator modeling yielded superior image quality and quantification results.
(This article belongs to the Special Issue Bioinformatics in Healthcare to Prevent Cancer and Children Obesity)
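The reported image-quality metrics reduce to simple ROI statistics. A sketch of plausible definitions on synthetic counts; the paper's exact formulas (e.g., background handling in QH) may differ:

```python
import numpy as np

def quality_metrics(hot_roi, bkg_roi, true_ratio):
    """Phantom-based metrics commonly used for SPECT reconstructions:
    contrast recovery (QH), coefficient of variation (CV), and CNR.
    Definitions here are assumed, not taken verbatim from the paper."""
    Ch, Cb = hot_roi.mean(), bkg_roi.mean()
    qh = 100 * (Ch / Cb - 1) / (true_ratio - 1)     # % contrast recovery
    cv = 100 * bkg_roi.std() / Cb                   # % background noise
    cnr = (Ch - Cb) / bkg_roi.std()
    return qh, cv, cnr

rng = np.random.default_rng(0)
hot = rng.normal(80, 8, size=500)     # counts in a hot-sphere ROI (synthetic)
bkg = rng.normal(10, 2, size=5000)    # counts in the background ROI
print("QH=%.1f%% CV=%.1f%% CNR=%.1f" % quality_metrics(hot, bkg, true_ratio=8))
```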

14 pages, 1748 KiB  
Article
Harnessing Halogenated Zeolitic Imidazolate Frameworks for Alcohol Vapor Adsorption
by Kevin Dedecker, Martin Drobek and Anne Julbe
Molecules 2024, 29(24), 5825; https://doi.org/10.3390/molecules29245825 - 10 Dec 2024
Cited by 1 | Viewed by 1048
Abstract
This study explores zeolitic imidazolate frameworks (ZIFs) as promising materials for adsorbing alcohol vapors, one of the main contributors to air quality deterioration and adverse health effects. This sub-class of metal–organic frameworks (MOFs) offers a promising alternative to conventional adsorbents such as zeolites and activated carbons for air purification. Specifically, the investigation focuses on ZIF-8_Br, a brominated analog of ZIF-8_CH3, and evaluates its ability to capture aliphatic alcohols at low partial pressures. The adsorption properties were investigated using both experimental and computational methods, combining Density Functional Theory and Grand Canonical Monte Carlo simulations. The Ideal Adsorbed Solution Theory (IAST) was used to assess the material's selectivity in the presence of binary equimolar alcohol mixtures. Compared to ZIF-8_CH3, the brominated analog shows a higher affinity for alcohols, a property that could be exploited in environmental remediation or in membranes for alcohol vapor sensors.
(This article belongs to the Special Issue Porous Organic Materials: Design and Applications: Volume II)
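Binary IAST selectivity can be computed directly from single-component isotherm fits. A sketch for Langmuir isotherms, solving for the adsorbed-phase composition that equates the spreading pressures; all parameter values are hypothetical, not fitted ZIF-8 data:

```python
import numpy as np
from scipy.optimize import brentq

def iast_selectivity(qsat, b, p_total=1.0, y1=0.5):
    """Binary IAST selectivity S_1/2 from single-component Langmuir fits
    q(p) = qsat*b*p/(1+b*p); reduced spreading pressure
    pi(p) = qsat*ln(1+b*p). Equimolar gas phase by default."""
    p1, p2 = y1 * p_total, (1 - y1) * p_total
    pi = lambda i, p: qsat[i] * np.log1p(b[i] * p)
    # Solve for adsorbed-phase fraction x1 equating spreading pressures
    g = lambda x1: pi(0, p1 / x1) - pi(1, p2 / (1 - x1))
    x1 = brentq(g, 1e-9, 1 - 1e-9)
    return (x1 / (1 - x1)) / (y1 / (1 - y1))

qsat = np.array([5.0, 5.5])   # mmol/g saturation loadings (hypothetical)
b    = np.array([2.0, 8.0])   # 1/bar affinity constants (hypothetical)
S12 = iast_selectivity(qsat, b)
print(f"selectivity of component 1 over 2: {S12:.2f}")   # <1: comp. 2 preferred
```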
