Search Results (1,144)

Search Parameters:
Keywords = stand-alone model

30 pages, 651 KB  
Article
A Fusion of Statistical and Machine Learning Methods: GARCH-XGBoost for Improved Volatility Modelling of the JSE Top40 Index
by Israel Maingo, Thakhani Ravele and Caston Sigauke
Int. J. Financial Stud. 2025, 13(3), 155; https://doi.org/10.3390/ijfs13030155 - 25 Aug 2025
Abstract
Volatility modelling is a key feature of financial risk management, portfolio optimisation, and forecasting, particularly for market indices such as the JSE Top40 Index, which serves as a benchmark for the South African stock market. This study investigates volatility modelling of the JSE Top40 Index log-returns from 2011 to 2025 using a hybrid approach that integrates statistical and machine learning techniques in two steps. The ARMA(3,2) model was chosen as the optimal mean model, using the auto.arima() function from the forecast package in R (version 4.4.0). Several alternative variants of GARCH models, including sGARCH(1,1), GJR-GARCH(1,1), and EGARCH(1,1), were fitted under various conditional error distributions (i.e., STD, SSTD, GED, SGED, and GHD). Model selection was based on the AIC, BIC, HQIC, and LL criteria, under which ARMA(3,2)-EGARCH(1,1) performed best. Residual diagnostic results indicated that the model adequately captured autocorrelation, conditional heteroskedasticity, and asymmetry in JSE Top40 log-returns. Volatility persistence was also detected, consistent with the well-known persistence of financial volatility. Thereafter, the ARMA(3,2)-EGARCH(1,1) model was coupled with XGBoost using standardised residuals extracted from ARMA(3,2)-EGARCH(1,1) as lagged features. The data were split into training (60%), testing (20%), and calibration (20%) sets. Based on the lowest values of forecast accuracy measures (i.e., MASE, RMSE, MAE, MAPE, and sMAPE), along with prediction intervals and their evaluation metrics (i.e., PICP, PINAW, PICAW, and PINAD), the hybrid model captured residual nonlinearities left by the standalone ARMA(3,2)-EGARCH(1,1) and demonstrated improved forecasting accuracy. The hybrid ARMA(3,2)-EGARCH(1,1)-XGBoost model outperforms the standalone ARMA(3,2)-EGARCH(1,1) model across all forecast accuracy measures.
This highlights the robustness and suitability of the hybrid ARMA(3,2)-EGARCH(1,1)-XGBoost model for financial risk management in emerging markets and signifies the strengths of integrating statistical and machine learning methods in financial time series modelling. Full article
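
The interval-evaluation metrics this abstract cites, PICP and PINAW, have simple standard definitions. Below is a minimal numpy sketch of both, with purely illustrative data; the function names and numbers are ours, not the authors'.

```python
import numpy as np

def picp(y, lower, upper):
    """Prediction Interval Coverage Probability: the share of
    observations that fall inside their prediction interval."""
    return np.mean((y >= lower) & (y <= upper))

def pinaw(y, lower, upper):
    """Prediction Interval Normalized Average Width: mean interval
    width scaled by the range of the observations."""
    return np.mean(upper - lower) / (y.max() - y.min())

# Illustrative returns with per-observation prediction intervals.
y     = np.array([0.10, -0.20, 0.05, 0.30, -0.10])
lower = np.array([0.00, -0.30, -0.10, 0.10, -0.30])
upper = np.array([0.20, -0.10, 0.10, 0.25, 0.00])

print(picp(y, lower, upper))   # 4 of 5 points covered -> 0.8
print(pinaw(y, lower, upper))
```

A good interval forecaster keeps PICP close to the nominal coverage while holding PINAW (interval width) as small as possible; the two metrics trade off against each other.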

30 pages, 13771 KB  
Article
A High-Performance Hybrid Transformer–LSTM–XGBoost Model for sEMG-Based Fatigue Detection in Simulated Roofing Postures
by Sujan Acharya, Krishna Kisi, Sabrin Raj Gautam, Tarek Mahmud and Rujan Kayastha
Buildings 2025, 15(17), 3005; https://doi.org/10.3390/buildings15173005 - 24 Aug 2025
Abstract
Within the hazardous construction industry, roofers represent one of the most at-risk workforces, with high fatality and injury rates largely driven by Work-Related Musculoskeletal Disorders (WMSDs). The primary precursor to these disorders is muscle fatigue, yet its objective assessment remains a significant challenge for implementing proactive safety management. To address this gap, this study details the implementation and validation of an AI-driven predictive analytics framework for automated fatigue detection using surface electromyography (sEMG) signals. Data were collected as participants (novice roofers) performed strenuous, simulated roofing tasks involving sustained standing, stooping, and kneeling postures. A key innovation is a data-driven labeling methodology using Weak Monotonicity (WM) trend analysis to automate the generation of objective labels. After a feature selection process yielded seven significant features, an evaluation of standard models confirmed that their classification performance was highly posture-dependent, motivating a more robust, hybrid solution. The framework culminates in a high-performance hybrid machine learning model. This architecture synergistically combines a Transformer–LSTM network for deep feature extraction with an XGBoost classifier. The model outperformed all standalone approaches, achieving over 82% accuracy across all postures with consistently strong fatigue F1-scores (0.77–0.78). The entire framework was validated using a stringent Leave-One-Subject-Out (LOSO) cross-validation protocol to ensure subject-independent generalizability. This research provides a validated component for AI-enhanced safety management systems. Future work should prioritize field validation with professional workers to translate this framework into practical, real-world ergonomic monitoring systems. Full article
(This article belongs to the Special Issue Safety Management and Occupational Health in Construction)
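
The LOSO protocol mentioned above is what makes the reported accuracy subject-independent: every participant is evaluated by a model that never saw their data. A minimal sketch of the split logic (not the authors' pipeline) in plain Python:

```python
from typing import List, Tuple

def loso_splits(subject_ids: List[str]) -> List[Tuple[List[int], List[int]]]:
    """Leave-One-Subject-Out: for each subject, train on all samples
    from the other subjects and test only on that subject's samples."""
    splits = []
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        splits.append((train, test))
    return splits

# Six sEMG windows from three participants.
subjects = ["p1", "p1", "p2", "p2", "p3", "p3"]
for train_idx, test_idx in loso_splits(subjects):
    print(train_idx, test_idx)
```

Averaging a model's score over these folds estimates how it will generalize to a worker it has never observed, which is the relevant question for field deployment.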

16 pages, 1481 KB  
Article
Assessing Urban Lake Performance for Stormwater Harvesting: Insights from Two Lake Systems in Western Sydney, Australia
by Sai Kiran Natarajan, Dharmappa Hagare and Basant Maheshwari
Water 2025, 17(17), 2504; https://doi.org/10.3390/w17172504 - 22 Aug 2025
Abstract
This study examines the impact of catchment characteristics and design on the performance of urban lakes in terms of water quality and stormwater harvesting potential. Two urban lake systems in Western Sydney, Australia, were selected for comparison: Wattle Grove Lake, a standalone constructed lake, and Woodcroft Lake, part of an integrated wetland–lake system. Both systems receive runoff from surrounding residential catchments of differing sizes and land uses. Over a one-year period, continuous monitoring was conducted to evaluate water quality parameters, including turbidity, total suspended solids (TSS), nutrients (total nitrogen and total phosphorus), pH, dissolved oxygen, and biochemical oxygen demand. The results reveal that the lake with an integrated wetland significantly outperformed the standalone lake in water quality, particularly turbidity and TSS, achieving up to a 70% reduction in TSS at the outlet compared to the inlet. The wetland served as an effective pre-treatment system, reducing pollutant loads before water entered the lake. Despite this, nutrient concentrations in both systems remained above the thresholds set by the Australian and New Zealand Environment and Conservation Council (ANZECC) Guidelines (2000), indicating persistent challenges in nutrient retention. Notably, the larger catchment area and shallow depth of Wattle Grove Lake likely contributed to higher turbidity and nutrient levels, resulting from sediment resuspension and algal growth. Hydrological modelling using the Model for Urban Stormwater Improvement Conceptualisation (MUSIC) software (version 6) complemented the field data and highlighted the influence of catchment size, hydraulic retention time, and lake depth on pollutant removal efficiency.
While both systems serve important environmental and recreational functions, the integrated wetland–lake system at Woodcroft demonstrated greater potential for safe stormwater harvesting and reuse within urban settings. The findings from the study offer practical insights for urban stormwater management and inform future designs that enhance resilience and water reuse potential in growing cities. Full article
(This article belongs to the Special Issue Urban Stormwater Harvesting, and Wastewater Treatment and Reuse)

23 pages, 4794 KB  
Article
IHGR-RAG: An Enhanced Retrieval-Augmented Generation Framework for Accurate and Interpretable Power Equipment Condition Assessment
by Zhenhao Ye, Donglian Qi, Hanlin Liu and Siqi Zhang
Electronics 2025, 14(16), 3284; https://doi.org/10.3390/electronics14163284 - 19 Aug 2025
Abstract
Condition assessment of power equipment is crucial for optimizing maintenance strategies. However, knowledge-driven approaches rely heavily on manual alignment between equipment failure characteristics and guideline information, while data-driven methods predominantly depend on on-site experiments to detect abnormal conditions. Both face challenges in terms of inefficiency and timeliness limitations. With the growing integration of information systems, a significant portion of condition assessment-related information is represented in textual formats, such as system alerts and experimental records. Although Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) show promise in processing such text-based information, their practical application is constrained by LLMs’ hallucinations and RAG’s coarse-grained retrieval mechanisms, which struggle with semantically similar but contextually distinct guideline items. To address these issues, this paper proposes an enhanced RAG framework that integrates hierarchical and global retrieval mechanisms (IHGR-RAG). The framework comprehensively incorporates three optimization strategies: a query rewriting mechanism based on few-shot learning prompt engineering, an integrated approach combining hierarchical and global retrieval mechanisms, and a zero-shot chain-of-thought generation optimization pipeline. Additionally, a Task-Specific Quantitative Evaluation Benchmark is developed to rigorously evaluate model performance. Experimental results indicate that IHGR-RAG achieves accuracy improvements of 4.14% and 5.12% in the task of matching the single correct guideline item, compared to conventional RAG and standalone hierarchical methods, respectively. Ablation studies confirm the effectiveness of each module.
This work advances dynamic health monitoring for power equipment by balancing interpretability, accuracy, and domain adaptability, providing a cost-effective optimization pathway for scenarios with limited annotated data. Full article
(This article belongs to the Special Issue Advances in Condition Monitoring and Fault Diagnosis)
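
Combining a hierarchical retrieval pass (chapter, then section, then item) with a flat global pass requires merging two ranked candidate lists. The paper's exact fusion mechanism is not given in the abstract; as a generic illustration, reciprocal rank fusion is one standard way to merge such lists:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked candidate lists into one ordering.
    Each list contributes 1/(k + rank + 1) to a candidate's score, so
    items ranked highly by several retrievers rise to the top. Used
    here only as a stand-in for combining a hierarchical and a global
    retrieval pass over guideline items."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

hierarchical = ["item_7", "item_3", "item_9"]   # chapter -> section -> item
global_pass = ["item_7", "item_3", "item_1"]    # flat search over all items
print(reciprocal_rank_fusion([hierarchical, global_pass]))
```

The item IDs are hypothetical; the point is that rank-based fusion needs no score calibration between the two retrievers.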

17 pages, 899 KB  
Article
Optimal Sizing of Residential PV and Battery Systems Under Grid Export Constraints: An Estonian Case Study
by Arko Kesküla, Kirill Grjaznov, Tiit Sepp and Alo Allik
Energies 2025, 18(16), 4405; https://doi.org/10.3390/en18164405 - 19 Aug 2025
Abstract
This study investigates the optimal sizing of photovoltaic (PV) and battery storage (BAT) systems for Estonian households operating under grid constraints that prevent selling surplus energy. We develop and compare three sizing models of increasing complexity, ranging from a simple heuristic to a full simulation-based optimization. Their performance is evaluated using a multi-criteria decision analysis (MCDA) framework that integrates Net Present Value (NPV), Internal Rate of Return (IRR), Profitability Index Ratio (PIR), and payback period. Sensitivity analyses are used to test the robustness of each configuration against electricity price shifts and market volatility. Our findings reveal that standalone PV-only systems are the most economically robust investment. They consistently outperform combined PV + BAT and BAT-only configurations in terms of investment efficiency and overall financial attractiveness. Key results demonstrate that the simplest heuristic-based model (Model 1) identifies configurations with a better balance of financial returns and capital efficiency than the more complex simulation-based approach (Model 3). While the optimization model achieves the highest absolute NPV, it requires significantly higher investment and results in lower overall efficiency. The economic case for batteries remains weak, with viability depending heavily on price volatility and arbitrage potential. These results provide practical guidance, suggesting that for grid-constrained households, a well-sized PV-only system identified with a simple model offers the most effective path to cost savings and energy self-sufficiency. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)
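
Two of the financial criteria used to rank the sizing models, NPV and payback period, are easy to state in code. A self-contained sketch with hypothetical cashflows (the investment and savings figures are illustrative, not from the study):

```python
def npv(rate, cashflows):
    """Net Present Value: cashflows[0] is the (negative) investment
    at year 0; later entries are discounted back at `rate`."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """First year in which cumulative undiscounted cashflow turns
    non-negative; None if the investment never pays back."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None

# Hypothetical PV-only system: EUR 6000 up front, EUR 900 saved yearly.
flows = [-6000.0] + [900.0] * 15
print(round(npv(0.05, flows), 2))
print(payback_period(flows))   # pays back in year 7
```

A PV + battery configuration would simply use a larger negative first entry and different annual savings, which is why the battery's economics hinge so strongly on the extra savings it can generate.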

22 pages, 2887 KB  
Article
Autoencoder-Assisted Stacked Ensemble Learning for Lymphoma Subtype Classification: A Hybrid Deep Learning and Machine Learning Approach
by Roseline Oluwaseun Ogundokun, Pius Adewale Owolawi, Chunling Tu and Etienne van Wyk
Tomography 2025, 11(8), 91; https://doi.org/10.3390/tomography11080091 - 18 Aug 2025
Abstract
Background: Accurate subtype identification of lymphoma cancer is crucial for effective diagnosis and treatment planning. Although standard deep learning algorithms have demonstrated robustness, they are still prone to overfitting and limited generalization, necessitating more reliable and robust methods. Objectives: This study presents an autoencoder-augmented stacked ensemble learning (SEL) framework integrating deep feature extraction (DFE) and ensembles of machine learning classifiers to improve lymphoma subtype identification. Methods: Convolutional autoencoder (CAE) was utilized to obtain high-level feature representations of histopathological images, followed by dimensionality reduction via Principal Component Analysis (PCA). Various models were utilized for classifying extracted features, i.e., Random Forest (RF), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), AdaBoost, and Extra Trees classifiers. A Gradient Boosting Machine (GBM) meta-classifier was utilized in an SEL approach to further fine-tune final predictions. Results: All the models were tested using accuracy, area under the curve (AUC), and Average Precision (AP) metrics. The stacked ensemble classifier performed better than all the individual models with a 99.04% accuracy, 0.9998 AUC, and 0.9996 AP, far exceeding what regular deep learning (DL) methods would achieve. Of standalone classifiers, MLP (97.71% accuracy, 0.9986 AUC, 0.9973 AP) and Random Forest (96.71% accuracy, 0.9977 AUC, 0.9953 AP) provided the best prediction performance, while AdaBoost was the poorest performer (68.25% accuracy, 0.8194 AUC, 0.6424 AP). PCA and t-SNE plots confirmed that DFE effectively enhances class discrimination. Conclusion: This study demonstrates a highly accurate and reliable approach to lymphoma classification by using autoencoder-assisted ensemble learning, reducing the misclassification rate and significantly enhancing the accuracy of diagnosis. 
AI-based models are designed to assist pathologists by providing interpretable outputs such as class probabilities and visualizations (e.g., Grad-CAM), enabling them to understand and validate predictions in the diagnostic workflow. Future studies should enhance computational efficacy and conduct multi-centre validation studies to confirm the model’s generalizability on extensive collections of histopathological datasets. Full article
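
The stacked-ensemble idea described here, base classifiers whose predictions feed a GBM meta-classifier, can be sketched directly in scikit-learn. This is a minimal stand-in, not the authors' pipeline: synthetic features replace the autoencoder + PCA representations, and only two of the five base learners are shown.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.model_selection import train_test_split

# Synthetic stand-in for PCA-reduced autoencoder features.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners produce out-of-fold predictions; a Gradient Boosting
# meta-classifier learns to combine them.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("et", ExtraTreesClassifier(n_estimators=50, random_state=0))],
    final_estimator=GradientBoostingClassifier(random_state=0),
)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 3))
```

The meta-classifier sees each base model's cross-validated predictions rather than its training-set predictions, which is what lets stacking correct systematic errors without simply memorizing them.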

29 pages, 4725 KB  
Article
Feature Fusion Using Deep Learning Algorithms in Image Classification for Security Purposes by Random Weight Network
by Mustafa Servet Kiran, Gokhan Seyfi, Merve Yilmaz, Engin Esme and Xizhao Wang
Appl. Sci. 2025, 15(16), 9053; https://doi.org/10.3390/app15169053 - 17 Aug 2025
Abstract
Automated threat detection in X-ray security imagery is a critical yet challenging task, where conventional deep learning models often struggle with low accuracy and overfitting. This study addresses these limitations by introducing a novel framework based on feature fusion. The proposed method extracts features from multiple and diverse deep learning architectures and classifies them using a Random Weight Network (RWN), whose hyperparameters are optimized for maximum performance. The results show substantial improvements at each stage: while the best standalone deep learning model achieved a test accuracy of 83.55%, applying the RWN to a single feature set increased accuracy to 94.82%. Notably, the proposed feature fusion framework achieved a state-of-the-art test accuracy of 97.44%. These findings demonstrate that a modular approach combining multi-model feature fusion with an efficient classifier is a highly effective strategy for improving the accuracy and generalization capability of automated threat detection systems. Full article
(This article belongs to the Special Issue Deep Learning for Image Processing and Computer Vision)
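
A Random Weight Network of the kind used as the classifier here is typically an ELM-style model: the hidden layer is random and fixed, and only the linear output layer is fitted, by least squares. A numpy sketch on toy data (the blob features stand in for fused deep features; hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_rwn(X, Y, hidden=200):
    """Random Weight Network: random fixed hidden layer, output
    weights solved in closed form by least squares."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                       # random feature map
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W, b, beta

def predict_rwn(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for fused deep features: two well-separated classes,
# one-hot targets.
X = np.vstack([rng.normal(-2, 1, size=(100, 8)), rng.normal(2, 1, size=(100, 8))])
Y = np.vstack([np.tile([1.0, 0.0], (100, 1)), np.tile([0.0, 1.0], (100, 1))])
pred = predict_rwn(X, *train_rwn(X, Y)).argmax(axis=1)
print((pred == Y.argmax(axis=1)).mean())
```

Because training reduces to one linear solve, the expensive part of such a pipeline is hyperparameter search (hidden size, activation, regularization), which matches the paper's emphasis on optimizing the RWN's hyperparameters.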

22 pages, 2132 KB  
Article
Ontology Matching Method Based on Deep Learning and Syntax
by Jiawei Lu and Changfeng Yan
Big Data Cogn. Comput. 2025, 9(8), 208; https://doi.org/10.3390/bdcc9080208 - 14 Aug 2025
Abstract
Ontology technology addresses data heterogeneity challenges in Internet of Everything (IoE) systems enabled by Cyber Twin and 6G, yet the subjective nature of ontology engineering often leads to differing definitions of the same concept across ontologies, resulting in ontology heterogeneity. To solve this problem, this study introduces a hybrid ontology matching method that integrates a Recurrent Neural Network (RNN) with syntax-based analysis. The method first extracts representative entities by leveraging in-degree and out-degree information from ontological tree structures, which reduces training noise and improves model generalization. Next, a matching framework combining RNN and N-gram is designed: the RNN captures medium-distance dependencies and complex sequential patterns, supporting the dynamic optimization of embedding parameters and semantic feature extraction; the N-gram module further captures local information and relationships between adjacent characters, improving the coverage of matched entities. The experiments were conducted on the OAEI benchmark dataset, where the proposed method was compared with representative baseline methods from OAEI as well as a Transformer-based method. The results demonstrate that the proposed method achieved an 18.18% improvement in F-measure over the best-performing baseline. This improvement was statistically significant, as validated by the Friedman and Holm tests. Moreover, the proposed method achieves the shortest runtime among all the compared methods. Compared to other RNN-based hybrid frameworks that adopt classical structure-based and semantics-based similarity measures, the proposed method further improved the F-measure by 18.46%. Furthermore, a comparison of time and space complexity with the standalone RNN model and its variants demonstrated that the proposed method achieved high performance while maintaining favorable computational efficiency. 
These findings confirm the effectiveness and efficiency of the method in addressing ontology heterogeneity in complex IoE environments. Full article
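
The N-gram module described above scores how much two entity labels overlap at the character level. One common instantiation (the paper's exact measure may differ) is the Dice coefficient over character bigrams:

```python
def bigrams(s):
    """Character bigrams of a label, e.g. 'book' -> {'bo', 'oo', 'ok'}."""
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice_similarity(a, b):
    """Bigram overlap between two entity labels: 1.0 for identical
    bigram sets, 0.0 for no shared bigrams."""
    ga, gb = bigrams(a.lower()), bigrams(b.lower())
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))

print(dice_similarity("Author", "Authors"))   # high: near-identical labels
print(dice_similarity("Paper", "Article"))    # 0.0: no shared bigrams
```

Such syntax-based scores catch morphological variants ("Author" vs. "Authors") that an embedding model may treat loosely, which is why the paper pairs the N-gram module with the RNN rather than relying on either alone.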

23 pages, 1008 KB  
Article
An Integrated Structural Equation Modelling and Machine Learning Framework for Measurement Scale Evaluation—Application to Voluntary Turnover Intentions
by Marcin Nowak and Robert Zajkowski
AppliedMath 2025, 5(3), 105; https://doi.org/10.3390/appliedmath5030105 - 13 Aug 2025
Abstract
There is an increasing demand for robust methodologies to rigorously evaluate the psychometric properties of measurement scales used in quantitative research across various scientific disciplines. This article proposes an integrative method that combines structural equation modelling (SEM) with machine learning (ML) to jointly assess model fit and predictive accuracy, two aspects that traditional approaches typically evaluate separately. Using a measurement scale for voluntary employee turnover intention, the method demonstrates clear improvements: RMSEA decreased from 0.073 to 0.065, and classifier accuracy slightly increased from 0.862 to 0.863 after removing three redundant items. Compared to standalone SEM or ML, the integrated framework yields a shorter, better-fitting scale without compromising predictive power. For practitioners, this method enables the creation of more efficient, theoretically grounded, and predictive tools, facilitating faster and more accurate assessments in organisational settings. To this end, this study employs Covariance-Based SEM (CB-SEM) in conjunction with classifiers such as naive Bayes, linear and nonlinear support vector machines, decision trees, k-nearest neighbours, and logistic regression. Full article

8 pages, 277 KB  
Proceeding Paper
Optimizing Short-Term Electrical Demand Forecasting with Deep Learning and External Influences
by Leonardo Santos Amaral, Gustavo Medeiros de Araújo and Ricardo Moraes
Eng. Proc. 2025, 101(1), 16; https://doi.org/10.3390/engproc2025101016 - 12 Aug 2025
Abstract
Short-term electrical demand forecasting is crucial for the efficient operation of modern power grids. Traditional methods often fail by neglecting system nonlinearities and external factors that influence electricity consumption. In this study, we propose an enhanced deep learning-based forecasting model that integrates external factors such as meteorological data and economic indicators to improve prediction accuracy. Using an ISO NE (Independent System Operator New England) dataset from 2017 to 2019, we analyze 23 independent variables to assess their impact on model performance. Our findings demonstrate that careful variable selection reduces dimensionality while maintaining forecasting accuracy, enabling the effective application of deep learning models. The CNN plus LSTM composite model achieved the lowest prediction error of 0.15%, outperforming standalone CNN (0.8%) and LSTM (1.44%) approaches. The combination of CNN’s feature extraction capabilities with LSTM’s strength in handling time series data was instrumental in achieving superior performance. Our results highlight the importance of incorporating external influences in electricity demand forecasting and suggest future directions for developing more precise and efficient models. Full article
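
Before any CNN or LSTM can be trained on a demand series, the series must be framed as supervised (X, y) pairs. A minimal numpy sketch of that windowing step, with a toy series in place of the ISO-NE load data (parameter names are ours):

```python
import numpy as np

def make_windows(series, lookback, horizon=1):
    """Turn a univariate load series into supervised pairs: each X row
    holds `lookback` past values, y the value `horizon` steps ahead.
    External drivers (weather, economic indices) would be stacked as
    extra feature columns alongside the lagged load."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback + horizon - 1])
    return np.array(X), np.array(y)

demand = np.arange(10.0)            # stand-in for hourly load values
X, y = make_windows(demand, lookback=4)
print(X.shape, y.shape)             # (6, 4) and (6,)
```

In a CNN + LSTM composite, the convolutional layers scan each window for local patterns and the LSTM models the ordering across windows, which is the division of labour the abstract credits for the low error.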

16 pages, 4802 KB  
Article
Validation of the New TLANESY Thermal–Hydraulic Code with Data from the QUENCH-01 Experiment
by Nahum Contreras-Pérez, Heriberto Sánchez-Mora, Sergio Quezada-García, Armando Miguel Gómez Torres and Ricardo Isaac Cázares Ramírez
J. Nucl. Eng. 2025, 6(3), 32; https://doi.org/10.3390/jne6030032 - 12 Aug 2025
Abstract
Hydrogen generation and the correct simulation of severe accidents have been of utmost importance since the Fukushima Dai-ichi accident. QUENCH experiments are quite useful for validating mathematical models implemented in system codes for early-phase severe accidents, where hydrogen generation, fuel rod temperature, and their deterioration during these conditions are of vital importance. This paper presents a new system code, TLANESY, designed for the simulation of thermal–hydraulic systems with two-phase flow (mainly water) and with application in the analysis of severe accidents during the early phase. The computational implementation consists of fast-running numerical methods and their validation with experimental data from the QUENCH-01 experiment. The results showed an error with respect to the total hydrogen generation of approximately 0.6%. A stand-alone sensitivity analysis was also performed with some parameters related to the cladding, where it was shown that variation in the thermal conductivity by 15% can alter the total hydrogen generation by up to 5%, indicating that impurities in this material can have a significant impact on this Figure of Merit. Full article
(This article belongs to the Special Issue Validation of Code Packages for Light Water Reactor Physics Analysis)

27 pages, 1145 KB  
Article
Non-Monotone Carbon Tax Preferences and Rebate-Earmarking Synergies
by Felix Fred Mölk, Florian Bottner, Gottfried Tappeiner and Janette Walde
Sustainability 2025, 17(16), 7282; https://doi.org/10.3390/su17167282 - 12 Aug 2025
Abstract
As carbon taxes gain traction in climate policy, public support remains limited. The purpose of this study was to investigate how different mineral oil tax designs, particularly those combining rebates and earmarking, affect public acceptance, and whether the effects are monotone. The data were based on an online survey that was conducted in 2022 in Austria (n = 1216). It was found that a tax increase of EUR 25-cents per liter is politically feasible if revenues are earmarked for public transport or climate protection and paired with moderate rebates. Other uses of revenue, especially the general budget, fail to achieve majority support, regardless of tax level or compensation. To capture non-monotonic and heterogeneous preferences, an adaptive-choice-based-conjoint experiment with hierarchical Bayesian estimation was employed. Rebates were modeled as a stand-alone attribute, allowing for the identification of non-monotonicities for this attribute. The findings show deviations from widespread monotonicity assumptions: a moderate tax increase (EUR 10-cent/liter) was preferred over no increase, even in the absence of earmarking. Similarly, larger annual rebates (EUR 200–300) reduced support compared to a EUR 100 rebate, which was most popular. Full article

37 pages, 989 KB  
Review
In Vitro Skin Models for Skin Sensitisation: Challenges and Future Directions
by Ignacio Losada-Fernández, Ane San Martín, Sergio Moreno-Nombela, Leticia Suárez-Cabrera, Leticia Valencia, Paloma Pérez-Aciego and Diego Velasco
Cosmetics 2025, 12(4), 173; https://doi.org/10.3390/cosmetics12040173 - 12 Aug 2025
Abstract
Allergic contact dermatitis is one of the most common adverse events associated with cosmetic use. Accordingly, assessment of skin sensitisation hazard is required for safety evaluation of cosmetic ingredients. The transition to the use of alternative methods for testing has made skin sensitisation an intense field in the past decades. The first alternative methods have been in place for almost a decade, but none as stand-alone replacement for the reference murine Local Lymph Node Assay (LLNA). While strategies to combine data from several methods are being evaluated and refined, individual methods face technical limitations. These include issues related to their applicability to highly lipophilic substances and the lack of reliable potency estimation, which remain important obstacles to their widespread adoption as replacement for animal methods. The unique characteristics of in vitro skin models represented an attractive alternative, potentially overcoming these limitations and offering a more physiologically relevant environment for the assessment of the response in keratinocytes and dendritic cells. In this review, we recapitulate how reconstructed human skin models have been used as platforms for skin sensitisation testing, including the latest approaches using organ-on-a-chip and microfluidic technologies, aimed to develop next-generation organotypic skin models with increased complexity and monitoring capabilities. Full article

24 pages, 1395 KB  
Review
A Systematic Literature Review of MODFLOW Combined with Artificial Neural Networks (ANNs) for Groundwater Flow Modelling
by Kunal Kishor, Ashish Aggarwal, Pankaj Kumar Srivastava, Yaggesh Kumar Sharma, Jungmin Lee and Fatemeh Ghobadi
Water 2025, 17(16), 2375; https://doi.org/10.3390/w17162375 - 11 Aug 2025
Abstract
The sustainable management of global groundwater resources is increasingly challenged by climatic uncertainty and escalating anthropogenic stress, creating a need for simulation tools that are more robust and flexible. This systematic review addresses the integration of two dominant modeling paradigms: the physically grounded Modular Finite-Difference Flow (MODFLOW) model and the data-agile Artificial Neural Network (ANN). While the MODFLOW model provides deep process-based understanding, it is often limited by extensive data requirements and computational intensity. In contrast, an ANN offers remarkable predictive accuracy and computational efficiency, particularly in complex, non-linear systems, but traditionally lacks physical interpretability. This review synthesizes existing research to present a functional classification framework for MODFLOW–ANN integration, providing a systematic analysis of the literature within this structure. Our analysis of the literature, sourced from Scopus, Web of Science, and Google Scholar, reveals a clear trend toward the strategic integration of these models, representing a new direction in hydrogeological simulation. The literature supports a classification framework that categorizes the primary integration strategies into three distinct approaches: (1) training an ANN on MODFLOW model outputs to create computationally efficient surrogate models; (2) using an ANN to estimate physical parameters for improved MODFLOW model calibration; and (3) applying ANNs as post-processors to correct systematic errors in MODFLOW model simulations. Our analysis shows that these hybrid methods consistently outperform standalone approaches by leveraging ANNs for computational acceleration through surrogate modeling, for enhanced model calibration via intelligent parameter estimation, and for improved accuracy through systematic error correction. Full article
(This article belongs to the Special Issue Application of Hydrological Modelling to Water Resources Management)
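The first of the three integration strategies, an ANN surrogate trained on simulator outputs, can be sketched in a few lines of NumPy. This is a hypothetical illustration only: the `simulator` function below is a synthetic nonlinear stand-in, whereas in practice the training pairs would come from actual MODFLOW runs, and a production surrogate would use a proper deep-learning framework rather than hand-rolled gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(params):
    # stand-in for an expensive MODFLOW run: "head" as a nonlinear
    # function of two hypothetical parameters (conductivity, recharge)
    k, r = params[..., 0], params[..., 1]
    return np.tanh(k) + 0.5 * r**2

# sample parameter space and run the "simulator" to build training data
X = rng.uniform(-1, 1, size=(200, 2))
y = simulator(X)[:, None]

# one-hidden-layer MLP surrogate, trained by full-batch gradient descent
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

loss0 = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    err = (H @ W2 + b2) - y           # prediction error
    loss = float(np.mean(err**2))
    # backpropagation of the mean-squared-error loss
    dW2 = H.T @ err / len(X); db2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# the trained surrogate now approximates the simulator at negligible cost
test = rng.uniform(-1, 1, size=(50, 2))
approx = np.tanh(test @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(np.mean((approx - simulator(test)[:, None]) ** 2)))
```

Once fitted, evaluating the surrogate is a couple of matrix products, which is what makes this strategy attractive for calibration loops or uncertainty analyses that would otherwise require thousands of full simulator runs.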

12 pages, 603 KB  
Article
Predictors of Implant Subsidence and Its Impact on Cervical Alignment Following Anterior Cervical Discectomy and Fusion: A Retrospective Study
by Rose Fluss, Alireza Karandish, Rebecca Della Croce, Sertac Kirnaz, Vanessa Ruiz, Rafael De La Garza Ramos, Saikiran G. Murthy, Reza Yassari and Yaroslav Gelfand
J. Clin. Med. 2025, 14(16), 5660; https://doi.org/10.3390/jcm14165660 - 10 Aug 2025
Abstract
Background/Objectives: Anterior cervical discectomy and fusion (ACDF) is a common procedure for treating cervical spondylotic myelopathy. Limited research exists on the predictors of subsidence following ACDF. Subsidence can compromise surgical outcomes, alter alignment, and predispose patients to further complications, making it essential to prevent and understand it. This study aims to identify key risk factors for clinically significant subsidence and evaluate its impact on cervical alignment parameters in a large, diverse patient population. Methods: We conducted a retrospective review of patients who underwent ACDF between 2013 and 2022 at a single institution. Subsidence was calculated as the mean change in anterior and posterior disc height, with clinically significant subsidence defined as three millimeters or more. Univariate analysis was followed by regression modeling to identify subsidence predictors and analyze patterns. Subgroup analyses stratified patients by implant type, number of levels fused, and cage material. Results: A total of 96 patients with 141 levels of ACDF met the inclusion criteria. Patients with significant subsidence were younger on average (52.44 vs. 55.94 years; p = 0.074). Those with less postoperative lordosis were more likely to experience significant subsidence (79.5% vs. 90.2%; p = 0.088). Patients with significant subsidence were more likely to have standalone implants (38.5% vs. 16.7%; p < 0.01), taller cages (6.62 mm vs. 6.18 mm; p < 0.05), and greater loss of segmental lordosis (7.33 degrees vs. 3.31 degrees; p < 0.01). Multivariate analysis confirmed that standalone implants were a significant independent predictor of subsidence (OR 2.679; p < 0.05), and greater subsidence was positively associated with loss of segmental lordosis (OR 1.089; p < 0.01). Subgroup analysis revealed that multi-level procedures had a higher incidence of subsidence (35.7% vs. 28.1%; p = 0.156), and PEEK cages demonstrated similar subsidence rates compared to titanium constructs (28.1% vs. 29.4%; p = 0.897). Conclusions: Standalone implants are the strongest independent predictor of significant subsidence, and those that experience subsidence also show greater loss of segmental lordosis, although not overall lordosis. These findings have implications for surgical planning, particularly in patients with borderline bone quality or requiring multi-level fusions. The results support the use of plated constructs in high-risk patients and emphasize the importance of individualized surgical planning based on patient-specific factors. Further research is needed to explore these findings and determine how they can be applied to improve ACDF outcomes. Full article
(This article belongs to the Special Issue Advances in Spine Surgery: Best Practices and Future Directions)
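An adjusted odds ratio like the OR of 2.679 reported above comes from exponentiating a coefficient of a multivariate logistic regression. The sketch below illustrates that mechanic on synthetic data (not the study's data; the variable names, sample size, and true coefficient values are all invented for illustration), fitting the model by Newton-Raphson iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# synthetic cohort: binary exposure (standalone implant) and a covariate (age)
standalone = rng.integers(0, 2, n)          # 1 = standalone implant
age = rng.normal(55, 8, n)

# generate outcomes from a known model: log-odds coefficient 1.0 for the
# exposure, i.e. a true adjusted odds ratio of exp(1.0) ~ 2.72
logit = -1.0 + 1.0 * standalone - 0.02 * (age - 55)
subsidence = rng.random(n) < 1 / (1 + np.exp(-logit))

# design matrix: intercept, exposure, standardized age
X = np.column_stack([np.ones(n), standalone, (age - 55) / 8])
beta = np.zeros(3)
for _ in range(25):                          # Newton-Raphson updates
    mu = 1 / (1 + np.exp(-X @ beta))         # fitted probabilities
    grad = X.T @ (subsidence - mu)           # score vector
    hess = (X * (mu * (1 - mu))[:, None]).T @ X   # observed information
    beta += np.linalg.solve(hess, grad)

odds_ratio = float(np.exp(beta[1]))          # adjusted OR for the exposure
```

With a reasonably large synthetic cohort the recovered odds ratio lands near the true value of e^1, which is the same exponentiation step that turns the study's fitted coefficient into its reported OR.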
