Search Results (1,259)

Search Parameters:
Keywords = feedforward neural network

18 pages, 6357 KB  
Article
Enhanced Motion Prediction of a Semi-Submersible Platform Using Bayesian Neural Network and Field Monitoring Data
by Song Li and Jia-Wang Chen
AI. Eng. 2026, 1(1), 2; https://doi.org/10.3390/aieng1010002 - 3 Apr 2026
Abstract
The motion prediction of semi-submersible platforms is of significant importance for improving operational efficiency, ensuring platform safety, and providing early warning information for potential risks. Traditional prediction methods, such as those based on hydrodynamic simulations combined with Kalman filters, often face limitations due to their reliance on precise hydrodynamic parameters, which are difficult to obtain in practice. More recently, data-driven approaches, particularly deep learning models like Long Short-Term Memory (LSTM) networks, have shown promise in predicting complex motions. However, these methods often treat the prediction process as a “black box,” leading to issues such as a lack of generalization ability, overfitting, and an inability to quantify the uncertainty of prediction results. To address these challenges, this paper proposes a novel motion prediction method for semi-submersible platforms based on a Bayesian neural network (BNN). The BNN incorporates Bayesian inference to effectively integrate prior knowledge and measured data, thereby quantifying uncertainties and improving prediction accuracy. The method is validated using field-measured motion data from a semi-submersible platform in the South China Sea. Compared with LSTM and feedforward neural networks, the BNN demonstrates superior anti-noise performance and prediction accuracy, achieving an accuracy rate (R²) of up to 91.5%. Moreover, over 92% of the true values are captured within the 95% confidence interval of the prediction results. This study highlights the potential of BNNs for the real-time motion prediction of offshore platforms, providing valuable support for early warning systems and operational decision-making.

19 pages, 2119 KB  
Article
UHPC Creep Behavior and Neural Network Prediction with Calibration of fib Model Code 2020
by Shijun Wang, Mengen Yue, Wenming Zhang and Teng Tong
Buildings 2026, 16(7), 1300; https://doi.org/10.3390/buildings16071300 - 25 Mar 2026
Abstract
Ultra-High-Performance Concrete (UHPC) is increasingly used in slender and prestressed structural members due to its superior strength and durability. However, inaccurate or incomplete prediction of creep deformation may lead to excessive long-term deflection, prestress loss, cracking, and potential serviceability or safety risks in buildings and infrastructure. Therefore, reliable prediction methods for UHPC creep are essential for both structural design and long-term performance assessment. In this study, a database containing 60 literature-derived UHPC creep records was compiled to investigate the creep coefficient at approximately 100 days. Pearson correlation analysis revealed strong interdependence among predictors and weak single-variable linear relationships, indicating that creep behavior is governed by nonlinear interactions. A feedforward backpropagation neural network (BPNN) trained using the Levenberg–Marquardt algorithm was developed to predict the creep coefficient. To maintain engineering interpretability, the fib Model Code 2020 (MC2020) formulation was adopted as a code-based benchmark and further calibrated using ridge regression. Results show that the calibrated MC2020 model improves prediction consistency, while the BPNN model provides the highest predictive accuracy. The proposed framework integrates machine-learning prediction with interpretable code-based calibration, contributing to the development of creep modeling approaches for UHPC and providing practical support for the safe design of UHPC structures. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)

19 pages, 642 KB  
Article
Enhancing Type 1 Diabetes Polygenic Risk Prediction Through Neural Networks and Entropy-Derived Insights
by Antonio Nadal-Martínez, Guillermo Pérez-Solero, Sandra Ferreiro López, Jorge Blom-Dahl, Eduard Montanya, Marta Alonso-Bernáldez, Moises Shabot, Christian Binsch, Lukasz Szczerbinski, Adam Kretowski, Julián Nevado, Pablo Lapunzina, Robert Wagner and Jair Tenorio-Castano
Int. J. Mol. Sci. 2026, 27(7), 2966; https://doi.org/10.3390/ijms27072966 - 25 Mar 2026
Abstract
Type 1 diabetes (T1D) is an autoimmune disease with a strong genetic component (~70% heritability). Early identification of individuals at risk is crucial for early intervention or risk assessment. Although polygenic risk scores (PRS) have shown promise in risk assessment, most current approaches remain constrained by linear assumptions and limited generalizability. We aimed to develop a neural network-driven classifier using T1D-associated single nucleotide polymorphisms (SNPs). In addition, we explored the inclusion of an entropy-derived feature as a complementary variable, representing the degree of genetic variability within an individual’s genotype profile across the 67 T1D-associated SNPs, to evaluate its potential additive contribution to the model performance. We analyzed genotype data from 11,909 individuals in the UK Biobank (546 T1D cases and 11,363 controls). Sixty-seven well-known SNPs associated with T1D were utilized as inputs to the model, using two distinct allele-encoding strategies. A feed-forward neural network was evaluated under varying case–control ratios through five-fold cross-validation. Performance was assessed using the area under the receiver operating characteristic curve (AUC) on a held-out test set and on an external European validation cohort. Across five-fold cross-validation, the best configuration achieved a median AUC of 0.903. On the held-out UK Biobank test set, the model generalized well, with an AUC of 0.8889 (95% CI: 0.8516–0.9262). A probability-based risk framework, constructed using five risk groups (“very low”, “low”, “intermediate”, “high”, and “very high” risk), yielded a negative predictive value (NPV) of 98.9% for the “very low” risk group and a positive predictive value (PPV) of 61.9% with a specificity of 97.3% for the “very high” risk group, assuming a 10% T1D prevalence. External validation in the German Diabetes Study reproduced clear case–control separation; for individuals with recent-onset diabetes and glutamic acid decarboxylase antibodies (GADA+) vs. controls, specificity reached 91.9% in the “high” risk group (PPV of 94.3%) and 97.6% in the “very high” risk group (PPV of 95.7%). The proposed neural network reliably predicts T1D genetic risk using a compact panel of 67 SNPs and maintains accuracy in both internal and external European cohorts. Its probabilistic output enables clinically interpretable risk thresholds, while entropy features contributed modestly to performance. These results demonstrate that a neural network-based approach achieves discriminative performance that is comparable to established T1D genetic risk models, while offering flexible probability-based risk stratification and architectural extensibility for future integration of additional features.
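The PPV and NPV figures in this abstract are tied to the assumed 10% prevalence. As an illustration of that dependence (standard Bayes identities only, not the paper's code; the 40% sensitivity below is a hypothetical placeholder chosen so that PPV at 10% prevalence roughly matches the reported 61.9%):

```python
# Illustrative sketch: PPV/NPV as a function of assumed prevalence.
# The sensitivity value (0.40) is hypothetical; only the specificity
# (97.3%, "very high" risk group) comes from the abstract.

def ppv(sens, spec, prev):
    """P(disease | positive test) at a given prevalence."""
    tp = sens * prev
    fp = (1 - spec) * (1 - prev)
    return tp / (tp + fp)

def npv(sens, spec, prev):
    """P(no disease | negative test) at a given prevalence."""
    tn = spec * (1 - prev)
    fn = (1 - sens) * prev
    return tn / (tn + fn)

# PPV rises sharply as the assumed prevalence grows:
for prev in (0.01, 0.10, 0.30):
    print(f"prev={prev:.2f}  PPV={ppv(0.40, 0.973, prev):.3f}  "
          f"NPV={npv(0.40, 0.973, prev):.3f}")
```

This is why the abstract qualifies its PPV with "assuming a 10% T1D prevalence": the same classifier screened in a general population (prevalence well below 10%) would show a much lower PPV at identical sensitivity and specificity.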

20 pages, 7980 KB  
Article
Data-Driven Sensorless Rotor Position Estimation for Switched Reluctance Motors Using a Deep LSTM Network
by Bekir Gecer, Alper Nabi Akpolat, Necibe Fusun Oyman Serteller, Ozturk Tosun and Mehmet Gol
Electronics 2026, 15(6), 1330; https://doi.org/10.3390/electronics15061330 - 23 Mar 2026
Abstract
Advances in semiconductor technologies, particularly in power transistors and switching diodes, have enabled higher switching frequencies and converter efficiency, renewing interest in Switched Reluctance Motors (SRMs) for electric vehicles. This work presents a data-driven approach utilizing a Long Short-Term Memory (LSTM) network capable of effectively managing temporal dependencies for estimating rotor position without sensors in SRMs. The motor investigated was custom-designed and subsequently manufactured as a prototype. The LSTM was trained and validated with experimental data collected at various speeds and load conditions. The outcomes demonstrate the model’s strong performance, with a mean squared error (MSE) of 1.77°², a mean absolute error (MAE) of 1.09°, and 97.35% accuracy. Compared to typical estimation methods such as back-electromotive force (EMF)-based techniques, fuzzy logic, model predictive control, feed-forward neural networks (FFNNs), and back-propagation neural networks (BPNNs), the LSTM stands out as one of the most effective and widely used models. Previous neural network (NN)-based studies typically report ±5° accuracy, whereas the LSTM in this study keeps the error to about 1°. This strategy eliminates position sensors, reduces cost and complexity, and enables reliable real-time SRM control. Results indicate that the method has significant potential for electric motor drives, particularly for SRMs.

31 pages, 13358 KB  
Article
The Lateral Control of Unmanned Vehicles Based on Neural Network Identification and a Fast Tube Model Predictive Control Algorithm
by Yong Dai and Zhichen Zhou
Sensors 2026, 26(6), 1973; https://doi.org/10.3390/s26061973 - 21 Mar 2026
Abstract
In traditional vehicle trajectory tracking processes, the dynamic model of the vehicle may not accurately represent complex and nonlinear vehicle behaviors. Moreover, conventional control methods may perform poorly when dealing with system uncertainties and disturbances, facing challenges in real-time computation. To address these issues, this paper proposes an autonomous driving control method based on control-affine feedforward neural network (CAFNN) and fast tube model predictive control (tube-MPC). This method utilizes CAFNN for system dynamic identification, replacing traditional mathematical modeling with data-driven neural network pattern recognition to more accurately describe the vehicle’s nonlinear dynamic characteristics. On this basis, the proposed tube-MPC structure is divided into two parts: nominal MPC and sliding mode control (SMC). The nominal MPC controller associates the MPC problem with a linear complementarity problem (LCP) using a ramp function, enabling rapid computation of the quadratic programming (QP) solution through piecewise affine (PWA) functions; the auxiliary SMC controller employs multi-power sliding mode reaching laws to enhance the system’s robustness against external disturbances and model uncertainties. This control strategy demonstrates high accuracy and stability in vehicle trajectory tracking under complex road conditions, providing strong support for the advancement of autonomous driving technology. Full article
(This article belongs to the Section Vehicular Sensing)

34 pages, 8592 KB  
Article
Neural Network Modeling of Air Spring Dynamic Stiffness Based on Its Pneumatic Physics
by Yuelian Wang, Tao Bo, Wenzheng Hu, Jiaqi Zhao, Fa Su, Zuguo Ma and Ye Zhuang
Mathematics 2026, 14(6), 1057; https://doi.org/10.3390/math14061057 - 20 Mar 2026
Abstract
To meet the real-time computational requirements of active suspension control systems, this study shifts from complex microscopic physical equations to a direct nonlinear functional mapping between the relative motion states (displacement and velocity) and the output force of air springs. This approach aims to preserve critical nonlinear hysteresis characteristics while significantly reducing the computational overhead. A progressive modeling strategy is implemented to characterize these complex behaviors. Initially, polynomial fitting is employed to identify key input features; however, its limited capacity to capture intricate nonlinearities necessitates more advanced methods. Subsequently, standard Feedforward Neural Networks (FNNs) are explored for their nonlinear mapping capabilities, yet their inherent “black-box” nature often leads to convergence difficulties and restricted generalization. To address these issues, a Physics-Informed Neural Network (PINN) architecture is introduced, embedding physical governing equations as regularization constraints within the loss function to integrate data-driven flexibility with mathematical rigor. Recognizing that conventional PINNs often encounter convergence challenges due to conflicts between PDE constraints and data-driven loss terms, this research develops a Physics-Embedded Hierarchical Network (PEHN). By deriving specialized PDE constraints tailored to air spring dynamics and designing a hierarchical architecture aligned with these physical requirements, the PEHN effectively balances physical priors with experimental data. Experimental results demonstrate that, compared to the baseline models, the proposed PEHN exhibits stronger stability and superior accuracy in capturing the complex nonlinearities of air spring dynamics. Full article

22 pages, 1509 KB  
Article
ICTD: Combination of Improved CNN–Transformer and Enhanced Deep Canonical Correlation Analysis for Eye-Movement Emotion Classification
by Cong Zhang, Xisheng Li, Jiannan Chi, Ming Cao, Qingfeng Gu and Jiahui Liu
Brain Sci. 2026, 16(3), 330; https://doi.org/10.3390/brainsci16030330 - 19 Mar 2026
Abstract
Background/Objectives: Emotion classification based on eye-movement features has become a widely adopted approach due to the simplicity of data acquisition and the strong association between ocular responses and emotional states. However, several challenges remain with regard to existing emotion recognition methods, including the relatively weak correlation between eye-movement features and emotional labels and the fact that the key features are not prominently presented. Methods: To address the above limitations, this study proposes an improved CNN-transformer combined with an enhanced deep canonical correlation analysis network (ICTD). The proposed method first performs preprocessing and reconstruction of raw eye-movement signals to extract informative features. Subsequently, convolutional neural networks (CNNs) and transformer architectures are employed to capture local and global features, respectively. In addition, an incremental feature feedforward network is incorporated to enhance the transformer, enabling the model to assign higher importance to salient feature information. Finally, the extracted representations are processed through deep canonical correlation analysis based on cosine similarity in order to generate classification outcomes. Results: Experiments conducted on the SEED-IV, SEED-V, and eSEE-d datasets demonstrate that the proposed ICTD framework consistently outperforms baseline approaches and attains optimal classification results. (1) On the eSEE-d dataset, the results of three-category arousal and valence classification reach 81.8% and 85.2%, respectively; (2) on the SEED-IV dataset, the emotion four-category classification result reaches 91.2%; (3) finally, on the SEED-V dataset, the emotion five-category classification result reaches 85.1%. Conclusions: The proposed ICTD framework effectively improves feature representation and classification performance, showing strong potential for practical emotion recognition and physiological signal analysis.
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)

29 pages, 5152 KB  
Article
Impact of Neural Network Initialisation Seed and Architecture on Accuracy, Generalisation and Generative Consistency in Data-Driven Internal Combustion Engine Modelling
by Arturas Gulevskis, Redha Benhadj-Djilali and Konstantin Volkov
Computers 2026, 15(3), 194; https://doi.org/10.3390/computers15030194 - 17 Mar 2026
Abstract
Artificial neural networks (ANNs) are widely used to approximate nonlinear mappings, yet their ability to capture thermodynamic behaviour in dynamic physical systems remains insufficiently characterised. This study investigates how representational capacity influences surrogate modelling accuracy for a crank-angle-resolved internal combustion engine (ICE) simulation with a maximum dynamic state dimension of six. Two feedforward ANN configurations are evaluated: a low-capacity 5–5 architecture containing 84 trainable parameters and a high-capacity 25–25–25 architecture containing 1554 parameters (18.5× larger). Both networks approximate the nonlinear mapping from five embedded operating parameters to four peak thermodynamic outputs (maximum pressure, pressure phasing, maximum temperature, and temperature phasing). Evaluation across 53,178 operating points demonstrates that the high-capacity configuration reduces root mean squared error by factors of 30–50× relative to the low-capacity network, decreasing peak temperature error from 17.68 K to 0.36 K and peak pressure error from 0.116 MPa to 0.0025 MPa. Although both models achieve coefficients of determination exceeding 0.99, the low-capacity network exhibits heavy-tailed residual distributions and regime-dependent error amplification, whereas the high-capacity model reduces both central dispersion and extreme-case error. These results demonstrate that high correlation alone does not guarantee engineering reliability in nonlinear thermodynamic systems. Distribution-level analysis, including percentile and extreme-case characterisation, is required to evaluate engineering robustness. The findings provide a quantitative framework linking ANN capacity, nonlinear dynamic system representation, and predictive robustness. Full article
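The stated parameter counts (84 and 1554, an 18.5× ratio) follow directly from the fully connected layer formula: each dense layer contributes (inputs × outputs) weights plus one bias per output. A quick sketch verifying them (illustrative only; the layer sizes are taken from the abstract, the function name is my own):

```python
# Verify the trainable-parameter counts quoted in the abstract for the
# 5-input -> hidden layers -> 4-output feedforward architectures.

def dense_param_count(layer_sizes):
    """Total weights + biases of a fully connected feedforward network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

low_capacity  = dense_param_count([5, 5, 5, 4])        # 5-5 architecture
high_capacity = dense_param_count([5, 25, 25, 25, 4])  # 25-25-25 architecture

print(low_capacity)                   # 84
print(high_capacity)                  # 1554
print(high_capacity / low_capacity)   # 18.5
```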
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence (2nd Edition))

25 pages, 1477 KB  
Article
AI-Based Predictive Risk and Environmental Management in Phosphate Mining (OCP, Morocco)
by Ismail Haloui, Yang Li, Hayat Amzil and Aziz Moumen
Sustainability 2026, 18(6), 2923; https://doi.org/10.3390/su18062923 - 17 Mar 2026
Abstract
Phosphate mining operations in Morocco pose many environmental and occupational safety risks, especially through the release of airborne particulates, gas pollutants, and heavy metals. While there is increased implementation of monitoring systems within industrial mining contexts, current methodologies are still predominantly founded on rule-based systems or classical statistics that presume linearity in relationships between an arbitrary set of environmental parameters and the likelihood of an incident. Conversely, mining operations are characterized by intricately dynamic nonlinear combinations of numerous environmental and operational variables. As a result, a potential research opportunity exists for the application of sophisticated machine learning techniques that provide the ability to detect various levels of operational risk within phosphate mining scenarios. This study has three objectives. First, to examine the environmental and operational data from the phosphate mining sites to determine the operational conditions that present the highest risk. Second, to create a machine learning classification model which utilizes a Feedforward Neural Network (FNN) to identify operational states that are prone to incidents based on multivariate sensor data. Third, to assess the validity and reliability of the model using machine learning evaluation techniques along with statistical validation methods. In this study, an artificial intelligence (AI)-based safety monitoring approach was proposed, using a Feedforward Neural Network (FNN) on a detailed data set of 1536 hourly measurements recorded directly onsite at OCP plants in Benguerir and Khouribga. Environmental and industrial parameters (dust concentration, gas emissions, temperature, and toxic metal content) were measured using industrial-grade sensors certified for this type of application. By training the proposed FNN model with adaptive gradient descent, dropout regularization, and early stopping, a test mean squared error of 0.057 and over 85% accuracy on incident detection were obtained. Gradient tracking and m-adaptive validation confirmed the stability and convergence of the model. Emissions and dust were identified as the main risk predictors in a variable importance analysis. The findings demonstrate that the mining sector may move from reactive to proactive safety management and validate the incorporation of AI into a real-time monitoring infrastructure inside the OCP ecosystem. Practical concerns of industrial data gathering, model interpretability, and the ethical application of AI in high-risk settings are also addressed by the study.

41 pages, 8144 KB  
Article
Statistical Development of Rainfall IDF Curves and Machine Learning-Based Bias Assessment: A Case Study of Wadi Al-Rummah, Saudi Arabia
by Ibrahim T. Alhbib, Ibrahim H. Elsebaie and Saleh H. Alhathloul
Hydrology 2026, 13(3), 96; https://doi.org/10.3390/hydrology13030096 - 16 Mar 2026
Abstract
Reliable estimation of extreme rainfall is essential for hydraulic design and flood risk mitigation, particularly in arid regions where rainfall exhibits strong temporal and spatial variability. This study presents a statistical framework for developing rainfall intensity-duration-frequency (IDF) curves, complemented by a machine learning-based assessment of model bias and performance. The analysis was conducted using data from ten rainfall stations located within or near the Wadi Al-Rummah Basin. Annual maximum series (AMS) from 1969 to 2024 were first reconstructed to address missing years using a modified normal ratio method (NRM) combined with nearest-station selection, ensuring spatial consistency while preserving station-specific rainfall characteristics. Six probability distributions (Weibull, Gumbel, gamma, lognormal, generalized extreme value (GEV), and generalized Pareto) were fitted to each station, and the best-fit distribution was identified using multiple goodness-of-fit (GOF) criteria, including the Kolmogorov–Smirnov (K-S) test, Anderson–Darling (A-D) test, root mean square error (RMSE), chi-square (χ²) statistic, Akaike information criterion (AIC), Bayesian information criterion (BIC), and the coefficient of determination (R²). Statistical IDF curves were then developed for durations ranging from 5 to 1440 min and return periods from 2 to 1000 years. To evaluate the robustness of the statistically derived IDF curves, three machine learning (ML) surrogate models, namely multiple linear regression (MLR), regression random forest (RRF), and a multilayer feed-forward neural network (MFFNN), were trained using duration, return period, and station geographic attributes as predictor variables. Model performance was evaluated using RMSE, MAE, and mean bias metrics across stations and return periods. The lognormal distribution emerged as the best-fit model for four stations, while the Gumbel and gamma distributions were selected for two stations each. Overall, no single probability distribution consistently outperformed the others, indicating station-dependent behavior. Among the machine learning models, the MFFNN achieved the closest agreement with the statistical IDF estimates (RMSE ≈ 0.97, MAE ≈ 0.65, bias ≈ 0.02), followed by RRF and MLR, based on global average performance across all stations and return periods. The proposed framework offers a reliable approach for rainfall IDF development and evaluation in arid region watersheds.
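The surrogate models above are compared with RMSE, MAE, and mean bias. These are standard definitions, sketched here for clarity (a generic illustration; the intensity values below are made-up numbers, not data from the study):

```python
# Standard agreement metrics used to compare surrogate predictions
# against the statistically derived IDF estimates.
import math

def rmse(y_true, y_pred):
    """Root mean square error: penalizes large deviations quadratically."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the deviations."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_bias(y_true, y_pred):
    """Mean bias: positive when the surrogate over-predicts on average."""
    return sum(p - t for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical rainfall intensities (mm/h) for four duration/return-period pairs:
statistical_idf = [42.0, 35.5, 28.1, 19.6]
surrogate_pred  = [41.2, 36.3, 28.9, 19.4]

print(rmse(statistical_idf, surrogate_pred))
print(mae(statistical_idf, surrogate_pred))
print(mean_bias(statistical_idf, surrogate_pred))
```

Note that RMSE is always at least as large as MAE, and a near-zero mean bias can mask large offsetting errors, which is why the study reports all three.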
(This article belongs to the Section Statistical Hydrology)

23 pages, 8019 KB  
Article
Machine Learning for Daylight Performance Prediction
by Zeynep Keskin Tang and Ilker Karadag
Appl. Sci. 2026, 16(6), 2757; https://doi.org/10.3390/app16062757 - 13 Mar 2026
Abstract
Machine learning methods are increasingly applied in daylight performance assessment due to their ability to model complex nonlinear relationships within large datasets while offering substantially faster predictions than conventional simulation workflows. Within this framework, deep learning architectures provide enhanced representational capability for capturing spatial and geometric dependencies. However, existing approaches often lack seamless integration with parametric design environments and offer limited interpretability regarding the influence of design parameters. This paper presents DayANN (Daylight Artificial Neural Network), a feedforward deep neural network developed within a structured Grasshopper-to-machine learning workflow for analyzing daylight performance in a parametrically defined office space. The method employs Climate Studio for Grasshopper to generate 288 simulation scenarios, forming the training dataset for the predictive model. The proposed framework enables automated data transfer, model training, and performance feedback within an iterative design–evaluation loop. In addition to predictive accuracy, SHAP-based interpretability is incorporated to quantify the contribution of individual daylighting parameters. The model achieved high accuracy, with R2 values of 0.988 for Useful Daylight Illuminance (UDI) and 0.947 for Daylight Factor (DF), demonstrating that DayANN serves as a computationally efficient, transparent surrogate model suitable for early-stage architectural decision-making. Full article

23 pages, 1449 KB  
Article
Parametrization of Subgrid Scales in Long-Term Simulations of the Shallow-Water Equations Using Machine Learning and Convex Limiting
by Md Amran Hossan Mojamder, Zhihang Xu, Min Wang and Ilya Timofeyev
Fluids 2026, 11(3), 76; https://doi.org/10.3390/fluids11030076 - 12 Mar 2026
Abstract
We present a method for parametrizing sub-grid processes in the shallow water equations. We define coarse variables and local spatial averages and use a feed-forward neural network to learn sub-grid fluxes. Our method results in a local parametrization that uses a four-point computational stencil, which has several advantages over globally coupled parametrizations. We demonstrate numerically that our method improves energy balance in long-term turbulent simulations and also accurately reproduces individual solutions. The long-term simulations refer to numerical studies where a fluid flow is simulated over a duration long enough to reach a statistical steady state. The neural network parametrization can be easily combined with flux limiting to reduce oscillations near shocks. More importantly, our method provides reliable parametrizations, even in dynamical regimes that are not included in the training data. Full article
19 pages, 5400 KB  
Article
Image Deblurring via Frequency-Domain Feature Enhanced Convolutional Neural Networks
by Yecai Guo, Lixiang Ma and Yangyang Zhang
Sensors 2026, 26(6), 1784; https://doi.org/10.3390/s26061784 - 12 Mar 2026
Viewed by 286
Abstract
To address the insufficient restoration of texture details in deblurred images and the inadequate learning of frequency-domain features, an image deblurring algorithm based on frequency-domain feature enhancement and convolutional neural networks is proposed. In this architecture, firstly, a Fourier residual module with a parallel structure is constructed to achieve collaborative learning and modeling of spatial- and frequency-domain features, improving both the frequency-domain feature learning capability and the restoration of texture details; secondly, a gate-controlled feed-forward unit acts on the Fourier residual module to further enhance the nonlinear expressive ability of the algorithm; thirdly, an improved supervised attention module is added to the decoder to promote more effective capture of the key features for image reconstruction; finally, the weighted sum of a spatial-domain Charbonnier loss function and a frequency-domain loss function is defined as a novel total loss function. To verify the performance of the proposed algorithm, we conducted experiments on the GOPRO and HIDE datasets. On GOPRO, the algorithm achieved an SSIM of 0.961 and an LPIPS of 0.0278; on HIDE, an SSIM of 0.941 and an LPIPS of 0.0286. As for parameter count and running time on GOPRO, the values were 1.197 and 9.15 × 10⁶, respectively. Among all compared algorithms, these values are the best, while the PSNR of the proposed algorithm is second-best, very close to that of the latest comparison algorithm. In summary, experimental results demonstrate that the proposed algorithm effectively removes blur while better preserving the details and edges of the image, giving it practical value in computer vision tasks. Full article
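The parallel spatial/frequency structure of a Fourier residual module can be sketched in a few lines. This is a simplified single-channel sketch, not the paper's implementation: the per-frequency scaling and pointwise spatial scaling stand in for the learned convolutional layers in each branch.

```python
import numpy as np

# Sketch of a parallel spatial/frequency residual block in the spirit of
# a Fourier residual module: one branch transforms features in the Fourier
# domain, the other in the spatial domain, and both are summed with a skip
# connection. All weights are illustrative placeholders.
rng = np.random.default_rng(2)

def fourier_residual_block(x, w_freq, w_spat):
    # Frequency branch: FFT -> per-frequency scaling -> inverse FFT.
    freq = np.fft.fft2(x)
    freq_out = np.real(np.fft.ifft2(freq * w_freq))
    # Spatial branch: pointwise scaling stands in for a conv layer.
    spat_out = x * w_spat
    return x + freq_out + spat_out  # residual (skip) connection

x = rng.normal(size=(32, 32))            # a single-channel feature map
w_freq = rng.normal(size=(32, 32)) * 0.1  # learned per-frequency weights
w_spat = 0.1                              # learned spatial-branch weight
y = fourier_residual_block(x, w_freq, w_spat)
```

The frequency branch lets the block reweight global frequency content directly, which is what gives such modules their leverage on fine texture restoration.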
(This article belongs to the Section Sensing and Imaging)
19 pages, 6254 KB  
Article
Earthquake Magnitude Detection Utilizing a Novel Hybrid Earth–Transformer–LSTM Architecture
by Amir A. Ghavifekr, Elman Ghazaei, Mohsen Mirzajani and Paolo Visconti
Future Internet 2026, 18(3), 143; https://doi.org/10.3390/fi18030143 - 11 Mar 2026
Viewed by 327
Abstract
The reliable detection of earthquakes is one of the most complicated and demanding tasks in seismology. A key challenge is that detection models must be tailored to a specific region, and models trained on one region may not perform as well in others. The scarcity of datasets for most regions of the world poses another challenge, since comprehensive, high-quality datasets are essential for developing robust earthquake detection algorithms. Despite these challenges, developing effective earthquake detection systems is critically important. This paper proposes a novel deep network, Earth–Transformer–LSTM (ETL), to estimate earthquake magnitude with high precision. The proposed method uses Transformer encoders as its first layer to extract deep features from the dataset. To obtain highly accurate results, the extracted features are used as the input to a Long Short-Term Memory (LSTM) neural network. Additionally, the one-dimensional convolution in the Transformer encoders' feed-forward networks is replaced by a Multi-Layer Perceptron (MLP), which performs better. The Turkey earthquake dataset (2000–2018) was used in this research because significant earthquakes have occurred in this region in recent years. According to the obtained results, the proposed method's Root Mean Squared Error (RMSE) is 0.7, a noticeable improvement over advanced conventional models. Full article
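The swap described in the abstract, using an MLP rather than a one-dimensional convolution as the encoder's feed-forward sublayer, can be sketched as below. This is only the feed-forward sublayer (not the full ETL network with attention and the downstream LSTM), and the nonlinearity, dimensions, and weights are illustrative assumptions.

```python
import numpy as np

# Sketch of an MLP feed-forward sublayer inside a Transformer encoder:
# two dense layers with a nonlinearity (tanh as a stand-in), a residual
# connection, and layer normalization. Dimensions/weights are illustrative.
rng = np.random.default_rng(3)

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def mlp_feedforward(x, W1, b1, W2, b2):
    h = np.tanh(x @ W1 + b1)            # dense expansion + nonlinearity
    return layer_norm(x + h @ W2 + b2)  # residual + layer normalization

seq = rng.normal(size=(50, 32))  # 50 time steps, model width 32
W1, b1 = rng.normal(0, 0.1, (32, 64)), np.zeros(64)
W2, b2 = rng.normal(0, 0.1, (64, 32)), np.zeros(32)
out = mlp_feedforward(seq, W1, b1, W2, b2)
```

Unlike a Conv1D sublayer, the dense layers here act on each time step independently with full channel mixing, which is the behavior the MLP replacement provides.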
22 pages, 11365 KB  
Article
Addressing Dense Small-Object Detection in Remote Sensing: An Open-Vocabulary Object Detection Framework
by Menghan Ju, Yingchao Feng, Wenhui Diao and Chunbo Liu
Remote Sens. 2026, 18(6), 851; https://doi.org/10.3390/rs18060851 - 10 Mar 2026
Viewed by 444
Abstract
Remote sensing open-vocabulary object detection focuses on identifying and localizing unseen categories within remote sensing imagery. However, constrained by characteristics such as dense target distribution, complex background interference, and drastic scale variations inherent to remote sensing scenarios, existing methods are prone to background noise interference when extracting features from dense, small target regions. This leads to weakened semantic representation and reduced localization accuracy. Therefore, we propose RS-DINO to address these challenges. Specifically: Firstly, to address the issue of small features being obscured by the background, the feature extraction module incorporates a multi-scale large-kernel attention mechanism. This expands the receptive field while enhancing local detail modelling, significantly improving the feature representation of minute targets. Secondly, a cross-modal feature fusion module employing bidirectional cross-attention achieves deep alignment between image and textual features. Subsequently, a language-guided query selection mechanism enhances detection accuracy through hybrid query strategies. Finally, to enhance the spatial sensitivity and channel adaptability of fusion features, the multimodal decoder integrates a convolutional gated feedforward network, significantly boosting the model’s robustness in dense, multi-scale scenes. Experiments on DIOR, DOTA v2.0, and NWPU-VHR10 demonstrate substantial gains, with fine-tuned RS-DINO surpassing existing methods by 3.5%, 3.7%, and 4.0% in accuracy, respectively. Full article
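The gating idea behind a convolutional gated feedforward network can be sketched as follows. This is a simplified stand-in, not RS-DINO's implementation: a sigmoid gate replaces the depthwise convolution plus activation, and all shapes and weights are hypothetical.

```python
import numpy as np

# Sketch of a gated feedforward unit of the kind used in multimodal
# decoders: features are expanded, split into a value half and a gate
# half, and the gate modulates the value elementwise before projecting
# back. A sigmoid gate stands in for the convolutional gating branch.
rng = np.random.default_rng(4)

def gated_feedforward(x, W_in, W_out):
    h = x @ W_in                      # expand channels
    a, b = np.split(h, 2, axis=-1)    # value half and gate half
    gate = 1.0 / (1.0 + np.exp(-b))   # sigmoid gate (stand-in for conv+act)
    return (a * gate) @ W_out         # gated product, project back

tokens = rng.normal(size=(100, 16))  # 100 query tokens, 16 channels
W_in = rng.normal(0, 0.1, (16, 64))
W_out = rng.normal(0, 0.1, (32, 16))
out = gated_feedforward(tokens, W_in, W_out)
```

The gate lets the decoder suppress background-dominated channels per token, which is the adaptability the abstract attributes to this component in dense, multi-scale scenes.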