Search Results (163)

Search Parameters:
Keywords = Random Time Transformation Technique

20 pages, 875 KB  
Article
A Comparative Analysis of Preprocessing Filters for Deep Learning-Based Equipment Power Efficiency Classification and Prediction Models
by Sang-Ha Sung, Chang-Sung Seo, Michael Pokojovy and Sangjin Kim
Appl. Sci. 2025, 15(20), 11277; https://doi.org/10.3390/app152011277 - 21 Oct 2025
Abstract
The quality of input data is critical to the performance of time-series classification models, particularly in the domain of industrial sensor data, where noise and anomalies are frequent. This study investigates how various filtering-based preprocessing techniques impact the accuracy and robustness of a Transformer model that predicts power efficiency states (Normal, Caution, Warning) from minute-level IIoT sensor data. We evaluated five techniques: a baseline, Simple Moving Average, Median filter, Hampel filter, and Kalman filter. For each technique, we conducted systematic experiments across time windows (360 and 720 min) that reflect real-world industrial inspection cycles, along with five prediction offsets (up to 2880 min). To ensure statistical robustness, we repeated each experiment 20 times with different random seeds. The results show that the Simple Moving Average filter, combined with a 360 min window and a short-term prediction offset, yielded the best overall performance and stability. While other techniques such as the Kalman and Median filters showed situational strengths, methods focused on outlier removal, like the Hampel filter, adversely affected performance. This study provides empirical evidence that a simple and efficient filtering strategy, such as the Simple Moving Average, can significantly and reliably enhance model performance for power efficiency prediction tasks. Full article
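The smoothing techniques this abstract compares can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the window lengths, the Hampel threshold `k`, and the synthetic spiky signal are assumptions, and the baseline and Kalman variants are omitted.

```python
import random
import statistics

def moving_average(xs, w):
    # Simple Moving Average: mean over a trailing window of w samples.
    return [statistics.fmean(xs[max(0, i - w + 1):i + 1]) for i in range(len(xs))]

def median_filter(xs, w):
    # Median filter: robust to isolated spikes within the window.
    return [statistics.median(xs[max(0, i - w + 1):i + 1]) for i in range(len(xs))]

def hampel_filter(xs, w=5, k=3.0):
    # Hampel filter: replace a sample only if it deviates from the window
    # median by more than k scaled MADs (pure outlier removal).
    out = list(xs)
    for i in range(len(xs)):
        win = xs[max(0, i - w):min(len(xs), i + w + 1)]
        med = statistics.median(win)
        mad = statistics.median([abs(v - med) for v in win])
        if abs(xs[i] - med) > k * 1.4826 * mad:
            out[i] = med
    return out

random.seed(0)
signal = [10.0 + random.gauss(0, 0.5) for _ in range(100)]
signal[40] = 50.0  # injected spike
sma = moving_average(signal, 5)
ham = hampel_filter(signal)
print(ham[40])  # spike replaced by the local median, near 10
```

Note how the Hampel filter only replaces the flagged spike, whereas the moving average spreads it across the following window positions.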

36 pages, 3174 KB  
Review
A Bibliometric-Systematic Literature Review (B-SLR) of Machine Learning-Based Water Quality Prediction: Trends, Gaps, and Future Directions
by Jeimmy Adriana Muñoz-Alegría, Jorge Núñez, Ricardo Oyarzún, Cristian Alfredo Chávez, José Luis Arumí and Lien Rodríguez-López
Water 2025, 17(20), 2994; https://doi.org/10.3390/w17202994 - 17 Oct 2025
Viewed by 518
Abstract
Predicting the quality of freshwater, both surface and groundwater, is essential for the sustainable management of water resources. This study collected 1822 articles from the Scopus database (2000–2024) and filtered them using Topic Modeling to create the study corpus. The B-SLR analysis identified exponential growth in scientific publications since 2020, indicating that this field has reached a stage of maturity. The results showed that the predominant techniques for predicting water quality, both for surface and groundwater, fall into three main categories: (i) ensemble models, with Bagging and Boosting representing 43.07% and 25.91%, respectively, particularly random forest (RF), light gradient boosting machine (LightGBM), and extreme gradient boosting (XGB), along with their optimized variants; (ii) deep neural networks such as long short-term memory (LSTM) and convolutional neural network (CNN), which excel at modeling complex temporal dynamics; and (iii) traditional algorithms like artificial neural network (ANN), support vector machines (SVMs), and decision tree (DT), which remain widely used. Current trends point towards the use of hybrid and explainable architectures, with increased application of interpretability techniques. Emerging approaches such as Generative Adversarial Network (GAN) and Group Method of Data Handling (GMDH) for data-scarce contexts, Transfer Learning for knowledge reuse, and Transformer architectures that outperform LSTM in time series prediction tasks were also identified. Furthermore, the most studied water bodies (e.g., rivers, aquifers) and the most commonly used water quality indicators (e.g., WQI, EWQI, dissolved oxygen, nitrates) were identified. The B-SLR and Topic Modeling methodology provided a more robust, reproducible, and comprehensive overview of AI/ML/DL models for freshwater quality prediction, facilitating the identification of thematic patterns and research opportunities. Full article
(This article belongs to the Special Issue Machine Learning Applications in the Water Domain)

31 pages, 1529 KB  
Review
Artificial Intelligence-Enhanced Liquid Biopsy and Radiomics in Early-Stage Lung Cancer Detection: A Precision Oncology Paradigm
by Swathi Priya Cherukuri, Anmolpreet Kaur, Bipasha Goyal, Hanisha Reddy Kukunoor, Areesh Fatima Sahito, Pratyush Sachdeva, Gayathri Yerrapragada, Poonguzhali Elangovan, Mohammed Naveed Shariff, Thangeswaran Natarajan, Jayarajasekaran Janarthanan, Samuel Richard, Shakthidevi Pallikaranai Venkatesaprasath, Shiva Sankari Karuppiah, Vivek N. Iyer, Scott A. Helgeson and Shivaram P. Arunachalam
Cancers 2025, 17(19), 3165; https://doi.org/10.3390/cancers17193165 - 29 Sep 2025
Cited by 1 | Viewed by 1256
Abstract
Background: Lung cancer remains the leading cause of cancer-related mortality globally, largely due to delayed diagnosis in its early stages. While conventional diagnostic tools like low-dose CT and tissue biopsy are routinely used, they suffer from limitations including invasiveness, radiation exposure, cost, and limited sensitivity for early-stage detection. Liquid biopsy, a minimally invasive alternative that captures circulating tumor-derived biomarkers such as ctDNA, cfRNA, and exosomes from body fluids, offers promising diagnostic potential—yet its sensitivity in early disease remains suboptimal. Recent advances in Artificial Intelligence (AI) and radiomics are poised to bridge this gap. Objective: This review aims to explore how AI, in combination with radiomics, enhances the diagnostic capabilities of liquid biopsy for early detection of lung cancer and facilitates personalized monitoring strategies. Content Overview: We begin by outlining the molecular heterogeneity of lung cancer, emphasizing the need for earlier, more accurate detection strategies. The discussion then transitions into liquid biopsy and its key analytes, followed by an in-depth overview of AI techniques—including machine learning (e.g., SVMs, Random Forest) and deep learning models (e.g., CNNs, RNNs, GANs)—that enable robust pattern recognition across multi-omics datasets. The role of radiomics, which quantitatively extracts spatial and morphological features from imaging modalities such as CT and PET, is explored in conjunction with AI to provide an integrative, multimodal approach. This convergence supports the broader vision of precision medicine by integrating omics data, imaging, and electronic health records. Discussion: The synergy between AI, liquid biopsy, and radiomics signifies a shift from traditional diagnostics toward dynamic, patient-specific decision-making. Radiomics contributes spatial information, while AI improves pattern detection and predictive modeling. 
Despite these advancements, challenges remain—including data standardization, limited annotated datasets, the interpretability of deep learning models, and ethical considerations. A push toward rigorous validation and multimodal AI frameworks is necessary to facilitate clinical adoption. Conclusion: The integration of AI with liquid biopsy and radiomics holds transformative potential for early lung cancer detection. This non-invasive, scalable, and individualized diagnostic paradigm could significantly reduce lung cancer mortality through timely and targeted interventions. As technology and regulatory pathways mature, collaborative research is crucial to standardize methodologies and translate this innovation into routine clinical practice. Full article
(This article belongs to the Special Issue The Genetic Analysis and Clinical Therapy in Lung Cancer: 2nd Edition)

18 pages, 2657 KB  
Article
GRE: A Framework for Significant SNP Identification Associated with Wheat Yield Leveraging GWAS–Random Forest Joint Feature Selection and Explainable Machine Learning Genomic Selection Algorithm
by Mei Song, Shanghui Zhang, Shijie Qiu, Ran Qin, Chunhua Zhao, Yongzhen Wu, Han Sun, Guangchen Liu and Fa Cui
Genes 2025, 16(10), 1125; https://doi.org/10.3390/genes16101125 - 24 Sep 2025
Viewed by 564
Abstract
Background: Facing global wheat production pressures such as environmental degradation and reduced cultivated land, breeding innovation is urgently needed to boost yields. Genomic selection (GS) is a useful wheat breeding technology that makes the breeding process more efficient, increasing the genetic gain per unit time and cost. However, precise estimation of genomic estimated breeding values (GEBVs) from genome-wide markers is usually hampered by high-dimensional genomic data. Methods: To address this, we propose GRE, a framework that combines the biological significance of genome-wide association studies (GWAS) with the prediction efficiency of random forests (RF) in an explainable machine learning GS model. First, GRE identifies significant SNPs affecting wheat yield traits by comparing 24 constructed SNP subsets (intersections and unions of markers selected by GWAS and RF) to analyze the impact of marker scale. Furthermore, GRE compares six GS algorithms (GBLUP and five machine learning models), evaluating performance via prediction accuracy (Pearson correlation coefficient, PCC) and error. Additionally, GRE leverages Shapley additive explanations (SHAP), an explainable-AI technique, to overcome the “black box” limitation of traditional GS models, enabling cross-scale quantitative analysis of how significant SNPs affect yield traits. Results: XGBoost and ElasticNet performed best on the union (383 SNPs) of the top 200 SNPs from GWAS and RF, with high accuracy (PCC > 0.864) and stability (standard deviation, SD < 0.005), and SHAP precisely explained the main and interaction effects on wheat yield of the significant SNPs identified by XGBoost. Conclusions: This study provides tool support for intelligent breeding chip design, mining of important trait genes, and field deployment of GS technology, aiding sustainable global agricultural productivity. Full article
(This article belongs to the Section Plant Genetics and Genomics)
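The intersection/union subset construction at the heart of GRE can be illustrated with a toy example; the SNP names, scores, and top-k cutoff below are made up, standing in for GWAS p-values and RF importances.

```python
# Toy joint feature selection: take the top-k markers from two rankings
# (smallest GWAS p-values, largest RF importances) and combine them.
gwas_pvalues = {"snp1": 1e-8, "snp2": 0.03, "snp3": 1e-5, "snp4": 0.5, "snp5": 2e-6}
rf_importance = {"snp1": 0.30, "snp2": 0.25, "snp3": 0.01, "snp4": 0.20, "snp5": 0.02}

k = 3
top_gwas = set(sorted(gwas_pvalues, key=gwas_pvalues.get)[:k])                  # smallest p first
top_rf = set(sorted(rf_importance, key=rf_importance.get, reverse=True)[:k])   # largest importance first

union_set = top_gwas | top_rf          # broader marker panel for the GS model
intersection_set = top_gwas & top_rf   # markers supported by both views

print(sorted(union_set), sorted(intersection_set))
```

Varying `k` for each ranking and switching between union and intersection is what yields a grid of candidate subsets like the 24 evaluated in the paper.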

16 pages, 1287 KB  
Article
From Chaos to Security: A Comparative Study of Lorenz and Rössler Systems in Cryptography
by Alexandru Dinu
Cryptography 2025, 9(3), 58; https://doi.org/10.3390/cryptography9030058 - 12 Sep 2025
Viewed by 498
Abstract
Chaotic systems, governed by deterministic nonlinear equations yet exhibiting highly complex and unpredictable behaviors, have emerged as valuable tools at the intersection of mathematics, engineering, and information security. This paper presents a comparative study of the Lorenz and Rössler systems, focusing on their dynamic complexity and statistical independence—two critical properties for applications in chaos-based cryptography. By integrating techniques from nonlinear dynamics (e.g., Lyapunov exponents, KS entropy, Kaplan–Yorke dimension) and statistical testing (e.g., chi-square and Gaussian transformation-based independence tests), we provide a quantitative framework to evaluate the pseudo-randomness potential of chaotic trajectories. Our results show that the Lorenz system offers faster convergence to chaos and superior statistical independence over time, making it more suitable for rapid encryption schemes. In contrast, the Rössler system provides complementary insights due to its simpler attractor and longer memory. These findings contribute to a multidisciplinary methodology for selecting and optimizing chaotic systems in secure communication and signal processing contexts. Full article
(This article belongs to the Special Issue Interdisciplinary Cryptography)
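The sensitive dependence on initial conditions that makes the Lorenz system attractive for cryptography is easy to demonstrate; the forward-Euler integrator, step size, and initial states below are illustrative choices, not the paper's setup.

```python
# Minimal Lorenz integrator with the classic parameters
# sigma=10, rho=28, beta=8/3; dt and the initial state are assumptions.
def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x0, steps=50000):
    x, y, z = x0, 1.0, 1.05
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

a = trajectory(1.0)
b = trajectory(1.0 + 1e-9)  # perturb the start by one part in a billion
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)  # nearby starts end up in macroscopically different states
```

This divergence, quantified rigorously by the Lyapunov exponents the paper measures, is what lets chaotic trajectories serve as pseudo-random keystreams.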

21 pages, 10827 KB  
Article
Smart Monitoring of Power Transformers in Substation 4.0: Multi-Sensor Integration and Machine Learning Approach
by Fabio Henrique de Souza Duz, Tiago Goncalves Zacarias, Ronny Francis Ribeiro Junior, Fabio Monteiro Steiner, Frederico de Oliveira Assuncao, Erik Leandro Bonaldi and Luiz Eduardo Borges-da-Silva
Sensors 2025, 25(17), 5469; https://doi.org/10.3390/s25175469 - 3 Sep 2025
Cited by 1 | Viewed by 884
Abstract
Power transformers are critical components in electrical power systems, where failures can cause significant outages and economic losses. Traditional maintenance strategies, typically based on offline inspections, are increasingly insufficient to meet the reliability requirements of modern digital substations. This work presents an integrated multi-sensor monitoring framework that combines online frequency response analysis (OnFRA® 4.0), capacitive tap-based monitoring (FRACTIVE® 4.0), dissolved gas analysis, and temperature measurements. All data streams are synchronized and managed within a SCADA system that supports real-time visualization and historical traceability. To enable automated fault diagnosis, a Random Forest classifier was trained using simulated datasets derived from laboratory experiments that emulate typical transformer and bushing degradation scenarios. Principal Component Analysis was employed for dimensionality reduction, improving model interpretability and computational efficiency. The proposed model achieved perfect classification metrics on the simulated data, demonstrating the feasibility of combining high-fidelity monitoring hardware with machine learning techniques for anomaly detection. Although no in-service failures have been recorded to date, the monitoring infrastructure has already been tested and validated under laboratory conditions, enabling continuous data acquisition. Full article
(This article belongs to the Section Electronic Sensors)

30 pages, 2358 KB  
Article
Prediction of Mental Fatigue for Control Room Operators: Innovative Data Processing and Multi-Model Evaluation
by Yong Chen, Jiangtao Chen, Xian Xie, Wenchao Yi and Zuzhen Ji
Mathematics 2025, 13(17), 2794; https://doi.org/10.3390/math13172794 - 30 Aug 2025
Viewed by 732
Abstract
When control room operators encounter mental fatigue, the accuracy of their work will decline. Accurately predicting the mental fatigue of industrial control room operators is of great significance for preventing operational mistakes. In this study, facial data of experimental participants were collected via cameras, and fatigue levels were evaluated using an improved Karolinska Sleepiness Scale (KSS). Subsequently, a dataset of fatigue samples based on facial features was established. A novel early-warning framework was put forward, framing fatigue prediction as a time series prediction task. Two innovative data processing techniques were introduced. Reverse data binning transforms discrete fatigue labels into continuous values through a random perturbation of ≤0.3, enabling precise temporal modeling. A fatigue-aware data screening method uses the 6 s rule and a sliding window to filter out transient states and preserve key transition patterns. Five prediction models, namely Light Gradient Boosting Machine (LightGBM), Gated Recurrent Unit (GRU), Temporal Convolutional Network (TCN), Transformer, and Attention-based Temporal Convolutional Network (Attention-based TCN), were evaluated using the collected dataset of fatigue samples based on facial features. The results indicated that LightGBM demonstrated outstanding performance, with an accuracy rate reaching 93.33% and an average absolute error of 0.067. It significantly outperformed deep learning models. Moreover, its computational efficiency further verified its suitability for real-time deployment. This research integrates predictive modeling with industrial safety applications, providing evidence for the feasibility of machine learning in proactive fatigue management. Full article
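The reverse-data-binning idea can be sketched as follows; treating the ≤0.3 perturbation as symmetric uniform noise is an assumption made for illustration.

```python
import random

def reverse_data_binning(labels, max_perturb=0.3, seed=42):
    # Turn discrete fatigue labels into continuous targets by adding a
    # bounded uniform perturbation, so a time-series regressor can model
    # smooth transitions between fatigue levels.
    rng = random.Random(seed)
    return [lab + rng.uniform(-max_perturb, max_perturb) for lab in labels]

labels = [1, 1, 2, 2, 3]
continuous = reverse_data_binning(labels)
# Each continuous value stays within 0.3 of its original label,
# so rounding recovers the discrete level exactly.
recovered = [round(v) for v in continuous]
print(recovered)
```

Because the perturbation is strictly below 0.5, the mapping is lossless: the discrete KSS level can always be read back from the regressor's continuous output.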

21 pages, 3049 KB  
Article
SRoFF-Yolover: A Small-Target Detection Model for Suspicious Regions of Forest Fire
by Lairong Chen, Ling Li, Pengle Cheng and Ying Huang
Forests 2025, 16(8), 1335; https://doi.org/10.3390/f16081335 - 16 Aug 2025
Viewed by 594
Abstract
The rapid detection and confirmation of Suspicious Regions of Forest Fire (SRoFF) are critical for timely alerts and firefighting operations. In the early stages of forest fires, small flames and heavy occlusion lead to low accuracy, false detections, omissions, and slow inference in existing target-detection algorithms. We constructed the Suspicious Regions of Forest Fire Dataset (SRFFD), comprising publicly available datasets, relevant images collected from online searches, and images generated through various image enhancement techniques. The SRFFD contains a total of 64,584 images. In terms of effectiveness, the individual augmentation techniques rank as follows (in descending order): HSV (Hue Saturation and Value) random enhancement, copy-paste augmentation, and affine transformation. A detection model named SRoFF-Yolover is proposed for identifying suspicious regions of forest fire, based on the YOLOv8. An embedding layer that effectively integrates seasonal and temporal information into the image enhances the prediction accuracy of the SRoFF-Yolover. The SRoFF-Yolover enhances YOLOv8 by (1) adopting dilated convolutions in the Backbone to enlarge feature map receptive fields; (2) incorporating the Convolutional Block Attention Module (CBAM) prior to the Neck’s C2fLayer for small-target attention; and (3) reconfiguring the Backbone-Neck linkage via P2, P4, and SPPF. Compared with the baseline model (YOLOv8s), the SRoFF-Yolover achieves an 18.1% improvement in mAP@0.5, a 4.6% increase in Frames Per Second (FPS), a 2.6% reduction in Giga Floating-Point Operations (GFLOPs), and a 3.2% decrease in the total number of model parameters (#Params). The SRoFF-Yolover can effectively detect suspicious regions of forest fire, particularly during winter nights. Experiments demonstrated that the detection accuracy of the SRoFF-Yolover for suspicious regions of forest fire is higher at night than during daytime in the same season. Full article
(This article belongs to the Section Natural Hazards and Risk Management)
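Of the augmentations ranked above, HSV random enhancement is the simplest to sketch for a single pixel; the gain ranges below are typical YOLO-style defaults assumed here, not the paper's values.

```python
import colorsys
import random

def hsv_random_enhance(pixel, rng, h_gain=0.015, s_gain=0.7, v_gain=0.4):
    # Jitter hue, saturation, and value by random gains, then convert back
    # to 8-bit RGB; results are clamped to valid HSV ranges.
    r, g, b = (c / 255.0 for c in pixel)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + rng.uniform(-h_gain, h_gain)) % 1.0
    s = min(1.0, max(0.0, s * rng.uniform(1 - s_gain, 1 + s_gain)))
    v = min(1.0, max(0.0, v * rng.uniform(1 - v_gain, 1 + v_gain)))
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return tuple(round(c * 255) for c in (r, g, b))

rng = random.Random(0)
flame_orange = (255, 140, 0)  # a flame-like pixel, purely illustrative
augmented = hsv_random_enhance(flame_orange, rng)
print(augmented)
```

Applied image-wide per training sample, this kind of color jitter diversifies flame appearance across lighting conditions without altering geometry.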

18 pages, 10727 KB  
Article
Time Series Transformer-Based Modeling of Pavement Skid and Texture Deterioration
by Lu Gao, Zia Ud Din, Kinam Kim and Ahmed Senouci
Constr. Mater. 2025, 5(3), 55; https://doi.org/10.3390/constrmater5030055 - 14 Aug 2025
Viewed by 521
Abstract
This study investigates the deterioration of skid resistance and surface macrotexture following preventive maintenance using micro-milling techniques. Field data were collected from 31 asphalt pavement sections located across four climatic zones in Texas. The data encompasses a variety of surface types, milling depths, operational speeds, and drum configurations. A standardized data collection protocol was followed, with measurements taken before milling, immediately after treatment, and at 3, 6, 12, and 18 months post-treatment. Skid number and Mean Profile Depth (MPD) were used to evaluate surface friction and texture characteristics. The dataset was reformatted into a time-series structure with 930 observations, including contextual variables such as climatic zone, treatment parameters, and baseline surface condition. A comparative modeling framework was applied to predict the deterioration trends of both skid resistance and macrotexture over time. Eight regression models, including linear, tree-based, and ensemble methods, were evaluated alongside a time series Transformer model. The results show that the Transformer model achieved the highest prediction accuracy for skid resistance (R2 = 0.981), while Random Forest performed best for macrotexture prediction (R2 = 0.838). The findings indicate that the degradation of surface characteristics after preventive maintenance is non-linear and influenced by a combination of environmental and operational factors. This study demonstrates the effectiveness of data-driven modeling in supporting transportation agencies with pavement performance forecasting and maintenance planning. Full article

20 pages, 732 KB  
Review
AI Methods Tailored to Influenza, RSV, HIV, and SARS-CoV-2: A Focused Review
by Achilleas Livieratos, George C. Kagadis, Charalambos Gogos and Karolina Akinosoglou
Pathogens 2025, 14(8), 748; https://doi.org/10.3390/pathogens14080748 - 30 Jul 2025
Viewed by 1595
Abstract
Artificial intelligence (AI) techniques—ranging from hybrid mechanistic–machine learning (ML) ensembles to gradient-boosted decision trees, support-vector machines, and deep neural networks—are transforming the management of seasonal influenza, respiratory syncytial virus (RSV), human immunodeficiency virus (HIV), and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Symptom-based triage models using eXtreme Gradient Boosting (XGBoost) and Random Forests, as well as imaging classifiers built on convolutional neural networks (CNNs), have improved diagnostic accuracy across respiratory infections. Transformer-based architectures and social media surveillance pipelines have enabled real-time monitoring of COVID-19. In HIV research, support-vector machines (SVMs), logistic regression, and deep neural network (DNN) frameworks advance viral-protein classification and drug-resistance mapping, accelerating antiviral and vaccine discovery. Despite these successes, persistent challenges remain—data heterogeneity, limited model interpretability, hallucinations in large language models (LLMs), and infrastructure gaps in low-resource settings. We recommend standardized open-access data pipelines and integration of explainable-AI methodologies to ensure safe, equitable deployment of AI-driven interventions in future viral-outbreak responses. Full article
(This article belongs to the Section Viral Pathogens)

14 pages, 29613 KB  
Article
Unsupervised Insulator Defect Detection Method Based on Masked Autoencoder
by Yanying Song and Wei Xiong
Sensors 2025, 25(14), 4271; https://doi.org/10.3390/s25144271 - 9 Jul 2025
Viewed by 643
Abstract
With the rapid expansion of high-speed rail infrastructure, maintaining the structural integrity of insulators is critical to operational safety. However, conventional defect detection techniques typically rely on extensive labeled datasets, struggle with class imbalance, and often fail to capture large-scale structural anomalies. In this paper, we present an unsupervised insulator defect detection framework based on a masked autoencoder (MAE) architecture. Built upon a vision transformer (ViT), the model employs an asymmetric encoder-decoder structure and leverages a high-ratio random masking scheme during training to facilitate robust representation learning. At inference, a dual-pass interval masking strategy enhances defect localization accuracy. Benchmark experiments across multiple datasets demonstrate that our method delivers competitive image- and pixel-level performance while significantly reducing computational overhead compared to existing ViT-based approaches. By enabling high-precision defect detection through image reconstruction without requiring manual annotations, this approach offers a scalable and efficient solution for real-time industrial inspection under limited supervision. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
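The high-ratio random masking scheme central to masked-autoencoder training reduces to index bookkeeping; the 75% ratio and the 14×14 patch grid are common ViT defaults assumed here for illustration.

```python
import random

def random_mask(num_patches, mask_ratio=0.75, seed=0):
    # High-ratio random masking: shuffle patch indices and keep only a
    # small visible subset; the encoder processes just the visible patches,
    # and the decoder reconstructs the masked ones.
    rng = random.Random(seed)
    idx = list(range(num_patches))
    rng.shuffle(idx)
    num_masked = int(num_patches * mask_ratio)
    masked, visible = idx[:num_masked], idx[num_masked:]
    return sorted(visible), sorted(masked)

visible, masked = random_mask(196)  # 14x14 patch grid for a 224px ViT input
print(len(visible), len(masked))  # 49 147
```

Because the encoder sees only a quarter of the patches, training cost drops sharply, which is the computational advantage the paper builds on.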

17 pages, 2101 KB  
Article
Enhancing DDoS Attacks Mitigation Using Machine Learning and Blockchain-Based Mobile Edge Computing in IoT
by Mahmoud Chaira, Abdelkader Belhenniche and Roman Chertovskih
Computation 2025, 13(7), 158; https://doi.org/10.3390/computation13070158 - 1 Jul 2025
Cited by 2 | Viewed by 1617
Abstract
The widespread adoption of Internet of Things (IoT) devices has been accompanied by a remarkable rise in both the frequency and intensity of Distributed Denial of Service (DDoS) attacks, which aim to overwhelm and disrupt the availability of networked systems and connected infrastructures. In this paper, we present a novel approach to DDoS attack detection and mitigation that integrates state-of-the-art machine learning techniques with Blockchain-based Mobile Edge Computing (MEC) in IoT environments. Our solution leverages the decentralized and tamper-resistant nature of Blockchain technology to enable secure and efficient data collection and processing at the network edge. We evaluate multiple machine learning models, including K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Transformer architectures, and LightGBM, using the CICDDoS2019 dataset. Our results demonstrate that Transformer models achieve a superior detection accuracy of 99.78%, while RF follows closely with 99.62%, and LightGBM offers optimal efficiency for real-time detection. This integrated approach significantly enhances detection accuracy and mitigation effectiveness compared to existing methods, providing a robust and adaptive mechanism for identifying and mitigating malicious traffic patterns in IoT environments. Full article
(This article belongs to the Section Computational Engineering)

21 pages, 3747 KB  
Article
An Optimized Multi-Stage Framework for Soil Organic Carbon Estimation in Citrus Orchards Based on FTIR Spectroscopy and Hybrid Machine Learning Integration
by Yingying Wei, Xiaoxiang Mo, Shengxin Yu, Saisai Wu, He Chen, Yuanyuan Qin and Zhikang Zeng
Agriculture 2025, 15(13), 1417; https://doi.org/10.3390/agriculture15131417 - 30 Jun 2025
Cited by 1 | Viewed by 651
Abstract
Soil organic carbon (SOC) is a critical indicator of soil health and carbon sequestration potential. Accurate, efficient, and scalable SOC estimation is essential for sustainable orchard management and climate-resilient agriculture. However, traditional visible–near-infrared (Vis–NIR) spectroscopy often suffers from limited chemical specificity and weak adaptability in heterogeneous soil environments. To overcome these limitations, this study develops a five-stage modeling framework that systematically integrates Fourier Transform Infrared (FTIR) spectroscopy with hybrid machine learning techniques for non-destructive SOC prediction in citrus orchard soils. The proposed framework includes (1) FTIR spectral acquisition; (2) a comparative evaluation of nine spectral preprocessing techniques; (3) dimensionality reduction via three representative feature selection algorithms, namely the Successive Projections Algorithm (SPA), Competitive Adaptive Reweighted Sampling (CARS), and Principal Component Analysis (PCA); (4) regression modeling using six machine learning algorithms, namely the Random Forest (RF), Support Vector Regression (SVR), Gray Wolf Optimized SVR (SVR-GWO), Partial Least Squares Regression (PLSR), Principal Component Regression (PCR), and the Back-propagation Neural Network (BPNN); and (5) comprehensive performance assessments and the identification of the optimal modeling pathway. The results showed that second-derivative (SD) preprocessing significantly enhanced the spectral signal-to-noise ratio. Among feature selection methods, the SPA reduced over 300 spectral bands to 10 informative wavelengths, enabling efficient modeling with minimal information loss. The SD + SPA + RF pipeline achieved the highest prediction performance (R2 = 0.84, RMSE = 4.67 g/kg, and RPD = 2.51), outperforming the PLSR and BPNN models. This study presents a reproducible and scalable FTIR-based modeling strategy for SOC estimation in orchard soils. 
Its adaptive preprocessing, effective variable selection, and ensemble learning integration offer a robust solution for real-time, cost-effective, and transferable carbon monitoring, advancing precision soil sensing in orchard ecosystems.
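As a rough illustration of the variable-selection step described above, the sketch below applies a crude second-derivative transform (plain `np.gradient`, whereas the study most likely used a smoothed Savitzky–Golay derivative) followed by a minimal greedy Successive Projections Algorithm on synthetic spectra. The function name `spa_select` and all data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def spa_select(X, n_select, start=0):
    """Greedy SPA sketch: at each step, pick the band (column) with the
    largest norm after projecting out the bands already selected."""
    Xp = X.astype(float).copy()
    selected = [start]
    for _ in range(n_select - 1):
        v = Xp[:, selected[-1]].copy()
        denom = v @ v
        # project every column onto the orthogonal complement of v
        for j in range(Xp.shape[1]):
            Xp[:, j] -= (v @ Xp[:, j]) / denom * v
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -1.0  # never re-pick an already chosen band
        selected.append(int(np.argmax(norms)))
    return sorted(selected)

# Synthetic stand-in for FTIR spectra: 40 samples x 300 bands.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(40, 300)).cumsum(axis=1)
# Crude second-derivative (SD) preprocessing along the wavelength axis.
sd = np.gradient(np.gradient(spectra, axis=1), axis=1)
bands = spa_select(sd, n_select=10)
print(bands)
```

The selected 10 bands would then feed a downstream regressor such as a Random Forest, mirroring the SD + SPA + RF pipeline the abstract reports as best-performing.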
(This article belongs to the Section Agricultural Technology)
23 pages, 6078 KB  
Article
Multi-Energy Optimal Dispatching of Port Microgrids Taking into Account the Uncertainty of Photovoltaic Power
by Xiaoyong Wang, Xing Wei, Hanqing Zhang, Bailiang Liu and Yanmin Wang
Energies 2025, 18(12), 3216; https://doi.org/10.3390/en18123216 - 19 Jun 2025
Cited by 1 | Viewed by 552
Abstract
To tackle the problems of high scheduling costs and low photovoltaic (PV) accommodation rates in port microgrids, caused by the coupling of uncertainty in new energy output with load randomness, this paper proposes an optimized scheduling method that integrates scenario analysis with multi-energy complementarity. First, a set of typical scenarios representing PV and load uncertainty is generated using an improved Iterative Self-Organizing Data Analysis Technique Algorithm (ISODATA) clustering method combined with backward scenario reduction. Second, a multi-energy complementary system model is constructed, comprising thermal power, PV, energy storage, electric vehicle (EV) clusters, and demand response. A planning model centered on economy is then established, and supply–demand balance and cost control are achieved through multi-energy coordinated optimization. Simulation results for the port microgrid of LEKKI Port in Nigeria show that the proposed method reduces system operating costs by 18% and improves the PV accommodation rate through energy storage time-shifting, flexible EV scheduling, and demand response incentives. The findings provide theoretical and technical support for the low-carbon transformation of energy systems in high-volatility load scenarios such as ports.
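The backward reduction step mentioned above can be sketched as follows. This is a minimal, generic implementation of backward scenario reduction on synthetic equiprobable scenarios, not the paper's actual code; the deletion rule used here (drop the scenario whose probability times distance to its nearest remaining neighbour is smallest, transferring its probability to that neighbour) is one common variant.

```python
import numpy as np

def backward_reduce(scenarios, probs, n_keep):
    """Backward scenario reduction sketch: repeatedly delete the scenario
    with the smallest (probability x nearest-neighbour distance) cost and
    give its probability to that nearest remaining scenario."""
    S = list(range(len(scenarios)))
    probs = np.asarray(probs, dtype=float).copy()
    while len(S) > n_keep:
        best = None
        for i in S:
            d = min(np.linalg.norm(scenarios[i] - scenarios[j])
                    for j in S if j != i)
            cost = probs[i] * d
            if best is None or cost < best[0]:
                best = (cost, i)
        i = best[1]
        S.remove(i)
        # nearest remaining scenario absorbs the deleted probability mass
        j = min(S, key=lambda k: np.linalg.norm(scenarios[i] - scenarios[k]))
        probs[j] += probs[i]
        probs[i] = 0.0
    return S, probs

# 20 synthetic daily PV/load profiles (24 hourly values), equiprobable.
rng = np.random.default_rng(1)
profiles = rng.normal(size=(20, 24))
kept, p = backward_reduce(profiles, np.full(20, 1 / 20), n_keep=5)
print(kept, p[kept].sum())
```

In practice the surviving scenarios (and their accumulated probabilities) would parameterize the stochastic dispatch model; the clustering step that precedes reduction is omitted here.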

29 pages, 3251 KB  
Article
Optimizing Energy Forecasting Using ANN and RF Models for HVAC and Heating Predictions
by Khaled M. Salem, Javier M. Rey-Hernández, A. O. Elgharib and Francisco J. Rey-Martínez
Appl. Sci. 2025, 15(12), 6806; https://doi.org/10.3390/app15126806 - 17 Jun 2025
Cited by 3 | Viewed by 955
Abstract
Industry 5.0 is transforming energy demand by integrating sustainability into energy planning, ensuring market stability while minimizing environmental impact for future generations. Energy consumption can be analyzed at daily, monthly, or annual resolution, and artificial intelligence approaches, particularly Artificial Neural Networks (ANNs) and Random Forests (RFs), offer a way to capture these patterns within the Industry 5.0 framework. This study employs machine learning techniques to analyze energy consumption data from two distinct buildings in Spain: the LUCIA facility in Valladolid and the FUHEM Building in Madrid. The implementation was conducted using custom MATLAB code developed in-house. Our approach systematically evaluates and compares the predictive performance of ANNs and RFs for energy demand forecasting, leveraging each algorithm's characteristics to assess its suitability for this application. Model performance is evaluated using the Root Mean Square Percentage Error (RMSPE), Root Mean Square Relative Percentage Error (RMSRPE), Mean Absolute Percentage Error (MAPE), Mean Absolute Relative Percentage Error (MARPE), Kling–Gupta Efficiency (KGE), and the coefficient of determination, R2. Training times were also compared: the LUCIA RF model trained in 2.8 s versus 40 s for the ANN, and the FUHEM RF model in 0.3 s versus 1.1 s for the ANN. The performance of each model is reported in detail to show the effectiveness of both approaches.
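The error metrics named above are standard. Since the study's implementation is in MATLAB, the NumPy sketch below encodes the usual textbook definitions of RMSPE, MAPE, KGE, and R2 as an assumption about the formulas used, not the authors' code.

```python
import numpy as np

def rmspe(obs, sim):
    """Root Mean Square Percentage Error, in percent."""
    return float(np.sqrt(np.mean(((obs - sim) / obs) ** 2)) * 100)

def mape(obs, sim):
    """Mean Absolute Percentage Error, in percent."""
    return float(np.mean(np.abs((obs - sim) / obs)) * 100)

def kge(obs, sim):
    """Kling-Gupta Efficiency: 1 is a perfect match."""
    r = np.corrcoef(obs, sim)[0, 1]          # linear correlation
    alpha = np.std(sim) / np.std(obs)        # variability ratio
    beta = np.mean(sim) / np.mean(obs)       # bias ratio
    return float(1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2))

def r2(obs, sim):
    """Coefficient of determination."""
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return float(1 - ss_res / ss_tot)

# Toy check: a prediction with a uniform +10% bias.
obs = np.array([1.0, 2.0, 3.0, 4.0])
sim = obs * 1.1
print(mape(obs, sim), kge(obs, sim), r2(obs, sim))
```

A perfect forecast yields KGE = 1 and R2 = 1 with zero percentage errors, which makes these definitions easy to sanity-check before comparing ANN and RF outputs.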
(This article belongs to the Special Issue Infrastructure Resilience Analysis)
