Search Results (94)

Search Parameters:
Keywords = AutoGrid

19 pages, 1563 KB  
Article
Small Object Tracking in LiDAR Point Clouds: Learning the Target-Awareness Prototype and Fine-Grained Search Region
by Shengjing Tian, Yinan Han, Xiantong Zhao and Xiuping Liu
Sensors 2025, 25(12), 3633; https://doi.org/10.3390/s25123633 - 10 Jun 2025
Viewed by 868
Abstract
Light Detection and Ranging (LiDAR) point clouds are an essential perception modality for artificial intelligence systems like autonomous driving and robotics, where the ubiquity of small objects in real-world scenarios substantially challenges the visual tracking of small targets amidst the vastness of point cloud data. Current methods predominantly focus on developing universal frameworks for general object categories, often sidelining the persistent difficulties associated with small objects. These challenges stem from a scarcity of foreground points and a low tolerance for disturbances. To address these challenges, we propose a deep neural network framework that trains a Siamese network for feature extraction and incorporates two pivotal modules: the target-awareness prototype mining (TAPM) module and the regional grid subdivision (RGS) module. The TAPM module utilizes the reconstruction mechanism of the masked auto-encoder to distill prototypes within the feature space, thereby enhancing the salience of foreground points and aiding in the precise localization of small objects. To heighten tolerance to disturbances in feature maps, the RGS module is devised to retrieve detailed features of the search area, capitalizing on Vision Transformer and pixel shuffle technologies. Furthermore, beyond standard experimental configurations, we have meticulously crafted scaling experiments to assess the robustness of various trackers when dealing with small objects. Comprehensive evaluations show our method achieves a mean Success of 64.9% and 60.4% under original and scaled settings, outperforming benchmarks by +3.6% and +5.4%, respectively. Full article
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems)
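The RGS module's pixel shuffle step can be illustrated in isolation. The sketch below is a plain-Python version of the generic sub-pixel rearrangement (as popularized by `nn.PixelShuffle`), shown only to clarify the operation; the paper's actual RGS wiring, tensor shapes, and framework are not given in this listing and are assumptions here.

```python
def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) nested-list tensor into (C, H*r, W*r),
    trading channel depth for spatial resolution (upscale factor r)."""
    cr2, h, w = len(x), len(x[0]), len(x[0][0])
    c = cr2 // (r * r)
    out = [[[0.0] * (w * r) for _ in range(h * r)] for _ in range(c)]
    for ch in range(c):
        for i in range(r):        # sub-pixel row offset
            for j in range(r):    # sub-pixel column offset
                src = x[ch * r * r + i * r + j]
                for y in range(h):
                    for z in range(w):
                        out[ch][y * r + i][z * r + j] = src[y][z]
    return out

# Four 1x1 channels become one 2x2 map.
print(pixel_shuffle([[[1.0]], [[2.0]], [[3.0]], [[4.0]]], 2))  # -> [[[1.0, 2.0], [3.0, 4.0]]]
```

Recovering fine spatial detail this way is what allows a coarse search-region feature map to be subdivided into a finer grid without transposed convolutions.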

20 pages, 3859 KB  
Article
Thermal Mitigation in Coastal Cities: Marine and Urban Morphology Effects on Land Surface Temperature in Xiamen
by Tingting Hong, Xiaohui Huang, Qinfei Lv, Suting Zhao, Zeyang Wang and Yuanchuan Yang
Buildings 2025, 15(7), 1170; https://doi.org/10.3390/buildings15071170 - 2 Apr 2025
Cited by 2 | Viewed by 618
Abstract
Amidst rapid global urbanization and economic integration, coastal cities have undergone significant changes in urban spatial patterns. These changes have further worsened the complex urban thermal environment, making it crucial to study the interaction between human-driven development and natural climate systems. To address the insufficient quantification of marine elements in the urban planning of subtropical coastal zones, this study takes Xiamen, a typical deep-water port city, as an example to construct a spatial analysis framework integrating marine boundary layer parameters. This research employs interpolation simulation, atmospheric correction, and other techniques to invert land use and Landsat 8 data, deriving urban morphological elements and Land Surface Temperature (LST) data. These data were then assigned to 500 m grids for analysis. A bivariate spatial auto-correlation model was applied to examine the relationship between urban carbon emissions and LST. The study area was categorized based on the influence of marine factors, and the spatial relationships between urban morphological elements and LST were analyzed using a multiscale geographically weighted regression model. Three Xiamen-specific discoveries emerged: (1) the ocean exerts a significant thermal mitigation effect on the city, with an average influence range of 7.94 km; (2) the relationship between urban morphology and the thermal environment exhibits notable spatial heterogeneity across different regions; and (3) to mitigate urban thermal environments, connected green corridors should be established in the southern coastal areas of outer districts in regions significantly influenced by the ocean. In areas with less marine influence, spatial complexity should be introduced by disrupting relatively intact blue–green spaces, while regions unaffected by the ocean should focus on increasing green spaces and reducing impervious surfaces and water bodies. These findings directly inform Xiamen’s 2035 Master Plan for combating heat island effects in coastal special economic zones, providing transferable metrics for similar maritime cities. Full article
(This article belongs to the Special Issue Advanced Research on the Urban Heat Island Effect and Climate)
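The "bivariate spatial auto-correlation model" mentioned here is typically a global bivariate Moran's I. Below is a minimal sketch under one common formulation, using z-scored variables and a caller-supplied spatial weight matrix; both the formulation and the toy contiguity weights are assumptions on my part, not details from the paper.

```python
def bivariate_morans_i(x, y, w):
    """Global bivariate Moran's I between variables x and y measured on the
    same spatial units, given a weight matrix w (w[i][j] >= 0, zero diagonal).
    Values near +1/-1 indicate strong positive/negative spatial association."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = (sum((v - mx) ** 2 for v in x) / n) ** 0.5
    sy = (sum((v - my) ** 2 for v in y) / n) ** 0.5
    zx = [(v - mx) / sx for v in x]   # z-scored first variable
    zy = [(v - my) / sy for v in y]   # z-scored second variable
    s0 = sum(sum(row) for row in w)   # total weight, used for normalization
    num = sum(w[i][j] * zx[i] * zy[j] for i in range(n) for j in range(n))
    return num / s0
```

With x as a morphology metric and y as LST per 500 m grid cell, a significantly positive I indicates that high values of one variable cluster near high values of the other.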

23 pages, 9538 KB  
Article
Demand Flexibility of Pre-Cooling Strategies for City-Scale Buildings Through Urban Building Energy Modeling
by Anni Xu, Chengcheng Song, Wenxian Zhao and Yixing Chen
Buildings 2025, 15(7), 1051; https://doi.org/10.3390/buildings15071051 - 25 Mar 2025
Viewed by 918
Abstract
The increasing demand for electricity is placing a growing burden on the power grid. To alleviate the pressure on the power system, a series of demand response (DR) strategies have emerged. This paper studied the DR potential and energy flexibility of city-scale building clusters under pre-cooling combined with temperature reset. This study first selected 18 types of buildings, each with three construction years, as prototype buildings to represent the 228,539 buildings in Shenzhen. Several pre-cooling strategies were then developed; after comparative analysis, the optimal strategy was obtained and applied to the entire Shenzhen building cluster, with simulation and analysis conducted for the nine administrative districts. This paper used AutoBPS-DR, extended with pre-cooling code written in Ruby, to automatically generate building models with DR strategies, and finally simulated the energy consumption results with EnergyPlus. The results showed that a pre-cooling duration of 0.5 h with a 2 °C change in both the pre-cooling and reset temperatures was the optimal strategy. Under this strategy, small and medium prototype buildings achieved better results, with a maximum load reduction of 23.89 W/m2 and a reduction rate of 56.82%. In the simulation results of the building cluster, Guangming District showed the best results. Finally, the peak electricity reduction amount and reduction rate of the entire building cluster were calculated to be 0.007 kWh/m2 and 21.87%, respectively, with a maximum cost saving and saving percentage of 0.081 CNY/m2 and 15.05%, respectively. These results show that the Shenzhen building cluster has considerable DR potential under the pre-cooling strategy. Full article
(This article belongs to the Special Issue Flexible Interaction between Buildings and Power Grid)

6 pages, 163 KB  
Editorial
Advanced AI and Machine Learning Techniques for Time Series Analysis and Pattern Recognition
by Antonio Pagliaro, Antonio Alessio Compagnino and Pierluca Sangiorgi
Appl. Sci. 2025, 15(6), 3165; https://doi.org/10.3390/app15063165 - 14 Mar 2025
Viewed by 1825
Abstract
Time series analysis and pattern recognition are cornerstones for innovation across diverse domains. In finance, these techniques enable market prediction and risk assessment. Astrophysicists use them to detect various phenomena and analyze data. Environmental scientists track ecosystem changes and pollution patterns, while healthcare professionals monitor patient vitals and disease progression. Transportation systems optimize traffic flow and predict maintenance needs. Energy providers balance grid loads and forecast consumption. Climate scientists model atmospheric changes and extreme weather events. Cybersecurity experts identify threats through anomaly detection in network traffic patterns. This editorial introduces this Special Issue, which explores state-of-the-art AI and machine learning (ML) techniques, including Long Short-Term Memory (LSTM) networks, Transformers, ensemble methods, and AutoML frameworks. We highlight innovative applications in data-driven finance, astrophysical event reconstruction, cloud masking, and healthcare monitoring. Recent advancements in feature engineering, unsupervised learning frameworks for cloud masking, and Transformer-based time series forecasting demonstrate the potential of these technologies. The papers collected in this Special Issue showcase how integrating domain-specific knowledge with computational innovations provides a pathway to achieving higher accuracy in time series analysis across various scientific disciplines. Full article
17 pages, 1774 KB  
Article
Training a Minesweeper Agent Using a Convolutional Neural Network
by Wenbo Wang and Chengyou Lei
Appl. Sci. 2025, 15(5), 2490; https://doi.org/10.3390/app15052490 - 25 Feb 2025
Viewed by 1545
Abstract
The Minesweeper game is modeled as a sequential decision-making task, for which a neural network architecture, state encoding, and reward function were herein designed. Both a Deep Q-Network (DQN) and supervised learning methods were successfully applied to optimize the training of the game. The experiments were conducted on the AutoDL platform using an NVIDIA RTX 3090 GPU for efficient computation. The results showed that in a 6 × 6 grid with four mines, the DQN model achieved an average win rate of 93.3% (standard deviation: 0.77%), while the supervised learning method achieved 91.2% (standard deviation: 0.9%), both outperforming human players and baseline algorithms and demonstrating high intelligence. The mechanisms of the two methods in the Minesweeper task were analyzed, with the reasons for the faster training speed and more stable performance of supervised learning explained from the perspectives of means–ends analysis and feedback control. Although there is room for improvement in sample efficiency and training stability in the DQN model, its greater generalization ability makes it highly promising for application in more complex decision-making tasks. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
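The abstract does not specify the state encoding; a common choice for CNN Minesweeper agents is a stack of one-hot planes per cell state (digits 0–8 plus an "unrevealed" plane). The sketch below is that generic scheme, an assumption rather than the paper's actual design:

```python
def encode_board(board):
    """One-hot encode a Minesweeper board into 10 planes: planes 0-8 mark
    revealed digit cells, plane 9 marks unrevealed cells ('?')."""
    h, w = len(board), len(board[0])
    planes = [[[0.0] * w for _ in range(h)] for _ in range(10)]
    for y in range(h):
        for x in range(w):
            ch = 9 if board[y][x] == '?' else int(board[y][x])
            planes[ch][y][x] = 1.0
    return planes
```

A 6 × 6 board then becomes a 10 × 6 × 6 input tensor, which suits a convolutional Q-network and a supervised move classifier equally well.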

32 pages, 9587 KB  
Article
A Layered Framework for Universal Extraction and Recognition of Electrical Diagrams
by Weiguo Cao, Zhong Chen, Congying Wu and Tiecheng Li
Electronics 2025, 14(5), 833; https://doi.org/10.3390/electronics14050833 - 20 Feb 2025
Cited by 1 | Viewed by 1179
Abstract
Secondary systems in electrical engineering often rely on traditional CAD software (AutoCAD v2024.1.6) or non-structured, paper-based diagrams for fieldwork, posing challenges for digital transformation. Electrical diagram recognition technology bridges this gap by converting traditional diagram operations into a “digital” model, playing a critical role in power system scheduling, operation, and maintenance. However, conventional recognition methods, which primarily rely on partition detection, face significant limitations such as poor adaptability to diverse diagram styles, interference among recognition objects, and reduced accuracy in handling complex and varied electrical diagrams. This paper introduces a novel layered framework for electrical diagram recognition that sequentially extracts the element layer, text layer, and connection relationship layer to address these challenges. First, an improved YOLOv7 model, combined with a multi-scale sliding window strategy, is employed to accurately segment large and small diagram objects. Next, PaddleOCR, trained with electrical-specific terminology, and PaddleClas, using multi-angle classification, are utilized for robust text recognition, effectively mitigating interference from diagram elements. Finally, clustering and adaptive FcF-inpainting algorithms are applied to repair the connection relationship layer, resolving local occlusion issues and enhancing the overall coupling of the diagram. Experimental results demonstrate that the proposed method outperforms existing approaches in robustness and universality, particularly for complex diagrams, providing technical support for intelligent power grid construction and operation. Full article
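The multi-scale sliding-window strategy can be sketched as window-coordinate generation: several window sizes, a fixed fractional overlap, and a final window snapped to the image border so no region is missed. The sizes and overlap ratio below are illustrative assumptions, not the paper's settings:

```python
def sliding_windows(img_w, img_h, sizes=(640, 1280), overlap=0.2):
    """Return (x0, y0, x1, y1) crop boxes covering the image at each
    window size, with `overlap` fractional overlap between neighbors."""
    def starts(length, s, step):
        if length <= s:
            return [0]
        xs = list(range(0, length - s, step))
        xs.append(length - s)  # snap the last window to the border
        return xs
    boxes = []
    for s in sizes:
        step = max(1, int(s * (1 - overlap)))
        for y in starts(img_h, s, step):
            for x in starts(img_w, s, step):
                boxes.append((x, y, min(x + s, img_w), min(y + s, img_h)))
    return boxes
```

Detections from all crops are then mapped back to full-image coordinates and merged (e.g. by non-maximum suppression), which is what lets one detector handle both small symbols and large diagram objects.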

20 pages, 9274 KB  
Article
Numerical Simulation on Self-Propulsion Characteristics of Bionic Flexible Foil Considering Ground Wall Effect
by Yongcheng Li, Nan Zhang, Xinyuan Tang, Ziying Pan and Pengfei Xu
Biomimetics 2024, 9(12), 750; https://doi.org/10.3390/biomimetics9120750 - 10 Dec 2024
Viewed by 837
Abstract
To investigate the wall effect on the propulsive properties of a self-propelled foil, the commercial CFD code ANSYS Fluent was employed to numerically evaluate the fluid dynamics of a flexible foil under various wall distances. A virtual model of a NACA0015 foil undergoing travelling wavy motion was adopted, and both 2D and 3D models were studied. To capture the foil's moving boundary, the dynamic grid technique coupled with an overlapping grid was utilized to realize the foil's active deformation and passive forward motion. The ground wall effect on fluid dynamics (thrust force, lift force and propulsive efficiency) and the flow structures of the travelling wavy foil were analyzed. The numerical results show that the presence of the ground wall is beneficial for the propulsive properties of the foil: it can improve the foil's forward speed and efficiency, with a maximum increase of 13% in moving velocity and a 10.5% increase in propulsive efficiency. The conclusions of the current study are of great significance for the design of bionic UUVs. Full article
(This article belongs to the Special Issue Bionic Robotic Fish: 2nd Edition)

18 pages, 1300 KB  
Article
XAI-Based Accurate Anomaly Detector That Is Robust Against Black-Box Evasion Attacks for the Smart Grid
by Islam Elgarhy, Mahmoud M. Badr, Mohamed Mahmoud, Maazen Alsabaan, Tariq Alshawi and Muteb Alsaqhan
Appl. Sci. 2024, 14(21), 9897; https://doi.org/10.3390/app14219897 - 29 Oct 2024
Cited by 3 | Viewed by 2087
Abstract
In the realm of smart grids, machine learning (ML) detectors—both binary (or supervised) and anomaly (or unsupervised)—have proven effective in detecting electricity theft (ET). However, binary detectors are designed for specific attacks, making their performance unpredictable against new attacks. Anomaly detectors, conversely, are trained on benign data and identify deviations from benign patterns as anomalies, but their performance is highly sensitive to the selected threshold values. Additionally, ML detectors are vulnerable to evasion attacks, where attackers make minimal changes to malicious samples to evade detection. To address these limitations, we introduce a hybrid anomaly detector that combines a Deep Auto-Encoder (DAE) with a One-Class Support Vector Machine (OCSVM). This detector not only enhances classification performance but also mitigates the threshold sensitivity of the DAE. Furthermore, we evaluate the vulnerability of this detector to benchmark evasion attacks. Lastly, we propose an accurate and robust cluster-based DAE+OCSVM ET anomaly detector, trained using Explainable Artificial Intelligence (XAI) explanations generated by the Shapley Additive Explanations (SHAP) method on consumption readings. Our experimental results demonstrate that the proposed XAI-based detector achieves superior classification performance and exhibits enhanced robustness against various evasion attacks, including gradient-based and optimization-based methods, under a black-box threat model. Full article
(This article belongs to the Special Issue IoT in Smart Cities and Homes, 2nd Edition)

30 pages, 10941 KB  
Article
Closed-Boundary Reflections of Shallow Water Waves as an Open Challenge for Physics-Informed Neural Networks
by Kubilay Timur Demir, Kai Logemann and David S. Greenberg
Mathematics 2024, 12(21), 3315; https://doi.org/10.3390/math12213315 - 22 Oct 2024
Cited by 2 | Viewed by 2041
Abstract
Physics-informed neural networks (PINNs) have recently emerged as a promising alternative to traditional numerical methods for solving partial differential equations (PDEs) in fluid dynamics. By using PDE-derived loss functions and auto-differentiation, PINNs can recover solutions without requiring costly simulation data, spatial gridding, or time discretization. However, PINNs often exhibit slow or incomplete convergence, depending on the architecture, optimization algorithms, and complexity of the PDEs. To address these difficulties, a variety of novel and repurposed techniques have been introduced to improve convergence. Despite these efforts, their effectiveness is difficult to assess due to the wide range of problems and network architectures. As a novel test case for PINNs, we propose one-dimensional shallow water equations with closed boundaries, where the solutions exhibit repeated boundary wave reflections. After carefully constructing a reference solution, we evaluate the performance of PINNs across different architectures, optimizers, and special training techniques. Despite the simplicity of the problem for classical methods, PINNs only achieve accurate results after prohibitively long training times. While some techniques provide modest improvements in stability and accuracy, this problem remains an open challenge for PINNs, suggesting that it could serve as a valuable testbed for future research on PINN training techniques and optimization strategies. Full article
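For reference, the test case described here — one-dimensional shallow water flow between closed walls — can be written in a common non-conservative form (notation assumed, not taken from the paper), with h the water height, u the velocity, g gravity, and the wall condition forcing the repeated reflections:

```latex
\begin{aligned}
  &\partial_t h + \partial_x (h u) = 0, \\
  &\partial_t u + u\,\partial_x u + g\,\partial_x h = 0,
  && x \in (0, L),\ t > 0, \\
  &u(0, t) = u(L, t) = 0
  && \text{(closed, reflective boundaries)}.
\end{aligned}
```

A PINN for this system minimizes a weighted sum of the two PDE residuals (obtained by auto-differentiating a network $(x, t) \mapsto (h, u)$), the boundary terms, and an initial-condition mismatch; the repeated wall reflections are what make this loss landscape hard to optimize.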

21 pages, 14185 KB  
Article
An Automated Machine Learning Approach to the Retrieval of Daily Soil Moisture in South Korea Using Satellite Images, Meteorological Data, and Digital Elevation Model
by Nari Kim, Soo-Jin Lee, Eunha Sohn, Mija Kim, Seonkyeong Seong, Seung Hee Kim and Yangwon Lee
Water 2024, 16(18), 2661; https://doi.org/10.3390/w16182661 - 18 Sep 2024
Cited by 1 | Viewed by 2311
Abstract
Soil moisture is a critical parameter that significantly impacts the global energy balance, including the hydrologic cycle, land–atmosphere interactions, soil evaporation, and plant growth. Currently, soil moisture is typically measured by installing sensors in the ground or through satellite remote sensing, with data retrieval facilitated by reanalysis models such as the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis 5 (ERA5) and the Global Land Data Assimilation System (GLDAS). However, the suitability of these methods for capturing local-scale variabilities is insufficiently validated, particularly in regions like South Korea, where land surfaces are highly complex and heterogeneous. In contrast, artificial intelligence (AI) approaches have shown promising potential for soil moisture retrieval at the local scale but have rarely demonstrated substantial products for spatially continuous grids. This paper presents the retrieval of daily soil moisture (SM) over a 500 m grid for croplands in South Korea using random forest (RF) and automated machine learning (AutoML) models, leveraging satellite images and meteorological data. In a blind test conducted for the years 2013–2019, the AutoML-based SM model demonstrated optimal performance, achieving a root mean square error of 2.713% and a correlation coefficient of 0.940. Furthermore, the performance of the AutoML model remained consistent across all the years and months, as well as under extreme weather conditions, indicating its reliability and stability. Comparing the soil moisture data derived from our AutoML model with the reanalysis data from sources such as the European Space Agency Climate Change Initiative (ESA CCI), GLDAS, the Local Data Assimilation and Prediction System (LDAPS), and ERA5 for the South Korea region reveals that our AutoML model provides a much better representation. These experiments confirm the feasibility of AutoML-based SM retrieval, particularly for local agrometeorological applications in regions with heterogeneous land surfaces like South Korea. Full article
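The blind-test scores quoted here (RMSE 2.713%, correlation 0.940) are standard verification metrics over held-out observations; the RMSE part is simply:

```python
def rmse(pred, obs):
    """Root mean square error between predicted and observed values
    (here both in volumetric soil-moisture percent)."""
    n = len(pred)
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / n) ** 0.5
```

Computing it separately per year, per month, and for extreme-weather subsets, as the blind test does, is what supports the stability claim rather than a single pooled score.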

14 pages, 2053 KB  
Article
A Method for Locating Wideband Oscillation Disturbance Sources in Power Systems by Integrating TimesNet and Autoformer
by Huan Yan, Keqiang Tai, Mengchen Liu, Zhe Wang, Yunzhang Yang, Xu Zhou, Zongsheng Zheng, Shilin Gao and Yuhong Wang
Electronics 2024, 13(16), 3250; https://doi.org/10.3390/electronics13163250 - 16 Aug 2024
Cited by 3 | Viewed by 1494
Abstract
The large-scale integration of new energy generators into the power grid poses a potential threat to its stable operation due to broadband oscillations. The rapid and accurate localization of oscillation sources is fundamental for mitigating these risks. To enhance the interpretability and accuracy of broadband oscillation localization models, this paper proposes a broadband oscillation localization model based on deep learning, integrating TimesNet and Autoformer algorithms. This model utilizes transmission grid measurement sampling data as the input and employs a data-driven approach to establish the broadband oscillation localization model. TimesNet improves the model’s accuracy significantly by decomposing the measurement data into intra- and inter-period variations using dimensional elevation, tensor transformation, and fast Fourier transform. Autoformer enhances the ability to capture oscillation features through the Auto-Correlation mechanism. A typical high-proportion renewable energy system was constructed using CloudPSS to create a sample dataset. Simulation examples validated the proposed method, demonstrating it as a highly accurate solution for broadband oscillation source localization. Full article
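Autoformer's Auto-Correlation mechanism ranks candidate time lags by the series' autocorrelation, computed efficiently via FFT in the actual model. A direct O(n²) plain-Python sketch of the lag-ranking idea, for illustration only:

```python
def top_autocorr_lags(x, k=2):
    """Return the k lags in 1..n-1 with the highest circular autocorrelation
    R(tau) = mean_t z[t] * z[(t + tau) % n], with z the mean-centered series.
    (Autoformer evaluates R via FFT; the direct sum here is O(n^2).)"""
    n = len(x)
    m = sum(x) / n
    z = [v - m for v in x]
    score = {tau: sum(z[t] * z[(t + tau) % n] for t in range(n)) / n
             for tau in range(1, n)}
    return sorted(score, key=score.get, reverse=True)[:k]

# A period-4 series: lag 4 dominates.
print(top_autocorr_lags([0, 1, 0, -1, 0, 1, 0, -1], k=1))  # -> [4]
```

Aggregating sub-series at the dominant lags is what lets the model capture periodic oscillation features that point-wise attention misses.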

20 pages, 8519 KB  
Article
Electricity Behavior Modeling and Anomaly Detection Services Based on a Deep Variational Autoencoder Network
by Rongheng Lin, Shuo Chen, Zheyu He, Budan Wu, Hua Zou, Xin Zhao and Qiushuang Li
Energies 2024, 17(16), 3904; https://doi.org/10.3390/en17163904 - 7 Aug 2024
Cited by 4 | Viewed by 2231
Abstract
Understanding electrical load profiles and detecting anomalous behaviors are important to the smart grid system. However, current load identification and anomaly analysis are based on static analysis, and less consideration is given to anomaly detection under load change conditions. This paper proposes a deep variational autoencoder network (DVAE) for load profiles, along with anomaly analysis services, and introduces automatic time-series data-updating strategies based on sliding-window adjustment. DVAE can help reconstruct the load curve and measure the difference between the original and the reconstructed curve, using reconstruction probability and Pearson similarity as indicators. Meanwhile, the sliding-window strategy updates the data and the DVAE model in a time-series manner. Experiments were carried out based on datasets from the U.S. Department of Energy and from Southeast China. The results showed that the proposed services could yield a 5% improvement in the AUC value, which helps to identify anomalous behavior. Full article
(This article belongs to the Section F: Electrical Engineering)
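Of the two indicators named here, Pearson similarity between a load curve and its reconstruction is easy to sketch in isolation (reconstruction probability additionally requires the trained DVAE's decoder distribution):

```python
def pearson_similarity(a, b):
    """Pearson correlation between two equal-length curves:
    +1 = identical shape, 0 = unrelated, -1 = inverted."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)
```

A reconstruction whose similarity falls below a reference level (itself refreshed by the sliding-window update) would be flagged for anomaly analysis.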

27 pages, 2005 KB  
Article
Vertebral Column Pathology Diagnosis Using Ensemble Strategies Based on Supervised Machine Learning Techniques
by Alam Gabriel Rojas-López, Alejandro Rodríguez-Molina, Abril Valeria Uriarte-Arcia and Miguel Gabriel Villarreal-Cervantes
Healthcare 2024, 12(13), 1324; https://doi.org/10.3390/healthcare12131324 - 2 Jul 2024
Cited by 1 | Viewed by 1636
Abstract
One expanding area of bioinformatics is medical diagnosis through the categorization of biomedical characteristics. Developing automatic strategies that boost medical diagnosis through machine learning (ML) methods is challenging: such methods require a formal examination of their performance to identify the conditions that best enhance them. This work proposes variants of the Voting and Stacking (VC and SC) ensemble strategies based on diverse auto-tuning supervised machine learning techniques to increase the efficacy of traditional baseline classifiers for the automatic diagnosis of vertebral column orthopedic illnesses. The ensemble strategies are created by first combining a complete set of auto-tuned baseline classifiers based on different processes, such as geometric, probabilistic, logic, and optimization. Next, the three most promising classifiers are selected from among k-Nearest Neighbors (kNN), Naïve Bayes (NB), Logistic Regression (LR), Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Support Vector Machine (SVM), Artificial Neural Networks (ANN), and Decision Tree (DT). The grid-search K-Fold cross-validation strategy is applied to auto-tune the baseline classifier hyperparameters. The performance of the proposed ensemble strategies is independently compared with that of the auto-tuned baseline classifiers. A concise analysis evaluates accuracy, precision, recall, F1-score, and ROC-AUC metrics. The analysis also examines the misclassified disease elements to find the most and least reliable classifiers for this specific medical problem. The results show that the VC ensemble strategy provides an improvement comparable to that of the best baseline classifier (the kNN). Meanwhile, when all baseline classifiers are included in the SC ensemble, this strategy surpasses 95% in all the evaluated metrics, standing out as the most suitable option for classifying vertebral column diseases. Full article
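The Voting (VC) strategy's core step is a per-sample majority over the selected classifiers' predictions. A minimal hard-voting sketch; the class labels in the demo are placeholders, not the dataset's actual label names:

```python
from collections import Counter

def majority_vote(per_classifier_preds):
    """Hard-voting ensemble: given one prediction list per classifier,
    return the most common label per sample (ties: first encountered wins)."""
    n_samples = len(per_classifier_preds[0])
    voted = []
    for i in range(n_samples):
        votes = [preds[i] for preds in per_classifier_preds]
        voted.append(Counter(votes).most_common(1)[0][0])
    return voted
```

Stacking (SC) replaces this fixed rule with a meta-classifier trained on the base classifiers' outputs, which is why it can exceed the best individual model rather than merely match it.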

14 pages, 430 KB  
Article
A Neural Network Forecasting Approach for the Smart Grid Demand Response Management Problem
by Slim Belhaiza and Sara Al-Abdallah
Energies 2024, 17(10), 2329; https://doi.org/10.3390/en17102329 - 11 May 2024
Cited by 5 | Viewed by 2066
Abstract
Demand response management (DRM) plays a crucial role in the prospective development of smart grids. The precise estimation of electricity demand for individual houses is vital for optimizing the operation and planning of the power system. Accurate forecasting of the required components holds significance as it can substantially impact the final cost, mitigate risks, and support informed decision-making. In this paper, a forecasting approach employing neural networks for smart grid demand-side management is proposed. The study explores various enhanced artificial neural network (ANN) architectures for forecasting smart grid consumption. The performance of the ANN approach in predicting energy demands is evaluated through a comparison with three statistical models: a time series model, an auto-regressive model, and a hybrid model. Experimental results demonstrate the ability of the proposed neural network framework to deliver accurate and reliable energy demand forecasts. Full article
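One of the statistical baselines is an auto-regressive model; a minimal least-squares AR(1) fit with a one-step forecast makes the comparison concrete. This is a generic sketch, not the paper's model order or configuration:

```python
def fit_ar1(series):
    """Least-squares fit of x[t] ~ c + phi * x[t-1]; returns (c, phi)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    phi = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
           / sum((x - mx) ** 2 for x in xs))
    return my - phi * mx, phi

def forecast_next(series):
    """One-step-ahead demand forecast from the fitted AR(1)."""
    c, phi = fit_ar1(series)
    return c + phi * series[-1]

print(forecast_next([1, 2, 4, 8, 16]))  # doubling series -> 32.0
```

An ANN forecaster is evaluated against exactly this kind of linear recurrence: any accuracy gain must come from nonlinear structure the AR model cannot capture.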

20 pages, 5994 KB  
Article
Numerical Analysis of the Stress Shadow Effects in Multistage Hydrofracturing Considering Natural Fracture and Leak-Off Effect
by Jinxin Song, Qing Qiao, Chao Chen, Jiangtao Zheng and Yongliang Wang
Water 2024, 16(9), 1308; https://doi.org/10.3390/w16091308 - 4 May 2024
Cited by 3 | Viewed by 2267
Abstract
As a critical technological approach, multistage fracturing is frequently used to boost gas recovery in compact hydrocarbon reservoirs. Determining an ideal cluster distance that effectively integrates pre-existing natural fractures in the deposit creates a fracture network conducive to gas movement. Fracturing fluid leak-off also impacts water resources. In our study, we use a versatile finite element–discrete element method that improves the auto-refinement of the grid and the detection of multiple fracture movements to model staged fracturing in naturally fractured reservoirs. This computational model illustrates the interaction between hydraulic fractures and pre-existing fractures and employs the nonlinear Carter leak-off criterion to portray fluid leakage and the impacts of hydromechanical coupling during multistage fracturing. Numerical results show that sequential fracturing exhibits the maximum length in unfractured and naturally fractured models, and the leak-off volume of parallel fracturing is the smallest. Our study proposes an innovative technique for identifying and optimizing the spacing of fracturing clusters in unconventional reservoirs. Full article
(This article belongs to the Special Issue Thermo-Hydro-Mechanical Coupling in Fractured Porous Media)
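The classical Carter leak-off law behind the criterion used here relates leak-off velocity to the time a fracture face has been exposed to fluid. The paper employs a nonlinear variant whose exact form is not given in this listing, so the sketch below is only the textbook relation (symbols assumed):

```python
import math

def carter_velocity(c_l, t, tau=0.0):
    """Carter leak-off velocity v(t) = C_L / sqrt(t - tau): C_L is the
    leak-off coefficient, tau the time the fracture face was first exposed."""
    return c_l / math.sqrt(t - tau)

def carter_volume(c_l, t, tau=0.0, spurt=0.0):
    """Cumulative leaked volume per unit fracture-face area: the integral
    of v from tau to t, i.e. 2 * C_L * sqrt(t - tau), plus any spurt loss."""
    return 2.0 * c_l * math.sqrt(t - tau) + spurt

print(carter_velocity(0.5, 4.0), carter_volume(0.5, 4.0))  # -> 0.25 2.0
```

Because leak-off decays with the square root of exposure time, fracturing sequence and cluster spacing directly control how much fluid each stage loses, which is the coupling the study exploits.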