Search Results (1,691)

Search Parameters:
Keywords = large-scale optimization problems

22 pages, 2193 KB  
Article
Deep Reinforcement Learning-Based Experimental Scheduling System for Clay Mineral Extraction
by Bo Zhou, Lei He, Yongqiang Li, Zhandong Lv and Shiping Zhang
Electronics 2026, 15(3), 617; https://doi.org/10.3390/electronics15030617 (registering DOI) - 31 Jan 2026
Abstract
Efficient and non-destructive extraction of clay minerals is fundamental for shale oil and gas reservoir evaluation and enrichment mechanism studies. However, traditional manual extraction experiments face bottlenecks such as low efficiency and reliance on operator experience, which limit their scalability and adaptability to intelligent research demands. To address this, this paper proposes an intelligent experimental scheduling system for clay mineral extraction based on deep reinforcement learning. First, the complex experimental process is deconstructed, and its core scheduling stages are abstracted into a Flexible Job Shop Scheduling Problem (FJSP) model with resting time constraints. Then, a scheduling agent based on the Proximal Policy Optimization (PPO) algorithm is developed and integrated with an improved Heterogeneous Graph Neural Network (HGNN) to represent the relationships among operations, machines, and constraints. This enables effective capture of the complex topological structure of the experimental environment and facilitates efficient sequential decision-making. To facilitate future practical applicability, a four-layer system architecture is proposed, comprising the physical equipment layer, execution control layer, scheduling decision layer, and interactive application layer. A digital twin module is designed to bridge the gap between theoretical scheduling and physical execution. This study focuses on validating the core scheduling algorithm through realistic simulations. Simulation results demonstrate that the proposed HGNN-PPO scheduling method significantly outperforms traditional heuristic rules (FIFO, SPT), meta-heuristic algorithms (GA), and simplified reinforcement learning methods (PPO-MLP). Specifically, in large-scale problems, our method reduces the makespan by over 9% compared to the PPO-MLP baseline, and the algorithm runs more than 30 times faster than GA. This highlights its superior performance and scalability. This study provides an effective solution for intelligent scheduling in automated chemical laboratory workflows and holds significant theoretical and practical value for advancing the intelligentization of experimental sciences, including shale oil and gas research. Full article
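Editor's note: the heuristic baselines the authors benchmark against are easy to make concrete. Below is a minimal, self-contained sketch of the SPT (shortest processing time) dispatching rule on a toy flexible job shop instance; the instance data are invented, and the paper's resting-time constraints and HGNN-PPO agent are deliberately omitted.

```python
# Illustrative SPT dispatcher for a tiny FJSP instance (hypothetical data).
# jobs[j] = list of operations; each operation maps machine -> processing time
jobs = [
    [{0: 3, 1: 5}, {1: 2, 2: 4}],      # job 0: two operations
    [{0: 4, 2: 3}, {0: 2, 1: 6}],      # job 1
    [{1: 3, 2: 2}, {0: 5, 2: 3}],      # job 2
]

machine_free = {0: 0, 1: 0, 2: 0}      # time each machine becomes idle
job_ready = [0] * len(jobs)            # time each job's next op may start
next_op = [0] * len(jobs)              # index of next unscheduled operation

while any(next_op[j] < len(jobs[j]) for j in range(len(jobs))):
    # among all ready operations, pick the (job, machine) pair with the
    # smallest processing time: the classic SPT dispatching rule
    best = None
    for j in range(len(jobs)):
        if next_op[j] >= len(jobs[j]):
            continue
        for m, p in jobs[j][next_op[j]].items():
            start = max(job_ready[j], machine_free[m])
            if best is None or p < best[0]:
                best = (p, start, j, m)
    p, start, j, m = best
    finish = start + p
    machine_free[m] = finish
    job_ready[j] = finish
    next_op[j] += 1

print("SPT makespan:", max(job_ready))
```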
37 pages, 11655 KB  
Article
Large-Scale Sparse Multimodal Multiobjective Optimization via Multi-Stage Search and RL-Assisted Environmental Selection
by Bozhao Chen, Yu Sun and Bei Hua
Electronics 2026, 15(3), 616; https://doi.org/10.3390/electronics15030616 - 30 Jan 2026
Abstract
Multimodal multiobjective optimization problems (MMOPs) are widely encountered in real-world applications. While numerous evolutionary algorithms have been developed to locate equivalent Pareto-optimal solutions, existing Multimodal Multiobjective Evolutionary Algorithms (MMOEAs) often struggle to handle large-scale decision variables and sparse Pareto sets due to the curse of dimensionality and unknown sparsity. To address these challenges, this paper proposes a novel approach named MASR-MMEA, which stands for Large-scale Sparse Multimodal Multiobjective Optimization via Multi-stage Search and Reinforcement Learning (RL)-assisted Environmental Selection. Specifically, to enhance search efficiency, a multi-stage framework is established incorporating three key innovations. First, a dual-strategy genetic operator based on improved hybrid encoding is designed, employing sparse-sensing dynamic redistribution for binary vectors and a sparse fuzzy decision framework for real vectors. Second, an affinity-based elite strategy utilizing Mahalanobis distance is introduced to pair real vectors with compatible binary vectors, increasing the probability of generating superior offspring. Finally, an adaptive sparse environmental selection strategy assisted by Multilayer Perceptron (MLP) reinforcement learning is developed. By utilizing the MLP-generated Guiding Vector (GDV) to direct the evolutionary search toward efficient regions and employing an iteration-based adaptive mechanism to regulate genetic operators, this strategy accelerates convergence. Furthermore, it dynamically quantifies population-level sparsity and adjusts selection pressure through a modified crowding distance mechanism to filter structural redundancy, thereby effectively balancing convergence and multimodal diversity. Comparative studies against six state-of-the-art methods demonstrate that MASR-MMEA significantly outperforms existing approaches in terms of both solution quality and convergence speed on large-scale sparse MMOPs. Full article
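Editor's note: for readers unfamiliar with the selection machinery, the textbook crowding distance that MASR-MMEA's adaptive environmental selection builds on can be sketched in a few lines; the paper's modified, sparsity-aware variant is not reproduced here.

```python
import numpy as np

def crowding_distance(objs):
    """Standard NSGA-II crowding distance over an (n, m) objective matrix.
    The paper modifies this measure to filter structural redundancy; the
    version below is only the textbook baseline."""
    n, m = objs.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(objs[:, k])
        dist[order[0]] = dist[order[-1]] = np.inf   # keep boundary solutions
        span = objs[order[-1], k] - objs[order[0], k]
        if span == 0:
            continue
        gaps = objs[order[2:], k] - objs[order[:-2], k]   # neighbor gaps
        dist[order[1:-1]] += gaps / span
    return dist

pop = np.random.rand(8, 2)              # 8 solutions, 2 objectives
print(crowding_distance(pop))
```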
20 pages, 1275 KB  
Article
QEKI: A Quantum–Classical Framework for Efficient Bayesian Inversion of PDEs
by Jiawei Yong and Sihai Tang
Entropy 2026, 28(2), 156; https://doi.org/10.3390/e28020156 - 30 Jan 2026
Abstract
Solving Bayesian inverse problems efficiently stands as a major bottleneck in scientific computing. Although Bayesian Physics-Informed Neural Networks (B-PINNs) have introduced a robust way to quantify uncertainty, the high-dimensional parameter spaces inherent in deep learning often lead to prohibitive sampling costs. Addressing this, our work introduces Quantum-Encodable Bayesian PINNs trained via Classical Ensemble Kalman Inversion (QEKI), a framework that pairs Quantum Neural Networks (QNNs) with Ensemble Kalman Inversion (EKI). The core advantage lies in the QNN’s ability to act as a compact surrogate for PDE solutions, capturing complex physics with significantly fewer parameters than classical networks. By adopting the gradient-free EKI for training, we mitigate the barren plateau issue that plagues quantum optimization. Through several benchmarks on 1D and 2D nonlinear PDEs, we show that QEKI yields precise inversions and substantial parameter compression, even in the presence of noise. While large-scale applications are constrained by current quantum hardware, this research outlines a viable hybrid framework for including quantum features within Bayesian uncertainty quantification. Full article
(This article belongs to the Special Issue Quantum Computation, Quantum AI, and Quantum Information)
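Editor's note: the gradient-free Ensemble Kalman Inversion update at the heart of QEKI has a compact generic form. The sketch below applies it to a toy linear inverse problem; the forward map, noise level, and ensemble size are illustrative stand-ins for the QNN surrogate and PDE benchmarks.

```python
import numpy as np

def eki_step(theta, forward, y, gamma):
    """One Ensemble Kalman Inversion update. theta: (J, p) parameter
    ensemble, forward: maps (J, p) -> (J, d) outputs, y: (d,) data,
    gamma: (d, d) noise covariance. Generic sketch of the gradient-free
    update used to train the surrogate; not the paper's code."""
    g = forward(theta)                                  # (J, d)
    tm, gm = theta.mean(0), g.mean(0)
    c_tg = (theta - tm).T @ (g - gm) / len(theta)       # (p, d) cross-cov
    c_gg = (g - gm).T @ (g - gm) / len(theta)           # (d, d)
    # perturbed observations keep ensemble spread consistent with the noise
    y_pert = y + np.random.multivariate_normal(np.zeros(len(y)), gamma, len(theta))
    k = c_tg @ np.linalg.solve(c_gg + gamma, np.eye(len(y)))  # Kalman-like gain
    return theta + (y_pert - g) @ k.T

# toy inverse problem: recover theta from noisy linear observations
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
truth = np.array([1.0, -2.0, 0.5])
y = A @ truth
ens = rng.normal(size=(50, 3))
for _ in range(20):
    ens = eki_step(ens, lambda t: t @ A.T, y, 0.01 * np.eye(5))
print(ens.mean(0))          # ensemble mean approaches `truth`
```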

21 pages, 6750 KB  
Article
Machine Learning-Based Energy Consumption and Carbon Footprint Forecasting in Urban Rail Transit Systems
by Sertaç Savaş and Kamber Külahcı
Appl. Sci. 2026, 16(3), 1369; https://doi.org/10.3390/app16031369 - 29 Jan 2026
Abstract
In the fight against global climate change, the transportation sector is of critical importance because it is one of the major causes of total greenhouse gas emissions worldwide. Although urban rail transit systems offer a lower carbon footprint compared to road transportation, accurately forecasting the energy consumption of these systems is vital for sustainable urban planning, energy supply management, and the development of carbon balancing strategies. In this study, forecasting models are designed using five different machine learning (ML) algorithms, and their performances in predicting the energy consumption and carbon footprint of urban rail transit systems are comprehensively compared. For five distribution-center substations, 10 years of monthly energy consumption data and the total carbon footprint data of these substations are used. Support Vector Regression (SVR), Extreme Gradient Boosting (XGBoost), Long Short-Term Memory (LSTM), Adaptive Neuro-Fuzzy Inference System (ANFIS), and Nonlinear Autoregressive Neural Network (NAR-NN) models are developed to forecast these data. Model hyperparameters are optimized using a 20-iteration Random Search algorithm, and the stochastic models are run 10 times with the optimized parameters. Results reveal that the SVR model consistently exhibits the highest forecasting performance across all datasets. For carbon footprint forecasting, the SVR model yields the best results, with an R² of 0.942 and a MAPE of 3.51%. The ensemble method XGBoost also demonstrates the second-best performance (R² = 0.648). Accordingly, while deterministic traditional ML models exhibit superior performance, the neural network-based stochastic models, such as LSTM, ANFIS, and NAR-NN, show insufficient generalization capability under limited data conditions. These findings indicate that, in small- and medium-scale time-series forecasting problems, traditional machine learning methods are more effective than neural network-based methods that require large datasets. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
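Editor's note: a hedged sketch of this model-selection setup, with synthetic monthly data standing in for the substation series and guessed hyperparameter ranges; only the SVR model family and the 20-iteration random search come from the abstract.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit

# synthetic 10-year monthly series standing in for a substation's data
rng = np.random.default_rng(1)
t = np.arange(120)
y = 50 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 120)
X = np.column_stack([t, np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])

search = RandomizedSearchCV(
    SVR(kernel="rbf"),
    {"C": np.logspace(-1, 3, 50), "gamma": np.logspace(-4, 0, 50),
     "epsilon": np.linspace(0.01, 1.0, 20)},      # illustrative ranges
    n_iter=20,                                    # 20-iteration random search
    cv=TimeSeriesSplit(n_splits=5),               # respect temporal ordering
    scoring="neg_mean_absolute_percentage_error", random_state=0,
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)   # tuned SVR and its MAPE
```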

30 pages, 4996 KB  
Article
Energy-Efficient, Multi-Agent Deep Reinforcement Learning Approach for Adaptive Beacon Selection in AUV-Based Underwater Localization
by Zahid Ullah Khan, Hangyuan Gao, Farzana Kulsoom, Syed Agha Hassnain Mohsan, Aman Muhammad and Hassan Nazeer Chaudry
J. Mar. Sci. Eng. 2026, 14(3), 262; https://doi.org/10.3390/jmse14030262 - 27 Jan 2026
Abstract
Accurate and energy-efficient localization of autonomous underwater vehicles (AUVs) remains a fundamental challenge due to the complex, bandwidth-limited, and highly dynamic nature of underwater acoustic environments. This paper proposes a fully adaptive deep reinforcement learning (DRL)-driven localization framework for AUVs operating in Underwater Acoustic Sensor Networks (UAWSNs). The localization problem is formulated as a Markov Decision Process (MDP) in which an intelligent agent jointly optimizes beacon selection and transmit power allocation to minimize long-term localization error and energy consumption. A hierarchical learning architecture is developed by integrating four actor–critic algorithms, which are (i) Twin Delayed Deep Deterministic Policy Gradient (TD3), (ii) Soft Actor–Critic (SAC), (iii) Multi-Agent Deep Deterministic Policy Gradient (MADDPG), and (iv) Distributed DDPG (D2DPG), enabling robust learning under non-stationary channels, cooperative multi-AUV scenarios, and large-scale deployments. A round-trip time (RTT)-based geometric localization model incorporating a depth-dependent sound speed gradient is employed to accurately capture realistic underwater acoustic propagation effects. A multi-objective reward function jointly balances localization accuracy, energy efficiency, and ranging reliability through a risk-aware metric. Furthermore, the Cramér–Rao Lower Bound (CRLB) is derived to characterize the theoretical performance limits, and a comprehensive complexity analysis is performed to demonstrate the scalability of the proposed framework. Extensive Monte Carlo simulations show that the proposed DRL-based methods achieve significantly lower localization error, lower energy consumption, faster convergence, and higher overall system utility than classical TD3. These results confirm the effectiveness and robustness of DRL for next-generation adaptive underwater localization systems. Full article
(This article belongs to the Section Ocean Engineering)
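Editor's note: two ingredients of the framework lend themselves to a short sketch, RTT-based ranging under a depth-dependent sound speed and a weighted multi-objective reward. The linear sound-speed profile, the weights, and all numbers are assumptions, not the paper's exact formulation.

```python
import numpy as np

def rtt_range(rtt_s, depth_src, depth_dst, c0=1500.0, g=0.017):
    """One-way range from a round-trip time under a linear sound-speed
    profile c(z) = c0 + g*z, using the mean speed over the two depths.
    A deliberately simplified ray model."""
    c_mean = c0 + g * (depth_src + depth_dst) / 2.0
    return c_mean * rtt_s / 2.0

def reward(loc_error_m, tx_power_w, range_var, w=(1.0, 0.3, 0.2)):
    """Illustrative multi-objective reward trading off localization
    accuracy, energy, and a risk-aware reliability term; the weights
    are assumptions."""
    return -(w[0] * loc_error_m + w[1] * tx_power_w + w[2] * np.sqrt(range_var))

print(rtt_range(2.0, 50.0, 500.0))    # ~1504.7 m for a 2 s round trip
print(reward(2.5, 5.0, 0.8))
```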
25 pages, 4900 KB  
Article
Multimodal Feature Fusion and Enhancement for Function Graph Data
by Yibo Ming, Lixin Bai, Jialu Zhao and Yanmin Chen
Appl. Sci. 2026, 16(3), 1246; https://doi.org/10.3390/app16031246 - 26 Jan 2026
Abstract
Recent years have witnessed performance improvements in Multimodal Large Language Models (MLLMs) on downstream natural image understanding tasks. However, when applied to the function graph reasoning task, which is highly information-dense and abundant in fine-grained structural details, these models face pronounced performance degradation. The challenges are primarily characterized by several core issues: the static projection bottleneck, inadequate cross-modal interaction, and insufficient visual context in text embeddings. To address these problems, this study proposes a multimodal feature fusion enhancement method for function graph reasoning and constructs the FuncFusion-Math model. The core innovation of this model resides in its design of a dual-path feature fusion mechanism for both image and text. Specifically, the image fusion module adopts cross-attention and self-attention mechanisms to optimize visual feature representations under the guidance of textual semantics, effectively mitigating fine-grained information loss. The text fusion module, through feature concatenation and Transformer encoding layers, deeply integrates structured mathematical information from the image into the textual embedding space, significantly reducing semantic deviation. Furthermore, this study utilizes a four-stage progressive training strategy and incorporates the LoRA technique for parameter-efficient optimization. Experimental results demonstrate that the FuncFusion-Math model, with 3B parameters, achieves an accuracy of 43.58% on the FunctionQA subset of the MathVista test set, outperforming a 7B-scale baseline model by 13.15%, which validates the feasibility and effectiveness of the proposed method. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
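Editor's note: the text-guided reweighting of visual features reduces, at its core, to cross-attention. A bare-numpy, single-head sketch follows; the actual fusion module adds learned projections, multiple heads, and residual connections.

```python
import numpy as np

def cross_attention(img_tokens, txt_tokens, d):
    """Single-head scaled dot-product cross-attention: image tokens attend
    to text tokens, so visual features are reweighted under textual
    guidance. Identity projections stand in for learned Q/K/V weights."""
    q, k, v = img_tokens, txt_tokens, txt_tokens
    scores = q @ k.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)       # softmax over text tokens
    return attn @ v

img = np.random.rand(49, 64)    # 7x7 patch grid, 64-dim features
txt = np.random.rand(12, 64)    # 12 text-token embeddings
print(cross_attention(img, txt, 64).shape)         # (49, 64)
```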

20 pages, 11094 KB  
Article
SRNN: Surface Reconstruction from Sparse Point Clouds with Nearest Neighbor Prior
by Haodong Li, Ying Wang and Xi Zhao
Appl. Sci. 2026, 16(3), 1210; https://doi.org/10.3390/app16031210 - 24 Jan 2026
Abstract
Surface reconstruction from 3D point clouds has a wide range of applications. In this paper, we focus on the reconstruction from raw, sparse point clouds. Although some existing methods work on this topic, the results often suffer from geometric defects. To solve this problem, we propose a novel method that optimizes a neural network (referred to as signed distance function) to fit the Signed Distance Field (SDF) from sparse point clouds. The signed distance function is optimized by projecting query points to its iso-surface accordingly. Our key idea is to encourage both the direction and distance of projection to be correct through the supervision provided by a nearest neighbor prior. In addition, we mitigate the error propagated from the prior function by augmenting the low-frequency components in the input. In our implementation, the nearest neighbor prior is trained with a large-scale local geometry dataset, and the positional encoding with a specified spectrum is used as a regularization for the optimization process. Experiments on the ShapeNetCore dataset demonstrate that our method achieves better accuracy than SDF-based methods while preserving smoothness. Full article
(This article belongs to the Special Issue Technical Advances in 3D Reconstruction—2nd Edition)
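Editor's note: the projection step is generic enough to sketch, with query points marching to the zero iso-surface along the normalized SDF gradient. Here f is an analytic sphere SDF rather than the paper's network, and the nearest-neighbor prior supervision is elided.

```python
import numpy as np

def project_to_surface(f, grad_f, x, steps=20):
    """Project query points onto the zero iso-surface of a signed distance
    function by marching along the normalized gradient: x <- x - f(x) * n."""
    for _ in range(steps):
        n = grad_f(x)
        n /= np.linalg.norm(n, axis=-1, keepdims=True)
        x = x - f(x)[..., None] * n
    return x

sphere_sdf = lambda x: np.linalg.norm(x, axis=-1) - 1.0    # unit sphere SDF
sphere_grad = lambda x: x                                  # gradient direction
q = np.random.randn(5, 3) * 2.0                            # random queries
p = project_to_surface(sphere_sdf, sphere_grad, q)
print(np.linalg.norm(p, axis=-1))   # all ~1.0: points landed on the surface
```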
27 pages, 6867 KB  
Article
Recovering Gamma-Ray Burst Redshift Completeness Maps via Spherical Generalized Additive Models
by Zsolt Bagoly and Istvan I. Racz
Universe 2026, 12(2), 31; https://doi.org/10.3390/universe12020031 - 24 Jan 2026
Abstract
We present an advanced statistical framework for estimating the relative intensity of astrophysical event distributions (e.g., Gamma-Ray Bursts, GRBs) on the sky to facilitate population studies and large-scale structure analysis. In contrast to the traditional approach based on the ratio of Kernel Density Estimation (KDE), which is characterized by numerical instability and bandwidth sensitivity, this work applies a logistic regression embedded in a Bayesian framework to directly model selection effects. It reformulates the problem as a logistic regression task within a Generalized Additive Model (GAM) framework, utilizing isotropic Splines on the Sphere (SOS) to map the conditional probability of redshift measurement. The model complexity and smoothness are objectively optimized using Restricted Maximum Likelihood (REML) and the Akaike Information Criterion (AIC), ensuring a data-driven bias-variance trade-off. We benchmark this approach against an Adaptive Kernel Density Estimator (AKDE) using von Mises–Fisher kernels and Abramson's square root law. The comparative analysis reveals strong statistical evidence in favor of this Preconditioned (Precon) Estimator, yielding a log-likelihood improvement of ΔL ≈ 74.3 (Bayes factor > 10^30) over the adaptive method. We show that this Precon Estimator acts as a spectral bandwidth extender, effectively decoupling the wideband exposure map from the narrowband selection efficiency. This provides a tool for cosmologists to recover high-frequency structural features, such as sharp cutoffs, that are mathematically irresolvable by direct density estimators due to the bandwidth limitation inherent in sparse samples. The methodology ensures that reconstructions of the cosmic web are stable against Poisson noise and consistent with observational constraints. Full article
(This article belongs to the Section Astroinformatics and Astrostatistics)
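Editor's note: as a rough stand-in for the SOS-GAM, the same "completeness as conditional probability" idea can be mocked up with a logistic regression on a low-degree spherical-harmonic basis. The ridge penalty C crudely replaces REML smoothness selection, and the sky data below are synthetic.

```python
import numpy as np
from scipy.special import sph_harm
from sklearn.linear_model import LogisticRegression

def sh_features(lon, colat, lmax=3):
    """Real spherical-harmonic design matrix: a crude stand-in for the
    paper's Splines-on-the-Sphere basis."""
    cols = []
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            y = sph_harm(abs(m), l, lon, colat)    # azimuth, then polar angle
            cols.append(y.imag if m < 0 else y.real)
    return np.column_stack(cols)

# synthetic sky: 2000 GRBs, redshift measured more often near one cap
rng = np.random.default_rng(2)
lon = rng.uniform(0, 2 * np.pi, 2000)
colat = np.arccos(rng.uniform(-1, 1, 2000))
p_true = 0.2 + 0.6 * (np.cos(colat) > 0.3)
has_z = rng.random(2000) < p_true

clf = LogisticRegression(C=1.0, max_iter=2000).fit(sh_features(lon, colat), has_z)
prob = clf.predict_proba(sh_features(lon, colat))[:, 1]   # completeness map
print(prob[np.cos(colat) > 0.3].mean(), prob[np.cos(colat) <= 0.3].mean())
```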

16 pages, 993 KB  
Article
TSS GAZ PTP: Towards Improving Gumbel AlphaZero with Two-Stage Self-Play for Multi-Constrained Electric Vehicle Routing Problems
by Hui Wang, Xufeng Zhang and Chaoxu Mu
Smart Cities 2026, 9(2), 21; https://doi.org/10.3390/smartcities9020021 - 23 Jan 2026
Abstract
Deep reinforcement learning (DRL) with self-play has emerged as a promising paradigm for solving combinatorial optimization (CO) problems. The recently proposed Gumbel AlphaZero Plan-to-Play (GAZ PTP) framework adopts a competitive training setup between a learning agent and an opponent to tackle classical CO tasks such as the Traveling Salesman Problem (TSP). However, in complex and multi-constrained environments like the Electric Vehicle Routing Problem (EVRP), standard self-play often suffers from opponent mismatch: when the opponent is either too weak or too strong, the resulting learning signal becomes ineffective. To address this challenge, we introduce Two-Stage Self-Play GAZ PTP (TSS GAZ PTP), a novel DRL method designed to maintain adaptive and effective learning pressure throughout the training process. In the first stage, the learning agent, guided by Gumbel Monte Carlo Tree Search (MCTS), competes against a greedy opponent that follows the best historical policy. As training progresses, the framework transitions to a second stage in which both agents employ Gumbel MCTS, thereby establishing a dynamically balanced competitive environment that encourages continuous strategy refinement. The primary objective of this work is to develop a robust self-play mechanism capable of handling the high-dimensional constraints inherent in real-world routing problems. We first validate our approach on the TSP, a benchmark used in the original GAZ PTP study, and then extend it to the multi-constrained EVRP, which incorporates practical limitations including battery capacity, time windows, vehicle load limits, and charging infrastructure availability. The experimental results show that TSS GAZ PTP consistently outperforms existing DRL methods, with particularly notable improvements on large-scale instances. Full article
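Editor's note: the two-stage opponent schedule itself is simple control flow. The toy below stubs out Gumbel MCTS with a best-of-k sampler on an invented one-shot game, purely to show where the stage switch sits; nothing here is the paper's implementation.

```python
import random

def mcts_stub(policy, state):
    # stand-in for Gumbel MCTS: sample a few candidate actions and keep the
    # one closest to the target state (the toy "value function")
    return min((policy(state) for _ in range(4)), key=lambda a: abs(a - state))

def play(learner, best_hist, epoch, switch_epoch=100):
    state = random.uniform(0, 1)                  # toy instance
    a1 = mcts_stub(learner, state)                # the agent always searches
    if epoch < switch_epoch:                      # stage 1: greedy opponent
        a2 = best_hist(state)
    else:                                         # stage 2: opponent searches too
        a2 = mcts_stub(best_hist, state)
    return abs(a1 - state) < abs(a2 - state)      # learner wins if closer

learner = lambda s: s + random.gauss(0, 0.1)      # slightly sharper policy
best_hist = lambda s: s + random.gauss(0, 0.2)    # best historical policy
wins = sum(play(learner, best_hist, epoch=150) for _ in range(1000))
print("stage-2 win rate:", wins / 1000)
```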

26 pages, 5754 KB  
Article
Heatmap-Assisted Reinforcement Learning Model for Solving Larger-Scale TSPs
by Guanqi Liu and Donghong Xu
Electronics 2026, 15(3), 501; https://doi.org/10.3390/electronics15030501 - 23 Jan 2026
Abstract
Deep reinforcement learning (DRL)-based algorithms for solving the Traveling Salesman Problem (TSP) have demonstrated competitive potential compared to traditional heuristic algorithms on small-scale TSP instances. However, as the problem size increases, the NP-hard nature of the TSP leads to exponential growth in the combinatorial search space, state–action space explosion, and sharply increased sample complexity, which together cause significant performance degradation for most existing DRL-based models when directly applied to large-scale instances. This research proposes a two-stage reinforcement learning framework, termed GCRL-TSP (Graph Convolutional Reinforcement Learning for the TSP), which consists of a heatmap generation stage based on a graph convolutional neural network, and a heatmap-assisted Proximal Policy Optimization (PPO) training stage, where the generated heatmaps are used as auxiliary guidance for policy optimization. First, we design a divide-and-conquer heatmap generation strategy: a graph convolutional network infers m-node sub-heatmaps, which are then merged into a global edge-probability heatmap. Second, we integrate the heatmap into PPO by augmenting the state representation and restricting the action space toward high-probability edges, improving training efficiency. On standard instances with 200/500/1000 nodes, GCRL-TSP achieves a Gap% of 4.81/4.36/13.20 (relative to Concorde) with runtimes of 36 s/1.12 min/4.65 min. Experimental results show that GCRL-TSP achieves more than twice the solving speed compared to other TSP solving algorithms, while obtaining solution quality comparable to other algorithms on TSPs ranging from 200 to 1000 nodes. Full article
(This article belongs to the Section Artificial Intelligence)
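Editor's note: the merge step of the divide-and-conquer heatmap strategy can be sketched directly, with sub-heatmaps over (possibly overlapping) node clusters averaged into a global edge-probability matrix. Cluster construction and the GCN that produces each sub-heatmap are elided, and the overlap-averaging rule is our assumption.

```python
import numpy as np

def merge_subheatmaps(n, clusters, sub_heatmaps):
    """Stitch per-cluster edge-probability heatmaps into a global n x n
    matrix, averaging entries where clusters overlap."""
    heat = np.zeros((n, n))
    count = np.zeros((n, n))
    for idx, h in zip(clusters, sub_heatmaps):
        ix = np.ix_(idx, idx)
        heat[ix] += h
        count[ix] += 1
    return np.divide(heat, count, out=np.zeros_like(heat), where=count > 0)

clusters = [[0, 1, 2, 3], [2, 3, 4, 5]]          # two overlapping m=4 blocks
subs = [np.random.rand(4, 4) for _ in clusters]  # stand-ins for GCN outputs
print(merge_subheatmaps(6, clusters, subs).round(2))
```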

35 pages, 2106 KB  
Article
A Novel Method That Is Based on Differential Evolution Suitable for Large-Scale Optimization Problems
by Glykeria Kyrou, Vasileios Charilogis and Ioannis G. Tsoulos
Foundations 2026, 6(1), 2; https://doi.org/10.3390/foundations6010002 - 23 Jan 2026
Abstract
Global optimization represents a fundamental challenge in computer science and engineering, as it aims to identify high-quality solutions to problems spanning from moderate to extremely high dimensionality. The Differential Evolution (DE) algorithm is a population-based algorithm like Genetic Algorithms (GAs) and uses similar operators such as crossover, mutation and selection. The proposed method introduces a set of methodological enhancements designed to increase both the robustness and the computational efficiency of the classical DE framework. Specifically, an adaptive termination criterion is incorporated, enabling early stopping based on statistical measures of convergence and population stagnation. Furthermore, a population sampling strategy based on k-means clustering is employed to enhance exploration and improve the redistribution of individuals in high-dimensional search spaces. This mechanism enables structured population renewal and effectively mitigates premature convergence. The enhanced algorithm was evaluated on standard large-scale numerical optimization benchmarks and compared with established global optimization methods. The experimental results indicate substantial improvements in convergence speed, scalability and solution stability. Full article
(This article belongs to the Section Mathematical Sciences)
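Editor's note: a minimal sketch of the overall recipe, combining a simplified DE/rand/1/bin loop with a periodic k-means-based population renewal. The renewal rule, noise scale, and parameter values are illustrative guesses rather than the authors' scheme, and their adaptive termination criterion is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def de_kmeans(f, dim, pop_size=40, iters=200, F=0.6, CR=0.9, renew_every=50):
    rng = np.random.default_rng(0)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for t in range(iters):
        for i in range(pop_size):
            # simplified DE/rand/1/bin step (base vectors may include i)
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True          # guarantee one mutant gene
            trial = np.where(mask, a + F * (b - c), pop[i])
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i] = trial, ft
        if (t + 1) % renew_every == 0:              # structured renewal step
            best, fbest = pop[fit.argmin()].copy(), fit.min()
            centers = KMeans(n_clusters=5, n_init=4).fit(pop).cluster_centers_
            pop = (centers[rng.integers(5, size=pop_size)]
                   + rng.normal(0, 0.5, (pop_size, dim)) * pop.std(0))
            fit = np.apply_along_axis(f, 1, pop)
            pop[0], fit[0] = best, fbest            # keep the elite
    return pop[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x * x))             # toy benchmark
x_best, f_best = de_kmeans(sphere, dim=30)
print(f_best)
```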

16 pages, 3576 KB  
Article
Optimization of a Technological Package for the Biosorption of Heavy Metals in Drinking Water, Using Agricultural Waste Activated with Lemon Juice: A Sustainable Alternative for Native Communities in Northern Peru
by Eli Morales-Rojas, Pompeyo Ferro, Euclides Ticona Chayña, Adi Aynett Guevara Montoya, Angel Fernando Huaman-Pilco, Edwin Adolfo Díaz Ortiz, Lizbeth Córdova and Romel Ivan Guevara Guerrero
Sustainability 2026, 18(2), 1058; https://doi.org/10.3390/su18021058 - 20 Jan 2026
Abstract
The objective of this research was to optimize a technological package for the biosorption of heavy metals in water, using agricultural waste activated with lemon juice, as a sustainable development alternative. Heavy metals such as lead, cadmium, copper, and chromium were characterized in two stages (field and laboratory conditions) using the American Public Health Association (APHA) method, and morphological characterization was performed using scanning electron microscopy. Cocoa pod husk (CPH) and banana stem (BS) waste was collected with the informed consent of the native communities to obtain charcoal activated with lemon juice (LJ). In addition, a portable filter was designed that could be adapted to the native communities. The efficiency and validation of the filter were also calculated in the field. Statistical analysis was performed using Student's t-test and Pearson's correlation. The results show a significant reduction in lead from 0.209 mg/L to 0.02 mg/L. With regard to morphological characterization, more compact structures were observed after activation with BS, favoring the adsorption of heavy metals. The correlations were positive for copper and lead (1.000), evidently due to anthropogenic alteration. The efficiency of the cocoa filter reached 87.48% and that of the banana stem filter reached 88.77%. For the cadmium, copper, and chromium parameters, the values obtained were within the maximum permissible limit (LMP). The validation of the filters showed that 80% of the population agrees with using the filters and hopes for their large-scale implementation. These findings represent a new alternative for native communities and a solution to the problem of heavy metals in drinking water. Full article
(This article belongs to the Section Environmental Sustainability and Applications)
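Editor's note: the headline efficiencies follow the usual removal-efficiency formula E = (C0 - C)/C0 * 100. The one-liner below reproduces the scale of the reported lead reduction; the helper itself is ours, not the paper's.

```python
def removal_efficiency(c0, c):
    """Percent removal: initial concentration c0, treated concentration c."""
    return (c0 - c) / c0 * 100.0

# lead: 0.209 mg/L before filtration, 0.02 mg/L after (values from the abstract)
print(round(removal_efficiency(0.209, 0.02), 1), "% lead removed")   # ~90.4%
```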

25 pages, 3073 KB  
Article
A Two-Stage Intelligent Reactive Power Optimization Method for Power Grids Based on Dynamic Voltage Partitioning
by Tianliang Xue, Xianxin Gan, Lei Zhang, Su Wang, Qin Li and Qiuting Guo
Electronics 2026, 15(2), 447; https://doi.org/10.3390/electronics15020447 - 20 Jan 2026
Abstract
Aiming at issues such as reactive power distribution fluctuations and insufficient local support caused by large-scale integration of renewable energy in new power systems, as well as the poor adaptability of traditional methods and bottlenecks of deep reinforcement learning in complex power grids, a two-stage intelligent optimization method for grid reactive power based on dynamic voltage partitioning is proposed. Firstly, a comprehensive indicator system covering modularity, regulation capability, and membership degree is constructed. Adaptive MOPSO is employed to optimize K-means clustering centers, achieving dynamic grid partitioning and decoupling large-scale optimization problems. Secondly, a Markov Decision Process model is established for each partition, incorporating a penalty mechanism for safety constraint violations into the reward function. The DDPG algorithm is improved through multi-experience pool probabilistic replay and sampling mechanisms to enhance agent training. Finally, an optimal reactive power regulation scheme is obtained through two-stage collaborative optimization. Simulation case studies demonstrate that this method effectively reduces solution complexity, accelerates convergence, accurately addresses reactive power dynamic distribution and local support deficiencies, and ensures voltage security and optimal grid losses. Full article
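Editor's note: of the described components, the multi-experience-pool probabilistic replay is the most self-contained to sketch. The pool criterion (safe vs. constraint-violating transitions) and the sampling probabilities below are assumptions, not the paper's settings.

```python
import random

class MultiPoolReplay:
    """Probabilistic multi-experience-pool replay: transitions are routed to
    separate pools and each mini-batch draws from the pools with fixed
    probabilities. Generic sketch of the DDPG modification described."""
    def __init__(self, probs=(0.7, 0.3), cap=10000):
        self.pools = [[] for _ in probs]
        self.probs, self.cap = probs, cap

    def add(self, transition, violated):
        pool = self.pools[1 if violated else 0]   # pool 1: constraint violations
        pool.append(transition)
        if len(pool) > self.cap:
            pool.pop(0)                           # drop oldest when full

    def sample(self, batch_size):
        idxs = [i for i, p in enumerate(self.pools) if p]
        weights = [self.probs[i] for i in idxs]
        batch = []
        for _ in range(batch_size):
            i = random.choices(idxs, weights)[0]  # pick a pool, then a sample
            batch.append(random.choice(self.pools[i]))
        return batch

buf = MultiPoolReplay()
for i in range(100):
    buf.add((f"s{i}", "a", -1.0, f"s{i+1}"), violated=(i % 10 == 0))
print(len(buf.sample(8)))
```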

22 pages, 4546 KB  
Article
Comprehensive Strategy for Effective Exploitation of Offshore Extra-Heavy Oilfields with Cyclic Steam Stimulation
by Chunsheng Zhang, Jianhua Bai, Xu Zheng, Wei Zhang and Chao Zhang
Processes 2026, 14(2), 359; https://doi.org/10.3390/pr14020359 - 20 Jan 2026
Abstract
The N Oilfield is the first offshore extra-heavy oilfield developed using thermal recovery methods, adopting cyclic steam stimulation (CSS) and commissioned in 2022. The development of offshore heavy oil reservoirs is confronted with numerous technical and operational challenges. Key constraints include limited platform space, stringent economic thresholds for single-well production, and elevated operational risks, collectively contributing to significant uncertainties in project viability. For effective exploitation of the target oilfield, a comprehensive strategy was proposed, which consisted of effective artificial lifting and the treatment of steam channeling and high water cut. First, to achieve efficient artificial lifting of the extra-heavy oil, an integrated injection–production lifting technology using a jet pump was designed and implemented. In addition, during the first steam injection cycle, challenges such as inter-well steam channeling, high water cut, and an excessive water recovery ratio were encountered. Subsequent analysis indicated that low-quality reservoir intervals were the dominant sources of unwanted water production and preferential steam channeling pathways. To address these problems, a suite of efficiency-enhancing technologies was established, including regional steam injection for channeling suppression, classification-based water shutoff and control, and production regime optimization. Given the significant variations in geological conditions and production dynamics among different types of high-water-cut wells, a single plugging agent system proved inadequate for their diverse requirements. Therefore, customized water control countermeasures were formulated for specific well types, and a suite of plugging agent systems with tailored properties was subsequently developed, including high-temperature-resistant N2 foam, high-temperature-degradable gel, and high-strength ultra-fine cement systems. To date, regional steam injection has been implemented in 10 well groups, water control measures have been applied to 12 wells, and production regime optimization has been implemented in 5 wells. Up to the current production round, no steam channeling has been observed in the well groups after thermal treatment. Compared with the stage before these measures, the average water cut per well decreased by 10%. During the three-year production cycle, the average daily oil production per well increased by 10%, the cumulative oil increment of the oilfield reached 15,000 tons, and the total crude oil production exceeded 800,000 tons. This study provides practical technical insights for the large-scale and efficient development of extra-heavy oil reservoirs in the Bohai Oilfield and offers a valuable reference for similar reservoirs worldwide. Full article

23 pages, 2599 KB  
Article
Optimal Operation of EVs, EBs and BESS Considering EBs-Charging Piles Matching Problem Using a Novel Pricing Strategy Based on ICDLBPM
by Jincheng Liu, Biyu Wang, Hongyu Wang, Taoyong Li, Kai Wu, Yimin Zhao and Jing Liu
Processes 2026, 14(2), 324; https://doi.org/10.3390/pr14020324 - 16 Jan 2026
Abstract
Electric vehicles (EVs), electric buses (EBs), and battery energy storage systems (BESS), as both controllable power sources and loads, play an important role in providing flexibility for the power grid, especially with increased renewable energy penetration. However, there is still a lack of studies on EVs' pricing strategy as well as the EBs-charging piles matching problem. To address these issues, a multi-objective optimal operation model is presented to achieve the lowest load fluctuation level, minimum electricity cost, and maximum discharging benefit. An improved load boundary prediction method (ICDLBPM) and a novel pricing strategy are proposed. In addition, a reduction in the number of EB charging piles would not only impact normal operation of EBs, but could even lead to a decline in load flexibility. Thus, a handling method for the EBs-charging piles matching problem is presented. Several case studies were conducted on a regional distribution network comprising 100 EVs, 30 EBs, and 20 BESS units. The developed model and methodology demonstrate superior performance, improving load smoothness by 45.78% and reducing electricity costs by 19.73%. Furthermore, its effectiveness is also validated in a large-scale system, where it achieves additional reductions of 39.31% in load fluctuation and 62.45% in total electricity cost. Full article
(This article belongs to the Section Energy Systems)
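Editor's note: the load-fluctuation objective is concrete enough for a toy check, comparing the variance of the net load before and after an invented charge/discharge schedule is applied.

```python
import numpy as np

# Load-smoothness objective of the kind the model minimizes: variance of the
# net load once charging/discharging schedules are applied. Both profiles
# below are invented for illustration.
base_load = np.array([40, 35, 30, 45, 70, 90, 85, 60])      # MW, 8 periods
ev_schedule = np.array([10, 12, 15, 5, -8, -12, -10, -2])   # + charge, - discharge

net = base_load + ev_schedule
for name, profile in (("before", base_load), ("after", net)):
    print(name, "variance:", round(float(np.var(profile)), 1))
```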
