Search Results (143)

Search Parameters:
Keywords = gas network digitalization

23 pages, 1129 KB  
Review
Trends in Renewable Energy Adoption for Climate Change Mitigation: A Bibliometric Analysis
by Henerica Tazvinga, Christina M. Botai and Nosipho Zwane
Energies 2026, 19(8), 1918; https://doi.org/10.3390/en19081918 - 15 Apr 2026
Abstract
The shift to renewable energy sources is widely seen as a promising way to reduce carbon emissions and mitigate the impacts of climate change. The abundance of renewable energy resources in Africa has enormous potential to reduce greenhouse gas emissions and promote climate resilience. This study conducted a bibliometric analysis of research trends in the adoption of renewable energy systems for climate change mitigation in Africa from 1993 to the first quarter of 2025. The results showed steady growth in publications during the 2000s, with an annual growth rate of approximately 12.7%, reaching a peak in 2024 and indicating increasing research interest in Africa. The thematic analysis highlights key but underdeveloped and emerging themes for further investigation, including climate change mitigation, renewable energy sources, greenhouse gas assessment, climate change, energy policy, economic growth, carbon emissions, energy consumption, rural electrification, and energy transformation. The findings also revealed regional disparities, highlighting the need to strengthen institutional capacity, develop clear long-term policies, and create innovative financing mechanisms to expedite the deployment of renewable energy. Additionally, results from network analysis and emerging-keyword detection revealed that enhanced regional and international cooperation, grid modernization, and technological innovation, such as energy storage and digital solutions, are vital to optimizing resource utilization and ensuring energy access and security. The study thus provides insights into existing research gaps and future research directions, which will benefit policymakers, academics, and related stakeholders in their efforts to utilize Africa’s renewable energy potential to mitigate climate change, enable sustainable development, and achieve energy security throughout the continent.
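Keyword co-occurrence counting is the usual first step behind this kind of thematic and network analysis. A minimal sketch, using hypothetical keyword sets rather than the study's actual corpus:

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword sets standing in for indexed publications;
# the study's actual corpus is not reproduced here.
records = [
    {"renewable energy", "climate change mitigation", "energy policy"},
    {"renewable energy", "carbon emissions", "energy policy"},
    {"renewable energy", "rural electrification", "energy access"},
]

def cooccurrence(records):
    """Count how often each keyword pair appears in the same record."""
    pairs = Counter()
    for keywords in records:
        for a, b in combinations(sorted(keywords), 2):
            pairs[(a, b)] += 1
    return pairs

links = cooccurrence(records)
# strongest link here: ("energy policy", "renewable energy"), weight 2
```

Thresholding such pair counts yields the edges of the keyword network that tools like VOSviewer visualize.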
40 pages, 5294 KB  
Article
Optimizing Carbon Capture Efficiency: Knowledge Extraction from Process Simulations of Post-Combustion Amine Scrubbing
by Mohammad Fazle Rabbi
Mach. Learn. Knowl. Extr. 2026, 8(4), 87; https://doi.org/10.3390/make8040087 - 2 Apr 2026
Abstract
Post-combustion amine scrubbing using monoethanolamine (MEA) remains a leading carbon capture technology, yet its deployment is constrained by high regeneration energy requirements and the computational expense of rigorous process simulation. This study presents an integrated framework coupling high-fidelity rate-based process simulation with explainable machine learning to systematically characterize a ten-dimensional operating space for MEA-based CO2 absorption. Latin hypercube sampling generated 10,000 steady-state cases, and five regression architectures were benchmarked under identical protocols. A neural network achieved the highest accuracy (R2 = 0.9729, RMSE = 1.43%), while XGBoost was selected as the operational surrogate due to its robust computational efficiency (1.5 ms inference latency) and native compatibility with exact Shapley value decomposition. SHAP analysis identified liquid-to-gas ratio as the dominant efficiency determinant, contributing 46.6% of total predictive importance, followed by inlet temperature and MEA concentration, with these three parameters collectively explaining 85% of efficiency variation and establishing a compact control hierarchy suitable for reduced-order control architectures. Bivariate interaction analysis located a high-efficiency operating region, while sensitivity analysis confirmed the strong influence of inlet temperature across the operating envelope. Pareto optimization via NSGA-II generated tiered operational guidelines spanning the 85% to 98% capture efficiency range, quantifying a 39% specific regeneration duty penalty (3.1 to 4.3 MJ/kg CO2) for pursuing maximum versus baseline capture targets. The framework demonstrates how explainable machine learning converts opaque process simulations into actionable engineering knowledge, providing a transparent and computationally efficient basis for design optimization and digital twin deployment in post-combustion carbon capture systems.
(This article belongs to the Section Learning)
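The Latin hypercube sampling used to generate the 10,000 simulation cases can be sketched as follows; the three variables and their bounds are illustrative assumptions, not the paper's actual ten-dimensional design space:

```python
import random

def latin_hypercube(n, bounds, rng=None):
    """Draw n points stratified per dimension: one point per equal-width
    stratum in each dimension, with strata randomly paired across dims."""
    rng = rng or random.Random(0)
    columns = []
    for lo, hi in bounds:
        # one draw inside each of the n strata of [lo, hi], then shuffle
        pts = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(pts)
        columns.append(pts)
    return [tuple(col[i] for col in columns) for i in range(n)]

# Assumed ranges for three of the operating variables
# (liquid-to-gas ratio, inlet temperature in C, MEA wt%).
samples = latin_hypercube(100, [(0.5, 2.0), (40.0, 60.0), (20.0, 35.0)])
```

Each dimension is covered evenly even at modest sample counts, which is why LHS is preferred over plain uniform sampling for building surrogate training sets.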

32 pages, 59024 KB  
Article
Digital Core-Based Characterization and Fracability Evaluation of Deep Shale Gas Reservoirs in the Weiyuan Area, Sichuan Basin, China
by Jing Li, Yuqi Deng, Tingting Huang, Guo Chen, Bei Yang, Xiaohai Ren and Hu Li
Minerals 2026, 16(4), 366; https://doi.org/10.3390/min16040366 - 31 Mar 2026
Abstract
Deep shale gas reservoirs in the southern Sichuan Basin (Weiyuan area) exhibit strong heterogeneity and complex pore-fracture networks. Traditional reservoir evaluation methods struggle to accurately capture their microscale pore characteristics and fracability, thereby restricting efficient development and precise sweet spot prediction. Therefore, integrating digital core technology with geological analysis is essential to systematically quantify key reservoir parameters, including microscale pore structure, mineral composition, and brittleness characteristics. To clarify the controlling factors of high-quality deep shale gas reservoirs in the Weiyuan area and assess their exploration and development potential, we performed digital core analysis at micron to nanometer scales. Three-dimensional digital core models of representative deep shale gas wells were constructed. Integrating mineral composition, geochemical characteristics, and pore space features, we discuss the geological conditions for deep shale gas accumulation and the fracability of horizontal wells, and we delineate favorable shale reservoir zones. The results show that digital core technology enables quantitative and visual characterization of each sublayer of the Longmaxi Formation shale reservoir, including mineral types, laminae types, pore-throat structures, and organic matter distribution. From the Long 11-1 sublayer to the Long 11-4 sublayer, the pore-throat radius, total pore volume, total throat volume, connected pore-throat percentage, and coordination number all gradually decrease. In the eastern Weiyuan area, the siliceous components in deep shale gas reservoirs at the base of the Longmaxi Formation are primarily of both biogenic and terrigenous origin. Due to local variations in the sedimentary environment, terrigenous input contributes significantly to the total siliceous content in this region. Although the Long 11-1 sublayer of the Longmaxi Formation is lithologically classified as mud shale, its particle size and mineral composition more closely resemble those of clayey siltstone or argillaceous sandstone, suggesting considerable potential for reservoir space development. Typical wells in the eastern Weiyuan area exhibit distinct lithological characteristics, including coarser grain sizes, stronger hydrodynamic conditions during deposition, and abundant terrigenous clastic supply. The rigid framework formed by silt- to sand-sized particles effectively mitigates compaction, thereby facilitating the preservation of intergranular pores and microfractures. High organic matter abundance, appropriate thermal maturity, and a considerable thickness of high-quality shale ensured sufficient hydrocarbon supply. The main types of natural fractures are intergranular and grain-edge fractures formed by differences in sedimentary grain size, and bedding-parallel fractures generated by hydrocarbon generation overpressure. Based on reservoir mineral composition, pore characteristics, areal porosity, and pore size distribution identified via digital core analysis, the bottom 0–3 m of the Long 11-1 sublayer is determined to be the optimal target interval. By delineating the microscopic characteristics of the shale reservoir and predicting rock mechanical parameters, a fracability evaluation index was established from digital core simulations. This guides the selection of target layers in deep shale gas reservoirs and optimizes hydraulic fracturing design.
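A common mineralogical brittleness index (brittle minerals over total mineral content) illustrates the kind of quantity that feeds such a fracability evaluation; this generic formula and the sample composition are illustrative, not the paper's digital-core-derived index:

```python
def mineral_brittleness_index(quartz, feldspar, carbonate, clay, other=0.0):
    """Common mineralogical brittleness index: weight fraction of brittle
    minerals (quartz + feldspar + carbonate) over total mineral content.
    Illustrative only; the paper's fracability index is derived from
    digital-core simulations and may be defined differently."""
    brittle = quartz + feldspar + carbonate
    total = brittle + clay + other
    return brittle / total

# hypothetical shale composition in wt%
bi = mineral_brittleness_index(quartz=45.0, feldspar=5.0, carbonate=10.0,
                               clay=35.0, other=5.0)
```

Higher values indicate a rock more prone to forming complex fracture networks under hydraulic stimulation.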

24 pages, 2457 KB  
Article
An Enhanced ABC Algorithm with Hybrid Initialization and Stagnation-Guided Search for Parameter-Efficient Text Summarization
by Yun Liu, Yingjing Yao, Wenyu Pei, Mengqi Liu and Hao Gao
Mathematics 2026, 14(7), 1120; https://doi.org/10.3390/math14071120 - 27 Mar 2026
Abstract
The digital transformation of oil and gas pipeline networks has generated substantial volumes of unstructured maintenance documentation from communication systems, creating an urgent need for automated summarization to improve operational efficiency. However, domain-specific text summarization for pipeline communication maintenance remains challenging due to scarce labeled data and the high computational cost of fine-tuning large pretrained models. Parameter-efficient fine-tuning alleviates this issue, but its effectiveness strongly depends on appropriate hyperparameter selection. This paper proposes a unified framework that combines weight-decomposed low-rank adaptation with an enhanced Artificial Bee Colony algorithm for automated hyperparameter optimization. The enhanced algorithm addresses two specific limitations of the standard Artificial Bee Colony algorithm: uninformed random initialization that ignores promising regions, and premature abandonment of stagnated solutions that discards partially useful search directions. These two components represent principled design choices, each targeting a distinct bottleneck in applying swarm intelligence search to high-dimensional mixed-type hyperparameter spaces. The method introduces a hybrid initialization strategy to exploit prior knowledge and a stagnation-guided local search mechanism to refine stagnated solutions instead of discarding them, achieving a better balance between exploration and exploitation. Experimental results on a public Chinese summarization benchmark and an industrial oil and gas pipeline communication maintenance corpus show that the proposed approach consistently outperforms full fine-tuning, manually tuned parameter-efficient methods, and several evolutionary optimization baselines in terms of ROUGE metrics. The automated search introduces modest additional computational overhead compared to manual tuning while eliminating expert-dependent hyperparameter configuration and achieving consistent performance gains across both datasets. Overall, the proposed framework provides an efficient and robust solution for adapting large language models to specialized summarization tasks in the context of pipeline communication system maintenance.
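The stagnation-guided idea (refine a stalled solution rather than discard it, as the standard ABC scout phase would) can be illustrated with a toy local search; the shrinking-step operator and sphere objective here are stand-ins, not the paper's exact mechanism:

```python
import random

def refine(x, f, rng, steps=50, sigma=0.5):
    """Instead of abandoning a stagnated food source (the standard ABC
    scout phase), search locally around it with a Gaussian step that
    shrinks over the refinement budget. Greedy acceptance keeps the
    best point found so far."""
    best, f_best = list(x), f(x)
    for t in range(steps):
        step = sigma * (1.0 - t / steps)          # shrink as search proceeds
        cand = [xi + rng.gauss(0.0, step) for xi in best]
        f_cand = f(cand)
        if f_cand < f_best:                       # accept only improvements
            best, f_best = cand, f_cand
    return best, f_best

sphere = lambda v: sum(xi * xi for xi in v)       # toy objective
stagnated = [3.0, -2.0]                           # pretend this solution stalled
refined, f_refined = refine(stagnated, sphere, random.Random(42))
```

Because acceptance is greedy, the refined solution can never be worse than the stagnated one, which is the point of reusing rather than discarding it.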

48 pages, 14824 KB  
Review
Convergence of Multidimensional Sensing: A Review of AI-Enhanced Space-Division Multiplexing in Optical Fiber Sensors
by Rabiu Imam Sabitu and Amin Malekmohammadi
Sensors 2026, 26(7), 2044; https://doi.org/10.3390/s26072044 - 25 Mar 2026
Abstract
The growing demand for high-fidelity, multi-parameter, distributed sensing in critical domains such as structural health monitoring, oil and gas exploration, and secure perimeter surveillance is pushing traditional optical fiber sensors (OFS) to their performance limits. Although conventional multiplexing techniques such as time-division and wavelength-division multiplexing (TDM, WDM) have been commercially successful, they are rapidly approaching fundamental bottlenecks in sensor density, spatial resolution, and data capacity. This review argues that the synergistic convergence of space-division multiplexing (SDM) and artificial intelligence (AI) represents a paradigm shift, enabling a new generation of intelligent, high-dimensional sensing networks. We comprehensively survey the state of the art in SDM-based OFS, detailing the operating principles and applications of multi-core fibers (MCFs) for ultra-dense sensor arrays and 3D shape sensing, as well as few-mode fibers (FMFs) for mode-division multiplexing and enhanced multi-parameter discrimination. However, the unprecedented spatial parallelism provided by SDM introduces significant challenges, including inter-channel crosstalk, complex signal demultiplexing, and massive data volumes. This paper systematically explores how AI, particularly machine learning (ML) and deep learning (DL), is being leveraged not merely as a tool but as an indispensable core technology to mitigate these impairments. We critically analyze AI’s role in digital crosstalk suppression, intelligent mode demultiplexing, signal denoising, and solving complex inverse problems for parameter estimation. Furthermore, we highlight how this AI–SDM synergy enables capabilities beyond the reach of either technology alone, such as super-resolution sensing and predictive analytics. The discussion is extended to include the critical supporting pillars of this ecosystem, such as advanced interrogation techniques and the associated data management challenges. Finally, we provide a forward-looking perspective on the trajectory of the field, outlining a path toward cognitive sensing networks that are self-calibrating, adaptive, and capable of autonomous decision-making. This review is intended to serve as a foundational reference for researchers and engineers at the intersection of photonics and intelligent systems, illuminating the pathway toward tomorrow’s intelligent sensing infrastructure.
(This article belongs to the Collection Artificial Intelligence in Sensors Technology)

28 pages, 4916 KB  
Article
Improving Manufacturing Line Design Efficiency Using Digital Value Stream Mapping
by P Paryanto, Muhammad Faizin and Jörg Franke
J. Manuf. Mater. Process. 2026, 10(3), 98; https://doi.org/10.3390/jmmp10030098 - 13 Mar 2026
Abstract
This study proposes a real-time data-based Digital Value Stream Mapping (Digital VSM) framework that integrates Artificial Intelligence (AI) feature selection and discrete-event simulation validation to enhance production system performance. Unlike conventional VSM approaches that rely on static, manually aggregated data, the proposed framework uses real-time operational data to dynamically quantify Value Added (VA), Non-Value Added (NVA), and Necessary Non-Value Added (NNVA) activities. To improve decision accuracy, an Artificial Neural Network (ANN) combined with Genetic Algorithm (GA) feature selection is employed to identify dominant production variables influencing lead time and line imbalance. Furthermore, Ranked Positional Weight (RPW) optimization results are validated through Tecnomatix Plant Simulation to ensure robustness before physical implementation. The proposed framework was applied to a discrete manufacturing line, resulting in a reduction of total lead time from 8755 s to 6400 s and an increase in process ratio from 33.64% to 45.91%, with line efficiency reaching 91.7%. The findings demonstrate that integrating Digital VSM with AI-driven feature selection and simulation validation transforms Lean analysis from a descriptive tool into a predictive and validated decision-support system suitable for Industry 4.0 environments.
(This article belongs to the Special Issue Emerging Methods in Digital Manufacturing)
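A quick consistency check on the reported numbers: if the value-added time is assumed roughly unchanged, the lead-time reduction alone reproduces the reported process-ratio gain to within about 0.1 percentage point (the small residual suggests the value-added time itself also shifted slightly):

```python
def process_ratio(value_added_s, lead_time_s):
    """Process ratio = value-added time / total lead time."""
    return value_added_s / lead_time_s

# Reported figures: lead time 8755 s -> 6400 s, process ratio 33.64% -> 45.91%.
# Assumption: value-added time held constant at its implied initial value.
value_added = 0.3364 * 8755          # implied value-added time, about 2945 s
after = process_ratio(value_added, 6400)   # about 0.4602 vs reported 0.4591
```
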

25 pages, 2662 KB  
Review
Optimizing Biomass Feedstock Logistics Using AI for Integrated Multimodal Transport in Bioenergy and Bioproduct Systems: A Review
by Johanna Gonzalez and Jingxin Wang
Logistics 2026, 10(3), 54; https://doi.org/10.3390/logistics10030054 - 2 Mar 2026
Abstract
Background: The constant growth in demand for sustainable energy products and the development of the circular economy have created a critical need for an efficient supply chain for biomass. However, the inherent challenges of biomass make its harvesting, collection, storage, and transport difficult, impacting logistical efficiency and the viability of bioenergy and bioproduct production. This study analyzes how combining artificial intelligence (AI) with multimodal transport can optimize and improve efficiency, as well as reduce costs, in biomass logistics. Methods: The study uses a tiered research framework that encompasses the physical domain (biomass limitations), the structural domain (mathematical modeling for multimodal transport), the intelligence domain (AI-based decision making), and the strategic approach. Results: The outcomes indicate that while truck transport is ideal for short distances, integrating rail and water transport through AI-driven optimization reduces costs and greenhouse gas emissions for long-distance travel. AI technologies, such as digital twins and machine learning, improve demand forecasting, real-time routing, and cargo consolidation, leading to enhanced prediction accuracy for transport costs. Conclusions: The integration of AI and multimodal networks builds resilient and sustainable biomass supply chains. However, full implementation requires addressing data fragmentation and investing in digital infrastructure to enable seamless coordination between supply chain stakeholders.
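The truck-versus-rail trade-off described in the results comes down to a fixed transload cost against a lower per-kilometre haul rate; a sketch with assumed unit costs (illustrative numbers, not figures from the review):

```python
def route_cost(distance_km, cost_per_km, transload_fee=0.0):
    """Door-to-door cost of one transport option: an optional fixed
    transload fee plus a per-kilometre haul rate. All rates below are
    illustrative assumptions."""
    return transload_fee + distance_km * cost_per_km

TRUCK_RATE = 0.12     # $/t-km, assumed: no transload, higher haul rate
RAIL_RATE = 0.05      # $/t-km, assumed: cheaper haul per km
TRANSLOAD = 15.0      # $/t, assumed fixed cost to move biomass onto rail

def cheaper_mode(distance_km):
    truck = route_cost(distance_km, TRUCK_RATE)
    rail = route_cost(distance_km, RAIL_RATE, TRANSLOAD)
    return "truck" if truck <= rail else "rail"
```

With these rates the break-even distance is 15 / (0.12 - 0.05), about 214 km: truck wins below it, rail above, which mirrors the short-haul/long-haul split the review reports.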

26 pages, 1919 KB  
Article
Optimising Harbour Construction Projects for Environmental Sustainability: A Hybrid Artificial Intelligence Approach
by Mohamed T. Elnabwy, Mohamed ElAgroudy, Emad Elbeltagi, Mahmoud M. El Banna, Ehab A. Mlybari and Hossam Wefki
Sustainability 2026, 18(5), 2162; https://doi.org/10.3390/su18052162 - 24 Feb 2026
Abstract
Harbour sedimentation represents a major challenge to the environmental sustainability and operational efficiency of coastal infrastructure, as frequent dredging activities increase maintenance costs, ecological disturbance, and carbon emissions. Conventional physical and numerical sediment transport models, while widely applied, are computationally intensive and often unsuitable for early-stage, sustainability-oriented design optimisation. To address these limitations, this study proposes a hybrid artificial intelligence-based optimisation framework integrating Artificial Neural Networks (ANNs), Genetic Algorithms (GAs), and Particle Swarm Optimisation (PSO) for sustainable breakwater and harbour layout design. Hydrodynamic simulations using the Coastal Modelling System (CMS) were conducted to generate a comprehensive dataset describing sediment transport behaviour under varying geometric and structural configurations. An ANN surrogate model was trained to capture nonlinear relationships between breakwater parameters and accumulated sedimentation volume, while GA-based global optimisation and PSO-based validation and local refinement were employed to identify optimal design solutions. Comparative assessment demonstrated consistent convergence of ANN–GA and ANN–PSO solutions within the same design region, with a maximum deviation of 8.46% between design variables and a sedimentation difference of 2.4%. The hybrid ANN–GA–PSO framework achieved the lowest predicted sedimentation volume, representing an improvement of approximately 2.3% relative to the ANN–GA baseline. The proposed framework supports Integrated Coastal Structures Management (ICSM) by enabling proactive, design-stage reduction in long-term sediment accumulation and dredging requirements, offering a scalable pathway toward sustainable and digital-twin-enabled harbour planning.
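The surrogate-plus-GA loop can be sketched with a minimal real-coded genetic algorithm; the analytic objective below stands in for the trained ANN sedimentation surrogate, and all operator settings are illustrative assumptions:

```python
import random

def ga_minimize(f, bounds, rng, pop_size=30, generations=40):
    """Minimal real-coded GA: keep the best half each generation, create
    children by blend crossover plus Gaussian mutation, clip to bounds.
    A sketch of the surrogate-plus-GA idea, not the paper's implementation."""
    population = [[rng.uniform(lo, hi) for lo, hi in bounds]
                  for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(population, key=f)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1.0 - w) * y for x, y in zip(a, b)]
            child = [min(hi, max(lo, c + rng.gauss(0.0, 0.05 * (hi - lo))))
                     for c, (lo, hi) in zip(child, bounds)]
            children.append(child)
        population = elite + children
    return min(population, key=f)

# Stand-in analytic objective in place of the trained ANN sedimentation
# surrogate (hypothetical; minimum at x = 1.0, y = -0.5).
surrogate = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 0.5) ** 2
best = ga_minimize(surrogate, [(-2.0, 2.0), (-2.0, 2.0)], random.Random(7))
```

Because a surrogate evaluation is cheap relative to a CMS hydrodynamic run, the GA can afford the thousands of evaluations such a search requires.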

20 pages, 1420 KB  
Article
High-Level Synthesis (HLS)-Enabled Field-Programmable Gate Array (FPGA) Algorithms for Latency-Critical Plasma Diagnostics and Neural Trigger Prototyping in Next-Generation Energy Projects
by Radosław Cieszewski, Krzysztof Poźniak, Ryszard Romaniuk and Maciej Linczuk
Energies 2026, 19(4), 1091; https://doi.org/10.3390/en19041091 - 21 Feb 2026
Abstract
Large-scale advanced energy systems, including fusion devices, high-power plasma sources, and accelerator-driven energy platforms, increasingly depend on real-time, hardware-level data processing for diagnostics, control, and protection. In such installations, ultra-low latency, deterministic throughput, and multi-decade operational lifetimes are not optional design goals but strict system-level requirements. While similar timing constraints exist in high-energy physics infrastructures, energy applications place a stronger emphasis on long-term stability, maintainability, and reproducibility of digital signal processing pipelines. This work investigates whether high-level synthesis (HLS) provides a practical and sustainable design methodology for implementing both classical pattern-based and compact neural network (NN) trigger logic on Field-Programmable Gate Arrays (FPGAs) under realistic energy-system constraints. Using representative commercial toolchains (Intel HLS and hls4ml) as reference workflows, we demonstrate the capabilities of fixed-point, fully pipelined streaming architectures, while also identifying critical shortcomings of pragma-driven HLS approaches in terms of architecture transparency, long-term portability, and systematic multi-objective design-space exploration, all of which are crucial for long-lived energy projects and plasma diagnostic systems. These limitations directly motivate the development of a custom, vendor-agnostic, extensible HLS framework (PyHLS), specifically oriented toward deterministic latency, reproducibility, and physics-grade verification demands of advanced energy infrastructures. Gas Electron Multipliers (GEMs) are modern gaseous detectors increasingly employed in plasma diagnostics, radiation monitoring, and high-power energy experiments, where high rate capability, fine spatial resolution, and radiation tolerance are required. Their massively parallel signal structure and continuous data streams make GEMs a representative and demanding benchmark for FPGA-based real-time trigger and preprocessing systems in energy-related environments. The primary objective of this study is to establish a pragmatic technological baseline, demonstrating that contemporary HLS workflows can reliably support both template-based and neural inference-based trigger architectures within strict timing, resource, and power constraints typical for advanced energy installations. Furthermore, we outline a scalable development path toward multi-channel and two-dimensional (pixelated) GEM readout architectures, directly applicable to fusion diagnostics, plasma accelerators, beam–plasma interaction studies, and radiation-hard energy monitoring platforms. Although the proposed methodology remains fully transferable to large-scale physics trigger systems, its principal relevance is directed toward real-time diagnostics and protection layers in next-generation energy systems.
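The fixed-point quantization underlying such fully pipelined streaming datapaths can be sketched in a few lines; this is a generic illustration of the numeric format, not PyHLS or hls4ml code:

```python
def to_fixed(x, frac_bits=8):
    """Quantize a real value to a signed fixed-point integer with
    frac_bits fractional bits, as an HLS flow does before synthesis
    so the datapath needs only integer arithmetic."""
    return round(x * (1 << frac_bits))

def from_fixed(q, frac_bits=8):
    """Map the fixed-point integer back to a real value."""
    return q / (1 << frac_bits)

# Worst-case rounding error is half an LSB: 2 ** -(frac_bits + 1).
```

Choosing `frac_bits` trades FPGA resource usage against precision, which is one axis of the multi-objective design-space exploration the abstract discusses.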

21 pages, 21467 KB  
Article
Exploitation of Multi-Sensor UAS Surveying for Monitoring the Volcanic Unrest at Vulcano Island (September 2021–June 2024)
by Matteo Cagnizi, Mauro Coltelli, Luigi Lodato, Peppe Junior Valentino D’Aranno, Maria Marsella and Francesco Rossi
Remote Sens. 2026, 18(4), 601; https://doi.org/10.3390/rs18040601 - 14 Feb 2026
Abstract
In September 2021, significant changes in the geophysical and geochemical parameters on Vulcano Island were recorded by the surveillance network activities and periodic surveys. Between October 2021 and June 2024, additional surveys were conducted to acquire LIDAR, thermal, and RGB datasets for the generation of Digital Terrain Models (DTMs), orthophotos, and fumarole field maps. These data were collected using DJI Matrice 300 UAS platforms. Precision positioning was ensured through a POS/NAV RTK georeferencing approach. The instrumentation included Genius R-Fans-16 and DJI Zenmuse L1 laser scanners for structural mapping, alongside Zenmuse H20T infrared cameras for the thermal detection of potential instabilities on the volcano flanks. The surveys focused on the northern area and summit of Gran Cratere La Fossa and were repeated in May 2022, October 2022, October 2023, and June 2024. Additionally, 3D reconstruction targeted morphological variations in unstable areas such as the cone top, Forgia Vecchia, and the 1988 landslide site. In May 2022, anomalous degassing in the Eastern Bay led to increased gas and hydrothermal fluid emissions, causing water whitening in front of Baia di Levante. Optical-thermal monitoring, both on land and at sea, detected multiple hydrothermal gas streams, aiding in assessing the magnitude and areal extension of fumarolic fields. These findings contribute to establishing a comprehensive monitoring approach for understanding the evolution of volcanic unrest cost-effectively and safely.
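Morphological change between repeat DTMs is typically quantified with a DEM of Difference; a minimal sketch on toy grids (the actual processing chain, with co-registration and uncertainty thresholding, is more involved):

```python
def dod_volume(dtm_before, dtm_after, cell_area_m2):
    """Net volume change (m^3) between two co-registered DTM grids:
    the DEM of Difference summed over cells. Positive means net
    accumulation, negative means net loss (e.g. landslide erosion)."""
    return sum((after - before) * cell_area_m2
               for row_after, row_before in zip(dtm_after, dtm_before)
               for after, before in zip(row_after, row_before))

# Toy 2x2 grids (elevations in metres, 4 m^2 cells): one cell gains 1 m.
before = [[100.0, 100.0], [100.0, 100.0]]
after = [[101.0, 100.0], [100.0, 100.0]]
```
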

32 pages, 3953 KB  
Review
Coal Research in the Global Energy Transition: Trends and Transformation (1975–2024)
by Medet Junussov, Geroy Zh. Zholtayev, Maxat K. Kembayev, Zamzagul T. Umarbekova, Moldir A. Mashrapova, Anatoly A. Antonenko and Biao Fu
Energies 2026, 19(4), 1017; https://doi.org/10.3390/en19041017 - 14 Feb 2026
Abstract
Driven by cleaner energy demands, environmental regulations, and technological advances, coal science is rapidly evolving, creating the need to understand its transition and transformation within the global energy research landscape. Building upon earlier national- and topic-specific bibliometric studies, this study presents a comprehensive long-term global bibliometric analysis of coal research (1975–2024), based on 272,370 Web of Science records, applying the Cross-Disciplinary Publication Index (CDPI), the Technology–Economic Linkage Model (TELM), VOSviewer, and Excel to assess research growth, structural shifts, and interdisciplinary integration. Results show that coal research is dominated by articles (74%) with publication output peaking at ~19,500 in 2024, reflecting fluctuations in global coal prices due to energy transition market dynamics. CDPI results highlight Energy & Fuels (0.83), Chemical Engineering (0.80), Environmental Sciences (0.77), Materials Science (0.74), and Geosciences (0.66), showing coal’s central role across technology, environment, and geological research domains and revealing a clear shift toward sustainability-oriented and advanced material applications. China leads output (122,130 publications), with strong contributions from the China University of Mining and Technology and the Chinese Academy of Sciences, while the USA, Australia, and Europe maintain strong international collaboration networks. The evolution of coal research can be divided into three major phases: conventional mining, coal preparation, combustion, and coalbed methane commercialization (1975–2004; ~64,000 publications); integrated gasification combined cycle (IGCC) and carbon capture and storage (CCS) technologies (2005–2014; ~58,707 publications); and a recent phase dominated by by-product valorization, carbon capture utilization and storage (CCUS), and digital technologies (AI, IoT, ML) (2015–2024; ~146,174 publications). Contemporary coal research spans three interconnected domains: energy supply (≈36% of global electricity generation and ~15 Gt CO2 emissions), resource and geoscience applications (including large-scale fly ash utilization and critical element recovery), and environmental and health impacts related to greenhouse gas and pollutant emissions. The findings demonstrate that coal science is transitioning from a conventional fossil fuel-centered discipline toward an integrated, interdisciplinary energy research field, emphasizing emission reduction, resource efficiency, digitalization, and circular economy applications, thereby extending prior bibliometric studies through unprecedented temporal coverage, global scope, and the combined application of CDPI and TELM frameworks, providing critical insights for future energy strategies and policy development.
(This article belongs to the Section B: Energy and Environment)

23 pages, 8113 KB  
Article
Estimating H I Mass Fraction in Galaxies with Bayesian Neural Networks
by Joelson Sartori, Cristian G. Bernal and Carlos Frajuca
Galaxies 2026, 14(1), 10; https://doi.org/10.3390/galaxies14010010 - 2 Feb 2026
Abstract
Neutral atomic hydrogen (H I) regulates galaxy growth and quenching, but direct 21 cm measurements remain observationally expensive and affected by selection biases. We develop Bayesian neural networks (BNNs)—a type of neural model that returns both a prediction and an associated uncertainty—to infer the H I mass, log10(MHI), from widely available optical properties (e.g., stellar mass, apparent magnitudes, and diagnostic colors) and simple structural parameters. For continuity with the photometric gas fraction (PGF) literature, we also report the gas-to-stellar-mass ratio, log10(G/S), where explicitly noted. Our dataset is a reproducible cross-match of SDSS DR12, the MPA–JHU value-added catalogs, and the 100% ALFALFA release, resulting in 31,501 galaxies after quality controls. To ensure fair evaluation, we adopt fixed train/validation/test partitions and an additional sky-holdout region to probe domain shift, i.e., how well the model extrapolates to sky regions that were not used for training. We also audit features to avoid information leakage and benchmark the BNNs against deterministic models, including a feed-forward neural network baseline and gradient-boosted trees (GBTs, a standard tree-based ensemble method in machine learning). Performance is assessed using mean absolute error (MAE), root-mean-square error (RMSE), and probabilistic diagnostics such as the negative log-likelihood (NLL, a loss that rewards models that assign high probability to the observed H I masses), reliability diagrams (plots comparing predicted probabilities to observed frequencies), and empirical 68%/95% coverage. The Bayesian models achieve point accuracy comparable to the deterministic baselines while additionally providing calibrated prediction intervals that adapt to stellar mass, surface density, and color. 
This enables galaxy-by-galaxy uncertainty estimation and prioritization for 21 cm follow-up that explicitly accounts for predicted uncertainties (“risk-aware” target selection). Overall, the results demonstrate that uncertainty-aware machine-learning methods offer a scalable and reproducible route to inferring galactic H I content from widely available optical data. Full article
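The probabilistic diagnostics this abstract lists (MAE, RMSE, Gaussian NLL, and empirical 68%/95% coverage) can be sketched in a few lines. The data below are synthetic and the function name `prob_metrics` is illustrative, not from the paper:

```python
import numpy as np

def prob_metrics(y_true, mu, sigma):
    """Point and probabilistic diagnostics for Gaussian predictive
    distributions: MAE, RMSE, mean negative log-likelihood, and
    empirical coverage of the central 68% / 95% intervals."""
    err = y_true - mu
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    # Gaussian NLL: rewards models that assign high probability to the observations
    nll = np.mean(0.5 * np.log(2 * np.pi * sigma ** 2)
                  + err ** 2 / (2 * sigma ** 2))
    cov68 = np.mean(np.abs(err) <= 1.0 * sigma)   # within ±1 sigma
    cov95 = np.mean(np.abs(err) <= 1.96 * sigma)  # within ±1.96 sigma
    return mae, rmse, nll, cov68, cov95

# Synthetic check: a perfectly calibrated predictor whose stated
# uncertainty matches the true error scale
rng = np.random.default_rng(0)
sigma = np.full(20000, 0.3)               # predicted uncertainties
y = rng.normal(0.0, 0.3, size=20000)      # errors drawn from that same scale
mae, rmse, nll, cov68, cov95 = prob_metrics(y, np.zeros_like(y), sigma)
```

For a well-calibrated predictor the empirical coverages land near 0.68 and 0.95, which is the same property that reliability diagrams check visually.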

22 pages, 2193 KB  
Article
Deep Reinforcement Learning-Based Experimental Scheduling System for Clay Mineral Extraction
by Bo Zhou, Lei He, Yongqiang Li, Zhandong Lv and Shiping Zhang
Electronics 2026, 15(3), 617; https://doi.org/10.3390/electronics15030617 - 31 Jan 2026
Abstract
Efficient and non-destructive extraction of clay minerals is fundamental for shale oil and gas reservoir evaluation and enrichment mechanism studies. However, traditional manual extraction experiments face bottlenecks such as low efficiency and reliance on operator experience, which limit their scalability and adaptability to intelligent research demands. To address this, this paper proposes an intelligent experimental scheduling system for clay mineral extraction based on deep reinforcement learning. First, the complex experimental process is deconstructed, and its core scheduling stages are abstracted into a Flexible Job Shop Scheduling Problem (FJSP) model with resting time constraints. Then, a scheduling agent based on the Proximal Policy Optimization (PPO) algorithm is developed and integrated with an improved Heterogeneous Graph Neural Network (HGNN) to represent the relationships among operations, machines, and constraints. This enables effective capture of the complex topological structure of the experimental environment and facilitates efficient sequential decision-making. To facilitate future practical applicability, a four-layer system architecture is proposed, comprising the physical equipment layer, execution control layer, scheduling decision layer, and interactive application layer. A digital twin module is designed to bridge the gap between theoretical scheduling and physical execution. This study focuses on validating the core scheduling algorithm through realistic simulations. Simulation results demonstrate that the proposed HGNN-PPO scheduling method significantly outperforms traditional heuristic rules (FIFO, SPT), meta-heuristic algorithms (GA), and simplified reinforcement learning methods (PPO-MLP). Specifically, in large-scale problems, our method reduces the makespan by over 9% compared to the PPO-MLP baseline, and the algorithm runs more than 30 times faster than GA. This highlights its superior performance and scalability. 
This study provides an effective solution for intelligent scheduling in automated chemical laboratory workflows and holds significant theoretical and practical value for advancing the intelligentization of experimental sciences, including shale oil and gas research. Full article
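As a toy illustration of the heuristic baselines this abstract benchmarks against (FIFO, SPT), the sketch below compares total flow time on a single machine. The job durations are made up, and the real FJSP setting involves multiple machines, flexible routing, and resting-time constraints:

```python
def total_flow_time(durations):
    """Sum of job completion times when jobs run back-to-back
    in the given order on one machine."""
    t, total = 0, 0
    for d in durations:
        t += d          # this job finishes at time t
        total += t      # accumulate its completion time
    return total

jobs = [7, 2, 5, 1, 4]                    # processing times in arrival (FIFO) order
fifo_cost = total_flow_time(jobs)         # first-in, first-out
spt_cost = total_flow_time(sorted(jobs))  # shortest processing time first
```

SPT provably minimizes total flow time on a single machine, which is why it is a standard dispatching baseline; the paper's HGNN-PPO policy instead targets makespan in the far richer flexible-job-shop setting.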

24 pages, 873 KB  
Article
Multi-Scale Digital Twin Framework with Physics-Informed Neural Networks for Real-Time Optimization and Predictive Control of Amine-Based Carbon Capture: Development, Experimental Validation, and Techno-Economic Assessment
by Mansour Almuwallad
Processes 2026, 14(3), 462; https://doi.org/10.3390/pr14030462 - 28 Jan 2026
Abstract
Carbon capture and storage (CCS) is essential for achieving net-zero emissions, yet amine-based capture systems face significant challenges including high energy penalties (20–30% of power plant output) and operational costs ($50–120/tonne CO2). This study develops and validates a novel multi-scale Digital Twin (DT) framework integrating Physics-Informed Neural Networks (PINNs) to address these challenges through real-time optimization. The framework combines molecular dynamics, process simulation, computational fluid dynamics, and deep learning to enable real-time predictive control. A key innovation is the sequential training algorithm with domain decomposition, specifically designed to handle the nonlinear transport equations governing CO2 absorption with enhanced convergence properties. The algorithm achieves prediction errors below 1% for key process variables (R2 > 0.98) when validated against CFD simulations across 500 test cases. Experimental validation against pilot-scale absorber data (12 m packing, 30 wt% MEA) confirms good agreement with measured profiles, including temperature (RMSE = 1.2 K), CO2 loading (RMSE = 0.015 mol/mol), and capture efficiency (RMSE = 0.6%). The trained surrogate enables computational speedups of up to four orders of magnitude, supporting real-time inference with response times below 100 ms suitable for closed-loop control. Under the conditions studied, the framework demonstrates reboiler duty reductions of 18.5% and operational cost reductions of approximately 31%. Sensitivity analysis identifies liquid-to-gas ratio and MEA concentration as the most influential parameters, with mechanistic explanations linking these to mass transfer enhancement and reaction kinetics. Techno-economic assessment indicates favorable investment metrics, though results depend on site-specific factors. 
The framework architecture is designed for extensibility to alternative solvent systems, with future work planned for industrial-scale validation and uncertainty quantification through Bayesian approaches. Full article
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)
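The physics-informed part of a PINN loss can be sketched on a drastically simplified 1-D absorption model, dC/dz = -kC. The rate constant, grid, and function name below are illustrative; the paper's actual model couples nonlinear transport and reaction terms, and a real PINN evaluates the residual via automatic differentiation at collocation points rather than finite differences:

```python
import numpy as np

k = 0.35                                  # illustrative absorption rate constant
z = np.linspace(0.0, 12.0, 241)           # 12 m of packing, as in the pilot case

def physics_residual(C, z, k):
    """Residual of dC/dz + k*C = 0. A PINN adds the squared residual
    at collocation points to the data-fit loss and drives it to zero."""
    dCdz = np.gradient(C, z)              # second-order finite differences
    return dCdz + k * C

C_exact = np.exp(-k * z)                  # analytic solution with C(0) = 1
res = physics_residual(C_exact, z, k)
mean_abs_res = np.mean(np.abs(res))       # near zero: profile obeys the physics
```

A candidate profile that violates the governing equation (say, decaying with the wrong rate) produces a large residual, so the physics term penalizes it even where no training data exist.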

39 pages, 2204 KB  
Review
Breeding Smarter: Artificial Intelligence and Machine Learning Tools in Modern Breeding—A Review
by Ana Luísa Garcia-Oliveira, Sangam L. Dwivedi, Subhash Chander, Charles Nelimor, Diaa Abd El Moneim and Rodomiro Octavio Ortiz
Agronomy 2026, 16(1), 137; https://doi.org/10.3390/agronomy16010137 - 5 Jan 2026
Abstract
Climate challenges, along with a projected global population increase of 2 billion by 2080, are intensifying pressures on agricultural systems, leading to biodiversity loss, land-use constraints, declining soil fertility, and changes in water cycles, while crop yields struggle to meet the rising food demand. These challenges, coupled with evolving legislation and rapid technological advancements, require innovative, sustainable agricultural solutions. By reshaping farmers’ daily operations, real-time data acquisition and predictive models can support informed decision-making. In this context, smart farming (SF) applied to plant breeding can improve efficiency by reducing inputs and increasing outputs through the adoption of digital and data-driven technologies. Examples include investment in common ontologies and metadata standards for phenotypes and environments, standardization of HTP protocols, integration of prediction outputs into breeding databases and selection workflows, and the building of multi-partner field networks that collect diverse envirotypes. This review outlines how artificial intelligence (AI) and machine learning (ML) can be integrated into modern plant breeding methodologies, including genomic selection (GS) and genetic algorithms (GAs), to accelerate the development of climate-resilient and sustainably performing crop varieties. While many reviews address smart farming or smart breeding independently, herein these domains are bridged to provide a clear strategic landscape for enhancing breeding efficiency. Full article
(This article belongs to the Collection AI, Sensors and Robotics for Smart Agriculture)
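Genomic selection, one of the methodologies the review covers, can be sketched as ridge regression from marker genotypes to phenotype (an RR-BLUP-style model). All data below are simulated and the dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
n_train, n_cand, n_markers = 200, 50, 80
# Genotypes coded as 0/1/2 minor-allele counts
X = rng.integers(0, 3, size=(n_train + n_cand, n_markers)).astype(float)
true_effects = rng.normal(0.0, 0.5, size=n_markers)
g = X @ true_effects                                   # true breeding values
y = g[:n_train] + rng.normal(0.0, 1.0, size=n_train)   # phenotypes with noise

# Closed-form ridge solution: beta = (X'X + lam*I)^-1 X'y
Xt = X[:n_train]
lam = 1.0
beta = np.linalg.solve(Xt.T @ Xt + lam * np.eye(n_markers), Xt.T @ y)

# Rank unphenotyped candidates by predicted genetic merit
pred = X[n_train:] @ beta
accuracy = np.corrcoef(pred, g[n_train:])[0, 1]
```

In GS terms, the correlation between predicted and true breeding values of unphenotyped candidates is the prediction "accuracy" breeders use to decide whether genomic predictions can substitute for field phenotyping in early selection cycles.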
