Search Results (10,351)

Search Parameters:
Keywords = generational gap

30 pages, 3807 KB  
Review
Flapping Foil-Based Propulsion and Power Generation: A Comprehensive Review
by Prabal Kandel, Jiadong Wang and Jian Deng
Biomimetics 2026, 11(2), 86; https://doi.org/10.3390/biomimetics11020086 (registering DOI) - 25 Jan 2026
Abstract
This review synthesizes the state of the art in flapping foil technology and bridges the distinct engineering domains of bio-inspired propulsion and power generation via flow energy harvesting. This review is motivated by the observation that propulsion and power-generation studies are frequently presented separately, even though they share common unsteady vortex dynamics. Accordingly, we adopt a unified unsteady-aerodynamic perspective to relate propulsion and energy-extraction regimes within a common framework and to clarify their operational duality. Within this unified framework, the feathering parameter provides a theoretical delimiter between momentum transfer and kinetic energy extraction. A critical analysis of experimental foundations demonstrates that while passive structural flexibility enhances propulsive thrust via favorable wake interactions, synchronization mismatches between deformation and peak hydrodynamic loading constrain its benefits in power generation. This review extends the analysis to complex and non-homogeneous environments and identifies that density stratification fundamentally alters the hydrodynamic performance. Specifically, resonant interactions with the natural Brunt–Väisälä frequency of the fluid shift the optimal kinematic regimes. The present study also surveys computational methodologies and highlights a paradigm shift from traditional parametric sweeps to high-fidelity three-dimensional (3D) Large-Eddy Simulations (LESs) and Deep Reinforcement Learning (DRL) to resolve finite-span vortex interconnectivities. Finally, this review outlines the critical pathways for future research. To bridge the gap between computational idealization and physical reality, the findings suggest that future systems prioritize tunable stiffness mechanisms, multi-phase environmental modeling, and artificial intelligence (AI)-driven digital twin frameworks for real-time adaptation. Full article
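The feathering parameter mentioned above is commonly defined as the pitch amplitude divided by the heave-induced flow angle, with values below one typically associated with thrust production and values above one with energy extraction. A minimal sketch under that common definition (the function name and all kinematic values are illustrative, not taken from the review):

```python
import math

def feathering_parameter(theta0_deg, h0, freq_hz, u_inf):
    """Common definition: chi = theta0 / arctan(2*pi*f*h0 / U_inf).

    theta0_deg : pitch amplitude (degrees)
    h0         : heave amplitude (m)
    freq_hz    : flapping frequency (Hz)
    u_inf      : free-stream speed (m/s)
    """
    theta0 = math.radians(theta0_deg)
    alpha_heave = math.atan2(2.0 * math.pi * freq_hz * h0, u_inf)
    return theta0 / alpha_heave

# chi < 1 is typically associated with thrust production (propulsion),
# chi > 1 with net kinetic energy extraction (turbine mode).
for theta0 in (20.0, 60.0, 85.0):  # illustrative pitch amplitudes
    chi = feathering_parameter(theta0, h0=0.1, freq_hz=1.0, u_inf=1.0)
    regime = "propulsion" if chi < 1.0 else "energy extraction"
    print(f"theta0 = {theta0:4.1f} deg, chi = {chi:.2f} -> {regime}")
```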

49 pages, 25553 KB  
Hypothesis
Synthetic Integration of an FCS into Coronaviruses—Hype or an Unresolved Biorisk? An Integrative Analysis of DNA Repair, Cancer Research, Drug Development, and Escape Mutant Traits
by Siguna Mueller
Life 2026, 16(2), 199; https://doi.org/10.3390/life16020199 (registering DOI) - 25 Jan 2026
Abstract
A 19 nt fragment that spans the SARS-CoV-2 furin cleavage site (FCS) is identical to the reverse complement of a proprietary human DNA repair gene sequence. Rather than interpreting this overlap as evidence of a laboratory event, this article uses it as a theoretical springboard to explore underappreciated biorisk concerns, specifically in the context of cancer research. Although they are RNA viruses, coronaviruses are capable of hijacking host DNA damage response (DDR) pathways, exploiting nuclear functions to enhance replication and evade innate immunity. Under selective pressures (antivirals, DDR antagonists, or large-scale siRNA libraries designed to silence critical host genes), escape mutants may arise with fitness advantages. Parallel observations involving in vivo RNA interference via chimeric viruses lend plausibility to some of the key aspects underlying unappreciated biorisks. The mechanistic insights that incorporate DNA repair mechanisms, CoVs in the nucleus, specifics of viruses in cancer research, anticancer drugs, automated gene silencing experiments, and gene sequence overlaps identify gaps in biorisk policies, even those unaccounted for by the potent “Sequences of Concern” paradigm. Key concerning attributes, including genome multifunctionality, such as NLS/FCS in SARS-CoV-2, antisense sequences, and their combination, are further described in more general terms. The article concludes with recommendations pairing modern technical safeguards with enduring ethical principles. Full article
(This article belongs to the Section Microbiology)
21 pages, 3957 KB  
Article
Integration Optimization and Annual Performance of a Coal-Fired Power System Retrofitted with a Solar Tower
by Junjie Wu, Ximeng Wang, Yun Li, Jiawen Liu and Yu Han
Energies 2026, 19(3), 620; https://doi.org/10.3390/en19030620 (registering DOI) - 25 Jan 2026
Abstract
Solar-aided power generation offers a pathway to reduce the carbon dioxide emissions from existing coal-fired plants. This study addresses the gap in comparing different solar integration modes by conducting a thermo-economic analysis of a 600 MW coal-fired system retrofitted with a solar tower. Four integration modes were designed and rigorously compared, encompassing series and parallel configurations at either the high-exergy reheater or the lower-exergy economizer. A detailed thermodynamic model was developed to simulate the off-design and annual performance of each configuration. The results showed that integration at the primary reheater outperformed the economizer integration. Specifically, the parallel configuration at the primary reheater (Mode II) achieved the highest annual solar-to-electricity efficiency of 18.43% at a thermodynamically optimal heliostat field area of 125,025.6 m². Economic analysis revealed a trade-off, with the minimum levelized cost of energy (LCOE) of −0.00929 USD/kWh for Mode II occurring at the economically optimal area of 321,494 m² due to greater coal and emission savings. Sensitivity analysis across two other locations confirmed that the annual solar-to-electricity efficiency and LCOE are directly influenced by solar resource quality, but the thermodynamically optimal and economically optimal heliostat field areas remain consistent. This work demonstrates that parallel integration with the primary reheater presents a practical and effective configuration, balancing high solar-to-electricity conversion efficiency with favorable economics for hybrid solar–coal power plants. Full article
(This article belongs to the Special Issue Solar Energy Conversion and Storage Technologies)
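As a rough companion to the thermo-economic comparison above, here is a minimal sketch of the textbook levelized-cost-of-energy definition (discounted lifetime cost over discounted lifetime energy); the function and every number are illustrative assumptions rather than the paper's model, and a negative result simply means avoided coal and emission costs exceed the retrofit's costs:

```python
def lcoe(capex, annual_cost, annual_energy_kwh, discount_rate, years):
    """Levelized cost of energy: discounted lifetime cost / discounted lifetime energy.

    annual_cost may be negative when avoided fuel and emission costs exceed
    O&M, which is how a retrofit can report a negative LCOE.
    """
    disc_cost = capex
    disc_energy = 0.0
    for t in range(1, years + 1):
        df = 1.0 / (1.0 + discount_rate) ** t
        disc_cost += annual_cost * df
        disc_energy += annual_energy_kwh * df
    return disc_cost / disc_energy

# Illustrative numbers only (not from the paper): a heliostat field whose
# avoided coal and CO2 costs exceed its O&M gives a slightly negative LCOE.
print(lcoe(capex=50e6, annual_cost=-6e6, annual_energy_kwh=60e6,
           discount_rate=0.08, years=25))
```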

24 pages, 1066 KB  
Article
Is GaN the Enabler of High-Power-Density Converters? An Overview of the Technology, Devices, Circuits, and Applications
by Paul-Catalin Medinceanu, Alexandru Mihai Antonescu and Marius Enachescu
Electronics 2026, 15(3), 510; https://doi.org/10.3390/electronics15030510 (registering DOI) - 25 Jan 2026
Abstract
The growing demand for electric vehicles, renewable energy systems, and portable electronics has led to the widespread adoption of power conversion systems. Although advanced structures like the superjunction MOSFET have prolonged the viability of silicon in power applications, maintaining its dominance through cost efficiency, Si-based technology is ultimately constrained by its intrinsic limitations in critical electric fields. To address these constraints, research into wide bandgap semiconductors aims to minimize system footprint while maximizing efficiency. This study reviews the semiconductor landscape, demonstrating why Gallium Nitride (GaN) has emerged as the most promising technology for next-generation power applications. With a critical electric field of 3.75 MV/cm (12.5× higher than Si), GaN facilitates power devices with lower conduction loss and higher frequency capability when compared to their Si counterparts. Furthermore, this paper surveys the GaN ecosystem, ranging from device modeling and packaging to monolithic ICs and switching converter implementations based on discrete transistors. While the existing literature primarily focuses on discrete devices, this work addresses the critical gap regarding GaN monolithic integration. It synthesizes key challenges and achievements in the design of GaN integrated circuits, providing a comprehensive review that spans semiconductor technology, monolithic circuit architectures, and system-level applications. Reported data demonstrate monolithic stages reaching 30 mΩ and 25 MHz, exceeding Si performance limits. Additionally, the study reports on high-density hybrid implementations, such as a space-grade POL converter achieving 123.3 kW/L with 90.9% efficiency. Full article
(This article belongs to the Section Microelectronics)
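The 12.5× critical-field advantage cited above feeds directly into Baliga's figure of merit, BFOM = εr · μn · Ec³, which scales the conduction-loss benefit available at a given blocking voltage. A minimal sketch using typical textbook material constants (assumed values, not figures from the paper):

```python
# Baliga's figure of merit BFOM = eps_r * mu_n * Ec^3, shown relative to Si.
# Material constants below are typical textbook values, not from the paper.
materials = {
    #        eps_r  mu_n (cm^2/Vs)  Ec (MV/cm)
    "Si":   (11.7,  1350.0,         0.3),
    "SiC":  (9.7,    900.0,         2.5),
    "GaN":  (9.0,   1200.0,         3.75),
}

def bfom(eps_r, mu_n, ec):
    return eps_r * mu_n * ec ** 3

ref = bfom(*materials["Si"])
for name, params in materials.items():
    print(f"{name:3s}: Ec x{params[2] / materials['Si'][2]:4.1f}, "
          f"BFOM x{bfom(*params) / ref:7.1f} vs Si")
```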
35 pages, 3075 KB  
Review
Agentic Artificial Intelligence for Smart Grids: A Comprehensive Review of Autonomous, Safe, and Explainable Control Frameworks
by Mahmoud Kiasari and Hamed Aly
Energies 2026, 19(3), 617; https://doi.org/10.3390/en19030617 (registering DOI) - 25 Jan 2026
Abstract
Agentic artificial intelligence (AI) is emerging as a paradigm for next-generation smart grids, enabling autonomous decision-making, adaptive coordination, and resilient control in complex cyber–physical environments. Unlike traditional AI models, which are typically static predictors or offline optimizers, agentic AI systems perceive grid states, reason about goals, plan multi-step actions, and interact with operators in real time. This review presents the latest advances in agentic AI for power systems, including architectures, multi-agent control strategies, reinforcement learning frameworks, digital twin optimization, and physics-based control approaches. The synthesis draws on recent literature to aggregate techniques that bridge the gap between theoretical development and practical implementation. The main application areas studied were voltage and frequency control, power quality improvement, fault detection and self-healing, coordination of distributed energy resources, electric vehicle aggregation, demand response, and grid restoration. We examine the most effective agentic AI techniques in each domain for achieving operational goals and enhancing system reliability. A systematic evaluation framework is proposed based on criteria such as stability, safety, interpretability, certification readiness, interoperability with grid codes, and field-deployment readiness. This framework is designed to help researchers and practitioners evaluate agentic AI solutions holistically and identify areas in which more research and development are needed. The analysis identifies important opportunities, such as hierarchical architectures for autonomous control, constraint-aware learning paradigms, and explainable supervisory agents, as well as challenges such as formal verification methodologies, benchmark data availability, robustness to uncertainty, and human operator trust. This study aims to provide a common point of reference for scholars and grid operators alike, giving detailed information on design patterns, system architectures, and potential research directions for pursuing the implementation of agentic AI in modern power systems. Full article

24 pages, 741 KB  
Article
Restoration of Distribution Network Power Flow Solutions Considering the Conservatism Impact of the Feasible Region from the Convex Inner Approximation Method
by Zirong Chen, Yonghong Huang, Xingyu Liu, Shijia Zang and Junjun Xu
Energies 2026, 19(3), 609; https://doi.org/10.3390/en19030609 (registering DOI) - 24 Jan 2026
Abstract
Under the “Dual Carbon” strategy, high-penetration integration of distributed generators (DG) into distribution networks has triggered bidirectional power flow and reactive power-voltage violations. This phenomenon undermines the accuracy guarantee of conventional relaxation models (represented by second-order cone programming, SOCP), causing solutions to deviate from the AC power flow feasible region. Notably, ensuring solution feasibility becomes particularly crucial in engineering practice. To address this problem, this paper proposes a collaborative optimization framework integrating convex inner approximation (CIA) theory and a solution recovery algorithm. First, a convex inner-approximation model of the system is constructed using CIA, which strictly enforces ACPF constraints while preserving the computational efficiency of convex optimization. Second, aiming at the conservatism drawback introduced by the CIA method, an admissible region correction strategy based on Stochastic Gradient Descent is designed to narrow the duality gap of the solution. Furthermore, a multi-objective optimization framework is established, incorporating voltage security, operational economy, and renewable energy accommodation rate. Finally, simulations on the IEEE 33/69/118-bus systems demonstrate that the proposed method outperforms the traditional SOCP approach in the 24 h sequential optimization, reducing voltage deviation by 22.6%, power loss by 24.7%, and solution time by 45.4%. Compared with the CIA method, it improves the DG utilization rate by 30.5%. The proposed method exhibits superior generality compared to conventional approaches. Within the network penetration limit (approximately 60%), it mitigates the conservatism of DG power output, thereby effectively promoting the utilization of renewable energy. Full article
27 pages, 2154 KB  
Review
A Review of Pavement Damping Characteristics for Mitigating Tire-Pavement Noise: Material Composition and Underlying Mechanisms
by Maoyi Liu, Wei Duan, Ruikun Dong and Mutahar Al-Ammari
Materials 2026, 19(3), 476; https://doi.org/10.3390/ma19030476 (registering DOI) - 24 Jan 2026
Abstract
The mitigation of traffic noise is essential for the development of sustainable and livable urban environments, a goal that is directly contingent on addressing tire-pavement interaction noise (TPIN) as the dominant acoustic pollutant at medium to high vehicle speeds. This comprehensive review addresses a critical gap in the literature by systematically analyzing the damping properties of pavement systems through a unified, multi-scale framework—from the molecular-scale viscoelasticity of asphalt binders to the composite performance of asphalt mixtures. The analysis begins by synthesizing state-of-the-art testing and characterization methodologies, which establish a clear connection between macroscopic damping performance and the underlying viscoelastic mechanisms coupled with the microscopic morphology of the binders. Subsequently, the review critically assesses the influence of critical factors, such as polymer modifiers including rubber and Styrene-Butadiene-Styrene (SBS), temperature, and loading frequency. This examination elucidates how these variables govern molecular mobility and relaxation processes to ultimately determine damping efficacy. A central and synthesizing conclusion emphasizes the paramount importance of the asphalt binder’s properties, which serve as the primary determinant of the composite mixture’s overall acoustic performance. By delineating this structure-property-performance relationship across different scales, the review consolidates a foundational scientific framework to guide the rational design and informed material selection for next-generation asphalt pavements. The insights presented not only advance the fundamental understanding of damping mechanisms in pavement materials but also provide actionable strategies for creating quieter and more sustainable transportation infrastructures. Full article
(This article belongs to the Section Construction and Building Materials)

22 pages, 3681 KB  
Article
The Pelagic Laser Tomographer for the Study of Suspended Particulates
by M. Dale Stokes, David R. Nadeau and James J. Leichter
J. Mar. Sci. Eng. 2026, 14(3), 247; https://doi.org/10.3390/jmse14030247 (registering DOI) - 24 Jan 2026
Abstract
An ongoing challenge in pelagic oceanography and limnology is to quantify the distribution of suspended particles and particle aggregates with sufficient temporal and spatial fidelity to understand their dynamics. These particles include biotic (mesoplankton, organic fragments, fecal pellets, etc.) and abiotic (dusts, precipitates, sediments and flocs, anthropogenic materials, etc.) matter and their aggregates (i.e., marine snow), which form a large part of the total particulate matter > 200 μm in size in the ocean. The transport of organic material from surface waters to the deep-sea floor is of particular interest, as it is recognized as a key factor controlling the global carbon cycle and hence a critical process influencing the sequestration of carbon dioxide from the atmosphere. Here we describe the development of an oceanographic instrument, the Pelagic Laser Tomographer (PLT), that uses high-resolution optical technology, coupled with post-processing analysis, to scan the water column and detect and quantify 3D distributions of small particles. Existing optical instruments typically trade sampling volume for spatial resolution or require large, complex platforms. The PLT addresses this gap by combining high-resolution laser-sheet imaging with large effective sampling volumes in a compact, deployable system. The PLT generates spatial distributions of small particles (~100 µm and larger) across large water volumes (on the order of 100–1000 m³) during a typical deployment and allows measurement of particle patchiness at spatial scales down to less than 1 mm. The instrument’s small size (6 kg), high resolution (~100 µm in each 3000 cm² tomographic image slice), and analysis software provide a tool for pelagic studies that have typically been limited by high cost, data storage, resolution, and mechanical constraints, which usually necessitate bulky instrumentation, infrequent deployments, and a large research vessel. Full article
(This article belongs to the Section Ocean Engineering)

32 pages, 1245 KB  
Systematic Review
A Systematic Review of Artificial Intelligence in Higher Education Institutions (HEIs): Functionalities, Challenges, and Best Practices
by Neema Florence Vincent Mosha, Josiline Chigwada, Gaelle Fitong Ketchiwou and Patrick Ngulube
Educ. Sci. 2026, 16(2), 185; https://doi.org/10.3390/educsci16020185 (registering DOI) - 24 Jan 2026
Abstract
The rapid advancement of Artificial Intelligence (AI) technologies has significantly transformed teaching, learning, and research practices within higher education institutions (HEIs). Although a growing body of literature has examined the application of AI in higher education, existing studies remain fragmented, often focusing on isolated tools or outcomes, with limited synthesis of best practices, core functionalities, and implementation challenges across diverse contexts. To address this gap, this systematic review aims to comprehensively examine the best practices, functionalities, and challenges associated with the integration of AI in HEIs. A comprehensive literature search was conducted across major academic databases, including Google Scholar, Scopus, Taylor & Francis, and Web of Science, resulting in the inclusion of 35 peer-reviewed studies published between 2014 and 2024. The findings suggest that effective AI integration is supported by best practices, including promoting student engagement and interaction, providing language support, facilitating collaborative projects, and fostering creativity and idea generation. Key AI functionalities identified include adaptive learning systems that personalize educational experiences, predictive analytics for identifying at-risk students, and automated grading tools that improve assessment efficiency and accuracy. Despite these benefits, significant challenges persist, including limited knowledge and skills, ethical concerns, inadequate infrastructure, insufficient institutional and management support, data privacy risks, inequitable access to technology, and the absence of standardized evaluation metrics. This review provides evidence-based insights to inform educators, institutional leaders, and policymakers on strategies for leveraging AI to enhance teaching, learning, and research in higher education. Full article
(This article belongs to the Section Higher Education)

11 pages, 253 KB  
Review
Real-World Cardiovascular Research Using the German IQVIA Disease Analyzer Database: Methods, Evidence, and Limitations (2000–2025)
by Karel Kostev, Marcel Konrad and Mark Luedde
J. Cardiovasc. Dev. Dis. 2026, 13(2), 61; https://doi.org/10.3390/jcdd13020061 (registering DOI) - 24 Jan 2026
Abstract
Cardiovascular diseases (CVDs) remain the leading cause of morbidity and mortality worldwide. This increases the demand for real-world evidence to complement findings from randomized controlled trials. The German IQVIA Disease Analyzer (DA) database, which is populated with anonymized electronic medical records from general practitioners and specialists, has become an increasingly valuable source for cardiovascular research. Over the past two decades, and especially between 2020 and 2025, numerous epidemiological studies have used this database to explore associations between cardiovascular risk factors, comorbidities, therapeutic patterns, and cardiovascular outcomes in large, broadly representative outpatient populations. This review synthesizes evidence from 13 selected DA-based studies examining atrial fibrillation, heart failure, cardiometabolic disease, lipid management, non-alcoholic fatty liver disease (NAFLD)–related cardiovascular risks, cerebrovascular complications, COVID-19-associated vascular events, and modifiable behavioral and anthropometric factors. These studies were selected based on predefined criteria including cardiovascular relevance, methodological rigor, large sample size, and representativeness of key disease domains across the 2000–2025 period. Eligible studies were identified through targeted searches of peer-reviewed literature using the German IQVIA Disease Analyzer database and were selected to reflect major cardiovascular disease domains, risk factors, and therapeutic areas. Across disease domains, the reviewed studies consistently demonstrate the DA database’s capacity to identify reproducible associations between cardiometabolic risk factors, comorbidities, and cardiovascular outcomes in routine outpatient care. While causal inference is not possible, the database enables the identification of clinically meaningful associations that inform hypothesis generation, help quantify disease burden, and highlight gaps in prevention or treatment. The database’s strengths include large sample sizes (often exceeding 100,000 patients), long follow-up periods, and high external validity, while limitations relate to coding accuracy, residual confounding, and the absence of detailed clinical measures. Collectively, the evidence underscores the importance of the DA database as a crucial platform for real-world cardiovascular research. Full article
(This article belongs to the Section Basic and Translational Cardiovascular Research)
16 pages, 1660 KB  
Article
Filling the Gaps Between the Shown and the Known—On a Hybrid AI Model Based on ACT-R to Approach Mallard Behavior
by Daniel Einarson
AI 2026, 7(2), 38; https://doi.org/10.3390/ai7020038 - 23 Jan 2026
Abstract
Today, machine learning (ML) is generally considered a potent and efficient tool for addressing problems in diverse domains, including image processing and event prediction on a timescale. ML represents complex relations between features, and these mappings may be applied in simulations of time-dependent events, such as the behavior of animals. Still, ML inherently depends strongly on extensive and consistent datasets, a fact that reveals both its benefits and its drawbacks. Insufficient or skewed data can limit the ability of ML algorithms to accurately predict or generalize possible states. To overcome this limitation, this work proposes an integrated hybrid approach that combines machine learning with methods from cognitive science, here inspired especially by the ACT-R model, to approach cases of missing or unbalanced data. By incorporating cognitive processes such as memory, perception, and attention, the model accounts for the internal mechanisms of decision-making and environmental interaction where traditional ML methods fall short. This approach is particularly useful in representing states that are not directly observable or are underrepresented in the data, such as rare behavioral responses or adaptive strategies in animals. Experimental results show that the combination of machine learning for data-driven analysis and cognitive ‘rule-based’ frameworks for filling in gaps provides a more comprehensive model of animal behavior. The findings suggest that this hybrid approach to simulation models can offer a more robust and consistent way to study complex, real-world phenomena, especially when data is inherently incomplete or unbalanced. Full article
23 pages, 6711 KB  
Article
A Numerical Modeling Framework for Assessing Hydrodynamic Risks to Support Sustainable Port Development: Application to Extreme Storm and Tide Scenarios Within Takoradi Port Master Plan
by Dianguang Ma and Yu Duan
Sustainability 2026, 18(3), 1177; https://doi.org/10.3390/su18031177 - 23 Jan 2026
Abstract
Sustainable port development in coastal regions necessitates robust frameworks for quantifying hydrodynamic risks under climate change. To bridge the gap between generic guidelines and site-specific resilience planning, this study proposes and applies a numerical modeling-based risk assessment framework. Within the context of the Port Master Plan, the framework is applied to the critical case of Takoradi Port in West Africa, employing a two-dimensional hydrodynamic model to simulate current fields under three regimes (“Normal”, “Stronger”, and “Estimated Extreme” scenarios) for the first time. The model quantifies key hydrodynamic parameters such as current velocity and direction in critical zones (the approach channel, port basin, and berths), providing actionable data for the Port Master Plan. Key new findings include the following: (1) Northeastward surface currents, driven by the southwest monsoon, dominate the study area; breakwater sheltering creates a prominent circulation zone north of the port entrance. (2) Under extreme conditions, the approach channel exhibits amplified currents (0.3–0.7 m/s), while inner port areas maintain stable conditions (<0.1 m/s). (3) A stark spatial differentiation in design current velocities for 2- to 100-year return periods, where the 100-year extreme current velocity in the external approach channel (0.87 m/s at P1) exceeds the range in the internal zones (0.01–0.15 m/s) by approximately 5 to 86 times. The study validates the framework’s utility in assessing hydrodynamic risks. By integrating numerical simulation with risk assessment, this work provides a scalable methodological contribution that can be adapted to other port environments, directly supporting the global pursuit of sustainable and resilient ports. Full article
(This article belongs to the Section Sustainable Oceans)
25 pages, 5757 KB  
Article
Heatmap-Assisted Reinforcement Learning Model for Solving Larger-Scale TSPs
by Guanqi Liu and Donghong Xu
Electronics 2026, 15(3), 501; https://doi.org/10.3390/electronics15030501 - 23 Jan 2026
Abstract
Deep reinforcement learning (DRL)-based algorithms for solving the Traveling Salesman Problem (TSP) have demonstrated competitive potential compared to traditional heuristic algorithms on small-scale TSP instances. However, as the problem size increases, the NP-hard nature of the TSP leads to exponential growth in the combinatorial search space, state–action space explosion, and sharply increased sample complexity, which together cause significant performance degradation for most existing DRL-based models when directly applied to large-scale instances. This research proposes a two-stage reinforcement learning framework, termed GCRL-TSP (Graph Convolutional Reinforcement Learning for the TSP), which consists of a heatmap generation stage based on a graph convolutional neural network, and a heatmap-assisted Proximal Policy Optimization (PPO) training stage, where the generated heatmaps are used as auxiliary guidance for policy optimization. First, we design a divide-and-conquer heatmap generation strategy: a graph convolutional network infers m-node sub-heatmaps, which are then merged into a global edge-probability heatmap. Second, we integrate the heatmap into PPO by augmenting the state representation and restricting the action space toward high-probability edges, improving training efficiency. On standard instances with 200/500/1000 nodes, GCRL-TSP achieves a Gap% of 4.81/4.36/13.20 (relative to Concorde) with runtimes of 36 s/1.12 min/4.65 min. Experimental results show that GCRL-TSP achieves more than twice the solving speed of other TSP algorithms while obtaining comparable solution quality on TSPs ranging from 200 to 1000 nodes. Full article
(This article belongs to the Section Artificial Intelligence)
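The heatmap-guided restriction of the PPO action space described above can be pictured as masking the policy's next-city logits so that only unvisited cities on high-probability heatmap edges remain sampleable. The sketch below is a generic illustration of that idea, not the GCRL-TSP implementation; the function name and the top_k cutoff are hypothetical:

```python
import numpy as np

def masked_next_city_probs(logits, heatmap_row, visited, top_k=10):
    """Restrict the policy's next-city distribution to the top-k heatmap edges.

    logits      : (n,) raw policy scores for each candidate city
    heatmap_row : (n,) edge probabilities from the current city (GCN output)
    visited     : boolean (n,) mask of already-visited cities
    """
    n = logits.shape[0]
    mask = np.full(n, -np.inf)
    # candidate set: unvisited cities with the largest heatmap probabilities
    candidates = np.argsort(heatmap_row)[::-1]
    kept = [c for c in candidates if not visited[c]][:top_k]
    mask[kept] = 0.0
    masked = logits + mask
    probs = np.exp(masked - masked.max())
    return probs / probs.sum()

rng = np.random.default_rng(0)
n = 50
probs = masked_next_city_probs(rng.normal(size=n), rng.random(n),
                               visited=np.zeros(n, dtype=bool))
print((probs > 0).sum())  # only the top-k heatmap edges stay reachable
```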
24 pages, 805 KB  
Review
Mathematics Teachers’ Pedagogical Content Knowledge in Strengthening Conceptual Understanding in Students with Learning Disabilities: A Practice-Based Conceptual Synthesis
by Friggita Johnson and Finita G. Roy
Educ. Sci. 2026, 16(2), 176; https://doi.org/10.3390/educsci16020176 - 23 Jan 2026
Abstract
Students with learning disabilities (LD) often struggle to develop deep, transferable conceptual understanding in mathematics due to cognitive and processing challenges, underscoring the relevance of instruction grounded in strong teacher pedagogical content knowledge (PCK). This issue is critical given widening post-pandemic achievement gaps and increased expectations for conceptual understanding in inclusive classrooms. Although many studies document effective mathematics interventions for students with LD, relatively few examine how teachers’ PCK functions in these classrooms. In contrast, general education research highlights the importance of PCK for conceptual learning. This manuscript bridges these studies by examining how insights from broader PCK research may inform instruction for students with LD. This manuscript presents a practice-based conceptual synthesis of research on mathematics teachers’ PCK, integrating findings from special education and mathematics intervention literature with classroom vignettes and practitioner examples. The synthesis highlights how core PCK components—content knowledge, understanding of student thinking and misconceptions, and instructional strategies—may support early conceptual understanding in students with LD, emphasizing multiple representations, error analysis, and routines that promote generalization through distributed practice. Implications for practice, professional development, and future research are discussed, offering practice-informed pathways to support inclusive mathematics instruction for students with LD. Full article
16 pages, 3865 KB  
Article
Data-Augmented Deep Learning for Downhole Depth Sensing and Validation
by Si-Yu Xiao, Xin-Di Zhao, Tian-Hao Mao, Yi-Wei Wang, Yu-Qiao Chen, Hong-Yun Zhang, Jian Wang, Jun-Jie Wang, Shuang Liu, Tu-Pei Chen and Yang Liu
Sensors 2026, 26(3), 775; https://doi.org/10.3390/s26030775 (registering DOI) - 23 Jan 2026
Abstract
Accurate downhole depth measurement is essential for oil and gas well operations, directly influencing reservoir contact, production efficiency, and operational safety. Collar correlation using a casing collar locator (CCL) is fundamental for precise depth calibration. While neural networks have achieved significant progress in collar recognition, preprocessing methods for such applications remain underdeveloped. Moreover, the limited availability of real well data poses substantial challenges for training neural network models that require extensive datasets. This paper presents a system integrated into a downhole toolstring for CCL log acquisition to facilitate dataset construction. Comprehensive preprocessing methods for data augmentation are proposed, and their effectiveness is evaluated using baseline neural network models. Through systematic experimentation across diverse configurations, the contribution of each augmentation method is analyzed. Results demonstrate that standardization, label distribution smoothing (LDS), and random cropping are fundamental prerequisites for model training, while label smoothing regularization (LSR), time scaling, and multiple sampling significantly enhance model generalization capabilities. Incorporating the proposed augmentation methods into the two baseline models results in maximum F1 score improvements of 0.027 and 0.024 for the TAN and MAN models, respectively. Furthermore, applying these techniques yields F1 score gains of up to 0.045 for the TAN model and 0.057 for the MAN model compared to prior studies. Performance evaluation on real CCL waveforms confirms the effectiveness and practical applicability of our approach. This work addresses the existing gaps in data augmentation methodologies for training casing collar recognition models under CCL data-limited conditions, and provides a technical foundation for the future automation of downhole operations. Full article
(This article belongs to the Special Issue Intelligent Sensors and Signal Processing in Industry)
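Several of the augmentation steps named above (standardization, random cropping, time scaling) are standard one-dimensional signal transforms. A minimal sketch of generic versions applied to a synthetic waveform, not the authors' pipeline:

```python
import numpy as np

def standardize(x):
    """Zero-mean, unit-variance scaling of a 1D waveform."""
    return (x - x.mean()) / (x.std() + 1e-8)

def random_crop(x, crop_len, rng):
    """Take a random contiguous window, a common 1D augmentation."""
    start = rng.integers(0, len(x) - crop_len + 1)
    return x[start:start + crop_len]

def time_scale(x, factor):
    """Resample the waveform, e.g. to mimic different logging speeds."""
    idx = np.linspace(0, len(x) - 1, int(len(x) * factor))
    return np.interp(idx, np.arange(len(x)), x)

rng = np.random.default_rng(42)
waveform = rng.normal(size=4096)  # stand-in for a recorded CCL trace
sample = standardize(time_scale(random_crop(waveform, 2048, rng), 1.1))
print(sample.shape, round(float(sample.mean()), 6))
```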