Search Results (852)

Search Parameters:
Keywords = twin fields

35 pages, 3075 KB  
Review
Agentic Artificial Intelligence for Smart Grids: A Comprehensive Review of Autonomous, Safe, and Explainable Control Frameworks
by Mahmoud Kiasari and Hamed Aly
Energies 2026, 19(3), 617; https://doi.org/10.3390/en19030617 - 25 Jan 2026
Abstract
Agentic artificial intelligence (AI) is emerging as a paradigm for next-generation smart grids, enabling autonomous decision-making, adaptive coordination, and resilient control in complex cyber–physical environments. Unlike traditional AI models, which are typically static predictors or offline optimizers, agentic AI systems perceive grid states, reason about goals, plan multi-step actions, and interact with operators in real time. This review presents the latest advances in agentic AI for power systems, including architectures, multi-agent control strategies, reinforcement learning frameworks, digital twin optimization, and physics-based control approaches. The synthesis is based on new literature sources to provide an aggregate of techniques that fill the gap between theoretical development and practical implementation. The main application areas studied were voltage and frequency control, power quality improvement, fault detection and self-healing, coordination of distributed energy resources, electric vehicle aggregation, demand response, and grid restoration. We examine the most effective agentic AI techniques in each domain for achieving operational goals and enhancing system reliability. A systematic evaluation is proposed based on criteria such as stability, safety, interpretability, certification readiness, and interoperability for grid codes, as well as being ready to deploy in the field. This framework is designed to help researchers and practitioners evaluate agentic AI solutions holistically and identify areas in which more research and development are needed. The analysis identifies important opportunities, such as hierarchical architectures of autonomous control, constraint-aware learning paradigms, and explainable supervisory agents, as well as challenges such as developing methodologies for formal verification, the availability of benchmark data, robustness to uncertainty, and building human operator trust. 
This study aims to provide a common point of reference for scholars and grid operators alike, giving detailed information on design patterns, system architectures, and potential research directions for pursuing the implementation of agentic AI in modern power systems. Full article
33 pages, 18247 KB  
Article
Learning Debris Flow Dynamics with a Deep Learning Fourier Neural Operator: Application to the Rendinara–Morino Area
by Mauricio Secchi, Antonio Pasculli, Massimo Mangifesta and Nicola Sciarra
Geosciences 2026, 16(2), 55; https://doi.org/10.3390/geosciences16020055 - 24 Jan 2026
Abstract
Accurate numerical simulation of debris flows is essential for hazard assessment and early-warning design, yet high-fidelity solvers remain computationally expensive, especially when large ensembles must be explored under epistemic uncertainty in rheology, initial conditions, and topography. At the same time, field observations are typically sparse and heterogeneous, limiting purely data-driven approaches. In this work, we develop a deep-learning Fourier Neural Operator (FNO) as a fast, physics-consistent surrogate for one-dimensional shallow-water debris-flow simulations and demonstrate its application to the Rendinara–Morino system in central Italy. A validated finite-volume solver, equipped with HLLC and Rusanov fluxes, hydrostatic reconstruction, Voellmy-type basal friction, and robust wet–dry treatment, is used to generate a large ensemble of synthetic simulations over longitudinal profiles representative of the study area. The parameter space of bulk density, initial flow thickness, and Voellmy friction coefficients is systematically sampled, and the resulting space–time fields of flow depth and velocity form the training dataset. A two-dimensional FNO in the (x,t) domain is trained to learn the full solution operator, mapping topography, rheological parameters, and initial conditions directly to h(x,t) and u(x,t), thereby acting as a site-specific digital twin of the numerical solver. On a held-out validation set, the surrogate achieves mean relative L2 errors of about 6–7% for flow depth and 10–15% for velocity, and it generalizes to an unseen longitudinal profile with comparable accuracy. We further show that targeted reweighting of the training objective significantly improves the prediction of the velocity field without degrading depth accuracy, reducing the velocity error on the unseen profile by more than a factor of two. Finally, the FNO provides speed-ups of approximately 36× with respect to the reference solver at inference time. 
These results demonstrate that combining physics-based synthetic data with operator-learning architectures enables the construction of accurate, computationally efficient, and site-adapted surrogates for debris-flow hazard analysis in data-scarce environments. Full article
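As an illustrative aside, the spectral-convolution step at the heart of a Fourier Neural Operator, together with the relative L2 metric the abstract quotes for validation, can be sketched in a few lines of NumPy. This is a minimal one-dimensional sketch with hypothetical identity weights, not the authors' two-dimensional (x,t) implementation:

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """Core of one Fourier layer: FFT, keep the lowest n_modes,
    multiply by (learned) complex weights, inverse FFT."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights[:n_modes]
    return np.fft.irfft(out_hat, n=u.size)

def relative_l2(pred, ref):
    """Relative L2 error, the validation metric quoted in the abstract."""
    return np.linalg.norm(pred - ref) / np.linalg.norm(ref)

# Toy check: identity weights on the retained modes reproduce a band-limited signal.
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.5 * np.cos(3.0 * x)   # only Fourier modes 1 and 3 present
w = np.ones(8, dtype=complex)           # a trained layer would learn these
u_rec = spectral_conv_1d(u, w, n_modes=8)
```

In a real FNO, several such layers are stacked with pointwise linear maps and nonlinearities in between, and the weights are learned from the simulation ensemble.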
47 pages, 2601 KB  
Review
A Review of AI-Driven Engineering Modelling and Optimization: Methodologies, Applications and Future Directions
by Jian-Ping Li, Nereida Polovina and Savas Konur
Algorithms 2026, 19(2), 93; https://doi.org/10.3390/a19020093 - 23 Jan 2026
Abstract
Engineering is undergoing a significant change driven by the integration of artificial intelligence (AI) into engineering optimization in design, analysis, and operational efficiency across numerous disciplines. This review synthesizes the current landscape of AI-driven optimization methodologies and their impacts on engineering applications. In the literature, several frameworks for AI-based engineering optimization have been identified: (1) machine learning models are trained as objective and constraint functions for optimization problems; (2) machine learning techniques are used to improve the efficiency of optimization algorithms; (3) neural networks approximate complex simulation models such as finite element analysis (FEA) and computational fluid dynamics (CFD), making it possible to optimize complex engineering systems; and (4) machine learning predicts design parameters/initial solutions that are subsequently optimized. Fundamental AI technologies, such as artificial neural networks and deep learning, are examined in this paper, along with commonly used AI-assisted optimization strategies. Representative applications of AI-driven engineering optimization are surveyed across multiple fields, including mechanical and aerospace engineering, civil engineering, electrical and computer engineering, chemical and materials engineering, energy, and management. These studies demonstrate how AI enables significant improvements in computational modelling, predictive analytics, and generative design while effectively handling complex multi-objective constraints. Despite these advancements, challenges remain in areas such as data quality, model interpretability, and computational cost, particularly in real-time environments. 
Through a systematic analysis of recent case studies and emerging trends, this paper provides a critical assessment of the state of the art and identifies promising research directions, including physics-informed neural networks, digital twins, and human–AI collaborative optimization frameworks. The findings highlight AI’s potential to redefine engineering optimization paradigms, while emphasizing the need for robust, scalable, and ethically aligned implementations. Full article
(This article belongs to the Special Issue AI-Driven Engineering Optimization)
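The surrogate pattern described in frameworks (1) and (3) above, training a cheap model on a handful of expensive evaluations and then optimizing the model instead, can be illustrated with a minimal NumPy sketch. The "expensive" objective and all numbers here are hypothetical stand-ins for a real simulation:

```python
import numpy as np

def expensive_objective(x):
    """Hypothetical stand-in for a costly simulation response (e.g., one FEA run)."""
    return (x - 0.3) ** 2 + 0.01 * np.sin(8.0 * x)

x_train = np.linspace(0.0, 1.0, 12)           # a small design of experiments
y_train = expensive_objective(x_train)        # 12 "expensive" evaluations

coeffs = np.polyfit(x_train, y_train, deg=3)  # train a cheap cubic surrogate
surrogate = np.poly1d(coeffs)

x_dense = np.linspace(0.0, 1.0, 2001)         # optimize the surrogate, not the model
x_best = x_dense[np.argmin(surrogate(x_dense))]
```

The same idea scales to the reviewed applications by swapping the polynomial for a neural network and the grid search for a proper optimizer.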
21 pages, 11053 KB  
Article
Characteristics of Stratum Disturbance During the Construction of Dual-Line Shield Tunnels with Consideration of Soil Spatial Variability
by Yuan Lyu, Yong Liu, Chaoqun Huang, Zehang Wang, Dong Huang, Jing Peng and Xuedong Luo
Appl. Sci. 2026, 16(2), 1132; https://doi.org/10.3390/app16021132 - 22 Jan 2026
Abstract
Soil spatial variability is an inherent feature of natural strata, and random field theory provides an effective framework for quantifying it, aiding accurate deformation prediction. This study focuses on the tunnel section between Kepugongyuan and Gangduhuayuan Stations on Wuhan Metro Line 12. Its novelty lies in analyzing dual-line shield-induced ground response with explicit consideration of multi-layer soil spatial variability. It examines the effects of the coefficient of variation and the horizontal/vertical spatial correlation distances of cohesion, internal friction angle, and elastic modulus—considering multilayer soil variability—on ground disturbance induced by twin-tunnel shield construction. The main findings include the following: (1) In cross-section, the settlement trough transitions from a “W”-shaped double trough to a “V”-shaped single trough as excavation advances, with the settlement center moving toward the midpoint between the tunnels. Longitudinally, soil heaves ahead of the shield and settles behind. (2) Ignoring spatial variability results in underestimated deformations; nearly 80% of stochastic simulations produced larger maximum surface settlements compared to deterministic analysis. (3) Ground loss and shield thrust disturbance are categorized into four zones based on tunnel diameter (D): Disturbance Zone, Secondary Zone, Transition Zone, and Undisturbed Zone. These findings provide practical guidance for predicting ground deformation and managing settlement-related risks in urban dual-line shield projects. Full article
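The random-field machinery behind such studies can be sketched briefly: a one-dimensional NumPy example that samples a lognormal cohesion profile with a prescribed coefficient of variation and vertical correlation distance via Cholesky factorization of an exponential correlation matrix. All parameter values below are hypothetical, chosen for illustration only:

```python
import numpy as np

def correlated_field(z, corr_len, rng):
    """Sample a standard-normal random field on points `z` with exponential
    autocorrelation rho(h) = exp(-|h| / corr_len), via Cholesky factorization."""
    h = np.abs(z[:, None] - z[None, :])                    # pairwise lag distances
    corr = np.exp(-h / corr_len)
    L = np.linalg.cholesky(corr + 1e-10 * np.eye(z.size))  # jitter for stability
    return L @ rng.standard_normal(z.size)

rng = np.random.default_rng(0)
z = np.linspace(0.0, 20.0, 101)               # hypothetical depth profile (m)
cov_c = 0.3                                   # coefficient of variation of cohesion
sigma_ln = np.sqrt(np.log(1.0 + cov_c ** 2))  # lognormal parameters for
mu_ln = np.log(30.0) - 0.5 * sigma_ln ** 2    # mean 30 (e.g., kPa) and COV 0.3
cohesion = np.exp(mu_ln + sigma_ln * correlated_field(z, corr_len=2.0, rng=rng))
```

Feeding many such realizations into a deterministic solver is the Monte Carlo workflow that produces statements like "nearly 80% of stochastic simulations exceeded the deterministic settlement."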
44 pages, 2586 KB  
Review
Cellular Automata and Phase-Field Modeling of Microstructure Evolution in Metal Additive Manufacturing: Recent Advances, Hybrid Frameworks, and Pathways to Predictive Control
by Łukasz Łach
Metals 2026, 16(1), 124; https://doi.org/10.3390/met16010124 - 21 Jan 2026
Abstract
Metal additive manufacturing (AM) generates complex microstructures through extreme thermal gradients and rapid solidification, critically influencing mechanical performance and industrial qualification. This review synthesizes recent advances in cellular automata (CA) and phase-field (PF) modeling to predict grain-scale microstructure evolution during AM. CA methods provide computational efficiency, enabling large-domain simulations and excelling in texture prediction and multi-layer builds. PF approaches deliver superior thermodynamic fidelity for interface dynamics, solute partitioning, and nonequilibrium rapid solidification through CALPHAD coupling. Hybrid CA–PF frameworks strategically balance efficiency and accuracy by allocating PF to solidification fronts and CA to bulk grain competition. Recent algorithmic innovations—discrete event-inspired CA, GPU acceleration, and machine learning—extend scalability while maintaining predictive capability. Validated applications across Ni-based superalloys, Ti-6Al-4V, tool steels, and Al alloys demonstrate robust process–microstructure–property predictions through EBSD and mechanical testing. Persistent challenges include computational scalability for full-scale components, standardized calibration protocols, limited in situ validation, and incomplete multi-physics coupling. Emerging solutions leverage physics-informed machine learning, digital twin architectures, and open-source platforms to enable predictive microstructure control for first-time-right manufacturing in aerospace, biomedical, and energy applications. Full article
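The cellular-automaton idea the review surveys can be conveyed with a deliberately tiny grain-competition sketch: liquid cells are captured by already-solid von Neumann neighbours until the domain solidifies. The rules and parameters here are toy assumptions, far simpler than production CA solidification codes with real capture geometry and thermal coupling:

```python
import numpy as np

def ca_grain_growth(n, n_seeds, steps, rng):
    """Minimal cellular automaton: 0 = liquid, positive integers = grain IDs.
    Each synchronous step, a liquid cell adopts the ID of a randomly chosen
    solid von Neumann neighbour (periodic boundaries)."""
    grid = np.zeros((n, n), dtype=int)
    rows = rng.integers(0, n, n_seeds)
    cols = rng.integers(0, n, n_seeds)
    grid[rows, cols] = np.arange(1, n_seeds + 1)   # nucleation sites
    for _ in range(steps):
        new = grid.copy()
        for i in range(n):
            for j in range(n):
                if grid[i, j] == 0:
                    nbrs = [grid[(i + di) % n, (j + dj) % n]
                            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                    solid = [g for g in nbrs if g > 0]
                    if solid:
                        new[i, j] = solid[rng.integers(0, len(solid))]
        grid = new
    return grid

rng = np.random.default_rng(1)
grid = ca_grain_growth(n=24, n_seeds=5, steps=40, rng=rng)  # fully solidified grain map
```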
45 pages, 2954 KB  
Review
A Review of Fault Diagnosis Methods: From Traditional Machine Learning to Large Language Model Fusion Paradigm
by Qingwei Nie, Junsai Geng and Changchun Liu
Sensors 2026, 26(2), 702; https://doi.org/10.3390/s26020702 - 21 Jan 2026
Abstract
Fault diagnosis is a core technology ensuring the safe and efficient operation of industrial systems. A paradigm shift has been observed wherein traditional signal analysis has been replaced by intelligent, algorithm-driven approaches. In recent years, large language models, digital twins, and knowledge graphs have been introduced. A new stage of intelligent integration has been reached that is characterized by data-driven methods, knowledge guidance, and physical–virtual fusion. In the present paper, the evolutionary context of fault diagnosis technologies was systematically reviewed, with a focus on the theoretical methods and application practices of traditional machine learning, digital twins, knowledge graphs, and large language models. First, the research background, core objectives, and development history of fault diagnosis were described. Second, the principles, industrial applications, and limitations of supervised and unsupervised learning were analyzed. Third, innovative uses were examined involving physical–virtual mapping in digital twins, knowledge modeling in knowledge graphs, and feature learning in large language models. Subsequently, a multi-dimensional comparison framework was constructed to analyze the performance indicators, applicable scenarios, and collaborative potential of different technologies. Finally, the key challenges faced in the current fault diagnosis field were summarized. These included data quality, model generalization, and knowledge reuse. Future directions driven by the fusion of large language models, digital twins, and knowledge graphs were also outlined. A comprehensive technical map was established for fault diagnosis researchers, as well as an up-to-date reference. Theoretical innovation and engineering deployment of intelligent fault diagnosis are intended to be supported. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
48 pages, 1116 KB  
Systematic Review
Cybersecurity and Resilience of Smart Grids: A Review of Threat Landscape, Incidents, and Emerging Solutions
by Bo Nørregaard Jørgensen and Zheng Grace Ma
Appl. Sci. 2026, 16(2), 981; https://doi.org/10.3390/app16020981 - 18 Jan 2026
Abstract
The digital transformation of electric power systems into smart grids has significantly expanded the cybersecurity risk landscape of the energy sector. While advanced sensing, communication, automation, and data-driven control improve efficiency, flexibility, and renewable energy integration, they also introduce complex cyber–physical interdependencies and new vulnerabilities across interconnected technical and organisational domains. This study adopts a scoping review methodology in accordance with PRISMA-ScR to systematically analyse smart grid cybersecurity from an architecture-aware and resilience-oriented perspective. Peer-reviewed scientific literature and authoritative institutional sources are synthesised to examine modern smart grid architectures, key security challenges, major cyberthreats, and documented real-world cyber incidents affecting energy infrastructure up to 2025. The review systematically links architectural characteristics such as field devices, communication networks, software platforms, data pipelines, and externally operated services to specific threat mechanisms and observed attack patterns, illustrating how cyber risk propagates across interconnected grid components. The findings show that cybersecurity challenges in smart grids arise not only from technical vulnerabilities but also from architectural dependencies, software supply chains, operational constraints, and cross-sector coupling. Based on the analysis of historical incidents and emerging research, the study identifies key defensive strategies, including zero-trust architectures, advanced monitoring and anomaly detection, secure software lifecycle management, digital twins for cyber–physical testing, and cyber-resilient grid design. The review concludes that cybersecurity in smart grids should be treated as a systemic and persistent condition, requiring resilience-oriented approaches that prioritise detection, containment, recovery, and safe operation under adverse conditions. 
(This article belongs to the Section Energy Science and Technology)
45 pages, 14932 KB  
Article
An Intelligent Predictive Maintenance Architecture for Substation Automation: Real-World Validation of a Digital Twin and AI Framework of the Badra Oil Field Project
by Sarmad Alabbad and Hüseyin Altınkaya
Electronics 2026, 15(2), 416; https://doi.org/10.3390/electronics15020416 - 17 Jan 2026
Abstract
The increasing complexity of modern electrical substations—driven by renewable integration, advanced automation, and asset aging—necessitates a transition from reactive maintenance toward intelligent, data-driven strategies. Predictive maintenance (PdM), supported by artificial intelligence, enables early fault detection and remaining useful life (RUL) estimation, while Digital Twin (DT) technology provides synchronized cyber–physical representations for situational awareness and risk-free validation of maintenance decisions. This study proposes a five-layer DT-enabled PdM architecture integrating standards-based data acquisition, semantic interoperability (IEC 61850, CIM, and OPC UA Part 17), hybrid AI analytics, and cyber-secure decision support aligned with IEC 62443. The framework is validated using utility-grade operational data from the SS1 substation of the Badra Oil Field, comprising approximately one million multivariate time-stamped measurements and 139 confirmed fault events across transformer, feeder, and environmental monitoring systems. Fault detection is formulated as a binary classification task using event-window alignment to the 1 min SCADA timeline, preserving realistic operational class imbalance. Five supervised learning models—a Random Forest, Gradient Boosting, a Support Vector Machine, a Deep Neural Network, and a stacked ensemble—were benchmarked, with the ensemble embedded within the DT core representing the operational predictive model. Experimental results demonstrate strong performance, achieving an F1-score of 0.98 and an AUC of 0.995. The results confirm that the proposed DT–AI framework provides a scalable, interoperable, and cyber-resilient foundation for deployment-ready predictive maintenance in modern substation automation systems. Full article
(This article belongs to the Section Artificial Intelligence)
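The event-window alignment described above, mapping confirmed fault events onto a 1 min SCADA timeline to produce binary labels while preserving class imbalance, can be sketched in plain Python. The 5 min window length and all timestamps are hypothetical, chosen for illustration:

```python
from datetime import datetime, timedelta

def label_timeline(start, n_minutes, fault_events, window_min=5):
    """Label each 1-min sample 1 if it falls within `window_min` minutes
    before (or at) a confirmed fault event, else 0."""
    labels = [0] * n_minutes
    for event in fault_events:
        end_idx = int((event - start).total_seconds() // 60)
        for k in range(max(0, end_idx - window_min), min(n_minutes, end_idx + 1)):
            labels[k] = 1
    return labels

t0 = datetime(2024, 1, 1, 0, 0)
events = [t0 + timedelta(minutes=30), t0 + timedelta(minutes=95)]
y = label_timeline(t0, 120, events)   # 120-min timeline, two fault events
```

The resulting sparse positive class is what makes metrics such as F1 and AUC, rather than raw accuracy, the right yardsticks for the benchmarked models.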
21 pages, 8441 KB  
Article
Calculation of Influence of Maneuverability Conditions on Submerged Water-Jet on Actuator Disk Model
by Dongsheng Yang and Liu Chen
J. Mar. Sci. Eng. 2026, 14(2), 189; https://doi.org/10.3390/jmse14020189 - 16 Jan 2026
Abstract
This study examines the performance variations and flow field characteristics of a submerged water-jet propulsor under complex oblique sailing conditions, providing theoretical insights for propulsor design optimization and ship maneuverability improvement. Both steady and unsteady numerical simulations were performed, with the unsteady analysis employing an actuator disk model. The results indicate that at a positive drift angle of 30°, the propulsor head decreases by approximately 6%, whereas at a negative drift angle of 30°, it drops significantly by 28%. The entropy generation distribution among the propulsor components was analyzed based on entropy generation theory, revealing that turbulent dissipation contributes the largest portion (64%) of the total entropy generation, with the impeller flow passage accounting for 47%. Furthermore, pressure fluctuations on the propulsor housing surface were evaluated under unsteady conditions. The findings show that a twin-jet configuration with an optimal spacing of 1.6D effectively minimizes flow field interference during maneuvering. Overall, the study provides a theoretical foundation for enhancing the design and hydrodynamic performance of submerged water-jet propulsion systems. Full article
(This article belongs to the Section Ocean Engineering)
64 pages, 10763 KB  
Review
The State of HBIM in Digital Heritage: A Critical and Bibliometric Assessment of Six Emerging Frontiers (2015–2025)
by Fabrizio Banfi and Wanqin Liu
Appl. Sci. 2026, 16(2), 906; https://doi.org/10.3390/app16020906 - 15 Jan 2026
Abstract
After nearly two decades of developments in Historic/Heritage Building Information Modeling (HBIM), the field has reached a stage of maturity that calls for a critical reassessment of its evolution, achievements, and remaining challenges. Digital representation has become a central component of contemporary heritage conservation, enabling advanced methods for analysis, management, and communication. This review examines the maturation of HBIM as a comprehensive framework that integrates extended reality (XR), artificial intelligence (AI), machine learning (ML), semantic segmentation and Digital Twin (DT). Six major research domains that have shaped recent progress are outlined: (1) the application of HBIM to restoration and conservation workflows; (2) the expansion of public engagement through XR, virtual museums, and serious games; (3) the stratigraphic documentation of building archaeology, historical phases, and material decay; (4) data-exchange mechanisms and interoperability with open formats and Common Data Environments (CDEs); (5) strategies for modeling geometric and semantic complexity using traditional, applied, and AI-driven approaches; and (6) the emergence of heritage DT as dynamic, semantically enriched systems integrating real-time and lifecycle data. A comparative assessment of international case studies and bibliometric trends (2015–2025) illustrates how HBIM is transforming proactive and data-informed conservation practice. The review concludes by identifying persistent gaps and outlining strategic directions for the next phase of research and implementation. Full article
22 pages, 6194 KB  
Article
Innovative Cyber-Physical/Electronic AI-Assisted Digital Twin Model of Small Energy Harvesting Cantilever Power Generators
by Alessandro Massaro, Giuseppe Fanizza and Giuseppe Starace
Energies 2026, 19(2), 390; https://doi.org/10.3390/en19020390 - 13 Jan 2026
Abstract
The paper deals with the design of a Digital Twin model of a cantilever beam for low-frequency energy harvesting applications, and specifically with a digital model whose simulations are matched against Finite Element Method solutions in order to validate the model. The physical behavior is based on the main parameters to be investigated. The finite element analysis is carried out geometrically and parametrically for a small PZT5A device on the order of millimeters and is optimized to capture the relationships between tip displacement, generated voltage, and vibrational gravitational forces for standard industrial applications in the acceleration range between 0.5 and 2 g. A procedure to integrate the Digital Twin into a design framework is then developed, including an artificial intelligence algorithm that supports the modelling of the real behavior of the device. The paper is intended to help researchers adopt Digital Twins for the electronic design and physical characterization of low-frequency energy harvesting devices using exclusively open-source tools. Full article
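For orientation, the first natural frequency of a uniform Euler-Bernoulli cantilever, one quantity such a Digital Twin must reproduce against FEM, has a closed form. The sketch below uses hypothetical PZT-5A-like material and geometry values, not the paper's actual device:

```python
import numpy as np

def cantilever_f1(E, rho, L, b, h):
    """First natural frequency (Hz) of a uniform Euler-Bernoulli cantilever:
    f1 = (lambda1^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A)), with lambda1 = 1.875
    for the first bending mode and a rectangular b x h cross-section."""
    I = b * h ** 3 / 12.0   # second moment of area
    A = b * h               # cross-sectional area
    return (1.875 ** 2 / (2.0 * np.pi * L ** 2)) * np.sqrt(E * I / (rho * A))

# hypothetical millimetre-scale beam: E = 61 GPa, rho = 7750 kg/m^3,
# L = 10 mm, width 2 mm, thickness 0.3 mm
f = cantilever_f1(E=61e9, rho=7750.0, L=10e-3, b=2e-3, h=0.3e-3)
```

Note the 1/L² scaling: halving the length quadruples the frequency, which is why millimetre-scale harvesters need thin beams (or tip masses) to stay in the low-frequency range.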
34 pages, 2742 KB  
Review
Recent Advances in Digital Fringe Projection Profilometry (2022–2025): Techniques, Applications, and Metrological Challenges—A Review
by Mishraim Sanchez-Torres, Ismael Hernández-Capuchin, Cristina Ramírez-Fernández, Eddie Clemente, José Luis Javier Sánchez-González and Alan López-Martínez
Metrology 2026, 6(1), 3; https://doi.org/10.3390/metrology6010003 - 12 Jan 2026
Abstract
Digital fringe projection profilometry (DFPP) is a widely used technique for full-field, non-contact 3D surface measurement, offering precision from the sub-micrometer-to-millimeter scale depending on system geometry and fringe design. This review provides a consolidated synthesis of advances reported between 2022 and 2025, covering projection and imaging architectures, phase formation and unwrapping strategies, calibration approaches, high-speed implementations, and learning-based reconstruction methods. A central contribution of this review is the integration of these developments within a metrological perspective, explicitly relating phase–height transformation, fringe parameters, system geometry, and calibration to dominant uncertainty sources and error propagation. Recent progress highlights trade-offs between sensitivity, robustness, computational complexity, and applicability to non-ideal surfaces, while learning-based and hybrid optical–computational approaches demonstrate substantial improvements in reconstruction reliability under challenging conditions. Remaining challenges include measurements on reflective or transparent surfaces, dynamic scenes, environmental instability, and real-time operation. The review outlines emerging research directions such as physics-informed learning, digital twins, programmable optics, and autonomous calibration, providing guidance for the development of next-generation DFPP systems for precision metrology. Full article
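The phase-formation step that DFPP systems build on can be illustrated with the classic four-step phase-shifting algorithm: four fringe images at phase shifts of 0, π/2, π, and 3π/2 recover the wrapped phase in closed form. A minimal NumPy sketch on synthetic fringes (background, modulation, and the phase ramp are all hypothetical):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Four-step phase shifting: with I_k = a + b*cos(phi + k*pi/2),
    I4 - I2 = 2b*sin(phi) and I1 - I3 = 2b*cos(phi), so
    phi = atan2(I4 - I2, I1 - I3) (wrapped to (-pi, pi])."""
    return np.arctan2(i4 - i2, i1 - i3)

# synthetic fringes over one pixel row with a known phase ramp
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 50)  # stays inside (-pi, pi]
a, b = 0.5, 0.4                                        # background, modulation
frames = [a + b * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = four_step_phase(*frames)
```

Everything downstream in a DFPP pipeline (unwrapping, phase-height calibration, uncertainty propagation) operates on this wrapped phase map.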
29 pages, 1793 KB  
Review
Digital Twins for Cows and Chickens: From Hype Cycles to Hard Evidence in Precision Livestock Farming
by Suresh Neethirajan
Agriculture 2026, 16(2), 166; https://doi.org/10.3390/agriculture16020166 - 9 Jan 2026
Abstract
Digital twin technology is widely promoted as a transformative step for precision livestock farming, yet no fully realized, engineering-grade digital twins are deployed in commercial dairy or poultry systems today. This work establishes the current state of knowledge on dairy and poultry digital twins by synthesizing evidence through systematic database searches, thematic evidence mapping and critical analysis of validation gaps, carbon accounting and adoption barriers. Existing platforms are better described as near-digital-twin systems with partial sensing and modelling, digital-twin-inspired prototypes, simulation frameworks or decision-support tools that are often labelled as twins despite lacking continuous synchronization and closed-loop control. This distinction matters because the empirical foundation supporting many claims remains limited. Three critical gaps emerge: life-cycle carbon impacts of digital infrastructures are rarely quantified even as sustainability benefits are frequently asserted; field-validated improvements in feed efficiency, particularly in poultry feed conversion ratios, are scarce and inconsistent; and systematic reporting of failure rates, downtime and technology abandonment is almost absent, leaving uncertainties about long-term reliability. Adoption barriers persist across technical, economic and social dimensions, including rural connectivity limitations, sensor durability challenges, capital and operating costs, and farmer concerns regarding data rights, transparency and trust. Progress for cows and chickens will require rigorous validation in commercial environments, integration of mechanistic and statistical modelling, open and modular architectures and governance structures that support biological, economic and environmental accountability whilst ensuring that system intelligence is worth its material and energy cost. Full article
(This article belongs to the Section Farm Animal Production)

35 pages, 3152 KB  
Review
AI-Resolved Protein Energy Landscapes, Electrodynamics, and Fluidic Microcircuits as a Unified Framework for Predicting Neurodegeneration
by Cosmin Pantu, Alexandru Breazu, Stefan Oprea, Matei Serban, Razvan-Adrian Covache-Busuioc, Octavian Munteanu, Nicolaie Dobrin, Daniel Costea and Lucian Eva
Int. J. Mol. Sci. 2026, 27(2), 676; https://doi.org/10.3390/ijms27020676 - 9 Jan 2026
Abstract
Research shows that neurodegenerative processes do not arise from a single "broken" biochemical pathway; rather, they develop when a complex multi-physics environment gradually loses its ability to stabilize the neuron through the collective action of its protein, ion, field and fluid dynamics. New technologies such as quantum-informed molecular simulation (QIMS), dielectric nanoscale mapping, intracellular fluid dynamics, and imaging of perivascular flow are allowing researchers to understand how the collective interactions among proteins, membranes and their electrical properties, together with fluid dynamics within the cell, form a highly interconnected dynamic system. These systems require fine control over the energetic, mechanical and electrical interactions that maintain their coherence. Even a small change in protein conformations, in the electrical properties of the membrane, or in the viscosity of the cell's interior can cause the high-dimensional space in which the system operates to lose some of its stabilizing curvature and become prone to instability well before structural pathologies become apparent. AI has allowed researchers to create digital twin models from combined physical data across multiple scales and to predict the trajectory of the neural system toward instability by identifying signs of early deformation. Preliminary studies suggest that deviations in the ergodicity of metabolic–mechanical systems, contraction of dissipative bandwidth, and fragmentation of attractor basins could be indicators of vulnerability. This study attempts to combine the current research into a cohesive view of the role of progressive loss of multi-physics coherence in neurodegenerative disease.
Through the integration of protein energetics, electrodynamic drift, and hydrodynamic irregularities with AI-based predictive modeling, the authors provide mechanistic insights and discuss potential approaches to early detection, targeted stabilization, and precision-guided interventions grounded in neurophysics.
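The abstract's idea of detecting drift toward instability before overt pathology can be illustrated with a generic early-warning statistic. The sketch below uses a sliding-window lag-1 autocorrelation, a standard proxy for "critical slowing down" in dynamical systems; it is purely illustrative of the concept and is not the authors' model, and the AR(1) toy signal and window size are assumptions for the demonstration.

```python
import numpy as np

def lag1_autocorr(window: np.ndarray) -> float:
    """Lag-1 autocorrelation of a 1-D signal window."""
    x = window - window.mean()
    denom = np.dot(x, x)
    if denom == 0.0:
        return 0.0
    return float(np.dot(x[:-1], x[1:]) / denom)

def early_warning_series(signal: np.ndarray, win: int) -> np.ndarray:
    """Sliding-window lag-1 autocorrelation; rising values suggest
    critical slowing down, a generic precursor of instability."""
    return np.array([lag1_autocorr(signal[i:i + win])
                     for i in range(len(signal) - win + 1)])

# Toy demonstration: an AR(1) process whose restoring force weakens
# over time, mimicking a system losing its stabilizing "curvature".
rng = np.random.default_rng(0)
n = 2000
phi = np.linspace(0.1, 0.95, n)   # stability parameter drifts toward 1
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()
ews = early_warning_series(x, win=200)
# The indicator rises as the system approaches instability.
```

In a digital twin setting, such an indicator would be computed on measured state variables (here replaced by the synthetic AR(1) signal) and monitored for sustained upward trends.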

19 pages, 2083 KB  
Article
Digital Twin Modeling for Landslide Risk Scenarios in Mountainous Regions
by Lai Li, Bohui Tang, Fangliang Cai, Lei Wei, Xinming Zhu and Dong Fan
Sensors 2026, 26(2), 421; https://doi.org/10.3390/s26020421 - 8 Jan 2026
Abstract
Background: Rainfall-induced landslides are a widespread and destructive geological hazard that resists precise prediction. They pose serious threats to human lives and property, ecological stability, and socioeconomic development. Methods: To address the challenges of mitigating rainfall-induced landslides in high-altitude mountainous regions, this study proposes a digital twin framework that couples multiple physical fields and is based on the spherical discrete element method. Results: Two-dimensional simulations identify a trapezoidal stress distribution with inward-increasing stress, rising uniformly from 0 kPa at the surface to 210 kPa in the interior. The crest stress remains constant at 1.8 kPa under gravity, whereas the toe stress rises from 6.5 to 14.8 kPa with the slope gradient. While the stress pattern persists post-failure, the specific magnitudes change significantly. The study also introduces a three-dimensional close-packed spherical discrete element method that achieves enhanced computational efficiency and stability through streamlined contact mechanics. Conclusions: The proposed framework uses point-contact mechanics to simplify friction modeling, enhancing computational efficiency and numerical stability. By integrating stress, rainfall, and seepage fields, we establish a coupled hydro-mechanical model that enables real-time digital twin mapping of landslide evolution through dynamic parameter adjustments.
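The "point-contact mechanics" used to simplify friction modeling can be sketched with the most basic DEM building block: a linear-spring (Hookean) normal force between overlapping spheres, with the tangential friction force capped by Coulomb's law. This is a generic textbook contact law, not the paper's implementation, and the stiffness `kn` and friction coefficient `mu` below are illustrative values.

```python
import numpy as np

def normal_contact_force(p1, p2, r1, r2, kn=1.0e6):
    """Linear-spring normal force on sphere 1 from contact with sphere 2.

    fn = kn * overlap, acting repulsively along the line of centres;
    returns the zero vector when the spheres are not touching.
    """
    d = np.asarray(p2, float) - np.asarray(p1, float)
    dist = np.linalg.norm(d)
    overlap = r1 + r2 - dist
    if dist == 0.0 or overlap <= 0.0:
        return np.zeros(3)           # no contact
    n = d / dist                     # unit vector from sphere 1 to 2
    return -kn * overlap * n         # push sphere 1 away from sphere 2

def coulomb_cap(ft_trial, fn_mag, mu=0.5):
    """Cap a trial tangential (friction) force at mu * |Fn|."""
    cap = mu * fn_mag
    return float(np.clip(ft_trial, -cap, cap))

# Two unit-radius spheres overlapping by 0.1 along x:
f = normal_contact_force([0.0, 0.0, 0.0], [1.9, 0.0, 0.0], 1.0, 1.0)
# |Fn| = kn * overlap = 1e6 * 0.1 = 1e5, directed along -x on sphere 1
```

Resolving each sphere pair through a single contact point in this way is what keeps per-contact cost low; a full hydro-mechanically coupled model would additionally update `kn`-scale forces with pore-pressure and seepage terms each time step.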
(This article belongs to the Section Environmental Sensing)
