Search Results (271)

Search Parameters:
Keywords = static and dynamic calibration

24 pages, 12718 KB  
Article
Proposed Methodology for Correcting Fourier-Transform Infrared Spectroscopy Field-of-View Scene-Change Artifacts
by Kody A. Wilson, Michael L. Dexter, Benjamin F. Akers and Anthony L. Franz
Remote Sens. 2026, 18(2), 317; https://doi.org/10.3390/rs18020317 (registering DOI) - 17 Jan 2026
Abstract
Fourier-transform spectrometers are widely used for spectral measurements. Changes in the field of view during measurement introduce oscillations into the measured spectra known as scene-change artifacts (SCAs). Field-of-view changes also introduce uncertainty about which target the measured spectrum represents. Though scene-change artifacts are often present in dynamic data, their significance is disputed in the current literature. This work presents a theoretical framework and experimental validation for scene-change artifacts. Field-of-view changes introduce variable interferogram offsets, which standard processing techniques assume are constant. The error between the interferogram offset and its estimate is Fourier-transformed, yielding scene-change artifacts, often confused with noise, in the calibrated spectrum. Previous theoretical models ignored the effect of the interferogram offset in generating SCAs, leading to an underestimation of the scene-change artifact significance. Smooth offset correction removes these artifacts by estimating the variable interferogram offset using locally weighted scatter-plot smoothing. Updating the interferogram offset estimate resulted in the same accuracy expected for static conditions. The resulting spectra resemble the zero path difference spectra, similar to earlier theoretical predictions. These results indicate that Fourier-transform spectroscopy accuracy with variable scenes can be significantly improved with minor modifications to data processing. Full article
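
The smooth offset correction described in this abstract lends itself to a short illustration: estimate the slowly varying interferogram offset with locally weighted scatter-plot smoothing (LOWESS) and subtract it before the Fourier transform. The sketch below is not the authors' code; the synthetic interferogram, the tanh-shaped drift, and the smoothing fraction are all assumptions chosen only to show the idea.

```python
# Illustrative sketch (not the authors' code) of smooth offset correction:
# estimate the slowly varying interferogram offset with LOWESS and remove it
# before the Fourier transform. The synthetic interferogram, drift shape, and
# smoothing fraction are assumptions.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
opd = np.linspace(-1.0, 1.0, 4096)                       # optical path difference grid (assumed)
fringes = np.cos(2 * np.pi * 800 * opd) * np.exp(-(opd / 0.3) ** 2)   # toy interferogram
drift = 0.2 * np.tanh(3 * opd)                           # variable offset from a scene change
igm = fringes + drift + 0.01 * rng.standard_normal(opd.size)

# LOWESS follows the slow offset while averaging out the fast fringes
offset_est = lowess(igm, opd, frac=0.15, return_sorted=False)
corrected = igm - offset_est

raw_spec = np.abs(np.fft.rfft(igm - igm.mean()))          # constant-offset assumption
corr_spec = np.abs(np.fft.rfft(corrected))                # smooth offset correction
print("low-frequency artifact power, raw vs corrected:",
      round(raw_spec[:20].sum(), 2), round(corr_spec[:20].sum(), 2))
```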

35 pages, 830 KB  
Article
Predicting Financial Contagion: A Deep Learning-Enhanced Actuarial Model for Systemic Risk Assessment
by Khalid Jeaab, Youness Saoudi, Smaaine Ouaharahe and Moulay El Mehdi Falloul
J. Risk Financial Manag. 2026, 19(1), 72; https://doi.org/10.3390/jrfm19010072 - 16 Jan 2026
Viewed by 208
Abstract
Financial crises increasingly exhibit complex, interconnected patterns that traditional risk models fail to capture. The 2008 global financial crisis, 2020 pandemic shock, and recent banking sector stress events demonstrate how systemic risks propagate through multiple channels simultaneously—e.g., network contagion, extreme co-movements, and information cascades—creating a multidimensional phenomenon that exceeds the capabilities of conventional actuarial or econometric approaches alone. This paper addresses the fundamental challenge of modeling this multidimensional systemic risk phenomenon by proposing a mathematically formalized three-tier integration framework that achieves 19.2% accuracy improvement over traditional models through the following: (1) dynamic network-copula coupling that captures 35% more tail dependencies than static approaches, (2) semantic-temporal alignment of textual signals with network evolution, and (3) economically optimized threshold calibration reducing false positives by 35% while maintaining 85% crisis detection sensitivity. Empirical validation on historical data (2000–2023) demonstrates significant improvements over traditional models: 19.2% increase in predictive accuracy (R2 from 0.68 to 0.87), 2.7 months earlier crisis detection compared to Basel III credit-to-GDP indicators, and 35% reduction in false positive rates while maintaining 85% crisis detection sensitivity. Case studies of the 2008 crisis and 2020 market turbulence illustrate the model’s ability to identify subtle precursor signals through integrated analysis of network structure evolution and semantic changes in regulatory communications. These advances provide financial regulators and institutions with enhanced tools for macroprudential supervision and countercyclical capital buffer calibration, strengthening financial system resilience against multifaceted systemic risks. Full article
(This article belongs to the Special Issue Financial Regulation and Risk Management amid Global Uncertainty)
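
The "economically optimized threshold calibration" step can be illustrated with a hedged sketch: sweep candidate alert thresholds over historical crisis scores and keep the one that minimizes an assumed economic cost that penalizes missed crises more heavily than false alarms. The scores, labels, and cost weights below are placeholders rather than the paper's calibration.

```python
# Hedged sketch of cost-weighted alert-threshold calibration. Scores, labels,
# and cost weights are placeholders, not the paper's data or calibration.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.uniform(0, 1, 500)                                       # model crisis probabilities
labels = (scores + 0.2 * rng.standard_normal(500) > 0.7).astype(int)  # 1 = crisis period

COST_FALSE_ALARM = 1.0     # assumed cost of an unnecessary intervention
COST_MISSED_CRISIS = 8.0   # assumed cost of failing to flag a crisis

def expected_cost(threshold):
    alarm = scores >= threshold
    false_alarms = np.sum(alarm & (labels == 0))
    missed = np.sum(~alarm & (labels == 1))
    return COST_FALSE_ALARM * false_alarms + COST_MISSED_CRISIS * missed

grid = np.linspace(0.05, 0.95, 91)
best = min(grid, key=expected_cost)
print("economically optimal alert threshold:", round(float(best), 2))
```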

20 pages, 3259 KB  
Article
Green Transportation Planning for Smart Cities: Digital Twins and Real-Time Traffic Optimization in Urban Mobility Networks
by Marek Lis and Maksymilian Mądziel
Appl. Sci. 2026, 16(2), 678; https://doi.org/10.3390/app16020678 - 8 Jan 2026
Viewed by 292
Abstract
This paper proposes a comprehensive framework for integrating Digital Twins (DT) with real-time traffic optimization systems to enhance urban mobility management in Smart Cities. Using the Pobitno Roundabout in Rzeszów as a case study, we established a calibrated microsimulation model (validated via the GEH statistic) that serves as the core of the proposed Digital Twin. The study goes beyond static scenario analysis by introducing an Adaptive Inflow Metering (AIM) logic designed to interact with IoT sensor data. While traditional geometrical upgrades (e.g., turbo-roundabouts) were analyzed, simulation results revealed that geometrical changes alone—without dynamic control—may fail under peak load conditions (resulting in LOS F). Consequently, the research demonstrates how the DT framework allows for the testing of “Software-in-the-Loop” (SiL) solutions where Python-based algorithms dynamically adjust inflow parameters to prevent gridlock. The findings confirm that combining physical infrastructure changes with digital, real-time optimization algorithms is essential for achieving sustainable “green transport” goals and reducing emissions in congested urban nodes. Full article
(This article belongs to the Special Issue Green Transportation and Pollution Control)
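
The GEH statistic used to validate the microsimulation has a standard closed form, GEH = sqrt(2(M - C)^2 / (M + C)) for modelled flow M and counted flow C, with values below 5 usually read as a good match. The flows in the sketch below are made up for illustration.

```python
# GEH statistic for comparing modelled (M) and counted (C) hourly flows;
# GEH < 5 is the usual acceptance rule. Flow values are made up.
import numpy as np

def geh(modelled, counted):
    modelled = np.asarray(modelled, dtype=float)
    counted = np.asarray(counted, dtype=float)
    return np.sqrt(2.0 * (modelled - counted) ** 2 / (modelled + counted))

modelled = np.array([850, 1210, 640, 1980])   # veh/h from the simulation (assumed)
counted = np.array([900, 1150, 700, 2100])    # veh/h from field counts (assumed)
g = geh(modelled, counted)
print("GEH per link:", g.round(2), "share below 5:", float(np.mean(g < 5)))
```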

35 pages, 1515 KB  
Article
Bio-RegNet: A Meta-Homeostatic Bayesian Neural Network Framework Integrating Treg-Inspired Immunoregulation and Autophagic Optimization for Adaptive Community Detection and Stable Intelligence
by Yanfei Ma, Daozheng Qu and Mykhailo Pyrozhenko
Biomimetics 2026, 11(1), 48; https://doi.org/10.3390/biomimetics11010048 - 7 Jan 2026
Viewed by 177
Abstract
Contemporary neural and generative architectures are deficient in self-preservation mechanisms and sustainable stability. In uncertain or noisy situations, they frequently demonstrate oscillatory learning, overconfidence, and structural deterioration, indicating a lack of biological regulatory principles in artificial systems. We present Bio-RegNet, a meta-homeostatic Bayesian neural network architecture that integrates T-regulatory-cell-inspired immunoregulation with autophagic structural optimization. The model integrates three synergistic subsystems: the Bayesian Effector Network (BEN) for uncertainty-aware inference, the Regulatory Immune Network (RIN) for Lyapunov-based inhibitory control, and the Autophagic Optimization Engine (AOE) for energy-efficient regeneration, thereby establishing a closed energy–entropy loop that attains adaptive equilibrium among cognition, regulation, and metabolism. This triadic feedback achieves meta-homeostasis, transforming learning into a process of ongoing self-stabilization instead of static optimization. Bio-RegNet routinely outperforms state-of-the-art dynamic GNNs across twelve neuronal, molecular, and macro-scale benchmarks, enhancing calibration and energy efficiency by over 20% and expediting recovery from perturbations by 14%. Its domain-invariant equilibrium facilitates seamless transfer between biological and manufactured systems, exemplifying a fundamental notion of bio-inspired, self-sustaining intelligence—connecting generative AI and biomimetic design for sustainable, living computation. Bio-RegNet consistently outperforms the strongest baseline HGNN-ODE, improving ARI from 0.77 to 0.81 and NMI from 0.84 to 0.87, while increasing equilibrium coherence κ from 0.86 to 0.93. Full article
(This article belongs to the Special Issue Bio-Inspired AI: When Generative AI and Biomimicry Overlap)
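
The community-detection improvements are quoted in terms of ARI and NMI; a minimal sketch of how those two scores are computed from predicted and ground-truth community labels (using scikit-learn; the label vectors are illustrative) follows.

```python
# Scoring a community-detection result with ARI and NMI, the two metrics
# quoted above. The label vectors are illustrative only.
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

true_communities = [0, 0, 0, 1, 1, 1, 2, 2, 2]
pred_communities = [0, 0, 1, 1, 1, 1, 2, 2, 0]

ari = adjusted_rand_score(true_communities, pred_communities)
nmi = normalized_mutual_info_score(true_communities, pred_communities)
print(f"ARI = {ari:.2f}, NMI = {nmi:.2f}")
```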

20 pages, 2092 KB  
Article
Calibration of Snow Particle Contact Parameters for Simulation Analysis of Membrane Structure Snow Removal Robot
by Jiangtao Dong, Fuxiang Zhang, Fengshan Huang and Xiaofei Man
Appl. Sci. 2026, 16(2), 610; https://doi.org/10.3390/app16020610 - 7 Jan 2026
Viewed by 99
Abstract
To enhance the accuracy of discrete element method (DEM) simulation for the snow removal process performed by autonomous robots on membrane structures, this study calibrated the key contact parameters of snow particles used in the simulation. Through a literature survey, the intrinsic parameters and contact parameter ranges for snow particles and membrane structures were determined. A discrete element model of snow particles was established, and the Hertz–Mindlin with Johnson–Kendall–Roberts contact model was selected to simulate the formation process of the repose angle. Using the actual repose angle of snow particles as the target, four significant factors were identified through a Plackett–Burman (P-B) experiment, and the other factors were set at the intermediate level. Through the steepest ascent experiment and response surface design, second-order response equations of the four significant factors were obtained. The optimal parameter combination was calculated as follows: the surface energy of snow particles was 0.23 J/m²; the restitution coefficient, static friction coefficient, and rolling friction coefficient of snow–snow were 0.141, 0.05, and 0.03; and the restitution coefficient, static friction coefficient, and rolling friction coefficient of snow–membrane were 0.2, 0.18, and 0.03. The simulated repose angle was 40.62°, and the relative error with respect to the actual repose angle was 0.32%. These calibration results are reliable and provide a sound simulation basis and essential data support for the optimal design of a snow removal robot and the dynamic simulation of the operation process. Full article
(This article belongs to the Special Issue Advances in Robotics and Autonomous Systems)
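
The calibration workflow summarized above (screen factors, fit a second-order response surface to simulated repose angles, then solve for the parameter set matching the measured angle) can be sketched generically. Everything numeric below, including the design points, bounds, and target angle, is a placeholder rather than data from the paper.

```python
# Generic sketch of response-surface calibration of DEM contact parameters:
# fit a second-order polynomial to simulated repose angles, then search for
# the parameter set whose predicted angle matches a measured target.
# All design points, bounds, and the target angle are placeholders; a real
# study would use a full Plackett-Burman / response-surface design with more runs.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# columns: surface energy (J/m^2), snow-snow restitution, static friction, rolling friction
X = np.array([[0.10, 0.10, 0.04, 0.02],
              [0.20, 0.15, 0.05, 0.03],
              [0.30, 0.20, 0.06, 0.04],
              [0.15, 0.12, 0.05, 0.02],
              [0.25, 0.18, 0.04, 0.04],
              [0.20, 0.10, 0.06, 0.03]])
angles = np.array([33.1, 38.5, 43.9, 36.0, 41.2, 39.0])   # simulated repose angles (deg)

poly = PolynomialFeatures(degree=2, include_bias=False)
surface = LinearRegression().fit(poly.fit_transform(X), angles)

TARGET_ANGLE = 40.75   # measured repose angle (placeholder)

def mismatch(p):
    predicted = surface.predict(poly.transform(p.reshape(1, -1)))[0]
    return (predicted - TARGET_ANGLE) ** 2

bounds = [(0.10, 0.30), (0.10, 0.20), (0.04, 0.06), (0.02, 0.04)]
result = minimize(mismatch, x0=np.array([0.20, 0.15, 0.05, 0.03]), bounds=bounds)
print("calibrated parameter set:", result.x.round(3))
```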

20 pages, 6329 KB  
Article
Physical Characterization of Cumin Seeds and Development of a Discrete Element Simulation Model
by Hongmei Wang, Peiyu Chen, Changqi Wang, Weiguo Chen, Jiale Ma, Liangyang Lu and Yongcheng Zhang
AgriEngineering 2026, 8(1), 19; https://doi.org/10.3390/agriengineering8010019 - 5 Jan 2026
Viewed by 182
Abstract
The low level of mechanization in the production process of cumin seeds is one of the primary factors limiting their yield and economic efficiency. To enhance the mechanization of cumin seed production, this study focused on cumin seeds as the research subject. Physical parameters of cumin seeds were determined through physical experiments; based on these parameters, a discrete element model of cumin seeds was established, and the shear modulus was calibrated using angle of repose tests. The established model was used to simulate the seeding process of a seed drill, the model’s accuracy was verified by analyzing the seed trajectory, movement velocity, seeding quality, and the dynamic angle of repose of seeds inside the drill. Results indicated that the collision recovery coefficient, static friction coefficient, and rolling friction coefficient between cumin seeds and ABS plastic, stainless steel plates, and other cumin seeds were 0.3, 0.35, and 0.21; 0.49, 0.39, 0.24; and 0.24, 0.38, 0.18, respectively. Calibration via simulated cylinder accumulation tests yielded a deviation of 0.28% between the simulated accumulation angle and the physical accumulation angle at a shear modulus of 100 MPa; the simulated seed trajectory during dispensing closely matched physical dispensing tests. The average deviation in particle drop velocity within the bridge channel region was 4.23%, with a maximum deviation of 6.07%; the average deviation in dynamic packing angle from start to finish for the particle group was 2.84%, with a maximum deviation of 4.18%; and the average mass discharged from the 14 simulated seed nozzles was 0.0446 g, compared to 0.043 g in physical tests, with a deviation of 3.72%. These results demonstrate the high accuracy and reliability of the established cumin discrete element model and its parameters, providing technical support for the design and optimization of full-process mechanical cumin production systems. Full article

23 pages, 2969 KB  
Review
Dynamic Oxidative States: Interplay of Aging, Metabolic Stress, and Circadian Rhythms in Modulating Stroke Severity
by Jui-Ming Sun, Jing-Shiun Jan, Cheng-Ta Hsieh, Rajeev Taliyan, Chih-Hao Yang, Ruei-Dun Teng and Ting-Lin Yen
Antioxidants 2026, 15(1), 54; https://doi.org/10.3390/antiox15010054 - 31 Dec 2025
Viewed by 455
Abstract
Oxidative stress is a defining feature of stroke pathology, but the magnitude, timing and impact of redox imbalance are not static. Emerging evidence indicates that physiological contexts, such as aging, metabolic stress, and circadian disruption, continuously reshape oxidative status and determine the brain’s vulnerability to ischemic and reperfusion injury. This review integrates recent insights into how these intrinsic modulators govern the transition from adaptive physiological redox signaling to pathological oxidative stress during stroke. Aging compromises mitochondrial quality control and blunts NRF2-driven antioxidant responses, heightening susceptibility to ROS-driven damage. Metabolic dysfunction, as seen in obesity and diabetes, amplifies oxidative burden through NADPH oxidase activation, lipid peroxidation, and impaired glutathione recycling, further aggravating post-ischemic inflammation. Circadian misalignment, meanwhile, disrupts the rhythmic expression of antioxidant enzymes and metabolic regulators such as BMAL1, REV-ERBα, and SIRT1, constricting the brain’s temporal window of resilience. We highlight convergent signaling hubs, NRF2/KEAP1, SIRT–PGC1α, and AMPK pathways, as integrators of these physiological inputs that collectively calibrate redox homeostasis. Recognizing oxidative stress as a dynamic, context-dependent process reframes it from a static pathological state to a dynamic outcome of systemic and temporal imbalance, offering new opportunities for time-sensitive and metabolism-informed redox interventions in stroke. Full article
(This article belongs to the Special Issue Antioxidants, Metabolic Regulation and Stroke)

24 pages, 4607 KB  
Article
Cross-Modal Interaction Fusion-Based Uncertainty-Aware Prediction Method for Industrial Froth Flotation Concentrate Grade by Using a Hybrid SKNet-ViT Framework
by Fanlei Lu, Weihua Gui, Yulong Wang, Jiayi Zhou and Xiaoli Wang
Sensors 2026, 26(1), 150; https://doi.org/10.3390/s26010150 - 25 Dec 2025
Viewed by 351
Abstract
In froth flotation, the features of froth images provide important information for predicting the concentrate grade. However, the froth structure is influenced by multiple factors, such as air flowrate, slurry level, ore properties, reagents, etc., which leads to highly complex and dynamic changes in the image features. Additionally, issues such as the immeasurability of ore properties and measurement errors pose significant uncertainties, including aleatoric uncertainty (intrinsic variability from ore fluctuations and sensor noise) and epistemic uncertainty (incomplete feature representation and local data heterogeneity), as well as generalization challenges for prediction models. This paper proposes an uncertainty quantification regression framework based on cross-modal interaction fusion, which integrates the complementary advantages of Selective Kernel Networks (SKNet) and Vision Transformers (ViT). By designing a cross-modal interaction module, the method achieves deep fusion of local and global features, reducing epistemic uncertainty caused by incomplete feature expression in single models. Meanwhile, by combining adaptive calibrated quantile regression, which uses an exponential moving average (EMA) to track real-time coverage and adjust parameters dynamically, the prediction interval coverage is optimized, addressing the inability of static quantile regression to adapt to aleatoric uncertainty. Through the localized conformal prediction module, sensitivity to local data distributions is enhanced, avoiding the limitation of global conformal methods in ignoring local heterogeneity. Experimental results demonstrate that this method significantly improves the robustness of uncertainty estimation while maintaining high prediction accuracy, providing strong support for intelligent optimization and decision-making in industrial flotation processes. Full article
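
The EMA-driven part of the calibration can be sketched as a simple feedback loop: track the empirical coverage of the prediction interval with an exponential moving average and widen or narrow the interval whenever coverage drifts from the target. The data stream, target coverage, and gain below are assumptions, not the paper's settings.

```python
# Sketch of EMA-based adaptive interval calibration: track running coverage
# with an exponential moving average and adjust the interval half-width to
# hold a target coverage. The data stream, target, and gain are assumptions.
import numpy as np

rng = np.random.default_rng(2)
TARGET = 0.90      # desired interval coverage
ALPHA = 0.05       # EMA smoothing factor
GAIN = 0.2         # step size for width adjustment

coverage_ema = TARGET
half_width = 1.0   # initial half-width of the prediction interval

for t in range(2000):
    y_pred = 10.0                                                            # stand-in point prediction
    y_true = y_pred + rng.standard_normal() * (1.0 + 0.5 * np.sin(t / 200))  # drifting noise
    covered = float(abs(y_true - y_pred) <= half_width)
    coverage_ema = (1 - ALPHA) * coverage_ema + ALPHA * covered
    half_width *= np.exp(GAIN * (TARGET - coverage_ema))                     # widen if under-covering

print(f"final half-width {half_width:.2f}, EMA coverage {coverage_ema:.3f}")
```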

18 pages, 488 KB  
Article
Decoding Mouse Visual Tasks via Hierarchical Neural-Information Gradients
by Jingyi Feng, Xiang Feng, Yong Luo and Jing Li
Mathematics 2026, 14(1), 31; https://doi.org/10.3390/math14010031 - 22 Dec 2025
Viewed by 225
Abstract
Understanding how the brain encodes and decodes dynamic neural responses to visual stimuli is crucial for revealing visual information representation. Currently, most methods (including deep neural networks, DNNs) often overlook the dynamic generation process of neural data, such as hierarchical visual data, within the brain’s structure. In this work, we introduce two decoding paradigms: fine-grained decoding tests (single brain regions) and coarse-grained decoding tests (multiple regions). Using the Allen Institute’s Visual Coding Neuropixels dataset, we propose the Adaptive Topological Vision Transformer (AT-ViT), which exploits a biologically calibrated cumulative hierarchy derived from single-area decoding performance to adaptively decode topological relationships across brain regions. Extensive experiments confirm the ‘Information-Gradient Hypothesis’: single-area decoding accuracy recovers the anatomical visual hierarchy, and AT-ViT achieves maximal performance when this data-driven gradient is respected. AT-ViT outperforms non-hierarchical baselines (ada-PCA/SVM) by 1.08–1.93% in natural scenes and 2.46–3.34% in static gratings across sessions, peaking at hierarchy 3 (visual cortex + thalamus/midbrain) with up to 96.21%, but declines by 1–2% when hippocampus data are included, indicating that hippocampal signals act as task-irrelevant noise that hinders decoding performance. This work demonstrates the superiority of hierarchical networks for brain visual tasks and opens avenues for studying hippocampal roles beyond visual decoding. Full article
(This article belongs to the Special Issue Machine Learning and Mathematical Methods in Computer Vision)
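
The cumulative-hierarchy procedure (score each region alone, rank regions by that single-area accuracy, then decode with progressively larger region sets) can be sketched with a generic classifier. Random features and a linear SVM stand in for the Neuropixels recordings and the AT-ViT model; region names and effect sizes are invented for illustration.

```python
# Sketch of a data-driven cumulative decoding hierarchy: score each region
# alone, rank regions by that score, then decode with progressively larger
# region sets. Random features and a linear SVM stand in for the Neuropixels
# data and the AT-ViT model; region names and effect sizes are invented.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_units = 300, 20
labels = rng.integers(0, 4, n_trials)                 # stimulus classes
regions = {name: rng.standard_normal((n_trials, n_units)) + 0.3 * w * labels[:, None]
           for name, w in [("V1", 1.0), ("LGN", 0.8), ("HPC", 0.0)]}

def decode(X):
    return cross_val_score(LinearSVC(max_iter=5000), X, labels, cv=5).mean()

single = {name: decode(X) for name, X in regions.items()}   # fine-grained tests
order = sorted(single, key=single.get, reverse=True)        # the information gradient
for k in range(1, len(order) + 1):                          # coarse-grained tests
    X = np.hstack([regions[r] for r in order[:k]])
    print(order[:k], round(decode(X), 3))
```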

22 pages, 7393 KB  
Article
Interpreting Regional Functions Around Urban Rail Stations by Integrating Dockless Bike Sharing and POI Patterns: Case Study of Beijing, China
by Siyang Liu, Jian Rong, Chenjing Zhou, Miao Guo and Haodong Sun
Urban Sci. 2026, 10(1), 1; https://doi.org/10.3390/urbansci10010001 - 19 Dec 2025
Viewed by 369
Abstract
Identifying area functions around urban rail transit (URT) stations is crucial for optimizing urban planning and infrastructure allocation. Traditional methods relying on static land-use data fail to capture dynamic human–environment interactions, while emerging mobility datasets suffer from spatial granularity limitations. This study bridges this gap by integrating spatiotemporal patterns of dockless bike sharing (DBS) with Point of Interest (POI) configurations to characterize station functions. Taking Beijing as a case study, we develop a cluster analysis framework that synthesizes DBS density fluctuations, parking distribution shifts between day/night periods, and POI features. Cluster results reveal functionally distinct station groups with statistically significant differences in both DBS usage patterns and POI distributions. Critically, high-density urban cores exhibit concentrated bicycle usage aligned with mixed POI agglomerations, while suburban zones demonstrate commuter-oriented fluctuations with evening residential surges. This alignment between DBS-derived activity signatures and POI-based land-use features provides actionable insights: planners can optimize bicycle parking in residential clusters, calibrate last-mile connections in employment cores, and adapt infrastructure to localized functional transitions—ultimately enhancing URT-integrated sustainable development. Full article
(This article belongs to the Special Issue Transit-Oriented Land Development and/or 15-Minute Cities)
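
The clustering step can be sketched as follows, under assumptions: build one feature vector per station from day/night dockless-bike parking densities plus POI category shares, standardize, and run k-means. The feature values and the choice of four clusters are illustrative, not the study's data.

```python
# Sketch of the station-clustering step: combine day/night dockless
# bike-sharing parking densities with POI category shares per station,
# standardize, and run k-means. Feature values and k = 4 are illustrative.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_stations = 120
features = np.column_stack([
    rng.gamma(2.0, 50, n_stations),        # daytime DBS parking density
    rng.gamma(2.0, 30, n_stations),        # nighttime DBS parking density
    rng.dirichlet([2, 2, 2], n_stations),  # POI shares: residential, office, commercial
])

X = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
for c in range(4):
    print("cluster", c, "stations:", int(np.sum(kmeans.labels_ == c)))
```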

17 pages, 5847 KB  
Article
A Dynamic Compensation Method Based on Pulse Width for Laser Ranging and Distance Determination in Precision-Guided Aircraft
by Jinghao Li, Zhipeng Li, Yuheng He, Kuizheng Li and Hejuan Chen
Micromachines 2025, 16(12), 1409; https://doi.org/10.3390/mi16121409 - 15 Dec 2025
Viewed by 252
Abstract
This paper proposes a dynamic compensation method for laser ranging based on pulse width for the miniaturization and high-precision requirements of the initiation device in precision-guided aircraft. The study aims to improve the measurement accuracy of the laser ranging unit in the initiation device system and ensure the accuracy and reliability of its fixed-distance initiation decision. The variation in echo pulse width is analyzed by studying laser echo characteristics. The pulse width and the detection distance exhibit an approximately linear negative correlation within the middle range of the applicable distance range. A dynamic compensation method is proposed based on a dual-correction approach using a static lookup table and dynamic compensation. This method establishes the mapping relationship between pulse width and distance deviation, and achieves distance correction by adding distance deviation compensation to the basic value from the static lookup table. The dynamic compensation system integrated with calibration and correction is designed and implemented, and the feasibility of the dynamic compensation method is verified by testing. The relative error between the calculated correction distance and the actual distance is small, and the average relative error is about 1.33%. The proposed method provides key technical support for the establishment of miniaturized and intelligent initiation devices. Full article
(This article belongs to the Special Issue Micro/Nano Optical Devices and Sensing Technology)
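
The dual correction, a static lookup table that maps the raw time-of-flight reading to a base distance plus a pulse-width-dependent deviation term, can be sketched in a few lines. The calibration tables below are placeholders, not the paper's measured values.

```python
# Minimal sketch of the dual correction: a static lookup table maps the raw
# time-of-flight reading to a base distance, and a pulse-width-to-deviation
# mapping (approximately linear over the mid range) adds a compensation term.
# Both calibration tables are placeholders.
import numpy as np

raw_counts = np.array([100, 200, 300, 400, 500])          # raw readings
base_dist_m = np.array([2.0, 4.1, 6.1, 8.2, 10.2])        # static lookup table (m)

pulse_width_ns = np.array([20.0, 30.0, 40.0, 50.0])       # echo pulse width (ns)
deviation_m = np.array([0.12, 0.05, -0.02, -0.09])        # distance deviation (m)

def corrected_distance(raw, width_ns):
    base = np.interp(raw, raw_counts, base_dist_m)             # static lookup
    comp = np.interp(width_ns, pulse_width_ns, deviation_m)    # dynamic compensation
    return base + comp

print(f"corrected distance: {corrected_distance(340, 37.5):.3f} m")
```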

22 pages, 2100 KB  
Article
A Novel Execution Time Prediction Scheme for Efficient Physical AI Resource Management
by Jin-Woo Kwon and Won-Tae Kim
Electronics 2025, 14(24), 4903; https://doi.org/10.3390/electronics14244903 - 13 Dec 2025
Viewed by 375
Abstract
Physical AI enables reliable and timely operations of autonomous systems such as robots and smart manufacturing equipment under diverse and dynamic execution environments. In these environments, computing resources are often limited, shared among tasks, and fluctuate over time. This makes it difficult to guarantee that tasks meet timing constraints. As a result, resource-aware execution time prediction becomes essential for efficient resource management in physical AI systems. However, existing methods typically assume specific environments or static resource usage and often fail to generalize to new environments. In this paper, we propose CARE-D (Calibration-Assisted Resource-aware Execution time prediction), which trains a deep neural network to model the nonlinear relationships among hardware characteristics, resource levels, and task features across environments. The model predicts the execution time of tasks under diverse hardware and dynamically allocated computing resources, using a few execution records from new environments. CARE-D applies few-history-based calibration using only 1 to k execution records from target environments to adjust predictions without retraining the model. Experiments show that CARE-D improves prediction accuracy by about 7.3% over zero-history predictors within a 10% relative error and outperforms regression and deep learning baselines, using only one to five records per target environment. Full article
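
The few-history calibration step, rescaling a pretrained predictor with only 1 to k execution records from the target environment and no retraining, can be sketched with an assumed multiplicative bias correction; the base predictor and records below are stand-ins, not CARE-D's trained network.

```python
# Sketch of few-history calibration in the spirit of CARE-D: a pretrained
# predictor's outputs are rescaled by the median ratio of observed to
# predicted execution times over k records from the new environment.
# The base predictor and the records are stand-ins, not the paper's model.
import numpy as np

def base_predict(task_size, cpu_share):
    """Stand-in for the trained DNN: predicted execution time in ms."""
    return 4.0 * np.asarray(task_size) / cpu_share

task_size = np.array([10.0, 25.0, 40.0])        # k = 3 calibration records (assumed)
observed_ms = np.array([55.0, 135.0, 210.0])
cpu_share = 0.8

scale = np.median(observed_ms / base_predict(task_size, cpu_share))   # no retraining

new_tasks = np.array([15.0, 60.0])
print("calibrated predictions (ms):", (scale * base_predict(new_tasks, cpu_share)).round(1))
```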

22 pages, 57539 KB  
Article
From Measured In Situ Stress to Dynamic Simulation: A Calibrated 3DEC Model of a Rock Quarry
by Vivien De Lucia, Domenico Gullì, Daria Marchetti and Riccardo Salvini
Appl. Sci. 2025, 15(24), 13100; https://doi.org/10.3390/app152413100 - 12 Dec 2025
Viewed by 286
Abstract
Accurately reproducing the mechanical and dynamic behavior of fractured rock masses remains a key challenge in rock engineering, especially in marble quarry environments where discontinuity networks, excavation geometry, and topographic effects induce highly non-linear stress distributions. This study presents a multidisciplinary and physically calibrated numerical approach integrating field stress measurements, structural characterization, and dynamic modeling using the Distinct Element Method (DEM). The analysis focuses on a marble quarry located in the Apuan Alps (Italy), a tectonically complex metamorphic massif characterized by intense deformation and pervasive jointing that strongly influence rock mass behavior under both static and seismic loading. The initial stress field was calibrated using in situ measurements obtained by the CSIRO Hollow Inclusion technique, enabling reconstruction of the three-dimensional principal stress regime and its direct incorporation into a 3DEC numerical model. The calibrated model was then employed to simulate the dynamic response of the rock mass under seismic loading consistent with the Italian Building Code (NTC 2018). This coupled static–dynamic workflow provides a realistic evaluation of ground motion amplification, stress concentration, and potential failure mechanisms along pre-existing discontinuities. Results demonstrate that physically validated stress initialization yields a significantly more realistic response than models based on simplified lithostatic or empirical assumptions. The approach highlights the value of integrating geological, geotechnical, and seismological data into a unified modeling framework for a sustainable quarry stability analysis in fractured rock masses. Full article
(This article belongs to the Special Issue Advances and Techniques in Rock Fracture Mechanics)
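
Reconstructing the three-dimensional principal stress regime from a measured stress tensor reduces to an eigen-decomposition; the sketch below uses an assumed symmetric tensor in MPa, not the quarry's CSIRO Hollow Inclusion measurements.

```python
# Recovering principal stress magnitudes and directions from a measured 3D
# stress tensor (e.g., a CSIRO Hollow Inclusion overcoring result). The
# tensor components below are illustrative, not the quarry measurements.
import numpy as np

sigma = np.array([[6.5, 1.2, 0.4],        # symmetric stress tensor in MPa
                  [1.2, 4.8, 0.9],        # (compression positive, assumed values)
                  [0.4, 0.9, 3.1]])

magnitudes, directions = np.linalg.eigh(sigma)   # eigenvalues in ascending order
order = np.argsort(magnitudes)[::-1]             # sigma1 >= sigma2 >= sigma3
for i, k in enumerate(order, start=1):
    print(f"sigma{i} = {magnitudes[k]:.2f} MPa, direction = {directions[:, k].round(3)}")
```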

24 pages, 2233 KB  
Article
Development of a Digital Twin of a DC Motor Using NARX Artificial Neural Networks
by Victor Busher, Valeriy Kuznetsov, Zbigniew Ciekanowski, Artur Rojek, Tomasz Grudniewski, Natalya Druzhinina, Vitalii Kuznetsov, Mykola Tryputen, Petro Hubskyi and Alibek Batyrbek
Energies 2025, 18(24), 6502; https://doi.org/10.3390/en18246502 - 11 Dec 2025
Viewed by 354
Abstract
This study presents the development process of a digital twin for a complex dynamic object using Artificial Neural Networks. A separately excited DC motor is considered as an example, which, despite its well-known electromechanical properties, remains a non-trivial object for neural network modeling. It is shown that describing the motor using a generalized neural network with various configurations does not yield satisfactory results. The optimal solution was based on a separation into two distinct nonlinear autoregressive with exogenous inputs (NARX) artificial neural networks with cross-connections for the two main machine variables: one for modeling the armature current with exogenous inputs of voltage and armature speed, and another for modeling the angular speed with inputs of voltage and armature current. Both neural networks are characterized by a relatively small number of neurons in the hidden layer and a time delay of no more than 3 time steps. This solution, consistent with the physical understanding of the motor as an object where electromagnetic energy is converted into thermal and mechanical energy (and vice versa), allows the model to be calibrated for the ideal no-load mode and subsequently account for the influence of torque loads of various natures and changes in the control object parameters over a wide range. The study demonstrates that even for modeling an object such as a DC electric drive with cascaded control, reducing errors at the boundaries of the known operating range requires generating test signals covering approximately 120% of the nominal speed range and 250–400% of the nominal current. Analysis of various test signals revealed that training with a sequence of step changes and linear variations across the entire operating range of armature current and speed provides higher accuracy compared to training with random or uniform signals. Furthermore, to ensure the neural network model’s functionality under varying load torque, a mechanical load observer was developed, and a model architecture incorporating an additional input for disturbance was proposed. The SEDCM_NARX_LOAD neural network model demonstrates a theoretically justified response to load application, although dynamic and static errors arise. In the experiment, the current error was 7.4%, and the speed error was 0.5%. The practical significance of the research lies in the potential use of the proposed model for simulating dynamic and static operational modes of electromechanical systems, tuning controllers, and testing control strategies without employing a physical motor. Full article
(This article belongs to the Section F5: Artificial Intelligence and Smart Energy)
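
The cross-connected NARX pair, one network predicting armature current from lagged voltage and speed and another predicting speed from lagged voltage and current, can be sketched with small feed-forward regressors on lagged features. The simulated motor response, three-step delay, and network sizes below are assumptions standing in for the paper's trained models.

```python
# Sketch of the cross-connected NARX idea: one regressor predicts armature
# current from lagged voltage, speed, and its own history; a second predicts
# speed from lagged voltage, current, and its own history. A crude simulated
# motor and small MLPs stand in for the paper's data and trained networks.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
N, LAGS = 3000, 3
u = np.repeat(rng.uniform(0, 220, N // 100), 100)   # stepwise armature voltage profile
i_a = np.zeros(N)                                   # armature current
w = np.zeros(N)                                     # angular speed
for k in range(1, N):                               # toy first-order motor model
    i_a[k] = i_a[k - 1] + 0.05 * (u[k - 1] - 0.5 * i_a[k - 1] - 0.9 * w[k - 1])
    w[k] = w[k - 1] + 0.02 * (0.9 * i_a[k - 1] - 0.1 * w[k - 1])

def lagged(*series):
    """Stack the last LAGS samples of each series as regressor inputs."""
    return np.column_stack([s[LAGS - d - 1:len(s) - d - 1]
                            for s in series for d in range(LAGS)])

X_i, y_i = lagged(u, w, i_a), i_a[LAGS:]   # current model: voltage, speed, own history
X_w, y_w = lagged(u, i_a, w), w[LAGS:]     # speed model: voltage, current, own history

net_i = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_i, y_i)
net_w = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_w, y_w)
print("current fit R^2:", round(net_i.score(X_i, y_i), 3),
      " speed fit R^2:", round(net_w.score(X_w, y_w), 3))
```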

27 pages, 4018 KB  
Article
Developing a Simulation-Based Traffic Model for King Abdulaziz University Hospital, Saudi Arabia
by Mohaimin Azmain, Alok Tiwari, Jamal Abdulmohsen Eid Abdulaal and Abdulrhman M. Gbban
Sustainability 2025, 17(24), 10985; https://doi.org/10.3390/su172410985 - 8 Dec 2025
Viewed by 600
Abstract
Transportation management within university campuses presents distinct challenges due to highly fluctuating traffic patterns. King Abdulaziz University (KAU), which attracts over 350,000 trips daily, is experiencing substantial congestion-related issues. This study focuses specifically on King Abdulaziz University Hospital (KAUH), a major trip generator on campus characterized by significant temporal variations in travel demand. The objective of this research is to develop a validated and operational traffic demand model using PTV VISUM 2025. A four-step framework was implemented, where campus gates were defined as trip production sources and 13 parking areas were designated as trip attractions. The morning peak-hour, identified as 7:15 AM to 8:15 AM, was selected for analysis due to the highest observed inflow of vehicles. Traffic surveys were conducted at seven bidirectional stations along key links to support Origin–Destination (O–D) matrix estimation and calibration. Both static and dynamic traffic assignment methods were applied to assess model performance. Model validity was evaluated using the R2 statistic, percentage deviations, and the GEH measure of fit. The results demonstrate that both the equilibrium static assignment and the dynamic stochastic assignment achieved strong levels of accuracy, with R2 = 0.98 and 86% of links exhibiting GEH values below 5, alongside average GEH scores of 3.2 and 2.7, respectively. This dual-model approach provides a robust analytical foundation for KAU, enabling long-term strategic planning through static assignment outputs and supporting short-term, peak-hour operational management through dynamic assignment results. Full article
