Search Results (812)

Search Parameters:
Keywords = experimental uncertainty analysis

26 pages, 2618 KB  
Article
A Cascaded Batch Bayesian Yield Optimization Method for Analog Circuits via Deep Transfer Learning
by Ziqi Wang, Kaisheng Sun and Xiao Shi
Electronics 2026, 15(3), 516; https://doi.org/10.3390/electronics15030516 (registering DOI) - 25 Jan 2026
Abstract
In nanometer integrated-circuit (IC) manufacturing, advanced technology scaling has intensified the effects of process variations on circuit reliability and performance. Random fluctuations in parameters such as threshold voltage, channel length, and oxide thickness further degrade design margins and increase the likelihood of functional failures. These variations often lead to rare circuit failure events, underscoring the importance of accurate yield estimation and robust design methodologies. Conventional Monte Carlo yield estimation is computationally infeasible as millions of simulations are required to capture failure events with extremely low probability. This paper presents a novel reliability-based circuit design optimization framework that leverages deep transfer learning to improve the efficiency of repeated yield analysis in optimization iterations. Based on pre-trained neural network models from prior design knowledge, we utilize model fine-tuning to accelerate importance sampling (IS) for yield estimation. To improve estimation accuracy, adversarial perturbations are introduced to calibrate uncertainty near the model decision boundary. Moreover, we propose a cascaded batch Bayesian optimization (CBBO) framework that incorporates a smart initialization strategy and a localized penalty mechanism, guiding the search process toward high-yield regions while satisfying nominal performance constraints. Experimental validation on SRAM circuits and amplifiers reveals that CBBO achieves a computational speedup of 2.02×–4.63× over state-of-the-art (SOTA) methods, without compromising accuracy and robustness. Full article
(This article belongs to the Topic Advanced Integrated Circuit Design and Application)
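For orientation, the importance-sampling idea underlying this kind of yield estimation can be sketched in a few lines of NumPy. This is a minimal illustration under assumed toy settings: the `circuit_fails` function, the 2-D Gaussian process-variation model, and the proposal shift are hypothetical stand-ins, not the authors' pre-trained-network or adversarial-calibration method.

```python
import numpy as np

rng = np.random.default_rng(0)

def circuit_fails(x):
    # Hypothetical stand-in for a SPICE run: "failure" when the sum of two
    # normalized process parameters drifts past a threshold.
    return x.sum(axis=1) > 5.0

dim, n = 2, 20_000

# Crude Monte Carlo: sample process variations from the nominal N(0, I).
x_mc = rng.standard_normal((n, dim))
p_fail_mc = circuit_fails(x_mc).mean()

# Importance sampling: draw from a proposal shifted toward the failure
# region and re-weight each sample by the density ratio p(x)/q(x).
shift = np.array([2.0, 2.0])
x_is = rng.standard_normal((n, dim)) + shift
log_w = -0.5 * (x_is**2).sum(axis=1) + 0.5 * ((x_is - shift)**2).sum(axis=1)
p_fail_is = np.mean(np.exp(log_w) * circuit_fails(x_is))

print(f"crude MC failure estimate   : {p_fail_mc:.2e}")
print(f"importance-sampling estimate: {p_fail_is:.2e}")
print(f"estimated yield             : {1.0 - p_fail_is:.6f}")
```

With rare failure events, the shifted proposal places most samples near the failure boundary, which is why importance sampling needs orders of magnitude fewer simulations than crude Monte Carlo for the same estimator variance.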

20 pages, 2924 KB  
Article
Energy–Exergy–Exergoeconomic Evaluation of a Two-Stage Ammonia Refrigeration Cycle Under Industrial Operating Conditions
by Ayşe Bilgen Aksoy and Yunus Çerçi
Appl. Sci. 2026, 16(3), 1163; https://doi.org/10.3390/app16031163 - 23 Jan 2026
Viewed by 21
Abstract
Improving the thermodynamic and economic performance of industrial refrigeration systems is essential for reducing energy consumption and enhancing cold chain sustainability. This study presents an integrated energy, exergy, and exergoeconomic assessment of a full-scale two-stage ammonia (R717) vapor compression refrigeration system operating under real industrial conditions in Türkiye. Experimental data from 33 measurement points were used to perform component-level thermodynamic balances under steady-state conditions. The results showed that the evaporative condenser exhibited the highest heat transfer rate (426.7 kW), while the overall First Law efficiency of the system was 63.71%. Exergy analysis revealed that heat exchangers are the dominant sources of irreversibility (>45%), followed by circulation pumps with a notably low Second Law efficiency of 11.56%. The exergoeconomic assessment identified the circulation pumps as the components with the highest loss-to-cost ratio (2.45 W/USD). An uncertainty analysis confirmed that the relative ranking of system components remained robust within the measurement uncertainty bounds. The findings indicate that, although the existing NH3 configuration provides adequate performance, significant improvements can be achieved by prioritizing pump optimization, maintaining higher compressor loading, and implementing advanced variable-speed fan control strategies. Full article
(This article belongs to the Section Applied Thermal Engineering)
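The component-level Second Law bookkeeping behind such an assessment is standard; in generic notation (not necessarily the authors' exact symbols):

```latex
\dot{E}x_{\mathrm{dest},k} = \dot{E}x_{\mathrm{in},k} - \dot{E}x_{\mathrm{out},k},
\qquad
\eta_{\mathrm{II},k} = \frac{\dot{E}x_{\mathrm{product},k}}{\dot{E}x_{\mathrm{fuel},k}},
\qquad
ex = (h - h_0) - T_0\,(s - s_0)
```

where the specific flow exergy ex is evaluated relative to the dead state (T_0, h_0, s_0); component-level exergy destruction and Second Law efficiency are what support the loss-to-cost ranking reported above.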

36 pages, 4575 KB  
Article
A PI-Dual-STGCN Fault Diagnosis Model Based on the SHAP-LLM Joint Explanation Framework
by Zheng Zhao, Shuxia Ye, Liang Qi, Hao Ni, Siyu Fei and Zhe Tong
Sensors 2026, 26(2), 723; https://doi.org/10.3390/s26020723 - 21 Jan 2026
Viewed by 91
Abstract
This paper proposes a PI-Dual-STGCN fault diagnosis model based on a SHAP-LLM joint explanation framework to address issues such as the lack of transparency in the diagnostic process of deep learning models and the weak interpretability of diagnostic results. PI-Dual-STGCN enhances the interpretability of graph data by introducing physical constraints and constructs a dual-graph architecture based on physical topology graphs and signal similarity graphs. The experimental results show that the dual-graph complementary architecture enhances diagnostic accuracy to 99.22%. Second, a general-purpose SHAP-LLM explanation framework is designed: Explainable AI (XAI) technology is used to analyze the decision logic of the diagnostic model and generate visual explanations, establishing a hierarchical knowledge base that includes performance metrics, explanation reliability, and fault experience. Retrieval-Augmented Generation (RAG) technology is innovatively combined to integrate model performance and Shapley Additive Explanations (SHAP) reliability assessment through the main report prompt, while the sub-report prompt enables detailed fault analysis and repair decision generation. Finally, experiments demonstrate that this approach avoids the uncertainty of directly using large models for fault diagnosis: we delegate all fault diagnosis tasks and core explainability tasks to more mature deep learning algorithms and XAI technology and only leverage the powerful textual reasoning capabilities of large models to process pre-quantified, fact-based textual information (e.g., model performance metrics, SHAP explanation results). This method enhances diagnostic transparency through XAI-generated visual and quantitative explanations of model decision logic while reducing the risk of large model hallucinations by restricting large models to reasoning over grounded, fact-based textual content rather than direct fault diagnosis, providing verifiable intelligent decision support for industrial fault diagnosis. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
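The attribution step of SHAP-style explanation can be illustrated model-agnostically. The sketch below is a minimal Monte Carlo approximation of Shapley values over random feature permutations, with a hypothetical linear scoring function standing in for the diagnostic model; it is not the authors' PI-Dual-STGCN pipeline or the SHAP library itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # Hypothetical black-box score (stand-in for the fault-diagnosis model).
    return 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]

def shapley_mc(x, background, n_perm=500):
    """Monte Carlo estimate of Shapley values for one instance x.

    Features not yet in the coalition are filled from a random background
    sample, the usual interventional approximation.
    """
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perm):
        perm = rng.permutation(d)
        z = background[rng.integers(len(background))].copy()
        prev = model(z[None, :])[0]
        for j in perm:
            z[j] = x[j]                  # add feature j to the coalition
            cur = model(z[None, :])[0]
            phi[j] += cur - prev         # marginal contribution of feature j
            prev = cur
    return phi / n_perm

background = rng.standard_normal((100, 3))
x = np.array([1.0, -0.5, 2.0])
print(shapley_mc(x, background))         # per-feature attributions
```

The attributions sum (up to sampling noise) to the difference between the model output at x and its average over the background set, which is the property the SHAP reliability checks in such frameworks rely on.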

21 pages, 1811 KB  
Article
Data-Driven Prediction of Tensile Strength in Heat-Treated Steels Using Random Forests for Sustainable Materials Design
by Yousef Alqurashi
Sustainability 2026, 18(2), 1087; https://doi.org/10.3390/su18021087 - 21 Jan 2026
Viewed by 59
Abstract
Accurate prediction of ultimate tensile strength (UTS) is central to the design and optimization of heat-treated steels but is traditionally achieved through costly and iterative experimental trials. This study presents a transparent, physics-aware machine learning (ML) framework for predicting UTS using an open-access steel database. A curated dataset of 1255 steel samples was constructed by combining 18 chemical composition variables with 7 processing descriptors extracted from free-text heat-treatment records and filtering them using physically justified consistency criteria. To avoid information leakage arising from repeated measurements, model development and evaluation were conducted under a group-aware validation framework based on thermomechanical states. A Random Forest (RF) regression model achieved robust, conservative test-set performance (R2 ≈ 0.90, MAE ≈ 40 MPa), with unbiased residuals and realistic generalization across diverse composition–processing conditions. Performance robustness was further examined using repeated group-aware resampling and strength-stratified error analysis, highlighting increased uncertainty in sparsely populated high-strength regimes. Model interpretability was assessed using SHAP-based feature importance and partial dependence analysis, revealing that UTS is primarily governed by the overall alloying level, carbon content, and processing parameters controlling transformation kinetics, particularly bar diameter and tempering temperature. The results demonstrate that reliable predictions and physically meaningful insights can be obtained from publicly available data using a conservative, reproducible machine-learning workflow. Full article
(This article belongs to the Section Sustainable Materials)
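A group-aware validation of the kind described can be sketched with scikit-learn's GroupKFold, assuming hypothetical feature, target, and group arrays in place of the curated steel dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-in data: composition/processing features, a UTS-like
# target, and one group id per thermomechanical state so repeated
# measurements of the same state never straddle a train/test split.
X = rng.normal(size=(1255, 25))
y = 500 + 40 * X[:, 0] + 25 * X[:, 1] + rng.normal(scale=40, size=1255)
groups = rng.integers(0, 300, size=1255)

model = RandomForestRegressor(n_estimators=300, random_state=0)
cv = GroupKFold(n_splits=5)
scores = cross_val_score(model, X, y, cv=cv, groups=groups,
                         scoring="neg_mean_absolute_error")
print(f"group-aware CV MAE: {-scores.mean():.1f} MPa")
```

Keeping all rows of a thermomechanical state on one side of the split is what prevents the information leakage the abstract warns about.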

22 pages, 8616 KB  
Review
Research Frontiers in Numerical Simulation and Mechanical Modeling of Ceramic Matrix Composites: Bibliometric Analysis and Hotspot Trends from 2000 to 2025
by Shifu Wang, Changxing Zhang, Biao Xia, Meiqian Wang, Zhiyi Tang and Wei Xu
Materials 2026, 19(2), 414; https://doi.org/10.3390/ma19020414 - 21 Jan 2026
Viewed by 88
Abstract
Ceramic matrix composites (CMCs) exhibit excellent high-temperature strength, oxidation resistance, and fracture toughness, making them superior to traditional metals and single-phase ceramics in extreme environments such as aerospace, nuclear energy equipment, and high-temperature protection systems. The mechanical properties of CMCs directly influence the reliability and service life of structures; thus, accurately predicting their mechanical response and service behavior has become a core issue in current research. However, the multi-phase heterogeneity of CMCs leads to highly complex stress distribution and deformation behavior in traditional mechanical property testing, resulting in significant uncertainty in the measurement of key mechanical parameters such as strength and modulus. Additionally, the high manufacturing cost and limited experimental data further constrain material design and performance evaluation based on experimental data. Therefore, the development of effective numerical simulation and mechanical modeling methods is crucial. This paper provides an overview of the research hotspots and future directions in the field of CMCs numerical simulation and mechanical modeling through bibliometric analysis using the CiteSpace software. The analysis reveals that China, the United States, and France are the leading research contributors in this field, with 422, 157, and 71 publications and 6170, 3796, and 2268 citations, respectively. At the institutional level, Nanjing University of Aeronautics and Astronautics (166 publications; 1700 citations), Northwestern Polytechnical University (72; 1282), and the Centre National de la Recherche Scientifique (CNRS) (49; 1657) lead in publication volume and/or citation influence. Current research hotspots focus on finite element modeling, continuum damage mechanics, multiscale modeling, and simulations of high-temperature service behavior. In recent years, emerging research frontiers such as interface debonding mechanism modeling, acoustic emission monitoring and damage correlation, multiphysics coupling simulations, and machine learning-driven predictive modeling reflect the shift in CMCs research, from traditional experimental mechanics and analytical methods to intelligent and predictive modeling. Full article
(This article belongs to the Topic Advanced Composite Materials)

26 pages, 6853 KB  
Article
Machine Learning-Based Diffusion Processes for the Estimation of Stand Volume Yield and Growth Dynamics in Mixed-Age and Mixed-Species Forest Ecosystems
by Petras Rupšys
Symmetry 2026, 18(1), 194; https://doi.org/10.3390/sym18010194 - 20 Jan 2026
Viewed by 64
Abstract
This investigation examines diffusion processes for predicting whole-stand volume, incorporating the variability and uncertainty inherent in regional, operational, and environmental factors. The distribution and spatial organization of trees within a specified forest region, alongside dynamic fluctuations and intricate uncertainties, are modeled by a set of nonsymmetric stochastic differential equations of a sigmoidal nature. The study introduces a three-dimensional system of stochastic differential equations (SDEs) with mixed-effect parameters, designed to quantify the dynamics of the three-dimensional distribution of tree-size components—namely diameter (diameter at breast height), potentially occupied area, and height—with respect to the age of a tree. This research significantly contributes by translating the analysis of tree size variables, specifically height, occupied area, and diameter, into stochastic processes. This transformation facilitates the representation of stand volume changes over time. Crucially, the estimation of model parameters is based exclusively on measurements of tree diameter, occupied area, and height, avoiding the need for direct tree volume assessments. The newly developed model has proven capable of accurately predicting, tracking, and elucidating the dynamics of stand volume yield and growth as trees mature. An empirical dataset composed of mixed-species, uneven-aged permanent experimental plots in Lithuania serves to substantiate the theoretical findings. According to the dataset under examination, the model-based estimates of stand volume per hectare in this region exhibited satisfactory goodness-of-fit statistics. Specifically, the root mean square error (and corresponding relative root mean square error) for the living trees of mixed, pine, spruce, and birch tree species were 68.814 m3 (20.4%), 20.778 m3 (7.8%), 32.776 m3 (37.3%), and 4.825 m3 (26.3%), respectively. The model is executed within Maple, a symbolic algebra system. Full article
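As a generic illustration of the sigmoidal, mixed-effect diffusion processes referred to here (not the paper's exact three-dimensional system), a Gompertz-type SDE for a single tree-size variable can be written as:

```latex
dX_i(t) = \bigl[\alpha_i - \beta \ln X_i(t)\bigr] X_i(t)\, dt + \sigma X_i(t)\, dW_i(t),
\qquad
\alpha_i = \alpha + \varphi_i, \quad \varphi_i \sim \mathcal{N}(0, \sigma_{\varphi}^2)
```

where X_i(t) is one tree-size variable (e.g., diameter at breast height) in plot i, \varphi_i is a plot-level random effect, and W_i(t) is standard Brownian motion; the model described in the abstract couples three such components (diameter, occupied area, height) in a single system of SDEs.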

27 pages, 10557 KB  
Article
Numerical and Experimental Estimation of Heat Source Strengths in Multi-Chip Modules on Printed Circuit Boards
by Cheng-Hung Huang and Hao-Wei Su
Mathematics 2026, 14(2), 327; https://doi.org/10.3390/math14020327 - 18 Jan 2026
Viewed by 102
Abstract
In this study, a three-dimensional Inverse Conjugate Heat Transfer Problem (ICHTP) is numerically and experimentally investigated to estimate the heat-source strength of multiple chips mounted on a printed circuit board (PCB) using the Conjugate Gradient Method (CGM) and infrared thermography. The interfaces between the PCB and the surrounding air domain are assumed to exhibit perfect thermal contact, establishing a fully coupled conjugate heat transfer framework for the inverse analysis. Unlike the conventional Inverse Heat Conduction Problem (IHCP), which typically only accounts for conduction within solid domains, the present ICHTP formulation requires the simultaneous solution of the governing continuity, momentum, and energy equations in the air domain, along with the heat conduction equation in the chips and PCB. This coupling introduces substantial computational complexity due to the nonlinear interaction between convective and conductive heat transfer mechanisms, as well as the sensitivity of the inverse solution to measurement uncertainties. The numerical simulations are conducted first with error-free measurement data and an inlet velocity of uin = 4 m/s; the recovered heat sources exhibit excellent agreement with the true values. The computed average errors for the estimated temperatures ERR1 and estimated heat sources ERR2 are as low as 0.0031% and 1.87%, respectively. The accuracy of the estimated heat sources is then experimentally validated under various prescribed inlet air velocities. During experimental verification at an inlet velocity of 4 m/s, the corresponding ERR1 and ERR2 values are obtained as 0.91% and 3.34%, while at 6 m/s, the values are 0.86% and 2.81%, respectively. Compared with the numerical results, the accuracy of the experimental estimations decreases noticeably. This discrepancy arises because the numerical simulations are free from measurement noise, whereas experimental data inherently include uncertainties due to thermal image resolution, environmental fluctuations, and other uncontrollable factors. These results highlight the inherent challenges associated with inverse problems and underscore the critical importance of obtaining precise and reliable temperature measurements to ensure accurate heat source estimation. Full article
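In generic form (not necessarily the authors' exact formulation), a CGM-based inverse estimation of this type minimizes a least-squares mismatch between computed and measured temperatures and updates the unknown source strengths along conjugate directions:

```latex
J(\mathbf{q}) = \sum_{m=1}^{M} \bigl[ T_m(\mathbf{q}) - Y_m \bigr]^2,
\qquad
\mathbf{q}^{(k+1)} = \mathbf{q}^{(k)} - \beta^{(k)} \mathbf{d}^{(k)},
\qquad
\mathbf{d}^{(k)} = \nabla J\bigl(\mathbf{q}^{(k)}\bigr) + \gamma^{(k)} \mathbf{d}^{(k-1)}
```

where \mathbf{q} collects the unknown chip heat-source strengths, Y_m are the infrared temperature measurements, T_m(\mathbf{q}) are the corresponding temperatures computed from the coupled flow–conduction model, \beta^{(k)} is the search step size, and \gamma^{(k)} is the conjugation coefficient.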

58 pages, 10490 KB  
Article
An Integrated Cyber-Physical Digital Twin Architecture with Quantitative Feedback Theory Robust Control for NIS2-Aligned Industrial Robotics
by Vesela Karlova-Sergieva, Boris Grasiani and Nina Nikolova
Sensors 2026, 26(2), 613; https://doi.org/10.3390/s26020613 - 16 Jan 2026
Viewed by 159
Abstract
This article presents an integrated framework for robust control and cybersecurity of an industrial robot, combining Quantitative Feedback Theory (QFT), digital twin (DT) technology, and a programmable logic controller–based architecture aligned with the requirements of the NIS2 Directive. The study considers a five-axis industrial manipulator modeled as a set of decoupled linear single-input single-output systems subject to parametric uncertainty and external disturbances. For position control of each axis, closed-loop robust systems with QFT-based controllers and prefilters are designed, and the dynamic behavior of the system is evaluated using predefined key performance indicators (KPIs), including tracking errors in joint space and tool space, maximum error, root-mean-square error, and three-dimensional positional deviation. The proposed architecture executes robust control algorithms in the MATLAB/Simulink environment, while a programmable logic controller provides deterministic communication, time synchronization, and secure data exchange. The synchronized digital twin, implemented in the FANUC ROBOGUIDE environment, reproduces the robot’s kinematics and dynamics in real time, enabling realistic hardware-in-the-loop validation with a real programmable logic controller. This work represents one of the first architectures that simultaneously integrates robust control, real programmable logic controller-based execution, a synchronized digital twin, and NIS2-oriented mechanisms for observability and traceability. The conducted simulation and digital twin-based experimental studies under nominal and worst-case dynamic models, as well as scenarios with externally applied single-axis disturbances, demonstrate that the system maintains robustness and tracking accuracy within the prescribed performance criteria. In addition, the study analyzes how the proposed architecture supports the implementation of key NIS2 principles, including command traceability, disturbance resilience, access control, and capabilities for incident analysis and event traceability in robotic manufacturing systems. Full article
(This article belongs to the Section Sensors and Robotics)
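The tracking KPIs named here admit the usual definitions; one plausible form for the maximum and root-mean-square tool-space errors is:

```latex
e_{\max} = \max_{k} \bigl\lVert \mathbf{p}_{\mathrm{ref}}(t_k) - \mathbf{p}(t_k) \bigr\rVert,
\qquad
e_{\mathrm{RMS}} = \sqrt{ \frac{1}{N} \sum_{k=1}^{N} \bigl\lVert \mathbf{p}_{\mathrm{ref}}(t_k) - \mathbf{p}(t_k) \bigr\rVert^2 }
```

with \mathbf{p}(t_k) \in \mathbb{R}^3 the measured tool-point position at sample k; the three-dimensional positional deviation is the norm inside these expressions, and the analogous joint-space errors use joint angles in place of positions.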

21 pages, 2947 KB  
Article
HFSOF: A Hierarchical Feature Selection and Optimization Framework for Ultrasound-Based Diagnosis of Endometrial Lesions
by Yongjun Liu, Zihao Zhang, Tongyu Chai and Haitong Zhao
Biomimetics 2026, 11(1), 74; https://doi.org/10.3390/biomimetics11010074 - 15 Jan 2026
Viewed by 183
Abstract
Endometrial lesions are common in gynecology, exhibiting considerable clinical heterogeneity across different subtypes. Although ultrasound imaging is the preferred diagnostic modality due to its noninvasive, accessible, and cost-effective nature, its diagnostic performance remains highly operator-dependent, leading to subjectivity and inconsistent results. To address these limitations, this study proposes a hierarchical feature selection and optimization framework for endometrial lesions, aiming to enhance the objectivity and robustness of ultrasound-based diagnosis. Firstly, Kernel Principal Component Analysis (KPCA) is employed for nonlinear dimensionality reduction, retaining the top 1000 principal components. Secondly, an ensemble of three filter-based methods—information gain, chi-square test, and symmetrical uncertainty—is integrated to rank and fuse features, followed by thresholding with Maximum Scatter Difference Linear Discriminant Analysis (MSDLDA) for preliminary feature selection. Finally, the Whale Migration Algorithm (WMA) is applied to population-based feature optimization and classifier training under the constraints of a Support Vector Machine (SVM) and a macro-averaged F1 score. Experimental results demonstrate that the proposed closed-loop pipeline of “kernel reduction—filter fusion—threshold pruning—intelligent optimization—robust classification” effectively balances nonlinear structure preservation, feature redundancy control, and model generalization, providing an interpretable, reproducible, and efficient solution for intelligent diagnosis in small- to medium-scale medical imaging datasets. Full article
(This article belongs to the Special Issue Bio-Inspired AI: When Generative AI and Biomimicry Overlap)
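The kernel-reduction, filter-fusion, and SVM stages of such a pipeline can be approximated with scikit-learn. The sketch below uses hypothetical data, far fewer components than the paper's 1000, a simple rank-sum fusion of two filters, and omits the MSDLDA thresholding and Whale Migration Algorithm steps, which have no standard library implementation.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.feature_selection import chi2, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=300)               # hypothetical lesion subtypes
X = rng.normal(size=(300, 2000))               # hypothetical ultrasound features
X[:, :5] += 1.5 * y[:, None]                   # inject a weak class signal

# Stage 1: nonlinear dimensionality reduction with kernel PCA.
Z = KernelPCA(n_components=100, kernel="rbf").fit_transform(X)

# Stage 2: fuse two filter rankings (chi-square needs non-negative inputs,
# hence the min-max scaling; mutual information approximates information gain).
Z_pos = MinMaxScaler().fit_transform(Z)
rank_chi2 = np.argsort(np.argsort(-chi2(Z_pos, y)[0]))
rank_mi = np.argsort(np.argsort(-mutual_info_classif(Z, y, random_state=0)))
selected = np.argsort(rank_chi2 + rank_mi)[:30]   # lower fused rank = better

# Stage 3: SVM on the selected components, scored by macro-averaged F1.
scores = cross_val_score(SVC(kernel="rbf", C=1.0), Z[:, selected], y,
                         cv=5, scoring="f1_macro")
print(f"macro-F1 (5-fold): {scores.mean():.3f}")
```

The rank-sum fusion is one simple way to combine filters of different scales; the paper's framework additionally tunes the selected subset with a population-based optimizer under the macro-F1 objective.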

29 pages, 4522 KB  
Article
Machine Learning-Driven Prediction of Microstructural Evolution and Mechanical Properties in Heat-Treated Steels Using Gradient Boosting
by Saurabh Tiwari, Khushbu Dash, Seongjun Heo, Nokeun Park and Nagireddy Gari Subba Reddy
Crystals 2026, 16(1), 61; https://doi.org/10.3390/cryst16010061 - 15 Jan 2026
Viewed by 198
Abstract
Optimizing heat treatment processes requires an understanding of the complex relationships between compositions, processing parameters, microstructures, and properties. Traditional experimental approaches are costly and time-consuming, whereas machine learning methods suffer from critical data scarcity. In this study, gradient boosting models were developed to predict microstructural phase fractions and mechanical properties using synthetic training data generated from an established metallurgical theory. A 400-sample dataset spanning eight AISI steel grades was created based on Koistinen–Marburger martensite kinetics, the Grossmann hardenability theory, and empirical property correlations from ASM handbooks. Following systematic hyperparameter optimization via 5-fold cross-validation, gradient boosting achieved R2 = 0.955 for hardness (RMSE = 2.38 HRC), R2 = 0.949 for tensile strength (RMSE = 87.6 MPa), and R2 = 0.936 for yield strength, outperforming the Random Forest, Support Vector Regression, and Neural Networks by 7–13%. Feature importance analysis identified the tempering temperature (38.4%), carbon equivalent (15.4%), and carbon content (13.0%) as the dominant factors. Model predictions demonstrated physical consistency with the literature data (mean error of 1.8%) and satisfied the fundamental metallurgical relationships. This methodology provides a scalable and cost-effective approach for heat treatment optimization by reducing experimental requirements based on learning curve analysis while maintaining prediction accuracy within the measurement uncertainty. Full article
(This article belongs to the Special Issue Investigation of Microstructural and Properties of Steels and Alloys)
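A 5-fold hyperparameter search over a gradient boosting regressor of the kind described can be sketched with scikit-learn; the data, grid, and target below are hypothetical stand-ins for the 400-sample synthetic dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in: composition + processing features, hardness-like target.
X = rng.normal(size=(400, 10))
y = 45 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=2.0, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Systematic hyperparameter optimization via 5-fold cross-validation.
param_grid = {
    "n_estimators": [100, 300],
    "learning_rate": [0.05, 0.1],
    "max_depth": [2, 3],
}
search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                      param_grid, cv=5, scoring="r2")
search.fit(X_tr, y_tr)

print("best params:", search.best_params_)
print(f"test R^2   : {search.best_estimator_.score(X_te, y_te):.3f}")
# Impurity-based feature importances (a SHAP analysis would refine this picture).
print("importances:", np.round(search.best_estimator_.feature_importances_, 3))
```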

32 pages, 8491 KB  
Article
Uncertainty Analysis of Seismic Effects on Cultural Relics in Collections: Integrating Deep Learning and Reinforcement Strategies
by Lin He, Zhengyi Xu, Mengting Gong, Weikai Wang, Xiaofei Yang and Jianming Wei
Appl. Sci. 2026, 16(2), 879; https://doi.org/10.3390/app16020879 - 15 Jan 2026
Viewed by 114
Abstract
Due to the unpredictability of seismic events and the complexity of collection environments, significant uncertainty exists regarding their impact on cultural relics. Moreover, existing research on the causal analysis of seismic damage to cultural relics remains insufficient, thereby limiting advancements in risk assessment and protective measures. To address this issue, this paper proposes a seismic damage risk assessment method for cultural relics in collections, integrating deep learning and reinforcement strategies. The proposed method enhances the dataset on seismic impacts on cultural relics by developing an integrated deep learning-based data correction model. Furthermore, it incorporates a graph attention mechanism to precisely quantify the influence of various attribute factors on cultural relic damage. Additionally, by combining reinforcement learning with the Deep Deterministic Policy Gradient (DDPG) strategy, this method refines seismic risk assessments and formulates more targeted preventive protection measures for cultural relics in collections. This study evaluates the proposed method using three public datasets in comparison with the self-constructed Seismic Damage Dataset of Cultural Relics (CR-SDD). Experiments are conducted to assess and analyze the predictive performance of various models. Experimental results demonstrate that the proposed method achieves an accuracy of 81.21% in assessing seismic damage to cultural relics in collections. This research provides a scientific foundation and practical guidance for the protection of cultural relics, offering strong support for preventive conservation efforts in seismic risk mitigation. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

23 pages, 2766 KB  
Article
Design and Experimental Validation of an Adaptive Robust Control Algorithm for a PAM-Driven Biomimetic Leg Joint System
by Feifei Qin, Zexuan Liu, Yuanjie Xian, Binrui Wang, Qiaoye Zhang and Ye-Hwa Chen
Machines 2026, 14(1), 84; https://doi.org/10.3390/machines14010084 - 9 Jan 2026
Viewed by 225
Abstract
Biomimetic quadruped robots, inspired by the musculoskeletal systems of animals, employ pneumatic artificial muscles (PAMs) as compliant actuators to achieve flexible, efficient, and adaptive locomotion. This study focuses on a PAM-driven biomimetic leg joint system. First, its kinematic and dynamic models are established. Next, to address the challenges posed by the strong nonlinearities and complex time-varying uncertainties inherent in PAMs, an adaptive robust control algorithm is proposed by employing the Udwadia controller. The stability of the adaptive robust control algorithm is rigorously verified via the Lyapunov method. Finally, numerical simulations and hardware experiments are conducted on the PAM-driven biomimetic leg joint system under desired trajectories, where the adaptive robust control algorithm is systematically compared with three conventional control algorithms to evaluate its control performance. The experimental results show that the proposed controller achieves a maximum tracking error within 0.05 rad for the hip joint and within 0.1 rad, highlighting its strong potential for practical deployment in real-world environments. Full article

37 pages, 1355 KB  
Review
Risk Assessment of Chemical Mixtures in Foods: A Comprehensive Methodological and Regulatory Review
by Rosana González Combarros, Mariano González-García, Gerardo David Blanco-Díaz, Kharla Segovia Bravo, José Luis Reino Moya and José Ignacio López-Sánchez
Foods 2026, 15(2), 244; https://doi.org/10.3390/foods15020244 - 9 Jan 2026
Viewed by 221
Abstract
Over the last 15 years, mixture risk assessment for food xenobiotics has evolved from conceptual discussions and simple screening tools, such as the Hazard Index (HI), towards operational, component-based and probabilistic frameworks embedded in major food-safety institutions. This review synthesizes methodological and regulatory advances in cumulative risk assessment for dietary “cocktails” of pesticides, contaminants and other xenobiotics, with a specific focus on food-relevant exposure scenarios. At the toxicological level, the field is now anchored in concentration/dose addition as the default model for similarly acting chemicals, supported by extensive experimental evidence that most environmental mixtures behave approximately dose-additively at low effect levels. Building on this paradigm, a portfolio of quantitative metrics has been developed to operationalize component-based mixture assessment: HI as a conservative screening anchor; Relative Potency Factors (RPF) and Toxic Equivalents (TEQ) to express doses within cumulative assessment groups; the Maximum Cumulative Ratio (MCR) to diagnose whether risk is dominated by one or several components; and the combined Margin of Exposure (MOET) as a point-of-departure-based integrator that avoids compounding uncertainty factors. Regulatory frameworks developed by EFSA, the U.S. EPA and FAO/WHO converge on tiered assessment schemes, biologically informed grouping of chemicals and dose addition as the default model for similarly acting substances, while differing in scope, data infrastructure and legal embedding. Implementation in food safety critically depends on robust exposure data streams. Total Diet Studies provide population-level, “as eaten” exposure estimates through harmonized food-list construction, home-style preparation and composite sampling, and are increasingly combined with conventional monitoring. In parallel, human biomonitoring quantifies internal exposure to diet-related xenobiotics such as PFAS, phthalates, bisphenols and mycotoxins, embedding mixture assessment within a dietary-exposome perspective. Across these developments, structured uncertainty analysis and decision-oriented communication have become indispensable. By integrating advances in toxicology, exposure science and regulatory practice, this review outlines a coherent, tiered and uncertainty-aware framework for assessing real-world dietary mixtures of xenobiotics, and identifies priorities for future work, including mechanistically and data-driven grouping strategies, expanded use of physiologically based pharmacokinetic modelling and refined mixture-sensitive indicators to support public-health decision-making. Full article
(This article belongs to the Special Issue Research on Food Chemical Safety)
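The screening metrics named here (HI, MCR, MOE/MOET) have simple closed forms; a minimal numerical illustration with hypothetical exposures and reference values:

```python
import numpy as np

# Hypothetical exposures (mg/kg bw/day) and health-based guidance values for
# three co-occurring substances in one cumulative assessment group.
exposure = np.array([0.002, 0.010, 0.001])
guidance = np.array([0.010, 0.050, 0.100])   # e.g., ADI-like reference doses
pod      = np.array([1.0,   5.0,   10.0])    # points of departure (e.g., BMDL10)

hq = exposure / guidance                 # hazard quotients per substance
hi = hq.sum()                            # Hazard Index (screening tier)
mcr = hi / hq.max()                      # Maximum Cumulative Ratio:
                                         # ~1 => one substance dominates,
                                         # ~n => risk spread across n substances
moe = pod / exposure                     # margins of exposure per substance
moet = 1.0 / np.sum(1.0 / moe)           # combined Margin of Exposure (MOET)

print(f"HI   = {hi:.3f}")
print(f"MCR  = {mcr:.3f}")
print(f"MOET = {moet:.1f}")
```

The reciprocal-sum form of the MOET follows from dose addition and is why it avoids compounding uncertainty factors across components, as noted in the abstract.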

25 pages, 2094 KB  
Review
Strategies for Determining Residual Expansion in Concrete Cores: A Systematic Literature Review
by Maria E. S. Melo, Fernando A. N. Silva, Eudes A. Rocha, António C. Azevedo and João M. P. Q. Delgado
Buildings 2026, 16(2), 282; https://doi.org/10.3390/buildings16020282 - 9 Jan 2026
Viewed by 215
Abstract
This systematic review maps and compares experimental strategies for estimating residual expansion in concrete elements affected by internal expansive reactions (IER), with emphasis on cores extracted from in-service structures. It adopts an operational taxonomy distinguishing achieved expansion (deformation already occurred, inferred through DRI/SDT or back-analysis), potential expansion (upper limit under free conditions), and residual expansion (remaining portion estimated under controlled temperature, T, and relative humidity, RH), in addition to the free vs. restrained condition and the diagnostic vs. prognostic purpose. Seventy-eight papers were included (PRISMA), of which 14 tested cores. The limited number of core-based studies is itself a key outcome of the review, revealing that most residual expansion assessments rely on adaptations of laboratory ASR/DEF protocols rather than on standardized methods specifically developed for concrete cores extracted from in-service structures. ASR predominated, with emphasis on accelerated free tests ASTM/CSA/CPT (often at 38 °C and high RH) for reactivity characterization, and on Laboratoire Central des Ponts et Chaussées (LCPC) No. 44 and No. 67 protocols or Concrete Prism Test (CPT) adaptations to estimate residual expansion in cores. Significant heterogeneity was observed in temperature, humidity, test media, specimen dimensions, and alkali leaching treatment, as well as discrepancies between free and restrained conditions, limiting comparability and lab-to-field transferability. A minimum reporting checklist is proposed (type of IER; element history; restraint condition; T/RH/medium; anti-leaching strategy; schedule; instrumentation; uncertainty; decision criteria; raw data) and priority gaps are highlighted: standardization of core protocols, leaching control, greater use of simulated restraint, and integration of DRI/SDT–expansion curves to anchor risk estimates and guide rehabilitation decisions in real structures. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)

36 pages, 968 KB  
Review
Applications of Artificial Intelligence in Fisheries: From Data to Decisions
by Syed Ariful Haque and Saud M. Al Jufaili
Big Data Cogn. Comput. 2026, 10(1), 19; https://doi.org/10.3390/bdcc10010019 - 5 Jan 2026
Viewed by 1041
Abstract
AI enhances aquatic resource management by automating species detection, optimizing feed, forecasting water quality, protecting species interactions, and strengthening the detection of illegal, unreported, and unregulated fishing activities. However, these advancements are inconsistently employed, subject to domain shifts, limited by the availability of labeled data, and poorly benchmarked across operational contexts. Recent developments in technology and applications in fisheries genetics and monitoring, precision aquaculture, management, and sensing infrastructure are summarized in this paper. We studied automated species recognition, genomic trait inference, environmental DNA metabarcoding, acoustic analysis, and trait-based population modeling in fisheries genetics and monitoring. We used digital-twin frameworks for supervised learning in feed optimization, reinforcement learning for water quality control, vision-based welfare monitoring, and harvest forecasting in aquaculture. We explored automatic identification system trajectory analysis for illicit fishing detection, global effort mapping, electronic bycatch monitoring, protected species tracking, and multi-sensor vessel surveillance in fisheries management. Acoustic echogram automation, convolutional neural network-based fish detection, edge-computing architectures, and marine-domain foundation models are foundational developments in sensing infrastructure. Implementation challenges include performance degradation across habitat and seasonal transitions, insufficient standardized multi-region datasets for rare and protected taxa, inadequate incorporation of model uncertainty into management decisions, and structural inequalities in data access and technology adoption among smallholder producers. Standardized multi-region benchmarks with rare-taxa coverage, calibrated uncertainty quantification in assessment and control systems, domain-robust energy-efficient algorithms, and privacy-preserving data partnerships are our priorities. These integrated priorities enable transition from experimental prototypes to a reliable, collaborative infrastructure for sustainable wild capture and farmed aquatic systems. Full article