Search Results (2,381)

Search Parameters:
Keywords = optimization criterion

26 pages, 1646 KB  
Article
Assigning Spare Parts Management Decision-Making Strategies: A Holistic Portfolio Classification Methodology
by Simon Klarskov Didriksen, Kristoffer Wernblad Sigsgaard, Niels Henrik Mortensen and Christian Brunbjerg Jespersen
Appl. Sci. 2026, 16(4), 1961; https://doi.org/10.3390/app16041961 (registering DOI) - 16 Feb 2026
Abstract
Maintenance organizations face growing volumes of spare parts, requiring robust classification methodologies to support decision-making. Practitioners continue to rely on simple and single-criterion-specialized methodologies, while research advances toward criteria- and threshold-specialized classification optimization for operationally visible spare parts or predefined classes, revealing criteria dependencies and data completeness requirements. The literature review identifies a gap showing that existing classification methodologies lack inclusion of all spare parts with maintainable asset relevance, consequently excluding, under-prioritizing, or misclassifying essential spare parts, leading to incorrect forecasts and inventory policies. Applying design science research, this study develops a holistic spare parts portfolio classification methodology that increases spare parts inclusion and enables class-based decision-making strategy development to address the gap. The methodology classifies spare parts based on their absence and presence across equipment bills of materials, maintenance history, inventory, and inventory policies, enabling identification and inclusion of operationally invisible spare parts. A case study of 32,521 spare parts demonstrates the interventional effects of the methodology. The intervention improved decision-making efficiency by 91%, increased decision throughput ninefold, and transformed a non-transparent decision-making approach with 9% scope completion and 1.7% stock value increase into a transparent strategy-based approach yielding full scope completion and 33.6% scope stock value reduction. Full article
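A minimal sketch of how a presence/absence profile across the four data sources named in the abstract could drive a class assignment. The class names and rules below are illustrative assumptions only; the abstract does not define the paper's actual classes.

```python
# Illustrative sketch only: class names and rules are hypothetical; the point
# is that presence or absence across the four data sources drives the class.

def classify_spare_part(in_bom, in_maintenance_history, in_inventory, has_inventory_policy):
    profile = (in_bom, in_maintenance_history, in_inventory, has_inventory_policy)
    if not any(profile):
        return "out of scope"                 # no link to any maintainable-asset data
    if in_bom and not (in_maintenance_history or in_inventory or has_inventory_policy):
        return "operationally invisible"      # installed, but never consumed or stocked
    if in_inventory and not has_inventory_policy:
        return "stocked without policy"       # candidate for a decision-making strategy
    return "actively managed"
```

For example, classify_spare_part(True, False, False, False) would flag a bill-of-materials-only part as operationally invisible, the kind of part the methodology is designed to surface.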

17 pages, 2733 KB  
Article
Multifidelity Topology Optimization with Runtime Verification and Acceptance Control: Benchmark Study in 2D and 3D
by Nikhil Tatke and Jarosław Kaczmarczyk
Materials 2026, 19(4), 769; https://doi.org/10.3390/ma19040769 (registering DOI) - 16 Feb 2026
Abstract
Topology optimization using density-based approaches often requires high-resolution meshes to achieve reliable compliance evaluation and robustness against mesh dependency. However, increasing the problem sizes—especially in 3D—results in prohibitively expensive computation times. Coarse-mesh approaches significantly accelerate runtimes; however, they also introduce discretization errors that can guide the optimizer towards incorrect topology families if left unregulated. To address this issue, a multifidelity framework with acceptance control was developed that enables runtime verification and explicitly manages the optimizer state. The main idea is to use coarse discretizations to generate new design proposals and transfer candidate designs to fine discretizations at periodic intervals for verification. Proposals are then accepted or rejected using a best-referenced criterion; if verification fails, the optimizer reverts to the best verified state. The proposed framework balances fine-discretization accountability with coarse-discretization efficiency through configurable verification schedules and a cleanup phase. The framework is evaluated on standard 2D and 3D structural benchmark problems with deterministic load perturbations, and performance is assessed in terms of final verified compliance, wall-clock runtime, acceptance rate, and gray fraction. Full article
(This article belongs to the Section Materials Simulation and Design)
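A schematic sketch of the verification-and-acceptance loop described in the abstract: coarse-mesh iterations propose designs, a fine-mesh check runs at a fixed interval, and the optimizer reverts on rejection. Function names, the compliance-based acceptance test, and the verification interval are assumptions for illustration, not the authors' implementation.

```python
def multifidelity_opt(coarse_step, fine_compliance, x0, n_iters=200, verify_every=20):
    x = x0
    best_x, best_c = x0, fine_compliance(x0)    # best verified state so far
    for it in range(1, n_iters + 1):
        x = coarse_step(x)                      # cheap coarse-mesh design update
        if it % verify_every == 0:              # runtime verification point
            c = fine_compliance(x)              # expensive fine-mesh evaluation
            if c < best_c:                      # acceptance control
                best_x, best_c = x, c
            else:
                x = best_x                      # reject: revert to best verified state
    return best_x, best_c
```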

19 pages, 1431 KB  
Article
Robust Trajectory Prediction for Mobile Robots via Minimum Error Entropy Criterion and Adaptive LSTM Networks
by Da Xie, Zengxun Li, Chun Zhang, Chunyang Wang and Xuyang Wei
Entropy 2026, 28(2), 227; https://doi.org/10.3390/e28020227 (registering DOI) - 15 Feb 2026
Abstract
Trajectory prediction is critical for safe robot navigation, yet standard deep learning models predominantly rely on the Mean Squared Error (MSE) criterion. While effective under ideal conditions, MSE-based optimization is inherently fragile to non-Gaussian impulsive noise—such as sensor glitches and occlusions—common in real-world deployment. To address this limitation, this paper proposes MEE-LSTM, a robust forecasting framework that integrates Long Short-Term Memory networks with the Minimum Error Entropy (MEE) criterion. By minimizing Rényi’s quadratic entropy of the prediction error, our loss function introduces an intrinsic “gradient clipping” mechanism that effectively suppresses the influence of outliers. Furthermore, to overcome the convergence challenges of fixed-kernel information theoretic learning, we introduce a Silverman-based Adaptive Annealing (SAA) strategy that dynamically regulates the kernel bandwidth. Extensive evaluations on the ETH and UCY datasets demonstrate that MEE-LSTM maintains competitive accuracy on clean benchmarks while exhibiting superior resilience in degraded sensing environments. Notably, we identify a “Scissor Plot” phenomenon under stress testing: in the presence of 20% impulsive noise, the proposed model maintains a stable Average Displacement Error (ADE ≈ 0.51 m), whereas MSE baselines suffer catastrophic degradation (ADE > 2.1 m), representing a 75.7% improvement in robustness. This work provides a statistically grounded paradigm for reliable causal inference in hostile robotic perception. Full article
(This article belongs to the Special Issue Bayesian Networks and Causal Discovery)
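A minimal NumPy sketch of the two ingredients named above: an MEE loss built from Rényi's quadratic entropy of the prediction error, and Silverman's rule for the kernel bandwidth (which the paper's SAA strategy anneals during training). This is an assumption about the loss's usual information-theoretic-learning form, not the authors' code.

```python
import numpy as np

def silverman_bandwidth(err):
    """Silverman's rule of thumb for a 1D kernel bandwidth."""
    n = err.size
    return 1.06 * err.std() * n ** (-1 / 5)

def mee_loss(pred, target, sigma=None):
    """Minimum Error Entropy loss: Renyi's quadratic entropy of the error,
    estimated with a Gaussian kernel (normalization constant omitted)."""
    err = (pred - target).ravel()
    if sigma is None:
        sigma = silverman_bandwidth(err)
    diff = err[:, None] - err[None, :]               # pairwise error differences
    kernel = np.exp(-diff ** 2 / (2 * sigma ** 2))   # Gaussian kernel matrix
    info_potential = kernel.mean()                   # estimate of V(e)
    return -np.log(info_potential + 1e-12)           # minimize entropy = maximize V(e)
```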
17 pages, 1042 KB  
Article
Simulation of Nonstationary Fluctuating Wind Fields Using POD Decoupling and Spline Interpolation
by Junfeng Zhang, Yuhang Xia, Ningbo Liu, Zheng Liu and Jie Li
Buildings 2026, 16(4), 804; https://doi.org/10.3390/buildings16040804 (registering DOI) - 15 Feb 2026
Abstract
Improving the simulation efficiency of the spectral representation method (SRM) for nonstationary fluctuating wind fields has attracted considerable attention. To this end, this study proposes a method based on proper orthogonal decomposition (POD) decoupling and spline interpolation to enhance computational efficiency. This method selects a limited number of interpolation points in the time-frequency domain of the evolutionary power spectral density (EPSD) for Cholesky decomposition, utilizes the POD technique to achieve time-frequency decoupling of the spectral matrix, and employs spline interpolation, rather than the traditional Hermite interpolation, to reconstruct the complete time-frequency functions, thereby enabling the rapid synthesis of wind-velocity time histories via the FFT. Then, the wind field on a three-span frame lightning-rod structure is taken as an example to validate the reliability of the proposed method. The influences of the modal order and the number of time-frequency interpolation points on both simulation efficiency and error are investigated, and comparisons are made with the Hermite-interpolation-based method. The results indicate that the simulation efficiency is governed primarily by the modal order, and the spline-interpolation-based method shows higher computational efficiency and accuracy because it satisfies the accuracy requirements at a lower modal order. Finally, a rational truncation criterion based on a cumulative energy ratio of at least 99.9% is suggested to determine the optimal modal order, thereby achieving a balance between accuracy and computational efficiency. Full article
(This article belongs to the Special Issue Dynamic Response Analysis of Structures Under Wind and Seismic Loads)
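For background (standard SRM theory rather than content taken from the paper), the synthesis step whose cost the POD decoupling and spline interpolation are meant to reduce requires, in its usual form, a Cholesky factor H(ω, t) of the evolutionary cross-spectral matrix at every time-frequency point:

```latex
\mathbf{S}(\omega,t)=\mathbf{H}(\omega,t)\,\mathbf{H}^{*\mathsf{T}}(\omega,t),\qquad
u_j(t)=2\sum_{m=1}^{j}\sum_{l=1}^{N}\bigl|H_{jm}(\omega_l,t)\bigr|\sqrt{\Delta\omega}\,
\cos\!\bigl(\omega_l t-\theta_{jm}(\omega_l,t)+\varphi_{ml}\bigr)
```

where θ_jm is the phase angle of H_jm and the φ_ml are independent random phases uniform on [0, 2π); the exact normalization depends on the one- or two-sided spectrum convention.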
23 pages, 3871 KB  
Article
Optimization of CCGT Start-Up Ramp Rate to Improve Voltage Quality in a 110/220 kV Power System Node
by Madina Maratovna Umysheva, Yerlan Aliaskarovich Sarsenbayev and Dias Raybekovich Umyshev
Energies 2026, 19(4), 1028; https://doi.org/10.3390/en19041028 - 15 Feb 2026
Abstract
With the active modernization of power facilities and the increasing deployment of maneuverable combined-cycle gas turbines (CCGTs), the selection of rational start-up strategies becomes increasingly important from the perspective of power quality. Excessive acceleration of power ramp-up may lead to undesirable voltage deviations, particularly in transmission networks with limited grid stiffness. This study investigates the impact of CCGT start-up ramp rate on voltage dynamics and power quality indicators at a 110/220 kV grid node. A detailed model of the Almaty power hub was developed in MATLAB/Simulink, taking into account the network structure, generating units, transformers, and aggregated loads. Three start-up scenarios were analyzed: an existing combined heat and power plant, a 504 MW combined-cycle gas turbine unit, and a 560 MW combined-cycle gas turbine unit with fuel afterburning. Voltage dynamics were evaluated using RMS-based indicators and a stabilization criterion incorporating a 5 s sliding time window and an 80% admissibility threshold. The simulation results reveal a nonlinear relationship between the start-up ramp rate and voltage quality. Increasing the ramp rate reduces the voltage stabilization time; however, beyond approximately 0.05 MW/s, further acceleration does not lead to additional improvement in power quality. The results indicate the existence of an optimal range of start-up ramp rates that provides a compromise between start-up speed and voltage quality requirements. The proposed approach can be used in the development of start-up algorithms for modern combined-cycle power plants connected to 110/220 kV transmission networks. Full article
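A hedged sketch of one plausible reading of the stabilization criterion described above: within a 5 s sliding window, at least 80% of RMS voltage samples must lie inside the admissible band. The tolerance band and the exact form of the criterion are assumptions, not taken from the paper.

```python
import numpy as np

def stabilization_time(t, v_rms, v_nom, tol=0.05, window=5.0, share=0.8):
    """Return the first time at which the trailing `window`-second interval has
    at least `share` of RMS voltage samples within +/- tol of nominal."""
    ok = np.abs(v_rms - v_nom) <= tol * v_nom        # sample-wise admissibility
    dt = t[1] - t[0]                                 # assumes uniform sampling
    n = max(1, int(round(window / dt)))              # samples per sliding window
    frac = np.convolve(ok.astype(float), np.ones(n) / n, mode="valid")
    hits = np.nonzero(frac >= share)[0]
    return t[hits[0] + n - 1] if hits.size else None
```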

21 pages, 1787 KB  
Article
Quantitative Radiographic Morphology of Posterior Calcaneal Spurs Independently Predicts Patient-Centered Outcomes After Extracorporeal Shockwave Therapy for Insertional Achilles Tendinopathy: An MCID and PASS Analysis
by Bilal Aykaç, Mustafa Dinç, Hünkar Çağdaş Bayrak and Recep Karasu
J. Clin. Med. 2026, 15(4), 1538; https://doi.org/10.3390/jcm15041538 - 15 Feb 2026
Abstract
Background/Objectives: Insertional Achilles tendinopathy (IAT) is frequently associated with posterior calcaneal spurs; however, the prognostic significance of spur morphology for patient-centered treatment outcomes remains unquantified. This study aimed to establish treatment-specific minimal clinically important difference (MCID) and patient acceptable symptom state (PASS) thresholds after extracorporeal shockwave therapy (ESWT) and to determine whether quantitative spur morphology independently predicts achievement of these patient-centered endpoints. Methods: In this retrospective cohort study, 201 patients with IAT and radiographically confirmed posterior calcaneal spurs received standardized ESWT (three weekly sessions, 0.20 mJ/mm2, 8 Hz). Spur length and angle were measured on calibrated weight-bearing lateral radiographs. MCID and PASS thresholds for VISA-A, AOFAS, and VAS scores were determined using anchor-based receiver operating characteristic (ROC) analyses. Optimal spur morphology thresholds were derived from ROC curves using PASS achievement as the outcome criterion and the Youden index for cut-off selection. Multivariable logistic regression analyses, adjusted for age, sex, and body mass index, were performed to assess the independent prognostic value of spur morphology. Results: MCID thresholds were: ΔVISA-A ≥ 16.5 (AUC = 0.886), ΔAOFAS ≥ 11.5 (AUC = 0.830), and ΔVAS ≥ 2.5 (AUC = 0.897). PASS thresholds were: VISA-A ≥ 70.5 (AUC = 0.712), AOFAS ≥ 72.5 (AUC = 0.842), and VAS ≤ 3.5 (AUC = 0.753). While significant mean improvements occurred (all p < 0.001), only 36.8–43.3% of patients achieved MCID and 38.3–53.2% achieved PASS. ROC analysis identified spur length > 8.7 mm (AUC = 0.713) and spur angle > 16° (AUC = 0.738) as optimal thresholds predictive of PASS failure. In multivariable analysis, increased spur length (adjusted OR = 0.23–0.24, p < 0.001) and angle (adjusted OR = 0.16–0.23, p < 0.001) independently reduced the likelihood of achieving both MCID and PASS. Conclusions: This study provides the first anchor-based MCID and PASS thresholds for ESWT in IAT and demonstrates that posterior calcaneal spur morphology—specifically length > 8.7 mm and angle > 16°—independently predicts patient-defined treatment success. These findings support the integration of quantitative spur assessment into clinical decision-making for personalized management of IAT. Full article
(This article belongs to the Section Orthopedics)
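A small sketch of the anchor-based cut-off derivation described above, using scikit-learn's ROC utilities. Treating PASS failure as the positive class and spur length as the predictor is an illustrative assumption, not the authors' analysis code.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_cutoff(pass_failure, spur_length_mm):
    """ROC-based threshold for a continuous predictor of a binary anchor,
    chosen by maximizing Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(pass_failure, spur_length_mm)
    j = tpr - fpr                            # Youden's J at each candidate threshold
    best = int(np.argmax(j))
    return thresholds[best], roc_auc_score(pass_failure, spur_length_mm)
```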

14 pages, 3426 KB  
Article
Limit to Self-Field Critical Current Density in Thin-Film, Type-II Superconductors
by Amit Goyal, Rohit Kumar, Armando Galluzzi and Massimiliano Polichetti
Materials 2026, 19(4), 745; https://doi.org/10.3390/ma19040745 (registering DOI) - 14 Feb 2026
Viewed by 85
Abstract
In the last decade, the self-field critical current density Jc(s.f.) in Type-II superconductors has been considered fundamentally limited by a Silsbee-like criterion of Jc(s.f.) = Hc1/λ. We show that this limit to the self-field critical current density Jc(s.f.) is not universally valid. We present several examples of this in YBa2Cu3O7−δ-type and REBa2Cu3O7−δ thin films, and one for Nb thin films, showing that the Jc(s.f.) calculated from the Silsbee-like criterion with thermodynamic parameters has been substantially exceeded experimentally. We also show that Jc(s.f.) can be significantly improved by incorporating artificial pinning centers (APCs), which further implies that no such universal limit to Jc(s.f.) can exist, because such an upper bound would have to be independent of APCs. These findings call for a revision of the accepted understanding of current-carrying limits in Type-II superconductors and reveal substantial potential for improving Jc in REBCO-based coated conductors through optimization of APCs for large-scale applications, including commercial nuclear fusion. Full article
(This article belongs to the Section Materials Physics)

20 pages, 22518 KB  
Article
Experimental Study on the True-Triaxial Mechanical Properties and Fracture Mechanisms of Granite Subjected to Cyclic Thermal Shock
by Fan Zhang, Shaohui Quan, Shengyuan Liu, Man Li and Qian Zhou
Appl. Sci. 2026, 16(4), 1892; https://doi.org/10.3390/app16041892 (registering DOI) - 13 Feb 2026
Viewed by 61
Abstract
During reservoir stimulation and long-term operation of Enhanced Geothermal Systems (EGSs), repeated injection of cold fluids induces cyclic thermal shock in the surrounding rock mass, leading to progressive modification of mechanical properties and fracture behavior. However, the combined effects of cyclic thermal shock and true-triaxial stress conditions on granite strength and failure characteristics remain inadequately quantified. In this study, a series of true-triaxial compression tests were conducted on granite specimens subjected to cyclic thermal shock at 400 °C. Thermal shock cycles of 0, 1, 5, 10, and 15 were considered in conjunction with intermediate principal stress levels of 5, 20, 30, and 50 MPa to systematically evaluate their coupled influence on characteristic stresses and macroscopic failure behavior. The results show that the peak strength increases with increasing intermediate principal stress, whereas it first increases and then decreases with the number of thermal shock cycles. Macroscopic failure is dominated by asymmetric V-shaped fracture surfaces, roughly oriented along the σ2 direction. As the intermediate principal stress increases, the failure mode transitions from mixed tensile–shear failure to shear-dominated failure, whereas thermal cycling promotes the persistence of tensile–shear cracking even under relatively high σ2 conditions. Based on these observations, a modified Mogi–Coulomb strength criterion that accounts for thermal shock-induced damage is proposed to describe granite strength under true-triaxial stress conditions. These results provide a theoretical basis for optimizing hydraulic fracturing design in hot dry rock and for evaluating reservoir stability. Full article
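For reference, the baseline Mogi–Coulomb criterion that the proposed thermal-damage modification builds on (the damage terms themselves are not given in the abstract) relates the octahedral shear stress at failure to the mean stress acting on the σ2 plane:

```latex
\tau_{\mathrm{oct}} = a + b\,\sigma_{m,2}, \qquad
\tau_{\mathrm{oct}} = \tfrac{1}{3}\sqrt{(\sigma_1-\sigma_2)^2+(\sigma_2-\sigma_3)^2+(\sigma_3-\sigma_1)^2}, \qquad
\sigma_{m,2} = \tfrac{\sigma_1+\sigma_3}{2}
```

with a and b fitted material parameters; the modification presumably makes these parameters depend on the thermal shock-induced damage.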

23 pages, 638 KB  
Article
Optimal Allocations Under Strongly Pigou–Dalton Criteria: Hidden Layer Structure and Efficient Combinatorial Approach
by Taikun Zhu, Kai Jin, Ruixi Luo and Song Cao
Mathematics 2026, 14(4), 658; https://doi.org/10.3390/math14040658 - 12 Feb 2026
Viewed by 142
Abstract
We investigate optimal social welfare allocations of m items to n agents with binary additive or submodular valuations. For binary additive valuations, we prove that the set of optimal allocations coincides with the set of so-called stable allocations, as long as the employed criterion for evaluating social welfare is strongly Pigou–Dalton (SPD) and symmetric. Many common criteria are SPD and symmetric, such as Nash social welfare, LexiMax, LexiMin, the Gini Index, Entropy, and Envy Sum. We also design efficient algorithms for finding a stable allocation, including an O(m²n) time algorithm for the case of indivisible items and an O(m²n⁵) time algorithm for the case of divisible items. Compared with existing algorithms, the first is either faster or admits a simpler analysis; the latter is the first combinatorial algorithm for that problem. It utilizes a hidden layer partition of items and agents admitted by all stable allocations and cleverly reduces the case of divisible items to the case of indivisible items. In addition, we show that the profiles of different optimal allocations have a small Chebyshev distance, which is zero for the case of divisible items under binary additive valuations and at most one for the case of indivisible items under binary submodular valuations. Full article
(This article belongs to the Special Issue Game Theory and Operations Research)
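For readers unfamiliar with the term, the Pigou–Dalton property used above can be stated roughly as follows: transferring welfare from a better-off agent to a worse-off agent, without reversing their order, strictly increases the evaluated social welfare. In symbols (a standard textbook formulation, not quoted from the paper):

```latex
u_i > u_j,\;\; 0 < \varepsilon \le \tfrac{u_i - u_j}{2}
\;\Longrightarrow\;
W(\ldots, u_i - \varepsilon, \ldots, u_j + \varepsilon, \ldots) > W(u_1, \ldots, u_n)
```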

12 pages, 272 KB  
Communication
Estimating the Parameter of Direct Effects in Crossover Designs: The Case of 6 Periods and 2 Treatments
by Miltiadis S. Chalikias
Stats 2026, 9(1), 17; https://doi.org/10.3390/stats9010017 (registering DOI) - 12 Feb 2026
Viewed by 77
Abstract
The present study investigates the derivation of optimal repeated measurement designs for two treatments, six periods, and n experimental units, focusing exclusively on the direct effects of the treatments. The optimal designs are determined for cases where n ≡ 0 or 1, 2, 3, 4 (mod 4). The adopted optimality criterion aims at minimizing the variance of the estimator of the direct effects, thereby ensuring maximum precision in parameter estimation and increased design efficiency. The results presented extend and complement earlier studies on optimal two-treatment repeated-measurement designs for a smaller number of periods, and are closely related to more recent work focusing on optimality with respect to direct effects. Overall, this work contributes to the theoretical framework of optimal design methodology by providing new insights into the structure and efficiency of repeated measurement designs, and lays the groundwork for future extensions incorporating treatment–period interactions. Full article
(This article belongs to the Section Statistical Methods)
35 pages, 8103 KB  
Article
Hybrid Quill Shaft for a Multifunctional Portal Machine Tool Centre
by Frantisek Sedlacek, Petr Bernardin, Josef Kozak, Vaclava Lasova, Petr Janda and Jiri Kubicek
Appl. Sci. 2026, 16(4), 1816; https://doi.org/10.3390/app16041816 - 12 Feb 2026
Viewed by 123
Abstract
A hybrid quill shaft for a multifunctional machine tool centre combines a conventional steel body with a wound composite insert that significantly enhances structural stiffness and dynamic properties. This paper presents a methodologically rigorous approach to the design and validation of a hybrid quill shaft, encompassing material optimisation through the NSGA-II evolutionary algorithm, experimental modal analysis, and verification of the influence of an active pre-tensioning anchor system on the compensation of elastic deformations. A finite element model was coupled with an optimisation tool evaluating eight fibre types across 786 iterations. Results unequivocally demonstrated the superiority of M55J fibre with ±88° orientation as the optimal compromise between stiffness (13.2% reduction in deflection), weight (3% reduction), and cost (4.2% cost increase). Composite safety was ensured through the three-dimensional Tsai-Wu strength criterion applied as a constraint. Experimental validation on an assembly with a hydraulic pre-tensioning system demonstrated symmetrical quill shaft behaviour (±0.07 mm/m) and agreement with finite element analysis (9.5% deviation). Numerical modal analysis revealed a pronounced decrease in natural frequencies with increasing overhang (from 308 Hz to 58 Hz). The resulting design incorporating M55J fibres, 2345 mm length, and epoxy resin in a 60:40 fibre-to-matrix ratio represents a practically implementable solution for enhanced precision and productivity in modern machine tool centres. Full article
(This article belongs to the Section Mechanical Engineering)
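For context, the Tsai–Wu strength criterion applied as a constraint above takes, in its general quadratic tensor form, the failure condition

```latex
F_i\,\sigma_i + F_{ij}\,\sigma_i\,\sigma_j \le 1, \qquad i, j = 1, \ldots, 6
```

where the strength tensors F_i and F_ij are built from the lamina's tensile, compressive, and shear strengths; a design is considered safe while the left-hand side stays below one.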

24 pages, 8216 KB  
Article
Mechanical Properties, Acoustic Emission Characteristics, and Damage Evolution of Cemented Tailings Backfill Under Temperature Effects
by Haoliang Han, Chao Zhang, Jinping Guo and Xiaolin Wang
Minerals 2026, 16(2), 193; https://doi.org/10.3390/min16020193 - 12 Feb 2026
Viewed by 95
Abstract
In the context of deep mining and green low-carbon transition, this study characterizes the thermo-mechanical evolution and fracture mechanisms of cemented tailings backfill (CTB) through systematic experiments conducted at 20–60 °C across 3–28 days. Results demonstrate that strength and elastic modulus follow a unimodal dependence on temperature, peaking at 40 °C. Gaussian modeling reveals that longer curing times narrow the thermal tolerance window, with the elastic modulus exhibiting higher sensitivity to overheating. A consistent “pre-peak activity window” is identified in AE responses, characterized by b-value drops and an increase in tensile event proportions from 66% to 83%. A composite AE damage index (ADI) is introduced that systematically precedes macroscopic failure, with thresholds of ADI ≥ 0.60 and 0.70 indicating accelerated crack propagation and imminent instability, respectively. Microstructural analysis confirms that 40 °C promotes C-S-H and fine ettringite bridging, whereas temperatures ≥ 50 °C induce Ca(OH)2 coarsening and enhanced pore connectivity, triggering early tensile-dominated degradation. This study establishes a “temperature → hydration/porosity → AE response → mechanical evolution” pathway, providing an optimal curing window of 40 ± 5 °C and an ADI-based early-warning criterion for temperature-adaptive CTB design and on-site safety management. Full article
(This article belongs to the Special Issue Advances in Mine Backfilling Technology and Materials, 2nd Edition)
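A small sketch of the standard maximum-likelihood b-value estimator (Aki, 1965) commonly used in acoustic emission analyses like the one above. Whether the authors use this exact estimator, or the common AE convention of mapping amplitudes in dB to magnitudes via M = A_dB / 20, is an assumption.

```python
import numpy as np

def ae_b_value(amplitudes_db, completeness_db):
    """Maximum-likelihood b-value for AE events above a completeness threshold,
    using the common AE magnitude scale M = A_dB / 20."""
    m = amplitudes_db[amplitudes_db >= completeness_db] / 20.0
    m_c = completeness_db / 20.0
    return np.log10(np.e) / (m.mean() - m_c)   # Aki's estimator: b = log10(e) / (mean(M) - Mc)
```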

25 pages, 9597 KB  
Article
Dynamic Response-Based Safety Monitoring and Damage Identification of Concrete Arch Dams via PSO–LSTM
by Jianchun Qiu, Wenqin He, Changlin Long, Yang Zhang, Xinyang Liu, Pengcheng Xu, Linsong Sun, Changsheng Zhang, Lin Cheng and Weigang Lu
Sensors 2026, 26(4), 1136; https://doi.org/10.3390/s26041136 - 10 Feb 2026
Viewed by 202
Abstract
The measured dynamic response of concrete arch dams under seismic excitation is a typical time series that contains rich information about structural conditions. Safety monitoring based on dynamic responses of arch dam structures is highly important for the timely detection of structural damage and ensuring dam safety. In this study, a PSO-LSTM-based model for safety monitoring and damage identification of arch dam structures was proposed. The method was centered on the long short-term memory (LSTM) neural network, and key hyperparameters were adaptively tuned by the particle swarm optimization (PSO) algorithm to improve monitoring accuracy for nonlinear and nonstationary structural dynamic responses. Structural damage was identified through residual analysis combined with the 3σ anomaly detection criterion. Numerical simulations and shaking table model test cases of an arch dam were introduced for validation. The proposed method was compared with the standalone LSTM model and the SSA-LSTM model in terms of the root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), and damage identification accuracy. The results showed that the proposed PSO-LSTM method achieved greater accuracy in monitoring the safety of arch dam dynamic responses and effectively identified structural damage, thereby verifying its effectiveness. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
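A minimal sketch (an assumption about the form of the criterion, not the authors' code) of the residual-based 3σ damage test described above: prediction residuals on new measurements are compared against a band learned from residuals on healthy training data.

```python
import numpy as np

def damage_flags(measured, predicted, healthy_residuals, k=3.0):
    """Flag time steps whose prediction residual leaves the k-sigma band
    estimated from residuals of the undamaged (training) condition."""
    mu, sigma = healthy_residuals.mean(), healthy_residuals.std()
    residuals = measured - predicted
    return np.abs(residuals - mu) > k * sigma   # True where the 3-sigma band is exceeded
```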

26 pages, 44946 KB  
Article
Influence of Adhesive Bonding on the Surface Accuracy of Flat Optics: A Mechanistic Analysis and a Quantitative Approximation
by Jian Xiong, Taiyu Su, Xiao Chen, Zhijing Zhang, Wenhan Zeng, Shan Lou, Yuchu Qin, Wenbin Zhong, Paul James Scott and Xiangqian (Jane) Jiang
Photonics 2026, 13(2), 166; https://doi.org/10.3390/photonics13020166 - 9 Feb 2026
Viewed by 113
Abstract
Surface accuracy is a crucial evaluation criterion for the life-cycle performance of optical components. Throughout the assembly process, the optical surface undergoes deformation due to applied assembly stresses, causing the actual surface profile to deviate from the intended design. The key to the quantitative optimization of the adhesive bonding assembly process is to elucidate the quantitative coupling mechanism between assembly stress and optical surface deformation and to establish a corresponding quantitative relationship. To address this, a comprehensive study of the optical surface deformation of multi-point adhesive-bonded flat optical components is presented. Firstly, the coupled influence mechanisms governing optical surface deformation are analyzed, considering both the component properties and the assembly parameters; this mechanistic analysis includes dimensionless modeling, boundary condition analysis, and reliable design requirement analysis. Secondly, based on this understanding of the mechanisms, a quantitative approximation method is developed to predict the deformation patterns within the critical central region (50% aperture) of optical components subjected to multi-point bonding. Finally, the quantitative approximation for the assembly-induced surface deformation is experimentally validated. Through this research, the surface deviation during the multi-point adhesive bonding assembly of flat optical components can be effectively approximated, which is of significant importance for further imaging-quality prediction and assembly-parameter optimization during the assembly process, and facilitates a further performance improvement for optical instruments. Full article
Show Figures

Figure 1

37 pages, 2122 KB  
Article
US-ATHC: Unsupervised Multi-Class Glioma Segmentation via Adaptive Thresholding and Clustering
by Jihan Alameddine, Céline Thomarat, Xavier Le-Guillou, Rémy Guillevin, Christine Fernandez-Maloigne and Carole Guillevin
Biomedicines 2026, 14(2), 397; https://doi.org/10.3390/biomedicines14020397 - 9 Feb 2026
Viewed by 173
Abstract
Background/Objectives: Accurate segmentation of gliomas in 3D volumetric MRI is critical for diagnosis, treatment planning, and surgical navigation. However, the scarcity of expert annotations limits the applicability of supervised learning approaches, motivating the development of unsupervised methods. This study presents US-ATHC (Unsupervised Segmentation using Adaptive Thresholding and Hierarchical Clustering), a fully unsupervised two-step pipeline for both global tumor detection and multi-class subregion segmentation. Methods: In the first step, a global tumor mask is extracted by combining adaptive thresholding (Sauvola) with morphological processing on individual MRI slices. The resulting candidates are fused across axial, coronal, and sagittal views using a strict 3D consistency criterion. In the second step, the global mask is refined into a three-class segmentation (active tumor, edema, and necrosis) using optimized affinity propagation clustering. Results: The method was evaluated on the BraTS 2021 dataset, demonstrating accurate tumor and subregion segmentation that outperformed both classical clustering techniques and state-of-the-art deep learning models. External validation on the Gliobiopsy dataset from the University Hospital of Poitiers confirmed robustness and practical applicability in real-world clinical settings. Conclusions: US-ATHC establishes an unsupervised paradigm for glioma segmentation that balances accuracy with computational efficiency. Its annotation-independent nature makes it suitable for scenarios with scarce labeled data, supporting integration into clinical workflows and large-scale neuroimaging studies. Full article
(This article belongs to the Special Issue Medical Imaging in Brain Tumor: Charting the Future)
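A compact sketch of the two building blocks named above, per-slice Sauvola adaptive thresholding and affinity propagation clustering of masked voxels, using scikit-image and scikit-learn. The window size and the feature construction are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from skimage.filters import threshold_sauvola
from sklearn.cluster import AffinityPropagation

def slice_candidate_mask(mri_slice, window_size=25):
    """Per-slice tumor candidates: pixels brighter than the local Sauvola threshold."""
    thresh = threshold_sauvola(mri_slice, window_size=window_size)
    return mri_slice > thresh

def cluster_subregions(voxel_features):
    """Cluster feature vectors of masked voxels (e.g., multi-sequence intensities)
    into subregions such as active tumor, edema, and necrosis."""
    return AffinityPropagation(random_state=0).fit_predict(voxel_features)
```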