1. Introduction
Structural health monitoring (SHM) has become essential for ensuring the safety, reliability, and serviceability of engineering structures throughout their operational life. Beams—widely used in bridges, buildings, and mechanical systems—are especially critical, as their damage can significantly compromise overall structural performance. Common forms of damage, such as cracking, corrosion, or material degradation, can lead to local stiffness reduction, altered vibration characteristics, and ultimately performance loss or catastrophic failure [1,2,3]. Therefore, accurate and timely identification of beam damage is crucial for preventive maintenance and life-cycle management.
Inverse analysis has emerged as a powerful tool for structural diagnosis, allowing the estimation of internal damage parameters from measurable system responses (as surveyed in the Handbook of Damage Mechanics [4]). Vibration-response-based inverse identification methods have attracted considerable attention in recent decades because they offer non-destructive and globally sensitive means of detecting structural damage. Among these, Frequency Response Function (FRF) analysis is particularly effective, capturing the dynamic relationship between input excitation and structural response in the frequency domain [5]. As highlighted in a recent state-of-the-art review, FRF-based techniques continue to be a vibrant area of research due to their rich information content, with ongoing developments aimed at improving their accuracy and computational efficiency for damage identification [6]. Furthermore, contemporary studies are extending FRF concepts to nonlinear system behaviours for enhanced damage sensitivity [7]. Changes in FRF characteristics—such as resonant frequency shifts, damping variations, or amplitude/phase distortions—can indicate stiffness loss or mass redistribution caused by damage [8,9]. Experimental FRFs provide an empirical basis for damage identification since they reflect actual boundary conditions and uncertainties inherent in physical systems. While FRF-based approaches offer rich dynamic information, alternative methodologies such as strain-mode-based interval damage identification have demonstrated enhanced sensitivity to local stiffness changes [10]. However, these methods often rely on extensive sensor networks and may be less suited to full-field degradation identification where spatial variability is significant.
Interpreting FRF variations becomes challenging when the material properties of a structure are spatially heterogeneous or gradually degrade over time. In practice, such variability is unavoidable due to manufacturing imperfections, environmental exposure, or fatigue-induced microstructural changes [3]. To represent these spatial uncertainties in a physically consistent manner, stochastic field modelling has been introduced into structural analysis. The Karhunen–Loève (KL) expansion offers a mathematically rigorous and computationally efficient framework for representing random fields with reduced dimensionality [11,12]. By expressing a stochastic property (e.g., Young's modulus) as a series of orthogonal eigenfunctions weighted by uncorrelated random variables, the KL expansion can accurately simulate spatial variability and degradation in material stiffness [13,14,15].
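To make the truncated KL representation concrete, the following is a minimal, generic sketch (not the paper's implementation) of a discretised KL expansion for a 1D stiffness field with an exponential covariance kernel; the grid size, correlation length, variance, nominal modulus, and truncation order are all illustrative assumptions.

```python
import numpy as np

# Sketch: discrete Karhunen-Loeve expansion of a 1D random field E(x)
# with exponential covariance, truncated to M terms. All values assumed.
n, M, corr_len, sigma = 200, 10, 0.2, 0.1
x = np.linspace(0.0, 1.0, n)                 # normalised beam axis

# Covariance matrix C(x_i, x_j) = sigma^2 * exp(-|x_i - x_j| / corr_len)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Eigen-decomposition; keep the M largest eigenpairs
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1][:M], vecs[:, ::-1][:, :M]

# KL expansion: E(x) = E0 * (1 + sum_k sqrt(lambda_k) * phi_k(x) * theta_k),
# with theta_k uncorrelated standard Gaussian weights
rng = np.random.default_rng(0)
theta = rng.standard_normal(M)
E0 = 210e9                                   # assumed nominal modulus (Pa)
E_field = E0 * (1.0 + vecs @ (np.sqrt(vals) * theta))
```

Because the exponential kernel's eigenvalues decay quickly, a modest truncation order M captures most of the field's variance, which is what makes the KL parameterisation attractive for inverse identification.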
Integrating the KL-based stochastic representation with experimental FRF measurements provides a powerful framework for probabilistic damage identification. Simulated degradation scenarios generated via the KL expansion can be compared with experimental FRF data to infer the most probable damage patterns, enabling robust detection and quantification of material deterioration under uncertainty [16,17]. This combined approach enhances both the interpretability and reliability of vibration-based SHM systems.
Nevertheless, inverse identification of material degradation from FRF measurements constitutes an inherently ill-posed problem. Multiple stiffness distributions can produce similar dynamic responses, and measurement noise can lead to solution instabilities. Bayesian inference [18,19] with Hamiltonian Monte Carlo (HMC) sampling provides a principled framework for uncertainty quantification, but practical implementation reveals a critical tension between mathematical regularisation (required for stability) and enforcement of physical constraints (essential for plausibility). Overly stringent regularisation during iterative sampling can distort convergence, while insufficient constraints yield physically impossible results such as stiffness enhancements in damaged regions. This methodological gap limits the practical application of stochastic identification frameworks.
To address these challenges, this paper presents a novel stochastic framework with three key contributions. First, we combine KL expansion for efficient degradation parameterisation with experimental FRF data within a Bayesian inference scheme, circumventing the “curse of dimensionality” associated with element-wise approaches while providing a probabilistic description of structural material degradation (SMD). Second, we introduce a two-phase constraint strategy that separates mathematical stabilisation during HMC sampling from post-convergence physical bound enforcement. This prevents algorithmic distortion while ensuring final results respect fundamental physical principles. Third, we experimentally validate the framework on a steel cantilever beam with symmetric open-edge cuts, demonstrating successful damage localisation and quantification using laser vibrometry measurements.
The present study therefore focuses on beam damage identification using experimental FRFs in conjunction with KL expansion-based modelling of material degradation. The goal is to investigate how spatial variability in stiffness affects the dynamic response, and how FRF-based indicators can be employed to identify and quantify degradation while addressing the regularisation-plausibility trade-off. Recent advances in structural identification have increasingly adopted hybrid physics-data-driven frameworks to enhance robustness and dimensionality reduction for ill-posed inverse problems under uncertainty. For instance, ref. [20] developed a reduced-order modelling approach combining principal component analysis (PCA) with a physics-data-driven neural network (PDNN) for aerodynamic load inversion under interval field uncertainties, demonstrating effective integration of physical constraints with data-driven mappings in high-dimensional spatial identification tasks. In a similar spirit, the present study employs KL expansion to represent spatially-varying stiffness degradation, but focuses specifically on experimental FRF-based Bayesian identification and introduces a novel two-phase constraint strategy to explicitly decouple mathematical regularisation from post-convergence physical plausibility enforcement—a methodological distinction from prior hybrid frameworks. A cantilever beam case study underscores the method's accuracy, robustness, and potential for deployment in aerospace, civil, and mechanical engineering applications. The outcomes are expected to contribute to a probabilistic, data-informed framework for evaluating structural health in beam-like systems.
A key methodological contribution of this work is the development of a two-phase constraint strategy that explicitly decouples the numerical stabilisation required during HMC sampling from the post-convergence enforcement of physical plausibility. While regularisation is commonly employed in ill-posed inverse problems—e.g., through Tikhonov regularisation, Bayesian priors, or ad hoc filtering—existing single-phase approaches often face a fundamental conflict: overly strict constraints applied during iterative optimisation can distort convergence, trap sampling in local minima, or prevent the exploration of parameter space necessary for accurate damage identification. Recent adaptive regularisation methods, such as the Adaptive Probabilistic Regularisation (APR) approach that applies parameter-wise weighting during iteration [21], have improved convergence efficiency but still enforce physical bounds within the iterative loop. In contrast, insufficient regularisation leads to physically implausible results, such as non-physical stiffness enhancements in damaged regions. The proposed two-phase strategy resolves this tension by applying moderate, mathematically motivated regularisation during inference to stabilise the ill-posed problem, followed by selective, element-wise physical bound enforcement after convergence. This ensures that the final stiffness field respects fundamental physical principles—namely, that damage reduces, rather than increases, structural rigidity—without compromising the algorithm's ability to locate and quantify degradation during the sampling phase. To our knowledge, this explicit separation of mathematical stabilisation and physical plausibility enforcement represents a novel contribution to the Bayesian inverse identification of structural damage, particularly in the context of FRF-based updating under experimental uncertainty.
The remainder of this paper is organised as follows. Section 2 presents the spectral parameterisation of structural systems using KL expansion. Section 3 develops the Bayesian inverse framework with the proposed two-phase constraint strategy. Section 4 details the experimental methodology and apparatus selection. Section 5 presents identification results and discusses the efficacy of the constraint approach. Finally, Section 6 concludes with implications for practical SHM applications.
3. Inverse Problem Regularisation and Sensor Position Selection
Identifying structural damage from FRFs is an ill-posed inverse problem complicated by measurement noise, model uncertainty, and non-unique solutions. This section presents an integrated framework to overcome these challenges. First, a Bayesian inference approach quantifies parameter uncertainty. Second, a physics-based regularisation strategy stabilises the solution by constraining the bending rigidity field. A two-phase implementation separates iterative stabilisation from post-convergence physical bound enforcement to avoid algorithmic distortion. Finally, an optimal sensor placement strategy ensures that limited measurements capture essential dynamic features. Together, these components enable robust, physically plausible damage identification under practical experimental constraints.
3.1. Bayesian Inverse Approach for Damage Identification
To quantify the uncertainty in material parameters given measured FRF data, this work employs a Bayesian inference framework. The core of this approach is Bayes' theorem, which combines prior knowledge with new evidence to form a posterior distribution. For our problem, the theorem is expressed as:

p(θ | D) = p(D | θ) p(θ) / p(D)    (23)

Here, θ is the random vector of unknown material parameters to be identified, p(θ) is the prior distribution representing our initial beliefs, p(D | θ) is the likelihood function quantifying the probability of observing the measured data D given a set of parameters, and p(θ | D) is the posterior distribution we seek, which represents the updated belief about the parameters after considering the evidence.

The prior distribution for the parameters θ is chosen to be a standard multivariate Gaussian for computational tractability, as discussed in Section 2.2, Equation (9):

p(θ) ∝ exp(−‖θ‖² / 2)

The likelihood function connects the model to the physical measurement data. The accuracy of the structural model in simulating material damage is evaluated by comparing the simulated FRF, H_sim(θ), to the experimentally measured FRF, H_exp. The discrepancy is defined by an error function, taken as the Frobenius norm:

ε(θ) = ‖H_sim(θ) − H_exp‖_F

Assuming the measurement errors follow a Gaussian distribution with variance σ², the likelihood function is formulated as:

p(D | θ) ∝ exp(−ε(θ)² / (2σ²))

This choice of a Gaussian likelihood is common but not unique; it is selected here for its suitability in modelling measurement noise within this damage identification context.

Substituting the prior and the likelihood into Bayes' theorem (Equation (23)) yields the posterior distribution:

p(θ | D) ∝ exp(−ε(θ)² / (2σ²) − ‖θ‖² / 2)    (27)

Consequently, the task of identifying material degradation or loss is formulated as finding the parameters that maximise this posterior probability, which is equivalent to minimising the error while being regularised by the prior.
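The evaluation of this posterior (up to a constant) can be sketched as follows; `frf_model` is a hypothetical stand-in for the FE-based FRF solver, and the dimensions, noise level, and toy model form are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of the unnormalised negative log-posterior used for MAP / HMC.
def frf_model(theta, n_freq=50, n_sensors=5):
    # Hypothetical placeholder for the FE solver mapping KL coefficients
    # theta to a simulated FRF matrix (frequencies x sensors).
    base = np.outer(np.linspace(1, 2, n_freq), np.ones(n_sensors))
    return base * (1.0 + 0.1 * np.tanh(theta).sum())

def neg_log_posterior(theta, H_exp, sigma=0.05):
    H_sim = frf_model(theta)
    # Gaussian likelihood with Frobenius-norm misfit ...
    misfit = np.linalg.norm(H_sim - H_exp, "fro") ** 2 / (2.0 * sigma**2)
    # ... plus the standard multivariate Gaussian prior on KL coefficients
    prior = 0.5 * theta @ theta
    return misfit + prior            # + const (dropped)

theta_true = np.zeros(20)
H_exp = frf_model(theta_true)
# At the truth, the misfit vanishes and only the (zero) prior term remains.
assert np.isclose(neg_log_posterior(theta_true, H_exp), 0.0)
```

HMC then samples from exp(−neg_log_posterior), which is why the quadratic prior term acts exactly like a Tikhonov penalty on the KL coefficients.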
3.2. Stabilising the Identification Through Physical Regularisation
Inverse identification using FRF data is inherently ill-posed: multiple stiffness fields may reproduce similar dynamic responses, and small perturbations in the measured data can result in disproportionately large variations in the estimated parameters. To ensure stability and physical plausibility, the Bayesian formulation in Section 3.1 is complemented by a regularisation term applied directly to the bending rigidity field.
From the posterior distribution in Equation (27), we can derive the complete expression for the negative log-posterior:

−ln p(θ | D) = ε(θ)² / (2σ²) + ‖θ‖² / 2 + const.

where the constant term includes the normalisation factors from both the prior and likelihood. The maximum a posteriori (MAP) estimate corresponds to the parameter vector θ that minimises this negative log-posterior. It is important to note that this specific quadratic form arises from our choice of Gaussian likelihood and Gaussian prior; other distributional assumptions would lead to different functional forms for the MAP objective.

To incorporate additional physical constraints, we augment this objective with a regularisation term R_phys(θ) that penalises non-physical solutions:

J(θ) = ‖H_sim(θ) − H_exp‖_F² / (2σ²) + ‖θ‖² / 2 + R_phys(θ)    (29)

where H_exp is the measured FRF, H_sim(θ) is the simulated FRF corresponding to the parameter vector θ, the term ‖θ‖²/2 arises from the Gaussian prior on the KL coefficients, and R_phys(θ) is a physical regularisation term defined below.
The regularisation term is defined as

R_phys(θ) = λ ‖L(EI_sim − EI_act)‖²

which penalises non-physical oscillations in the reconstructed rigidity field, where λ is a weighting factor, EI_sim is the simulated bending rigidity, EI_act is the assumed actual rigidity field, and L can be a differential operator that encodes the expected smoothness based on the beam's mechanics. Importantly, this formulation does not force EI_sim to coincide with any baseline value; instead, it promotes spatial smoothness consistent with the mechanics of bending and the expected form of damage. The physical regularisation term R_phys is designed to penalise deviations of the simulated stiffness field from a physically plausible baseline. The operator L can be linear (e.g., a first-difference operator to promote smoothness) or a scaling function that amplifies discrepancies in regions where stiffness enhancement is suspected. The strength of this regularisation is implicitly balanced with the data misfit term during HMC sampling. The guiding principle was to apply just enough regularisation to prevent solution divergence and oscillatory artefacts, without over-smoothing the genuine damage signature. This was monitored empirically by observing the stability of the HMC chains and the spatial coherence of the intermediate stiffness estimates (details of the HMC scheme are given in Appendix A).
Formally, the addition of the regularisation term R_phys modifies the posterior distribution from Equation (27) to:

p(θ | D) ∝ exp(−ε(θ)² / (2σ²) − ‖θ‖² / 2 − R_phys(θ))

The MAP estimate in Equation (29) corresponds to finding the mode of this regularised posterior.
This physical regularisation approach provides the necessary stabilisation for the ill-posed inverse problem. However, implementation revealed a critical balance: while insufficient regularisation yields mathematically unstable results, overly stringent application of R_phys during the iterative identification process distorts the algorithm's convergence, potentially creating false indications of degradation. This finding motivated the development of a complementary post-convergence strategy, described in Section 3.3, which applies final physical bounds without interfering with the identification iteration.
3.3. Post-Regularisation Based on the Physical Meaning of EI
The physical regularisation in Section 3.2 stabilises the identification process. However, our implementation revealed that applying a single, global bound during iteration can interfere with algorithm performance, as overly stringent constraints may cause HMC sampling to converge to distorted solutions or fail to explore the parameter space properly.
To resolve this, we implement a refined, two-phase regularisation strategy:
1. Phase 1—Iterative Stabilisation: Apply the regularisation term R_phys with moderate weighting during HMC sampling to guide convergence without imposing rigid physical bounds (Section 3.2).
2. Phase 2—Post-Convergence Regularisation: After the algorithm converges, regularisation is applied individually to each identified flexural stiffness (EI) profile. For each sample in the posterior distribution, any EI value exceeding the physically plausible threshold is corrected. The regularised value EI_reg is obtained as:

EI_reg = min(EI_identified, γ · EI_0)    (32)

where EI_0 is the undamaged baseline rigidity and the threshold factor γ > 1 provides allowance for combined experimental and numerical uncertainties. After this element-wise regularisation, the mean and credible intervals (CI) of the posterior distribution are recalculated to produce the final statistical estimates. Throughout this work, we report uncertainty using Bayesian credible intervals rather than frequentist confidence intervals. This Bayesian interpretation aligns naturally with our probabilistic framework, where the posterior distribution represents our updated belief about the parameters after observing the experimental FRF data.
The threshold value γ in Equation (32) was calibrated to accommodate the combined experimental and modelling uncertainties observed in our setup. Specifically, the pristine beam (Beam 1) was used to quantify discrepancies between the baseline numerical model and experimental measurements. The total uncertainty was estimated at 6–12%, originating from: (i) laser vibrometer measurement noise (∼2%), (ii) model simplifications in boundary conditions and damping (∼4–8%), and (iii) material property tolerances (∼2%). The value γ = 1.1 (i.e., a 10% allowance) was selected to conservatively bound this uncertainty range while maintaining physical plausibility. Although this specific value was calibrated for our experimental configuration, the calibration methodology is general: users should characterise their own uncertainty sources using a baseline structure and set γ accordingly. For typical vibration-based SHM applications with moderate modelling fidelity, γ values between 1.05 and 1.15 are often appropriate.
This phased and element-wise regularisation addresses two distinct requirements:
Mathematical Stability: The regularisation term prevents ill-posedness during identification.
Physical Plausibility: Post-convergence, element-level regularisation enforces that no stiffness value can exceed the undamaged baseline (with an uncertainty allowance), after which the statistical summary (mean and CI) is computed.
By separating these functions and applying the physical bounds to each identified EI value before final statistical aggregation, we avoid the algorithmic distortion that occurs when enforcing strict physical bounds during iterative sampling, while ensuring the final reported mean and CIs respect fundamental physical principles.
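The Phase 2 step above reduces to an element-wise cap on each posterior sample followed by recomputation of the summary statistics; the sketch below illustrates this with synthetic posterior draws (the sample values, element count, and normalised baseline are assumptions, while γ = 1.1 reflects the 10% allowance described in the text).

```python
import numpy as np

# Sketch of Phase 2 (post-convergence) regularisation, assuming posterior
# EI samples from HMC with shape (n_samples, n_elements).
def post_regularise(EI_samples, EI_baseline, gamma=1.1):
    # Element-wise cap: no identified stiffness may exceed the undamaged
    # baseline by more than the uncertainty allowance gamma.
    EI_reg = np.minimum(EI_samples, gamma * EI_baseline)
    # Recompute the posterior summary AFTER enforcing the bound.
    mean = EI_reg.mean(axis=0)
    lo, hi = np.percentile(EI_reg, [2.5, 97.5], axis=0)  # 95% credible interval
    return EI_reg, mean, (lo, hi)

rng = np.random.default_rng(1)
EI0 = np.full(100, 1.0)                            # normalised pristine rigidity
samples = rng.normal(0.9, 0.15, size=(500, 100))   # synthetic posterior draws
EI_reg, mean, (lo, hi) = post_regularise(samples, EI0)
```

Because the cap is applied per sample before averaging, the reported mean and credible bounds automatically respect the physical limit, which a cap applied only to the final mean would not guarantee.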
3.4. Optimal Sensor Placement for Informative Measurements
While the Bayesian framework with physical regularisation provides a robust mathematical foundation for damage identification, its practical efficacy depends critically on the quality of the experimental data. In a laboratory or real-world setting, it is infeasible to instrument every degree of freedom of a structure. The choice of sensor locations therefore becomes a paramount concern, as poor placement can fail to capture the essential dynamic features needed to resolve the inverse problem. A strategically placed, limited set of sensors must be capable of informing the model to distinguish between healthy and damaged states effectively.
This subsection addresses the practical challenge of transitioning from a well-posed numerical model to a feasible experimental setup. We establish a strategy for optimal sensor placement to ensure that the limited measurements obtained are highly informative of the structural dynamic characteristics, thereby enabling the regularised Bayesian identification framework to perform successfully.
A cantilever beam subject to a unit pulse excitation at its free end is studied in this section; its length, width, and height are indicated in Figure 2, where the black area denotes a potential SMD area. The cantilever beam is discretised into 100 elements, resulting in a total of 101 nodes. Each node has two degrees of freedom (DoFs): vertical translation and rotation, leading to a total of 202 DoFs.
Given the infeasibility of measuring all 202 DoFs in our discretised cantilever beam model, strategic sensor placement becomes crucial to maximise information content while minimising computational requirements.
Since low-frequency structural response is dominated by fundamental vibration modes, sensors must capture these mode shapes effectively. With the measured frequency range below 200 Hz, the first three modes govern the dynamic behaviour. To capture the essential dynamic characteristics, sensors should be positioned at anti-nodes (points of maximum displacement) of the dominant modes.
Figure 3 illustrates the optimal sensor placement regions, with cyan areas highlighting locations corresponding to mode shape peaks. The recommended configuration, shown in Figure 4, positions sensors at three locations along the beam length, coinciding with these mode-shape peaks, to ensure comprehensive mode characterisation.
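The anti-node reasoning above can be reproduced with the closed-form Euler–Bernoulli cantilever mode shapes; this is a generic sketch under standard beam theory, not the paper's code, and the grid resolution is an arbitrary choice.

```python
import numpy as np

# Analytic Euler-Bernoulli cantilever mode shapes, used to locate
# anti-nodes (displacement maxima) for sensor placement. The beta*L
# values are the standard roots of 1 + cos(bL)cosh(bL) = 0.
beta_l = [1.8751, 4.6941, 7.8548]            # first three modes
xi = np.linspace(0.0, 1.0, 2001)             # normalised coordinate x / L

def mode_shape(bl, xi):
    s = (np.cosh(bl) + np.cos(bl)) / (np.sinh(bl) + np.sin(bl))
    return (np.cosh(bl * xi) - np.cos(bl * xi)
            - s * (np.sinh(bl * xi) - np.sin(bl * xi)))

def interior_anti_nodes(bl):
    # Interior local maxima of |phi|; the free end (xi = 1) is always
    # an anti-node and is handled separately.
    a = np.abs(mode_shape(bl, xi))
    idx = np.where((a[1:-1] > a[:-2]) & (a[1:-1] > a[2:]))[0] + 1
    return xi[idx]

for bl in beta_l:
    print(interior_anti_nodes(bl))
```

Mode 1 has no interior anti-node (its maximum is at the tip), while modes 2 and 3 add one and two interior anti-nodes respectively, which is why a small set of well-chosen points can characterise all three dominant modes.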
4. Experimental Methodology
This section outlines the experimental procedure for validating the Bayesian identification algorithm applied to a cantilever beam. The objectives were to: (1) calibrate a baseline numerical model using a pristine beam, and (2) select the optimal measurement apparatus for subsequent damage identification.
4.1. Experimental Setup and Specimens
Two cantilever beam specimens were tested: Beam 1 (pristine) and Beam 2 (with two symmetrical open-edge cut-outs to simulate a single degraded region). Both beams were manufactured from black mild steel with identical nominal dimensions (length, width, and thickness; see Figure 5 and Figure 7). The damage in Beam 2 consisted of two identical rectangular cut-outs machined symmetrically from each edge, creating a locally reduced cross-section; each cut-out extended through the full plate thickness (i.e., complete through-thickness removal). The damaged region was located at a prescribed distance from the fixed end, forming a central weakened zone whose exact dimensions are given in Figure 7. The intact web between the cut-outs was 6 mm wide, giving a total moment of inertia reduction of approximately 40% within the damaged region relative to the pristine section. These dimensions were selected to produce a measurable but non-catastrophic stiffness reduction representative of early-stage degradation. The specimens and the general test setup are shown in Figure 5 and Figure 6, respectively; a detailed schematic of the damage geometry is provided in Figure 7. The symmetric cut-outs reduce the second moment of area within the damaged zone from the pristine value I_0 to 0.6 I_0, corresponding to a 40% local reduction in bending stiffness (EI). For the entire cantilever beam, with damage confined to a short portion of the span, the overall stiffness reduction is approximately 8%, calculated via series combination of the damaged and pristine segments. This configuration provides a clear yet moderate stiffness loss signature, ideal for evaluating the inverse identification framework's sensitivity to localised degradation.
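The quoted series-combination estimate can be sketched as follows; the damage-length fraction f = 0.13 is an illustrative assumption chosen to reproduce the stated ~8% figure, since the exact segment lengths are given only in Figure 7.

```python
# Series-combination sketch: a 40% local EI reduction over a small
# fraction f of the span produces only a modest global stiffness loss,
# because segment compliances (1/EI), not stiffnesses, add in series.
def global_stiffness_ratio(local_ratio, f):
    # Normalised compliance: pristine fraction (1 - f) at compliance 1,
    # damaged fraction f at compliance 1 / local_ratio.
    compliance = (1.0 - f) + f / local_ratio
    return 1.0 / compliance

ratio = global_stiffness_ratio(local_ratio=0.6, f=0.13)  # f is assumed
reduction = 1.0 - ratio
print(f"global stiffness reduction ~ {reduction:.1%}")
```

This illustrates why a pronounced 40% local cut appears as only a small global change, and hence why spatially resolved identification (rather than a single global stiffness estimate) is needed.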
To collect robust data for model calibration, the beams were excited at the free end by a shaker, and three measurement techniques were evaluated: a 3-axis digital accelerometer (ADXL345), a PSV-500-3D scanning laser vibrometer, and a FASTCAM SA 1.1 high-speed camera (Figure 8).
4.2. Apparatus Comparison and Selection
Free vibration tests were conducted to determine the natural frequencies of both beams and to critically assess the three measurement apparatuses. A summary of their key characteristics and performance in this study is provided in Table 1.
The natural frequencies extracted with each apparatus are listed in Table 2. The accelerometer data show a clear downward shift compared with the non-contact methods, a discrepancy attributed to sensor mass and cable damping. This is visually confirmed in Figure 9, which plots the natural frequencies of the pristine beam from all three tests.
Based on the comparative analysis in Table 1, the laser vibrometer was selected for this study. Its non-contact nature eliminated mass-loading errors, its high sampling rate ensured data fidelity, and its operational practicality allowed for the extended monitoring periods required for forced vibration tests.
4.3. Initial Model Calibration Based on Natural Frequencies
The baseline finite element model was calibrated using the laser vibrometer data from the pristine Beam 1. The beam was manufactured from black mild steel, with material property ranges provided in Table 3. The model was updated to match the experimental natural frequencies. The resulting calibrated Young's modulus falls within the expected range for black mild steel, while the calibrated density is slightly higher than the typical range. This minor deviation is a recognised phenomenon in model updating, where parameters can adjust beyond nominal bounds to compensate for simplifications in the model, such as ideal boundary conditions or unaccounted mass, to accurately replicate the experimental dynamics.
4.4. Forced Vibration Testing and FRF Calculation Methodology
This section details the experimental and numerical procedures for characterising the dynamic behaviour of the cantilever beams. Following the initial model calibration based on natural frequencies (Section 4.3), forced vibration tests were conducted to obtain Frequency Response Functions (FRFs). These experimental FRFs were then used to further calibrate and refine the finite element model, addressing aspects such as damping characteristics and boundary condition effects that are not fully captured by natural frequency data alone. The refined model, validated against both natural frequencies and FRF data, subsequently served as the reliable baseline for damage identification.
The cantilever beam was excited at its free end using a sweeping sinusoidal signal whose frequency f ranged from 1 Hz to 100 Hz. We accounted for the shaker's support stiffness using the Lagrange Multiplier Method to accurately simulate the experimental boundary conditions.
Under sweep excitation conditions, the auto-power spectral density of the excitation signal was approximately constant in the frequency domain (Figure 10), i.e., S_ff(ω) ≈ S_0. This characteristic establishes a clear proportional relationship between the velocity response power spectrum and the FRF amplitude:

S_vv(ω) ≈ S_0 |H(ω)|²

Based on this relationship, the experimental data processing uses the square root of the velocity auto-power spectrum as an equivalent measure of FRF amplitude:

|H_exp(ω)| ∝ √S_vv(ω)

where S_vv(ω) is calculated using Welch's method.
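A sketch of this processing chain is shown below, with a minimal numpy stand-in for the library Welch estimator actually used in practice; the sampling rate, segment length, window choice, and synthetic 40 Hz test signal are all assumptions for illustration.

```python
import numpy as np

# Minimal Welch PSD estimator (Hann window, 50% overlap), standing in
# for the library routine used in practice.
def welch_psd(x, fs, nperseg=1024):
    step = nperseg // 2
    win = np.hanning(nperseg)
    scale = fs * (win**2).sum()
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    spectra = [np.abs(np.fft.rfft(win * (s - s.mean())))**2 for s in segs]
    psd = 2.0 * np.mean(spectra, axis=0) / scale   # one-sided PSD
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, psd

fs = 1000.0
t = np.arange(0, 20.0, 1.0 / fs)
v = np.sin(2 * np.pi * 40.0 * t)          # synthetic velocity response
freqs, Svv = welch_psd(v, fs)
# Under flat sweep excitation, sqrt(S_vv) is proportional to |H(w)|.
H_equiv = np.sqrt(Svv)
peak_freq = freqs[np.argmax(H_equiv)]     # resonance indicator
```

The averaging over overlapping windowed segments is what makes the Welch estimate far less noisy than a single periodogram, which matters when the FRF amplitude is read directly from √S_vv.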
This method effectively avoids the difficulties associated with traditional FRF calculations that require precise force sensor data, while ensuring reliable results under sweep excitation conditions.
The numerical model directly calculates the velocity/force FRF (mobility) following the logic of Equation (21).
Considering the constant power characteristics of sweep excitation, simulation and experimental outputs can be directly compared through normalisation. The comparison between experimental and numerical FRFs was performed by scaling the numerical FRF to match the experimental amplitude reference. The scaling factor a was determined from the ratio of maximum amplitudes:

a = max|H_exp(ω)| / max|H_num(ω)|

The scaled numerical FRF for comparison is then a · H_num(ω), while the experimental FRF maintains its original amplitude scale.
This scaling approach preserves the physical amplitude relationships in the experimental data while enabling direct quantitative comparison with numerical predictions. The method ensures consistent evaluation of resonance frequencies, mode shapes, anti-resonance locations, and relative amplitude distributions across the frequency range of interest.
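The scaling step can be sketched with toy amplitude arrays (the numerical values are illustrative, not measured data):

```python
import numpy as np

# Scale the numerical FRF so its peak matches the experimental peak,
# leaving the experimental amplitude scale untouched.
def scale_numerical_frf(H_num, H_exp):
    a = np.max(np.abs(H_exp)) / np.max(np.abs(H_num))
    return a * H_num, a

H_exp = np.array([0.1, 0.8, 2.4, 0.5])     # toy amplitude spectrum
H_num = np.array([0.05, 0.4, 1.2, 0.3])    # toy simulated spectrum
H_scaled, a = scale_numerical_frf(H_num, H_exp)
```

After scaling, peak locations, anti-resonances, and relative amplitude shapes can be compared directly, since only a single global gain has been applied to the simulation.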
Vibrations were measured at six points (P1–P6) along the beam using a laser vibrometer, as shown in Figure 11. The resulting velocity responses and corresponding mobility FRFs are presented in Figure 12 and Figure 13. Table 4 summarises the first two resonant frequencies, while Table 5 provides the damping coefficients. Notably, Point 6 near the fixed end showed significantly lower energy, and Point 3 exhibited reduced energy at the second mode, consistent with expected mode shape behaviour.
4.5. Model Refinement and Validation
The pristine FE model was refined to reflect the experimental boundary and excitation conditions. The model refinement proceeded through an iterative process comparing experimental and numerical FRFs. Key parameters adjusted during refinement included:
Damping coefficients: Rayleigh damping parameters were tuned to match the amplitude and bandwidth of resonance peaks observed in experimental FRFs.
Boundary conditions: The stiffness of the shaker support was refined using the Lagrange Multiplier Method to better match experimental boundary conditions.
Local stiffness variations: Minor adjustments were made to account for localised effects observed near the excitation point.
Damping significantly affects FRF amplitudes, especially near resonances. To align simulated and experimental FRFs for Beam 1, a frequency-dependent damping adjustment was employed. Rayleigh damping parameters were tuned specifically around the first two resonant frequencies, where velocity-dependent damping effects are most pronounced. This targeted calibration—rather than applying a uniform damping model across the entire spectrum—yielded good agreement in both amplitude and peak shape for the pristine beam (Figure 14). The success of this calibration supports the assumption that the damping mechanism remains largely unchanged for the geometric damage introduced in Beam 2, validating the use of the same damping model for the subsequent inverse identification.
The refinement was considered complete when the normalised FRFs demonstrated consistent agreement in resonance frequencies, mode shapes, and anti-resonance locations, as quantified by correlation coefficients exceeding 0.95 for the frequency range of interest.
Experimental FRFs exhibited flattened resonance peaks, indicating velocity-dependent damping behaviour. Estimated damping ratios showed that the first mode exhibited approximately three times the damping of the second mode (Table 5). In addition, the excitation signal displayed a near-constant auto-power density across the frequency range (Figure 10), confirming that the input amplitude was effectively independent of frequency; therefore, the measured velocity responses were directly applicable for numerical calibration.
Simulated auto-power spectra reproduced the general dynamic characteristics observed experimentally (Figure 15). However, results obtained at Point 1, located adjacent to the shaker, showed notable deviation, likely due to local interaction effects. This point was therefore excluded, yielding close agreement across the remaining sensor locations (Figure 14).
To further improve prediction quality, a calibration curve was constructed from the ratio of experimental to simulated responses (Figure 16). This curve can be applied to adjust the numerical FRF during the SMD identification iterations for Beam 2, helping to compensate for residual simulation imperfections.
5. SMD Identification Results and Discussion
This section presents the application of the proposed Bayesian inverse identification framework—with the two-phase constraint strategy outlined in
Section 3.2 and
Section 3.3—to experimental FRF data from Beam 2, which contains a single symmetrical open-edge cut-out to simulate localised SMD. The objectives are to: (i) validate the convergence and uncertainty quantification of the identified parameters, (ii) evaluate the effectiveness of the constraint strategy in stabilising the ill-posed inverse problem, and (iii) demonstrate accurate damage localisation while ensuring physical plausibility.
For the inverse identification, twenty random variables (the truncated KL coefficients) were employed.
Figure 17 presents the statistical distribution of these variables over 100 identification iterations. The results, detailed in
Figure 18, demonstrate that the identified FRFs converge towards the target experimental FRFs at each measurement position (P2–P6). Notably, the 95% credible interval of the identified FRFs encompasses the target data at all points, confirming the method’s robustness despite the peak splitting observed in the numerical model.
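The 95% credible interval reported here is a pointwise band over the posterior FRF samples. A minimal sketch of how such a band can be computed from the sampled FRFs is given below (the function name and quantile-based construction are assumptions).

```python
import numpy as np

def frf_credible_band(H_samples, level=0.95):
    """Pointwise mean and central credible band of |FRF| across posterior
    samples. H_samples: complex array of shape (n_samples, n_freq)."""
    mag = np.abs(H_samples)
    lo_q, hi_q = (1 - level) / 2, 1 - (1 - level) / 2
    return (mag.mean(axis=0),
            np.quantile(mag, lo_q, axis=0),
            np.quantile(mag, hi_q, axis=0))
```

Checking that the experimental FRF lies inside the band at each sensor position is then a direct frequency-by-frequency comparison.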
5.1. Two-Phase Constraint Implementation and Outcomes
Figure 19 presents results obtained with moderate regularisation during the identification iterations under the assumption of a global stiffness reduction, with the total identified rigidity constrained to remain at no less than 90% of the pristine baseline. This 10% reduction factor was selected based on preliminary experimental observations and engineering judgement, reflecting typical stiffness-loss magnitudes in early-stage damage scenarios without over-constraining the solution space. The average of the 100 identified EI profiles successfully located the damage region at the correct position along the span. The identified stiffness reduction within the damage region (approximately 35–45% from Figure 19) aligns well with the actual 40% local reduction, demonstrating the method's accuracy in quantifying damage severity despite the modest 8% global stiffness change. However, the identification also produced physically impossible local stiffness enhancements in other regions (marked by the red dot), with pointwise EI values exceeding the pristine baseline.
A noteworthy observation is the significant overestimation of EI at the fixed end compared to the baseline. This systematic phenomenon, consistently observed across identifications, suggests either imperfect boundary condition simulation or parameter insensitivity in this region, where bending rigidity variations minimally affect the measured and numerical FRFs. Despite this boundary artefact, the damage region remains clearly identifiable, demonstrating the method’s robustness for damage localisation away from boundary-influenced zones.
The presence of such boundary artefacts motivated the application of the post-convergence physical bound enforcement described in
Section 3.3, which selectively suppresses these non-physical enhancements while preserving the identified damage signature.
To address the non-physical enhancements from the moderate regularisation, the post-convergence regularisation rule (Equation (
32)) was applied individually to each identified EI sample.
Figure 20 presents the final, physically plausible EI distribution after regularisation. The mean EI profile clearly reveals the location and extent of the simulated damage, while spurious stiffness enhancements have been suppressed. This two-phase approach—combining moderate regularisation during identification with targeted post-processing—maintains accurate damage localisation while enforcing the physical principle that degradation reduces, rather than enhances, structural stiffness.
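The post-convergence suppression step can be sketched as an element-wise rule applied to each posterior EI sample. The exact form of Equation (32) is not reproduced here; the sketch below assumes a simple capping rule in which any element exceeding the baseline by more than a calibrated tolerance (here 10%, within the 6-12% experimental uncertainty band) is reset to the baseline, while reductions are left untouched so the damage signature survives.

```python
import numpy as np

def enforce_physical_bounds(EI_sample, EI_baseline, tol=0.10):
    """Suppress non-physical stiffness enhancements in an identified EI field.
    Elements above (1 + tol) * baseline are capped at the baseline value;
    identified reductions (the damage signature) are preserved. This is an
    illustrative stand-in for the paper's Equation (32), not its exact form."""
    EI = np.asarray(EI_sample, dtype=float)
    EI0 = np.broadcast_to(np.asarray(EI_baseline, dtype=float), EI.shape)
    return np.where(EI > (1.0 + tol) * EI0, EI0, EI)
```

Applying the rule to each of the 100 samples individually, before averaging, keeps the posterior statistics consistent with the physical bound.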
5.2. Post-Regularisation FRF Evaluation
To complete the validation loop, FRFs were recalculated from the regularised EI fields. For each regularised stiffness sample $\widetilde{EI}^{(k)}(x)$, the corresponding FRF is obtained by solving the deterministic forward problem:
$$\mathbf{D}(\omega)\,\mathbf{u}(\omega) = \mathbf{f}(\omega), \qquad \mathbf{H}^{(k)}(\omega) = \mathbf{S}\,\mathbf{u}(\omega),$$
where $\mathbf{D}(\omega)$ is the dynamic stiffness matrix constructed from $\widetilde{EI}^{(k)}(x)$, $\mathbf{u}(\omega)$ is the displacement vector, $\mathbf{f}(\omega)$ is the force vector, and $\mathbf{S}$ is a Boolean matrix that selects the sensor locations. This step does not involve the KL coefficients; it uses only the physically regularised EI field $\widetilde{EI}^{(k)}(x)$ as input to the forward solver.
The results are shown in
Figure 21. The mismatch between the post-regularisation FRFs and the experimental data illustrates a fundamental insight: overly stringent physical constraints would yield distorted identification results. This discrepancy arises because no numerical model can perfectly replicate real-world conditions, and unaccounted measurement and modelling uncertainties inevitably exist.
Nevertheless, the preserved alignment of resonant peaks across all sensor positions confirms the feasibility of the inverse identification framework. This spectral coherence indicates that despite the inevitable imperfections in model fidelity, the essential dynamic signature of structural damage remains extractable. This is precisely why the two-phase approach succeeds: it avoids imposing rigid physical constraints during the identification process (which would prevent convergence), while still ensuring the final stiffness field respects physical principles after convergence.
Thus, the observed FRF mismatch does not undermine the method’s validity; rather, it validates the core design rationale. The two-phase strategy acknowledges the inherent limitations of model-based identification while delivering physically plausible damage localisation—the ultimate goal of this work.
This work introduces a two-phase constraint strategy that decouples mathematical stabilisation during Bayesian inference from physical plausibility enforcement after convergence. Unlike traditional single-phase regularisation—which often forces a trade-off between algorithmic stability and physical realism—our approach applies moderate regularisation during Hamiltonian Monte Carlo sampling to ensure well-posedness, then enforces element-wise physical bounds (e.g., prohibiting stiffness enhancements) only after the posterior has been sampled. This separation prevents convergence distortion while guaranteeing that final stiffness estimates respect the fundamental principle that damage reduces, rather than increases, structural rigidity. The calibrated threshold further accommodates experimental uncertainties (6–12%), offering a balanced solution to a long-standing challenge in FRF-based damage identification.
6. Conclusions
This study presents a novel vibration-based probabilistic damage identification approach that can capture any profile or shape of damage at multiple sites from the structure's vibration response. The approach does not rely on heavily informative priors for structural material degradation (SMD) identification. Instead, it demonstrates for the first time the applicability and accuracy of a random field model for inverse identification of an arbitrary distribution of SMD along the span of a structure using only a few stochastic parameters, efficiently coupling the Karhunen–Loève (KL) expansion with Frequency Response Function (FRF) analysis. By modelling SMD as a low-dimensional stochastic process, the method reduces parameter dimensionality by orders of magnitude while maintaining high spatial resolution. By reconstructing the full degradation field from a compact set of dominant KL parameters, the framework enables efficient identification of damage, whether it is localised (e.g., cracks, notches) or distributed (e.g., corrosion, uniform wear). The approach has been verified experimentally, showing that vibration measurements at only a handful of points are sufficient for accurate identification of SMD.
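The KL reconstruction summarised above can be sketched in a few lines: the degradation field is expanded in the dominant eigenfunctions of an assumed covariance kernel, so the full field is recovered from a small coefficient vector. The exponential kernel and correlation length below are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def kl_basis(x, corr_len, n_terms):
    """Discrete Karhunen-Loeve basis for a 1D field with the (assumed)
    exponential covariance C(x, x') = exp(-|x - x'| / corr_len). Returns the
    n_terms dominant eigenvalues and eigenvectors of the covariance matrix."""
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:n_terms]
    return vals[order], vecs[:, order]

def reconstruct_field(mean, vals, vecs, xi):
    """Reconstruct the degradation field from KL coefficients xi:
    d(x) = mean(x) + sum_k sqrt(lambda_k) * phi_k(x) * xi_k."""
    return mean + vecs @ (np.sqrt(vals) * xi)

x = np.linspace(0.0, 1.0, 200)
vals, vecs = kl_basis(x, corr_len=0.2, n_terms=20)   # twenty variables, as in Section 5
field = reconstruct_field(np.zeros_like(x), vals, vecs, xi=np.zeros(20))
```

The identification then searches over the twenty coefficients xi rather than over a per-element stiffness value, which is the source of the dimensionality reduction claimed above.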
A key methodological contribution is the development of a two-phase constraint strategy that resolves the inherent tension in inverse problems: the need for mathematical stabilisation versus the risk of physical over-constraint. The phased approach separates:
Phase 1—Mathematical Stabilisation: Moderate regularisation during Hamiltonian Monte Carlo sampling stabilises the ill-posed problem without distorting convergence.
Phase 2—Physical Enforcement: Post-convergence selective suppression removes non-physical stiffness enhancements, enforcing the principle that damage reduces—not increases—structural rigidity.
This separation proved essential, with the calibrated threshold accommodating experimental uncertainties of 6–12% while preserving damage signatures.
Experimental validation on cantilever beams demonstrated successful localisation and quantification of a single damage region using laser vibrometry-derived FRFs. The method provided both statistically credible intervals (95%) and physical plausibility, addressing a critical limitation of traditional SHM approaches that often yield mathematically valid but physically impossible results.
Two primary challenges were identified for practical deployment: (1) the critical influence of damping modelling accuracy on FRF-based identification, and (2) the increased complexity of identifying multiple concurrent damage regions. Future work will extend the framework to multi-damage scenarios, refine damping representations, and test robustness under varied boundary conditions and noise levels.
The proposed framework represents a significant step toward reliable, physics-informed structural health monitoring, offering a balanced approach to uncertainty quantification and physical-constraint enforcement that enhances both mathematical rigour and practical utility, and contributes to the ongoing development of robust numerical methods for inverse problems in engineering [28].
Limitations and Outlook for Practical Application
The experimental validation presented in this study was conducted on a laboratory-scale cantilever beam with a single, well-defined damage region. This controlled setting was essential for validating the core components of the proposed stochastic identification framework—the KL-based parameterisation and the two-phase constraint strategy—under known conditions. However, the transition to practical, large-scale structural health monitoring (SHM) applications presents several challenges that must be acknowledged and addressed in future work.
Boundary Condition Uncertainty: Real-world structures rarely possess idealised boundary conditions. Semi-rigid connections, foundation flexibility, and interaction with adjacent components introduce significant uncertainty. While the current model assumed a fixed base, the framework is generalisable. Future extensions could parameterise boundary stiffnesses within the identification parameter vector for joint identification, or apply the KL expansion to represent spatial uncertainty in support properties.
Computational Scalability: For large-scale structures, the computational cost of repeated forward solves (FRF calculations) within the HMC loop becomes the primary bottleneck, not the parameter dimensionality (which is controlled by the KL truncation). This can be mitigated by integrating model reduction techniques (e.g., modal truncation, Krylov subspace methods) for the forward model, and by exploiting the inherent parallelism of Monte Carlo sampling.
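The modal truncation mentioned above can be sketched as follows: project the forward model onto the lowest mass-normalised modes once, then evaluate each frequency as a cheap superposition of single-mode responses. The uniform modal damping ratio is an illustrative assumption.

```python
import numpy as np

def modal_frf(K, M, freqs, f_vec, sensor_dofs, n_modes, zeta=0.01):
    """Reduced-order FRF via modal truncation. The generalised eigenproblem is
    symmetrised with a Cholesky factor of M; per-frequency cost drops from a
    full linear solve to O(n_modes)."""
    Lc = np.linalg.cholesky(M)
    A = np.linalg.solve(Lc, np.linalg.solve(Lc, K).T).T   # L^-1 K L^-T, symmetric
    w2, Y = np.linalg.eigh(A)
    Phi = np.linalg.solve(Lc.T, Y[:, :n_modes])           # mass-normalised modes
    wn = np.sqrt(np.maximum(w2[:n_modes], 0.0))
    q = Phi.T @ f_vec                                     # modal forces
    H = np.empty((len(freqs), len(sensor_dofs)), dtype=complex)
    for i, fr in enumerate(freqs):
        w = 2 * np.pi * fr
        a = q / (wn**2 - w**2 + 2j * zeta * wn * w)       # modal amplitudes
        H[i] = Phi[sensor_dofs, :] @ a
    return H
```

For large models the dense eigensolver would be replaced by a sparse or Krylov method, but the per-sample structure of the HMC loop is unchanged.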
Complex Geometry and Incomplete Measurements: The optimal sensor placement strategy in
Section 3.4 is a first step towards dealing with limited measurements. For complex 2D or 3D structures, advanced placement algorithms (e.g., based on effective independence or information entropy) will be necessary to maximise the information content of sparse sensor data. The KL expansion itself is directly applicable to random fields on complex domains, provided an appropriate covariance kernel is defined.
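The Effective Independence (EfI) method cited above admits a compact sketch: starting from all candidate locations, iteratively delete the one contributing least to the determinant of the Fisher information matrix of the retained mode shapes.

```python
import numpy as np

def effective_independence(Phi, n_sensors):
    """Effective Independence sensor placement. Phi: (n_candidates, n_modes)
    mode-shape matrix at candidate locations. Repeatedly removes the candidate
    with the smallest leverage, diag(P (P^T P)^-1 P^T), until n_sensors remain."""
    idx = np.arange(Phi.shape[0])
    P = Phi.copy()
    while len(idx) > n_sensors:
        Q = np.linalg.solve(P.T @ P, P.T).T       # P (P^T P)^-1
        Ed = np.einsum('ij,ij->i', P, Q)          # per-candidate contribution
        drop = np.argmin(Ed)
        keep = np.ones(len(idx), dtype=bool); keep[drop] = False
        idx, P = idx[keep], P[keep]
    return idx
```

The same greedy deletion applies unchanged to 2D or 3D candidate grids, since only the mode-shape matrix enters the criterion.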
Interpretation in Multi-Damage Scenarios: Identifying multiple, interacting damage sites increases the ill-posedness of the inverse problem. The Bayesian approach provides a natural advantage here by quantifying uncertainty through the full posterior distribution. The outcome is not a single deterministic damage map, but a probabilistic diagnosis—a set of plausible damage scenarios with associated credibility. This is precisely the form of information required for risk-based maintenance decisions in practical SHM.
In summary, while the current implementation demonstrates the method’s efficacy on a canonical problem, its architectural components—stochastic parameterisation, Bayesian updating, and phased constraint enforcement—are designed with scalability in mind. Future work will focus on integration with reduced-order modelling, validation on more complex structural systems (e.g., plates, frames), and the development of automated, interpretable post-processing tools for high-dimensional posterior distributions.