1. Introduction
The systematic acquisition of structural response data via deployed sensors constitutes a critical phase in the establishment and operation of comprehensive structural health monitoring (SHM) systems [1,2,3,4,5,6]. The fidelity and completeness of these data are paramount, as the efficacy of critical downstream tasks, including structural damage identification and localization [7,8,9,10], finite element model updating and calibration [11,12,13], and external load reconstruction [14,15,16], depends heavily on the quality of the sensor placement scheme. In light of stringent budgetary limitations and practical constraints on data transmission and bandwidth, optimizing the configuration of a finite number of sensors to acquire structural state information efficiently has emerged as a significant research focus within civil engineering.
Optimal sensor placement is typically realized through mathematical criteria designed to maximize information gain or minimize estimation error. Prevalent optimization criteria include the effective independence method, which maximizes the linear independence of mode shapes [17]; the modal assurance criterion, which ensures the distinctiveness of identified modes [18]; the Fisher information criterion, which maximizes information content relative to model parameters [19,20,21]; and the information entropy criterion, which minimizes uncertainty in parameter estimation [22]. To address the uncertainties inherent in engineering applications, Kim et al. [23] proposed an improved effective independence method that introduces stochastic terms into the sensor placement optimization; its efficacy was verified on a truss bridge model. Gao et al. [24] constructed an objective function based on the modal assurance criterion and proposed an adaptive gravitational search algorithm, a meta-heuristic approach that significantly enhances computational efficiency in complex search spaces. Ghosh et al. [25] employed the Fisher information criterion to maximize the determinant of the Fisher information matrix, thereby maximizing information about the unknown structural parameters and diminishing posterior uncertainty. Similarly, Yuen et al. [26] proposed a heuristic algorithm that uses the information entropy of parameter identification as a performance index, achieving optimal placement of multi-type sensors through a sequential approach.
Constructing multi-objective functions based on diverse optimization criteria can effectively enhance the aggregate performance of SHM systems by addressing conflicting design goals simultaneously [27,28,29,30]. Yang et al. [31] combined the Fisher information matrix with the modal assurance criterion to formulate a comprehensive multi-objective optimization function; interval analysis was introduced to manage uncertainty, and the Pareto front concept was used to optimize sensor placement for a spacecraft subsystem, balancing modal distinctness against parameter sensitivity. Addressing the dual challenge of localizing and reconstructing external loads, Liu et al. [32] took the mode shape matrix as a theoretical foundation, integrating the Fisher information matrix, the condition number of the mode shape matrix, and the modal assurance criterion to represent validity, stability, and orthogonality, respectively. By augmenting robustness through interval analysis and employing Pareto front solutions, they achieved optimal sensor placement and load reconstruction for a mild steel cantilever beam. Civera et al. [33] further advanced this domain by constructing a multi-objective optimization function based on the auto-modal and cross-modal assurance criteria, proposing a method that integrates genetic algorithms to ensure high-quality modal identification.
However, traditional sensor placement optimization criteria are designed primarily around specific SHM objectives under the assumption of perfect sensor functionality; the probability of sensor failure is frequently omitted during optimization. Over the prolonged operation of an SHM system, which may span decades, sensors are subject to non-negligible failure risks attributable to harsh environmental erosion, electromagnetic interference, and cumulative service duration [34,35,36,37,38]. When a sensor fails, the topology of the sensor network changes, and the objective function values of the original placement scheme deviate from their design values, compromising the reliability of the monitoring system. Consequently, incorporating failure probability into the optimization process is of paramount significance for ensuring continuous, stable, and reliable health monitoring throughout the structural lifecycle.
The establishment of an accurate and representative sensor failure probability model is essential for quantitatively assessing the robustness of monitoring systems. Prevalent failure probability models in reliability engineering are frequently constructed from exponential [39,40,41,42] or Weibull [43,44,45] probability density functions. In a comparative analysis of wireless sensor networks, Le et al. [46] argued that the Weibull distribution describes sensor failure characteristics, particularly those related to aging and wear-out, more accurately than the memoryless exponential distribution. Based on extensive experimental data, Qiu [47] verified via the Kolmogorov–Smirnov test that the sensor failure probability distribution of typical monitoring nodes conforms to the Weibull distribution, providing a solid empirical basis for its adoption.
This study proposes a robust placement method for multi-type sensing equipment that explicitly and quantitatively accounts for sensor failure risks. First, an evaluation framework for sensor placement is established based on Bayesian inference and the minimization of information entropy, thereby quantifying the uncertainty inherent in parameter identification. Then, a sensor failure probability model is developed using the Weibull distribution to capture time-dependent reliability characteristics. This model is integrated into a modified information entropy calculation method to evaluate the expected performance of the sensor network. Finally, a heuristic search strategy is employed to determine the optimal sensor configuration, efficiently balancing computational cost against solution optimality. Compared to the conventional deterministic information entropy (DIE) method, this approach systematically incorporates the impact of potential sensor failures on the posterior information entropy, significantly improving the robustness of the optimization results. This ensures the accuracy and reliability of parameter identification throughout the structural lifecycle, maintaining system effectiveness even in the event of partial sensor degradation.
2. Information Entropy Calculation Based on Bayesian Method
Consider a discrete linear structural system possessing $n$ degrees of freedom (DOFs), whose dynamics are governed by the following equation:

$$\mathbf{M}\ddot{\mathbf{x}}(t) + \mathbf{C}\dot{\mathbf{x}}(t) + \mathbf{K}\mathbf{x}(t) = \mathbf{f}(t) \quad (1)$$

where $\mathbf{M}$, $\mathbf{C}$, and $\mathbf{K}$ represent the mass, damping, and stiffness matrices of the structure, respectively; $\ddot{\mathbf{x}}(t)$, $\dot{\mathbf{x}}(t)$, and $\mathbf{x}(t)$ denote the acceleration, velocity, and displacement vectors of the structure at time $t$; and $\mathbf{f}(t)$ is the external load vector at time $t$, representing the excitation forces applied to the system.
Consideration is given to the utilization of multi-type sensors, specifically displacement transducers, velocimeters, and accelerometers, to synchronously acquire structural response information across distinct physical dimensions. While accelerometers remain the most widely deployed instruments in practice, modern SHM systems increasingly integrate displacement sensors (e.g., RTK-GNSS, vision-based systems) to accurately capture low-frequency drifts without integration errors, and velocimeters (e.g., geophones) for robust mid-frequency kinetic energy assessment. Fusing these modalities provides a comprehensive broadband representation of the structural state. Denote the sensor sampling interval by $\Delta t$. The full-state structural response vector at the $k$-th sampling time step is defined as $\mathbf{z}_k = [\mathbf{x}_k^{\mathrm{T}}\ \dot{\mathbf{x}}_k^{\mathrm{T}}\ \ddot{\mathbf{x}}_k^{\mathrm{T}}]^{\mathrm{T}} \in \mathbb{R}^{3n}$, wherein $\mathbf{x}_k = \mathbf{x}(k\Delta t)$, and $\dot{\mathbf{x}}_k$ and $\ddot{\mathbf{x}}_k$ are defined analogously. Assuming $N_0$ sensors are employed to monitor the dynamic response, the sensor placement scheme is represented by the Boolean selection matrix $\mathbf{L}_0 \in \{0,1\}^{N_0 \times 3n}$. The monitoring data $\mathbf{y}_k$ at the $k$-th time step may be expressed as a linear transformation of the state vector with noise:

$$\mathbf{y}_k = \mathbf{L}_0 \mathbf{z}_k + \boldsymbol{\varepsilon}_k \quad (2)$$

where $\mathbf{L}_0$ represents the sensor placement position matrix mapping the full state to the measured DOFs; $\boldsymbol{\varepsilon}_k$ denotes the measurement noise at the $k$-th time step, typically assumed to be Gaussian white noise; and $N$ is the total number of monitoring data points collected. The aggregate dataset $D$ is expressed as:

$$D = \{\mathbf{y}_k \mid k = 1, 2, \ldots, N\} \quad (3)$$
For modal identification, $N_m$ frequency bands containing significant resonant peaks are extracted from the structural response spectrum. The frequency-domain response within the $i$-th band is utilized to identify the structural modal parameters for the $i$-th mode:

$$\boldsymbol{\theta}_i = \{\lambda_i,\ \boldsymbol{\phi}_i\} \quad (4)$$

where $\lambda_i = \omega_i^2$ represents the square of the $i$-th natural angular frequency (i.e., the $i$-th eigenvalue of the structural system), and $\boldsymbol{\phi}_i$ represents the mode shape component vector of that mode at the measured locations. Parameterizing the identification problem by the eigenvalue $\lambda_i$ rather than the frequency simplifies the subsequent analytical calculation of the local curvature (Hessian matrix) of the log-likelihood function. Modal parameters are selected as the primary identification targets in this study because they are directly and stably extractable from ambient vibration data. While updating physical parameters (such as element stiffness) is highly valuable for downstream damage assessment, formulating the Bayesian information entropy objective around modal parameters keeps the likelihood evaluation computationally tractable across the vast combinatorial search space of sensor placement. In accordance with Bayes' theorem, which provides a probabilistic framework for updating beliefs based on observed data, the posterior probability density function (PDF) of the modal parameters $\boldsymbol{\theta}_i$ can be expressed as:

$$p(\boldsymbol{\theta}_i \mid D, \mathbf{L}_0) \propto p(D \mid \boldsymbol{\theta}_i, \mathbf{L}_0)\, p(\boldsymbol{\theta}_i) \quad (5)$$
where $p(\boldsymbol{\theta}_i \mid D, \mathbf{L}_0)$ is the posterior PDF of the parameters $\boldsymbol{\theta}_i$, representing the updated knowledge of the parameters; $p(D \mid \boldsymbol{\theta}_i, \mathbf{L}_0)$ is the likelihood function, representing the probability of observing the data given the parameters; and $p(\boldsymbol{\theta}_i)$ represents the prior PDF, encapsulating prior knowledge. In the absence of specific prior information, $p(\boldsymbol{\theta}_i)$ may be assumed to be a non-informative uniform distribution [48]. In this case, the posterior distribution of $\boldsymbol{\theta}_i$ is directly proportional to the likelihood function. When the measurement data are sufficient and the noise is Gaussian, the likelihood function is asymptotically approximated as [49]:

$$p(D \mid \boldsymbol{\theta}_i, \mathbf{L}_0) \propto \prod_{k \in \mathcal{K}_i} \frac{1}{\pi^{N_0} \det \mathbf{S}_k(\boldsymbol{\theta}_i, \mathbf{L}_0)} \exp\!\left[ -\hat{\mathbf{Y}}_k^{*}\, \mathbf{S}_k^{-1}(\boldsymbol{\theta}_i, \mathbf{L}_0)\, \hat{\mathbf{Y}}_k \right] \quad (6)$$

where $\hat{\mathbf{Y}}_k$ represents the frequency-domain response vector of the measurement data $D$ at the $k$-th frequency subsequent to the application of the fast Fourier transform (FFT); $\hat{\mathbf{Y}}_k^{*}$ denotes the conjugate transpose of $\hat{\mathbf{Y}}_k$; $\mathcal{K}_i$ denotes the set of discrete frequencies within the $i$-th band; and $\mathbf{S}_k(\boldsymbol{\theta}_i, \mathbf{L}_0)$ represents the theoretical power spectral density matrix of the measurement data at the $k$-th frequency, contingent upon the parameters $\boldsymbol{\theta}_i$ and the sensor placement $\mathbf{L}_0$ [49].
The optimal estimate $\hat{\boldsymbol{\theta}}_i$ of the modal parameters, which corresponds to the mode of the posterior distribution, may be obtained by solving the following maximum likelihood optimization problem:

$$\hat{\boldsymbol{\theta}}_i = \arg\max_{\boldsymbol{\theta}_i}\, p(D \mid \boldsymbol{\theta}_i, \mathbf{L}_0) \quad (7)$$
Predicated on the optimal value $\hat{\boldsymbol{\theta}}_i$, the local curvature of the log-likelihood function is analyzed to quantify the posterior uncertainty. The Hessian matrix $\mathbf{H}_i$ is defined as the second-order partial derivative of the negative log-likelihood function with respect to the modal parameters. Given the complexity of the theoretical power spectral density matrix $\mathbf{S}_k(\boldsymbol{\theta}_i, \mathbf{L}_0)$, which incorporates the distinct characteristics of multi-type sensors (displacement, velocity, and acceleration) via the selection matrix $\mathbf{L}_0$, an analytical derivation is computationally intractable. Consequently, the finite difference method is utilized to approximate the Hessian matrix elements [50]. By perturbing the parameter vector near the optimal solution $\hat{\boldsymbol{\theta}}_i$, the curvature is estimated numerically, capturing the sensitivity associated with the specific sensor mix defined in $\mathbf{L}_0$. This matrix essentially quantifies the sharpness of the likelihood peak; a larger curvature implies higher information content and lower uncertainty. Finally, by assembling the Hessian matrices associated with the first $N_m$ modes, the global uncertainty quantification matrix $\mathbf{Q}(\mathbf{L}_0)$ for the overall modal parameter estimation is obtained.
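The finite-difference Hessian approximation described above can be illustrated with a short sketch. The following Python snippet is illustrative only: `neg_log_like` stands in for the (assumed) negative log-likelihood callable, and the fixed step size `h` would need tuning against the scale of each parameter in practice.

```python
def hessian_fd(neg_log_like, theta, h=1e-5):
    """Central finite-difference approximation of the Hessian of a scalar
    function (here, the negative log-likelihood) evaluated at the optimum
    theta. The step size h is an illustrative default, not a recommendation."""
    n = len(theta)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            def shifted(di, dj):
                t = list(theta)
                t[i] += di
                t[j] += dj
                return neg_log_like(t)
            # Four-point central difference for the mixed partial d2f/(dti dtj).
            H[i][j] = (shifted(+h, +h) - shifted(+h, -h)
                       - shifted(-h, +h) + shifted(-h, -h)) / (4.0 * h * h)
    return H
```

For a quadratic test function the approximation is exact up to rounding. In the SHM setting each evaluation entails a full power-spectral-density computation, so the number of likelihood calls dominates the cost of the optimization.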
Utilizing the Laplace approximation for the integral of the posterior distribution, the information entropy, which serves as a scalar measure of uncertainty, corresponding to the sensor placement scheme $\mathbf{L}_0$ may be approximated as:

$$H(\mathbf{L}_0) \approx \frac{N_\theta}{2} \ln(2\pi) - \frac{1}{2} \ln \det \mathbf{Q}(\mathbf{L}_0) \quad (8)$$

The first term on the right-hand side represents the entropy contribution derived from the dimensionality of the parameter space, where $N_\theta$ corresponds to the total number of modal parameters (frequencies and mode shapes) being estimated; it serves as a baseline offset determined by the multivariate Gaussian assumption. While this term is constant for a fixed number of sensors, the second term specifically reflects the impact of the placement scheme on the volume of the uncertainty ellipsoid of the parameters. A diminished value of $H(\mathbf{L}_0)$ indicates lower information entropy, implying that the corresponding placement acquires a greater amount of information and thus yields a more precise estimation of the structural parameters.
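As a concrete illustration, the entropy of a placement can be computed directly from its assembled uncertainty quantification matrix. The Python sketch below assumes the Laplace-approximated form H = (N_θ/2)·ln(2π) − (1/2)·ln det Q, with a small pure-Python Cholesky routine for the log-determinant; function names are illustrative.

```python
import math

def log_det(M):
    """Log-determinant of a symmetric positive-definite matrix via
    Cholesky decomposition (no external dependencies)."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(M[i][i] - s)
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    # det(M) = prod(diag(L))^2, hence ln det(M) = 2 * sum(ln L_ii).
    return 2.0 * sum(math.log(L[i][i]) for i in range(n))

def information_entropy(Q):
    """Laplace-approximated posterior entropy for a placement whose combined
    Hessian (uncertainty quantification matrix) is Q. A sharper likelihood
    peak (larger curvature, larger det Q) yields lower entropy."""
    n_theta = len(Q)
    return 0.5 * n_theta * math.log(2.0 * math.pi) - 0.5 * log_det(Q)
```

For example, a placement producing a Hessian with larger diagonal curvature scores a lower entropy than one producing the identity, matching the qualitative discussion above.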
3. Sensor Optimization Driven by Heuristic Search Strategy
The optimal sensor placement problem is mathematically formulated as a discrete optimization problem that is prone to combinatorial explosion. As the number of candidate positions and sensors increases, the number of possible configurations grows combinatorially, rendering exhaustive search computationally intractable. To overcome this barrier, a heuristic search strategy, specifically the sequential sensor placement algorithm, is adopted to enhance optimization efficiency by limiting the number of schemes evaluated at each step.
Consider the fundamental case of placing a single sensor. There exist $n_p$ possible schemes (one per candidate position), constituting the initial placement set $S_1$, where each scheme is a unit selection vector with a single non-zero entry corresponding to the sensor location. Subsequent to the calculation of the information entropy for each scheme in this initial set, a pruning strategy is employed: the $N_c$ schemes possessing the lowest entropy are selected to form the candidate set $C$, and the $N_b$ schemes possessing the absolute lowest entropy are selected to form the current optimal set $B_1$. The sizes $N_c$ and $N_b$ of the candidate set and the retained optimal set act as the beam width of the heuristic search, directly dictating the trade-off between computational speed and global optimality. A minimal value (e.g., $N_b = 1$) reduces the process to a greedy forward search, which is highly efficient but susceptible to local optima. Conversely, a large $N_b$ approaches an exhaustive search, increasing the probability of finding the true global optimum at the cost of significant computational time. The results are sensitive to these settings at small values; beyond a certain threshold, however, further increases yield diminishing returns. These values are therefore determined empirically to balance breadth of search against computational speed [26].
Predicated on the initial set $S_1$, the sensor placement configuration is expanded iteratively. In the $m$-th step (where $2 \le m \le N_0$), the algorithm seeks to add one sensor to the existing best configurations. The schemes in the preceding optimal set $B_{m-1}$ are combined with the single-sensor schemes in the candidate set $C$ to generate a new expanded candidate set $E_m$:

$$E_m = \left\{ \boldsymbol{\delta}_{m-1}^{(i)} \cup \boldsymbol{\delta}_1^{(j)} \;\middle|\; \boldsymbol{\delta}_{m-1}^{(i)} \in B_{m-1},\ \boldsymbol{\delta}_1^{(j)} \in C \right\} \quad (9)$$

where $\boldsymbol{\delta}_{m-1}^{(i)}$ represents the $i$-th optimized configuration of $m-1$ sensors, and $\boldsymbol{\delta}_1^{(j)}$ represents the $j$-th candidate sensor position from set $C$. The information entropy of all placements in the generated set $E_m$ is calculated, and the $N_b$ schemes with the minimum entropy are selected to form the new optimal set $B_m$. These steps are reiterated sequentially until the number of sensors reaches the target $N_0$. Finally, the placement scheme with the global minimum information entropy is selected from the final set $B_{N_0}$ as the optimal solution $\boldsymbol{\delta}^{*}$.
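The sequential (beam-search) procedure above can be sketched compactly. This is a minimal illustration, not the paper's implementation: `entropy` is an assumed callable scoring a configuration (lower is better), and for simplicity a single beam width `n_keep` plays the role of both the candidate-pool size and the retained-set size.

```python
def sequential_placement(positions, entropy, n_target, n_keep):
    """Sequential sensor placement by beam search.
    positions: candidate locations; entropy(config_tuple) -> float (lower is
    better); n_target: number of sensors to place; n_keep: beam width
    (n_keep = 1 degenerates to a greedy forward search). Assumes
    n_keep >= n_target so the pool holds enough distinct positions."""
    # Step 1: rank single-sensor schemes and keep the best n_keep.
    candidates = sorted(positions, key=lambda p: entropy((p,)))[:n_keep]
    best = [(p,) for p in candidates]
    # Steps 2..n_target: append one pool sensor to each retained scheme.
    for _ in range(1, n_target):
        expanded = {tuple(sorted(cfg + (p,)))
                    for cfg in best for p in candidates if p not in cfg}
        best = sorted(expanded, key=entropy)[:n_keep]
    # Return the scheme with the global minimum entropy in the final set.
    return min(best, key=entropy)
```

With a toy additive score, e.g. `entropy = lambda c: -sum(info[p] for p in c)` over an `info` dictionary, the search recovers the highest-information pair of positions while evaluating far fewer schemes than exhaustive enumeration.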
However, the traditional information entropy criterion assumes that all sensors remain fully operational throughout the monitoring period [26], thereby ignoring the stochastic risk of failure. Sensors may malfunction due to environmental erosion, moisture ingress, or aging. Neglecting this factor diminishes the robustness of the monitoring system; a system optimized for a specific configuration may suffer catastrophic information loss if a critical sensor fails. Incorporating a failure probability model into the optimization criterion facilitates the design of placement schemes with inherent redundancy, thereby improving operational reliability and maintaining data quality even in the event of partial sensor degradation.
4. Robust Optimization of Multi-Type Sensor Placement Considering Sensor Failure
To address the limitations of deterministic optimization, this study advances a robust optimization framework. Within the iterative procedure described above, explicit consideration is given, for each candidate placement in the expanded set, to the scenario in which at most one sensor in the system fails. By introducing a modified information entropy metric to evaluate the robustness of each placement scheme, a method capable of accommodating and mitigating the impact of sensor failure is constructed.
To mathematically characterize the failure behavior of sensors during long-term service, the Weibull distribution is adopted to establish the failure probability model. The Weibull distribution is widely recognized in reliability engineering for its versatility in modeling the various stages of a component's bathtub curve; crucially, it is capable of capturing both progressive aging and sudden, random failures. Accidental damage (e.g., physical impact or environmental severing) can be modeled as a constant failure rate by setting the shape parameter $\beta = 1$, which reduces the model to an exponential distribution, whereas $\beta > 1$ models an increasing failure rate, characteristic of the wear-out phase. The probability of failure for a sensor within a projected service life $T$ is expressed as:

$$P_f(T) = 1 - \exp\!\left[ -\left( \frac{T}{\eta} \right)^{\beta} \right] \quad (10)$$

where the scale parameter $\eta$ (characteristic life) assumes distinct values for displacement, velocity, and acceleration sensors, respectively, reflecting the inherent durability of the different sensor technologies, and the shape parameter $\beta$ ($\beta > 0$) delineates how the failure probability evolves over time; a value of $\beta > 1$ indicates an increasing failure rate, consistent with aging components.
Figure 1 illustrates the cumulative failure probability curves. It is important to note that the specific values of $\beta$ and $\eta$ depicted in this figure (and utilized in the subsequent case studies) are representative engineering assumptions chosen to demonstrate the algorithm's capability to handle heterogeneous sensor networks; they are not derived from specific empirical datasheets, but rather reflect the relative inherent durability of the technologies. As depicted, failure probability increases non-linearly with service duration. Sensors with higher $\beta$ values, such as accelerometers, demonstrate pronounced wear-out characteristics with a rapid operational decline (modeling the potential fatigue of complex MEMS or piezoelectric components), whereas displacement sensors exhibit a more gradual degradation, reflecting their comparative mechanical robustness (modeling simpler devices such as LVDTs).
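The cumulative failure probability of Equation (10) is straightforward to evaluate. In the sketch below, the per-type (η, β) values are placeholders in the spirit of Figure 1, i.e., engineering assumptions rather than datasheet values.

```python
import math

def weibull_failure_probability(T, eta, beta):
    """Cumulative probability that a sensor fails within service life T,
    given scale (characteristic life) eta and shape beta. beta = 1 recovers
    the memoryless exponential model (random failures); beta > 1 gives an
    increasing failure rate characteristic of wear-out."""
    return 1.0 - math.exp(-((T / eta) ** beta))

# Illustrative parameters only (assumptions, not empirical values): a
# durable displacement sensor versus a wear-prone accelerometer.
SENSOR_PARAMS = {
    "displacement": {"eta": 40.0, "beta": 1.5},
    "velocity":     {"eta": 30.0, "beta": 2.0},
    "acceleration": {"eta": 20.0, "beta": 3.0},
}

for T in (5, 10, 20):
    row = {name: round(weibull_failure_probability(T, **p), 3)
           for name, p in SENSOR_PARAMS.items()}
    print(f"T = {T:2d} years: {row}")
```

At T = η the failure probability equals 1 − e⁻¹ ≈ 0.632 regardless of β, which is why η is called the characteristic life.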
In the stepwise placement process, the expanded set $E_m$ ($2 \le m \le N_0$) contains candidate schemes, each consisting of $m$ sensors. For the $i$-th placement scheme $\boldsymbol{\delta}_m^{(i)}$ ($i = 1, \ldots, |E_m|$), in accordance with the independent-failure assumption and the Weibull model in Equation (10), the reliability function, denoting the probability that all sensors function normally within the service life $T$, is given by:

$$P_0 = \prod_{j=1}^{m} \bigl( 1 - P_{f,j}(T) \bigr) \quad (11)$$

where $P_{f,j}(T)$ represents the failure probability of the $j$-th individual sensor in the configuration $\boldsymbol{\delta}_m^{(i)}$ within $T$ years. If the specific case is considered in which the $j$-th sensor fails whilst the remaining $m-1$ sensors function normally, the joint probability of this specific failure state is given by:

$$P_j = P_{f,j}(T) \prod_{\substack{l=1 \\ l \neq j}}^{m} \bigl( 1 - P_{f,l}(T) \bigr) \quad (12)$$
Let $H_0$ denote the standard information entropy when all sensors are operational, and $H_j$ denote the information entropy of the reduced sensor set when the $j$-th sensor fails. The modified information entropy, which serves as the robust objective function for the placement $\boldsymbol{\delta}_m^{(i)}$, can be expressed as the expected value of the entropy over the considered failure scenarios:

$$\tilde{H}\bigl(\boldsymbol{\delta}_m^{(i)}\bigr) = \sum_{j=0}^{m} w_j H_j, \qquad w_j = \frac{P_j}{\sum_{l=0}^{m} P_l} \quad (13)$$

where $\tilde{H}$ is the modified information entropy considering failure risk, and $w_j$ represents the normalized weight of the scenario in which the $j$-th sensor fails (with $j = 0$ indicating the no-failure scenario). This formulation effectively penalizes configurations that rely heavily on sensors with high failure probabilities, as well as configurations in which the loss of a single sensor leads to a drastic increase in information entropy, namely a loss of observability.
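Equations (11)–(13) combine into a short routine. The sketch below is illustrative: configurations are assumed to be given as (position, η, β) triples, and `entropy` stands in for the standard information entropy of an arbitrary sensor subset.

```python
import math

def weibull_failure_probability(T, eta, beta):
    return 1.0 - math.exp(-((T / eta) ** beta))

def robust_entropy(config, entropy, T):
    """Modified information entropy for one placement: the expectation of the
    entropy over the nominal (no-failure) state and each independent
    single-sensor failure, with scenario weights normalised over this
    truncated set. config: list of (position, eta, beta) triples;
    entropy(positions) -> float; T: target service life."""
    p_fail = [weibull_failure_probability(T, eta, beta)
              for (_, eta, beta) in config]
    # P(all sensors survive), Equation (11):
    p_all = math.prod(1.0 - p for p in p_fail)
    scenarios = [(p_all, [pos for pos, _, _ in config])]
    # P(only sensor j fails), Equation (12), with the reduced sensor set:
    for j in range(len(config)):
        p_j = p_fail[j] * math.prod(1.0 - p
                                    for k, p in enumerate(p_fail) if k != j)
        reduced = [pos for k, (pos, _, _) in enumerate(config) if k != j]
        scenarios.append((p_j, reduced))
    # Equation (13): normalised expectation over the truncated scenario set.
    total = sum(p for p, _ in scenarios)
    return sum(p / total * entropy(pos) for p, pos in scenarios)
```

As the failure probabilities vanish (small T), the weights concentrate on the nominal scenario and the robust objective reduces to the standard entropy, as expected.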
Note that the robust objective in Equation (13) evaluates a truncated scenario set consisting of the nominal state (no failure) and the independent single-sensor failure states. In operational environments, multiple sensors, particularly those in close spatial proximity or of identical hardware type, may plausibly experience correlated or simultaneous failures due to localized environmental hazards or systemic aging. To rigorously include such scenarios, the independent probability model (Equation (12)) would need to be replaced by spatially correlated joint probability models (e.g., copulas), and the objective function would need to aggregate higher-order failure combinations. However, multi-sensor and correlated failure scenarios are excluded from the current scope to maintain computational tractability within the heuristic search algorithm: evaluating all combinations of simultaneous failures would cause the computational cost at each sequential step to grow exponentially. Nevertheless, penalizing independent single-sensor failures serves as an effective first-order robustness measure. It prevents the algorithm from selecting brittle configurations that rely excessively on a single, failure-prone critical node to maintain system observability, effectively eliminating single points of failure without incurring prohibitive computational cost. The probability mass of multi-sensor failures is neglected in the current objective; this extension is reserved for future research.
Predicated on the modified information entropy and the heuristic search strategy described in Section 3, the robust optimal sensor placement is finally obtained. The specific procedure replaces the standard entropy calculation in the sequential sensor placement algorithm with the modified entropy metric. By quantifying the impact of distinct failure scenarios on the amount of monitoring information, this method significantly improves the reliability of the placement scheme, ensuring that the SHM system remains effective throughout its design life.
The detailed algorithmic procedure for the robust information entropy (RIE) method is delineated as follows:
Step 1: Initialization and Preliminary Measurement. Define the discrete set of potential sensor locations, distinguishing between sensor types (displacement, velocity, acceleration). For large-scale civil infrastructure, defining this set requires spatial down-sampling of the finite element model to maintain computational tractability (e.g., discretizing a continuous deck into 15 m or 20 m intervals). Furthermore, practical engineering constraints must be applied to exclude physically inaccessible nodes and boundary supports with negligible modal participation. While the optimal solution is bounded by this discrete set, the spatial smoothness of lower-order mode shapes ensures that the macroscopic sensor distribution remains relatively insensitive to minor variations in the grid discretization. Construct a preliminary numerical finite element model (FEM) of the structure to identify the reference modal parameters (frequencies and mode shapes) that will serve as the ground truth for the optimization process. While optimal sensor placement is typically an a priori numerical procedure, if resources permit, an initial measurement campaign utilizing a small set of temporary sensors (e.g., a roving or temporarily densified array) can optionally be conducted to calibrate this initial FEM and acquire highly accurate baseline responses. Establish the Weibull reliability parameters (η, β) for each sensor type. In practical engineering applications, these parameters are not arbitrary: they should be estimated rigorously by fitting historical maintenance logs from similar SHM deployments, or by utilizing accelerated life testing (ALT) and mean time between failures (MTBF) data provided by hardware manufacturers, typically employing maximum likelihood estimation (MLE) to derive the shape and scale parameters.
It is important to note the sensitivity of the optimization results to the assumed Weibull parameters. The algorithm acts as a dynamic scale, balancing a sensor’s modal sensitivity against its failure probability. Consequently, the final configuration is highly sensitive to the inputs β and η; if the characteristic life (η) of the accelerometers were improved by hardware advancements to match that of the displacement sensors, the proposed robust configuration would naturally converge back toward the accelerometer-dominated traditional layout. Therefore, the accuracy of the robust design is fundamentally tied to the quality of the empirical reliability data used to define the failure model.
Define the target service duration $T$. Compute the baseline failure probability for each sensor type using Equation (10); these values serve as static constants for the remainder of the algorithm.
Step 2: Initial Screening (single sensors). Compute the modified information entropy for every available single-sensor location, incorporating the specific failure probability of that sensor type. Identify the subset of optimal configurations and the candidate pool by minimizing this entropy metric.
Step 3: Iterative Augmentation Process. For each sequential step, from two sensors up to the target sensor quantity, perform the following:
a. Configuration Expansion. Construct the expanded search space by systematically appending sensor locations from the candidate pool to the existing optimal configurations.
b. Reliability Assessment. For each candidate configuration within the expanded search space, retrieve the individual failure probability of every constituent sensor from the pre-calculated values established in Step 1.
c. Scenario Probability Calculation. Compute the joint probabilities for the nominal (non-failure) state and all independent single-sensor failure states utilizing Equations (11) and (12).
d. Robust Objective Evaluation. Calculate the modified information entropy for the configuration according to Equation (13), thereby aggregating the weighted entropies of all considered operational scenarios. From a computational standpoint, the most expensive operation is calculating the standard entropy via the Hessian matrix. If the user has previously executed a DIE optimization, the intermediate standard entropy values can be cached in a lookup table; the robust evaluation can then reuse these cached values for the required sensor subsets, avoiding redundant Hessian calculations and significantly accelerating the algorithm, even though the heuristic search path itself must be re-executed under the modified objective.
e. Optimal Selection. Update the optimal set by retaining the configurations that yield the lowest modified entropy values.
Step 4: Final Determination. Upon conclusion of the iterative process, designate the configuration within the final set that possesses the global minimum modified information entropy as the definitive robust optimal sensor placement scheme.
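The lookup-table idea from Step 3(d) can be sketched with a small memoising wrapper. Here `raw_entropy` is a stand-in for the expensive Hessian-based standard entropy, and the key is the order-independent set of sensor positions, so the full configuration and each single-failure subset hit the same cache entries.

```python
def make_cached_entropy(raw_entropy):
    """Wrap an expensive entropy evaluation with a lookup table keyed by
    the frozen set of sensor positions, so repeated robust-objective
    evaluations reuse earlier results instead of recomputing the Hessian."""
    cache = {}
    calls = {"raw": 0}              # instrumentation for illustration only

    def cached(positions):
        key = frozenset(positions)
        if key not in cache:
            calls["raw"] += 1
            cache[key] = raw_entropy(tuple(sorted(key)))
        return cache[key]

    cached.raw_calls = calls        # expose the call counter
    return cached
```

Because the robust objective of Equation (13) queries the entropy of the full configuration and of every (m − 1)-sensor subset, and adjacent search steps share many subsets, the cache hit rate grows quickly as the sequential algorithm progresses.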
The flowchart of the algorithm is given in Figure 2.
6. Conclusions
A robust optimization methodology for multi-type sensor placement, explicitly accounting for sensor failure, is proposed and validated in this study. First, an information entropy evaluation framework was established based on Bayesian inference to quantify parameter identification uncertainty. Subsequently, the Weibull distribution was utilized to construct a sensor failure probability model, capturing time-dependent hardware reliability characteristics. By integrating these failure probabilities into a modified information entropy metric and employing a heuristic search strategy, an optimal sensor configuration was derived. Numerical validations on a frame structure and a bridge model demonstrate that, under identical sensor quantity constraints, the RIE method yields lower expected information entropy compared to DIE approaches. This effectively reduces identification uncertainty over the lifecycle, providing a robust monitoring solution that ensures data availability despite hardware degradation.
A modified information entropy index with modal parameters as the identification target is constructed in this paper; however, the RIE method may be extended to other structural parameter identification scenarios. Within the identical theoretical framework, the optimization objective can be replaced with structural physical parameters. In terms of implementation, this would require redefining the parameter vector to represent element stiffness or damping coefficients and updating the forward model in the likelihood function (Equation (6)) to compute the theoretical power spectral density directly from the system matrices. By applying the finite difference sensitivity analysis to these physical parameters, an optimal sensor placement scheme that balances information content and robustness may similarly be obtained, demonstrating the method's favorable generalization capability.
It is acknowledged that the current optimization loop considers only single-sensor failure scenarios to manage computational tractability. However, as service duration extends, the probability of simultaneous multi-sensor failures rises significantly. Analysis indicates that to fully address life-cycle monitoring requirements, the optimization model must eventually accommodate higher-order failure combinations, despite the associated increase in computational complexity. Consequently, the development of efficient algorithms capable of navigating the combinatorial search space of multi-sensor failures remains a critical direction for future research.
While the proposed Robust Information Entropy (RIE) method effectively accounts for the stochastic degradation and failure of the sensor hardware, it currently assumes that the underlying structure’s baseline dynamic properties remain constant. In reality, as civil infrastructure ages and degrades (e.g., through fatigue cracking or corrosion), the localized loss of stiffness alters the structural mode shapes. Consequently, a sensor layout optimized exclusively for a pristine baseline may exhibit reduced observability if severe, unanticipated structural damage significantly shifts the dynamic response. To design a network capable of optimally tracking deep degradation, future research must extend this methodology to account for structural epistemic uncertainty. This would involve optimizing the sensor layout across a probabilistically defined ensemble of simulated damage scenarios, effectively coupling hardware reliability with structural damage robustness.
Furthermore, while the current RIE method successfully balances modal observability against hardware failure probabilities, it represents a purely technical optimization. In practical SHM deployments, particularly those utilizing a heterogeneous network of sensor types, the financial cost of the hardware and the logistics of replacement are decisive factors. A cheaper sensor with a shorter design life may be economically preferable where maintenance access is straightforward, whereas an expensive, highly durable sensor is required for inaccessible locations. To address this holistically, future iterations of this methodology should be expanded into a multi-objective optimization framework. By defining an expected life-cycle cost (LCC) function, incorporating initial capital expenditure and the probabilistic costs of future replacement interventions derived from the sensor failure models, the algorithm could generate a Pareto front, allowing practitioners to trade off robust information entropy against total financial cost explicitly.