Seismic Assessment of Historical Masonry Buildings at Different Scale Levels: A Review

Abstract: The severe losses caused by recent earthquakes have stressed the vulnerability of historical masonry constructions to horizontal seismic actions, highlighting the need for reliable approaches for structural assessment and subsequent retrofit. During the last decades, the scientific community has widely investigated tools to analyse the performance of such structural typologies, resulting in a multitude of different methodologies depending on the building's features and the goal of the analysis. The task is particularly challenging because of the frequently limited knowledge of the building's current state and the high structural complexity due to overlapped construction phases. A general literature review of the methods adopted for the structural assessment of historical masonry buildings is proposed in the present paper. The methods are grouped according to the operational scale, providing an overview of the current state of the art.


Introduction
The conservation of historical masonry buildings has been widely debated, especially over the last half-century, given their undeniable social and economic importance. On the one hand, cultural heritage is necessary for preserving collective memory and identity [1]; on the other hand, in places where tourism is the driving force of the economy, the loss of cultural heritage would strongly affect the quality of life of the local population [2].
The concept of conservation can be seen, in general terms, as the set of practices oriented to safeguard an asset for future generations, repairing possible existing damage and preventing potential deficiencies and critical issues. It is widely known that unreinforced masonry structures are designed to withstand static gravitational loads but are otherwise generally vulnerable even to low-intensity horizontal seismic actions, resulting in cracking and in partial or global structural collapses. In this framework, accounting for conservation needs, structural analysis can be seen as a necessary tool for understanding the features characterising the building's current state and for eventually planning retrofit measures to address the detected deficiencies. According to Roca, Cervera [3], structural analysis marks all the steps contributing to the effective conservation of historical buildings, starting with the diagnosis and leading to a reliable retrofit design.
More than for ordinary buildings, in the case of historical constructions the retrofit design needs to pursue the principle of minimum intervention, applying only those measures required to reduce risks for people, structures and artworks, and avoiding whatever could unnecessarily affect the authenticity of the asset. The choice of a proper method for assessing the structural performance of an existing building therefore acquires a primary role in preventing over- or understatement of the effective capacity under static or dynamic/seismic actions, which could lead to incorrect retrofit planning and consequent economic waste.
In the last decades, the rapid improvement of information technologies (IT) has promoted the application of increasingly sophisticated computer tools for the analysis of historical buildings. Complex nonlinear analyses are becoming ever more widespread, allowing numerical simulations that were unthinkable until a few years ago. Despite that, the analysis of cultural heritage retains some challenging features that cannot be overcome by simply increasing computing power: historical buildings are characterised by limited initial knowledge concerning geometry, structural features, construction systems, material characterisation, inner heterogeneity due to overlapped construction phases, etc., and these aspects cannot be automatically accounted for in numerical analyses.
In the present paper, a review of the principal existing methodologies for the structural assessment of historical buildings is presented and discussed with a critical approach, trying to point out the main limitations and possibilities. The selected methodologies are presented by distinguishing the level of detail (in other words, the 'scale') adopted for the modelling of materials, elements and whole buildings. In this sense, 'large-scale approaches', as referred to in the following, are also known as 'territorial approaches' and include methods for the territorial/urban-scale evaluation of a large number of masonry buildings, often connected to risk assessment; 'small-scale approaches' are adopted for portions of a building, structural components and single buildings, in the latter case possibly including building aggregates, even if that matter is highly complex. In several cases, the choice between a large-scale and a small-scale method for modelling a single building depends on structural features and construction properties, and the large-scale method, in general easier to apply, can be preliminarily used as a 'screening tool' to identify critical issues to be further explored through more detailed approaches.
Admittedly, a complete discussion of the overall scenario of current methodologies is not possible due to the multitude of approaches and variants that have arisen over the years from the first proposed methods. The aim of the present work is therefore to provide a detailed review of the most common and most widely adopted methods for the vulnerability assessment of cultural heritage. The concept of seismic risk can be readily related, since risk is provided by the convolution of vulnerability, exposure and seismic hazard.

Large-Scale Approaches
Large-scale approaches are usually related to the concept of seismic risk assessment, which is helpful for simultaneously analysing a large number of buildings at a territorial scale, frequently leading to the identification of those mainly requiring retrofit or maintenance interventions. A multitude of methodologies for the evaluation of the three factors defining the seismic risk (i.e., exposure, vulnerability and seismic hazard) are currently available, accounting for different building typologies and different degrees of complexity [4][5][6][7][8][9]. In the field of historical buildings, particular relevance is given to vulnerability, which is directly related to the construction typology and its structural features, thus representing the aspect that can be improved through engineering solutions.
To simplify the issue, the large-scale approaches can be grouped into (i) empirical methods, (ii) mechanical methods and (iii) hybrid methods (Figure 1, adapted from [10]), which are detailed in the following.

Empirical Methods
The empirical (i.e., macroseismic) methods began to spread around the 1970s, based on post-earthquake damage surveys and on the qualitative evaluation and statistical elaboration of a rather reduced list of parameters. Their empirical nature represents at once their strength and their weakness: it allows the analysis of a large number of buildings at a territorial or even national scale but, on the other hand, yields rather rough results, since these are affected by purely qualitative information and by the need to define approximately homogeneous classes into which buildings are finally grouped. The roughness of the result is also due to the fact that post-earthquake damage surveys are often incomplete, being focused on the most damaged areas near the earthquake's epicentre and therefore underestimating the real number of undamaged buildings [11].

Empirical methods are particularly useful when attempting to define the damage scenario at a large-scale level; the numerous studies performed in the field led to the outline of three different approaches: (i) the damage probability matrix (DPM) method, (ii) the fragility curves method, and (iii) the vulnerability index method (VIM); see Table 1.

Damage Probability Matrix Methods (DPM)
The DPM methods express the conditional probability of reaching a certain damage level D during an earthquake of intensity i, P[D = j|i], derived from observing post-event damages in a specific site after a seismic event of a given intensity. The DPM methods gather the damage grade distribution for each building vulnerability class in the analysed area according to different macroseismic intensities of the event. Thus, the same probability of reaching a specific damage state for a given seismic intensity is associated with each element of a vulnerability class. DPM methods, first introduced by Whitman and Cornell [45], allow a rapid and relatively cheap evaluation of the seismic risk of a large number of buildings. They show some limits related to the need for a huge amount of data covering all the macroseismic intensities and building typologies of a specific area: this means, practically, that they are suitable for regions with extensive historical seismicity records and large available databases [46]. To be sufficiently reliable, DPM methods should therefore be used in the same region where they were originally calibrated, since they are influenced by structural and architectural features [47].
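As a minimal illustration of the structure of a DPM, the matrix for one vulnerability class can be sketched as a lookup table of P[D = j|i]; every probability below is invented for the example, not observed or calibrated data.

```python
# Hypothetical DPM for a single vulnerability class.
# Rows: macroseismic intensity; columns: damage grades D0..D3.
# All values are illustrative, not survey data.
DPM = {
    "VI":   [0.60, 0.25, 0.10, 0.05],
    "VII":  [0.35, 0.30, 0.20, 0.15],
    "VIII": [0.15, 0.25, 0.30, 0.30],
}

def p_damage(intensity: str, grade: int) -> float:
    """Conditional probability P[D = grade | intensity] for this class."""
    return DPM[intensity][grade]

# Each row must be a valid probability distribution over the grades.
for row in DPM.values():
    assert abs(sum(row) - 1.0) < 1e-9
```

Every building assigned to this class at a given intensity shares the same row, which is precisely the simplification (and the data hunger) discussed above.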
In the Italian national scenario, the devastating Friuli (1976) and Irpinia (1980) seismic events promoted the first systematic applications of large-scale vulnerability assessment methods [12,13,48,49]. In Braga, Dolce [12], the statistical analysis of the damage caused by the Irpinia seismic event led to the elaboration of a DPM-based procedure defining vulnerability classes that account for vertical and horizontal structural typologies. Each class had the same conditional probability of reaching a given damage grade, expressed through a binomial distribution with parameter ranging between 0 and 1, for a given seismic intensity defined in terms of the Medvedev-Sponheuer-Karnik scale (MSK scale) [50]. Similar applications, just to cite a few, can be found in De Natale, Madariaga [13], Dolce, Sabetta [14] and Di Pasquale and Orsini [15].
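The binomial damage model mentioned above can be sketched as follows: a single mean damage parameter d in [0, 1] fixes the probability of every damage grade. This is a generic illustration of the idea, not the calibrated procedure of [12].

```python
from math import comb

def binomial_damage_distribution(d: float, n: int = 5) -> list[float]:
    """Probability of each damage grade k = 0..n for a mean damage
    parameter d in [0, 1]: one scalar d generates the whole column
    of the damage probability matrix via the binomial model."""
    return [comb(n, k) * d**k * (1 - d) ** (n - k) for k in range(n + 1)]

probs = binomial_damage_distribution(0.3)
# The grades form a valid distribution whose mode lies near n*d.
assert abs(sum(probs) - 1.0) < 1e-9
assert probs.index(max(probs)) == 1
```

The appeal of the model is evident: a class observed at a given intensity is summarised by one number d instead of a full empirical histogram.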
The definition of a new European Macroseismic Scale, the EMS98 [51], allowed the first relevant evolution of the method proposed by Braga, Dolce [12]. Depending on materials and the code design level, the EMS grouped buildings into six classes (from A to F) based on the expected behaviour during a seismic event. The damage propensity of each class was depicted through the same fragility curve, and five discrete damage levels were proposed, differentiated as a function of the seriousness and extent of the damage occurring within the structural elements [21,52]. The EMS98 was used to integrate the DPM method, such as in Dolce, Masi [16], introducing an additional vulnerability class to account for buildings realised since 1980, which show a lower vulnerability since they were retrofitted or designed in agreement with more recent seismic codes. Giovinazzi and Lagomarsino [17], later refined in Bernardini, Giovinazzi [18], derived a complete DPM from the EMS definition according to six vulnerability classes, six damage grades and macroseismic intensities from V to XII. Fuzzy sets were used to associate numerical ranges to the damage frequencies expressed through linguistic terms in the EMS98, and the achieved results were used, for instance, in Oliveira, Ferreira [19].
A similar attempt was made by Di Pasquale, Orsini [53], replacing the original MSK scale with the MCS (Sieberg, 1930) scale, the Mercalli-Cancani-Sieberg scale widely used in the Italian building catalogues. Further, the number of buildings was replaced by the number of dwellings to fit the 1991 Italian National Statistical Office (ISTAT) data. In Bernardini and Lagomarsino [20], the DPM method was applied to the case of monumental buildings with a knowledge-based approach. Since the EMS98 scale refers to ordinary buildings, a new vulnerability model was proposed by grouping into class types the monumental buildings with similar use, architecture and, potentially, seismic behaviour, according to the observation of post-earthquake damages described in the available literature.
The DPM was, therefore, generally used to describe the damage of a set of buildings with observed homogeneous behaviour, leading to a discrete number of resulting classes. Due to the frequently limited data availability, not enough to cover all the building typologies and all the intensities required for a complete DPM model, expert judgment [54], neural network systems [55] or fuzzy set theory [56] were used as supporting or replacement tools, besides the probabilistic processing of the surveyed data [57]. In Lagomarsino and Cattari [21], binomial coefficients were used to complete the damage levels of the matrix when only few data were available, owing to the lack of information on damage grades for all intensity levels at a given site for a given building type.
Considering the huge variety of historical buildings, which differ internally in materials, construction techniques, geometry, etc., the application of DPM methods, even when conceived for single structural units within an aggregate, is not always suitable, and other approaches can be preferred.

Fragility Curves Methods
The fragility curve methods overcome the DPM limitation represented by the discretization of damage states and the consequent need for large amounts of data that are not always available. A continuous approach is adopted to characterise the structure's damage for any earthquake intensity, evaluating the probability of exceeding a certain damage state ds for increasing values of a seismic intensity-related parameter (e.g., the spectral displacement Sd) [58]. The final output is a fragility (or vulnerability) curve correlating the mean damage grade with the earthquake intensity. As for DPM, the fragility curves method has limited applicability, since the curves should be used in the same region where they were defined and based on a consistent database. The curves depend on the building features, e.g., construction techniques, types of workmanship, materials, etc., and a relevant amount of data is needed to properly correlate observed damages with expected ones [59].
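A lognormal fragility curve of the kind described above can be sketched in a few lines; the median and dispersion values used in the example are placeholders, not calibrated parameters for any building class.

```python
from math import erf, log, sqrt

def fragility(im: float, median: float, beta: float) -> float:
    """P(damage state >= ds | intensity measure im) for a lognormal
    fragility curve with the given median intensity and log-standard
    deviation beta (both illustrative here)."""
    if im <= 0.0:
        return 0.0
    z = (log(im) - log(median)) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

# At the median intensity, the exceedance probability is 50%.
assert abs(fragility(0.25, median=0.25, beta=0.6) - 0.5) < 1e-9
```

The continuous shape is what replaces the discrete DPM columns: one curve per damage state, evaluated at any intensity level.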
The fragility curve method initially experienced some difficulties in matching the continuous nature of the method with the discrete nature of the macroseismic intensity measurements of that time [10]. Spence, Coburn [22] elaborated the Parameterless Scale of Intensity (PSI), able to derive vulnerability functions starting from damage observations based on the MSK scale. The PSI was also used by Orsini [24] to analyse areas stricken by the Irpinia earthquake, correlating PSI and PGA through empirical functions and working at the scale of single dwellings. Other examples at the Italian level were proposed by Sabetta, Goretti [23] and Rota, Penna [25]. The former defined fragility curves as a function of PGA, Arias Intensity and effective peak acceleration [27], while the latter processed observational damage probability matrices to obtain lognormal typological fragility curves relating, for five damage states, the damage probability to the mean PGA referred to the building's municipality. The PSI did not find wide application in the field of historical clusters since, being based on empirical data, it assumes a large sample size [46].
Fragility curves can also be defined with reference to spectral acceleration or spectral displacement instead of PGA or macroseismic intensity [28][29][30]. If spectral displacement is used, relationships can be established between the frequency content of the ground motion and the fundamental period of vibration of the building stock.
Since fragility curves should be determined and used with reference to the same region, over the years they have been developed for different places and building typologies: to cite a few, Ghodrati Amiri, Jalalian [26], Rota, Penna [31], Del Gaudio, De Martino [32], Rosti, Del Gaudio [33] and Rosti, Rota [11]. Nevertheless, as for DPM methods, their use for analysing buildings as variable as historical ones should be carefully evaluated and preceded by a specific study of the area of interest.

Vulnerability Index Methods (VIM)
The VIM, like the fragility curve methods, tries to overcome the limits of DPM by proposing a continuous approach to evaluate the probability of exceeding a certain damage level, given an earthquake characterised by a specific macroseismic intensity or PGA. The vulnerability evaluation is based on the buildings' structural features (instead of the typologies adopted in DPM), and the resulting 'vulnerability index' (Iv) is the weighted sum of selected parameters. Data related to observed damages are used to calibrate vulnerability functions for buildings belonging to the same typology. The use of instrumental parameters (e.g., PGA) instead of macroseismic measures allows for easier comparison of results even if, as stressed by D'Ayala and Novelli [60], the accuracy provided by the correlation between PGA and damages has recently been questioned [61][62][63]. The main shortcoming of VIM is that the result is significantly affected by the technician's judgment; moreover, the selected parameters and their weights carry a certain degree of uncertainty and are related to the analysed building typology [10,63].
The VIM was introduced in 1984, starting from the elaboration of a huge amount of field survey damage data based on the survey form proposed by Benedetti and Petrini [34] and applied by Benedetti, Benzoni [35]. Each parameter was qualified by a score ranging between class A (situation aligned with the Italian code prescriptions) and class D (most unsafe condition) and weighted depending on the relative influence of the considered structural feature on the building vulnerability. The weighted parameters were finally added up, providing the building vulnerability index (Iv).
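The weighted-sum scheme can be sketched as follows; the class scores and weights below are hypothetical stand-ins in the spirit of the Benedetti-Petrini form, not the calibrated GNDT values.

```python
# Hypothetical class scores (A = best condition, D = worst).
# These numbers are illustrative, not the calibrated ones.
CLASS_SCORE = {"A": 0.0, "B": 5.0, "C": 20.0, "D": 45.0}

def vulnerability_index(survey: list[tuple[str, float]]) -> float:
    """Iv as the weighted sum of class scores, where
    survey = [(assigned_class, parameter_weight), ...]."""
    return sum(CLASS_SCORE[cls] * weight for cls, weight in survey)

# Example: three surveyed parameters, one highly vulnerable (class D)
# feature carrying a larger weight.
iv = vulnerability_index([("B", 1.0), ("B", 0.5), ("D", 1.5)])
assert iv == 75.0
```

The subjectivity discussed above enters exactly here: both the class assignment and the weights are expert choices.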
Benedetti and Petrini's [34] proposal was the basis of the GNDT II level form [36] for existing masonry and ordinary reinforced concrete structures. The form was developed from post-damage observations to identify the primary structural system and the relevant deficiencies through visual inspections. Compared to Benedetti and Petrini [34], modifications were introduced in the parameter list, such as the inclusion of a non-structural element parameter, and in the assigned scores, recalibrated based on the performed damage survey. The GNDT II level was implemented in Puncello, Caprili [64] to fit the case of historical monumental masonry buildings. After first disaggregating the building into its structural units, a survey form was provided to account for the main structural and geometrical features, determining a score for each unit. As a result, the most vulnerable unit within the aggregate was identified, thereby suggesting the order to follow for more in-depth investigation and the design of retrofit interventions.
A significant VIM update is due to Formisano, Florio [37] and was applied in many further works [38][39][40][42][65]. The issue of masonry building aggregates, reflecting the structural layout of many historic Italian city centres, was there first raised and analysed. Inspired by Lagomarsino and Giovinazzi's [66] research, Formisano, Florio [37] integrated the original form by introducing five parameters that account for the interaction among adjacent buildings. The scores and the weighting factors were determined by building 3D models of a representative sample building through the 3Muri® software [67]. The issue of building interaction was also addressed by Vicente, Parodi [43], who proposed an updated survey table that revised some existing parameter criteria and introduced new ones accounting for the interaction among ordinary buildings.
Besides quantifying, to some extent, the safety under seismic loads, the Iv can contribute to defining vulnerability functions if related, through stochastic methods, to a global damage index D of buildings of the same typology under equal macroseismic intensity I (or PGA). The index D is provided by the weighted combination of values describing the post-earthquake state of the building and ranges between 0% and 100% [47]. The damage index represents the ratio of repair cost to replacement cost: it is negligible for PGA values below a certain threshold, then increases linearly up to the collapse PGA, beyond which it takes a unitary value [10]. Recently, Zuccaro et al. [44] proposed a model of vulnerability curves, in terms of PGA, for masonry structures in Italy, using a heuristic approach starting from damage probability matrices (DPMs) and adopting the information collected in a database of major Italian seismic events.
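The piecewise-linear damage index just described can be written explicitly; the threshold and collapse PGA values in the example are arbitrary illustrative figures.

```python
def damage_index(pga: float, pga_threshold: float, pga_collapse: float) -> float:
    """Damage index D(pga) as ratio of repair cost to replacement cost:
    zero below the damage-initiation threshold, growing linearly up to
    the collapse PGA, unitary beyond it."""
    if pga <= pga_threshold:
        return 0.0
    if pga >= pga_collapse:
        return 1.0
    return (pga - pga_threshold) / (pga_collapse - pga_threshold)

# Illustrative thresholds (in units of g): initiation 0.1, collapse 0.5.
assert damage_index(0.05, 0.1, 0.5) == 0.0
assert abs(damage_index(0.3, 0.1, 0.5) - 0.5) < 1e-9
assert damage_index(0.7, 0.1, 0.5) == 1.0
```

A vulnerability function is then obtained by letting the two thresholds depend on Iv, so that a more vulnerable building starts to accumulate damage at a lower PGA.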

Mechanical Methods
To assess vulnerability, mechanical methods employ mechanical models of different complexity relating the main building features to the structural response under seismic action.
Unlike empirical methods, mechanical ones are based on a consistent number of parameters describing the structural response of the entire structure, or portions of it, through the identification of failure modes or force-displacement curves. They may therefore require more complex mathematical formulations, and their use has been encouraged by the recent growth of computing power.
Mechanical methods are normally adopted when retrofit projects must be developed for a rather small group of constructions, being less suitable for the simultaneous analysis of a large building sample.
A further distinction can be made between simplified and detailed mechanical methods, described in the following; see Table 2.

Simplified Mechanical Methods
Simplified mechanical methods are used to analyse a relatively large number of buildings based on a few mechanical and geometrical input parameters, leading to a reliable qualitative evaluation in a relatively short time. Although more time-consuming than empirical methods, they provide higher result accuracy, since they are based on data with a direct physical meaning.
Among them are methods based on the evaluation of a single resistance factor related to the likely predominant collapse mechanism (e.g., in-plane shear failure), which therefore depend on the characteristics of the building typology. The result is approximate, since the collapse mode is defined a priori, but has the advantage of requiring a quite limited number of geometrical and material features, hence being relatively fast. After the Umbria-Marche earthquake (1997), Lourenço and Roque [73] elaborated one of the first procedures of this kind, later extended by other contributors such as Lourenço, Oliveira [75]. The procedure is based mostly on geometrical data and computes three indexes (i.e., in-plan area ratio, area-to-weight ratio and base shear ratio) that, compared with reference values, allow the most vulnerable buildings within the same seismicity zone to be identified. Similarly, the SAVE procedure, determining the collapse peak ground acceleration (PGA) in each direction starting from the building's shear resistance evaluated through geometrical data, was proposed by Dolce and Moroni [71]. Chinni, Mazzotti [76] extended the SAVE approach in the RE.SIS.TO method, accounting for possible structural criticalities highlighted during in situ surveys by means of reduction coefficients.
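Two of the geometric screening ratios mentioned above can be sketched from purely geometrical data; the input values and units in the example are illustrative, and the reference thresholds of [73] are not reproduced here.

```python
def screening_indexes(wall_area_x: float, wall_area_y: float,
                      plan_area: float, weight: float) -> dict:
    """Simplified screening indexes in the spirit of Lourenço and
    Roque: resisting wall area over plan area, and wall area over
    total weight, evaluated for each principal direction."""
    return {
        "in_plan_ratio_x": wall_area_x / plan_area,
        "in_plan_ratio_y": wall_area_y / plan_area,
        "area_to_weight_x": wall_area_x / weight,
        "area_to_weight_y": wall_area_y / weight,
    }

# Example: 12 and 8 m2 of resisting walls in the two directions,
# 200 m2 plan area, 4000 kN total weight (all invented numbers).
idx = screening_indexes(12.0, 8.0, 200.0, 4000.0)
assert abs(idx["in_plan_ratio_x"] - 0.06) < 1e-12
```

Comparing such ratios with empirically derived reference values is what turns a simple geometric survey into a rapid ranking of the most vulnerable buildings.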
Besides the above-presented approaches, another category of simplified mechanical methods is based on kinematic models applied to structures subdivided into macro-elements. Such methods aim to verify the activation of failure modes involving one or more portions of the structure. The demand is defined through parameters such as drift, displacement or PGA, associated with an output in terms of collapse-load multipliers representing a vulnerability index to be correlated with damage thresholds. The subdivision into macro-elements, necessary for the method's application, reflects the results of the knowledge investigations (e.g., structural system, connections between perpendicular walls, structural discontinuities, historical evolution, surveyed crack pattern, etc.). As stressed by Chiozzi, Grillanda [92] and Torelli, D'Ayala [2], the kinematic analysis presents some shortcomings, therefore requiring careful handling of the results. The method is based on the strong simplification of a no-tension material with unlimited compressive strength, while more refined material properties ignored by the model, such as orthotropic behaviour, limited compressive strength and shear-normal stress interaction, can influence the collapse modality [93]. Further, as for all simplified approaches, the geometry and the loading conditions are often approximated, and only a pre-defined set of collapse mechanisms is analysed, possibly leading to an inaccurate assessment of the structural capacity.
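The simplest member of this family of kinematic checks, rigid-body overturning of a free-standing wall about its base edge, yields a collapse-load multiplier directly from the geometry. This is the textbook case, under exactly the assumptions criticised above: no-tension joints, infinite compressive strength and a monolithic wall.

```python
def overturning_multiplier(thickness: float, height: float) -> float:
    """Collapse-load multiplier (ratio of horizontal to vertical load)
    for rigid overturning of a free-standing wall about its base edge.
    Equating the stabilising moment W*t/2 with the overturning moment
    alpha*W*h/2 gives alpha0 = t/h."""
    return thickness / height

# A 0.5 m thick, 5.0 m tall wall activates at roughly 0.1 g.
assert abs(overturning_multiplier(0.5, 5.0) - 0.1) < 1e-12
```

Real macro-element analyses add restraints, overburden loads and crack-pattern-based geometry, but the output remains a multiplier of this kind, to be compared with the seismic demand.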
The VULNUS method introduced by Bernardini, Gori [68] represents one of the first relevant simplified mechanical models based on the kinematic approach developed for masonry structures. It was calibrated for small and regular buildings and provided three output indexes representing corresponding collapse multipliers. The first two indexes accounted for in-plane and out-of-plane behaviour, considering only two failure modes; the third was a weighted sum of parameters related to the height, connections and homogeneity between adjacent buildings. D'Ayala and Speranza [70] and D'Ayala and Speranza [94] proposed an improvement of the VULNUS method: starting from the collection of geometrical data, the FaMIVE spreadsheet was elaborated, capable of identifying a wide range of the most dangerous local mechanisms for the outward façades of masonry building aggregates. Only external inspections are required, resulting in a quick procedure applicable even to unsafe and inaccessible buildings in a post-emergency phase. The FaMIVE method was later improved [72,74] by including cohesion and irregular opening layouts, the latter representing a more realistic situation for historical buildings.
Calvi [69] introduced the limit state method, later applied to the city centre of Catania. The method was based on the evaluation of displacement and energy dissipation capacity, and results were presented in terms of the probability of occurrence of each damage limit state for a specified earthquake motion, depicted by means of a displacement response spectrum for each building. The method can be reliably used for global loss prediction but is less suited to evaluating the performance of single buildings.
Concerning masonry structures, Restrepo-Vélez and Magenes [95], Restrepo-Vélez [96], and Modena, Lourenço [97] proposed the MeBaSe (Mechanical Based Procedure for the Seismic Risk Estimation of Unreinforced Masonry Buildings) procedure. The approach accounted for three limit states, related to in-plane failure mechanisms, according to the reached damage level; the limit state functions were determined through geometrical and material data, using correction factors to reduce a three-dimensional building to a two-dimensional model. Unlike Calvi [69], out-of-plane phenomena were included through one-way and two-way bending mechanisms.
The two broad categories of simplified mechanical models mentioned above are brought together in the approach proposed by the Italian Guidelines for the analysis and retrofitting of cultural heritage [98]. The Guidelines adopted the principles of the performance-based approach outlined by the Italian Code [99] to implement the Eurocode [100] prescriptions at the national level [2]. They outlined three analysis levels of increasing complexity: the first two belong to the simplified mechanical approaches, while the last one proposes global analysis through proper numerical techniques. In particular, the first level (LV1), similar to the SAVE procedure, was meant to be applied at a territorial level, providing different prescriptions depending on the construction typology. The second level (LV2) is based on the kinematic approach applied to the structure subdivided into macro-elements. The Italian Guidelines have been applied in several studies, such as Casapulla, Argiento [101], with reference to a masonry palace, and Torelli, D'Ayala [2], with reference to towers.
Having been developed and calibrated for different building typologies, the mentioned methods can generally also be used for a relatively quick estimation of the structural capacity of monumental buildings, as long as high regularity and homogeneity of the structural layout are not required.

Detailed Mechanical Methods
The detailed mechanical methods are based on the capacity spectrum method (CSM) [77,102,103], assessing the expected building performance through a comparison in spectral coordinates. A graphical representation of the global capacity, in terms of a force-displacement relationship, is compared with the response spectrum representation of the earthquake demand. In particular, the capacity curve is determined by applying an incremental static load to the structure and recording, step by step, the base shear and the displacement of a reference point at the top.
Detailed mechanical methods developed for existing buildings usually require the classification of buildings, depending on typology and seismic design, and the definition of the damage states. A capacity curve is then associated with each building class and correlated with a spectral demand curve, described by acceleration-displacement response spectra suitably reduced to account for the inelastic behaviour. Finally, the evaluation of the seismic response is performed in terms of performance points [60].
The approach was first proposed by HAZUS [77], introducing 33 classes based on functional destination and 36 model buildings referring to the material and the structural system. Each building class was characterised by four damage levels for structural and non-structural elements and the relative fragility curves; pushover curves and capacity curves were also defined, accounting for the indications outlined in ATC-40 [104] and FEMA 356 [105].
One of the main advantages of these methods is that they do not require detailed information on all the analysed buildings. Once the capacity curves are defined for each typology, only the damage thresholds and the number of buildings per typology are needed. The structural behaviour of a given building within the range defined by its reference fragility curve can then be assessed through a parametric analysis [60].
The method's strength is, at the same time, its weakness, since it provides reliable results only if the analysed area is characterised by homogeneity of the building typology and by a consolidated seismic design code [58]. Such a shortcoming makes the method more suitable for the American scenario than for European ones; despite that, a few attempts to use the HAZUS approach in the European framework can be found in the scientific literature [78,79,82].
A mechanics-based approach similar to that proposed in HAZUS [77] was developed within the framework of the RISK-UE project [80]. Lagomarsino and Giovinazzi [66] proposed a double-level approach, detailed later among the hybrid methods, including a mechanical model applicable when PGA and spectral values are provided to describe the hazard. A simplified version of the CSM was introduced to evaluate the building's seismic performance in terms of damage limit states.
More recently, Borzi, Pinho [83] proposed a mechanical method for the large-scale assessment of RC buildings, later extended to masonry ones [84]. For each building in a random population, the capacity limit identified on the capacity curve is compared with the demand deduced from a response spectrum, finally resulting in vulnerability curves.
Particularly important for cultural heritage assets was the modified CSM proposed within the framework of the PERPETUATE project [87]. It introduced corrective factors for the building's irregularities and proposed new limit states, specifically calibrated for cultural heritage buildings, accounting for human life safety and specific conservation requirements. Each limit state was correlated to damage measures for single structural elements and for the entire building. As for the other CSM methods, the approach was calibrated assuming perfect connections between perpendicular walls. The CSM proposed within the PERPETUATE project was further developed in Lagomarsino [89], which investigated the rocking of masonry structures. The out-of-plane overturning was predicted with reference to the maximum spectral displacement, instead of the PGA, as seismic intensity measure, and a new procedure was proposed for assessing the capacity curve of complex multiple-block mechanisms and the related damage levels. For an insight into methods based on rocking analysis, the reader is referred, for instance, to Casapulla, Giresini [106].
In Lagomarsino, Cattari [88], the DBV-masonry method (displacement-based vulnerability) was proposed by integrating the model elaborated by Cattari, Curti [107] with some contributions presented in Pagnini, Vicente [85] and Cattari, Lagomarsino [86]. It investigated the global response related to the activation of the in-plane response of masonry panels. The correlation of geometrical, mechanical and technological parameters, together with a given global collapse mechanism, determined the parameters describing the capacity curve, assumed as bilinear with elastic-perfectly-plastic behaviour. Corrective factors accounted for the peculiarities of existing buildings and helped improve the assessment of the vibration period (e.g., non-homogeneous size of piers, irregularities in the in-plan configuration, spandrel stiffness).
Most of the existing procedures belonging to the CSM approach analyse a reduced number of failure mechanisms, mostly accounting only for in-plane or frame-like behaviour. Introduced for simplification purposes, this choice is not entirely suitable for historic masonry structures: the lack of sufficient diaphragm action and of proper tying of horizontal to vertical members may, in fact, induce significant out-of-plane phenomena [90]. Some attempts to overcome this shortcoming can be found in the recent literature: for instance, Vamvatsikos and Pantazopoulou [90] proposed a procedure for reproducing the global vibration characteristics while estimating the typical local failures through a local deformation shape. The simplification adopted avoided excessive computing effort but restricted the applicability to simple box-type buildings with a rectangular shape and flexible floors. Giordano, De Luca [91] presented a mechanical approach to derive the out-of-plane response of unreinforced masonry walls according to different boundary conditions and vertical loads. The final vulnerability assessment was proposed in accordance with the modified CSM method of Lagomarsino, Modaressi [87].

Hybrid Methods
Hybrid methods (see Table 3) combine empirical and mechanical methods, trying to overcome their respective limits (e.g., partial or missing data and computational effort) by using specific models. Numerical input/output from mechanical models is combined with statistical and stochastic data to define the risk-related parameters. In this way, the most suitable approach can be selected according to the available information and the building typology. Issues related to applicability and reliability probably represent the main shortcoming of hybrid methods, since they are influenced by the definition of proper mechanical and structural features in numerical terms and by the appropriate management of the uncertainty sources related to vulnerability, exposure and hazard [60].
A hybrid method widely used in Greece, even if calibrated for RC buildings, was developed by Kappos, Stylianidis [110] to analyse an area where only a few empirical data were available. To avoid resorting to data collected in similar regions, the authors proposed a mechanical model able to reliably describe the building behaviour under a representative seismic load. The models were calibrated against earthquake data so that damage probability matrices (DPMs) could be obtained in a cost-benefit framework. The method was broadly used with a few variations, showing good reliability when buildings are investigated with accuracy [111,112,114].
In the framework of the RISK-UE project [66], two methodologies were developed for the vulnerability assessment of existing buildings and the evaluation of earthquake risk scenarios. The first accounted for a macroseismic model defining the vulnerability through vulnerability curves and macroseismic hazard maps, while the second proposed a mechanics-based model defining the vulnerability through capacity curves and requiring the hazard in terms of peak ground acceleration [21]. As explained in Lagomarsino [115], three levels were defined according to an increasing level of knowledge and a different operating scale.
More recently, the procedure developed by Maio, Vicente [116] for the assessment of a building aggregate was introduced. The cluster was decomposed into its structural units, and a hybrid technique based on the CSM was adopted by modelling the structures through the 3Muri® software and performing pushover analyses. A correlation between the bilinear capacity curve and the EMS98 was then adopted to estimate fragility curves and damage probability distributions. Moreover, the indirect technique of Formisano, Florio [37] and Vicente, Parodi [43] was used to allow a comparison of the results. A similar analysis of a masonry cluster was performed by Chiumiento and Formisano [117] to investigate the influence of the position of the structural unit within the aggregate on the structural capacity. Two structural units placed in intermediate and head positions were selected and analysed by means of a pushover analysis performed with the 3Muri® software and a quick assessment based on Formisano, Florio [37].
Other examples of hybrid methods can be found, for instance, in Giovinazzi and Lagomarsino [17] and Giovinazzi and Lagomarsino [57], where the VIM and the DPM (updated in agreement with the EMS98) are combined with an analytical equation correlating the seismic input, in terms of macroseismic intensity, to the physical damage.

Small-Scale Approaches
The small-scale approaches are mainly adopted to analyse a portion of a building, a single building or, at most, a structural aggregate composed of different units variously interconnected. Depending on the phenomenon to be investigated, the aim of the investigation, and the available data and solving tools, several numerical methodologies have been developed over the years. Especially in the last decade, the availability of increasingly sophisticated software and increased computing capacity has encouraged the adoption of such approaches to investigate the structural behaviour of complex structures.
To simplify, the methods can be grouped into (i) limit analysis methods, (ii) FE methods and (iii) simplified methods, as summarised in Figure 2.

In the framework of small-scale approaches, it is also worth mentioning the block-based modelling strategies, which account for rigid or deformable blocks interacting through frictional or cohesive-frictional contact surfaces [118]. While offering the chance to represent the masonry heterogeneity, they entail a remarkable computational demand, which has so far limited their applicability mainly to the panel scale, although some applications to entire buildings may be found in the literature [119,120]. Block-based strategies are not analysed within this paper; a detailed insight may be found, for instance, in [3,117,121].

Limit Analysis Methods
Limit analysis (LA) methods represent a powerful tool to assess the failure load associated with different collapse mechanisms of structures that can be disaggregated into blocks. The presence of joints, which determine natural predefined weakness planes, makes this kind of discrete modelling and analysis particularly attractive, especially for structures such as arches, piers and abutments of bridges [122]. For similar reasons, it can be successfully used for buildings not characterised by box behaviour, due to the absence of rigid floors or the presence of potential partial collapses, as frequently happens in historical constructions [3,122].
LA generally refers to the lower-bound (static) or upper-bound (kinematic) method for the evaluation of the collapse multiplier and the assessment of the damage; it gives results only for the ultimate condition [123], neglecting intermediate steps, ultimate displacement and post-peak response. LA methods cannot simulate brittle failure modes (e.g., limited shear or compression strength), and they can hardly account for all the possible mechanisms detectable through plastic hinges in a complex structural system [124]; therefore, they are usually less accurate than other methods.
Thanks to the reduced computational effort, LA methods were largely used in the past, even for historical masonry constructions, since they require a limited number of input parameters concerning the materials, information usually missing in such structures [125]. Limit analysis for masonry structures was introduced by Heyman, exploiting the three well-known hypotheses of (i) infinite compressive strength of masonry, (ii) no sliding between parts, and (iii) zero tensile strength [126,127]. Over the years, several studies based on Heyman's formulations addressed the analysis of arches and bridges modelled as discrete systems of rigid blocks [128-130]. Baggio and Trovalusci [131] approached the issue of introducing block sliding through a non-linear mathematical problem, appealing since it avoids assumptions concerning the arrangement of blocks and interfaces. The problem was difficult to solve compared to that resulting from the classical theory, generally based on the associated flow rule. Minimising the load factor to achieve the safest solution was proposed, but was later shown by Orduña and Lourenço [132,133] to be unrealistic in the presence of Coulomb friction. Orduña and Lourenço [125,134] proposed further developments accounting for limited compressive stress, and Ferris and Tin-Loi [135] adopted a mathematical programme with equilibrium constraints. Gilbert, Casapulla [136] proposed an approach requiring the iterative application of linear formulations, later extended to three-dimensional structures, including torsional effects, by Portioli, Casapulla [137].
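Under Heyman's hypotheses, the kinematic approach reduces to a virtual-work balance. A minimal sketch for the out-of-plane overturning of a monolithic wall about a base hinge follows; the geometry and loads are illustrative assumptions, and the horizontal seismic forces are taken as the multiplier lambda times all the weights.

```python
def collapse_multiplier(w, t, h, n=0.0, d=None):
    """Kinematic collapse multiplier for the out-of-plane overturning of a
    rigid wall rotating about a base edge, under Heyman's hypotheses
    (rigid blocks, no sliding, zero tensile strength).

    w: wall weight; t: thickness; h: height; n: overburden load applied at
    the top, at horizontal distance d from the hinge (d defaults to t/2).
    Virtual work with rotation theta about the hinge:
        lambda * (w*h/2 + n*h) * theta = (w*t/2 + n*d) * theta
    """
    if d is None:
        d = t / 2.0
    stabilising = w * t / 2.0 + n * d
    overturning = w * h / 2.0 + n * h
    return stabilising / overturning

# Monolithic wall alone (n = 0): lambda reduces to t/h.
lam = collapse_multiplier(w=100.0, t=0.5, h=3.0)
```

For the bare wall the multiplier is simply the slenderness ratio t/h; an overburden applied at mid-thickness lowers it, since it raises the overturning work more than the stabilising one.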
Full three-dimensional limit analysis of masonry buildings was proposed by Milani, Casolo [138], including mortar joint cohesion and masonry crushing issues. Upper-bound limit analyses of in-plane and out-of-plane loaded walls were performed by adopting the Mohr-Coulomb failure criterion for the bricks, and by implementing interfaces with tension cut-off and compression cap for the mortar joints [139]. Particularly interesting in the field of ancient buildings were the contributions by Giuffrè [140], Giuffrè and Carocci [141] and Giuffrè [142,143], which allowed identifying recurrent failure modes by observing real damage in traditional constructions. Kinematic LA was applied to rigid blocks, providing useful results even for buildings without box behaviour. In Lagomarsino [121], the procedure was further developed through a combination with the capacity spectrum method.
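A pointwise check of the kind of joint failure criterion mentioned above, Coulomb friction with tension cut-off and compression cap, can be sketched as follows. The cohesion, friction angle and strength values are illustrative assumptions, not parameters from [139].

```python
import math

def joint_state(sigma, tau, c=0.2, phi_deg=35.0, ft=0.1, fc=8.0):
    """Check a mortar-joint stress state against a Mohr-Coulomb criterion
    with tension cut-off and compression cap (illustrative parameters in
    MPa; compression taken as positive sigma).

    Returns 'elastic' or the violated mode: 'tension', 'crushing', 'shear'.
    """
    if -sigma > ft:      # tension cut-off
        return 'tension'
    if sigma > fc:       # compression cap
        return 'crushing'
    if abs(tau) > c + max(sigma, 0.0) * math.tan(math.radians(phi_deg)):
        return 'shear'   # Coulomb friction envelope
    return 'elastic'
```

The three branches correspond to the three surfaces bounding the admissible stress domain: normal tension, normal crushing and frictional sliding.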
Other examples concerning limit analysis methods are provided in Roca, Cervera [3] and in D'Altri, Sarhosis [117].

Finite Element Methods
In the last few years, thanks to the huge development of IT tools, the finite element method (FEM) has become increasingly important for evaluating the structural behaviour of existing structures, including historical buildings. Being generally flexible in reproducing a variety of situations, the FEM is indeed an appealing tool, even if, in complex situations such as historical buildings, accuracy often comes together with high computational effort and numerical issues.
The FE approach includes, as is known, the two steps of modelling the considered structure (FEM) and analysing its behaviour (finite element analysis, FEA), which are investigated in the following.

FEM Modelling: Strategies and Issues Applied to Cultural Heritage
The application of the FE method to the engineering field has been widely debated [144-147], and three different strategies are currently available (see, for instance, Roca, Cervera [3] and Lourenço and Silva [148]) for modelling and analysing structures with an increasing level of detail and accuracy, i.e., micro-modelling, simplified micro-modelling and macro-modelling. The first FEM attempts for masonry structures are due to [149]. The FEM micro- or block-by-block modelling [123] represents the most detailed approach, individually modelling the three masonry components (i.e., blocks/units, mortar, interface) through continuum finite elements for units and mortar and discontinuous elements for the interface, accounting for potential cracks or slip planes. The in-plane and out-of-plane orthotropic masonry behaviour is well reproduced without needing interpretation, and the anisotropy of the material, as characterised by experimental tests, is directly accounted for. The counterpart is the relevant computational burden, which limits the method's applicability to the analysis of local material responses or small structural components.
The computational effort is partially reduced by adopting the simplified micro-modelling [123], where only the units are modelled as a FEM continuum, whereas the mortar and the interface are smeared into the vertical and horizontal joints, represented by discontinuous elements. Even if it succeeds in simplifying the numerical problem, this approach has shown relevant errors when applied to nonlinear analyses and is therefore seldom used for complex structural analysis [148-151]. More information can be found in Sarhosis [152].
The macro-models (see Lourenço and Silva [148] for the numerical and theoretical background) have found a wider application at the building-level analysis thanks to the highly reduced calculation demand. A fictitious homogeneous material is generated by adopting a suitable relationship between average masonry strains and stresses, allowing mesh dimensions larger than the single unit. For historical buildings, the adoption of a macro-model is nearly mandatory due to the presence of multi-leaf, irregular, randomly assembled masonry, almost impossible to represent through micro-modelling and difficult to characterise accurately due to the limitations concerning the execution of destructive tests on cultural heritage [117]. Hereinafter, only the FEM macro-models will be thoroughly analysed.

Modelling of Geometry
The determination of the building geometry is fundamental for a reliable FEM model. Indirect survey techniques, including photogrammetry and terrestrial laser scanning (TLS), are mainly used when the building's size and scale of representation require a high density of point capture and need post-processing. Direct survey techniques, including the total station theodolite (TST), the global positioning system (GPS) or measured drawing, represent less 'automatic' procedures, since they depend on the surveyor's skills. Recently, the former typology has mainly been adopted in the field of historical constructions thanks to the increasing technological capability and the high accuracy granted. In particular, the use of TLS has been demonstrated to be a valid tool for accurately detecting the external geometry (including permanent deformations, out-of-plumbs, local deviations, misalignments, etc.) and the inner morphology (presence of cavities, thicknesses, etc.). TLS succeeds in describing the geometry of hardly accessible areas and can even be adopted in poorly lit environments such as churches [153-165]. TLS allows the simultaneous survey of the geometrical position of a huge number of points (the so-called 'point cloud model'), which can be elaborated in different ways to provide the structural model.
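A simple example of the kind of post-processing applied to point clouds is the estimation of a wall's out-of-plumb from a vertical strip of surveyed points through a least-squares line fit. The synthetic points below are an illustrative assumption, not real TLS data, and the fit is generic numerical processing rather than a TLS-specific algorithm.

```python
def out_of_plumb(points):
    """Least-squares fit of x = a + b*z through surveyed wall points
    (z: height, x: horizontal offset, both in metres). The slope b
    estimates the out-of-plumb ratio; atan(b) gives the inclination."""
    n = len(points)
    sz = sum(z for z, _ in points)
    sx = sum(x for _, x in points)
    szz = sum(z * z for z, _ in points)
    szx = sum(z * x for z, x in points)
    b = (n * szx - sz * sx) / (n * szz - sz * sz)
    a = (sx - b * sz) / n
    return a, b

# Synthetic strip of points on a wall face leaning 2 cm per metre of height,
# sampled every 10 cm from 0 to 8 m.
pts = [(0.1 * i, 0.02 * 0.1 * i) for i in range(81)]
a, b = out_of_plumb(pts)
```

On real data the residuals of the fit, rather than the slope alone, would also reveal local deviations and misalignments of the wall face.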

Modelling of Materials
Nonlinear constitutive laws have been widely used for the seismic assessment of 3D structures, since they are able to represent the behaviour of large and complex buildings by setting a reduced number of parameters. Thanks to the diffusion of commercial FE codes, the current scientific contributions generally use laws based on fracture mechanics (smeared crack models) [166-168], plasticity theory [158,169-173] and damage mechanics [174,175]. Several models currently adopted for masonry structures were initially developed for reinforced concrete ones, e.g., Lubliner, Oliver [176], Weihe, Kröplin [177] and Jankowiak and Lodygowski [178]. Despite their limitations in representing the multi-level anisotropy and the heterogeneity of masonry compared to concrete, their use spread thanks to the availability of commercial FE software and their relative ease of use. For historical buildings, isotropic models have generally been preferred to the (more correct) anisotropic ones, since a complete characterisation of the mechanical behaviour along all the geometrical axes would require expensive experimental campaigns; besides, masonry can be highly irregular within the same building (and even within the same component), therefore requiring the determination of multiple constitutive laws.
Some attempts to develop a proper anisotropic model for masonry have been made, e.g., Berto, Saetta [179] and Pelà, Cervera [180,181], but the considerable computational effort and the numerous parameters required by these approaches make them more suitable for research aims than for real-world case-study applications. For a more detailed review of constitutive masonry models, the reader can refer to D'Altri et al. [117].

Modelling of Components
Once the geometry has been identified, the FEM model can be realised through different element typologies depending on the analysis goals. For vertical structures, solid elements are used when great accuracy is needed, such as when structural and architectural features are strongly related to each other (e.g., complex pillars with capitals). In this way, elements/components are simulated realistically, but a high computational effort is required, therefore limiting their applicability to buildings with reduced dimensions or simple geometry [158,167,182-185], or to portions of them [174]. In Orlando, Betti [185], the problem of simulating multi-layer walls with solid elements is addressed with reference to the analysis of the Rocca Strozzi (Campi Bisenzio, Florence, Italy). Double-leaf walls, composed of two external layers of good-quality masonry and a heterogeneous inner infill, were modelled as an equivalent solid continuum characterised by the same thickness as the existing walls and a homogenised Young's modulus. A similar procedure was also adopted in Chellini, Nardini [161].
The use of shell elements entails a relevant simplification, reducing the numerical demand, and is therefore recommendable even for more complex structures [156,166,168,186,187]. Another example of the use of bidimensional elements for masonry walls can be found in Lourenço, Trujillo [187], which also introduces the possibility of accounting for wall damage: the damage pattern surveyed in the church of St. George of the Latins (Famagusta, Cyprus) was included within the model by reducing the thicknesses of the elements located within the damaged areas.
Modelling the horizontal structures has always represented a significant issue since, unlike in recent buildings, the floors of historical masonry buildings are mainly composed of vaults and timber floors, which can hardly be schematised as infinitely rigid planes. When dealing with timber frame structures, besides modelling frame elements [188], the horizontal structures may be simulated through concentrated masses [156,172,182,185], therefore neglecting their stiffness contribution. This assumption implies that the floors do not guarantee any load transfer from heavily damaged walls to still-efficient structural elements. When dealing with highly deformable structures, neglecting the floor contribution is more realistic than assuming a completely rigid plane [167]: the influence of the modelling strategy for timber frame floors was investigated by Clementi, Gazzani [161], comparing the effects of considering them as (i) all deformable, (ii) partially deformable and partially rigid (as in the surveyed situation) or (iii) all rigid in their plane. Formisano and Massimilla [189] proposed the use of two diagonal trusses in a St. Andrew's cross configuration to represent deformable floors; the axial stiffness of the diagonals was calibrated on the actual floor in-plane stiffness. Alternatively, the adoption of a bidimensional shell element was suggested, accounting for the floor's in-plane stiffness by calibrating proper orthotropic materials. Timber floors as equivalent plates (i.e., plates with the same stiffness as the timber frames) were also adopted by Castellazzi, D'Altri [190].
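The calibration of such equivalent diagonals can be sketched with a small helper. The stiffness relation is derived here from elementary truss mechanics under the assumption that both diagonals remain elastic and share the shear force; it is an illustrative sketch, not the calibration procedure of [189], and all numerical values are assumptions.

```python
import math

def diagonal_area(k_floor, length, width, e_mod):
    """Cross-section area of each diagonal of a St. Andrew's cross so that
    the pair reproduces a target in-plane shear stiffness k_floor of a
    floor bay (length: span along the shear force, width: transverse span,
    e_mod: Young's modulus of the truss material).

    One diagonal of length d = hypot(length, width) contributes a
    horizontal stiffness E*A*cos^2(theta)/d = E*A*length^2/d^3, so with
    both diagonals elastic: k_floor = 2*E*A*length^2/d^3.
    Any consistent unit system works (here N, m, Pa)."""
    d = math.hypot(length, width)
    return k_floor * d ** 3 / (2.0 * e_mod * length ** 2)

# Example: 5 m x 4 m timber floor bay, target in-plane stiffness 8 MN/m,
# equivalent steel diagonals (E = 210 GPa): roughly a 200 mm^2 section.
a_diag = diagonal_area(8.0e6, 5.0, 4.0, 210e9)
```

If only the tension diagonal is assumed active, as is sometimes done for slender braces, the factor 2 drops and the required area doubles.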
Vaulted surfaces can be modelled according to their real shape through both shell elements [161,186] and solid elements [173,184]. The thrust effect of the vaults on the confining walls is properly evaluated, but the computational effort and the meshing drawbacks are relevant when dealing with a high number of vaults. When the aim is to assess the structural capacity of a whole building, it is commonly accepted in the literature to model vaults as equivalent plates [190-195], therefore strongly limiting numerical and meshing issues. Conversely, when the aim is the analysis of a masonry vault, the numerical model shall include both the vault (properly modelled) and the surrounding bearing structures, since the seismic behaviour of the building strongly influences that of the vaults within it, as demonstrated by D'Altri, Castellazzi [173].

Modelling of the 'In-Aggregate' Effect
When dealing with non-isolated buildings, or when there is the need to model only a portion of a building, FE models of historical constructions shall face the issue of representing the interaction with adjacent structures. This condition is quite common, since historical buildings are often part of an urban fabric resulting from progressive additions, demolitions and modifications, which frequently determine a connection among adjacent constructions or even within portions of the same construction. Depending on the toothing level between adjacent structural parts and the resulting reciprocal constraint, fully connected [188] and not connected [158] configurations may be adopted as limit conditions, but intermediate situations may also be captured. Casarin and Modena [166] modelled the Reggio Emilia cathedral by accounting for a portion of the surrounding buildings, calibrating the model through dynamic tests. The comparison of natural frequencies and modal shapes of the two extreme conditions mentioned above was also performed in the analysis of the San Felice sul Panaro fortress [172], later updated by modelling the main tower of the fortress together with some portions of the adjacent elements, while simulating the remaining parts through section surfaces implemented with a suitably varied Young's modulus.
Baggio, Berto [188] analysed the influence of the boundary conditions for a portion of the Palazzo dei Musei (Modena, Italy), considering four different cases: the presence of translational springs with low stiffness, the presence of translational springs with high stiffness, and the two limit conditions of a not-connected and a fully connected building. In the two intermediate situations, the elastic stiffness of the spring elements used to simulate the non-modelled part was determined based on a reliable range of fundamental periods. A similar approach was used by Vaiano, Venanzi [196] for the Sciri Tower in Perugia (Italy).

FE Modelling and Analysis: Strategies Applied to Cultural Heritage
The first attempts at structural evaluation through FEM models were performed with linear elastic analyses, thanks to their relatively reduced numerical effort, partially controlling the intrinsic limits of the analysis through extensive expertise [197-199]. With the overwhelming improvement of computing and software tools, the use of linear analysis has often been limited to a preliminary and relatively fast assessment of the model reliability and definition (e.g., checks on the mesh definition, load definition and distribution, reliability of reactions and deformations), since it generally lacks accuracy for the structural assessment of masonry structures. Besides the complex nonlinear behaviour usually shown by masonry structures, even at low stress levels, linear analyses cannot account for key aspects of the masonry behaviour, such as the no-tension response.
A nonlinear approach appears necessary to properly capture the collapse behaviour, possibly associated with large displacements and a non-negligible role of geometrical nonlinearity. This approach is thus nowadays the most common to define the structural capacity in detail, even if examples of linear approaches may still be found, such as in Caprili, Mangini [200], where the thorough knowledge of the analysed building allowed performing limit state and local mechanism analyses in addition to a linear dynamic analysis, assuming a reduced masonry stiffness to account for the damage condition.
Nonlinear static pushover (PO) analysis is often the method adopted to analyse existing masonry structures, as suggested by current codes (to cite a few, [104,201-205]); PO accounts for possible geometric and material nonlinearities, is relatively easy to execute, and has a reduced computational burden. Many examples of PO applications can be found in the literature of the last two decades [2,138,171,172,174,175,184,206-212]. On the other hand, several weaknesses can be highlighted concerning the adopted load patterns, which usually do not account for degradation, except in the case of adaptive pushover analysis, where the progressive changes in the modal frequencies due to crushing and cracking phenomena are introduced (Papanikolaou and Elnashai [213]; Papanikolaou, Elnashai [214]).
Nonlinear time-history (dynamic) analyses, or nonlinear response-history analyses, can be carried out for masonry structures, providing a more detailed assessment of the structural response to strong seismic inputs but requiring, meanwhile, higher accuracy in modelling the materials, due to the need to include the cyclic performance and the corresponding strength and stiffness degradation. They currently represent the most accurate method available for the structural evaluation of masonry buildings but, although several contributions may be found [158,173,187,188,190], they are still rarely used in engineering practice. The limited use is the consequence of operational issues such as the complexity of the time-integration algorithms, the difficulties in representing damping and identifying the suitable static and dynamic parameters, and the nature of the earthquake records, which exhibit different peculiarities entailing a complex transposition into the numerical analysis [215]. By incorporating the inelastic behaviour of members under cyclic earthquake ground motions, the time-history analysis allows the explicit simulation of hysteretic energy dissipation in the nonlinear range. The analysis provides the dynamic response for the input ground motion, determining the response history data on the relevant demand parameters. The analysis shall be repeated for several ground motions due to the variability in earthquake ground motion [215].
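As an illustration of the ingredients involved, the following sketch integrates an undamped elastic-perfectly-plastic single-degree-of-freedom oscillator with the explicit central-difference scheme under a synthetic base-acceleration pulse. All parameters and the excitation are illustrative assumptions, far simpler than the cyclic constitutive models, damping and recorded ground motions discussed above.

```python
import math

def time_history(m=2.0e4, k=8.0e6, fy=3.0e4, dt=1.0e-3, t_end=4.0):
    """Central-difference integration of an undamped SDOF with an
    elastic-perfectly-plastic spring under a one-second base-acceleration
    pulse. Illustrative parameters (kg, N/m, N). Returns the peak
    displacement and the residual plastic offset of the spring."""
    def ag(t):  # synthetic 1 Hz sine pulse of base acceleration [m/s^2]
        return 2.0 * math.sin(2.0 * math.pi * t) if t < 1.0 else 0.0

    u_prev, u, up = 0.0, 0.0, 0.0  # previous/current displacement, plastic slip
    peak = 0.0
    for i in range(int(t_end / dt)):
        fs = k * (u - up)          # trial spring force
        if fs > fy:                # yielding in the positive direction
            fs = fy
            up = u - fy / k
        elif fs < -fy:             # yielding in the negative direction
            fs = -fy
            up = u + fy / k
        p = -m * ag(i * dt)        # effective earthquake force
        u_next = 2.0 * u - u_prev + dt * dt * (p - fs) / m
        u_prev, u = u, u_next
        peak = max(peak, abs(u))
    return peak, up

peak_disp, plastic_offset = time_history()
```

The time step (1 ms against a natural period of about 0.31 s) satisfies the stability limit of the explicit scheme, and the pulse is strong enough to drive the spring past its yield displacement fy/k, so a residual plastic offset remains after the excitation ends.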
It is a matter of fact that the reliability of the analysis results cannot be easily proved when dealing with complex constructions such as historical buildings. Model calibration is needed, adopting different strategies that can be grouped into direct or indirect methods, according to Ewins [216]. The direct methods are based on tuning individual elements in the definition of the mass and stiffness matrices through a direct comparison with measured data; the indirect ones are based on the calibration of elemental properties in the FE model to reduce the gap between measured and predicted values (usually preferred, since related to the tuning of physical quantities). Roca and Elyamani [215], reviewing FE model updating methods, promoted the subdivision by Atamturktur, Hemez [217] into deterministic and stochastic updating approaches: by comparing FE output and direct measurements, the former aims at identifying the most probable value of undefined input parameters; the latter, more realistic, looks for a statistical correlation between the FE and direct data through a probabilistic formulation of the parameters considered. Calibration based on dynamic identification tests, which capture the real dynamic response of the structure, is quite common. For historical masonry constructions, the output-only modal identification technique is frequently used, exploiting ambient vibrations (for instance, wind- or traffic-induced vibrations) as excitation sources [164,166,168,184,187,209,218]. Through highly sensitive piezoelectric accelerometers, preferably placed at the top of the building and connected to an acquisition board, the dynamic properties of the structure are evaluated in terms of natural frequencies, mode shapes and damping ratios. The excitation itself is not measured and is assumed to be a white-noise signal. The FE model's parameters are then tuned according to one of the optimisation processes available in the categories introduced before, aiming to minimise the residual between the experimental and numerical responses.
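As a minimal illustration of output-only identification, the following sketch estimates natural frequencies by simple peak-picking on the power spectrum averaged over the recorded channels. Operational modal analysis in practice relies on more robust techniques (e.g., frequency domain decomposition or stochastic subspace identification); the function name and all parameters here are hypothetical.

```python
import numpy as np

def natural_frequencies(acc, fs, n_peaks=2, fmin=0.5):
    """Estimate natural frequencies by peak-picking on the power spectrum
    averaged over all measurement channels (rows of acc), sampled at fs Hz.
    A naive stand-in for operational modal analysis techniques (FDD, SSI)."""
    acc = np.atleast_2d(np.asarray(acc, dtype=float))
    spec = np.zeros(acc.shape[1] // 2 + 1)
    for ch in acc:
        spec += np.abs(np.fft.rfft(ch - ch.mean())) ** 2   # remove DC offset
    f = np.fft.rfftfreq(acc.shape[1], d=1.0 / fs)
    # local maxima of the averaged spectrum above fmin
    peaks = [i for i in range(1, len(spec) - 1)
             if spec[i] > spec[i - 1] and spec[i] > spec[i + 1] and f[i] >= fmin]
    peaks.sort(key=lambda i: spec[i], reverse=True)        # strongest first
    return sorted(float(f[i]) for i in peaks[:n_peaks])
```

Mode shapes and damping ratios require the cross-spectra between channels and decay-fitting techniques, respectively, which are beyond this simple frequency-only sketch.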

Simplified Methods
Several methods have been proposed over the years for reducing a masonry structure to a simplified system that can be more easily (and quickly) analysed. Monodimensional and bidimensional models were associated with the different methods, with the aim of limiting the need for complex numerical computing. They are generally based on the use of different structural elements (e.g., trusses, beams, panels, shells) to simulate piers, columns, arches and vaults under the assumption of homogeneous material behaviour.

Monodimensional Models
Several methods have been proposed to reduce the complexity of a masonry structure to a simplified system, limiting the computational burden (Table 4). In the simplified macroelement methods, the structure is generally modelled through a set of panel-scale components simulating the behaviour of the masonry elements (e.g., piers and spandrels). Components, therefore, need to be identified a priori, referring to the damage observed in existing buildings, which suggests the subdivision. If such a subdivision is relatively easy and 'spontaneous' in simple and regular buildings, difficulties arise when analysing complex existing structures, such as historical ones [139,219,220]. A regularisation of the geometry is generally required, followed by a meshing based on the dimensions of piers and spandrels, so that high expertise is needed to obtain reliable results.
Unlike the continuous models used for the FE methods, the constitutive law defined for macroelement models needs to reproduce the response at the panel scale. Zero tensile strength of masonry is accounted for, and the walls are represented by a rigid block system with a reduced number of degrees of freedom, requiring a relatively low computational effort. The high simplification and the consequent reduced computational burden supported the wide diffusion of these methods in engineering practice, even if the simplification entails some limitations, such as neglecting local out-of-plane mechanisms. As stated by Degli Abbati, D'Altri [139], although those mechanisms may be analysed by resorting to other techniques, such as the limit analysis already mentioned, neglecting their possible simultaneous occurrence with the in-plane mechanisms may lead to a misestimation of the seismic capacity. Moreover, macroelement methods are mostly ineffective in accounting for structural details such as the toothing among perpendicular walls. Recently, trying to overcome such limitations, Baraldi et al. [221] proposed a simple and effective rigid beam model, already proven effective for freestanding stone columns, for the analysis of cantilevered unreinforced masonry walls considered along their thickness under out-of-plane actions.
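For the out-of-plane case just mentioned, the simplest rigid-block idealisation treats the wall as a zero-tensile-strength block rocking about its base edge, so that the horizontal mass-proportional collapse multiplier follows directly from moment equilibrium about the toe. A minimal sketch, with illustrative symbols that are assumptions rather than a specific published formulation:

```python
def overturning_multiplier(t, h, w, p=0.0):
    """Horizontal mass-proportional collapse multiplier for one-sided rocking
    of a free-standing rigid wall about its base edge (zero tensile strength,
    no crushing of the toe). The self-weight w acts at (t/2, h/2) for a wall
    of thickness t and height h; an optional axial load p is applied on the
    wall axis at the top. Moment equilibrium about the toe gives:
        (w + p) * t/2 = alpha * (w * h/2 + p * h)."""
    return (w + p) * (t / 2.0) / (w * h / 2.0 + p * h)
```

For a wall without overload the multiplier reduces to t/h, which makes explicit why slender walls with poor connections are so vulnerable to out-of-plane actions; an overload applied at the top lowers the multiplier further because its inertial force acts at the full height.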

Rigid Plane Approaches
The rigid plane approaches, also known as shear-connection methods, schematise the masonry structural system as a combination of two perpendicular sets of vertical planes (i.e., the façades and the inner walls) and an additional set simulating the horizontal planes (i.e., floor slabs). The approximation of infinitely rigid horizontal planes and the assumption of a full connection between floors and walls are the hypotheses at the base of the method, making it less accurate when dealing with historical structures characterised by irregular wall openings, deformable slabs and poor connections. In Italy, rigid-plane approaches began to spread after the Friuli earthquake of 1976 in order to estimate the seismic capacity of buildings on a large scale. In the Regional Law of Friuli-Venezia Giulia of 1977 (L.R. 20.06.1977), the VeT and the POR [223] methods were introduced, later included in Circolare n. 21745 of 30.07.1981 [224]. Even though their application may suggest a large-scale approach, the above-cited methods can be reliably applied only to a relatively reduced group of buildings. They both assumed a regular distribution of piers at each level, the stiffness of the wall as the sum of the shear stiffnesses of the in-series piers, and the diagonal shear strength as the only strength parameter of the elastic-perfectly plastic constitutive law. For representing walls with openings, the POR method used an assemblage of infinitely stiff and resistant strips, with pier elements represented as idealised one-dimensional elements. Differently from VeT, POR assumed a limited plastic deformation, with the length of the plastic branch depending on the masonry category (e.g., untreated stone, new brick masonry, existing brick masonry).
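The storey-mechanism idea underlying these methods (the piers of a level sharing the storey shear, each with an elastic-perfectly plastic response capped at its diagonal shear strength, so that their contributions add up) can be sketched as follows; the data structure and the numerical values are hypothetical, not the codified VeT/POR formulation:

```python
def storey_curve(piers, displacements):
    """Storey shear-displacement curve for a storey-mechanism (POR-type)
    idealisation: the piers of a level share the imposed storey displacement,
    each behaving as elastic-perfectly plastic with shear stiffness 'k' and
    diagonal shear strength 'vu'; the storey shear is the sum of the pier
    contributions. Illustrative only."""
    return [sum(min(p['k'] * d, p['vu']) for p in piers) for d in displacements]
```

Repeating the calculation floor by floor, and taking the weakest storey as governing, reproduces the basic logic of these early methods; it also makes their main limitation evident, since no failure mode other than shear is represented.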
One of the main limitations of the POR method concerns the collapse mechanism, as shear failure was the only one considered, while flexural phenomena were neglected. In general, the method showed good reliability when applied to two-floor structures, leading in other cases to an overestimation of the stiffness.
From 1985 to 1995, several contributions were published aiming at a more accurate representation of the behaviour of masonry walls, overcoming the POR limits. The PORFLEX method presented by Braga, Dolce [12] proposed an elastic-brittle constitutive law, strips infinitely stiff but not infinitely resistant, and walls with variable stiffness to account for damage evolution. The axial force increment due to the horizontal load was also included in the model, as were tension and compression failures. Dolce [225] proposed the POR90 method by introducing a new methodology for the evaluation of the piers' stiffness, helpful in the case of irregular walls, based on a definition of equivalent height dependent on the spandrels' dimensions and position. Diagonal shear or axial compression and bending failure of the piers were considered within the method.
Similar to the POR, the VEM method [226] proposed a floor-by-floor analysis by neglecting the increment of the axial forces on the piers due to the horizontal loads. The aim was to identify the load collapse multiplier needed for determining the maximum resistant horizontal force of each level. Spandrels were assumed to be rigid elements, and the piers were analysed in terms of diagonal and sliding shear and compressive-bending actions. The main features of the POR-related methods are summarised in Table 5, building on the analysis of Pellegrino [259].
The RAN method devised by Augenti [227], inspired by POR, allowed a global analysis of the building performance through the assumption of a cumulative response of storeys and walls. An elastic-perfectly plastic constitutive law was used for both the shear and the compressive-bending collapse criteria. The piers were modelled through one-dimensional elements with constant thickness and double-curvature restraint conditions, their height being defined as the entire pier's height. The spandrels, on the contrary, were rigid and infinitely resistant. The method allowed the evaluation of the panel failure modes besides the maximum load of the walls transmitted to each slab. Unlike the other techniques presented, it is relatively easy to apply, even through a simple spreadsheet, without resorting to dedicated software. Tena-Colunga and Licona [228] proposed a method based on the rigid plane approach, requiring the additional assumptions of (a) walls carrying more than 75% of gravitational loads, (b) a limitation on the plan ratio, (c) a limitation on the ratio between the height of the building and the shorter plan side, and (d) a building height not exceeding five floors or 13 m. The method, developed with reference to regular low-rise masonry buildings, introduced a new proposal to evaluate the wall stiffness based on effective shear area factors calibrated to account for different structural performance levels, namely elastic, completely nonlinear and partially nonlinear response. The presented proposal was later partly included in the Mexican seismic code [260].

Strut Models
Strut models represent the panel's behaviour through a strut element that schematises the reactive portion of the masonry wall. The inclination of the strut element varies according to the horizontal load, and collapse occurs when excessive rotation or compression is reached. Although generally not used for historical constructions, strut models have found wide application for accounting, in a relatively easy way, for the effect of masonry infills in stiffening and strengthening the surrounding concrete or steel frame in framed buildings [229,230]. Additional information can be found in Mohamed and Romão [231].

Equivalent Frame Models (EFM)
Equivalent frame models (EFM) are widely diffused within the framework of macroelement methods, based on discretising the load-bearing wall system by means of a frame set. A two-node geometry connected through rigid beams or rigid offsets is used to model piers and spandrels. The rigid arms schematise the joint panels, which usually do not show relevant damage after earthquakes, and have the role of resolving the inaccuracies arising from the application of the frame discretisation. The employability and critical issues related to the EFM procedure have been investigated, for instance, by [220,261].
Between the 60s and the 90s, several EFM were proposed [232][233][234][235][236]. Kwan [236] was among the first to study the walls' shear deformation problem, accounting for it through the transfer in the rigid arms instead of in the column element modelling the walls. The models showed a good level of accuracy both for solid or hollow walls and façades.
Relevant at the Italian and international level was the SAM (Simplified Analysis of Masonry Buildings) method [237,238,240], proposing a schematisation of the 'wall with openings' through an equivalent frame idealisation in which the in-plane response of the walls governed the resisting mechanisms. The method was first developed for plane structures [238] and later extended to three-dimensional ones [239]. Both piers and spandrels were modelled as beam-column elements, accounting for the equivalent height proposed by Dolce [225], characterised by shear deformation, and connected through joint elements that were infinitely resistant and stiff. Joint elements were modelled through rigid offsets at the ends of the pier and spandrel elements; an elastic-plastic behaviour with limited deformations was associated with the piers, approximating the experimental resistance under cyclic load. No axial tension was allowed, while the expression of the strength criterion was detailed in Magenes and Calvi [262] and Magenes and Fontana [238]. The method aimed to analyse the whole wall by assigning an incremental horizontal load up to the wall failure and analysing the global equilibrium. Diagonal shear, sliding shear and flexural (rocking) failure were accounted for in the piers, whereas only the diagonal shear and flexural failure criteria were considered for the spandrels.
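The 'minimum over failure criteria' logic used for the piers can be illustrated with common textbook strength expressions (a Magenes-Calvi-type rocking formula, a cohesionless Mohr-Coulomb sliding check and a Turnšek-Čačovič-type diagonal shear criterion). The sketch below is a hedged paraphrase built on these widely used forms, not the actual SAM implementation; all symbols and default values are assumptions:

```python
import math

def pier_shear_capacity(l, t, h0, sigma0, fd, tau0, mu=0.4, b=1.5):
    """In-plane pier capacity as the minimum over three classical criteria
    (textbook-style forms: Magenes-Calvi-type rocking, cohesionless
    Mohr-Coulomb sliding, Turnsek-Cacovic diagonal shear).
    l, t  : pier length and thickness [m]   h0 : effective shear span [m]
    sigma0: mean axial stress [MPa]         fd : compressive strength [MPa]
    tau0  : reference shear strength [MPa]  b  : shape factor (1 to 1.5)
    Returns (governing mode, capacity in MN for the units above)."""
    rocking = (l**2 * t * sigma0) / (2.0 * h0) * (1.0 - sigma0 / (0.85 * fd))
    sliding = l * t * mu * sigma0
    ftd = 1.5 * tau0                     # diagonal tensile strength proxy
    diagonal = l * t * (ftd / b) * math.sqrt(1.0 + sigma0 / ftd)
    caps = {'rocking': rocking, 'sliding': sliding, 'diagonal shear': diagonal}
    mode = min(caps, key=caps.get)
    return mode, caps[mode]
```

For a squat, lightly loaded pier the diagonal shear criterion typically governs, while slender piers under low axial stress tend towards rocking, which is the qualitative behaviour the SAM-type criteria aim to capture.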
The method was further developed through SAM II, a user-friendly computer code [244] in which reinforced masonry or concrete elements with nonlinear behaviour can be modelled. In this way, the method offered the possibility of analysing reinforced masonry and mixed structures besides the unreinforced masonry ones. Both an infinitely rigid diaphragm and deformable floors can be modelled, the latter being generally more suitable for existing structures.
A method accounting for an equivalent frame system made of nonlinear elements was proposed by Molins and Roca [263], with reference to spatial structures composed of curved, three-dimensional members with variable cross-sections. The method included a set of partial models to describe masonry's non-linear behaviour, accounting for cracking in tension and yielding or crushing in compression. In Roca, Molins [124], the method was then extended to the analysis of 3D systems by modelling the walls according to the proposal of Kwan [236]. Several other contributions deal with the non-linear behaviour of masonry schematised through equivalent frame systems. Some of them focused on the formulation of non-linear beams [245], some others were based on the development of specific software [238], and some others were related to the use of general software packages [241][242][243].
An equivalent-frame approach specifically developed for structural units in masonry aggregates was proposed by Formisano and Massimilla [189]. It is based on the lumped plasticity approach [242] to model the nonlinear mechanical behaviour, which has already been applied to historical structures [264]. Piers and spandrels were modelled as elastoplastic, with the former having two rocking hinges at the ends of the deformable part and one shear hinge in the middle, while the latter had only the shear hinge. The piers' out-of-plane behaviour was neglected by assigning rotational releases at their ends. Two different models were considered for the mechanical analysis: one related to the whole aggregate, and the second dealing with the selected structural unit as an isolated structure equipped with elastoplastic links as boundary elements. Such links were calibrated starting from the analysis results of the whole aggregate: once the capacity curve of the aggregate was evaluated, it was possible to assess the shear forces and nodal displacements of all the structural units, therefore calibrating the links [265].

Bidimensional (Macroelement) Models
Another widely explored category of macroelement-related models adopts bidimensional elements to simulate the masonry components. One of the first bidimensional models proposed was the SISV (Setto Inclinato a Sezione Variabile) developed by D'Asdia and Palombini [246]. Piers, spandrels and joints were simulated through triangular finite elements by excluding the in-tension parts and simulating those in compression through a beam with a variable section, whose axis linked the midpoints of the reactive outer sections. The panels had an adaptive geometry, changing during the analysis due to the increase in horizontal loads. The problem was nonlinear, and the hypothesis of stiff spandrels was adopted to analyse each level according to its local reference system. The method was slightly updated a few years later [266], changing its name to PEFV (Parete a Elementi Finiti a Geometria Variabile).
Studies performed by Caliò, Marletta [253] and Vanin and Foraboschi [255] evidenced that the use of beam-type macro-elements implied an inaccurate simulation of the interaction between macro-elements and a weak modelling of the panels' cracked condition [256]. These authors respectively proposed a two-dimensional macro-element model based on a set of non-linear springs, also adopted by the computer code 3DMacro [267], and a strut-and-tie model. In the former proposal, two diagonal springs connected the opposite element corners, simulating the shear behaviour, while a discrete distribution of springs along the element's sides simulated its interaction with the adjacent macro-elements (Marques and Lourenço, 2011). Springs were also used for the rigid body spring model (RBSM) of Casolo and Pena [254] and Casolo and Sanjust [268], where plane quadrilateral rigid elements were reciprocally connected through two normal springs and one shear spring at each side. Separate hysteretic laws were assigned to the axial and shear deformation between elements, and a Coulomb-like law was used to combine the strength of the shear springs with the vertical axial loading [3].
The Mas3D model was proposed by Braga, Liberatore [251], using the 'no-tension' macro-element model proposed by Braga and Liberatore [269]. It adopted a 'multi-fan panel' as an element with no tensile strength and linear compressive behaviour. Stiff end-faces characterised the multiple compressed fans in each panel, without interaction among the fans' sides. Each fan's radial stress defined the stress state, since there were no tangential stresses, and the stiffness matrix could be directly evaluated with a reduced computational effort.
A model which has found wide use in the last two decades is the one proposed and implemented in the 3Muri® software [67], based on the macroelement approach proposed by Gambarotta and Lagomarsino [247] and later developed in several contributions [248][249][250]252,257]. This programme accounted for the compression-bending failure and the shear failure only. In the former case, the effective redistribution of the compression due to the section reduction was accounted for when the maximum strength was reached, determining the ultimate displacement according to the maximum drift value expected for the mechanism (0.6%). In the case of shear failure, a Mohr-Coulomb criterion was assumed, so that the model followed the progressive decrease of element strength and stiffness through a hysteretic structural behaviour. The ultimate shear deformation was associated with a 0.4% drift [270].
The pier damage modelling approach introduced by FEMA356 [105] and later updated [205,258] can also be mentioned. According to it, each wall was modelled through a nonlinear translational spring with force-deformation characteristics determined by the critical failure mode, identified as the smallest shear capacity among rocking failure, sliding failure and diagonal tension failure. The building's capacity curve (shear force-displacement diagram) was then determined by using a macromodel composed of nonlinear shear hinges representing the lateral load-displacement response of each pier composing the wall [271]. The axial force acting on each wall was evaluated by spreading the gravity loads according to the standard tributary area approach. Further, the increment of axial load due to the overturning effects was accounted for in a simplified manner by assuming a linear axial load profile along the building length [258]. Each storey was analysed separately by imposing displacements distributed to the piers through a rigid diaphragm: a lateral displacement was set at the storey level, the displacement demand on each pier was calculated, including torsional and translational effects, and the piers' contributions were added up by considering the shear springs in parallel. The governing failure mode was determined at each analysis step among the possible ones (i.e., rocking, sliding and diagonal tension failure), while the elastic rigidity was evaluated by combining shear and flexural rigidities.
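The last point, the elastic rigidity of a pier spring obtained by combining shear and flexural rigidities, amounts to a series combination of the two deformability contributions. A minimal sketch for a rectangular pier (shear factor 1.2; the function name and its boundary-condition switch are illustrative assumptions, not the codified FEMA formulation):

```python
def pier_lateral_stiffness(l, t, h, E, G, fixed_fixed=True):
    """Elastic lateral stiffness of a rectangular masonry pier, combining
    flexural and shear deformability in series. The boundary-condition
    switch distinguishes double-bending (fixed-fixed) piers from
    cantilever ones; 1.2 is the shear factor of rectangular sections."""
    I = t * l**3 / 12.0                                   # in-plane inertia
    A = t * l                                             # cross-section area
    kf = (12.0 if fixed_fixed else 3.0) * E * I / h**3    # flexural term
    ks = G * A / (1.2 * h)                                # shear term
    return 1.0 / (1.0 / kf + 1.0 / ks)                    # springs in series
```

Because the two flexibilities add, the combined stiffness is always lower than either contribution taken alone, which is why neglecting shear deformability overestimates the rigidity of squat piers.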
Applying the above-mentioned methods requires a few parameters for characterising the masonry mechanical properties, such as the elastic and shear moduli, the compressive and pure tangential shear strengths and the tensile strength, defined in an implicit or explicit way. The modelling strategies come from experimental results based on single-panel observations, but the modelling of an entire building is more complex, and the interaction between panels acquires more relevance [256].

Conclusions
The historical (unreinforced) masonry buildings represent a fundamental component of our cultural heritage, and their vulnerability towards horizontal seismic actions is nowadays a well-known issue. Despite the research carried out over the past decades and the incessant development of engineering and IT practices, the analysis of historical masonry buildings still remains a challenging activity due to their intrinsic features. Limited knowledge of the geometry, structural and material features, construction systems, etc., makes their analysis different from (and more complex than) that of ordinary buildings.
In this paper, a wide review of the most common methods for the analysis of historical masonry buildings has been presented. The advantages and disadvantages of each of them have been outlined, as well as their feasibility depending on the extent of the analysis to be carried out, the level of accuracy and the typology of data collected.
A first broad subdivision of the existing methods is based on the number of buildings to be investigated. Large-scale approaches are needed when the simultaneous analysis of a considerable number of buildings is required. Generally operating at a territorial scale, their aim is not a detailed structural analysis of each building but rather the identification of the most vulnerable units within a set, or the prediction of the possible damage depending on the seismic input. Such methods usually need fairly basic input data, allowing a relatively prompt assessment.
Among large-scale methods, empirical ones are based on the statistical elaboration of qualitative information coming from post-earthquake damage surveys and need to be calibrated for the building typologies of a specific region. Analytical methods, based on mechanical and geometrical input parameters, show better accuracy in terms of results, generally at the expense of a greater computational burden and execution time. Their outcomes can be directly useful for retrofitting projects, although they can analyse a smaller number of buildings than empirical methods. Aiming to overcome the respective limits of the two families of approaches mentioned, hybrid methods were developed by combining numerical input/output from mechanical models with statistical data. As a shortcoming, their applicability is usually strongly related to the calibration performed through survey data and is therefore tied to a specific area.
By reducing the scale of interest and focusing on a single building, or on a reduced number of them, methods belonging to the small-scale approach should be adopted. Among them, limit analysis methods can be used to evaluate the failure load related to the different collapse mechanisms of structures ideally subdivided into blocks. Particularly appealing for buildings, such as the historical ones, where the lack of rigid floors prevents a box behaviour, they have reduced computational requirements. On the other hand, they do not account for brittle failure modes and investigate fewer possible mechanisms compared with those detectable through plastic hinges. From this perspective, FE methods arguably represent the most accurate approach currently available from the modelling and analysis point of view, allowing the simulation of a multitude of different situations thanks to the several options in terms of representation of the geometry, simulation of the materials, modelling of the components and so on. As a counterpart, the more complex and heterogeneous the analysed building, as is often the case for historical constructions, the heavier the computational burden.
To limit the complexity of the analysis, several methods have been developed to reduce a masonry structure to a simplified set of panel-scale elements simulating the building's components, typically piers and spandrels. A significant decrease in the computational effort is achieved, at the cost of requiring an a priori disaggregation of the building. In the case of very irregular structures, such as historical ones, this disaggregation can be particularly difficult, therefore requiring high expertise.
To conclude, many approaches, with different strengths and limitations, are available nowadays to deal with historical masonry structures. The most effective one should be chosen case by case, based on the complexity of the situation faced, its extent, the typology of information and data available and, above all, the aim of the analysis.

Figure 1.
Figure 1. Categories of large-scale approaches (the figure is based on [10]).

Table 1.
Summary of different proposals for the empirical analysis of masonry structures.

Table 2.
Summary of different proposals for the mechanical analysis of masonry structures.

Table 3.
Summary of different proposals for the hybrid analysis of masonry structures.

Table 4.
Summary of different proposals for the monodimensional models of masonry structures.

Table 5.
Comparison of the main typologies of rigid plane approaches.