The interactions of the North American and Pacific tectonic plates across much of California have created a network of major and minor active faults in the proximity of major cities such as Los Angeles and San Francisco that are capable of generating earthquakes as large as Mw 8.3 or so. Major north-south trending faults in the vicinity of Los Angeles include the San Andreas fault, the San Jacinto fault, the Elsinore fault, and the Newport-Inglewood fault. East-west trending faults include the Santa Monica-Hollywood-Raymond fault, the Sierra Madre fault, and the Puente Hills blind-thrust fault. Other yet-to-be-discovered blind-thrust faults may be present as well. The proximity of these faults to the Los Angeles metropolitan area, the existence of a large number of tall steel structures in the region, and the unexpected brittle failures in several steel buildings during the 1994 Northridge earthquake have prompted several investigations of the performance of these types of buildings (mainly of the steel moment frame variety) under hypothetical earthquake scenarios (e.g., [1]). These studies have focused on one or two events and their effects on structures. While such studies are useful for gaining insight into potential outcomes, they are rarely suitable for rational decision-making, as such one-off hypothetical events cannot capture the totality of the seismic hazard at a site. Here, we attempt to address this problem for earthquakes on the San Andreas fault.
We select kinematic finite source inversions of past earthquakes on geometrically similar faults and map these onto multiple locations on the southern San Andreas fault. We allow these earthquakes to propagate in two alternate directions, north-to-south and south-to-north, and refer to these simulated earthquakes as “scenario earthquakes”. We simulate a total of 60 scenario earthquakes spread over a magnitude range of 6–8. For each scenario earthquake, we compute 3-component ground motion histories at 636 “analysis” or “target” sites on a 3.5 km grid in southern California, using the spectral element method for the low frequencies (<0.5 Hz) and empirical Green’s functions for the high frequencies (0.5–5 Hz).
Rupture forecasts such as the Uniform California Earthquake Rupture Forecast (UCERF, [8]) combine data from several sources (local earthquake catalogs, magnitude-frequency distributions usually derived from global earthquake catalogs, paleoseismic observations, and GPS measurements of tectonic movement) with rules about rupture propagation between faults to predict the probabilities of all plausible earthquakes on known faults during a specified time interval. UCERF bases these probabilities on four modeling components: (i) a (fault) model of the physical geometry of known California faults; (ii) a deformation model of slip rates and related factors for each fault section; (iii) an earthquake rate model of the region; and (iv) a probability model. It hypothesizes hundreds of thousands of ruptures (referred to as “forecast earthquakes” in this article) on specific seismogenic locations of faults and provides the yearly occurrence rates most consistent with observations. These rates are transformed into probabilities of occurrence assuming an underlying probability distribution such as the Poisson distribution. Methods to prune this exhaustive set of ruptures down to those that contribute significantly to the hazard at a given site have been under development [10]. Here, we describe a rational method to redistribute the UCERF3 forecast earthquake probabilities over a pre-determined (selected) set of scenario earthquakes. Combining the probabilities of occurrence of the scenario earthquakes with the predicted ground motions, we develop probabilistic estimates of ground shaking over the next 30 years in the Los Angeles basin from San Andreas fault earthquakes.
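The rate-to-probability conversion described above can be sketched in a few lines; the function name and the example rate are illustrative, with Poisson occurrence assumed as in UCERF:

```python
import math

def poisson_prob(yearly_rate: float, horizon_years: float) -> float:
    """Probability of at least one occurrence within the horizon,
    assuming earthquake arrivals follow a Poisson process."""
    return 1.0 - math.exp(-yearly_rate * horizon_years)

# e.g., a hypothetical forecast rate of 0.002/yr over a 30-year horizon
p30 = poisson_prob(0.002, 30.0)  # ≈ 0.058
```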
While this study is conceptually similar to the Southern California Earthquake Center’s CyberShake initiative, there are critical differences in methodology: (i) CyberShake utilizes pseudo-dynamic rupture models, whereas our source models are kinematic source models of past earthquakes on geometrically similar faults with similar source mechanisms. (ii) The ground motions simulated as part of the last CyberShake release (V15.4) were limited in frequency to 1 Hz (Web Ref: CyberShake, https://scec.usc.edu/scecpedia/Comparison_of_CyberShake_Studies), whereas the ground motions in this study contain frequencies up to 5 Hz. (iii) CyberShake source locations and extents follow the UCERF rupture definitions in order to use the forecast probabilities therein directly, which requires a far greater number of events (and intensive computational resources) to be simulated; this study instead describes a method to redistribute the UCERF earthquake probabilities over a smaller set of earthquake sources, significantly lowering the computational demands and making it possible to test and compare alternative approaches. We should also note that the scope of CyberShake is much broader, with all known faults being considered, whereas our study is focused on the San Andreas fault.
3. Ground Motion Simulation
In addition to a mathematical description of the earthquake source, a detailed mapping of the earth’s density and elasticity structure is needed to characterize the seismic wave speeds in the region, allowing for the deterministic simulation of site-specific ground motions. The spatial resolution of this mapping dictates the limiting wavelength (and frequency) of the seismic waves that can be reliably propagated through a finite-element, finite-difference, or spectral-element model of the earth; the higher the resolution, the shorter the limiting wavelength and the higher the limiting frequency. Two regional wave-speed models of southern California exist, both developed and maintained by the Southern California Earthquake Center (SCEC): (i) the SCEC Community Velocity Model (CVM, [22]), and (ii) the SCEC-CVM-Harvard model (SCEC-CVMH, [25]). Both models support the propagation of seismic waves with frequencies at least up to 0.5 Hz and have been used in long-period ground motion simulations of the Los Angeles and surrounding basins (e.g., [1]). To synthesize the higher frequencies (above 0.5 Hz) in the ground motion, stochastic (e.g., [39]) and empirical (e.g., [41]) methods have been developed. Broadband ground motion is produced by combining these with the deterministic low-frequency ground motion from finite-element, finite-difference, or spectral-element simulations.
Here, we follow the methodology of [41] to produce broadband ground motions with frequencies up to 5 Hz. High-frequency seismograms, generated using a variant of the classical empirical Green’s function (EGF) approach of summing recorded seismograms from small historical earthquakes (with suitable time shifts), are combined with low-frequency seismograms produced using the open-source seismic wave propagation package SPECFEM3D (V2.0 SESAME, [28]), which implements the spectral-element method. SESAME uses Version 11.9 of the SCEC-CVMH seismic wave-speed model, accounting for 3-D variations of seismic wave speeds, densities, topography, bathymetry, and attenuation. The SCEC-CVMH model incorporates tens of thousands of direct velocity measurements that describe the Los Angeles basin and other structures in southern California [25]. It includes background crustal tomography down to a depth of 35 km [44], enhanced using 3-D adjoint waveform methods [27], the Moho surface [30], and upper mantle teleseismic and surface wave-speed models extending down to a depth of 300 km [26]. The wave-speed-model-compatible spectral-element mesh of the southern California region was developed by [46], who adapted the unstructured mesher CUBIT [47] into GeoCUBIT for large-scale geological applications such as this.
The classical empirical Green’s function (EGF) approach uses aftershock records as Green’s functions that sample the travel paths from the source to the recording stations [48]. The rupture plane of an event is divided into (uniform or non-uniform) sub-faults. A pre-selected Green’s function (chosen on the basis of the closest match to the subfault-to-target-site path) is used to represent the seismic wave radiated from a given sub-fault. The EGFs are selected from a pool of thousands of low-magnitude (Mw 2.5–4.5) historical events that have occurred in the vicinity of the San Andreas fault over the past few decades; selection was based on signal quality, with the scanning of the hundreds of thousands of records automated. The Green’s functions from all sub-faults are time-shifted and summed to yield the ground shaking at a target site. The key challenge in this approach is that it is difficult to replicate Brune’s spectrum [59] in both the high- and low-frequency regimes simultaneously. Scaling based on seismic moments, where the total seismic moment of the EGFs matches that of the simulated event, correctly reproduces the low-frequency content of the ground motion. On the other hand, scaling based on areas, where the total area of the EGFs matches that of the simulated event, correctly reproduces the high-frequency content [51]. Ref. [41] recently developed a variant of the EGF summation that allows for the simulation of high-frequency ground motion (0.5–5.0 Hz) without the use of artificial filters to achieve agreement with Brune’s spectrum. Because the energy release from an EGF is typically much smaller than the moment release associated with slip on a single subfault, the EGF has to be summed several times to match the moment release on the subfault. Furthermore, the duration over which the EGF energy release occurs must equal the duration of slip on the subfault, i.e., the EGF energy release must occur over the duration of the source-time function. To achieve this, the EGFs are time-shifted and summed. Ref. [41] adopts a non-uniform temporal distribution of EGFs over the source-time function of each subfault, with time shifts sparse at the start and end of the rupture of that subfault and dense in between. Low-magnitude (Mw 2.5–4.5) earthquakes were used as EGFs, and the high-frequency waveforms generated with this approach were combined with low-frequency waveforms from the deterministic spectral-element approach (lowpass-filtered using a second-order Butterworth filter with a corner at 0.5 Hz) to reproduce ground motions at large distances from the Mw 6.0 Parkfield and Mw 7.1 Hector Mine earthquakes. We use this hybrid approach to simulate ground motions at the 636 greater Los Angeles sites from the 60 scenario earthquakes on the San Andreas fault.
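The moment-matching and time-shifting logic described above can be sketched as follows. This is an illustration only: the repeat-count rule and the raised-cosine mapping used to cluster shifts mid-rupture are simplifying assumptions, not the exact scheme of [41].

```python
import math

def egf_repeat_count(subfault_moment: float, egf_moment: float) -> int:
    """Number of times an EGF must be summed so that the summed
    moment matches the moment released on the subfault."""
    return max(1, round(subfault_moment / egf_moment))

def egf_time_shifts(n: int, rise_time: float) -> list[float]:
    """Illustrative non-uniform time shifts over the subfault's
    source-time function: sparse near the start and end of slip,
    dense in the middle (via a monotonic raised-cosine mapping)."""
    shifts = []
    for k in range(n):
        u = (k + 0.5) / n                                   # uniform in (0, 1)
        t = (u + math.sin(2.0 * math.pi * u) / (2.0 * math.pi)) * rise_time
        shifts.append(t)                                    # clustered mid-rupture
    return shifts
```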
The median values of three commonly used ground shaking intensities (peak horizontal displacement, peak horizontal velocity, and 5%-damped spectral acceleration at 1 s and 0.2 s periods) are shown, as blue lines, for the ten ruptures corresponding to each magnitude level of the scenario earthquakes. The vertical bars show the one-standard-deviation spread of the data on either side of the median. Also shown for comparison are the corresponding values determined using the Campbell-Bozorgnia (CB-08) Next Generation Attenuation (NGA) relation [60]. The soil properties for the 636 sites, as characterized by the Vs30 values from [61], the basin depths from the SCEC-CVMH model [30], and the Joyner-Boore distance, defined as the shortest distance from a site to the surface projection of the rupture plane, are used as inputs for the NGA computation. The choice of CB-08 over NGA-West2 was driven by the fact that NGA-West2 excludes peak ground displacement (PGD) due to its sensitivity to frequency-filtering parameters and record processing [62]. Large earthquakes generate long-period ground motion characterized by large PGDs, and it is important to compare our simulations against the predictions of the GMPEs, especially when considering the implications for long-period structures such as tall buildings.
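The Joyner-Boore distance defined above can be illustrated with a small sketch; the axis-aligned rectangle and the local Cartesian (km) frame are simplifying assumptions for illustration only.

```python
def joyner_boore_distance(site_x, site_y, xmin, xmax, ymin, ymax):
    """Shortest horizontal distance (km) from a site to the surface
    projection of the rupture plane, idealized here as an axis-aligned
    rectangle. A site inside the projection has R_JB = 0."""
    dx = max(xmin - site_x, 0.0, site_x - xmax)
    dy = max(ymin - site_y, 0.0, site_y - ymax)
    return (dx * dx + dy * dy) ** 0.5

# A site 30 km off the end of a 100 km x 15 km surface projection:
r = joyner_boore_distance(130.0, 7.5, 0.0, 100.0, 0.0, 15.0)  # → 30.0
```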
There is good agreement between simulations and CB-08 in the peak velocity and displacement intensity measures for the lower magnitude earthquakes (up to 6.92). For the larger earthquakes, the simulations predict larger peak horizontal velocities (and much larger variances as well), whereas CB-08 predicts higher peak ground displacements (with comparable variances). CB-08 relies on observed near-field permanent displacements to constrain the PGD attenuation relation. The large permanent ground displacements (up to 9 m) observed during the magnitude 7.6 Chi-Chi earthquake of 1999, one of the few large magnitude earthquakes for which seismic, geologic, and geodetic near-source data is available, may have a strong influence on the PGD attenuation relation. On the other hand, CB-08 relies on seismic data alone for the PGV relation. Unfortunately, there is a sparsity of records from large magnitude earthquakes, especially in deep sedimentary basins such as the Los Angeles basin. This may, in part, explain the differences between the predictions by the simulations and the attenuation relations.
The median values of the 1 s spectral acceleration predicted by CB-08 are higher for the magnitude 6.00 and 6.58 earthquakes, about the same for the magnitude 6.92, 7.28, and 7.59 earthquakes, and significantly lower for the magnitude 7.89 earthquakes, when compared against those predicted by the simulations. The CB-08 predictions for the 0.2 s spectral acceleration are higher for the lower-magnitude 6.00 and 6.58 earthquakes, but lower for the higher-magnitude 6.92, 7.28, 7.59, and 7.89 earthquakes. We should note that the low-magnitude events (Mw 2.5–4.5) used as EGFs are generally deficient in the higher frequencies (above about 5 Hz) due to attenuation. Thus, there is a natural tendency for the synthetic ground motions from the hybrid approach to be somewhat deficient in these higher frequencies. Additionally, the two-pass Butterworth filter used in filtering out the higher-frequency ground motions has a corner at 0.2 s, or 5 Hz. Ground motion intensities fall off smoothly with increasing frequency beyond this corner frequency, and the 0.2 s values are probably further under-estimated.
Figure 6a,c,e show maps of the median values of the geometric mean of the horizontal ground velocity under the ten ruptures of the magnitude 7.28, 7.59, and 7.89 scenario earthquakes, respectively. The maps cover the 636 analysis sites in southern California at which ground motions are computed. The corresponding maps, generated using the CB-08 attenuation relations with the site-specific soil and basin depth (Figure 7) information for the 636 analysis sites, are shown in Figure 6b,d,f. The strong influence of the basins is clearly seen. Ground motions are significantly amplified in each of the three basins: San Fernando, Los Angeles (LA), and San Gabriel (SG). The San Fernando valley’s proximity to the San Andreas fault (and perhaps its seismic wave-speed structure) results in far more intense shaking there than in the LA and SG basins. The simulated ground motions are significantly more intense than the intensities predicted by CB-08, with the difference growing with earthquake magnitude.
Spectral accelerations at 1 s and 3 s periods from the scenario earthquake simulations are compared against those generated using the CB-08 NGA relations in Figure 8. The medians and the one-standard-deviation spread on either side of the mean are plotted as a function of source-to-site distance in Figure 8a,b. The fact that the peaks occur not at the shortest distances but at 35–65 km is due to the combined effect of the basins (the closest of which is about 40 km from the fault) and the Joyner-Boore definition of distance, which is based on fault proximity alone and does not take into account the location of the slip asperity on the fault or rupture directivity. It is interesting to note that the simulated ground motions carry comparable power at the 1 s and 3 s periods; if anything, the peaks in the 3 s plots are higher than those in the 1 s plots. This is not the case with the NGA predictions, where the 3 s period spectral accelerations are significantly diminished compared to those at the 1 s period.
Ref. [41] identified fourteen locations in the greater Los Angeles region where a significant number of tall buildings exist. These include Irvine, downtown Los Angeles, Anaheim, Long Beach, Hollywood, El Segundo, Santa Monica, Century City, Universal City, and Park La Brea in the Los Angeles basin; Encino and Canoga Park in the San Fernando basin; and Glendale and Pasadena in the San Gabriel basin (see Figure 7 for locations). Table 3 shows the median and standard deviation of the PGV, PGD, and spectral accelerations at these fourteen locations from the ten (five rupture locations and two rupture directions) simulated Mw 7.89 scenario earthquakes on the San Andreas fault. Ground motion is particularly strong at downtown LA, Canoga Park, Anaheim, El Segundo, Santa Monica, and Century City. The corresponding tables for the Mw 7.59 and 7.28 earthquakes can be found in [41].
Figure 9 illustrates the effect of source directivity on ground motions. The north-to-south rupture at location 1 (see Figure 1
) directs a great amount of energy into the region of forward directivity: the San Fernando valley and the Los Angeles basin beyond. The south-to-north rupture, on the other hand, directs the energy away from the LA basin into the central valley to the north. The focusing effect is enhanced by the added proximity of the target region to the primary slip asperity in the source in the case of the north-to-south rupture scenario, while the opposite is true for the south-to-north rupture scenario. Note that in reversing the rupture direction, the slip distribution is reversed as well, such that an asperity on the south side of the north-to-south rupture is located on the north side of the south-to-north rupture. Peak horizontal velocity in the target region under the north-to-south rupture scenario is two to four times that under the south-to-north rupture scenario. For scenario earthquakes at rupture location 5, it is the south-to-north rupture that produces the stronger ground motions in the target region, and the contrast is comparable to that in the location 1 scenario. The consideration of both N-to-S and S-to-N rupture directivities ensures that the results, when considered collectively, are not biased by unilateral rupture considerations. For example, when a source with strong rupture directivity toward the LA basin is flipped to propagate in the other direction, the latter ground motions are much weaker. When the ground motions from the two events are considered collectively, any bias associated with the former rupture is removed.
The simulated ShakeOut scenario earthquake, used in the Great California ShakeOut Exercise and Drill, is a Mw 7.80 rupture initiating at Bombay Beach and propagating northwest through the San Gorgonio pass, terminating 304 km away at Lake Hughes in the north. Using a source developed by [63] and the SCEC-CVM wave-speed model [22], ref. [64] simulated 3-component long-period ground motion waveforms in the greater Los Angeles region. The south-to-north propagating Mw 7.89 scenario earthquake at location 5 [Figure 1j] closely resembles this earthquake insofar as location, rupture directivity, and magnitude are concerned (the scenario earthquake has a slightly higher moment magnitude). The ShakeOut scenario has served as a benchmark for ground motion simulation methodologies [65], and in Figure 10a,b we compare the results of the simulations here against this established benchmark. The ground motions simulated in this study are more intense than those predicted for the ShakeOut scenario, but the overall pattern of basin amplification is quite similar. The differences may be attributed to the slightly lower magnitude of the ShakeOut earthquake (with 30% smaller energy release) as well as to differences in the source (e.g., peak slip of 16 m in the ShakeOut source versus 12 m in the Denali earthquake source used for the earthquake simulated here) and in the wave-speed model (SCEC-CVM versus SCEC-CVMH). The predictions by the NGA relations are far lower [Figure 10c]. The large red blob in the ShakeOut motions, attributed to a wave-guide through Whittier Narrows by [66], is absent from the NGA predictions. Rupture directivity and wave-guide focusing, which clearly may have a strong influence on ground motions, are not explicitly accounted for in the NGA relations. In our simulation, a larger feature encompassing the wave-guide-related feature of the ShakeOut earthquake can be seen.
5. Discussion and Limitations
Using a single scenario earthquake to represent all forecast earthquakes within a magnitude bin is error-prone by construction. For instance, in the southern San Andreas case study, a Mw 7.28 scenario earthquake is used to represent the seismic risk from all earthquakes with magnitudes of 7.15–7.45. To eliminate bias in the results, we have selected the scenario earthquake magnitude to be at the bin center based on seismic moment. However, if an alternate method is adopted to choose the magnitude of the scenario earthquake to be simulated, such that the scenario earthquake happens to have a magnitude closer to the upper (or lower) limit of the bin, the lower (or higher) occurrence probability assigned to the scenario earthquake by our seismic moment release rate-based method automatically compensates for the introduced bias. For example, let there be just two forecast earthquakes, one with magnitude 7.15 and a yearly rate of 0.0010, and the other with magnitude 7.45 and a yearly rate of 0.0005. If we use an Mw 7.28 scenario earthquake to represent this bin in our rupture-to-rafters simulations-based case study, our method would result in a scenario earthquake probability of occurrence of 0.045 over the next 30 years. If, on the other hand, an Mw 7.40 scenario earthquake is used to represent this bin, the probability of occurrence drops to 0.030. Obviously, our rupture-to-rafters simulations would predict higher ground motions, heavier building damage, and greater losses when the Mw 7.40 scenario earthquake is used. Fortunately, the lower probability of occurrence estimated for the Mw 7.40 earthquake would at least partially offset these increases, perhaps resulting in comparable 30-year losses. Likewise, if we use an Mw 7.18 scenario earthquake to represent this bin, the 30-year probability of occurrence increases to 0.063. This time, the lower ground motions and economic losses predicted by the simulations would be at least partially offset by the higher occurrence probability.
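The arithmetic in this example can be reproduced with a short script, assuming the standard Hanks-Kanamori moment-magnitude relation and Poisson occurrence (a sketch of the moment-rate redistribution, not the full method):

```python
import math

def moment(mw: float) -> float:
    """Seismic moment (N*m) from moment magnitude (Hanks-Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.05)

def scenario_prob(forecast, scenario_mw, horizon_years=30.0):
    """Equivalent Poisson probability for a scenario earthquake that
    absorbs the seismic moment release rate of all forecast
    earthquakes in its magnitude bin."""
    moment_rate = sum(rate * moment(mw) for mw, rate in forecast)
    eq_rate = moment_rate / moment(scenario_mw)
    return 1.0 - math.exp(-eq_rate * horizon_years)

bin_events = [(7.15, 0.0010), (7.45, 0.0005)]  # (magnitude, yearly rate)
p_728 = scenario_prob(bin_events, 7.28)  # ≈ 0.045
p_740 = scenario_prob(bin_events, 7.40)  # ≈ 0.030
p_718 = scenario_prob(bin_events, 7.18)  # ≈ 0.063
```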
It is important to note that the errors associated with differences in ground motions from forecast and scenario earthquakes decrease monotonically with increasingly dense magnitude and location sampling of the scenario earthquakes. This is because the scenario earthquake sources tend monotonically toward the forecast earthquake sources.
The deaggregation of forecast earthquake rates into seismic moment release rates on the segments comprising the rupture solves another commonly encountered problem in PSHA. No matter what domain is chosen for the earthquakes to be considered in the PSHA, at least one or more of the forecast earthquakes will straddle the domain boundary, i.e., only portions of these ruptures will lie within the domain. Whereas in traditional GMPE-based PSHA this problem would be circumvented by choosing a region of interest large enough to encompass all of the seismic sources that may contribute significantly to the hazard, that is not practical with rupture-to-rafters simulations-based PSHA: maintaining the same sampling in rupture location would require many more scenario earthquake simulations and increase the computing resource demand significantly. The question then arises: what fraction of the probability of occurrence of these earthquakes should be assigned to the closest-occurring scenario earthquake? The deaggregation in our method breaks these ruptures down into their participating segments, and within a small margin of error these segments lie wholly either inside or outside the domain, automatically resolving the problem.
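A minimal sketch of the segment-level bookkeeping for a boundary-straddling rupture follows; the per-segment moments and the in-domain flags are hypothetical, and the real method distributes moment according to the deaggregation rules of the paper:

```python
def in_domain_moment_rate(rupture_rate, segment_moments, in_domain):
    """Deaggregate a forecast rupture's yearly occurrence rate into
    yearly seismic moment release rates on its participating segments,
    keeping only the moment released on segments inside the
    simulation domain."""
    return sum(rupture_rate * m
               for m, inside in zip(segment_moments, in_domain)
               if inside)

# A rupture straddling the domain boundary: only the moment released
# on the two in-domain segments is retained.
kept = in_domain_moment_rate(
    rupture_rate=0.001,
    segment_moments=[2e19, 3e19, 5e19],   # N*m per segment (illustrative)
    in_domain=[True, True, False],
)  # → 5e16 N*m/yr retained in-domain
```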
The ground motion estimates in this study have been generated using a limited set of six strike-slip source events, and the results are likely to be strongly influenced by the source model selection. To reduce this bias and make the results more robust, multiple source models from different faults may be used at each magnitude level to generate additional scenarios. The size and location of slip asperities, the rupture velocity distribution, and the source-time functions play a critical role in the resulting ground motion, in addition to the rupture directivity and basin effects considered here. Considering plausible variations in these parameters would lend greater credibility to the ground motion estimates. Likewise, the distribution of just five rupture locations for the smaller-magnitude earthquakes is not dense enough to adequately capture the distribution of shaking from the smaller events. While the sources for the large-magnitude events overlap, this is not the case for the small-magnitude scenarios, so sampling the full extent of the fault more densely should be a goal for future efforts.
The uniform sampling of scenario earthquakes used in the case study here was chosen for convenience. Sometimes, a non-uniform distribution of earthquakes may be preferred. For instance, one may wish to sample more densely fault regions of high earthquake probability density and/or fault regions whose earthquakes may result in greater variability in basin effects, etc. The outlined method can be used in these cases without any modification.
One limitation of our approach is the somewhat incompatible mixing of highly precise wave propagation simulations, using a finely discretized source, with the gross nature of the forecast earthquake probabilities in the UCERF model. Three points should be noted in this context. First, earthquake probabilities, however crude, are needed to make real-world decisions; only greater data-gathering over time and through ubiquitous instrumentation can help reduce the epistemic uncertainties. Second, the resolution of the data used to generate the UCERF model is coarse: it is good enough to estimate probabilities of occurrence of forecast earthquakes only in a smeared manner, not high enough to provide recurrence information on fine patches/segments of the source. While this is an inherent limitation of UCERF, it is still the only comprehensive source of earthquake probability information in California. Third, the outlined method does not result in any loss of the existing resolution in earthquake probabilities, because the segment sizes used in mapping the forecast earthquake probability space to the scenario earthquake probability space [in step (iii) of the method] are similar.
Finally, we should note that there may exist very low probability events that extend beyond the domain of the rupture-to-rafters simulations. Because their contributions to structural collapse risk may be minuscule, it may not be worthwhile to expand the simulation domain to cover their full rupture extents; their probability of occurrence may instead be included in the largest magnitude bin of the scenario earthquakes considered. A case in point is a wall-to-wall rupture on the San Andreas fault, extending from the (center of) SAF-Offshore section to the SAF-Coachella section, which has a recurrence time of 150,000 years. This rupture extends beyond the domain of our simulation to the north, but its probability of occurrence over, say, a period of 30 years is just 2 × 10⁻⁴. Even assuming that the probability of a structure collapsing under ground motions from this event is 1.0, the probability of collapse of the structure over a 30-year period increases by just 2 × 10⁻⁴. While the full rupture extent of this event is not considered in the study, its probability of occurrence is included as part of the highest magnitude bin of scenario earthquakes (Mw 7.78–8.34].
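The arithmetic behind this bound, under a Poisson occurrence assumption, is straightforward:

```python
import math

# A wall-to-wall San Andreas rupture with a ~150,000-year recurrence
# time contributes almost nothing to 30-year collapse risk:
rate = 1.0 / 150_000                    # yearly occurrence rate
p30 = 1.0 - math.exp(-rate * 30.0)      # ≈ 2e-4
# Even with P(collapse | event) = 1.0, the 30-year collapse
# probability increases by only ~2e-4.
```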