Tomographic Experiments for Defining the 3D Velocity Model of an Unstable Rock Slope to Support Microseismic Event Interpretation

Abstract: To monitor the stability of a mountain slope in northern Italy, microseismic monitoring has been used since 2013. Locating microseismic events is a basic step of this technique. We performed a seismic tomographic survey on the mountain surface above the rock face to obtain a reliable velocity distribution in the rock mass for the localization procedure. Seismic travel-time inversion showed high heterogeneity of the rock mass, with strong contrasts in the velocity distribution. Low velocities were found at shallow depth on top of the rock cliff, and intermediate velocities were observed in the most critical area of the rock face, corresponding to a partially detached pillar. Using the 3D velocity model obtained from inversion, localization tests were performed based on the Equal Differential Time (EDT) localization method. The results showed hypocenter misfits of around 15 m for the five geophones of the microseismic network, a significant error reduction compared to the results produced by a constant velocity model. Although the localization errors are relatively large, the accuracy is sufficient to distinguish microseismic events occurring in the most critical zone of the monitored rock mass from microseismic events generated farther away. Thus, the 3D velocity model will be used in future studies to improve the classification of the recorded events.


Introduction
Unstable rock slopes are likely to result in rockfalls, posing serious threats to human safety and property, industrial activities, and transportation infrastructure. Predicting and preventing rockfalls is a difficult task due to the uncertainty and diversity of geological and geomechanical properties and the absence of obvious precursors [1]. Identification of the hazard by monitoring an unstable rock slope is the first phase of rockfall risk assessment [2]. As part of our research on applying different geophysical methods to monitor hydrogeological risks for embankments [3][4][5][6], landslides [7,8], and rock slopes [9,10], a microseismic monitoring network was installed on a rock face threatening the city of Lecco in northern Italy. Microseismic monitoring has been increasingly used in rockfall studies over the last two decades because it promises to provide information about the stress state of subsurface rocks, offering the chance to understand in depth the cracking processes within the rock mass [11][12][13][14][15][16].
Hypocenter localization is one of the basic steps in microseismic monitoring, usually following signal recording and classification. Localization of microseismic events related to unstable rock slopes or rock masses has been investigated in several studies. Spillmann et al. [17] and Colombero [18] both adopted the nonlinear probabilistic localization algorithm [19,20] to determine the hypocenter parameters of microseismic signals recorded on an unstable mountain slope and a rock cliff, respectively. Helmstetter & Garambois [21] attempted to locate microseismic events collected in an area prone to rockslides using a constant velocity model. All these studies showed that the localization procedure requires detailed information about the velocity in the rock mass to reliably locate hypocenters. Seismic tomography is an efficient tool to estimate the velocity distribution in mountain areas and to characterize the rock mass. The purpose of seismic tomography is to reconstruct the velocity field in the rock mass traversed by ray-paths between a set of sources and receivers. Using the travel-times of the rays from sources to receivers, the velocity model can be calculated by tomographic inversion. Heincke et al. [22] performed a 3D seismic refraction tomographic survey on an unstable mountain slope in the Swiss Alps. The inversion results revealed the presence of a large volume of rock with low P-wave velocities down to 35 m depth. Samyn et al. [23] applied seismic refraction tomography to characterize landslide geometry in southeastern France. The low-velocity distribution was interpreted to be related to reworked blocks, superficial cracks, and deeply fractured zones. Dussauge-Peisser et al. [24] conducted a 2D seismic tomography survey on a fractured limestone cliff to characterize the fracture pattern in the massif. The results showed strong velocity gradients in the rock mass, and some low-velocity zones could be attributed to open fractures.
This work presents the seismic tomographic survey performed to estimate the velocity distribution in a steep and unstable rock face close to the town of Lecco in the Italian Prealps. We first describe the source test measurements carried out to select a suitable source for the tomographic survey. We then focus on seismic data acquisition and tomographic travel-time inversion. Finally, we present the localization algorithm used to locate the shot positions, analyzing the decrease in localization errors using a 3D velocity model rather than constant velocity.

The Study Site
The study site is the southern side of Mount San Martino where a monitoring network consisting of five three-component (3C) geophones, a rain gauge, and two temperature sensors has been operating since 2013. Mount San Martino is located in the southernmost section of the Grigne massif in the Italian Prealps (Figure 1a,b). Its western side dives into the Lecco branch of Como Lake, and the southern part of this mountain features 300 m-high vertical walls (Figure 1c). The rock walls are composed of limestone belonging to the Esino Formation (Middle Triassic) and are part of the front of the Coltignone tectonic thrust. They are characterized by blurred stratification with inclinations ranging from 70° to 90°. Fractures with N-S and W-E trends are distributed on the cliff, dividing it into dihedrals, which are likely to cause failures such as toppling and wedge sliding. Accumulation of more than 10 m of debris at the base of the rock cliff has formed a slope approximately oblique to the main dip angle. The slope is a natural connection between Lecco and Mount San Martino and, in case of rockfalls, it helps mitigate damage by slowing down the bouncing rock blocks. The study site has experienced several rockfall events in the past and its instability is a serious threat to the city of Lecco. The most tragic event happened on the night of 23 February 1969. About 15,000 m³ of rock detached from the steep rock face, killing seven people and injuring three, as well as destroying part of a building [25]. After this catastrophic failure, a series of measures, including the construction of protection walls and the installation of rockfall fences at the base of the cliff to stop the bouncing blocks, were employed to mitigate damage from other possible failures. Over the last two decades, frequent rockfalls, with volumes from a few tens to several hundreds of cubic meters, have still taken place on the rock face.
These failures are the results of internal and external factors. The internal factors consist of the presence of the extremely steep face and the severe tectonization suffered by the rock mass, while the external factor is mainly related to precipitations [26].

Microseismic Monitoring System
A microseismic monitoring system was installed at the edge of the rock face about 50 m above the partially detached pillar, which is west of the 1969 rockfall scarp (i.e., left side of the rockfall scarp in Figure 1c). The monitoring network is composed of 18 channels: 15 channels for five 3C geophones with 28 Hz natural frequency, one channel for a rain gauge, and two channels for two temperature sensors to measure the air temperature outside and inside a shallow rock fracture. Two geophones were deployed on the summit of the rock face where the acquisition board is located. Geophones 1 and 2 were placed at the bottom of a 4.5 m-deep and a 9 m-deep vertical borehole, respectively (Figure 1c). Geophones 3, 4, and 5 were installed in shallow holes on the rock face near the area where the 1969 rockfall occurred. The rain gauge was placed on top of the borehole of geophone 1, one temperature sensor was placed next to the acquisition board to measure the air temperature, and the other one was located in a rock fracture near the borehole of geophone 2.
The acquisition board is equipped with a GPS receiver that provides a time reference with high stability and accuracy. The network is powered by a 12 V/100 Ah battery recharged by a solar panel. The acquisition board continuously acquires data from the meteorological sensors with a 10 s sampling interval. The maximum and minimum values of the background noise sensed by the geophones are also recorded at the same sampling rate. Microseismic events are collected with a 1 kHz sampling frequency according to a triggering methodology. Based on tests performed to characterize the background noise of the investigated site, a triggering threshold was set at a fixed velocity value slightly above the root-mean-square of the background noise level. Whenever the signal value sensed by a single channel of one of the five geophones exceeds the threshold, all geophones are triggered to record the signal. Since the weakness of this triggering methodology is that several useless signals may also be recorded, an automatic classification algorithm now runs on the monitoring system to select the microseismic events related to instability conditions of the rock slope [26].
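The triggering logic described above can be sketched in a few lines. This is a simplified illustration, not the actual firmware: the function names and the threshold factor are assumptions.

```python
import numpy as np

def rms_threshold(noise, factor=1.2):
    """Hypothetical helper: a fixed threshold set slightly above the
    root-mean-square of the background noise (factor is an assumption)."""
    return factor * np.sqrt(np.mean(noise ** 2))

def detect_trigger(traces, threshold):
    """Return True if any sample on any channel exceeds the threshold,
    in which case all geophones would be triggered to record.

    traces: array of shape (n_channels, n_samples) of ground velocity.
    """
    return bool(np.any(np.abs(traces) > threshold))
```

In this scheme a single channel crossing the threshold is enough to start recording on the whole network, which explains why spurious signals can also be captured and why a classification step is needed afterwards.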
Datasets recorded by the monitoring network are temporarily stored in an embedded field PC. Through a dedicated 5 GHz Wi-Fi link, the data are transmitted daily to a remote station located at the Lecco campus of Politecnico di Milano.

Source Selection for Tomographic Survey
Tomographic surveys require suitable sources capable of generating signals strong enough to reach all geophones with high signal-to-noise ratios. In our study, the generated energy needs to propagate at least 70 m to arrive at the three farthest geophones located on the rock face. Therefore, we performed field measurements using different sources to select the optimum source with good signal quality and portability in a harsh environment. An 8 kg sledgehammer, a seismic gun, and small explosions produced with firework charges were employed to compare their capability in generating the required seismic energy in real propagation conditions.

Data Acquisition
The source test measurements and later the tomographic survey were performed in an area of about 1400 m² at the summit of the rock face near the acquisition board (Figure 2). This area is a steep N-S slope of rock debris partially covered by trees, with a small valley in the central part and with an elevation ranging from about 660 m to 700 m above sea level. Figure 3 illustrates the topography of the study area in more detail. Figure 3a is the Digital Terrain Model (DTM) of the monitored area, with a resolution of 2 m in the horizontal plane, for the surface of the mountain after removing the vegetation. It shows that the rock face is almost vertical and that the valley located in the central part can be clearly distinguished. Figure 3b presents two vertical slices cut from the geometric model obtained by discretizing the DTM with a grid spacing of 2 m in the horizontal and vertical directions. A local coordinate system with its origin at the lower-left corner of the contour map shown in Figure 2b was used, and the elevation values were not changed. The monitoring network was centered in the geometric model, which has dimensions of 50 × 50 × 140 m. The left slice in Figure 3b is the plane cut at 32 m along the north-south direction, which is the maximum northing coordinate of the monitoring network. The right slice is the plane cut at 10 m along the east-west direction, which is the minimum easting coordinate of the network. The slices show that on top of the rock cliff the mountain surface is quite steep along both the north-south and east-west directions, with slope angles of nearly 45° (Figure 3b). Generally, the site features difficult logistic conditions and, since it is not accessible with any kind of vehicle, all the equipment needed for the tomographic measurements had to be carried by our research group, with each member carrying bags weighing at least 20 kg.
The monitoring area is quite close to the edge of the steep rock cliff, which made the working conditions more dangerous. Moreover, we had to walk on slippery small debris and a thin layer of fallen leaves, which required slow, careful movement. Another specific feature of the study area is the presence of trees (Figure 2a), which limited the deployment of the sensors (wired to the acquisition board) and the repositioning of the seismic source (hampered by the triggering cable). Therefore, the tough conditions limited the sensor and source deployment to positions that were reasonably accessible. Consequently, it was almost impossible to repeat the shots with different receiver deployments. Figure 2b illustrates the positions of the points selected to test the effectiveness of the three sources. The permanent microseismic network was used to record the data for the source tests. The channel that is usually used to record thermometer data in the rock fracture was temporarily used as the triggering channel. A triggering system was set up and connected to the microseismic network to synchronize the records. The trigger extension cable was 50 m long to make sure that the trigger sensor could be connected to the microseismic network for all test points. The details of the acquisition parameters are presented in Table 1. The trigger sensor had to be properly installed to ensure good coupling with each source. For the firework charge test, the charges were placed in shallow holes (about 10 cm deep) that were temporarily closed with some plaster to avoid excessive loss of energy upwards. As the triggering sensor, an 8 Hz geophone was screwed into a heavy base that was then placed on the ground near the charges. For the hammer test, the triggering signal was recorded by a piezoelectric accelerometer attached to the hammer handle close to its head.
Considering that more seismic energy can be transmitted into the subsurface using a plate base [27], the hammer hit a Kevlar impact plate instead of directly hitting the ground surface. Due to difficulties in drilling holes in the rocky surface for shooting the seismic gun, we assembled a wooden base with a central hole and adjustable legs that could host the gun despite the ground roughness. The trigger sensor was mounted on the protecting metal base connected to the gun and deployed over the wooden base.

Performance Comparison of Different Sources
In total, 28 shots were performed for the source tests. The number of shots (Table 1) is larger than the number of source positions because some positions were shot more than once (Figure 2b). For each shot, seismograms were recorded by 16 channels: one channel was used for the trigger sensor and the other 15 channels for the five 3C geophones. To avoid confusion when analyzing the data, the channels were numbered as follows: channel 1 for the trigger sensor, channels 2-4, 5-7, 8-10, 11-13, and 14-16 for the three components of geophones 1 to 5, respectively (Figure 1d).
In order to compare the performance of the sources, the number of channels that sensed the signals generated by each source had to be counted. We observed that many of the recordings of the source test measurements were disturbed by noise (Figure 4a), and thus a filtering procedure was essential. Band-pass Butterworth filtering and notch filtering were applied to the raw data using the ReflexW software [28]. To suppress low- and high-frequency noise, the range 5-100 Hz was set for the band-pass Butterworth filter. To remove the effects of harmonic noise, notch filtering with frequencies of 25 Hz, 50 Hz, 75 Hz, and 100 Hz was applied. Figure 4 shows the typical signals recorded from a hammer shot before and after filtering. The results showed that filtering could effectively remove the noise. Figure 4b shows how filtering made it possible to see whether channels 6, 7, 12, and 13 had sensed the signals or not. Figure 5 summarizes the results regarding the performances of the three sources. The source performance measured for each channel is the ratio of the number of times that the channel sensed the signals generated by a given source to the number of shots performed with that source. Figure 5 illustrates that the hammer and the seismic gun performed better than the firework charges. For the tests with the hammer and the seismic gun, all channels except channels 10 (both 0%), 13 (29% for the hammer, 67% for the seismic gun), and 16 (both 0%) sensed the signals with a 100% success rate. This means that all five geophones could record the signals of the shots generated by the hammer and the seismic gun. For the tests with firework charges, all geophones recorded the signals for only one shot out of eight. Moreover, except for the three channels of geophone 1, the other channels had lower performance percentages compared to the tests with the hammer and the seismic gun. Therefore, both the hammer and the seismic gun could be suitable sources for the tomographic survey.
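A sketch of this pre-processing chain is given below. The ReflexW settings beyond the pass band and notch frequencies are not reported, so the filter order and the notch quality factor are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 1000.0  # sampling frequency of the microseismic records (Hz)

def denoise(trace):
    """Apply a 5-100 Hz band-pass Butterworth filter followed by
    notch filters at 25, 50, 75, and 100 Hz (harmonic noise).

    Filter order (4) and notch quality factor (Q=30) are assumptions;
    zero-phase filtering (filtfilt) preserves first-arrival times.
    """
    b, a = butter(4, [5.0, 100.0], btype='bandpass', fs=FS)
    out = filtfilt(b, a, trace)
    for f0 in (25.0, 50.0, 75.0, 100.0):
        b, a = iirnotch(f0, Q=30, fs=FS)
        out = filtfilt(b, a, out)
    return out
```

Zero-phase (forward-backward) filtering is the natural choice here because any phase distortion would shift the picked first-arrival times.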
We made a further comparison of the signal-to-noise ratio (SNR) considering the hammer and seismic gun tests, since signals with higher SNR would be beneficial for the phase picking process. The SNR was calculated according to the following equation:

SNR = 10 log10(Sms/Nms),

where Sms and Nms are the mean-square values of the signal and the noise, respectively. Considering the 2 s pre-triggering window (Table 1), the window widths for calculating Sms and Nms were defined as 1.95-2.3 s and 0-1.95 s, respectively. A comparison was made at the shot positions where both the hammer and the seismic gun were used. As an example, Figure 6 shows the results of the SNR at the shot position near geophone 2 (Figure 2b). Recordings of the hammer and seismic gun have similar SNRs at all channels, with minor differences in favor of the hammer. It is noteworthy that the SNRs at channels 10, 13, and 16 are quite small, consistent with the results of source performance (Figure 5), which showed that the same channels either did not sense the signals or had low performance percentages. The lower sensitivity of these channels was probably due to the incident direction of the signals when waves propagated to the corresponding geophones. Channels 10 and 16 correspond to the N-S direction of geophones 3 and 5, respectively. Both these geophones are installed on the surface of the cliff. The incident direction of the signals could be perpendicular to the N-S direction on the rock cliff, resulting in lower sensitivity of channels 10 and 16. Channel 13 approximately corresponds to the vertical direction of geophone 4, which is placed on the partially detached pillar of the cliff. The direction of wave propagation and the stronger vibration modes might be different in this rock volume, being partially decoupled from the rock face hosting geophones 3 and 5. Therefore, the weakest component recorded by geophone 4 may be different compared to geophones 3 and 5.
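A minimal implementation of this SNR computation, assuming the ratio is expressed in decibels and using the window bounds given in the text:

```python
import numpy as np

FS = 1000.0   # sampling frequency (Hz)

def snr_db(trace):
    """SNR = 10*log10(Sms/Nms) with the windows used in the paper:
    noise window 0-1.95 s, signal window 1.95-2.3 s (the record starts
    2 s before the trigger, per the pre-triggering setting in Table 1)."""
    noise = trace[: int(1.95 * FS)]
    signal = trace[int(1.95 * FS): int(2.3 * FS)]
    n_ms = np.mean(noise ** 2)   # mean-square of the noise
    s_ms = np.mean(signal ** 2)  # mean-square of the signal
    return 10.0 * np.log10(s_ms / n_ms)
```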
Contrary to our expectations, the comparison of the hammer with the seismic gun does not show any superiority of the gun. A probable explanation is the poor coupling of the gun, arising from the fact that it could not be shot in holes due to the rock debris on the ground surface. Therefore, both the hammer and the gun could be used as a source for the tomographic survey, but we selected the hammer because of its portability and flexibility in the harsh environment of the study area.
It should be noted that the spectral contents of the signals were also compared for the hammer and the seismic gun. We observed similar spectra, with some higher frequencies for the gun (e.g., 150 Hz vs. 100 Hz), but only when the geophone was very close to the shot (less than 10 m). The frequency range arriving at the receiver was basically the same for the hammer and the gun (with a maximum of 100 Hz) once we moved away from the shot. Therefore, the analysis of the spectral contents did not affect our conclusion in selecting the source type.

Data Acquisition and First Arrival Picking
The tomographic survey was performed in the same area as the source test measurements. As already discussed, the site was quite rugged and made it impossible to design long straight profiles to deploy the sensors. Twenty-four vertical geophones were deployed with 3 m spacing along a spread passing over the complex topography, from the point close to the acquisition board to nearly the lowest part of the rock face summit (Figure 7). A 24-channel Geode recording system was used for the measurements. Nine source positions, with spacing varying according to the convenience of using the hammer, were located along the geophone spread. Three other source positions were placed outside the line of geophones to increase the volume of rock mass traveled by rays. The five 3C geophones of the microseismic network were also used in combination with the 24 geophones. As in the source test measurements, the channel that is generally dedicated to recording temperature data in the rock fracture was used as the triggering channel. The acquisition parameters used for the tomographic survey are summarized in Table 2. In order to further increase the volume explored by the tomographic test, travel-times measured in the source tests with the hammer and seismic gun were also used (Table 2). First arrivals were picked manually: from the complete set of 854 seismograms, we were able to pick 503 first-arrival times (Table 2) with an accuracy of about 2 ms. The picking uncertainty was estimated using the standard deviation of the travel-times at repeated shot positions. The mean standard deviation was 2.9 ms for the traces recorded by the microseismic network and 1.4 ms for the recordings from the Geode system. These errors reflect the accuracy of the travel-time picking and are also affected by the delay of the trigger time. Figure 8 shows the signals recorded by both recording systems for a hammer shot close to the acquisition board.
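The picking-uncertainty estimate can be reproduced with a short routine; the data container below is a hypothetical example, not the actual pick database.

```python
import numpy as np

def picking_uncertainty(picks_by_position):
    """Mean standard deviation of first-arrival picks over repeated shots.

    picks_by_position: list of 1D arrays, each holding the picked
    travel-times (s) recorded by one receiver for several repetitions
    of the same shot position (illustrative data structure).
    """
    stds = [np.std(p, ddof=1) for p in picks_by_position]
    return float(np.mean(stds))
```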
The data recorded by the Geode recording system (Figure 8a) have a good SNR. For the data recorded by the microseismic network (Figure 8b), the channels of geophones 1 and 2 (channels 2-7), which are closer to the source, are less affected by noise compared to the channels of geophones 3, 4, and 5 (channels 8-16).

Seismic Travel-Time Inversion
Travel-times of the first arrivals were used in the inversion to obtain the P-wave velocity distribution in three-dimensional space. The inversion comprises three basic steps: input of the initial velocity model, calculation of the ray-paths and travel-times, and solving the inversion problem. We used GeoTomCG software [29], which performs three-dimensional tomographic inversion with any source and receiver distribution in a 3D grid. The 3D geometric model was obtained from the DTM with a resolution of 2 m. The size of the model is 50 × 50 × 140 m, with a grid spacing of 2 m in the horizontal plane and 4 m in the vertical direction. After testing different initial constant velocity models (in the range from 1500 m/s to 3000 m/s) and observing similar inversion results, a constant velocity of 2000 m/s (the value closest to the average of the final velocity model) was selected as the input model. For the area in the air, the velocity was fixed at 300 m/s. In this method, the medium is sampled by grid nodes, to which the velocity values are assigned.
Curved ray-path calculation was performed using a revised form of ray bending derived from the method of Um & Thurber [30]. The ray-bending approach iteratively subdivides an initially straight ray-path (connecting the source and receiver) into an increasing number of straight segments. Each step comprises a division phase and an adjustment phase. In the initial subdivision step, a straight ray-path is constructed between the source and receiver, and the midpoint is calculated to divide the original straight ray-path into two segments. The local velocity gradient is computed at the midpoint and the midpoint location is adjusted. The adjustment phase is repeated several times within a range of adjustments to find the one that gives the path with the shortest travel-time. In the second subdivision step, the same process is applied to all the individual segments produced in the first step. Subsequent subdivisions continue in the same routine doubling the number of segments each time and constructing more smoothly curved paths. For each step, the travel-time along the path is calculated and the subdividing process stops when a stable minimum travel-time is obtained.
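The bending scheme can be illustrated with a simplified 2D sketch. This is a didactic approximation, not the revised Um & Thurber algorithm implemented in GeoTomCG: the perturbation step sizes and iteration counts below are arbitrary, and the adjustment uses trial perpendicular moves rather than the analytic midpoint update of the original method.

```python
import numpy as np

def travel_time(path, vel):
    """Travel-time along a piecewise-straight path; vel(x, z) -> m/s.
    Each segment uses the slowness averaged at its two endpoints."""
    t = 0.0
    for p, q in zip(path[:-1], path[1:]):
        s = 0.5 * (1.0 / vel(*p) + 1.0 / vel(*q))
        t += s * np.linalg.norm(q - p)
    return t

def bend_ray(src, rec, vel, n_doublings=4, step=2.0, n_adjust=5):
    """Simplified bending: repeatedly double the number of segments
    (division phase), then nudge each interior node perpendicular to
    the local ray direction, keeping only moves that shorten the
    travel-time (adjustment phase with a shrinking step)."""
    path = [np.asarray(src, float), np.asarray(rec, float)]
    for _ in range(n_doublings):
        # division phase: insert a midpoint in every segment
        new = [path[0]]
        for p, q in zip(path[:-1], path[1:]):
            new += [0.5 * (p + q), q]
        path = new
        # adjustment phase: perpendicular trial moves
        h = step
        for _ in range(n_adjust):
            for i in range(1, len(path) - 1):
                d = path[i + 1] - path[i - 1]
                n = np.array([-d[1], d[0]])
                n /= np.linalg.norm(n)
                for cand in (path[i] + h * n, path[i] - h * n):
                    trial = path[:i] + [cand] + path[i + 1:]
                    if travel_time(trial, vel) < travel_time(path, vel):
                        path = trial
            h *= 0.5
    return path
```

In a constant-velocity field the straight path is already optimal, so no perturbation is accepted; in a field with a vertical velocity gradient the accepted moves curve the ray toward the faster region, reducing the travel-time, which mimics the behavior described above.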
GeoTomCG software performs inversions with the Simultaneous Iterative Reconstruction Technique (SIRT) [31,32], which is less influenced by noise and can obtain more stable results compared to other algorithms [33]. The SIRT method modifies an initial velocity model by repeating a three-step cycle: forward calculation of travel-times using ray-tracing, calculation of the residuals between the calculated and the observed travel-times, and application of velocity corrections to update the velocity model. These three steps are common to ray-tracing tomographic methods, which mainly comprise iterative reconstruction techniques and matrix inversion methods. These methods are distinguished by the way the velocity corrections are calculated. Iterative reconstruction techniques (e.g., SIRT) calculate the corrections by using the residuals of ray-paths, while matrix inversion methods (e.g., the least-squares method, LSQR) solve a linear system to obtain the corrections. Since the system matrix is usually quite large, sparse, and ill-conditioned due to the ray distribution, matrix inversion methods require a large computational effort. The SIRT method avoids the problems of solving this matrix and has the advantage of relatively low computational time for the reconstruction. The GeoTomCG implementation of the SIRT inversion algorithm offers the opportunity to set a damping factor and a smoothing parameter to stabilize the solution, and a low-pass filter for the velocity field. A local coordinate system with its origin at the lower-left corner of the contour map shown in Figure 7 was used during the inversion. The elevation values were not changed, while the X- and Y-axes were parallel to the east and north directions, respectively. The input for the inversion consists of the travel-times and the coordinates of the sources and geophones.
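The SIRT cycle can be condensed into a few lines. This is a textbook-style sketch on a fixed ray-path matrix, not the GeoTomCG implementation, which also applies damping, smoothing, and curved-ray updates between iterations; the relaxation factor is an assumption.

```python
import numpy as np

def sirt(L, t_obs, s0, n_iter=100, relax=0.1):
    """Minimal SIRT sketch for travel-time tomography.

    L     : (n_rays, n_cells) matrix of ray-path lengths in each cell (m)
    t_obs : observed travel-times (s)
    s0    : starting slowness per cell (s/m)
    Each iteration (1) forward-computes travel-times, (2) forms the
    residuals, and (3) back-projects each residual along its ray and
    averages the corrections cell by cell.
    """
    s = s0.copy()
    ray_len = L.sum(axis=1)                     # total length of each ray
    hits = np.maximum((L > 0).sum(axis=0), 1)   # rays crossing each cell
    for _ in range(n_iter):
        res = t_obs - L @ s                     # travel-time residuals
        corr = L.T @ (res / ray_len)            # length-weighted back-projection
        s += relax * corr / hits                # averaged, relaxed update
    return s
```

Working in slowness rather than velocity keeps the forward problem linear for fixed ray-paths, which is why the residual-driven update converges without ever assembling or inverting the full system matrix.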
Travel-times were preliminarily filtered by removing outliers, defined as those producing an average velocity (calculated assuming a straight ray between source and receiver) outside the 300-5000 m/s range, where 300 m/s is the approximate P-wave velocity in air and 5000 m/s was assumed as the maximum P-wave velocity in this limestone rock mass. In order to test the effectiveness of the velocity model obtained from this inversion, some travel-time data recorded by the monitoring network during the tomographic survey were excluded from the inversion and were used to re-locate the shots, as discussed in the next section. Figure 9 illustrates the ray-path distribution traced on the initial velocity model (constant velocity) and on the final velocity model (after eight iterations). Figure 10 shows the average residual for each iteration; after eight iterations the average residual became stable. The P-wave velocities obtained from the inversion span a wide range, from 300 to 5000 m/s. The resulting velocity model based on the 3D grid is shown in Figure 11, with the color scale restricted to 400-4600 m/s to highlight the most significant velocity variations. Only the areas sampled by at least one ray are displayed. Thanks to the three geophones installed on the rock face, the velocity was resolved between 590 m and 700 m elevation. Figure 11 shows a gradual and remarkable increase in velocity from the summit of the cliff (about 500-1500 m/s) to the lower part of the rock mass near geophones 3, 4, and 5 (>3000 m/s), with some areas of the rock mass showing higher velocities (>4000 m/s). Horizontal slices of the velocity distribution at 20 m elevation intervals from the top of the rock cliff are presented in Figure 12, and Figure 13 shows some vertical cross-sections. White is used for air, while gray is used for rock volumes that are not crossed by any ray.
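The travel-time pre-filtering criterion described at the start of this section (apparent straight-ray velocity within 300-5000 m/s) reduces to a one-line vectorized check:

```python
import numpy as np

V_AIR, V_MAX = 300.0, 5000.0  # m/s bounds used to reject picks

def filter_picks(src, rec, t):
    """Keep only picks whose straight-ray apparent velocity is plausible.

    src, rec : (n, 3) arrays of source/receiver coordinates (m)
    t        : (n,) array of picked travel-times (s)
    Returns a boolean mask of picks to keep.
    """
    d = np.linalg.norm(rec - src, axis=1)   # straight source-receiver distance
    v_app = d / t                           # apparent velocity
    return (v_app >= V_AIR) & (v_app <= V_MAX)
```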
Velocity maps in Figures 12 and 13 indicate that the velocity distribution is highly heterogeneous from the summit of the cliff to the lower section. Heterogeneous velocity distributions in unstable rock slopes may be caused by weathered layers, cracks, fractures, and faults. Heincke et al. [22] showed that many fracture zones and faults observed at the surface were the causes of the ultra-low velocities (<1500 m/s) in a gneissic rock at a depth of about 25 m. Dussauge-Peisser et al. [24] indicated that the low-velocity zones could be correlated to field observations of open fractures on a limestone cliff. In our case, we can observe low velocities on the top of the rock cliff, due to rock weathering and the presence of a thin cover of rock debris, as well as a high-velocity area (>4000 m/s) on the rock cliff. Using the 3D velocity model, we estimated the volume potentially involved in a collapse of the unstable rock pillar. Since the estimation of this volume was based on the velocity distribution, we only considered the area traveled by rays, because there is no reliable velocity information in areas not crossed by rays (Figures 12 and 13). Due to the large number of fractures within and surrounding the unstable rock pillar, the velocities in this area should be relatively low. Considering the maximum value of the 3D velocity model (5000 m/s) and the relatively low-velocity zone (2200-3200 m/s) corresponding to the upper part of the partially detached rock pillar, three velocity thresholds were selected to quantify the low-velocity distribution and to calculate the volume: 2000 m/s, 2500 m/s, and 3000 m/s. Considering the actual position of the unstable rock pillar and the positions of the three geophones on the rock surface, the volume calculation was constrained to distances X < 38 m. The results of the volume estimation are shown in Table 3.
The volume with velocity less than 2000 m/s was only 336 m³, while the volumes with velocity less than 2500 m/s or 3000 m/s were over 1000 m³. The total low-velocity volume was 1392 m³, much smaller than the 15,000 m³ rock collapse that occurred in 1969. The monitoring network only covers the upper part of the unstable rock pillar (Figure 1c) and thus the area traveled by rays was only a small part of the whole unstable rock pillar. As a result, the volumes in Table 3 should be considered a very optimistic estimate of the possible volume involved in a future collapse from that unstable area.

Table 3. Estimation of the low-velocity volume.
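The threshold-volume computation reduces to counting grid cells; the function below is an illustrative sketch, assuming one velocity value per cell and a cell volume of 2 × 2 × 4 m = 16 m³ as in the model discretization.

```python
import numpy as np

def low_velocity_volume(vel, cell_vol, v_thr, x, x_max=38.0, ray_hit=None):
    """Volume of ray-sampled cells with velocity below a threshold,
    restricted to X < x_max (the constraint used in the paper).

    vel      : 1D array of cell velocities (m/s)
    cell_vol : volume of one grid cell (m^3)
    v_thr    : velocity threshold (m/s)
    x        : 1D array of cell X coordinates (m)
    ray_hit  : boolean mask of cells crossed by at least one ray
    """
    if ray_hit is None:
        ray_hit = np.ones_like(vel, dtype=bool)
    mask = ray_hit & (vel < v_thr) & (x < x_max)
    return float(mask.sum() * cell_vol)
```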

Sensitivity Test
To evaluate the stability of the tomographic inversion, we performed two sensitivity tests with synthetic data. The first test was the checkerboard resolution test [34,35]. First, a checkerboard model was built with positive and negative velocity perturbations of 200 m/s assigned alternately to grid nodes in three-dimensional space at an interval of 6 × 6 × 8 m. We then added this checkerboard model to a constant velocity model (2000 m/s). In this perturbed velocity model, synthetic travel-times were calculated with the same source-receiver layout as the real survey using the 3D ray-tracing method. The synthetic travel-times were then inverted using the same inversion algorithm and parameter settings used in the inversion of the real travel-times. Finally, we calculated the difference between the inverted model and the constant model. The image of the reconstructed checkerboard from the synthetic inversion indicates the achievable resolution. Figure 14 shows the checkerboard anomaly patterns and the results of the checkerboard resolution test on a few examples of the vertical and horizontal slices. The anomalies in the upper areas (Z > 666 m), where the ray coverage is the best (Figure 9), are well restored. In the middle areas (626 m < Z < 666 m), the anomalies are smeared following the almost vertical distribution of ray-paths (Figure 14b, X = 20, 32, 38 m). In the lower areas (Z < 626 m), where the three surface geophones are located, the anomalies are only partially restored due to the relatively poor ray coverage (Figure 14a, Z = 624 m; Figure 14b, X = 32 m). We also tested checkerboard anomaly patterns of smaller size (e.g., 2 × 2 × 4 m and 4 × 4 × 4 m), but the inversion failed to reconstruct the anomaly distribution. Thus, the results show that the horizontal resolution is around 6 × 6 m throughout the model, while the vertical resolution is close to 8 m in the upper area and larger than 8 m in the middle and lower areas.
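Building the perturbation pattern is straightforward. The sketch below assumes the 6 × 6 × 8 m blocks correspond to 3 × 3 × 2 grid nodes (2 m horizontal, 4 m vertical node spacing); the mapping of block size to node counts is our assumption.

```python
import numpy as np

def checkerboard(shape, block, dv=200.0, v0=2000.0):
    """Constant model plus alternating +/-dv perturbation blocks.

    shape : number of grid nodes along (x, y, z)
    block : block size in nodes along each axis (e.g. (3, 3, 2)
            for 6 x 6 x 8 m blocks on a 2 x 2 x 4 m grid)
    """
    idx = np.indices(shape)
    # parity of the block index along the three axes decides the sign
    parity = sum(idx[a] // block[a] for a in range(3)) % 2
    return v0 + dv * np.where(parity == 0, 1.0, -1.0)
```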
The second synthetic test was the restoring resolution test [34]. First, the 3D velocity model obtained from the inversion was set as the synthetic model. Then, the synthetic travel-times were calculated for the same source-receiver layout as the inversion using the 3D ray-tracing method. We added random errors with a standard deviation of 2 ms (the expected accuracy of the picked travel-times) to the synthetic data. The synthetic travel-times were then inverted using the same inversion algorithm and parameter settings used in the inversion of the real travel-times. A comparison of the results obtained from the two inversions shows how well the main features of the real result are restored. Figure 15 shows horizontal slices of the velocity distributions obtained from the two inversions (top and middle rows). Although some differences can be observed between the two velocity models, the main features are present in both. The smaller details that remain stable despite the noise perturbation of the travel-times (Figure 15) give a rough indication of the resolution of the tomographic reconstruction of the velocity field. Accordingly, it can be assumed that we can resolve horizontal features with dimensions of about 6 × 6 m, which is consistent with the results of the checkerboard resolution test. The results of the test are also important to understand where the solution is more stable, i.e., where the problem is better constrained by the source-receiver layout of the experiment. These are the volumes where our tomographic test is more sensitive to the velocity field, while the volumes where the random noise generates higher instabilities are volumes of poor sensitivity because of ill-conditioning. To evaluate the solution stability, we repeated the restoring resolution test with fresh realizations of the random errors added to the synthetic travel-times.
After 20 tests, we calculated the ratio of the standard deviation to the average of the reconstructed velocity field in each grid cell. The result is plotted in the bottom row of Figure 15. The highest deviations are on the order of 25%, but most deviations are lower than 13%. By comparing the velocity images (top row) pixel by pixel with the corresponding standard deviation maps (bottom row), we can identify the regions where the estimated velocities are more unstable and less reliable. The upper and lower zones of the 3D velocity model appear better constrained than the intermediate part (e.g., standard deviation map at Z = 652 m), which is reasonable considering that the intermediate part of the investigated volume contains neither sources nor receivers and is only crossed by nearly vertical rays.
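The per-cell stability statistic described above can be sketched as follows. Here a stack of synthetic stand-in models replaces the 20 models actually reconstructed by repeated inversions; the statistic itself (standard deviation divided by mean, per grid cell) is the one used in the text.

```python
import numpy as np

# Stand-in for the 20 models reconstructed from repeated restoring tests,
# each with a fresh realization of the 2-ms picking errors. In practice
# these come from the inversion; here a base model plus noise is assumed.
rng = np.random.default_rng(1)
n_tests = 20
base = rng.uniform(1500.0, 4500.0, size=(30, 30, 20))            # m/s
models = base + rng.normal(0.0, 150.0, size=(n_tests,) + base.shape)

# Coefficient of variation per grid cell: std / mean over the 20 tests.
cv = models.std(axis=0) / models.mean(axis=0)
print(f"max deviation: {100 * cv.max():.1f}%")
print(f"fraction of cells below 13%: {np.mean(cv < 0.13):.2f}")
```

Cells with a high coefficient of variation mark the poorly constrained volumes where the solution is most sensitive to picking noise.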

Localization of Seismic Shots
We performed a source re-localization test to assess and quantify the benefit of using a 3D velocity model rather than a constant velocity model. Re-localization was performed with the 5 geophones of the microseismic network to estimate the accuracy that can be expected when localization is applied to real events recorded by the monitoring system. As mentioned in the previous section, the data used in the re-localization were excluded from the tomographic inversion.
Among the different methods available for localization, we selected the direct global grid search method [36], commonly used to locate hypocenters of earthquakes. To define the misfit function, we used the Equal Differential Time method (EDT) [37,38], which compares the difference in the observed arrival times with the difference in the calculated travel-times for any two geophones. The hypocenter can be determined by searching for the source point that best satisfies all the observation pairs, i.e., the point that minimizes the global misfit between the observed and calculated differential times. The misfit function that we use is a standard L2-norm misfit function:

E(x) = Σ_{a<b} [(t_a − t_b) − (T_a(x) − T_b(x))]²

where t_a and t_b are the observed arrival times at geophones a and b, and T_a(x) and T_b(x) are the calculated travel-times between grid point x and geophones a and b, respectively. If the number of geophones is N, the sum is taken over all N(N − 1)/2 pairs of geophones. The 3D grid model is the one obtained in the tomographic inversion. Based on the grid spacing, the resolution of localization is 2 m in the horizontal direction and 4 m in the vertical direction. The travel-times are calculated using the bending ray-tracing method implemented in the GeoTomCG software. The misfit values are calculated for all the points in the grid model, and the estimated source position is the grid node with the minimum misfit.
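The EDT grid search above can be sketched as follows. The travel-time table from every grid node to every geophone is assumed to be precomputed by the ray tracer; the function names and the toy data are illustrative, not part of the original workflow.

```python
import numpy as np

def edt_locate(t_obs, travel_times):
    """EDT grid search: return the index of the grid node minimizing the
    L2 sum of equal-differential-time residuals over all geophone pairs.

    t_obs        : (n_geo,) observed arrival times (s)
    travel_times : (n_geo, n_nodes) precomputed travel-times (s)
    """
    n_geo, n_nodes = travel_times.shape
    misfit = np.zeros(n_nodes)
    for a in range(n_geo):
        for b in range(a + 1, n_geo):          # all N(N-1)/2 pairs
            obs_diff = t_obs[a] - t_obs[b]
            calc_diff = travel_times[a] - travel_times[b]
            misfit += (obs_diff - calc_diff) ** 2
    return int(np.argmin(misfit)), misfit

# Toy check: noise-free picks generated from node 7 should be relocated
# to node 7. The unknown origin time cancels in the differences, which is
# the key property of the EDT formulation.
rng = np.random.default_rng(2)
T = rng.uniform(0.0, 0.05, size=(5, 20))       # 5 geophones, 20 grid nodes
true_node, origin_time = 7, 0.123
picks = T[:, true_node] + origin_time
best, _ = edt_locate(picks, T)
print(best)  # -> 7
```

Note that because only arrival-time differences enter the misfit, no origin time needs to be estimated, which makes EDT robust for continuously recorded microseismic events.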
The data excluded from the tomographic inversion and selected for the localization test consist of eight hammer shots. Both the 3D velocity model and the constant velocity model were used to localize the shots with the 5 geophones of the microseismic network, to assess the accuracy improvement given by the 3D model and the location accuracy that we can generally expect from the monitoring system. The constant velocity used for the comparison is 1703 m/s, i.e., the constant velocity that best fits the tomographic data used to generate the 3D velocity model, found by re-inverting the tomographic data with a single velocity describing the entire rock mass.
The location errors are shown in Figure 16. When using 5 geophones, the average location error with the constant velocity field is 39 m (ranging from 31 to 45 m), while the average error is about 15 m (ranging from 5 to 28 m) when using the 3D velocity model. The results thus show a clear improvement in the accuracy of source localization when using the 3D velocity model rather than the constant velocity model. However, the location errors with 5 geophones are relatively large considering the small monitored volume. The large errors are mainly due to the fact that only 2 geophones are near the shots performed on the mountain surface, while the other 3 geophones, installed on the rock cliff, are much farther from the shots and nearly vertically aligned. As a result, the localization problem for shots performed on the rock face summit is not well constrained, especially in the XY plane.
Although the location errors using the 5 geophones are relatively large for localizing real events, the accuracy is sufficient to distinguish microseismic events occurring within or close to the monitored rock mass from microseismic events generated far from the unstable rock slope. For a more accurate classification, i.e., to distinguish events caused by the development of fractures in the rock mass from rock falls occurring near the monitoring area, we need to further reduce the localization error. However, for events occurring within the rock mass, e.g., closer to the three geophones installed on the rock cliff, rather than surface events on the rock face summit, we are confident that the localization problem is better constrained and the localization error could be lower than 15 m. In addition, we can expect further improvements from the following actions: (a) improving the velocity field in the neighboring areas not explored by the tomographic survey (currently described by a constant velocity) by extrapolating the 3D velocity model, and (b) testing other misfit functions and introducing into the misfit function a quality factor based on the estimated reliability of the picked travel-times. Of course, the most effective action to achieve the desired localization accuracy would be to increase the number of geophones of the monitoring network by adding receivers in strategic positions on the rock cliff.

Conclusions
Source tests were performed using the five 3C geophones of a small microseismic network deployed on an unstable limestone cliff to determine the most suitable source for the tomographic survey. The results showed that the hammer and the seismic gun performed better than small firework charges. Furthermore, the hammer and the seismic gun showed similar performance in terms of SNR and produced similar frequency spectra for receivers farther than 10 m. Therefore, we selected the hammer as the source because of its superior portability and flexibility in the harsh field conditions of the study site.
Seismic tomography was conducted on top of the rock face where a rockfall occurred in 1969. A 24-geophone spread was deployed and combined with the microseismic network to record the seismic data. Inversion of the first-arrival travel-times revealed a high heterogeneity of the velocity distribution in the rock mass. As expected, we found lower velocities in the shallow unconsolidated part at the top of the mountain (500-1500 m/s), likely due to rock weathering. Relatively low velocities (2200-3200 m/s) were observed in the most critical area of the rock cliff, monitored by the geophones and corresponding to the partially detached pillar. In contrast, high velocities (>4000 m/s) were estimated elsewhere, indicating compact, unfractured limestone.
A few shots excluded from the tomographic inversion were used for a re-localization test to validate the 3D velocity model and to evaluate the expected location accuracy offered by the 5-geophone network. The EDT location method was used with an L2-norm misfit function to estimate the hypocenters. Localization errors were reduced significantly, from 39 m to 15 m, by using the 3D velocity model rather than a constant velocity field. Although these errors are relatively large, the accuracy seems sufficient to distinguish microseismic events near the monitoring site from those far from the monitoring network. To better classify microseismic events near the monitoring area, i.e., to distinguish rock falls from fracture development, the localization results should be further improved, for example, by better extrapolating the velocity field to the areas neighboring the current 3D model and by testing new misfit functions, possibly including a travel-time quality factor. However, we point out that the expected localization error for events generated within the monitored rock mass is likely to be smaller than 15 m, since this error was obtained for events generated on the rock face summit, for which the location problem is seriously under-constrained. Therefore, after this validation, the localization algorithm will be systematically applied to locate the microseismic events using the 3D velocity model. The results will give reliable information to support signal classification, to discriminate events arriving from the most hazardous areas, and to monitor the microseismic activity rate in the area. The 3D velocity model will also be used to perform a systematic analysis of the sensitivity and the expected accuracy of the hypocenter localization in the different areas of the rock mass. We expect that the results of this analysis will suggest the optimal design of a microseismic network expanded by installing additional geophones in strategic positions.