PlumeTraP: A New MATLAB-Based Algorithm to Detect and Parametrize Volcanic Plumes from Visible-Wavelength Images

Abstract: Tephra plumes from explosive volcanic eruptions can be hazardous for the lives and livelihoods of people living in the proximity of volcanoes. Monitoring and forecasting tephra plumes play essential roles in the detection, characterization and hazard assessment of explosive volcanic events. However, advanced monitoring instruments, e.g., thermal cameras, can be expensive and are not always available in monitoring networks. Conversely, visible-wavelength cameras are significantly cheaper and much more widely available. This paper proposes an innovative approach to the detection and parametrization of tephra plumes, utilizing videos recorded in the visible wavelengths. Specifically, we have developed an algorithm with the objectives of: (i) identifying and isolating plume-containing pixels through image processing techniques; (ii) extracting the main geometrical parameters of the eruptive column, such as the height and width, as functions of time; and (iii) determining quantitative information related to the plume motion (e.g., the rise velocity and acceleration) using the physical quantities obtained through the first-order analysis. The resulting MATLAB-based software, named Plume Tracking and Parametrization (PlumeTraP), semi-automatically tracks the plume and is also capable of automatically calculating the associated geometric parameters. Through application of the algorithm to the case study of Vulcanian explosions from Sabancaya volcano (Peru), we verify that the eruptive column boundaries are well recognized, and that the calculated parameters are reliable. The developed software can be of significant use to the wider volcanological community, enabling research into the dynamics of explosive volcanic eruptions, as well as potentially improving the use of visible-wavelength cameras as part of the monitoring networks of active volcanoes. Furthermore, PlumeTraP could potentially find a broader application for the analysis of any other plume-shaped natural or anthropogenic phenomena in visible wavelengths.


Introduction
Volcanic plumes are mixtures of volcanic particles (i.e., tephra), gases and entrained air and are associated with a variety of explosive eruptions. Large particles (>2 mm, i.e., lapilli and blocks/bombs) can be part of this mixture in the early stages of plume development while, later, only ash-sized particles (<2 mm) are involved. The material is injected into the atmosphere, rises and expands in a turbulent flow before, potentially, being dispersed at multiple scales [1]. This can result in tephra dispersal in the atmosphere and tephra fallout on the ground, both of which can have severe consequences for everyday life, e.g., air traffic, buildings, infrastructure, agriculture [2][3][4], and for people's health [5]. Consequently, the dispersal of volcanic particles is one of the most widespread hazards posed by active volcanoes. Ensuring an accurate assessment of the hazard posed by volcanic plumes is, therefore, an essential step in reducing impacts on communities and ecosystems [6].
In order to provide a comprehensive assessment of volcanic hazards, it is important to characterize past activity of active volcanoes and carry out probabilistic modeling (e.g., [7]). Such information can also improve the accuracy and precision of forecasting activities (e.g., [8]). Nowadays, volcanic activity is monitored using both ground-and space-based instruments with different spectral coverage, accuracy and global measurement density that allow volcanologists to record and characterize explosive eruptions as well as to support modeling of volcanic processes (e.g., [9][10][11][12][13][14]).
Nevertheless, the use of video recordings of volcanic eruptions to make scientific measurements is not widespread [15] and the majority of such studies reported in the literature, particularly in recent years, are based on thermal infrared (TIR) imagery [15][16][17][18]. Indeed, there are relatively few studies in which visible-wavelength cameras have been used [19][20][21][22]. Despite this, visible-wavelength cameras are frequently installed as monitoring equipment on volcanoes [12,23,24], although the obtained data are often only qualitative or semi-quantitative. On the other hand, despite their high potential for volcano monitoring and the rapid development of their technology, infrared cameras remain relatively niche products [25]. This is due to the prohibitive costs (tens of thousands of USD) and the low resolution (rarely greater than 640 × 512 pixels) of infrared cameras compared to high-resolution visible-wavelength cameras [25]. However, the development of an automated method for the detection of emitted plumes and quantification of plume properties from visible-wavelength images, without the need for manual tracking performed by the user, remains lacking. Although some methods designed for use with infrared imagery do exist [15,18], they cannot be effectively applied to videos recorded in visible light. This is because visible-wavelength videos are often affected by variable lighting conditions and poor contrast between the plume and the background. As opposed to thermal images, images in the visible wavelengths are composed of three channels (red, green and blue), and the contrast between the tephra plume and the rest of the image is highly dependent on the meteorological conditions. Therefore, visible image analysis of volcanic plumes has almost always been overlooked because of the difficulty in cleanly tracking the plume, especially in adverse meteorological conditions.
In fact, a critical part of the image processing is to automatically distinguish pixels of the image corresponding to the plume from pixels corresponding to the sky, the background and the landscape. As such, the use of visible-wavelength analysis in the study of volcanic plumes remains limited.
Motivated by the need for an automated method for tracking volcanic plumes using visible-wavelength imagery, this work presents the development of Plume Tracking and Parametrization (PlumeTraP), a MATLAB-based software able to detect, track and parametrize plume-shaped objects through the analysis of visible videos. This is achieved by developing segmented masks for visible-wavelength images of eruption plumes that exclude noise and provide a good outline of the plume boundaries. The developed algorithm has been tested on tephra plumes recorded in different daylight and atmospheric weather conditions, with satisfactory results up to a certain level of cloudiness, which represents the primary limiting factor of the present work. However, this limitation is intrinsic to the use of visible-light sensors and cannot be removed, only mitigated. This project is pioneering because PlumeTraP is based on a quasi-automatic image processing technique that outputs the fundamental geometric and time-derivative plume parameters, which represent the starting point for modeling a specific event. Moreover, with targeted adjustments, PlumeTraP could be adapted in the future for near real-time eruption monitoring.
In the rest of this paper, we first summarize the dataset used to test the methodology (Section 2), before going on to describe the structure of the software and image analysis methodology (Section 3). We then present results of the image analysis, i.e., plume isolation, and calculations of physical parameters, e.g., heights and rise velocities (Section 4). We finally discuss these results whilst comparing with other works (Section 5) before outlining our conclusions (Section 6). A detailed tutorial about how to set up and use PlumeTraP is presented in Appendix A.

Field Site
The dataset used to test the software was collected during a field campaign in July and August 2018 at Sabancaya volcano (Figure 1), Peru, and consists of 49 high-resolution visible-wavelength videos, each recording one explosion. The videos were captured using a Canon Legria HF G40 at a frame rate of 50 fps and a frame size of 1920 × 1080 px. Sabancaya is a Holocene-aged intermediate-composition stratovolcano [26] belonging to the Ampato-Sabancaya Volcanic Complex in Southern Peru. During the last few thousand years, it has been characterized by low-to-moderate explosive activity with a volcanic explosivity index (VEI) of 1-2 [26,27]. After a two-century-long quiescence, its recent activity is characterized by intermittent periods of explosive activity lasting years (e.g., 1990-1998 and 2016-present day). Eruptive periods consist of alternating Vulcanian and phreatomagmatic to phreatic events [26][27][28][29]. The field campaign of 2018 was conducted during the current eruptive cycle (2016-ongoing), characterized by multiple Vulcanian explosions per day, with maximum tephra plume heights varying between 1 and 3 km above the crater [30][31][32].

Methods: Structure of PlumeTraP Software
The primary goal of this work was to develop a new algorithm to process and analyze each frame extracted from a video recorded in the field of visible light, with the specific objective of detecting and tracking the plume through time. PlumeTraP, the result of this work, was written in MATLAB version R2018b and uses the Image Processing Toolbox 10.3. However, the software has also been successfully tested with the R2020a and R2021b releases (using the Image Processing Toolbox 11.4). A detailed guide describing how to set up and work with PlumeTraP is presented in Appendix A and we have uploaded the MATLAB scripts that can be found in the Supplementary Materials to the repository https://doi.org/10.5281/zenodo.6406009 (accessed on 26 June 2021), where future releases will also be added. In the rest of this section, we present a description of the workflow of PlumeTraP.

Video Analysis
Once the user has selected the video (or videos) to be analyzed, the first operation performed by PlumeTraP is to automatically save the first frame of the video and then a sequence of frames at a frequency determined by the user. This enables the user to produce output data at a temporal resolution sufficient for their purpose, up to a maximum of the frame rate of the original recording. The frequency is set through the scale factor n, that ranges from 1 up to the frame rate of the original recording and may be a non-integer (e.g., if n = 1, a 50 fps video results in a frame saved every 50 frames). The result is a set of images that corresponds to the original video resampled with a frame rate of n fps. Whilst a resampled frame rate of 1 fps is sufficient to obtain satisfactory data for the bulk properties, i.e., height and width, of the volcanic plumes studied here, there may be other processes which happen on shorter timescales, e.g., turbulence.
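As an illustrative sketch (not PlumeTraP code; the function and variable names are our own), the mapping from the scale factor n to the indices of the saved frames could be written as:

```python
def frames_to_save(total_frames, video_fps, n):
    """Indices of the frames kept when resampling a video at n fps.

    A frame is saved every video_fps / n original frames, so n = 1
    keeps one frame per second and n = video_fps keeps every frame.
    """
    step = video_fps / n  # may be non-integer, since n may be non-integer
    indices = []
    k = 0.0
    while round(k) < total_frames:
        indices.append(int(round(k)))
        k += step
    return indices

# A 3 s video at 50 fps resampled with n = 1 keeps one frame every 50:
print(frames_to_save(150, 50, 1))  # → [0, 50, 100]
```

With n equal to the original frame rate every frame is kept, reproducing the full temporal resolution of the recording.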
Typically, the identification of plume pixels is achieved through thresholding and thus relies on a strong difference in pixel intensity between the plume and its surroundings [15]. For visible-wavelength imagery, an effectively segmented output therefore requires a more elaborate strategy than for thermal imagery, combining multiple image processing methods to reconstruct the plume shape. We first explored multiple image analysis methods for detecting plume-containing pixels using ImageJ [33] and, once a reliable strategy was found, this was then applied to all frames of the videos in MATLAB. The PlumeTraP image analysis technique is shown in Figure 2 and consists of the following procedure:

1. To produce a segmented plume image for a given frame, three images are required (Figure 2a):
• the image of the current frame (at time t);
• an initial image captured just prior to eruption (at time t0), which serves as a reference for the background subtraction;
• the image of the previous frame (at time t − n⁻¹).
2. The RGB multichannel images are split into their red, green and blue channels, obtaining three images in which each pixel has a specific value of brightness intensity between 0 and 255.
3. For each image, the red channel is subtracted from the blue channel as, during testing and development, we found that this commonly increases the contrast between the plume and the background compared to other possible operations. The result is an image where the plume and the landscape are highlighted with respect to the rest of the image (Figure 2b).
4. A segmentation process is then applied to create binary images using a user-defined threshold pixel intensity value (Figure 2c). This threshold luminance value is determined during a pre-processing step and iterated until a satisfactory segmented output is obtained. Suggested values and additional information about the pre-processing can be found in Appendix A (Appendix A.3).
5. Two partial masks are created by exploiting the difference in intensity between the images (Figure 2d):
• the first consists of the modulus of the subtraction of the segmented first frame from the segmented current frame, resulting in a binary image that shows the plume without the landscape. However, some meteorological clouds (if they are moving) and some noise may remain. In the case of the first frame, the subtraction gives an empty image;
• the second results from the modulus of the difference between the segmented current frame and the segmented previous frame (this is not applied to the first frame, as a preceding image does not exist). Thus, it highlights the local movement of the plume (and the meteorological clouds) between frames and is fundamental to creating a mask that fits the original plume extension [15].

Remote Sens. 2022, 14, 1766

6. The two masks are then summed (Figure 2e), which goes a long way towards enabling identification of the presence and evolution of the plume. However, the plume is still not fully isolated from the rest of the image.
7. A two-dimensional median filter is applied, using a window of 4 × 4 pixels, to remove noise (Figure 2f).
8. A mask is then applied such that all the pixels outside a region of interest (ROI) containing the plume are set equal to zero. This ROI is selected as the minimum rectangular area that captures the plume on the last frame and can be drawn automatically or manually during the pre-processing step (see Appendix A.3). This mask is essential for ensuring that the plume is selected rather than other similar objects (e.g., atmospheric clouds, degassing clouds or the landscape if its intensity and contrast are similar to those of the plume) (Figure 2g).
9. All objects (white regions) apart from the largest are removed, thus eliminating noise and objects not connected with the plume (Figure 2h).
10. Finally, holes in the remaining object are filled, leading to the final image with the extracted plume shape (Figure 2i).
This processing is applied to all the images previously saved to obtain images of the plume shape and its evolution through time.
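The core of this procedure (channel subtraction, thresholding and combination of the two partial masks, i.e., steps 2-6) can be illustrated with a minimal NumPy sketch on synthetic arrays. The median filtering, ROI masking and largest-object selection of steps 7-10 are omitted for brevity, and all names are ours, not PlumeTraP code:

```python
import numpy as np

def segment(frame_rgb, first_rgb, prev_rgb, threshold):
    """Sketch of PlumeTraP steps 2-6 on three RGB frames."""
    def binarize(rgb):
        # Steps 2-3: blue channel minus red channel enhances the
        # contrast between plume and background.
        diff = rgb[..., 2].astype(int) - rgb[..., 0].astype(int)
        # Step 4: user-defined threshold produces a binary image.
        return diff > threshold

    cur, ini, pre = binarize(frame_rgb), binarize(first_rgb), binarize(prev_rgb)
    mask_bg = cur ^ ini  # step 5a: background-removal mask (modulus of difference)
    mask_mv = cur ^ pre  # step 5b: local-movement mask
    return mask_bg | mask_mv  # step 6: sum (union) of the two partial masks

# Tiny synthetic 2 x 2 example: one pixel becomes "plume-like" (blue >> red).
first = np.zeros((2, 2, 3), np.uint8)
prev = first.copy()
cur = first.copy()
cur[0, 0] = (10, 10, 200)
print(segment(cur, first, prev, 50))
```

On binary images the modulus of a subtraction is equivalent to a logical XOR, which is how the two partial masks are expressed in this sketch.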

Geometrical Calibration
Following the image processing, it is then necessary to apply a geometric calibration to convert pixel locations into physical coordinates (from pixel units to metric units). Here, we follow a method modified from Bombrun et al. [15]. The vertical position z_j of a single pixel within the image, measured with respect to the altitude of the camera, is related to the horizontal distance Y of the camera from the image plane containing the plume and to the geometric parameters of the camera (Figure 3) through

z_j = Y tan(φ − β_v/2 + j β_v/N_v), (1)

where φ is the camera inclination with respect to the horizontal, β_v the vertical field of view (FOV), j the vertical position (measured in pixels from the bottom of the image) of the pixel whose height is being measured and N_v the number of pixels in the vertical direction. Thus, β_v/N_v is the vertical angle subtended by a single pixel. Furthermore, the geometric calibration produces a systematic error associated with the spatial resolution of the image, which must be taken into consideration and is automatically calculated by the algorithm. The vertical extent of a pixel is calculated as

∆z_j = z_j − z_(j−1) = Y [tan(φ − β_v/2 + j β_v/N_v) − tan(φ − β_v/2 + (j − 1) β_v/N_v)] (2)

and, therefore, the associated uncertainty z_err,j as

z_err,j = ∆z_j/2. (3)

In this way, z_err,j is defined as half the vertical pixel extent. Thus, the farther a pixel is from the center of the image, the greater the pixel's vertical extent and, consequently, the greater the uncertainty on z_j. There may well be other sources of uncertainty due to imprecise knowledge of the camera properties, e.g., location, inclination and orientation. However, as will be discussed in Appendix B, the uncertainty due to these will depend on how the visible data are collected and will be different for each user.

With regard to the horizontal component of the geometric calibration, Bombrun et al. [15] assumed the horizontal pixel extent ∆x_i to be approximately constant across the image. We choose to avoid this assumption by following a similar procedure to that employed for the vertical component. Thus, the horizontal position x_i of a pixel, with respect to the leftmost pixel of the image (Figure 4), is given by

x_i = Y [tan(β_h/2) − tan(β_h/2 − i β_h/N_h)], (4)

where β_h is the horizontal FOV and N_h the number of pixels in the horizontal direction. Then, the real horizontal extent ∆x_i of a pixel can be defined as

∆x_i = x_i − x_(i−1) = Y [tan(β_h/2 − (i − 1) β_h/N_h) − tan(β_h/2 − i β_h/N_h)]. (5)

We also define the horizontal systematic error associated with the spatial resolution of the image, x_err,i, as half the horizontal pixel extent, that is

x_err,i = ∆x_i/2. (6)

Furthermore, in the software, the user can enter both maximum and minimum possible values of Y to allow for uncertainty in the camera-to-image plane distance. Thus, the software calculates the pixel positions for both the maximum and minimum values of Y, before outputting the average of these values (with the superscript "mean") and the associated errors as

z_j^mean = [z_j(Y_max) + z_j(Y_min)]/2, x_i^mean = [x_i(Y_max) + x_i(Y_min)]/2 (7)

and

z_err,j^mean = [z_j(Y_max) − z_j(Y_min)]/2 + ∆z_j^mean/2, x_err,i^mean = [x_i(Y_max) − x_i(Y_min)]/2 + ∆x_i^mean/2, (8)

where ∆z_j^mean and ∆x_i^mean are the pixel extents evaluated at the mean distance. To summarize, Equations (1)-(8) are applied to each pixel by PlumeTraP to obtain the physical coordinates of each pixel and the associated error. However, if available, pixel positions can also be manually uploaded by the user (see Appendix A.1).
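The vertical component of this calibration can be sketched in a few lines of code. The function below follows our reading of the tangent relation described above; the function names and example values are ours and are not taken from the PlumeTraP source:

```python
from math import radians, tan

def pixel_height(j, Y, phi_deg, beta_v_deg, N_v):
    """Vertical position (m, relative to the camera altitude) of pixel
    row j, counted from the image bottom, for a camera at horizontal
    distance Y (m) from the image plane, inclined phi_deg above the
    horizontal, with vertical FOV beta_v_deg spanning N_v pixels."""
    angle = radians(phi_deg - beta_v_deg / 2 + j * beta_v_deg / N_v)
    return Y * tan(angle)

def pixel_height_error(j, Y, phi_deg, beta_v_deg, N_v):
    """Half the vertical extent of pixel j (systematic resolution error)."""
    return (pixel_height(j, Y, phi_deg, beta_v_deg, N_v)
            - pixel_height(j - 1, Y, phi_deg, beta_v_deg, N_v)) / 2

# A horizontal camera (phi = 0) sees its optical axis at mid-image, so
# the central row maps to the camera's own altitude, z = 0.
print(pixel_height(540, 5000.0, 0.0, 30.0, 1080))  # → 0.0
```

Consistent with the text, the error returned by this sketch grows for rows farther from the image center, since equal pixel angles subtend larger vertical distances there.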

Calculation of Parameters
Once the geometric correction is performed, it is possible to use the obtained pixel heights, measured with respect to the camera altitude, and horizontal positions to define the geometric parameters of the plume. The first-order analysis consists of calculating the main dimensions of the identified plume, i.e., height and width (as a function of height), as functions of time. A second-order analysis is then performed to retrieve the ascent velocity and the acceleration of the plume head in the atmosphere.
The plume height is defined by the uppermost pixel of the plume mask (Figure 5). The algorithm then subtracts the height of the vent from the calibrated highest value to obtain the height relative to the vent

H_f = h_max,f − h_vent, (9)

where h_max,f is the maximum height of the plume in frame f and h_vent is the height of the vent. The width of the plume is first calculated at each pixel level, before the maximum width is identified, to enable quantification of plume spreading during ascent (Figure 5). The width is calculated by subtracting, for each row, the horizontal position of the leftmost plume pixel from that of the rightmost plume pixel

w_r(f) = x_right,r − x_left,r, (10)

where w_r(f) is the width calculated in row r of a certain frame f, and x_right,r and x_left,r are the horizontal distances of the right and left plume margins, respectively, measured from the leftmost image pixel. The maximum value is then selected for each frame such that the maximum width as a function of time can be identified. Note that the outputted plume width is twice the radius.
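These first-order measurements can be sketched as follows, assuming a binary plume mask whose rows are ordered bottom-to-top and pre-computed calibrated pixel coordinates; all names and example values are ours, not PlumeTraP's:

```python
import numpy as np

def plume_height_and_width(mask, z, x, h_vent):
    """Plume-top height above the vent and maximum width from a binary
    plume mask (rows ordered bottom-to-top), given calibrated pixel
    heights z[row] and horizontal positions x[col], both in metres."""
    rows, cols = np.nonzero(mask)
    height = z[rows.max()] - h_vent  # h_max - h_vent
    widths = []
    for r in np.unique(rows):
        c = cols[rows == r]
        widths.append(x[c.max()] - x[c.min()])  # x_right - x_left per row
    return height, max(widths)

# 4 x 4 toy mask (row 0 = image bottom) with calibrated coordinates.
mask = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [1, 1, 1, 1],
                 [0, 0, 0, 0]], bool)
z = np.array([0.0, 10.0, 20.0, 30.0])  # pixel heights (m)
x = np.array([0.0, 5.0, 10.0, 15.0])   # pixel horizontal positions (m)
h, w = plume_height_and_width(mask, z, x, 0.0)
print(h, w)  # → 20.0 15.0
```

As in the text, the per-row widths are computed first and only their maximum is reported for each frame.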
The change in plume height between two frames can be converted to a vertical ascent velocity, calculated along the vertical axis, using the time elapsed between them. The algorithm calculates both an instantaneous rise velocity v_inst and a time-averaged rise velocity v_avg. The first is the difference in plume height between two consecutive frames, ∆H, divided by the time difference between the two frames, ∆t. The latter is obtained by dividing the plume height reached in the current frame, H, by the time elapsed since the eruption onset, t. These two quantities are expressed, respectively, as

v_inst = ∆H/∆t (11)

and

v_avg = H/t. (12)
In the same way, the instantaneous and time-averaged accelerations of the plume top are determined from the vertical ascent velocity. The instantaneous acceleration a_inst is calculated from the change in v_inst divided by ∆t, while the time-averaged acceleration a_avg is v_avg divided by t. These result in

a_inst = ∆v_inst/∆t (13)

and

a_avg = v_avg/t. (14)

It is important to note that these measurements are associated with sources of uncertainty. The first contributing factor is intrinsic to the pixel extent, as a given pixel represents a finite area rather than a single point. The other is related to the uncertain position of the vent, which varies between volcanoes, and even between different recording sites at the same volcano. As already mentioned, these errors are automatically calculated by PlumeTraP, making use of the finite pixel extent, and of the minimum and maximum estimates of the distance between the camera and the image plane, respectively.
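The second-order quantities can likewise be sketched from a time series of plume-top heights, taking the instantaneous acceleration as the change in v_inst over ∆t; variable names are ours, not PlumeTraP's:

```python
def rise_kinematics(heights, dt):
    """Instantaneous/time-averaged rise velocity and acceleration from
    plume-top heights sampled every dt seconds (eruption onset at t = 0)."""
    v_inst = [(h1 - h0) / dt for h0, h1 in zip(heights, heights[1:])]  # dH/dt
    v_avg = [h / (dt * (i + 1)) for i, h in enumerate(heights[1:])]    # H/t
    a_inst = [(v1 - v0) / dt for v0, v1 in zip(v_inst, v_inst[1:])]    # dv/dt
    a_avg = [v / (dt * (i + 1)) for i, v in enumerate(v_avg)]          # v_avg/t
    return v_inst, v_avg, a_inst, a_avg

# A constant 10 m/s rise sampled at 1 fps gives zero instantaneous
# acceleration throughout.
v_i, v_a, a_i, a_a = rise_kinematics([0.0, 10.0, 20.0, 30.0], 1.0)
print(v_i, a_i)  # → [10.0, 10.0, 10.0] [0.0, 0.0]
```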

Image Processing Techniques
Application of the created software, PlumeTraP, shows that it is capable of developing segmented masks for visible-wavelength images of eruption plumes that reproduce the plume shape with promising results (Figure 6). We have used raw videos taken at Sabancaya volcano to test the software, although pre-processing of the videos (e.g., brightness and contrast enhancement) is possible and may help to better isolate the plume. The primary problem related to the use of video recorded in visible wavelengths is the fact that each pixel is associated with a true color resulting from the combination of the three RGB channels. Furthermore, a strong difference in pixel intensity between the plume and the background is required to obtain a good plume isolation (Figure 6a,b). Thus, the algorithm does not work as well for videos with significantly cloudy conditions (Figure 6d) as for those in which there is a higher contrast between the plume and the background (i.e., a blue sky). Sometimes, this situation requires the user to choose between a good reproduction of the plume boundaries with some noise, and a less noisy segmented image where parts of the plume are lost. However, it is also important to note that the presence of meteorological clouds does not always negatively affect the image processing since, if they are sufficiently spatially separated from the plume or there is good contrast, they can easily be filtered from the final segmented image (Figure 6a,c). Although cloudy conditions represent a limit for the application of the developed algorithm, since meteorological clouds and volcanic degassing are common, 26 out of 49 explosions were recorded in sufficiently good conditions to obtain a mask that almost perfectly overlies the plume.

Geometrical Calibration
To assess if the geometrical calibration is accurate and to verify the code, it is useful to compare a known a priori length with that measured in the image. Here, the comparison was made between the diameter of the crater and the observed basal width of the volcanic plumes. The measured maximum diameter of Sabancaya crater is 384 m [26] which is comparable with the plumes' basal widths as calculated by the algorithm, ranging from (324 ± 5) m to (387 ± 3) m. The considered values are chosen from four videos in which the plume rises almost vertically, as well as for the larger eruptions in the dataset, to ensure that the tephra plume is filling the crater's inner area. From this agreement, we can conclude that the geometrical calibration works well.

Extracted Parameters
Once an accurate plume shape reconstruction is obtained, PlumeTraP is then able to successfully and automatically parametrize the tephra plume through the application of Equations (9)-(14). As described above, the calculated basal plume widths are comparable to the crater diameter at Sabancaya, suggesting that measurements of plume widths are accurate. Furthermore, the other calculated parameters extracted from the Sabancaya dataset are consistent with already-published data as well. The resulting maximum heights are compatible with the heights of tephra plumes reported by the Instituto Geofísico del Perú (IGP) and Instituto Geológico, Minero y Metalurgico (InGeMMet) [30][31][32], with plumes rising up to at least 3 km above the vent (Figure 7). Additionally, the heights reached by the plumes within the first 30 s, along with the vertical rise velocities, are consistent with other observed Vulcanian eruptions around the world [21,22]. Examples of the obtained results are presented in Figure 7.

Discussion
As already stated, the plume is well tracked in good weather conditions, in which a strong difference in pixel intensity between the plume and the background is common. With very cloudy conditions, this difference in intensity is lower such that the output produced by PlumeTraP is reduced in quality or even unusable. Further work to improve the algorithm, such that image segmentation under bad background conditions functions better, would be a significant enhancement that could be implemented in the future, potentially through further exploration of image processing techniques. Another developable option, which may also allow fuller automation of the processing, could be to use a supervised machine learning method to select and isolate grayish plume pixels (e.g., [34]). Regardless, it is important to emphasize that the presence of atmospheric clouds does not always have a negative influence on the image processing, provided they are away from the plume and do not interact with it (e.g., Figure 6a). Concurrently, volcanic degassing can be automatically isolated if its luminance value in the image is different from that of the plume (e.g., Figure 6c).
The resulting calculated parameters for the tested Sabancaya explosions are consistent with already-published data. In particular, the heights of the plumes reach values up to 2.5 km above the vent, that are comparable with the 3 km maximum plume heights reported by the Instituto Geofísico del Perú (IGP) and Instituto Geológico, Minero y Metalurgico (InGeMMet) [30][31][32] for the corresponding period. Moreover, the height values, as well the vertical rise velocities, are coherent with other observed Vulcanian eruptions around the world [21,22]. Additionally, the plume widths were considered correct because of the similarity of the calculated basal plume widths to the crater diameter at Sabancaya.
The capability of PlumeTraP to identify plumes in visible-wavelength imagery was also compared to that of the Plume Ascent Tracker software [18] (Figure 8), an open-source MATLAB-based algorithm using graphical user interfaces (GUIs), thus providing a user-friendly environment for setting the video analysis parameters and choosing the outputs. However, the methodology used in Plume Ascent Tracker is optimized for the analysis of thermal-wavelength videos and, thus, its use for analyzing videos in the visible range of wavelengths presents some issues.
We have found that Plume Ascent Tracker works well at detecting a well-contrasted plume in a non-moving, clear background and uniformly lit environment [15]. In all other cases, results obtained with Plume Ascent Tracker appear to be affected by two issues. Firstly, it is not possible to select both lighter and darker parts of the plume through the thresholds used to segment the images, meaning the obtained masks frequently miss parts of the plume. Secondly, the plume shape is also reconstructed by using separated polygons, meaning that, sometimes, the recognized plume pixels do not belong to the volcanic cloud, but instead to atmospheric clouds or even to the sky. These can lead to point scattering and to consistent underestimations of the calculated physical parameters. The first of the two issues was manually corrected by smoothing the results in the presented plots (Figure 8) to show a more realistic comparison between the two pieces of software. It is clear that the thresholds that can be set through the user interface are insufficient for analysis of visible-wavelength images. In the presented examples (Figure 8), the image analysis parameters in Plume Ascent Tracker were set to obtain a plume height that was as reliable as possible. Therefore, the plots show similar plume heights and vertical rise velocities to those calculated with PlumeTraP, but the plume widths are extremely underestimated. Plume Ascent Tracker was also useful to confirm the accuracy of our geometric calibration.
A further complication concerning the calculation of the parameters that are calculated by PlumeTraP is that the current algorithm does not account for the effect of wind on plume rise. This is because wind-bent plumes will follow a trajectory determined by the wind direction, and thus may not be confined to the image plane. The direct consequence is that a weak plume may be bent towards or away from the camera recording the event and, therefore, the resulting heights and widths would be over-or underestimated, respectively. Consequently, the plume heights and widths calculated by PlumeTraP can more accurately be described as corresponding to the projection of the plume onto the image plane. Thus, care should be taken to ensure that the current version of PlumeTraP is only applied to strong volcanic plumes, i.e., those barely affected by wind. Work to include the effect of wind direction in PlumeTraP is ongoing [35].
We also tested PlumeTraP with two different computers to get information on the speed of the operations. From the presented results (Table 1), we conclude that the PlumeTraP workflow is relatively rapid even using a low-performance processor (Computer A from Table 1), with a minimum total speed for frame-dependent processes of almost 0.2 fps whilst, excluding the binarization settings, almost 17 min are required to analyze a 3 min-duration explosion. These values become almost 0.5 fps and 6.5 min with a higher performance computer (Computer B from Table 1). These results highlight the possible use of PlumeTraP for near real-time volcanic plume analysis, whilst even faster processing speeds could potentially be attained by using higher performance processors or by automating the threshold luminance value selection to obtain a fully automated algorithm.
Application of PlumeTraP to near real-time monitoring may reveal even more dynamical information. This could potentially be done on-line within the software by adding new sections to the main script which apply different models (e.g., [36]) or equations (e.g., [18,21,22,37]) that could enable important parameters of volcanic plumes, such as the erupted mass [38], volume, solid fraction or temperature [22], to be inferred. Essentially, this algorithm represents an example of how the remote analysis of volcanic plumes can be an essential tool for understanding their dynamics and related hazards. Therefore, given the increasing availability and development of remote-imaging instruments [13], automated plume detection can play an important role in the monitoring and research of volcanic plumes.
Although this algorithm has not been validated for the analysis of thermal videos, it is expected to perform at least as well on these as on high-resolution visible videos. Indeed, since thermal imagery uses only one channel and the infrared signal of volcanic plumes is normally much stronger than that of background clouds, it should be easier to isolate the plume-containing pixels. The only parts of the algorithm that would not be required are the channel splitting and the subsequent subtraction between channels. Moreover, potentially useful future work would be to develop software capable of successfully analyzing both visible and infrared videos.
Although this work is mainly focused on volcanic tephra plumes, we stress that PlumeTraP could potentially be applied to any plume-shaped object, either natural (e.g., hydrothermal black smokers, solar flares, wildfires) or anthropogenic (e.g., laboratory experiments, smoke plumes emitted by factory chimneys or by fires).
When applying PlumeTraP, some caveats should be considered. In particular, it is difficult to apply PlumeTraP in cases of:
• poor contrast between the plume-shaped object and the background;
• dark light conditions and/or a cloudy background;
• the presence of meteorological clouds inside the ROI;
• abundant gas emissions from the vent that form dense degassing clouds in the image background (only for the case of volcanic tephra plumes).
We would like to stress that these caveats are a direct consequence of the intrinsic limitations of visible-wavelength cameras. Therefore, they are inevitably reflected in remote sensing tools exploiting this type of sensor, which is the case for PlumeTraP.

Conclusions
We have developed a MATLAB-based software package, PlumeTraP, which enables the parametrization of the evolution of volcanic plumes through the analysis of high-resolution visible-wavelength videos. We have tested and demonstrated the use of the software by analyzing Vulcanian-style eruptive plumes at Sabancaya volcano, Peru, obtaining results in terms of their morphology and rise velocity. The key image analysis algorithm, developed to identify plume-containing pixels, is new and could be of significant use to the wider volcanological community, mainly for research purposes but, in the future, with adequate and targeted adjustments, also for volcano monitoring.
Despite the lower cost and the generally higher resolution of the final output of visible-wavelength cameras [25] compared to their infrared counterparts, and despite infrared cameras being more rarely available in the monitoring networks of volcano observatories, quantitative analysis of visible images of volcanic plumes has almost always been overlooked because of the difficulty of obtaining accurate plume tracking. Our study confirms that this type of image processing is possible, especially in good weather conditions, and could be useful for both volcano research and monitoring.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Data Availability Statement: The data presented in this study are obtained from visible-wavelength videos that are resampled at 1 fps and openly available as Supplementary Materials. The PlumeTraP software and future releases (including a planned new version with the wind correction) will be available on the website github.com/r-simionato/PlumeTraP (DOI: 10.5281/zenodo.6406009).

Acknowledgments: The authors thank Rigoberto Aguilar and Nelida Manrique Llerena of INGEMMET, as well as Allan Fries, for their help in obtaining the videos used to test the software. We thank Andrea Marzoli and Lara Maritan of the University of Padova for supporting the project and helping with the SEMP exchange procedures, respectively. We also thank the anonymous reviewers for their comments, which helped us to improve the manuscript, and C. Bignami for handling the manuscript.

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. How to Set Up PlumeTraP
PlumeTraP was developed using MATLAB R2018b and the Image Processing Toolbox 10.3, but it was also tested with the R2021b release and the Image Processing Toolbox 11.4, and found to be fully operational. The operating systems used to develop and verify its functioning are Windows 10 Pro 64-bit (OS build 19044), Mac OS X El Capitan 10.11.6 and Ubuntu 20.04.3 LTS 64-bit. PlumeTraP was tested with videos in the MPEG-4 file format (.mp4); for other supported video file formats, please check the specific MATLAB support webpage [39].
Two versions of PlumeTraP were created: one that semi-automates the process (PlumeTraP_AUT.m) and another with a more user-friendly purpose (PlumeTraP_INT.m). Here, the general workflow of PlumeTraP is presented. For specific functioning or setup, the user may refer to the MATLAB scripts, which are appropriately documented.

Appendix A.1. Semi-Automatic Version
PlumeTraP_AUT.m must be set up in the Edit input files section of the script (Figure A1). The data to be analyzed are selected from an input folder, where one or more videos can be saved; thus, it is possible to analyze multiple videos in a single execution of the script. Once the main output folder and the output image format have been entered, the user can decide which processing steps to perform, including frame saving, frame processing and parameter calculation. For example, once the frames are saved, it is not necessary to repeat this process to re-run other parts of the algorithm. Frames are extracted using a parameter that expresses how many frames per second the user wants to save.
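The frame-extraction parameter can be sketched as follows (an illustrative Python snippet, not the actual MATLAB code; the function name is hypothetical): saving save_fps frames per second from a video recorded at video_fps amounts to keeping frames spaced by video_fps/save_fps.

```python
def frames_to_save(n_frames, video_fps, save_fps):
    """Indices of the frames to extract when saving `save_fps`
    frames per second from a video recorded at `video_fps`."""
    step = video_fps / save_fps
    return [round(i * step) for i in range(int(n_frames / step))]

# A 90-frame video recorded at 30 fps, resampled at 1 fps,
# keeps every 30th frame.
frames_to_save(90, 30, 1)  # -> [0, 30, 60]
```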
To apply the geometrical calibration and calculate the parameters of the plume, some a priori parameters must be provided. This is achieved through an input .txt or .csv file with columns that follow a specific order and format: name of the video, minimum camera-image plane distance Y_near, maximum camera-image plane distance Y_far, horizontal FOV β_h, vertical FOV β_v and inclination of the camera φ. The camera-image plane distances are calculated along the camera orientation and do not follow the topography. Specifically, these are the distances from the camera to the lines perpendicular to the camera orientation that intersect the nearest and the farthest extent of the crater, respectively. Thus, the camera position and orientation must be known (Figure A2). These calculations can be done with any available geographic information system (GIS) software or simply with Google Earth. If the camera-vent distance is known exactly, the two values should be equal.
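Under the simplifying assumption of a flat image plane, the role of Y_near and Y_far in the horizontal calibration can be sketched as follows (illustrative Python with hypothetical names; PlumeTraP's actual MATLAB implementation may differ in detail). Each pixel column is assigned a metric position referenced to the leftmost pixel, and the spread between the positions obtained with Y_near and Y_far gives the horizontal error:

```python
import math

def horizontal_calibration(n_pix, beta_h_deg, y_near, y_far):
    """Horizontal position (m) of each pixel column, referenced to the
    leftmost pixel, plus the error derived from the Y_near/Y_far
    spread. A simplified flat-image-plane sketch, not PlumeTraP's
    exact MATLAB implementation."""
    half = math.radians(beta_h_deg) / 2.0
    y_mean = (y_near + y_far) / 2.0
    positions, errors = [], []
    for i in range(n_pix):
        # Viewing angle of pixel column i, from -FOV/2 to +FOV/2.
        theta = -half + math.radians(beta_h_deg) * i / (n_pix - 1)
        # Offset from the leftmost column, per unit distance.
        offset = math.tan(theta) + math.tan(half)
        positions.append(y_mean * offset)
        errors.append(abs(y_far - y_near) * offset / 2.0)
    return positions, errors

# Example: 3 pixel columns, 60-degree horizontal FOV, image plane
# between 4.9 and 5.1 km from the camera.
pos, err = horizontal_calibration(3, 60.0, 4900.0, 5100.0)
```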
It is also possible to skip the geometrical calibration if the user has already done it manually, simply by saving two .txt or .csv files. The first must be a 2-by-N_h file containing the horizontally calibrated position of each of the N_h pixels in the first row and the related horizontal error in the second row. The other must be a 2-by-N_v file with the vertically calibrated position of each of the N_v pixels and the related vertical error. Calibrated positions must be referenced to the leftmost pixel for the horizontal calibration and to the bottom pixel (or to the pixel pointing towards an inclined camera, if this is the case) for the vertical calibration, and both must be entered into the files in ascending order. If the original video is not available and the user wants to analyze already saved or processed frames, it is sufficient to comment out the Start the process line and the Reading video section, and to define a name (e.g., name = 'Explosion_1') and an extension (e.g., ext = '.mp4') for a hypothetical video. The variable name must correspond to the first part of the name of the folder where the frames are saved (i.e., Explosion_1_Frames or Explosion_1_Processed; see Appendix A.3 for the folder structure). Frames inside this folder should be named in numerical order and with the same number of decimals.
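For reference, a 2-by-N calibration file of the kind described above could be generated as follows (an illustrative Python sketch using the standard csv module; the filename and values are hypothetical):

```python
import csv

def write_calibration(path, positions, errors):
    """Save a 2-by-N calibration file: calibrated pixel positions on
    the first row, the associated errors on the second. Positions are
    checked to be in ascending order, as PlumeTraP expects."""
    if sorted(positions) != list(positions):
        raise ValueError("calibrated positions must be in ascending order")
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(positions)
        writer.writerow(errors)

# Hypothetical 3-pixel horizontal calibration (positions and errors in m).
write_calibration("horizontal_calibration.csv",
                  [0.0, 2.5, 5.0], [0.1, 0.1, 0.1])
```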
If running this version of PlumeTraP, the only further action required from the user is to set up the image analysis technique, which is explained in Appendix A.3.

Both PlumeTraP main scripts are based on the same functions; thus, they have the procedures described in this section in common. They also share the same folder structure (Figure A4), with two subfolders being saved inside the video output folder: one for the original frames and another for the processed frames (ending with _Frames and _Processed, respectively). The video output folder is a subfolder of the main output folder set by the user and is also where the final output files with the physical plume parameters are saved. Once the frames from the video are saved, the frame_processing.m function is called, and the following pre-processing procedure is applied to perform the image analysis:

1. Before starting the processing, a dialog box asks the user to input the threshold luminance value used to create a binary image. The threshold value cannot be fixed, as it reflects the luminance conditions of the video; therefore, it has to be specified as a numerical scalar or numerical array with values in the range between 0 and 1, although, generally, values ranging from 0.05 to 0.2 should be used. A different value can also be set for the first image than for the other ones (helpful to obtain a clean background-isolated image).
2. The image analysis procedure (described in Section 3.1) is applied by the image_analysis.m function to the last frame to show a preliminary result of the analysis (Figure A5). The algorithm recognizes the plume shape in the last frame by keeping only the biggest object with a value equal to 1 in the binary image. At this point, a dialog box asks whether the selected parameters isolate the plume sufficiently well. If this is not the case, it is possible to restart the pre-processing and set new thresholds, as this part of the script is inside a while loop that can be run until a satisfactory output, mainly in terms of segmentation, is obtained.
3. Once the plume seems to be well isolated from the background in the last frame, a rectangular region of interest (ROI) is drawn automatically around the plume to create a mask containing the supposed plume area. If the ROI does not correspond to or incorporate the plume well (e.g., because clouds are recognized as the biggest object), it can be drawn manually (Figure A5; zooming in is highly recommended) simply by responding to the appropriate dialog box. Then, the next dialog box asks whether the user is satisfied with the drawn ROI or wants to draw it again.
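The binarization and biggest-object selection in steps 1 and 2 can be sketched as follows (an illustrative pure-Python version of the technique, not PlumeTraP's MATLAB code, which relies on the Image Processing Toolbox):

```python
def largest_object(image, threshold):
    """Binarize a grayscale image (values in [0, 1]) with `threshold`
    and keep only the largest 4-connected object. Illustrative
    pure-Python sketch of the technique."""
    rows, cols = len(image), len(image[0])
    binary = [[1 if v >= threshold else 0 for v in row] for row in image]
    seen = [[False] * cols for _ in range(rows)]
    best = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                # Flood-fill one connected object.
                stack, obj = [(r, c)], []
                seen[r][c] = True
                while stack:
                    i, j = stack.pop()
                    obj.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and binary[ni][nj] and not seen[ni][nj]:
                            seen[ni][nj] = True
                            stack.append((ni, nj))
                if len(obj) > len(best):
                    best = obj
    mask = [[0] * cols for _ in range(rows)]
    for i, j in best:
        mask[i][j] = 1
    return mask

# A 4-pixel plume blob and an isolated bright pixel (e.g., a cloud);
# only the blob survives.
img = [[0.0, 0.3, 0.3],
       [0.0, 0.3, 0.3],
       [0.3, 0.0, 0.0]]
mask = largest_object(img, 0.1)
```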

Appendix B. Evaluation of Uncertainties in the Calculated Parameters
As stated in the manuscript, other sources of uncertainty may derive from the camera properties and geometrical setting, e.g., the location, the camera inclination φ and the horizontal and vertical FOVs, β_h and β_v, respectively. These uncertainties depend on the data collection and may differ between users. Therefore, we quantify them here by providing some general examples based on the available dataset:

• The camera-image plane distance Y can be uncertain due to uncertainties in the vent and camera positions. Assuming the camera is GPS-located with a precision of ±20 m, we find uncertainties in the plume height (Figure A7a) and maximum width (Figure A8a) of ±0.1% for Y = 20 km and ±0.3% for Y = 5 km.
• A variation of ±1° in the camera inclination φ results in a shift in the plume height (Figure A7b) of ±0.6% for φ = 4° and ±1.5% for φ = 18°. The maximum width of the plume is not affected by this parameter.
• If the FOV of the camera is unknown, the easiest way to estimate the horizontal FOV β_h is by matching the geographical locations of features in the FOV with their pixel positions. The vertical FOV β_v can then be calculated using the image aspect ratio. We find that an error of ±1° in the estimate of β_h results in an error of ±(0.5-0.6)° in β_v. This uncertainty propagates through to the calculation of the plume height (Figure A7c) and maximum width (Figure A8b) and is found to be ±1.5% for an FOV of 69.3° × 39.0° and ±7% for an FOV of 14.7° × 8.3°. Therefore, the FOV is the parameter that can cause the greatest error in the calculated main parameters of the plume.
Clearly, these uncertainties propagate through to the plume parameters that are calculated from the height or maximum width, e.g., the ascent velocity and acceleration of the top of the plume.
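As an illustration of how the inclination uncertainty propagates, consider the simplified geometry H = Y [tan(φ + θ) - tan(φ)], where θ is the angular height of the plume top above the camera axis (an assumption made only for this sketch; the full PlumeTraP calibration is more detailed). With an assumed θ = 10°, this toy model yields shifts of the same order as those quoted above:

```python
import math

def height_shift(y, phi_deg, theta_deg, dphi_deg=1.0):
    """Relative shift in plume height H = Y*(tan(phi+theta) - tan(phi))
    caused by a +dphi error in the camera inclination phi. A simplified
    sketch of the error propagation, not the full PlumeTraP geometry;
    theta is an assumed plume-top angular elevation."""
    def height(phi):
        return y * (math.tan(math.radians(phi + theta_deg))
                    - math.tan(math.radians(phi)))
    h = height(phi_deg)
    return abs(height(phi_deg + dphi_deg) - h) / h

# Relative height shifts for the two inclinations discussed above,
# assuming theta = 10 degrees: close to 0.6% at phi = 4 degrees and
# to 1.5% at phi = 18 degrees.
shift_low = height_shift(5000.0, 4.0, 10.0)
shift_high = height_shift(5000.0, 18.0, 10.0)
```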