Resolving gas bubbles ascending in liquid metal from low-SNR neutron radiography images

We demonstrate a new image processing methodology for resolving gas bubbles travelling through liquid metal from dynamic neutron radiography images with intrinsically low signal-to-noise ratio. Image pre-processing, denoising and bubble segmentation are described in detail, with practical recommendations. Experimental validation is presented - stationary and moving reference bodies with neutron-transparent cavities are radiographed with imaging conditions similar to the cases with bubbles in liquid metal. The new methods are applied to our experimental data from previous and recent imaging campaigns, and the performance of the methods proposed in this paper is compared against our previously developed methods. Significant improvements are observed as well as the capacity to reliably extract physically meaningful information from measurements performed under highly adverse imaging conditions. The showcased image processing solution and separate elements thereof are readily extendable beyond the present application, and have been made open-source.


Introduction
Gas bubble flow in liquid metal is encountered in a variety of industrial processes. Examples include liquid metal stirring, purification and continuous casting in metallurgy, liquid metal-based chemical reactors, and more. Controlling the output of these processes is essential and it was proposed that this can be (and in some cases already is) achieved via applied magnetic field (MF) [1][2][3][4][5][6][7][8].
Through recent efforts and the advent of dynamic x-ray and neutron radiography of two-phase liquid metal flow [37][38][39][40][41][42][43], fundamental investigation of bubble chain systems mimicking industrially relevant flow conditions is underway [1,8,15,29,30,37,[44][45][46][47]. In bubble chain flow, bubbles are released into a liquid metal system one-by-one with a uniform time delay between each, at a certain gas flow rate, and ascend to the free surface of liquid metal. Such systems are usually rectangular vessels filled with gallium [1,8,44] or a eutectic gallium-indium-tin alloy [29,30,37,45,47] where bubbles are introduced via horizontal [8,44] or vertical [1,29,30,45,47] nozzles at the bottom of the vessel, or top-submerged vertical [37] nozzles. Bubble chain flow systems are the next logical step from single-bubble flow investigations, since single-bubble flow, while very informative of the bubble wake flow dynamics and characteristic trajectories without and with applied MF, is not representative of the actual flow conditions typical for the above-mentioned industrial processes where one has columns and swarms with a high number density of deformable bubbles [48][49][50][51].
Therefore, these systems are a crucial milestone in a transition from studying single-bubble flow to investigations of many-bubble systems that are very close to their actual industrial counterparts. However, despite the relative simplicity, dynamics exhibited by bubble chain flow in liquid metal without or with applied MF are still very complex. Depending on the gas flow rate, bubbles produce unstable elongated wake flow regions where periodic vortex detachment occurs and turbulent pulsations are generated - shed vortices and turbulent wakes of leading bubbles strongly affect the trailing bubbles, leading to bubble pair coupling across the ascending chain [8,15,44,46,[52][53][54][55]. There exists a feedback loop involving combined perturbations of bubble shapes within the bubble chain, the surrounding liquid metal flow, and the influence of the free surface at the top of the metal vessel, with instabilities and oscillations in the bubble chain shape [8,15,44,46,55]. Recently, dynamic mode decomposition (DMD) has been applied to the output of the MHD bubble chain flow simulation to study both large-scale flow structures and bubble wake flow in the bubble reference frame [56]. It was demonstrated that DMD is a viable tool for an in-depth analysis of the complex dynamics mentioned above, and it was shown that there exists a very complex interplay of bubble wake flow and large-scale flow modes with a wide range of spatial and temporal scales.
It has become clear that specialized and rather advanced image processing methods and tools are required to extract physically meaningful data from data sets acquired via dynamic neutron and/or X-ray imaging [1,8,44]. This is mainly due to the low signal-to-noise ratio (SNR) associated with imaging thick (> 20-30 mm) layers of liquid metal at frame rates of the order of 100 frames per second (FPS) and the need to resolve many often closely packed interacting objects. High frame rates are a requirement to enable capturing fast bubbles, drops and particles flowing in liquid metal and to avoid motion blur [1,8,44]. Meanwhile, the neutron flux that can be used in experiments is limited by both the utilized neutron source and the rapid activation of model liquid metals such as gallium.
Here we describe and demonstrate our new image processing methodology developed over the course of our dynamic neutron imaging experiments with bubble flow in liquid metal. We show that the implemented code is robust and can operate reliably at very low SNR in presence of image artefacts. It is also shown that this version clearly outperforms our previous solution by resolving/avoiding known issues. In addition to demonstrating performance for dynamic neutron imaging data sets from our measurements with a rectangular gallium vessel with bubble chain flow, we have also performed direct experimental validation of the code by imaging a reference spherical body, both stationary and in motion, and have quantified the shape detection errors.
Bubble flow imaging with and without applied magnetic field was conducted at the thermal neutron imaging beamline NEUTRA (SINQ, PSI, 20 mm aperture, 10⁷ n·cm⁻²·s⁻¹·mA⁻¹ flux) at the Paul Scherrer Institute (PSI) using a medium spatial resolution set-up (MIDI) [57]. Experiments were performed with both horizontally and vertically oriented inlet nozzles. The setup with the horizontal inlet is described in [8,44], whereas a modified version of that model gallium/argon system was designed for the new experiments with the vertical inlet - the latter will be described in a follow-up publication. A thin-walled (4 mm) rectangular 150 mm × 90 mm (95 mm) × 30 mm (interior dimensions) glass (boron-free) vessel filled with liquid gallium up to the 130- to 140-mm mark was used. Neutron flux was parallel to the 30-mm dimension of the vessel. A square field of view (FOV, 112.8- or 123.125-mm side) above the gas inlet was imaged at 100 FPS. The distance between the liquid metal layer and the scintillator varied depending on the setup (magnetic field system used, if any) and was within [4; 32] mm. A sCMOS ORCA Flash 4.0 camera was used to collect light from the scintillator (200 µm thick ⁶LiF/ZnS screen).
Reference experiments were performed at the cold neutron beamline ICON (SINQ, PSI, 20 mm aperture, ∼ 1.3× NEUTRA flux) to validate the developed image processing methodology [58]. A 20 mm × 20 mm × 30 mm rectangular brass reference body (stationary and moving) with a central spherical cavity (5 mm radius) was imaged at 100 FPS within a 120.06 mm square FOV (a sCMOS ORCA Flash 4.0 V2 camera with a 200 µm ⁶LiF/ZnS scintillator) to reproduce the imaging conditions for argon bubbles in liquid gallium. The reference body was imaged with neutron flux directed along its shorter or longer axes. In addition, the distance between the scintillator and the body was either 0 mm (static body) or 2 mm (moving body), in some runs increased by an extra 1 cm. These adjustments to the test conditions were meant to determine how the reference shape acquisition error depends on the SNR, which these conditions modify.
For each image sequence recorded at NEUTRA and ICON, camera dark current and neutron beam profile signals were recorded for use in subsequent image normalization during pre-processing.

Image properties
The acquired images are 16-bit 1-channel TIFFs with a 1024 × 1024 pixel resolution (2 × 2 average-binned 2048 × 2048 frames) with a 0.11-0.12 mm pixel size (0.12 mm for the reference experiments). Short exposure times (10 ms) result in strong Poisson (multiplicative) noise from neutrons and converted photons, and salt-and-pepper noise of varying density is present due to overexposed (gamma ray noise) or "dead" camera pixels. The neutron beam flux over the FOV is non-uniform with a fall-off near the edges of the acquired images. Figure 1a is an example of an acquired raw image. Here the gas flow rate was 120 sccm (standard cubic centimeters per minute) and a static vertical magnetic field of 125 mT was applied to the bubble chain region. Typically one can see the following captured in a full FOV image: the liquid gallium volume within the glass container (delimited by the interior orange lines and the light blue line), the container walls (orange lines), the free surface of liquid metal (the light blue line), the surrounding air (regions outside the walls and above the metal free surface) and the neutron flux shielding (the red line) for the magnetic field system. However, not all of the FOV is of interest - Figure 1b shows the FOV cropped to the liquid metal volume in false color after pre-processing (Section 3.2). Note also the bubble regions highlighted in Figures 1a and b - these are the objects of interest that must be segmented and their properties such as centroids, projection areas, tilt angles, aspect ratios, etc. measured. Note that the same color scheme and normalized luminance scale as in Figure 1b are used in all other figures beyond this point unless stated otherwise.
To illustrate the image processing challenge, Figures 2 and 3 show examples of image noise and neutron flux transmission signal for bubbles after pre-processing (Section 3.2). Figure 2 shows a typical luminance distribution about a bubble region: note that here moving averages (window width in the scan direction is equal to one pixel) over (a) horizontal and (b) vertical image patches are shown, since plots over any given single pixel line would be unintelligible. Estimates from a control set of bubble regions for different frames indicate many of the images have SNR ∼ 1.5-1.6, and there are cases with SNR as low as ∼ 1.2. Similar plots for (a) horizontal and (b) vertical patches for the entire width/length of the cropped FOV are shown in Figure 3.
While visible in (a), it is especially evident in (b) that despite the compensation for neutron flux non-uniformity over the FOV, the background "mean" is still not uniform/flat. One also has rather densely packed noise peaks with luminance values comparable to the values associated with the bubbles.
Figure 1: (a) Original captured FOV after outlier removal and luminance normalization with marked container walls (orange dashed lines), the mean gallium free surface level (light blue), the neutron flux shielding and bubble locations within the FOV (white). Note the scale bar in the bottom-left corner. (b) FOV in false color (color bar on the right) after cropping to the container walls and the metal free surface, dark current and flat field corrections, and normalization.

Figure 2: Bubble neighborhood analysis: width mean of luminance (gray) over the length of (a) horizontal and (b) vertical patches roughly fitted to the bubble region within the FOV. Normalized luminance versus pixel coordinates is shown in both cases. Note the pixel-to-mm scale bar in (b). Bubble neighborhood patches shown in (a,b) are not to scale and are normalized as in Figure 1. Scan directions are indicated by the dashed blue arrows and the red curve is the total variation filtered (Gaussian, regularization parameter equal to 1 [59]) gray curve.

Such velocities and dynamic phase boundaries [44] dictate the high acquisition frame rate required for physical analysis. 100 FPS was considered the optimal trade-off frame rate for the experiments covered herein: one already avoids significant motion blur about the bubbles while still maintaining a manageable image SNR.
Bubbles ascend in vertical chains from the bottom of the gallium vessel to the free surface where they exit the liquid metal volume. Bubbles generally exhibit non-uniformly accelerated motion depending on the applied magnetic field and flow rate. Bubble trajectories are mostly planar zigzags (plane parallel to the largest container face) with slight out-of-plane perturbations that intensify with gas flow rate. Bubble collisions, coalescence or breakup do not occur for the image sequences considered in this paper; instead, chains with closely packed bubbles form at higher flow rates. In the future, the methodology developed herein will also be applied to even higher flow rate cases where bubble collisions do take place.
3 Image Processing

Assumptions & considerations for algorithm development
The results obtained via the previous version of our image processing code [8,44] determined the objectives that must be met here:
1. Improved bubble edge detection stability - more reliable detection, fewer artefacts and false positives
2. Greater bubble shape detection precision
3. Increased bubble detection rates at the bottom of the FOV
The following assumptions regarding the bubbles are in effect:
• Bubbles have perturbed elliptic/circular shapes, i.e. they are not necessarily convex.
• Bubble phase boundaries do not exhibit local curvature radii less than a pre-defined fraction of their mean curvature radius, i.e. smoothness is assumed below a certain length scale due to surface tension.
• No coalescence or breakup of bubbles is expected at the flow rates < 500 sccm considered herein.
In addition, the development of the methodology outlined herein was subject to the following considerations:
• Given the image properties/quality as described in Section 2.2, we do not attempt to perform filtering & segmentation such that the argon/gallium volume fraction can be recovered, since low SNR will translate to significant errors in the output if both volume fraction field and shape recovery are attempted.
• Since the 100 FPS frame rate is the lower limit for prevention of considerable motion blur, we do not employ methods that perform spatio-temporal noise filtering - only spatial denoising is used.
• Furthermore, given the above and to make our methods more general, we treat image sequence frames separately at all stages of image processing. As such, it is possible to implement parallelization on the frame level with multi-threading to speed up the processing.
To achieve the above goals, it was decided to separate the bubble segmentation into two stages: global and local filtering/segmentation. That is, one starts by obtaining first estimates for bubble segments within images (the entire cropped FOV), and then uses a separate routine for local filtering/segmentation about the preliminary segments to improve shape detection precision at local scales (i.e. lower-wavelength corrections to first shape estimates) and to resolve false positives.
The global filtering/segmentation routine, which is essentially a completely overhauled version of our previous image processing approach [8,44], was designed and adjusted to maximize bubble detection rates at the cost of reduced shape detection precision and higher false positive rates. Further, to improve edge detection stability, we have opted for implicit edge detection. Global noise filtering is performed in multiple stages, each targeting a certain noise type and/or wavelength range.
As for the local filtering/segmentation routine, a recursive multi-scale analysis algorithm was implemented that is shown to perform well even for images with especially low SNR and recovers bubble shapes in cases where the first segment estimation outright fails to capture initial bubble shapes within an acceptable margin of error. The overall structure of the proposed image processing solution is outlined in Algorithm 1.
Algorithm 1: Overall structure of the new image processing pipeline
Input: Raw image sequence
1 Pre-process images (Algorithm 2)
2 Perform global filtering for all images (Algorithm 3)
3 Process images with the multi-scale recursive interrogation filter (MRIF, Algorithm 4)
4 Apply the luminance map-based false positive filter to the MRIF output (Algorithm 6)
5 Apply logical filters
Output: Centroids and shape parameters for bubbles detected in each image

Pre-processing
Image pre-processing is performed in ImageJ as shown in Algorithm 2.
Algorithm 2: Pre-processing for raw images
Input: Raw image sequence: 16-bit 1-channel TIFFs, 1024 × 1024 pixels
1 Crop the FOV to the liquid metal domain (Figure 1)
2 Compensate the images for camera dark current (mean)
3 Compensate the flat field images for dark current (mean)
4 Perform flat field correction (32-bit precision)
5 Convert to 16-bit
6 Remove outlying bright luminance values (median thresholding)
7 Normalize the resulting images (pixel luminance re-scaled to [0; 1])
Output: Dark current and flat-field corrected normalized images

Images are cropped as indicated in Figures 1a and b. Dark current compensation is performed by subtracting the mean projection of 5K-10K recorded dark current noise images from all images in the raw image sequence (typically 3K-9K images). Then the dark current-compensated flat field is computed from the mean projection of 5K-10K neutron beam flux distribution images. Afterwards the dark current-compensated FOV images are normalized with respect to the flat field compensated for the dark current. These corrections for the cropped raw FOV images can be expressed as

I = (I_0 − I_dark) / (I_beam − I_dark),   (1)

where I are the luminance maps of the corrected images, I_0 are the cropped raw images, and I_dark and I_beam are the dark current and neutron beam flux images. This correction results in a spatial dependence of the SNR as a consequence of fewer neutrons being detected behind the sample compared to the open beam.
Afterwards, bright outliers are removed using median thresholding with a 1-to 2-pixel radius. The threshold is chosen such that outlier removal modifies only the pixels where luminance by far exceeds the local median (within the designated radius). Pixels with luminance above the threshold are then assigned the local median values. Finally, the images are normalized and saved, then passed to Wolfram Mathematica for further processing.
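The correction, outlier-removal and normalization steps above can be sketched in NumPy/SciPy (a minimal illustration only; the actual pipeline runs in ImageJ, and the `factor` threshold and window `radius` here are assumed values):

```python
import numpy as np
from scipy.ndimage import median_filter

def flat_field_correct(raw, dark_mean, beam_mean, eps=1e-6):
    """Dark-current and flat-field correction: I = (I0 - dark) / (beam - dark)."""
    return (raw - dark_mean) / np.maximum(beam_mean - dark_mean, eps)

def remove_bright_outliers(img, radius=2, factor=1.5):
    """Median thresholding: replace pixels far above the local median
    (within the given radius) with the local median value."""
    med = median_filter(img, size=2 * radius + 1)
    out = img.copy()
    mask = img > factor * med
    out[mask] = med[mask]
    return out

def normalize(img):
    """Rescale pixel luminance to [0; 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
```

The mean projections of the dark current and beam profile stacks would be passed as `dark_mean` and `beam_mean`.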

First segment estimates
First segment estimates are obtained using an algorithm referred to in this article as the global filter, which is outlined in Algorithm 3. Segment estimation is performed in three stages -noise filtering & background removal, implicit edge detection and segment filling & cleanup.
Algorithm 3: Global first segment estimator for pre-processed FOV images
Input: Normalized pre-processed images (Algorithm 2)
Noise filtering & background removal
1 Perona-Malik (PM) filter (2)
2 Total variation (TV) filtering, Poisson model (4)
3 Self-snakes curvature flow (SSCF) filter (5); memoize output
4 Soft color tone map masking (SCTMM) (7); memoize output
Implicit edge detection
5 Compute the luminance gradient using the gradient filter (Gaussian regularization + Bessel derivative kernel)
6 2-threshold hysteresis binarization
7 (Optional) Morphological erosion
8 Thinning transform
Segment filling & cleanup
9 Filling transform
10 Mean filtering (small radius)
11 Otsu binarization
12 Remove border components
Output:
• Image mask with first segment estimates
• Memoized SSCF-filtered image for later use in Algorithm 4
• Memoized SCTMM-filtered image for later use in Algorithm 6

Noise filtering is done as follows. First, Perona-Malik (PM) filtering is performed on an input image, whereby the following non-linear diffusion partial differential equation (PDE) is solved over the image luminance map I over virtual time t with Neumann boundary conditions (BCs) [61]:

∂I/∂t = ∇ · ( f(I, α) ∇I ),   (2)

where I = I(r, t), f(I, α) is the diffusion coefficient and α is its control parameter, and I_0(r) = I(r, 0) is the input image. f(I, α) prescribes anisotropic diffusion by restricting luminance diffusion across sharp edges. The number of PM iterations, α, and the gradient Gaussian regularization width σ that controls the sensitivity of f(I, α) to noise are chosen such that the PM process only acts on very small-scale noise structures - Equation (2) is edge-preserving - and it is mainly aimed at removing the remainder of the salt-and-pepper noise due to bright and dark outliers that survived pre-processing (Algorithm 2). Note that the gradient is regularized as

∇I → ∇(K_σ ∗ I),   (3)

where K_σ is a Gaussian kernel with standard deviation σ.
In the case of our images, we found it productive to set α to the 0.5 quantile of |∇I| in I_0(r), and to set σ = 1 and the number of PM iterations to 5 [62,63].
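A minimal NumPy sketch of one PM scheme under these settings follows (the actual processing is done in Wolfram Mathematica; this version uses one-sided neighbour differences, a classic rational diffusivity, and omits the Gaussian gradient regularization (3) for brevity):

```python
import numpy as np

def perona_malik(img, n_iter=5, alpha=None, dt=0.2):
    """Edge-preserving anisotropic diffusion (Perona-Malik sketch).

    Diffusivity f = 1 / (1 + (d / alpha)^2) suppresses diffusion across
    sharp edges. By default alpha is the 0.5 quantile (median) of the
    gradient magnitude of the input, as in the text.
    """
    I = img.astype(float).copy()
    if alpha is None:
        gy, gx = np.gradient(I)
        alpha = max(np.median(np.hypot(gx, gy)), 1e-8)
    for _ in range(n_iter):
        # one-sided differences to the four neighbours; edge padding
        # approximates the Neumann boundary conditions
        P = np.pad(I, 1, mode="edge")
        dN = P[:-2, 1:-1] - I
        dS = P[2:, 1:-1] - I
        dW = P[1:-1, :-2] - I
        dE = P[1:-1, 2:] - I
        cN = 1.0 / (1.0 + (dN / alpha) ** 2)
        cS = 1.0 / (1.0 + (dS / alpha) ** 2)
        cW = 1.0 / (1.0 + (dW / alpha) ** 2)
        cE = 1.0 / (1.0 + (dE / alpha) ** 2)
        I += dt * (cN * dN + cS * dS + cW * dW + cE * dE)
    return I
```

On a noisy step image this scheme smooths the flat regions while leaving the step largely intact.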
Next, the total variation (TV) filter is applied assuming Poisson noise (typical for low-SNR underexposed images such as ours) by solving the following PDE with Neumann BCs [64]:

∂I/∂t = ∇ · ( ∇I / |∇I| ) − β (1 − I_0 / I),   (4)

where the first term minimizes pixel value variation (noise filtering), the second term enforces input/output similarity, and the regularization parameter β determines the balance between the two. Solving (4) over virtual time t asymptotically transforms the input image into a stationary (with respect to t) filtered image. The TV filter is set up such that it eliminates noise structures with length scales up to a fraction of the bubble length scale while avoiding overly distorting the luminance map. In our case, we find that setting β ∈ [0.8, 1] and limiting the number of TV iterations to 100 [65] yields better results.
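For readers who want a quick off-the-shelf approximation, scikit-image ships a Chambolle TV denoiser. Note that it assumes a Gaussian rather than Poisson noise model, so it is only a stand-in for the filter described above, and the `weight` value here is illustrative:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[24:40, 24:40] = 1.0  # a bright bubble-like patch
noisy = np.clip(clean + 0.3 * rng.standard_normal(clean.shape), 0, None)

# `weight` plays a role analogous to the regularization balance beta:
# larger values yield stronger smoothing at the cost of fidelity
denoised = denoise_tv_chambolle(noisy, weight=0.2)
```

The denoised image retains the patch while flattening small-scale noise structures.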
The third stage is the application of the self-snakes curvature flow (SSCF) filter. Here the following PDE is solved with Neumann BCs [66]:

∂I/∂t = |∇I| ∇ · ( g(I, γ) ∇I / |∇I| ),   (5)

where g(I, γ) is the curvature diffusion coefficient and γ is its control parameter. Unlike the PM filter (2), SSCF diffuses local luminance curvature, not luminance itself. It is also edge-preserving. The number of SSCF iterations and γ are set such that the SSCF filter eliminates any remaining sharply localized luminance maxima about the bubbles, since there the luminance curvature is the greatest; at the same time, filtering must preserve bubble region contrast with respect to the background. For our images we find that in some cases one may set γ → ∞, which simplifies (5) to

∂I/∂t = |∇I| ∇ · ( ∇I / |∇I| ),   (6)

which is the mean curvature flow PDE. However, other cases require anisotropic curvature diffusion, and then we set γ to the 0.5 quantile of |∇I| in I_0(r), as with the PM filtering. The number of performed SSCF iterations is set to 7 [67].
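The γ → ∞ limit (6) can be illustrated with a simple explicit finite-difference scheme (an illustrative sketch, not the production implementation; the time step `dt` and the `eps` regularizer are assumptions):

```python
import numpy as np

def mean_curvature_flow(img, n_iter=7, dt=0.1, eps=1e-8):
    """Explicit scheme for the mean curvature flow PDE,
    dI/dt = |grad I| * div(grad I / |grad I|),
    i.e. the gamma -> infinity limit of the self-snakes filter.
    The right-hand side equals
    (Ixx*Iy^2 - 2*Ix*Iy*Ixy + Iyy*Ix^2) / (Ix^2 + Iy^2)."""
    I = img.astype(float).copy()
    for _ in range(n_iter):
        Iy, Ix = np.gradient(I)          # axis 0 = y, axis 1 = x
        Ixy, Ixx = np.gradient(Ix)
        Iyy, _ = np.gradient(Iy)
        num = Ixx * Iy**2 - 2.0 * Ix * Iy * Ixy + Iyy * Ix**2
        # np.gradient replicates edge values, approximating Neumann BCs
        I += dt * num / (Ix**2 + Iy**2 + eps)
    return I
```

Applied to a noisy image, a few iterations visibly flatten small-scale luminance curvature without a global blur.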
The next stage is the soft color tone map (CTM) masking (SCTMM), which is a non-linear filter designed to clean up the image background by removing large-scale artefacts left over after denoising, and to further separate the background from bubbles while avoiding excessive erosion of the bubble regions. The large-scale structures in the background were actually one of the sources of the edge detection instability in the previous approach, especially for low-CNR images with higher bubble number density, where bubble detection was often outright impossible due to formed edge artefacts that could not be reliably removed.
Given a normalized original image x, the SCTMM background correction generates a new image y:

y = x ∗ (x − (1 − CTM(x, c))),   (7)

where CTM(x, c) is the CTM operation and c is the luminance compression factor. The CTM operation maps the colors (in this case the gray-scale values) of the image using gamma compression with a global compression factor c [68]. The idea behind SCTMM (7) is as follows. A pure x ∗ x product would have the effect of non-uniformly increasing the distances between the nearest luminance values (input x is normalized): luminance maxima and minima would become much more distant from the mid-range luminance values, which are affected the most. If one masks or lowers the values of certain pixels within one of the factors, x′, then x ∗ x′ acts as a "soft", i.e. weighed, mask (as opposed to a "hard" binary mask) that shifts the pixels with reduced values in x′ further towards the lower end of the luminance range, ideally making them background. Soft masking is preferred here because masking using binarization and then replacing the removed background using luminance interpolation or other methods will generally produce artificial and potentially very pronounced edges and/or reduce the contrast of actual edges.
It is required here that x′ is such that background and post-filtering artefacts are removed, and bubble contrast is enhanced while bubble features are eroded as little as possible. It was decided to opt for additive masking of the form x′(x) = x − mask. An inverted CTM(x, c) was chosen as the mask because, if the right c value is set, 1 − CTM(x, c) has high luminance for background and denoising artefacts, since CTM(x, c) reduces the difference in their luminance values. This way x′ = x − (1 − CTM(x, c)) has, conversely, greatly reduced luminance for artefacts and background. The resulting product (7) then has the desired properties and emphasizes the bubbles while reducing the impact of image artefacts. For the images considered here, c = 0.5 generated better results.
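A compact sketch of SCTMM, modelling the CTM operation as plain gamma compression CTM(x, c) = x^c (an assumption for illustration; the actual CTM operation [68] may differ in detail):

```python
import numpy as np

def sctmm(x, c=0.5):
    """Soft colour-tone-map masking: y = x * (x - (1 - CTM(x, c))),
    with CTM modelled as gamma compression x**c (illustrative assumption)."""
    ctm = np.clip(x, 0.0, 1.0) ** c
    x_masked = x - (1.0 - ctm)             # soft additive mask x' = x - mask
    y = x * np.clip(x_masked, 0.0, None)   # weighed ("soft") masking product
    return np.clip(y, 0.0, 1.0)
```

With c = 0.5, dim background pixels are pushed to zero while bright bubble-like pixels are largely preserved.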
Another major change is the substitution of explicit segmentation and edge detection with an implicit procedure. The SCTMM output is normalized and passed to the gradient filter (luminance gradient magnitude map of a Gaussian kernel convolution (3) for an image; the Bessel derivative kernel is used) which estimates bubble edge regions (halos) in the image. The halos are segmented using double-threshold hysteresis binarization (pixel corner connections enabled) [69] and then the edge estimates are obtained using the thinning transform [69] (exploiting the edge gradient symmetry).
Optionally, morphological erosion [70] can be applied to the halo segments before thinning, which is suggested if thinning outputs jagged edges. Finally, bubble shape masks are generated by applying the filling transform [69] followed by a small-radius mean filtering. The advantage of this procedure is that it is more stable and does not require edge/area repairs for bubble edges/masks.
Afterwards, a small-radius (2-3 pixels) mean filtering step followed by Otsu binarization [71] ensures that the edge dendrites occasionally left over at the filled bubble shape boundary are pruned - this is computationally much cheaper than morphological pruning, which would also generally require multiple iterations to converge to edges that are Jordan curves immune to the pruning transform. The gradient filter regularization kernel scale is set to roughly match the bubble scale (Section 2.3). The primary threshold for the hysteresis binarization of edge halos is by default 0.35, though it had to be reduced to 0.25 in some cases where the image quality was especially problematic. The secondary threshold is computed using the Otsu method. In our case we perform morphological erosion using disk structural elements with a 5-pixel radius. Finally, we remove segments that are in contact with image boundaries - this is because only bubbles that are fully within the FOV are of interest.
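The implicit edge-detection and cleanup stages map naturally onto standard SciPy/scikit-image primitives. The sketch below approximates Steps 5-12 of Algorithm 3 under stated assumptions: a Gaussian-gradient filter stands in for the Bessel-derivative kernel, `sigma` is an assumed bubble-scale parameter, and the secondary (lower) hysteresis threshold is taken from Otsu as in the text:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import apply_hysteresis_threshold, threshold_otsu
from skimage.morphology import thin
from skimage.segmentation import clear_border

def segment_bubbles(img, sigma=4.0, high=0.35):
    """Sketch of the implicit edge-detection stage: gradient halos ->
    2-threshold hysteresis binarization -> thinning -> filling ->
    small-radius mean filter + Otsu -> drop border-touching segments."""
    halo = ndi.gaussian_gradient_magnitude(img, sigma=sigma)
    halo = halo / (halo.max() + 1e-12)            # normalize halos to [0, 1]
    low = min(threshold_otsu(halo), high)         # secondary threshold (Otsu)
    edges = apply_hysteresis_threshold(halo, low, high)
    rings = thin(edges)                           # thin halos to 1-px edges
    filled = ndi.binary_fill_holes(rings)         # filling transform
    smooth = ndi.uniform_filter(filled.astype(float), size=5)  # mean filter
    mask = smooth > threshold_otsu(smooth)        # prune leftover dendrites
    return clear_border(mask)                     # remove border components
```

On a synthetic bright disk, the returned mask recovers a blob of roughly the disk's area.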

MRIF core
The filtering methods utilized in Algorithm 3 are rather aggressive. Testing revealed that, while bubble detection rates are indeed significantly higher than before and bubbles are detected everywhere within the FOV, this comes at the cost of decreased shape resolution precision and a higher false positive rate. The former is very important for a more in-depth analysis of the effects of varying flow rate and MF on the behavior of bubble chains. It was decided to fine-tune the global filter such that the bubble detection rates are maximized and good first estimates of bubble shapes/sizes are obtained, and to complement it with a routine that uses the first estimates to generate more precise bubble shapes and efficiently filter out false positives. To this end, we have developed an algorithm for iterative segmentation refinement - the multi-scale recursive interrogation filter (MRIF), outlined in Algorithm 4 and schematically illustrated in Figure 4.

Algorithm 4: Multi-scale recursive interrogation filter (MRIF)
Input:
• An image mask with first segment estimates (Algorithm 3, Step 12)
• The global SSCF filter output (Algorithm 3, Step 3)
1 Define square (side length L) interrogation windows (IWs) for segments based on their areas and centroids (8,9)
2 Define the FOV image as an IW with scale L based on FOV dimensions
3 For every initial segment and updated segments:
  Map the global SSCF filter output onto the segment IW
  Perform local filtering (Algorithm 5)
  if updated segments were found then
    Define new IWs (side length L′) for the updated segments based on their areas and centroids (8,9)
    Redefine preceding IW scales as L → L′
  else
    Break
  end
4 Memoize converged IWs for all segments
5 Map the centroid coordinates of the resulting segments from converged IWs onto the original image
6 Map the updated segment masks onto the original image and build an updated global mask
Output:
• Updated bubble shape masks for the FOV
• Converged IWs for later use in Algorithm 6

Figure 4: A schematic illustration of the MRIF algorithm.
The key idea is to define interrogation windows (IWs) about the initially detected bubbles to exclude irrelevant parts of the image and its intensity histogram from the analysis. This also helps to reduce the influence of any remaining artefacts left over from the global filtering. Especially for images with lower SNR and with closely packed bubbles, it may be the case that the initial segmentation is poor, i.e. two or more bubbles have been segmented as one due to the surrounding artefacts, or a bubble was connected to large-scale artefact structures, forming a large segment that obscures the true object. This means that an appropriate local filtering algorithm must be devised for IWs. However, a single local filtering pass may not be enough for various reasons, e.g. what might appear visually as a poorly segmented bubble at the scale of the current IW might actually turn out to be, after local filtering, multiple bubbles - these would then each require another pass at a finer scale. This means that in general a series of consecutive interrogation passes could take place. Thus, MRIF performs object filtering at different scales, effectively checking that the segments have been properly resolved by the global filter and/or in the preceding local filtering iterations. A stopping criterion based on the IW size similarity between iterations ensures that MRIF only re-filters an image patch if the object is significantly smaller than the previous IW, since in this case the finer shape features may have been under-resolved.
MRIF consists of several components: an IW generator that centers the IW at the segment location and adjusts its size according to the segment size; a local filter that is responsible for filtering within IWs; a recursive routine that performs a "scale descent" and converges to the "true" segment scale starting from the first estimates; and a procedure that collects the final, updated segments and maps them onto the original bubble segment mask, substituting the first estimates. The ε criterion is the stopping factor that controls recursion depth, i.e. the lower IW scale threshold. In our case ε = 2 yielded good results. The MRIF core components are described in detail in Sections 3.4.2, 3.4.3 and 3.4.4.
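The scale-descent recursion can be sketched as follows, with `local_filter` as a stand-in for the local refinement routine (Algorithm 5). All names, the (centroid, area) segment representation, and the coordinate conventions here are illustrative assumptions:

```python
import numpy as np

def mrif(segments, sscf_image, local_filter, scale=2.25, eps=2.0):
    """Skeleton of the multi-scale recursive interrogation filter (MRIF).

    `segments` is a list of (centroid, area) first estimates; `local_filter`
    maps an image patch to refined (centroid, area) segments in patch
    coordinates. Recursion stops once L_prev / L <= eps, i.e. when the new
    IW scale is no longer significantly smaller than the previous one.
    """
    H, W = sscf_image.shape
    results = []

    def iw_side(area, s=scale):
        return s * 2.0 * np.sqrt(area / np.pi)  # L scales with equivalent radius

    def interrogate(centroid, area, L_prev):
        L = iw_side(area)
        if L_prev / L <= eps:                   # scale converged: accept segment
            results.append((centroid, area))
            return
        x, y = centroid
        half = L / 2.0
        # real IW: virtual window clipped to the image bounds
        r0, r1 = int(max(y - half, 0)), int(min(y + half, H))
        c0, c1 = int(max(x - half, 0)), int(min(x + half, W))
        for (cx, cy), a in local_filter(sscf_image[r0:r1, c0:c1]):
            interrogate((c0 + cx, r0 + cy), a, L)  # map back to image coords

    L0 = max(H, W)                              # first pass: whole-image scale
    for centroid, area in segments:
        interrogate(centroid, area, L0)
    return results
```

With a dummy local filter that simply re-centers a segment of unchanged area, the recursion terminates after a single refinement pass.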

Interrogation windows (IWs)
IWs are square windows with side length L, which is determined for segments as follows:

L = 2s sqrt(S/π), (8)

where S is the segment area in pixels and s is a user-defined scale factor, i.e. L scales with the equivalent radius of the segment. Given L and the segment centroid r = (x, y), an IW is defined by the pixel coordinate intervals

[x_min; x_max] × [y_min; y_max] = [x − L/2; x + L/2] × [y − L/2; y + L/2]. (9)

Since L for the entire image is required for the first iteration of MRIF, and one should always perform local filtering at least once for every initial segment, here one can set L_0 = s_0 · max dim(z), where z is the SSCF-filtered FOV image (Algorithm 3, Step 3) and s_0 ≥ 1 is an arbitrary scaling factor such that L_0/L > ε always holds for the first MRIF iteration.
Note that in general an IW may extend beyond the image bounds. For this reason, two IWs are generated for a segment at every MRIF iteration: a virtual IW defined by (9), and a real IW given by

IW′ = [max(x_min; 0); min(x_max; w)] × [max(y_min; 0); min(y_max; h)], (10)

where dim(z) = (w, h), so IW′ is not necessarily square. To reiterate: the current filtering scale is defined by the virtual IW scale L, and the SSCF filter output is mapped onto IW′. For simplicity, IW′ will be referred to as IW from this point, as in Algorithm 4, unless stated otherwise later. We observed that for our images s ∈ [2; 2.5] yielded the best results.

Local segment updates
The local filter used for recursive filtering in MRIF (Algorithm 4, Step 3) has elements similar to the global filter, but with important distinctions. It was designed to be less aggressive because MRIF ensures that only the crucial background context from the image is retained within an IW, e.g. the somewhat destructive SCTMM is not required. The local filter operates on an IW as described in Algorithm 5.
Here the mean filtering radius is a fraction of the expected bubble scale -this is to avoid averaging out the finer shape features. The filtering radius is set based on the expected lower threshold for bubble surface perturbation wavelength. The gradient filter regularization kernel (3) scale was set equal to the mean filtering radius.
Chan-Vese binarization is a variational method for object segmentation in images that does not explicitly utilize edges. It is generally more robust than edge- and histogram-based methods (e.g. Otsu, Canny [71,72]) but is more computationally expensive [73]. However, this can be afforded relatively easily in the case of IWs, which are only a fraction of the size of the original images.

Algorithm 5: Local segment refinement for IWs
Input: Global SSCF filter output (Algorithm 3) mapped onto an IW
1 Mean filtering
Implicit edge detection
2 Compute edge halos using the gradient filter (Gaussian regularization + Bessel derivative kernel)
3 Chan-Vese binarization (11)
4 (Optional) Morphological erosion
5 Thinning transform
Segment filling & cleanup
6 Filling transform
7 Mean filtering (small radius)
8 Otsu binarization
9 Remove border components
Output: Updated local shape estimate

Chan-Vese two-level segmentation works by assigning the following functional to an image (in this case single-channel) and minimizing it iteratively [73]:

F(C) = µ L + ν S + λ1 ∫_D1 |I − I_D1|² dr + λ2 ∫_D2 |I − I_D2|² dr, (11)

with respect to C, where C is the set of segment contours, L is the length of C, S is the area enclosed by C, I_D1 and I_D2 are the mean luminance values over the regions D1 and D2 defined below, and µ, ν, λ1 and λ2 are the control parameters. The Chan-Vese process is typically initialized by defining C such that the image area is covered with a checkerboard of small circular contours of adjustable size, preferably very fine [73]. Two regions within the image are defined: the areas within C, initially given by the circular contours (D1), and those outside C (D2). Minimizing (11) has several effects on C: the combined contour length (term 1), the area of D1 (term 2), and the total discrepancy between the region luminance values and the region averages (terms 3 & 4) are reduced; the relative prevalence of these effects is dictated by µ, ν, λ1 and λ2. C is defined as the zero crossing of a special level set function [73].
Optimization is performed for a certain number of iterations, generally resulting in the unification/dissolution of the initial disjoint C components until, ideally, C encapsulates the desired segments in the image. The Chan-Vese process also guarantees that C is a set of Jordan curves.
Minimization of (11) can be performed by solving the following PDE with respect to the level set function ϕ for C [73]:

∂ϕ/∂t = δ_ε(ϕ) [ µ ∇·(∇ϕ/|∇ϕ|) − ν − λ1 (I − I_D1)² + λ2 (I − I_D2)² ], (12)

with the BC

(δ_ε(ϕ)/|∇ϕ|) ∂ϕ/∂n = 0 on the image boundary. (13)

Here ϕ = ϕ(r, t), n is the outward boundary normal, and δ_ε(ϕ) is the level set regularization function

δ_ε(ϕ) = ε/(π (ε² + ϕ²)). (14)

ϕ(r, t) is initialized as

ϕ(r, 0) = sin(πx/λ_ϕ) sin(πy/λ_ϕ), (15)

where λ_ϕ determines the wavelength of the initialized checkerboard pattern. Equation (12) with (13) and (15) is then solved iteratively by alternating between updates for I_D1 and I_D2, and ϕ.
In our case µ = 0.03, ν = 0, λ1 = λ2 = 1 and ε = 1. As indicated in [73], (15) with λ_ϕ = 5 is a good initialization strategy leading to fast convergence, but we have observed that λ_ϕ ∈ [2; 5) yields faster convergence without perceivable quality degradation in our cases. The reason Chan-Vese is used instead of Otsu or hysteresis binarization is that the former exhibits much more stable edge halo detection in IWs and is not explicitly tied to image histograms (i.e. it does not only minimize intra-class and maximize inter-class variance for the level sets like the Otsu method), which can be very different across IWs. Once binarization is complete, one performs the same sequence of operations as with the global filter (Algorithm 3, Steps 7-12). For local filtering, erosion for segments is performed with 1- to 5-pixel radius disk elements and the mean filtering (cleanup) radius is 1-3 pixels.

Mapping updated segments to original images
Centroids are mapped from converged IWs onto original images (full FOV) using the following transformation:

r = r_iw + r′ − r_0(L) − Λ(L, r_iw), (16)

where r is the updated centroid location in the FOV, r′ are the updated centroid coordinates from MRIF in the IW coordinate system, r_iw is the location of the virtual IW center in the FOV, r_0(L) = (L/2, L/2) is the center of the virtual IW in its coordinate system and Λ(L, r_iw) is the IW crop correction for r_0(L). The mapping is illustrated in Figure 5. Since IW and IW′ both always contain the segment (if any is detected) and their centers are always within the FOV, for an IW fully within the bounds of the FOV one simply extends a radius vector r_iw (stored by MRIF, Algorithm 4) to the IW center and then from that to the bubble centroid via r′ − r_0(L) (Figure 5a, the red vector). If the virtual IW is partially out of bounds, then the actual IW (Figure 5b) has a different center coordinate r′_0 = r_0 + Λ(L, r_iw), since it is displaced when the IW is cropped (10). The crop correction given an image z is

Λ(L, r_iw) = (min(x_min; 0); min(y_min; 0)), (17)

where x_min and y_min are given by L and r_iw via (9) and dim(z) = (w, h) bounds the IW via (10); only crops at the low-coordinate image edges displace the IW origin. The IW crop correction is required as a separate step because the IW coordinate system origin is not necessarily within the FOV.
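In code, the mapping can be sketched as follows (numpy, coordinates in (x, y) order; the crop correction is taken as the displacement of the real IW origin relative to the virtual one, and all function names are illustrative):

```python
import numpy as np

def virtual_iw(r_iw, L):
    """Virtual IW pixel intervals; these may extend beyond the FOV."""
    x, y = r_iw
    return x - L / 2, x + L / 2, y - L / 2, y + L / 2

def crop_correction(r_iw, L):
    """Lambda(L, r_iw): only crops at the low-coordinate FOV edges shift
    the real IW origin, so only negative x_min / y_min contribute."""
    x_min, _, y_min, _ = virtual_iw(r_iw, L)
    return np.array([min(0.0, x_min), min(0.0, y_min)])

def map_to_fov(r_local, r_iw, L):
    """Map an updated centroid from real-IW coordinates to the FOV."""
    r0 = np.array([L / 2, L / 2])          # virtual IW centre in IW coordinates
    return (np.asarray(r_iw, float) + np.asarray(r_local, float)
            - r0 - crop_correction(r_iw, L))
```

For an IW fully inside the FOV the correction vanishes and the mapping reduces to the simple radius-vector construction of Figure 5a.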

Luminance map-based false positive filtering
The MRIF procedure, in addition to its intrinsic false positive filtering capacity stemming from the recursive multi-scale analysis, is supplemented by a dedicated false positive identification algorithm that processes the segments output by MRIF. The procedure is outlined in Algorithm 6.
Algorithm 6: Luminance map-based false positive filtering
Input: FOV bubble shape masks output by MRIF
1 Project the global SSCF filter output onto each converged IW
2 Apply SCTMM (7) locally
3 Multiply by the segment binary mask
4 Normalize the image
5 Compute Ī · max(I) for all segment regions
6 Thresholding for all segments: if Ī · max(I) < η, η ∈ [0; 1] (user-defined), then flag the segment as a false positive, else do nothing
7 Remove the identified false positives from the bubble shape mask
Output: FOV bubble shape masks without detected false positives

Algorithm 6 exploits the observation that SCTMM applied locally to the global SSCF filter output mapped onto converged IWs should produce very strong intensity maxima and an overall higher intensity in the segment region in the case of a true bubble, while the opposite should hold for false positives. The product of mean and maximum intensity is used so that neither of the two criteria alone is enough to pass the filter: a region with an otherwise background-level intensity might exhibit a tightly localized intensity maximum; similarly, mean intensity filtering alone is not enough, since a bubble should have a strong maximum of transmission intensity about its centroid, which in itself is not as strongly correlated with the mean intensity. Another way to interpret this is that, if the maximum and mean thresholding each have a certain probability of accepting a false positive, then the max/mean product thresholding has a false positive acceptance probability at least lower than the greater of the two. Note that Algorithm 6 uses a luminance compression factor c = 0.5 for SCTMM. We found that false positives are efficiently filtered (i.e. also minimizing true positive elimination) when η ∈ [0.1; 0.125].
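The thresholding at the core of Algorithm 6 reduces to a few lines (a numpy sketch; `local_img` stands for the locally SCTMM-processed IW and the function name is illustrative):

```python
import numpy as np

def is_true_bubble(local_img, segment_mask, eta=0.1):
    """Keep a segment only if the product of its mean and maximum luminance
    (image normalized to [0, 1]) exceeds the user threshold eta."""
    img = local_img / local_img.max()
    vals = img[segment_mask]
    return float(vals.mean() * vals.max()) >= eta
```

A segment sitting on background (low mean, no strong maximum) fails the test even if one of the two factors alone is moderately high, which is exactly the intent of the product criterion.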
After filtering the MRIF output using Algorithm 6, all properties of interest are measured for all remaining bubble segments and logical filters can be applied for further false positive elimination. In our case logical filters check for implausible bubble coordinates, sizes and aspect ratios and remove the outliers from the dataset of measured bubble shapes. Finally, the resulting data can be post-processed and interpreted.
4 Performance analysis

4.1 First segment estimates

Figure 6 shows the effect of the subsequent operations that the global filtering routine (Algorithm 3) performs for a pre-processed image. The difference between Figures 6a and 6b is that the noise with low wavelengths (sharply localized luminance maxima and minima left over from pre-processing) has been eliminated. Next, the TV filter (Figure 6c) consolidates the high luminance values within the bubble regions (white dashed circles), increasing the SNR for the bubbles. Noise is also significantly damped and the wavelength of its features is increased even further. However, the TV filter does not remove the sparse larger-scale luminance maxima still seen in Figure 6c in the background between the bubbles as efficiently as desired without degrading the CNR for the bubbles. This function is performed by SSCF (Figure 6d), which specifically diffuses the leftover intensity maxima, increasing bubble SNR (and CNR, due to reduced noise in the background) further. In the case of Figure 6 one has γ → ∞ for (5). A close-up view of the transformations of one of the bubble neighborhoods is shown in Figure 7, where one can visually notice that bubble CNR is increased by SSCF (d) and that intensity maxima are indeed eliminated from the background. A more detailed analysis of how the noise filtering stages affect the image and the bubbles can be seen in Figure 8, where a vertical strip containing both bubbles is taken from the images in Figure 6 and their relief plots (a) and mean luminance profiles (b and c) are shown.
With SNR increased by the sequence of PM, TV and SSCF filters, SCTMM is now applied to increase the CNR. This is seen especially clearly in Figure 8c, where SCTMM significantly flattens the background while preserving bubble signal intensity, as intended. This enables the gradient filter to produce bubble edge halos with an even greater CNR (note the high depth of the halo "wells" in Figure 8a-6, which extend well below the binarization threshold, down to background luminance levels), enabling clean segmentation, as seen in Figure 6g.
Note that all luminance profiles in (b) and (c) have been normalized for direct comparison.

MRIF
The first segment estimator, when tuned appropriately, could have been enough for bubble shape extraction, if not for the fact that the image considered in Figures 6-8 is one of the better examples in terms of noise and artefacts present, i.e. a considerable portion of the images captured in our experiments are of much poorer quality. Not only are the obtained shapes often imprecise or deformed, they can at times be decidedly non-physical. One such example is shown in Figure 9, where in (a) a segment is shown that looks like two bubbles in the process of merging. Such events are not expected at the flow rate for which this image was acquired, therefore Figure 9a shows an obvious artefact. The bottom-right corner of (b) contains closely packed high-luminance spots which have likely been combined and merged with the bubble region in the upper-left corner. However, once MRIF targets the segment and the local filter is applied to the SSCF output projected onto the segment IW (c) in stages (d-h), the artefact is no longer present and a single bubble is correctly resolved.
In addition to such artefacts, since the global filter was tuned to maximize the odds of detecting bubbles in the FOV, there are cases where detected segments are false positives. Two instances of such segments interrogated by MRIF are shown in Figure 10. In (a) one can see that the local filter has revealed that there is indeed no segment contained within the IW. However, interrogating the false positives in an IW might on occasion produce segments yet again, as in (b), where the gradient filter stage (b4) generated a structure that resembles a bubble edge halo. It was then segmented and, through edge cleanup, transformed into a segment that seems eligible -but simply overlaying it over the original image projected onto the IW, one can see that this is not the case. However, MRIF effectively performs a two-factor false-positive check, and in such cases the luminance map-based filter (Algorithm 6) serves as a backup. Once the global SCTMM output is projected onto the IW and SCTMM is applied to the resulting image (b9), one can see that the segment overlay contains only background, and thus this false positive will be eliminated since it has I · max(I) < η.
An example containing several instances of false positives, under-resolved shapes and bubble regions merged with noise patterns is shown in Figure 11. Notice that the image quality, even visually, is much worse than in Figures 6-8. The artefacts in the upper part of the image stem from the lower CNR in (b), whereas one of the bottom artefacts comes from a noise structure in the background that resembles a bubble. However, as seen in (d) and (e), MRIF successfully removes all false positives and artefacts while improving the shape estimates.
An even more difficult case is seen in Figure 12 where very large segments appear. The largest one in the upper part of (c) has occluded two of the four bubbles visible in (a-b). This is also an instance where the crop correction (17) is significant for remapping the updated segments onto the FOV (16). MRIF successfully resolves bubbles from the first estimates, as seen in (d-f). Notice the segment in the bottom-left part of (e) and (f) -its luminance map contains background only, so it will be later eliminated by Algorithm 6.
To see how MRIF iteratively resolves cases like the above two examples, consider Figure 13 where the updates for the first segment estimates are shown for MRIF iterations. One can see in (e3) that the SNR and CNR are even worse than in the cases shown in Figures 11 and 12. The largest segment seen in (e1) and (a1) is first resolved into two bubbles (b1) and then each bubble is interrogated once more, obtaining more precise shapes. The bottom-most segment in (e1) requires the most MRIF iterations -the first two, (b3) and (c3), remove portions of the artefact that had obscured the bubble, and the last iteration (d) updates the resolved shape. The resulting bubble shapes are then mapped onto the FOV as indicated in (b-e).
Here it is important to reiterate that the performance of MRIF strongly depends on the user-defined IW scaling factor s (8) and the critical IW length scale ratio ε for the latest and the (potential) next recursion iterations. Specifically, the ratio χ = s/ε is of interest; we suggest χ ≥ 1 in general. An optimal ratio, which, as we observe, should be the same for all image sequences acquired under similar conditions, enables MRIF to efficiently "strip" the first segment estimates of artefacts and perform one final update for the resolved bubbles, as in Figures 13a-3 to 13d. Here χ = 1.25 for the examples shown in Figures 11-13, with s = 2.5 and ε = 2. Before adjusting χ, we would recommend that the user determine the s value which gives the best performance for the local filter, also adjusting the settings for the latter; this will determine a starting value for χ before further optimization.

Figure 13 caption (continued): Converged segments are flagged with green ticked boxes. Colored frames in (e2) correspond to (b2), (c1-2) and (d) via respective colors. Orange frames in (e1) indicate the IWs at R = 1: note the high aspect ratio of the topmost IW; its virtual counterpart is significantly out-of-bounds and therefore a considerable crop correction (17) is assigned to properly map (c1-2) onto the FOV.

Results for existing & new data
Aside from the examples shown in Sections 4.1 and 4.2, it is also of interest how the developed approach performs for entire image sequences in terms of bubble detection density in the FOV and the physicality of obtained results, i.e. bubble trajectory and shape properties. Figures 14-16 demonstrate the differences in performance for the preceding image processing pipeline [8,44] and the methodology presented here.
One can clearly see in Figure 14 that the new version of the image processing code outperforms the previous version by completely avoiding the blind zones in the lower part of the FOV for both image sequences. Notice also that bubble tracks visible in (b) are much more coherent than in (a). Figure 15, in turn, shows that with the new methods one can now clearly resolve the classical S-shaped mean trajectory cluster formed by zigzag trajectories, as seen in (c) and (d), as opposed to (a) and (b) where a significant portion of the events is missing. The deflection bias in the x > 0 direction in (c) and (d) is determined by the horizontal inlet releasing gas in that direction.
Another point of interest is the tilt angle dynamics resolved in [8] versus the current results; this is showcased in Figure 16. Three things are important to note here: first, as a consequence of the blind zone elimination, the new curves extend all the way through the FOV; second, the average trends yielded by both approaches indicate that the previously used code indeed resolved the dynamics without unacceptable inaccuracy; third, the error bands are considerably narrower about the averaged curves for the present results. The latter is especially true for the case with applied MF shown in (b), where the SNR was much lower than in the image sequence corresponding to (a). This indicates that the new approach indeed yields significant improvement not just in bubble detection, but also in shape boundary resolution. The experimentally obtained results in [8] were in rather good agreement with performed simulations, meaning one thus has indirect validation of the presented approach.

Figure 14 caption (fragment): ... Figure 14b with (c) all bins and (d) bins with 4+ detections shown. Note the color legend to the right of (d).
The developed approach was also applied to the newly acquired data to ensure consistency in the code output across different experimental campaigns -one instance of the new results is shown in Figure 17. Again, the bubbles are resolved over the entire FOV for all three cases shown. An in-depth physical analysis of the bubble dynamics is beyond the scope of this paper and is reserved for a follow-up article.
Figure 16: Bubble tilt angle versus elevation (averaged curves and error bands) over the FOV bottom for (a) the case with no applied MF (Figures 14a and b) and (b) an applied horizontal ∼265 mT MF (Figures 14c and d), both cases at a 100 sccm flow rate. Orange indicates the previous paper [8] and the current results are shown in gray. The tilt angle definition is shown in the bottom-right corner of (a).

An important note on Figure 16: the averaged curves and the error bands were computed from the image processing code output as outlined in Algorithm 7. The quantile spline envelopes (QSEs) were computed following [74] and using the code (Wolfram Mathematica package) available on GitHub: Anton Antonov (antononcube): MathematicaForPrediction/QuantileRegression.m. For all the datasets represented in Figure 16, we used q = 0.9 with 3rd-order splines and 2.5% · N spline knots for QSEs, where N is the number of points in the dataset; we set N_QSE = 14% · N and δ_b = 0.5% · N (point density-adaptive physical bin size); the TV regularization parameters [59] were 1 for QSEs and 0.25 for the binned data.
Finally, even though one cannot check how many bubbles the image processing code actually failed to detect without manual inspection (not feasible), one can evaluate the number of detection events that are ruled out as false positives at the various stages of code execution for a sequence of images. The results for the five image sequences considered above (Figures 14 and 17) are presented in Table 1.
Algorithm 7: Post-processing for the output of the image processing code
Input: Tilt angle (or any other bubble property) values over elevation (or any other independent variable)
1 Construct the upper and lower quantile spline envelope (QSE) functions for the input data with the quantile q
2 Sample the QSEs evenly in intervals of N_QSE points over the independent variable range
3 Smooth the QSEs using Gaussian TV filtering
4 Designate the data points above the upper and below the lower QSEs as outliers and remove them from the dataset
5 Bin the remaining data points over the independent variable range into bins with a size of δ_b points
6 Compute means and standard deviations for the resulting bins
7 Smooth the resulting mean value sequence using Gaussian TV filtering
Output: An averaged curve for the dataset with error bands

Table 1: False positive elimination rates (%) over three stages (MRIF, the luminance map-based filter (Algorithm 6) and the object property filter, each with respect to the input it received), and the percentage of detection events input to MRIF that were eliminated in total. Columns: Image sequence, MRIF, Algorithm 6, Property filters, Total, Remainder.
Notice that in the most difficult case of the five (Figure 14d) most of the work is done by the luminance map-based filter and the object property filter. However, the intrinsic filtering capacity of MRIF is significant because it filters out the detection events that very likely would have passed both of the two following stages.
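A simplified stand-alone version of Algorithm 7 can be sketched as follows (numpy only; windowed quantiles stand in for the quantile spline envelopes and the Gaussian TV smoothing steps are omitted, so this is a structural sketch rather than a reimplementation):

```python
import numpy as np

def averaged_curve(x, y, q=0.9, n_bins=20):
    """Quantile-envelope outlier rejection followed by binned mean/std curves."""
    order = np.argsort(x)
    x, y = np.asarray(x)[order], np.asarray(y)[order]
    edges = np.linspace(x[0], x[-1], n_bins + 1)
    # Steps 1-4 analogue: reject points outside the windowed [1-q, q] quantiles
    keep = np.zeros(y.size, dtype=bool)
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (x >= lo) & (x <= hi)
        if m.sum() < 3:
            keep |= m
            continue
        up, dn = np.quantile(y[m], q), np.quantile(y[m], 1 - q)
        keep |= m & (y >= dn) & (y <= up)
    x, y = x[keep], y[keep]
    # Steps 5-6 analogue: bin the surviving points, then mean +/- std per bin
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    used = [i for i in range(n_bins) if (idx == i).any()]
    centers = np.array([(edges[i] + edges[i + 1]) / 2 for i in used])
    means = np.array([y[idx == i].mean() for i in used])
    stds = np.array([y[idx == i].std() for i in used])
    return centers, means, stds
```

Gross outliers fall outside the local quantile envelope and never reach the binning stage, which is the role the QSEs play in the actual pipeline.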

Stationary reference body
The developed image processing algorithm is first validated by applying it to the images of a stationary reference body described in Section 2.1. Three imaging cases are considered here: neutron flux transmission through the shorter body axis, through the longer axis, and the latter with an extra distance from the body to the scintillator. Thus, the SNR of the neutron-transparent spherical cavity within the body progressively decreases across these cases. This is illustrated in Figure 18.

Figure 18 caption (fragment): (e-g) normalized mean luminance maps for the respective image sequences. Note the color bar to the right.
A neutron radiography image of the reference body (slightly inclined) is shown in (a), where one can see the rectangular brass frame (darker) and a circular projection of the spherical void (brighter), as well as the surrounding background due to air. Neutron transmission in the case of (a) is along the shorter of the body axes. In all three imaging cases the images are cropped as indicated in (a). Examples of cropped images of the spherical void within the body with exposure very similar (∼1.3×) to that for the bubble images (100 FPS, ∼1.3× neutron flux) are shown in (b-d) for the three cases listed above, respectively. The corresponding mean luminance maps shown in (e-g) were obtained by averaging over ∼5.8K, ∼12K and ∼9.5K images (entire recorded sequences), respectively; these averaged images are used to obtain reference shapes. The shapes detected by the image processing code in images like (b-d) are then compared to the reference shapes to compute shape detection error metrics. Note that all images seen in Figure 18 and used for validation are obtained from raw images by pre-processing via Algorithm 2, as with the bubble flow images.
Note that the images shown in Figure 18 have image side length to sphere diameter ratios very similar to those of the MRIF IWs. To compare the reference body images to the bubble images in terms of image quality, consider Figures 19-21 versus Figure 2.

Figures 19-21 caption (fragment): ... Figure 18. The red curve is the total variation filtered (Gaussian, regularization parameter equal to 1 [59]) gray curve.

Notice that the case shown in Figure 21 is very similar to the bubble neighborhood case in Figure 2. In fact, Figure 21 exhibits arguably even worse CNR and SNR, meaning that, despite the different materials (gallium and argon versus brass and air) and slightly different neutron flux, the reference measurements are representative of the flow imaging conditions and can be used for direct validation of the new image processing code.
Figure 21 caption: Neighborhood analysis for the reference spherical void, transmission along the longer axis with an extra distance to the scintillator: normalized width mean of luminance (gray) over the length of (a) horizontal and (b) vertical patches roughly fit to the sphere dimensions, spanning the cropped body images.

To obtain the reference shapes from the images in Figure 18e-g, one first applies the Gaussian TV filter [59]

∂I/∂t = ∇·(∇I/|∇I|) + µ (I_0 − I),
where the notation is as in (4) (note that the left-most term is different), µ = 0.25 is the regularization parameter and 15 TV iterations are performed [65]. Afterwards, double-Otsu hysteresis binarization is performed, followed by image border component removal and mean filtering (5-pixel radius).
To ensure fair verification, we apply both the global (Algorithm 3) and the local (Algorithm 5) filters to the reference void images with parameters identical to those used for bubble images -these are provided in Sections 3.3 and 3.4.3.
Once all images are processed and the masks given by the global and the local filter are obtained, we compute the following shape detection error metrics for both filters:
• S_∆ - the area of the difference between the detected and the reference masks
• δS - the absolute difference in the areas of the detected and the reference masks
• δr - the absolute difference of the detected and the reference mask effective radii
• δc - the absolute difference in the circularity (the ratio of the equivalent disk circumference to the shape polygonal length) of the detected and the reference masks
• (δx, δy) - the absolute difference in centroid coordinates between the detected and the reference masks
where all metrics are normalized to the respective properties of the reference masks, except (δx, δy), which is normalized to the reference radius.
Here the δc and δr metrics serve primarily as "red flags" against gross inaccuracies in the detection of the circular void projection shapes. Under normal circumstances, one should have δc < 3-5% and the δr histogram should conform to that of δS. The other three metrics are used directly for shape detection error quantification where the most important one is S ∆ .
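Given two boolean masks, most of these metrics are one-liners (a numpy sketch; δc is omitted here since it requires a polygonal contour length, and all names are illustrative):

```python
import numpy as np

def shape_errors(detected, reference):
    """S_delta, dS, dr and (dx, dy) for a detected vs. a reference mask,
    normalized as in the text (centroid offsets by the reference radius)."""
    S_det, S_ref = detected.sum(), reference.sum()
    S_delta = np.logical_xor(detected, reference).sum() / S_ref
    dS = abs(int(S_det) - int(S_ref)) / S_ref
    r_det, r_ref = np.sqrt(S_det / np.pi), np.sqrt(S_ref / np.pi)
    dr = abs(r_det - r_ref) / r_ref

    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([xs.mean(), ys.mean()])

    dx, dy = np.abs(centroid(detected) - centroid(reference)) / r_ref
    return S_delta, dS, dr, (dx, dy)
```

Note that δS and δr can vanish for a shifted but otherwise perfect detection, which is why the symmetric-difference area S_∆ is the primary metric.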

Moving reference body
The principles behind imaging a moving reference body are as above, except the body is now attached to a pendulum ( Figure 22) that periodically oscillates, and thus the body travels back and forth through the FOV. The motion is mostly horizontal and is initially strongly damped until the pendulum amplitude reaches a state where its oscillations exhibit a very slow decay. This enables us to determine the dynamics of the error metrics outlined in Section 5.1 and assess the effects of motion blur as the body and the void within it decelerate.
Here, before the global and local filters can be applied to the cropped reference body images, one must first segment the body within the FOV (Figure 22), crop the masked image to an IW about the mask centroid, repair the IW images as necessary, and then apply the filters. This procedure is somewhat more involved than the one in Section 5.1 and is outlined in Algorithm 8.
The Poisson TV filter (4) reduces the noise in the image and the image luminance map inversion makes the darker body area foreground and the surrounding air (brighter) background. The SCTMM filter (7) then increases the body CNR and the body is segmented using the Chan-Vese process (11). Morphological opening (disk structural elements, 15-pixel radius) and erosion (disk structural elements, 5-pixel radius) help separate the body from the clamps (Figure 22), as well as to remove artefacts, if any. Border component removal is done because only body segments fully within the FOV are eligible for analysis. Note that here we use c = 0.5 for SCTMM and β = 1 for the TV filter.
Algorithm 8: Reference body image processing
Input: A normalized pre-processed (Algorithm 2) image as in Figure 22
Image filtering
1 Poisson TV filtering (4)
2 Luminance map inversion
3 SCTMM filtering (7)
4 Image normalization
Reference body segmentation
5 Chan-Vese binarization (11)
6 Morphological opening (disk elements)
7 Morphological erosion (disk elements)
8 Delete border components
Void region extraction
9 Mask area thresholding
10 Define an IW about the body centroid based on the body segment area (8, 9)
11 Project the input image onto the IW, masking the pixels outside the body segment
Void region restoration
12 Extrapolate the body background to masked IW regions using texture synthesis-based inpainting
13 Image normalization
14 (Optional) Remove high-luminance outliers and normalize the image
Output: A repaired IW containing the spherical void surrounded by the body background

After area thresholding removes mask components that have abnormally small areas (usually leftover clamp segment fragments), an IW is defined about the body centroid via (8) and (9) with s = 1.25, and the input image with masked non-segment pixels is projected onto the IW. An example is illustrated in Figure 23. Note that (a), which shows the detected segments, contains a small artefact to the bottom left of the body segment; such segments are removed by area thresholding. An IW is then defined about the centroid of the remaining body segment and a void region is extracted, which is shown in (b). Note, however, that the IW contains portions of the masked background from (a), which can interfere with the global and local filters.
It is therefore necessary to extrapolate the body background about the void into such regions. These regions are detected by inverting the IW luminance map and running histogram-based segmentation with a single threshold of 0.999. The masked regions are then segmented from the IW, as shown in (c). Texture synthesis-based inpainting is then performed with a maximum of N_neigh = 150 neighboring pixels used for texture comparison and a maximum of N_samp = 300 sampling instances for texture fitting [75].
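The texture synthesis-based inpainting of [75] is a Wolfram Mathematica feature; as an open-source illustration of the repair step, biharmonic inpainting from scikit-image (a smooth-background stand-in, not the method used in the paper) can fill the masked regions:

```python
import numpy as np
from skimage.restoration import inpaint_biharmonic

# Synthetic body background (constant luminance) with a masked region to repair,
# standing in for the masked pixels about the void in Figure 23c.
img = np.full((40, 40), 0.5)
mask = np.zeros((40, 40), dtype=bool)
mask[15:25, 15:25] = True
img[mask] = 0.0                      # damaged / masked pixels
repaired = inpaint_biharmonic(img, mask)
```

For the nearly flat brass background about the void, any smooth extrapolation suffices; texture synthesis matters when the background carries structured noise that the global and local filters would otherwise misinterpret.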
In rather rare cases inpainting introduces pixels with strongly outlying luminance values, which are eliminated as follows. An Otsu threshold I c is computed for the input image, and the image is then binarized using the threshold k I · I c, where k I is a threshold scaling factor. With the right k I value, this operation segments pixels that differ by more than a certain luminance value from the bulk of the histogram. The segmented pixels are then masked in the input image. We found that k I = 4 does not affect images without significant outliers (ones that strongly affect global and local filter output) while effectively cleaning up severe outliers without modifying the bulk of the image histogram. An example output of Algorithm 8 is shown in Figure 23d. Afterwards, the resulting repaired void regions are used as input for the global and local filters. The settings for the global and local filters are the same as in the cases with stationary reference bodies.
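The scaled-Otsu outlier clean-up can be sketched as follows (a minimal illustration on a flat list of luminance values in [0, 1], with masked pixels represented as None; the production code operates on 2D images):

```python
def otsu_threshold(pixels, bins=256):
    """Otsu's threshold (maximum between-class variance) for luminances in [0, 1]."""
    hist = [0] * bins
    for p in pixels:
        hist[min(int(p * bins), bins - 1)] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg, sum_bg, best_t, best_var = 0, 0.0, 0, -1.0
    for t in range(bins):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (m_bg - m_fg) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return (best_t + 0.5) / bins

def mask_outliers(pixels, k_i=4.0):
    """Mask pixels whose luminance exceeds k_i times the Otsu threshold I_c."""
    t = k_i * otsu_threshold(pixels)
    return [None if p > t else p for p in pixels]
```

Since the bulk of the histogram lies far below k I · I c, ordinary pixels are untouched and only severe high-luminance outliers are masked.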
In the cases where neutron flux transmission was along the longer body axis (Figure 22a) it was more difficult to segment the body without the pendulum clamps using Algorithm 8 as outlined above. Therefore, minor adjustments were made:
• Otsu binarization was used instead of the Chan-Vese process
• The morphological opening disk element radius was increased to 50 (Step 6)
• Morphological dilation using disk structural elements with a 5-pixel radius was performed after Step 7
• Body masks were reoriented to minimize the area of masked background in the IW before performing Step 11
All other steps and parameters of Algorithm 8 remained unchanged. This procedure was necessary because clamp removal from the body segments using larger-scale morphological opening resulted in body segments smaller than the body by a considerable margin, which was compensated for by morphological dilation to recover the eroded area. Body reorientation was required because the masked image margins to be filled using texture synthesis often constituted a significant fraction of the IW area, resulting in artefacts. Body orientation was detected by fitting a minimum-area oriented bounding box and determining its angle φ (defined as in Figure 16a) with respect to the horizontal axis. The masked body image was then rotated by −φ radians, and the body region was obtained and repaired as usual. In two cases we had to use lower values, N neigh = 50 and N samp = 200, to avoid sampling the void regions and producing high-luminance synthetic background.
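The orientation detection step can be sketched with a brute-force angle sweep over candidate bounding-box orientations (the production code presumably uses a dedicated minimum-area bounding box routine; the grid search below is an illustrative substitute operating on segment boundary points):

```python
import math

def bbox_area_at(points, phi):
    """Axis-aligned bounding-box area of the point set rotated by -phi."""
    c, s = math.cos(-phi), math.sin(-phi)
    xs = [c * x - s * y for x, y in points]
    ys = [s * x + c * y for x, y in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def bbox_angle(points, steps=1800):
    """Angle phi of the minimum-area oriented bounding box, in [0, pi/2)."""
    best_phi, best_area = 0.0, float("inf")
    for i in range(steps):
        phi = (math.pi / 2) * i / steps
        a = bbox_area_at(points, phi)
        if a < best_area:
            best_area, best_phi = a, phi
    return best_phi

def derotate(points, phi):
    """Rotate the point set by -phi, as done before void region extraction."""
    c, s = math.cos(-phi), math.sin(-phi)
    return [(c * x - s * y, s * x + c * y) for x, y in points]
```

Rotating the masked body image by −φ then aligns the body's long axis with the horizontal, minimizing the masked background area inside the IW.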
In addition, another issue exists: filtering is performed for IWs with slightly different sizes and void positions. Therefore, one cannot directly overlay detected void shapes over a reference mask. One also cannot obtain a reference mask by averaging the images from the recorded sequence, as with a stationary body. As a solution, we use the reference void segment detected from Figure 18e (best SNR and CNR) to generate a reference circle against which the detected shapes are compared. Using the radius determined for the reference void segment, optimal circles C ref are fitted to the detected segments:

c 0 = arg min over c of Σ k (‖r k − c‖ − R ref)², (19)

where r k are the segment boundary pixel positions, c 0 is the optimized circle position and R ref is the reference void radius measured from Figure 18f. This approach to reference mask placement works well only if the global and local filters are known not to systematically produce shapes with significant errors in centroid position with respect to the true void position, which has been verified using the image sequences with stationary bodies.
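The fixed-radius fit (19) can be solved with a simple fixed-point iteration on the stationarity condition of the least-squares objective, c = mean(r k) + R ref · mean(u k) with u k the unit vectors from the boundary points towards the center. The solver below is a sketch under that formulation (the paper does not specify which solver is used), adequate for near-circular boundary segments:

```python
import math

def fit_circle_fixed_radius(points, r_ref, iters=200):
    """Fit center c minimizing sum_k (|r_k - c| - R_ref)^2 for a fixed R_ref.

    Fixed-point iteration on the stationarity condition
    c = mean(r_k) + R_ref * mean(unit(c - r_k)), started at the centroid.
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cx, cy = mx, my
    for _ in range(iters):
        ux = uy = 0.0
        for x, y in points:
            d = math.hypot(cx - x, cy - y) or 1e-12  # guard zero distance
            ux += (cx - x) / d
            uy += (cy - y) / d
        nx = mx + r_ref * ux / n
        ny = my + r_ref * uy / n
        done = math.hypot(nx - cx, ny - cy) < 1e-10
        cx, cy = nx, ny
        if done:
            break
    return cx, cy
```

For boundary pixels lying exactly on a circle of radius R ref, the centroid is already the fixed point and the iteration returns the true center immediately.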
Given this, it is expected that (19) affects S ∆ to a degree, while δS, δr and δc are unaffected by definition. The measured centroid determination error is, however, considerably diminished by the fitting, but this is an acceptable trade-off. In fact, with (19), (δx, δy) becomes a measure of circularity correlated with δc, and it is arguably more useful to treat it as such. In the case of the moving reference body, (δx, δy) can also be used to quantify shape detection errors induced by motion blur.

Error analysis
With the images from the reference experiments processed and the error metrics calculated, one can now assess the performance of the developed code more strictly than in Section 4.3. Starting with the imaging of a static reference body, the results are presented in Figures 24-26. One can see in the S ∆ histograms (Figure 24) that the global filter error distribution peaks and means are shifted towards higher S ∆ values going from neutron flux transmission along the shorter (a) to the longer (b) axis, and value dispersion is also increased, with considerably more instances of S ∆ > 10%. When the body is moved 1 cm away from the scintillator (c), the global filter peak is shifted further towards higher values, but the dispersion does not change significantly. The maximum values (not shown) are higher as well with respect to (b). Similar tendencies can be observed for the local filter errors, but the overall error values are considerably decreased and the distribution peaks are shifted back to lower S ∆ values. The S ∆ dispersion and maximum values are also diminished across the board. Thus, while not radical, the improvements due to local filtering are still very clear. The δS distributions shown in Figure 25 are different in that there are many more instances of δS < 5%. Note that, while in (a) there are relatively fewer δS < 5% values than in (b,c) for the global filter, the local filter improves the results significantly: dispersion is minimized and the distribution mean is shifted below δS = 2%. This is in contrast to what is seen in (b,c): in (b), the δS peak is shifted towards larger values by the local filter, and in (c) it stays at roughly the same position. However, again, the local filter drastically reduces δS dispersion and maxima. The reason why one observes relatively fewer δS < 5% values in (b,c) than in (a) is that the local filter performance depends on how well the first three stages of the global filter improve SNR and CNR.
That is to say, most of the instances with δS ≳ 7% are converted to values about the peaks of the local filter error distribution, because the local filter in most cases cannot achieve improvement below δS ≈ 3%. Therefore it stands to reason that fewer such values are seen in (c) than in (b). As with S ∆ (Figure 24), one can see that the local filter performs progressively worse from (a) to (c), as expected.
In the static body case one would expect δx ≈ δy, which is indeed the case, since there is no bias due to motion blur. Figure 26 therefore shows the norm of the position error vector. Notice that here the local filter improves the error distributions only slightly, reducing the relative amount of higher δx, δy values in all cases. It is important to point out that the δr distributions conform to the δS plots, with δr metric values lower by a factor of 2, while δc is negligible across the board. The error values seen in Figures 24-26 are within acceptable ranges.
Moving on to the moving reference body imaging, the results of these tests are shown in Figures 27-32. Figure 27 shows the pendulum (and reference void) velocity dynamics over consecutive frames for all imaging series. The [20; 40] cm/s range of expected bubble velocities (Section 2.3) is covered by the performed measurements, as seen in (a) and (a1). Importantly, in addition to the shorter and longer axis transmission tests, we also used purposefully inappropriate texture synthesis and outlier removal settings in Algorithm 8 to generate three sets of data with synthetic image edge artefacts (luminance similar to the void regions) and single pixels with luminance exceeding the image maximum by an order of magnitude (greatly reducing image CNR), to see how the filters perform. Consider Figure 28, where S ∆ and δS distributions are shown for the three test groups (colors indicate different imaging instances; S + ∆ is the correction for the reference mask fitting bias, marked by red arrows).
As before, the global filter performance in terms of both S ∆ and δS progressively decreases as the transmission axis length is increased, and further when image artefacts are introduced into the data from the longer axis transmission measurements. Notice that the S ∆ and δS values in Figure 28 are in all cases greater than in the static body cases (Figures 24-26). This is clear enough to be seen visually, as histogram peaks in (a,b) are just under 15% errors for the global filter output, while only being just under 8% in the worst case for S ∆ in Figure 24c. However, this is exactly where the local filter makes a radical difference: observe in Figures 28a, c and e that the dispersion, maxima, minima and means of S ∆ are greatly reduced. Similarly good performance is seen in (b,d,f) in terms of δS. Notice that the changes in the local filter error distributions as image quality worsens from (a,b) to (e,f) are consistent with what is seen for static body imaging in Figures 24 and 25. Another important point here is that two of the shorter axis transmission instances (Figures 28a and b) and one from the longer axis groups (Figures 28c and d) are with an extra 1 cm body-to-scintillator distance (2 mm by default); however, in this case the differences are not significant enough to warrant attention. This likely stems from the fact that a large fraction of errors is due to motion blur, especially during the pendulum deceleration stage, which is within the first 300-500 imaging frames (Figure 27). It is also very clear from Figures 28e and f that introducing additional image artefacts considerably degrades the global filter performance. The local filter, again, greatly improves the quality of detected shapes, but, of course, also produces greater errors compared to what is seen in (c) and (d). One must remember that S ∆ is reduced by (19).
To estimate the reduction, consider that if the centers of two circles with equal radii are displaced by a factor k ≤ 2 of the radius of either, the relative area of their difference is S ∆ (k) = 1 − (2/π) arccos(k/2) + (k/π)√(1 − k²/4). Assuming the worst-case scenario where mask fitting via (19) "improves" the position of the detected shape by the peak δx, δy value in Figure 26c, k ∼ 8%, the S ∆ underestimation comes out to S + ∆ ∼ 5%. To verify this, we also observed how S ∆ changes for shapes with significant deformities detected (rarely) in stationary body images when ground truth masks (Figures 18e-f) are replaced with fits using (19); this agrees with the above idealized estimate rather well, yielding S ∆ differences in the (4; 5)% range. If this adjustment is applied to Figures 28a, c and e, the observations are consistent with the results for the stationary reference body, and the local filter S ∆ values come out to about ∼ 10% at distribution peaks (versus ∼ 8% for Figure 24c), excluding one of the cases in Figure 28e (light green curves). To reiterate, δS, δr and δc are unaffected by (19) by their definitions. Here the δr distributions again conform to δS. δc is negligible for all cases and, as seen in Figure 29, δx, δy is such that the asphericity of the detected shapes is within acceptable bounds.
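The displaced-circle estimate is easy to check numerically: at k = 0 the symmetric difference vanishes, at k = 2 the circles are disjoint, and at k ≈ 0.08 the formula gives S ∆ ≈ 5%, matching the S + ∆ value quoted above.

```python
import math

def s_delta(k):
    """Relative symmetric-difference area of two equal circles whose centers
    are displaced by k times the radius (normalized by the total area of
    both circles). Valid for 0 <= k <= 2."""
    return 1 - (2 / math.pi) * math.acos(k / 2) \
             + (k / math.pi) * math.sqrt(1 - k * k / 4)
```

The normalization by the combined area of both circles is why S ∆ runs from 0 (coincident circles) to 1 (disjoint circles).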
Turning to Figure 29 and recalling that with (19) δx and δy are correlated to the sphericity of the detected shapes, notice that the errors are anisotropic in all cases. The greater component δx corresponds to the main direction of motion. Importantly, this anisotropy is also observed when inspecting the shape/reference difference masks for the local and global filters, where the largest contributions to S ∆ are at the shape sides in the −x and x directions which are the pendulum (Figure 22) oscillation directions.
Since the S ∆ and δS maxima for the global filters were not included in Figure 28 to maintain visual clarity, and it is hard to quantify dispersion visually, Figures 30 and 31 show these quantities explicitly for all three test groups and for both the global and local filters. While shape detections with S ∆ and δS values close to those indicated in Figures 30b and 31b are very rare, it is important that the local filter can significantly improve the quality of detected shapes even in these instances. The effects of motion blur can be assessed more quantitatively and directly: consider Figure 32, where S ∆ is plotted as a time series for two of the processed image sequences. One can see that higher body motion velocity indeed corresponds to greater errors, which decline over time as the body decelerates. Note that, comparing Figures 32a and b, it is evident that the intrinsic error signal due to the global filter quickly obscures the error contribution due to motion blur.
However, it also seems that the global filter is affected by motion blur more than the local filter: as seen in (b), S ∆ rather quickly relaxes to a quasi-stationary value at about N ∼ 250, while in (a) the error decay persists until N ∼ 500. Similar dynamics can be observed in (c) and (d) as well. In Figure 32, (a,b) show the first 1000 frames for visual clarity; 3000 images were analyzed in total for both shown cases. The error time series shown there are obtained from raw data by removing the outliers above and below the Gaussian TV-filtered (regularization parameter 1) q = 0.9 QSEs (3rd-order splines, 90% · N spline knots) and applying the Gaussian TV filter (regularization parameter 2) to the remaining data points. The velocity curves are as in Figure 27a.
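The outlier-rejection idea behind these time series can be sketched as follows. This is a simplified stand-in: a moving-quantile envelope replaces the spline-based QSEs and the Gaussian TV filtering described above, so it illustrates the envelope-and-reject logic only, not the paper's exact procedure:

```python
def moving_quantile(x, q, window):
    """Moving q-quantile (nearest-rank) of a series over a centered window."""
    half = window // 2
    out = []
    for i in range(len(x)):
        seg = sorted(x[max(0, i - half):i + half + 1])
        out.append(seg[min(int(q * len(seg)), len(seg) - 1)])
    return out

def reject_outliers(x, q=0.9, window=21):
    """Drop samples above the upper moving-quantile envelope and below the
    mirrored lower envelope; returns (index, value) pairs of retained data."""
    hi = moving_quantile(x, q, window)
    lo = moving_quantile(x, 1 - q, window)
    return [(i, v) for i, (v, h, l) in enumerate(zip(x, hi, lo)) if l <= v <= h]
```

Retaining the indices of the surviving samples makes it possible to apply the subsequent smoothing filter to an irregularly sampled series, as done for the S ∆ curves.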
We assess the reference void detection failure rates in the image sequences used for code validation; the results are summarized in Table 2.
There are no detection failures for static body imaging and for a moving body with neutron flux transmission along the shorter axis, and very small percentages are found for the longer axis case. The cases with synthetic artefacts have significantly higher failure rates for the global filter. However, all cases exhibit values < 5%, which indicates acceptable performance.
Finally, Table 3 lists the peak errors induced by motion blur near the maximum pendulum velocity for the local filter for all imaging instances, including the S + ∆ ∼ 5% correction.
Note that the peak errors are within 14% for tests without synthetic artefacts, and within 20% for instances with artefacts added to the images.

Further improvements & extensions
While the demonstrated image processing performance is satisfactory for the problems that the code was developed for, its degree of parallelization could still be increased, especially for MRIF, and we expect that significant speedup can be attained for several components.
The developed code has been tested using the following hardware: and we found that memory utilization for parallel execution of Algorithm 3 using hyperthreading (all images are processed independently) and all available threads for a sequence of 3000 images (properties given in Section 2.2) requires almost all of the available memory on the first and the last two machines, while the second and third machines ran out of memory and the image batch size had to be decreased. While this is not a critical issue, the mean execution time per 1000 images decreases significantly between machines as the thread count increases, so we expect a reduction in memory utilization to be worthwhile. For context, the first machine fully processes 3000 images and outputs results in ∼ 2 hours, while the fourth machine finishes in ∼ 1.3 hours. A more detailed performance report will be provided in a follow-up publication for a greater number of processed image sequences.
It is also planned to implement a feedback loop that will enable coupling with our recently developed object tracking algorithm MHT-X [76] for iterative reinforced object detection and tracking. Finally, we will also apply the developed methods to image sequences with even smaller bubble-bubble distances where bubble collisions also occur.

Conclusions
To summarize, we have demonstrated the new version of our image processing methodology for resolving gas bubbles travelling through liquid metal imaged using dynamic neutron radiography. The showcased components of our code, such as the multi-scale recursive interrogation filter (MRIF) and the underlying global and local image filters, as well as soft color tone map masking, proved effective for detecting bubbles and extracting their dynamic shapes from images with low SNR and CNR. Output quality was further improved by the implemented luminance map-based false positive filter that bolstered the MRIF's intrinsic false positive filtering function.
It was shown by direct comparison that the new image processing code clearly outperforms the previous version used in [8,44], while the outputs of both are still consistent. In addition, we have validated the new methods experimentally by imaging a reference body, both stationary and in motion, with a precisely machined spherical cavity. Results indicate that the local filtering performed by MRIF largely limits the shape detection errors: the relative shape mismatch area and shape area difference with respect to reference shapes are within acceptable bounds of ∼ 14% (∼ 20% with synthetic artefacts) and ∼ 10% (∼ 14% with synthetic artefacts), respectively (accounting for motion blur and the worst-case underestimation correction), while the asphericity of the detected shapes is rather negligible. As such, we find that applying the current methodology to the neutron radiography images obtained for our model systems with bubble chain flow is safe, in that physically meaningful results with manageable errors can be expected. Note that we have also used the present image processing code to benchmark our object tracking code MHT-X [76].
In follow-up articles, we are going to process the data acquired in the previous and latest neutron imaging campaigns using the methods presented in this paper and our MHT-X code, and showcase the effects of applied horizontal and vertical magnetic field with different strengths on bubble chain flow in a rectangular liquid gallium vessel. Bubble trajectories (length, curvature, oscillation frequencies, envelopes, etc.), velocity (both overall spectra and dynamics, including acceleration) and shapes (aspect ratio, tilt angle, etc., and dynamics thereof) will be assessed and compared for different flow conditions -in addition to magnetic field configurations, a range of gas flow rates will be considered. We will also attempt to perform dynamic mode decomposition for the bubble shapes extracted from neutron radiography images and compare the dynamics against simulations -this will be done using the methods recently developed in [56].
Finally, we expect that the developed image processing pipeline and/or separate elements thereof should be applicable beyond the current application and context, which we also plan to demonstrate in follow-up papers. In the meantime, the image processing code is available on GitHub: Mihails-Birjukovs/Low_C-SNR_Bubble_Detection. The code will be improved as outlined in Section 6.