Formation, Detection, and Modeling of Submerged Oil: A Review

Submerged oil, oil in the water column (neither at the surface nor on the bottom), was found in the form of oil droplet layers at mid depths between 900–1300 m in the Gulf of Mexico during and following the Deepwater Horizon oil spill. These subsurface peeling layers of submerged oil droplets were released from the well blowout plume and moved along constant density layers (isopycnals) in the ocean. The submerged oil layers were a challenge to locate during the oil spill response. To better understand and find submerged oil layers, we review the mechanisms of submerged oil formation, along with detection methods and modeling techniques. The principal formation mechanisms under stratified and cross-current conditions and the concepts for determining the depths of the submerged oil layers are reviewed. Real-time in situ detection methods and various sensors were used to reveal submerged oil characteristics, e.g., colored dissolved organic matter and dissolved oxygen levels. Models are used to locate submerged oil and to predict its trajectories and concentrations. These include deterministic models based on hydrodynamic theory and probabilistic models exploiting statistical theory. The theoretical foundations, model inputs, and the applicability of these models during the Deepwater Horizon oil spill are reviewed, including the pros and cons of the two types of models. Deterministic models provide comprehensive predictions of submerged oil concentrations and may be calibrated using field data. Probabilistic models utilize field observations but only provide relative concentrations of the submerged oil and potential future locations. We find that a probabilistic integration of real-time detection with trajectory model output appears to be a promising approach to support emergency response efforts in locating and tracking submerged oil in the field.


Introduction
In the Deepwater Horizon (DWH) oil spill, oil droplets were found at depths of 900-1300 m and persisted for more than six months [1]. Oil suspended in the water column (neither at the surface nor on the bottom) is defined as submerged oil. Being submerged and persistent makes such oil difficult to locate and track, posing a challenge to emergency oil response and impact assessment. Investigating submerged oil is important because it may pose significant environmental and economic risks [2], potentially causing hypoxic or anoxic zones through biodegradation and impacting coral reefs [3] or other sensitive ecosystems.
To find the submerged oil during response, proper estimates of the submerged oil distributions and trajectories are needed. An accurate estimate requires a good understanding of the submerged oil formation mechanisms, together with proper detection methods and modeling techniques [4].

Figure 1. Notable oil spills with submerged oil in North America (spill volume data are from [18]; blue dots are sunken oil cases and orange dots are well blowout oil spill cases).

Detection
Detection is an essential tool for locating and tracking submerged oil intrusion layers during an oil spill response. Fast, efficient, and accurate detection with quantitative measurements of the oil components is required to help estimate the ecological risks of the oil spill contamination and potential hypoxia, and to help determine where, when, and how to carry out response plans [27]. Subsurface detection can be accomplished with either in situ or ex situ systems, depending on whether the observations are analyzed on site or in the lab. The time delay in obtaining information from the observations matters a great deal for search operations. For the ex situ method, the collected samples must be analyzed in the lab and, thus, a longer turn-around time is needed. The in situ method is more widely used for submerged oil and is the focus of our discussion here.
Sensors or cameras are coupled with submarine devices (e.g., autonomous underwater vehicles (AUVs) [13,28] or remotely operated underwater vehicles (ROVs) [29]) to make in situ measurements. Different sensors can be used to measure CDOM (colored dissolved organic matter), PAHs (polycyclic aromatic hydrocarbons), and DO (dissolved oxygen) [1,30] as proxies of submerged oil. These measurements provide valuable information on the real distribution of submerged oil. However, the detection of submerged oil at the field scale is not simple, as was shown during the DWH spill. Submerged oil was first coarsely located approximately one month after the initial release [31]. Significant effort was made to find submerged oil, but the sampling procedures were not efficient, owing to the slowness of bottle sampling from a vessel and the difficulty of searching for a subsurface oil layer [32]. Therefore, developing more efficient sampling plans is desired to improve the efficacy of obtaining samples. Meanwhile, improving the sampling capacities of the devices and the measurement accuracies of the sensors can also help improve detection.

Modeling
Modeling is an indispensable tool for predicting the trajectory and fate of submerged oil. The submerged oil is advected by subsurface currents, diffused primarily horizontally by ambient turbulence, removed from the intrusion layer by incorporation into marine snow flocs (3-4%) [33], and transformed by biodegradation (~70%) [34] and dissolution (~15-25%) [35,36]. These processes are considered by oil spill models to make predictions.
In general, the models can be divided into two categories: deterministic models and probabilistic models. A deterministic model (e.g., OSCAR [37]) uses hydrodynamic theory and particle tracking to make oil location predictions. The model requires input of physical conditions such as the ocean currents. In contrast, a probabilistic model (e.g., SOSim [38,39]) uses probabilistic theory to make predictions based on observations of submerged oil concentrations. Both types of models provide time-updated predictions of submerged oil locations and concentrations [40]. However, the ocean current information needed by the deterministic model is usually provided by an external hydrodynamic model, so the deterministic model's predictions are sensitive to the accuracy of hydrodynamic models [40]. Meanwhile, the probabilistic models only provide relative concentrations, and their accuracy relies on the quality and quantity of field observations. Trajectory models should be improved for subsurface oil tracking, which requires advances in ocean circulation models relevant to subsurface oil layers.
To better track submerged oil during the oil response, this paper reviews submerged oil formation mechanisms, detection methods, and modeling techniques. Although there are extensive reviews on surface oil modeling [41-44], to our knowledge, there is no review paper on submerged oil in the intrusion layer. Our work aims to fill this gap. We focus on the detection and modeling work related to the submerged oil in the intrusion layers; other aspects, including environmental and ecological risks and response operations, are not considered in this review.
In the next section, we provide an overview of the formation of intrusion layers (see Figure 2 flow chart). Then the detection methods are reviewed in Section 3. The modeling methods, including near-field, far-field deterministic and probabilistic models, are reviewed in Section 4. Finally, we conclude with a discussion on an integrated method to better assist future oil response.

The Formation of Intrusion Layers
Oil droplet intrusion layers peel off from the rising oil plume. This process is determined by the characteristics of the oil plume, which are governed by the rise velocities, buoyancy, droplet size distribution (DSD), dissolution, and biodegradation. Subsurface application of dispersants at the oil release site, known as subsea dispersant injection (SSDI), affects the plume dynamics by changing the DSD of the oil entering the sea. SSDI was a major oil spill response strategy during the DWH spill. SSDI lowers the interfacial tension between the oil and water, breaking large oil droplets into smaller ones [45]. This reduces the amount of surface oil and increases the amount of submerged oil droplets in the intrusion layers [46]. SSDI thus improves safety for the oil spill responders by decreasing evaporated oil components in the air [47]. Reducing the amount of oil at the surface also reduces the exposure of birds and marine mammals to the oil.
The evolution of subsurface intrusion layers from the multi-phase plume is determined by the relative importance of the ocean cross-current versus stratification. There are two types of intrusion layers. The first type forms in a cross-current dominant condition with weak stratification. Under this condition, the horizontal current bends the plume downstream and is strong enough to push the entrained water out of the plume, causing the bubbles and large droplets to separate from the plume at a separation height [20], while the dissolved oil and small oil droplets remain in the entrained seawater [5,48]. The separated plume then falls because of its decreased buoyancy, forming an intrusion layer under weak stratification [49]. In the cross-current condition, there is only one peeling layer [7]. The second type forms under stratification-dominant conditions. Under this condition, the bubbles and large droplets separate from the plume at several heights. At the trap height, the large oil droplets and gas bubbles separate from the multi-phase plume and continue to rise, while the dissolved oil and small oil droplets remain in the intrusion layer. This process may repeat several times, creating several intrusion layers as the separated oil droplets and gas bubbles rise to the surface [5,48,50].
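The relative importance of crossflow versus stratification described above can be roughly gauged by comparing the ambient current speed with a characteristic plume velocity scale of the form (B·N)^(1/4), where B is the kinematic buoyancy flux and N the buoyancy frequency. The sketch below is purely illustrative; the buoyancy flux, buoyancy frequency, and the simple threshold of 1 are assumed values, not parameters taken from the studies cited here.

```python
def plume_velocity_scale(B, N):
    """Characteristic plume velocity scale (B*N)**0.25, in m/s.

    B: kinematic buoyancy flux (m^4/s^3); N: buoyancy frequency (1/s).
    """
    return (B * N) ** 0.25

def dominant_regime(u_current, B, N):
    """Classify the plume regime by comparing the ambient current speed
    with the plume velocity scale (illustrative threshold ratio of 1)."""
    u_c = plume_velocity_scale(B, N)
    return "cross-current dominant" if u_current > u_c else "stratification dominant"

# Assumed, order-of-magnitude inputs for a deepwater blowout:
B = 0.16      # m^4/s^3, kinematic buoyancy flux (assumed)
N = 0.0017    # 1/s, deep-ocean buoyancy frequency (assumed)
print(round(plume_velocity_scale(B, N), 3))   # ~0.13 m/s
print(dominant_regime(0.05, B, N))            # a current < 0.1 m/s is slower than the plume scale
```

With these assumed inputs, currents below roughly 0.13 m/s would leave stratification in control, consistent with the stratification-dominant interpretation of the DWH observations discussed next.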
Several intrusion layers were identified in the DWH spill [51], and the current speeds reported by [13] were <0.1 m/s, suggesting that the condition was stratification dominant. A stable, neutrally-buoyant intrusion layer was found at a depth of 900-1300 m with oil droplets and dissolved oil, and a second intrusion layer was found around 1200 m after the riser was removed as the oil continued to release [52,53]. These observations provided the water column density ranges between the top and bottom of the intrusion layers. With the detected oil concentrations and density information, a Bayesian model could help predict the locations to find submerged oil drifting away from the source in the cross-currents [39].
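Because the submerged oil rides on isopycnals rather than constant depths, a practical first step is to translate a measured density profile into a depth band. A minimal sketch, assuming an idealized linear density profile (the profile values and gradient below are invented for illustration) and the DWH-reported density band of 1027.60-1027.70 kg/m³:

```python
def isopycnal_depth_range(depths, sigmas, rho_lo, rho_hi):
    """Return (min, max) depth (m) where the density profile lies within
    [rho_lo, rho_hi] kg/m^3, or None if the band is never crossed."""
    hits = [z for z, s in zip(depths, sigmas) if rho_lo <= s <= rho_hi]
    return (min(hits), max(hits)) if hits else None

# Assumed, idealized linear density profile from 800 to 1500 m depth:
depths = [800.0 + i for i in range(701)]                       # 1-m spacing
sigmas = [1027.40 + (z - 800.0) * (0.45 / 700.0) for z in depths]

# Depth band occupied by the 1027.60-1027.70 kg/m^3 isopycnal layer:
print(isopycnal_depth_range(depths, sigmas, 1027.60, 1027.70))
```

With a real CTD cast in place of the synthetic profile, the same lookup gives the depth window an AUV or rosette should target at that station.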

Detection Methods
Various detection methods have been used for submerged oil after an oil spill [4,16]. Here we mainly focus on methods applied to the detection of submerged oil in the deep sea. Submarine devices (e.g., AUVs, ROVs [29], gliders, and deep submergence vehicles) or ship-lowered rosettes often carry in situ sensors and cameras to detect submerged oil in real time [28]. The collected sea water samples can be analyzed either directly by the in situ sensors or by on-board equipment. The commonly used sensors, cameras, and onboard equipment and their functions are summarized in Table 1.
Generally, the in situ sensors can be divided into two categories based on whether or not they measure the hydrocarbons directly. Sensors including the fluorometer [54] and the MS (mass spectrometry) sensor measure CDOM, PAHs, and dissolved hydrocarbons directly. In contrast, the conductivity/temperature/depth (CTD) sensor [55] helps to reveal the depth variations of the density layer(s) containing submerged oil by measuring densities. The DO sensors reveal the previous presence of oil droplets in the subsurface layer by measuring the DO level; the DO deficit can be calculated by comparing the detected value with the background level [56], which is useful when the hydrocarbon signal is too weak to be detected by direct measurements. Overall, the in situ sensors provide comprehensive information on the submerged oil. However, these sensors are not sensitive enough to measure the submerged oil at low concentrations. For example, MS sensors can quantitatively identify dissolved hydrocarbons at ≥ ~1 ppm [57]. To detect oil with dissolved hydrocarbon concentrations below that level, water samples are collected and processed by on-board gas chromatography-mass spectrometry (GC-MS) with a flame ionization detector (FID) [58]. In addition to the sensors mentioned above, cameras are also used to detect the microdroplets and determine their sizes in the intrusion layer. For example, a digital holographic camera (Holocam), a video camera, and a laser light scattering instrument (e.g., Sequoia's Laser In Situ Scattering and Transmissometer [LISST] instrument) [59] were used during the DWH spill. Other systems, such as Silhouette cameras (SilCam) [60,61] and high-speed camera systems, could also be applied to detect oil droplets [62] in the future.
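The DO-deficit proxy mentioned above amounts to a simple subtraction against a background profile, followed by a threshold test. A minimal sketch; the 0.3 mg/L flagging threshold and the sample values are assumed, illustrative numbers, not values from the cited studies:

```python
def do_deficit(background, measured):
    """Dissolved-oxygen deficit (same units as the inputs, e.g., mg/L)."""
    return background - measured

def flag_possible_oil(background, measured, threshold=0.3):
    """Flag a sample when the DO deficit exceeds a chosen threshold.
    The 0.3 mg/L default is an assumed, illustrative value."""
    return do_deficit(background, measured) > threshold

# Hypothetical cast: background DO 4.75 mg/L, measured 4.25 mg/L
print(do_deficit(4.75, 4.25))          # 0.5 mg/L deficit
print(flag_possible_oil(4.75, 4.25))   # True
```

In practice the background would come from a reference profile outside the plume at the same density level, and the threshold would be tuned to sensor noise.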

Detection Application in the DWH Spill
In order to locate the submerged oil in the intrusion layer and support emergency response, samples were collected to reveal the distributions of submerged oil. The sampling effort during the DWH spill was extensive [1,10,30,59,63-67]. Three groups, the U.S. Coast Guard oil response team (USCG) [26,55], a scientific group [13], and a damage assessment group [68,69], monitored the submerged oil by cruise-based sampling and submarine devices (e.g., AUVs and ROVs). The USCG monitored the submerged oil from mid-May through mid-September via cruises. Furthermore, the scientific research team applied the AUV Sentry to map the vertical and horizontal distributions of submerged oil in June and December 2010 [28]. Sentry descended in the water column in a spiral motion. Once Sentry found oil, it moved horizontally at a constant depth in a zigzag pattern to seek the hydrocarbons. The CDOM, PAHs, and DO deficit were measured and used as proxies of submerged oil.
The detections by the three groups were carried out at different sampling depths and locations at different times over the DWH spill period. As a result, the depth range, the approximate extent, and the moving directions of submerged oil were revealed. The detection activities in the DWH spill are summarized in Table 2. On 12 May 2010, shortly after the spill, an oil-bearing depth zone was found between 1000 and 1400 m by a team from NIUST (National Institute for Undersea Science and Technology). The determination of the layer was based on three findings: a substantial increase in murkiness detected by the transmissometer, an abnormally high fluorometer reading, and a drop in dissolved oxygen level [72]. Government and academic scientists then made extensive efforts to find and detect the submerged oil. From early May until mid-June, a few cruises sampled at fixed, predetermined stations and depths, missing the submerged oil plume [10]. Later, responders found that the submerged oil was concentrated in a constant density layer that varied in depth between 900 and 1300 m. Density layers are not horizontal in the deep ocean, because geostrophic currents tilt the isopycnals. This explains why the initial sampling at constant depths was not maximally productive.
After that, many researchers began to make more detailed observations. Autonomous and ship-based samples were collected almost throughout the entire oil spill period and were concentrated in the region within 10-20 km of the wellhead. The depths of the submerged oil layer were more accurately identified as between 1000 and 1300 m, and the southwest moving direction of the oil layer was identified in late June [73]. Weaker signals of submerged oil occurred northeast of the well site [1]. Other separate weak plumes were found at shallower depths, 800-1000 m, to the southeast in early June [74]. Overall, the chemical measurements in the intrusion layer before July 1 suggested that the hydrocarbons were advected in various directions from the wellhead. This is because, before July 1, deep eddies transported the deep oil plume in multiple directions [52]. Sampling revealed that after July 1, the largest and most persistent plume predominantly moved with the dominant southwestern current along the isopycnal surfaces centered between 1027.60 and 1027.70 kg/m³ [1].
The composition of the submerged oil in the intrusion layer was also investigated. The layer was rich in water-soluble aromatic compounds and C1-C3 hydrocarbon components [66]. The concentrations of these components were greater than 10 µg/L within ~30 km of the wellhead. Less water-soluble dissolved oil compounds were also found in the intrusion layer, including C13-C22 PAHs and C23-C40 and C11-C22 alkanes [54].
Moreover, the DSDs in the water column were detected by cameras. ROV video identified that visible droplets (<1 mm) existed from 801.6-1005.8 m and 1168-1390 m depth, with the median droplet diameter (d50) ranging from ~45-50 µm [59]. The R/V Brooks McCall water samples in the layer included oil droplets around the leaking wellhead between the depths of 1000 and 1400 m. The LISST-100X revealed that small droplets (2.5-60 µm) were retained in the intrusion layer [30]. However, Li et al. [59] pointed out that these samples might not include the full range of droplet sizes in the subsurface layer, because the LISST-100X data only provided an estimate of the spatial and temporal distribution of the oil at one location.

Discussion
Overall, the in situ sensors, cameras, and on-board equipment successfully obtained real-time data on submerged oil droplets and dissolved hydrocarbons. However, these detection methods had limitations in the sampling areas and depths. Sampling was constrained within 75 km of the wellhead and was extended beyond 75 km only after September, but the potential oil-impacted region may have already moved beyond 75 km before September. According to [40], the average southwest current velocity at 900 m depth was approximately 0.067 ± 0.047 m/s. Based on this velocity, the oil released at the beginning of the spill may have traveled ~70-200 km away from the wellhead by 30 May. The limited search area before September may reflect other spill response priorities: before September, the oil response during the DWH spill mainly focused on stopping the leaking well and cleaning the surface and shoreline oil. Additionally, devices such as the AUV Sentry needed to search a broad area to find the subsurface oil [75]. Another possible reason is that after September 1, many academic vessels on scene participated in the daily submerged oil monitoring mission, which maximized vessel time and sample locations [26]. Moreover, some sampling cruises using bottle casts and AUVs searched along constant depth surfaces, while the submerged oil was distributed along isopycnals with varying depths [10,13], which may have decreased the detection efficiency. The unsampled areas neither confirm nor negate the presence of oil, which limits the scope of our estimates of the polluted regions from observations.
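The back-of-envelope travel estimate above can be reproduced from the reported velocity statistics. The 40-day travel time and the use of the mean velocity plus or minus one standard deviation are assumptions for illustration; the exact range quoted in the text presumably reflects somewhat different assumptions about release time and velocity.

```python
def travel_distance_km(speed_m_s, days):
    """Distance (km) covered at a constant speed over the given number of days."""
    return speed_m_s * days * 86400.0 / 1000.0

mean, std = 0.067, 0.047   # m/s, mean southwest current at ~900 m (from the text)
days = 40.0                # assumed travel time from the start of the spill

low  = travel_distance_km(mean - std, days)   # slowest plausible drift
best = travel_distance_km(mean, days)         # mean-current drift
high = travel_distance_km(mean + std, days)   # fastest plausible drift
print(round(low), round(best), round(high))   # roughly 69, 232, 394 km
```

Even the slow end of this envelope puts the oil near the 75 km search boundary well before September, which is the point the discussion above makes.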
In addition, large uncertainties exist in the measurements. For example, the concentrations of some replicate samples at the same location varied significantly, from ppm to ppb levels [54]. This could be due to eddies causing a water parcel to transit through the plume more slowly or more than once, or to shear in the subsequent flow. This makes determining the presence of oil and estimating the biological risk difficult. The existence of the DWH submerged oil layer was not known until mid-May [72]. In addition, the sampling plan may have been inefficient, as samples were taken over large regions with no oil. For example, among the 5641 measured locations, 4557 had tPAH below 1 ppb [1], which equals a ~19% detection rate of oil with concentrations above 1 ppb.
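The quoted detection rate follows directly from the two sample counts; a quick check:

```python
total = 5641   # measured locations (from the text)
below = 4557   # locations with tPAH < 1 ppb (from the text)

hits = total - below      # locations with tPAH >= 1 ppb
rate = hits / total       # fraction of sampled locations that found oil
print(hits, f"{rate:.1%}")   # 1084 hits, about 19.2%
```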
The detection limitations can be addressed by designing a more efficient sampling plan, using more accurate model predictions of submerged oil distributions, and using more advanced devices. For example, an effective sampling plan could start with AUVs following the isopycnals instead of a constant depth. Predictions from an accurate submerged oil spill model can be used to guide the AUVs to search within the isopycnal layer by visiting the oil-concentrated areas predicted by the model. In order to provide time-updated guidance, the models' predictions should be updated as new observations arrive. Algorithms such as machine learning [76] and Bayesian methods [77] can also help design more efficient adaptive sampling, based on real-time field sensor data and time-updated model predictions [78]. Finally, more advanced AUVs that can operate for longer periods can improve the detection of submerged oil by visiting more locations and collecting more data [78].
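As a hedged sketch of how Bayesian updating could steer adaptive sampling, the snippet below updates the probability that oil is present in a grid cell after each negative sample, given an assumed sensor detection probability. All numbers (prior, detection probability, the no-false-positive assumption) are illustrative, not taken from the cited methods.

```python
def update_presence_prob(prior, detected, p_detect=0.8):
    """Bayesian update of the probability that oil is present in a cell.

    p_detect is the assumed probability that the sensor detects oil
    when oil is actually present (illustrative value); false positives
    are assumed impossible for simplicity.
    """
    if detected:
        return 1.0  # a positive detection confirms presence under these assumptions
    # P(present | negative) = P(negative | present) * P(present) / P(negative)
    p_negative = (1.0 - p_detect) * prior + (1.0 - prior)
    return (1.0 - p_detect) * prior / p_negative

p = 0.5                          # assumed prior probability of oil in the cell
for _ in range(3):               # three successive negative samples in the cell
    p = update_presence_prob(p, detected=False)
print(round(p, 3))               # presence probability falls well below the prior
```

In an adaptive survey, the AUV would be routed to the cells with the highest updated presence probability, and each new sample would feed back into the map.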

Overview of Submerged Oil Modeling
The models for submerged oil prediction are divided into deterministic models, such as MEMW (Marine Environmental Modeling Workbench) [79,80], and probabilistic models, such as SOSim (Submerged Oil Simulator) [38,39,81,82]. A deterministic model typically requires the detailed velocity field and spill information such as source locations, times, and flow rates as inputs. The velocity field can be better calculated from a hydrodynamic model by assimilating observed salinity and temperature profiles or velocities observed by acoustic Doppler current profilers. Based on the initial conditions, the deterministic model makes predictions using a Lagrangian stochastic model. As an alternative, a probabilistic model updates hypothetical probabilistic distributions representing oil distributions as field observations of oil locations and concentrations become available. The key difference between the two types of model is the way they make predictions. The deterministic oil spill model (e.g., OSCAR [37]) simulates the spill based on information on where the oil starts and uses the ocean dynamics and oil chemistry to predict the oil trajectory and fate. The probabilistic model makes predictions of the oil distributions from information on oil locations and concentrations after the spill and guides oil responders to the most probable locations to find the oil in the future.
In detail, the deterministic models are based on comprehensive knowledge of oil chemistry and fate. Well blowout near-field dynamics are modeled using plume dynamics, and the released oil is represented by Lagrangian elements in the far-field model. The near-field well blowout model, e.g., DeepBlow [7], determines the DSD released from the plume, the smallest droplets retained in the intrusion layers, and the depths of the intrusion layers. The far-field Lagrangian model tracks the transport and fate of dissolved oil and oil droplets [44]. The modeled fate processes include dissolution, biodegradation, and sedimentation (such as OPAs (oil particle aggregates) and MOSSFA (marine oil snow sedimentation and flocculent accumulation)). The most important fate processes of submerged oil and their time windows are shown in Figure 3. Many well blowout models have been developed to make predictions of the locations and concentrations of the submerged oil, and a comparison of these models can be found in [83]. Probabilistic models provide a data-driven alternative that does not require hydrodynamic theory or the detailed input information needed by deterministic models [38], information that is not always available during the oil response [84].
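The far-field Lagrangian transport step can be sketched as an advection plus random-walk update for each oil element. The current velocity, horizontal diffusivity, and time step below are assumed, illustrative values, not parameters of any specific model named here.

```python
import random

def lagrangian_step(x, y, u, v, kh, dt, rng=random):
    """One horizontal advection-diffusion step for a Lagrangian oil element.

    x, y: position (m); u, v: ambient current velocity (m/s);
    kh: horizontal eddy diffusivity (m^2/s); dt: time step (s).
    The random displacement has variance 2*kh*dt in each direction,
    consistent with Fickian horizontal diffusion.
    """
    sigma = (2.0 * kh * dt) ** 0.5
    return (x + u * dt + rng.gauss(0.0, sigma),
            y + v * dt + rng.gauss(0.0, sigma))

# Drift one element for a day under an assumed southwest current:
random.seed(0)
x, y = 0.0, 0.0
u, v = -0.05, -0.04    # m/s (assumed current components)
for _ in range(24):     # hourly steps for one day
    x, y = lagrangian_step(x, y, u, v, kh=10.0, dt=3600.0)
print(round(x), round(y))   # final position (m) after one day of drift
```

A full model advances thousands of such elements per time step, with the velocities interpolated from a hydrodynamic model and additional terms for buoyant rise, dissolution, and biodegradation.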

Near-Field Plume Model
The near-field plume models can be divided into stability limited plume models and bent plume models based on the separation mechanisms between oil droplets/gas bubbles and the plume. The double-plume integral model works in the quiescent stratification condition and uses an inner plume and outer plume to describe the peeling behavior of the plume and predicts the trapped heights [6,85,86]. Double-plume models vary in how they calculate exchange fluxes between the inner and outer plume, which affects the location predictions but does not influence the prediction of trap heights [6]. The bent plume model works in the cross-current condition and accounts for separation of the oil droplets and gas bubbles from the main bent plume, leaving the plume to form one intrusion layer [87,88].
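For a rough sense of the trap heights such integral models predict, the classic single-phase plume scaling h_T ≈ C·(B/N³)^(1/4) can be evaluated, where B is the kinematic buoyancy flux and N the buoyancy frequency. The coefficient C ≈ 2.8 and the input values below are assumptions for illustration, and multiphase blowout plumes deviate from this single-phase result.

```python
def trap_height(B, N, C=2.8):
    """Trap height (m) from the single-phase plume scaling C*(B/N**3)**0.25.

    B: kinematic buoyancy flux (m^4/s^3); N: buoyancy frequency (1/s);
    C: empirical coefficient (~2.8 assumed for single-phase plumes).
    """
    return C * (B / N ** 3) ** 0.25

# Assumed order-of-magnitude inputs for a deepwater blowout:
B = 0.1       # m^4/s^3 (assumed)
N = 0.0017    # 1/s (assumed deep-ocean stratification)
print(round(trap_height(B, N)))   # first intrusion a couple hundred meters above the source
```

An intrusion forming a few hundred meters above the release is broadly consistent with the DWH layers observed at 900-1300 m above a ~1500 m wellhead, though the double-plume and bent plume models above resolve far more physics than this scaling.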
Plume models have been applied to the DWH spill to understand the formation of intrusion layers and to help determine their depths. The model results are important for determining the depths that contain submerged oil. Socolofsky et al. [5] predicted two intrusion layers, one at 1134-1193 m and the other at 876-962 m, agreeing with the observed peaks in CDOM measurements from mid-May to mid-July 2010. However, Spaulding et al. [52] pointed out that Socolofsky et al. [5] did not address the two release periods, that is, the oil and gas released from the riser and kinks before June 3, and that released from the kinks after June 3. The plume released from the kinks was predicted to be trapped at 1280-1310 m [52], and that from the end of the riser at 1150-1220 m.
The depths of the intrusion layers are affected by planetary rotation and vertical velocities, factors not considered in most near-field plume dynamic models. Strong rotation at high latitudes would decrease the vertical buoyancy and momentum fluxes, leading to deeper and thicker neutrally buoyant intrusion layers than at lower latitudes [89]. Significant planetary rotation also induces an anticyclonic precession of the plume. In addition, the depth of the intrusion layer is sensitive to the ocean's vertical current velocities: if rising velocities are considered, the intrusion layer becomes shallower [90]. These factors can be accounted for by coupling ocean circulation models with the oil plume dynamics [91].
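The trap-height prediction of stratified plume models can be illustrated numerically. A minimal sketch, assuming the widely used scaling h_T ≈ C (B/N³)^(1/4) with C ≈ 2.8 as reported for multiphase plumes in quiescent stratification; the buoyancy flux B and buoyancy frequency N below are illustrative values, not DWH measurements:

```python
# Sketch of the trap-height scaling for a buoyant plume in quiescent linear
# stratification, h_T ≈ C * (B / N**3) ** 0.25, with C ≈ 2.8 reported for
# multiphase plumes. B and N below are assumed, illustrative values, not
# DWH measurements.

def trap_height(B, N, C=2.8):
    """Trap (intrusion) height above the source [m].
    B: kinematic buoyancy flux [m^4/s^3]; N: buoyancy frequency [1/s]."""
    return C * (B / N ** 3) ** 0.25

B = 0.1      # assumed buoyancy flux of the release
N = 1.5e-3   # typical deep-ocean buoyancy frequency
print(f"trap height ≈ {trap_height(B, N):.0f} m above the source")
```

For the values shown, the intrusion forms a few hundred meters above the source, consistent in order of magnitude with the DWH layers forming several hundred meters above the wellhead.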

Sizes of Oil Droplet in the Intrusion Layer
Whether oil droplets stay in the intrusion layers is related to their sizes. To describe this relationship, a size-related nondimensional slip velocity U_N is introduced, given by the ratio of the droplet slip velocity to a characteristic vertical plume velocity [92]. Under the conditions in the DWH spill, Chan et al. [93] found that droplets with diameter less than 1.5 mm, corresponding to U_N < 0.3, entered and were transported within the intrusion layer. As some components of the oil droplets dissolve and biodegrade in the water column, the droplets become denser and may leave the intrusion layer and sink to the bottom [94]. Different values were adopted in [95]: droplet diameters < 3.5 mm, corresponding to U_N < 0.6. In the DWH spill, [52] estimated that effective dispersant treatment reduced oil droplet sizes from 1-10 mm before treatment to 0.02-0.5 mm after treatment. This implies that, after dispersant application, almost all chemically dispersed droplets would have been in the intrusion layer during the DWH spill, according to the above hypotheses.
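To make the U_N criterion concrete, the following sketch assumes U_N = u_s/(B·N)^(1/4) (a common multiphase-plume characteristic-velocity scaling) with a Stokes-law slip velocity, which is strictly valid only for small droplets; all property values are illustrative, not DWH measurements.

```python
# Sketch of the non-dimensional slip velocity criterion. Assumed scaling:
# U_N = u_s / (B*N)**0.25 (a common multiphase-plume characteristic velocity).
# Stokes' law is used for the slip velocity u_s, valid only for small droplets
# at low Reynolds number; all property values are illustrative, not DWH data.

def stokes_slip(d, drho=150.0, mu=1.5e-3):
    """Stokes slip velocity [m/s] for droplet diameter d [m];
    drho: oil-water density difference [kg/m^3], mu: viscosity [Pa s]."""
    g = 9.81
    return g * drho * d ** 2 / (18.0 * mu)

def U_N(d, B=0.1, N=1.5e-3):
    """Non-dimensional slip velocity for an assumed buoyancy flux B and N."""
    return stokes_slip(d) / (B * N) ** 0.25

for d_mm in (0.05, 0.5, 2.0):
    u = U_N(d_mm / 1000.0)
    print(f"d = {d_mm} mm: U_N = {u:.3f}, trapped (U_N < 0.3): {u < 0.3}")
```

With these assumed values, sub-millimeter droplets fall below the U_N < 0.3 threshold and would be trapped, while millimeter-scale droplets exceed it, qualitatively matching the size ranges discussed above.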
For droplets that entered the intrusion layer, the horizontal distance traveled within the layer (before rising out of it) is related to the slip velocity U_N [93]. Based on this relationship, travel distances were calculated for the DWH spill: 100-500 m for oil droplets of diameters 0.2-2 mm and 3-20 km for those of diameters 0.02-0.2 mm [91].
For small droplets, the friction force exceeds the buoyant force, so small oil droplets remain suspended in the intrusion layers without rising out. Different diameter thresholds for suspended droplets have been adopted in different models, e.g., < 0.040 mm [46], ≤ 0.080 mm [96], and < 0.050 mm [90], because different models for the oil DSD were used. The choice of the droplet size that stays in the water column may influence subsequent predictions of the amount of submerged oil trapped in the intrusion layer. Hence, the DSD is important for predicting the behavior of submerged oil. The DSD is often simulated based on either empirical equations [23,97] or dynamic population evolution models [98][99][100]. Both approaches need to be calibrated with field or experimental data [60]. For a more detailed discussion of these models, the reader is referred to [21].
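The sensitivity to the suspension cutoff can be illustrated with a hypothetical log-normal DSD; the median diameter and spread below are assumed for illustration, not fitted to DWH data.

```python
import math

# Hypothetical log-normal droplet size distribution (median d50 and spread
# sigma are assumed for illustration, not fitted to DWH data): fraction of
# oil volume in droplets smaller than each model's suspension cutoff.

def lognorm_cdf(d, d50, sigma):
    """CDF of a log-normal DSD at diameter d, volume median d50, log-std sigma."""
    return 0.5 * (1.0 + math.erf(math.log(d / d50) / (sigma * math.sqrt(2.0))))

d50, sigma = 0.3e-3, 0.7   # assumed: 0.3 mm volume median, moderate spread
for cutoff_mm in (0.040, 0.050, 0.080):   # cutoffs used by different models
    frac = lognorm_cdf(cutoff_mm * 1e-3, d50, sigma)
    print(f"cutoff {cutoff_mm:.3f} mm: {100 * frac:.1f}% of oil volume suspends")
```

Even modest changes in the cutoff change the suspended fraction appreciably, which is why the choice of suspension threshold and DSD model matters for the predicted amount of trapped oil.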

Oil Transport Modeling in the Intrusion Layer
The far-field model uses the information on the intrusion layers provided by the near-field model, together with hydrodynamic fields, as initial conditions to predict the location, trajectory and fate of submerged oil by stochastic Lagrangian methods. The submerged oil droplets are tracked as Lagrangian elements (LEs) such as particles, spillets [12], and 'splots' (a swarm of dots) [101]. The motions of these LEs are governed by a stochastic differential equation with advection and random movements related to turbulent diffusion. The amount of information stored in a Lagrangian element is model specific: the 'spillets' in OSCAR [37] and OILMAP [52] carry more information about the state of the oil than the 'splots' in GNOME [101], e.g., the oil droplet size [102]. The properties of the intrusion layer can then be obtained by taking the ensemble average of these LEs.
The eddy diffusivity used in the Lagrangian models can be obtained from tracer experiments. Tracer experiments confirm that in the deep ocean, mixing is predominantly along isopycnals. The estimated diapycnal diffusivity in the deep ocean is on the order of 10⁻⁵ m²/s [103,104], whereas the isopycnal diffusivity at a small eddy scale (0.1-1 km) is on the order of 10⁻²-10⁻¹ m²/s [105,106]. Compared with the values observed in tracer studies, the default diffusivity in circulation models is generally higher [83] in order to keep the hydrodynamic model stable; for example, a typical isopycnal diffusivity used in the modeling community is 500 m²/s [107]. Accordingly, in [104], the area predicted to be covered by the tracer using the SABGOM circulation model was larger than the area detected in the tracer study, and the predicted concentrations showed too much dilution and were more evenly distributed than the field observations.
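A minimal sketch of the stochastic Lagrangian step for a single element, using the small-eddy-scale isopycnal diffusivity quoted above and an assumed intrusion-layer current:

```python
import random

# Minimal sketch of the stochastic Lagrangian step for one element (LE):
# dx = u*dt + sqrt(2*K*dt)*xi, i.e., advection plus a random walk consistent
# with an eddy diffusivity K. K is the small-eddy-scale isopycnal diffusivity
# quoted above; the intrusion-layer current is an assumed example value.

def step(x, y, u, v, K, dt, rng):
    s = (2.0 * K * dt) ** 0.5
    return (x + u * dt + s * rng.gauss(0.0, 1.0),
            y + v * dt + s * rng.gauss(0.0, 1.0))

rng = random.Random(1)
dt = 3600.0               # 1 h time step [s]
u, v = -0.05, 0.0         # assumed westward current in the layer [m/s]
K = 0.1                   # isopycnal diffusivity at 0.1-1 km scales [m^2/s]
x, y = 0.0, 0.0
for _ in range(24 * 10):  # track one LE for 10 days
    x, y = step(x, y, u, v, K, dt, rng)
print(f"LE displacement after 10 days: x = {x/1000:.1f} km, y = {y/1000:.1f} km")
```

With tracer-scale diffusivities the spread around the advective path is only hundreds of meters over ten days, which illustrates why the much larger default diffusivities of circulation models over-dilute the predicted plume.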

MOSSFA
In the DWH spill, before the leak was closed, oil mixed with synthetic-based drilling mud to form OPAs [108], which covered a 6.5 km² area around the well up to 10 cm thick [109]; this process was the dominant mechanism of sedimentation. After the leak was closed, MOSSFA became the dominant mechanism [108] moving oil from the surface and the water column to the seafloor [110,111]. MOSSFA formation is similar to coagulation processes in wastewater treatment in that transparent exopolymer particles connect colloids, such as living organisms, inorganic matter, detritus and oil droplets, in the water column to form larger precipitates [94]. Marine oil snow was also observed in the Ixtoc-I spill [112,113]. In the DWH spill, about 3-4% of the submerged oil leaving the intrusion layer did so via MOSSFA [110]. A conceptual model of marine oil snow (MOS)-related processes can be found in [114], but at present there are no mechanistic models to predict the formation and sedimentation of MOS.

Fate of Submerged Oil
The fate processes affecting submerged oil in the intrusion layer include dissolution and biodegradation. Soluble oil components, such as alkanes, cycloalkanes, and BTEX, partially dissolve in the water column. The dissolved oil and submerged oil droplets are subject to biodegradation, which consumes dissolved oxygen. The slow global oxygen circulation in the deep ocean leads to large areas of low dissolved oxygen concentrations in the mid water column [115]. Hence, the oxygen consumed by oil degradation will lower oxygen levels and cause a DO deficit. The DO deficit is a tracer for biodegraded submerged oil below the main thermocline, and DO deficit predictions also help estimate hypoxia risks and provide valuable information to oil spill responders for decision making.
Oxygen demand can be calculated theoretically based on complete mineralization. For crude oil, a simple estimate is that 1 mg/L of oil requires approximately 3 mg/L of DO for complete mineralization [116]. This method is used in post-processing OSCAR output to calculate dissolved oxygen utilization [27]. In addition, a bacteria-based oxygen model was used by [117] for oxygen deficit estimation; they assigned primary and secondary bacteria different oxygen consumption rates, related to the hydrocarbon components. The MEGADEEP-Eco model, in contrast, used the stoichiometric coefficient between hydrocarbon and oxygen to calculate DO consumption, with hydrocarbon degradation determined from a first-order decay model [118]. The same method was also used in [119].
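The simple stoichiometric estimate can be combined with a first-order decay model as follows; the oil concentration and decay rate are assumed, illustrative values:

```python
import math

# Sketch of the stoichiometric DO-demand estimate above: complete
# mineralization of 1 mg/L oil consumes ~3 mg/L dissolved oxygen, applied to
# first-order hydrocarbon decay. c0 and k are assumed illustrative values.

O2_PER_OIL = 3.0  # mg DO consumed per mg oil mineralized

def do_deficit(c0, k, t_days):
    """DO deficit [mg/L] after t_days, for initial oil c0 [mg/L], rate k [1/day]."""
    degraded = c0 * (1.0 - math.exp(-k * t_days))
    return O2_PER_OIL * degraded

c0, k = 2.0, 0.1  # assumed: 2 mg/L submerged oil, 0.1 /day decay
for t in (10, 30, 60):
    print(f"day {t:2d}: DO deficit ≈ {do_deficit(c0, k, t):.2f} mg/L")
```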
The biodegradation processes are influenced by many factors. Published biodegradation models vary in the detail with which they treat SSDI, oil droplet size effects, the acclimation of local bacteria [120] in a natural oil seepage area, and low DO concentration effects. Model predictions of oil biodegradation in the DWH spill using a lower biodegradation rate under high pressure showed a longer persistence of the intrusion layer, which agreed better with observations [90]. In addition to pressure, the effects of SSDI on biodegradation rates are under debate [121,122]. One study showed that the dispersant Corexit can suppress the activity of natural oil-degrading microorganisms [122]. On the other hand, SSDI decreased oil droplet sizes, and smaller droplets tend to dissolve faster because they have larger oil-water interface areas for oil degradation [123]. The overall impact of SSDI on droplet biodegradation rates depends on which effect dominates; in general, droplet size is the dominant factor [124,125]. In addition, the biodegradation of oil and gas is also affected in basins with low DO concentrations [126]. Overall, all of the factors above need to be considered in order to develop a more accurate biodegradation model, which is important both because the submerged oil concentration and extent are sensitive to biodegradation and for estimating any potential development of subsurface hypoxia or anoxia [27,124,125].
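The droplet-size effect can be sketched by assuming that the first-order biodegradation rate scales with the droplet surface-to-volume ratio, i.e., k(d) ∝ 1/d; the reference rate below is an assumed illustrative value, not a measured one.

```python
# Sketch of the droplet-size effect noted above: biodegradation acts at the
# oil-water interface, so a first-order rate can be assumed to scale with the
# surface-to-volume ratio, k(d) ∝ 1/d. The reference rate k_ref at diameter
# d_ref is an assumed illustrative value, not a measured one.

def biodeg_rate(d, k_ref=0.1, d_ref=0.5e-3):
    """Assumed first-order biodegradation rate [1/day] for diameter d [m]."""
    return k_ref * d_ref / d

for d_mm in (0.05, 0.5, 5.0):
    print(f"d = {d_mm} mm: k ≈ {biodeg_rate(d_mm * 1e-3):.3f} /day")
```

Under this assumption, chemically dispersed droplets an order of magnitude smaller degrade an order of magnitude faster, which is the mechanism by which SSDI can accelerate biodegradation.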

Data for Model Calibration
In a well blowout scenario, collected field data can be used to calibrate the model and improve the accuracy of its predictions. The possible data for a typical well blowout model's calibration are as follows. Data for near-field model calibration include: (1) environmental conditions: the seawater density distribution, currents and diffusivities; (2) oil release conditions: oil and gas flow rates, gas-to-oil ratio, aperture size, dispersant-to-oil ratio (DOR), oil and gas properties, and initial oil droplet and gas bubble radii. These data can reduce the uncertainty of model predictions. Data for the far-field model include: (1) environmental conditions: temperature and salinity, detected currents or hydrodynamic model current fields, and isopycnal and diapycnal diffusivities; (2) oil properties and biodegradation rates for different oil components; (3) amounts and timing of dispersant applications. Since near-field model outputs provide inputs to the far-field models, the accuracy of far-field predictions depends on the accuracy of the near-field models and the environmental data. These data and their availability for model calibration in the DWH spill are summarized in Table 3.
These data were partially obtained through observations. At the initial stage of the well blowout, oil responders first focused on rescue operations and then on surface/shoreline oil cleanup.

Probabilistic Models
A probabilistic model provides another way to model submerged oil, using observational data on submerged oil concentrations and locations. It does not require the oceanographic data typically needed by deterministic models. Examples of probabilistic oil spill models include boosted regression trees (BRTs) [131], Bayesian networks [132][133][134][135], and Bayesian models [38,39,81,82]. BRTs were used in [131] to estimate oil residues along the shorelines of Prince William Sound between 2001 and 2007 following the T/V Exxon Valdez oil spill, and Bayesian networks have mainly been used to determine the risks of oil spills. Neither of these methods has been used for submerged oil trajectory prediction in deep intrusion layers, so here we mainly consider a Bayesian inference model, named SOSim, designed especially for submerged oil modeling [38,39].
SOSim starts with the solution of the advection-dispersion equation and uses limited observed submerged oil data (e.g., CDOM, PAH concentrations and fluorescence) to develop posterior distributions of the distribution parameters, from which the present or future distribution of submerged oil is inferred. Thus, the model assesses relative oil concentrations and locations as a function of time. As these data are continuously collected, they can be fed into the model to sequentially update the parameters and make time-updated predictions.
The theoretical foundation of SOSim is Bayesian inference, based on a hypothesis as to the distribution of oil concentrations. That is, the concentration of oil after an instantaneous release is assumed to take the form of a multi-modal Gaussian distribution with temporally varying parameters, based on diffusion theory and accounting for the possibility of multiple oil patches. Based on Bayes' theorem, the posterior distribution is proportional to the product of prior distributions for the model parameters, and the likelihood function constructed from field observations of oil concentration. SOSim was first developed to use limited field observations to predict the relative concentrations and locations of sunken oil on flat bay bottoms and continental shelves, and was demonstrated using data collected following the T/B DBL-152 oil spill [82].
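A toy illustration of this Bayesian machinery (not the actual SOSim code): a Gaussian oil field with an unknown centre is fitted to a few synthetic transect observations by computing a grid posterior, posterior ∝ prior × likelihood, with a flat prior. All numbers are synthetic.

```python
import math

# Toy illustration of the Bayesian machinery described for SOSim (not the
# actual SOSim code). The oil field is assumed Gaussian in space with an
# unknown centre; a grid posterior over the centre is computed from a few
# synthetic observations: posterior ∝ prior × likelihood (flat prior here).

def field(x, cx, width=5.0):
    """Assumed relative concentration at x [km] for plume centre cx [km]."""
    return math.exp(-0.5 * ((x - cx) / width) ** 2)

obs = [(8.0, 0.92), (12.0, 0.85), (20.0, 0.20)]  # synthetic (x, rel. conc.)
sigma = 0.1                                      # assumed observation noise

centres = [0.5 * i for i in range(61)]           # candidate centres, 0-30 km
post = []
for cx in centres:
    loglik = sum(-0.5 * ((c - field(x, cx)) / sigma) ** 2 for x, c in obs)
    post.append(math.exp(loglik))
z = sum(post)
post = [p / z for p in post]                     # normalized posterior
best = centres[post.index(max(post))]
print(f"posterior-mode plume centre ≈ {best:.1f} km along the transect")
```

As new observations arrive, they simply extend `obs` and the posterior is recomputed, which is the sequential-updating behavior described above.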
SOSim was further developed recently to model submerged oil patches and plumes [38,39]. For that application, the result of another fate-transport model can be input as prior information, to be used together with field concentration data. Results are computed in 2-D density coordinates for the isopycnal layer containing the oil and converted to 3-D output using density-versus-depth data for the water column. Additionally, velocity and diffusion coefficients are allowed to vary in time, to capture oil movement with subsurface gyres and eddies. Further, uncertainty bounds are calculated by −2 log-likelihood analyses and output on the resulting map of predicted relative oil concentration over the user-selected study area.
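The density-to-depth conversion can be sketched as a simple interpolation of a water-column density profile; the profile values below are assumed, illustrative numbers, not DWH CTD data.

```python
# Sketch of converting an isopycnal-coordinate prediction to depth, as done
# for SOSim's 3-D output: interpolate the depth at which the water-column
# density profile crosses the layer's density. Profile values are assumed,
# illustrative numbers, not DWH CTD data.

def depth_of_isopycnal(rho_target, depths, rhos):
    """Linearly interpolated depth [m] of density rho_target [kg/m^3]."""
    for (d0, r0), (d1, r1) in zip(zip(depths, rhos),
                                  zip(depths[1:], rhos[1:])):
        if r0 <= rho_target <= r1:
            return d0 + (rho_target - r0) / (r1 - r0) * (d1 - d0)
    raise ValueError("density outside profile range")

depths = [800, 900, 1000, 1100, 1200, 1300]                # [m]
rhos = [1031.2, 1031.6, 1031.9, 1032.1, 1032.25, 1032.35]  # [kg/m^3], assumed
print(f"isopycnal 1032.0 kg/m^3 lies near "
      f"{depth_of_isopycnal(1032.0, depths, rhos):.0f} m depth")
```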
SOSim has been used to model the DWH oil spill [39]. Accumulated field observations of submerged oil concentration, together with predictions of the OSCAR model [37], were used to predict submerged oil concentrations within the intrusion layer (Figure A1). SOSim has also recently been applied to design more efficient submerged oil sampling plans, for continuous sampling such as by submarine devices (e.g., AUV or ROV), and instantaneous sampling such as by ship-based Rosette (Figure A2) [39].

Model Application in the DWH Spill
During the DWH oil response period, the near-field models CDOG (Clarkson Deep Oil and Gas model) [8] and SINTEF DeepBlow [7] produced an initial subsurface plume forecast on May 15, which was used to guide survey vessels [136]. Sampling initially started 12 km northwest of the DWH wellhead, with approximately seven stations sampled each day, and subsequent sampling locations were determined based on CDOM detection results. Because of the lack of salinity and temperature data, daily trajectory predictions from the far-field model were not available until September 7, when salinity and temperature data became available. The predictions were later applied to identify potential locations of dissolved oxygen depression at depth.

Comparisons and Discussion
The deterministic models provide a comprehensive description of the physical and chemical processes governing submerged oil transport and fate in the intrusion layer. All of these processes, from the leaking of oil to the formation of intrusion layers and the advection, diffusion, and fate of the submerged oil, are included. The output provides many important details on the intrusion layers, including the depth, extent, locations, concentrations of different oil compounds and hydrocarbons, and directions of movement. Moreover, major fate processes may also be included to evaluate dispersant effects during oil spill response.
The sophisticated deterministic models require extensive input, including currents, salinity and temperature, gas and oil flow rates, oil and gas properties, and the release conditions. Field observations are used to calibrate the models. However, during the oil response period, these field data are not always available in time, and key information may not be obtained promptly. For example, there were no measurements of the initial bubble or oil droplet sizes at the DWH source [91], and the oil flow rate was not determined until June 6-15 [64]. Although deterministic models fall back on empirical estimates when these data are unavailable, field environmental data, such as temperature and salinity, are important and can improve prediction accuracy. For example, the assimilation of field observations helped the model perform well in predicting the movement of submerged oil in the DWH spill [40].
In contrast, the probabilistic model SOSim provides new insight into predicting where submerged oil may be found, based on oil concentration observations, in data-limited situations. SOSim predicts the probability of finding submerged oil within the area of interest by assimilating real-time observations. It does not require the physical parameters needed as input to deterministic models, only observed submerged oil concentrations and locations; concentration data may be for dissolved oil, CDOM, oil droplets, etc. In the DWH spill, field concentrations were collected approximately two weeks after the start of the spill, as shown in Table 2, and thus could be used by SOSim to make predictions. In addition, SOSim can also use deterministic model results as prior information or as a substitute for observational data.
The outputs of these two kinds of models (Bayesian, deterministic) are different. The deterministic model provides absolute concentrations of submerged oil, while the probabilistic model provides relative concentration profiles. Since, in oil spill response, the location and distribution of the submerged oil are of greater interest, both models can be useful. For damage assessment, absolute concentrations are needed. Predictions of absolute concentrations can be obtained either from deterministic models, which can provide detailed mass and toxicity estimates, or from probabilistic models by combining the predicted relative concentration profiles with real observations collected at a few locations.
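One simple way to realize this combination is to scale the probabilistic model's relative field to absolute units by a least-squares fit to absolute measurements at a few sites; all values below are illustrative.

```python
# Sketch of combining relative and absolute outputs, as suggested above:
# scale a relative concentration field to absolute units with a least-squares
# fit to absolute measurements at a few sites. All values are illustrative.

rel = {"A": 1.00, "B": 0.55, "C": 0.20}   # relative predictions at 3 sites
meas = {"A": 2.1, "B": 1.0}               # absolute observations [mg/L]

# least-squares scale factor: s = sum(r*m) / sum(r*r) over co-located sites
s = (sum(rel[k] * meas[k] for k in meas)
     / sum(rel[k] ** 2 for k in meas))
for k, r in sorted(rel.items()):
    print(f"site {k}: estimated absolute concentration ≈ {s * r:.2f} mg/L")
```

Site C, where no absolute measurement exists, inherits an absolute estimate from the fitted scale, which is the essence of the combination described in the text.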
Uncertainty estimates are important for both models, and quantified uncertainty is recommended to be added to trajectory predictions [137]. For the deterministic models, sensitivity analyses are carried out to evaluate different input factors and determine the important ones [124,125,138]; however, no quantified estimates of the model output uncertainties are available [40]. For the probabilistic models, probabilistic uncertainty estimates are calculated, providing numerical values for the uncertainty of model predictions.

Bayesian Model for Future Spill Detection
A Bayesian model, such as SOSim, may be useful in two types of data-limited situations to guide oil spill response. One is a complex oil release scenario similar to the initial stage of the DWH spill, when oil leaked from both the kinks and the riser. When the locations and depths of the kinks and riser are known and oil droplets are detected in the water column, such information can be input to the probabilistic model to project the oil movement. The other is when there has been an apparent spill but the source of the leaking oil is unknown; that is, oil is detected in the water, but the exact location of its source is unknown. In such cases, a probabilistic model of the form of SOSim can infer the location of the sources using the observational data on the spilled oil and prior knowledge of the velocity field.

Integrated Model for Future Spill Detection
SOSim and a deterministic model have been used together in a retrospective study to demonstrate the real-time, adaptive development of a sampling plan [39], as shown conceptually in Figure 4 (we describe the process here as if a spill were ongoing). A first prediction was made by deterministic modeling 25 days after the spill to project the oil plume using CDOG and SINTEF DeepBlow [26]. Results of these two models indicated the depths where submerged oil might be found. On the same day, the oil responders then collected submerged oil data by ship-lowered Rosette, searching over the potential depths and confirming the vertical depths of submerged oil. These data, together with the deterministic model output, were input to SOSim. After a 4-6 h run, the sampling plan for the next day (day 26 after the spill) was prepared, along with a 3-D prediction of the probability of finding submerged oil as a function of location on that day. Had this analysis been available, oil responders could then have used SOSim's projection to better focus sampling efforts on the next day. As more field observations were collected, the new observations could have been input to SOSim for time-updated predictions. This approach can circumvent the need of deterministic models for site-specific input data, and help improve the accuracy of prediction through ground-truthing.

Summary and Conclusions
Finding submerged oil during the oil response period is a daunting problem. The formation mechanisms of intrusion layers, the fate and transport of submerged oil in deep well blowouts, and the detection and modeling methods used to find or track submerged oil in the intrusion layers are all important to oil response in subsurface oil spills. The references reviewed indicate that the detection and modeling methods were useful during the DWH spill for finding and tracking the submerged oil.
In terms of detection, submarine devices with in situ sensors and cameras, or ship-based measurements, are able to detect submerged hydrocarbons, oil droplets, and DO in the intrusion layers. However, limitations still exist: the sampling areas are limited by search capabilities, sensor measurements are uncertain, and the sampling plans applied in the DWH spill were sometimes inefficient. Hence, to improve detection, more nimble devices, e.g., AUVs, can be used, more accurate sensors can be deployed, and more effective sampling plans can be designed. For example, sampling can follow isopycnals rather than constant depths, because the submerged oil is physically constrained within the intrusion layers. Although deterministic and probabilistic models follow different approaches, both have been validated using data from real spills and provide reasonable predictions. However, variation between model predictions and real observations remains. For the deterministic model, field salinity and temperature profiles or ocean current profiles are needed to improve prediction accuracy; in addition, the diffusivity in deterministic models is set higher than realistic values to keep the model stable. The predicted oil concentrations from both models sometimes differ significantly from observations, partially due to model uncertainties, input data uncertainties, and epistemic uncertainties. For the deterministic model, the epistemic uncertainties stem from limited knowledge of the physical and chemical processes governing the transport and fate of submerged oil; for example, the decay rates derived from present oil biodegradation experiments were obtained at high oil concentrations and ignore the rapid dilution effect in open water [139]. For the probabilistic model, the epistemic uncertainties come from the prior knowledge of the parameter distributions.
To improve the accuracy of the models, several approaches can be taken. For the deterministic model, faster deployment to obtain the salinity and temperature profiles needed by hydrodynamic models for assimilation would be helpful. Further, the realistic conditions of real spills need to be considered to make the model more accurate; specifically, in the DWH spill, the high pressure during the initial release, the acclimation of local bacteria to the presence of natural oil seepage, and dilution in the open water [94] need to be taken into account. For probabilistic models, more transformation processes, such as biodegradation, can be considered, for example, by introducing a probabilistic model for the decay of oil concentrations. To make the results more applicable in oil spill response, uncertainty estimates of the predictions can be provided by the probabilistic models; for the deterministic models, the prediction results can be provided to SOSim to obtain uncertainty estimates.
The detection methods and modeling approaches discussed above can be integrated to provide better tools for oil spill response and risk assessment. That is, the use of both deterministic and probabilistic models together can provide time-updated submerged oil predictions to oil responders, e.g., to guide further field data collection. For example, at the initial stage of the spill, the deterministic plume model can be applied first to obtain the estimated depths of the intrusion layers. Then detection can be performed in those layers to obtain concentrations of the submerged oil. These detected field concentration data, along with the predictions of the deterministic model, can then be input to the probabilistic model, to compute a new, ground-truthed prediction and uncertainty estimate. Based on the predictions, a revised sampling plan can be designed to guide further detection. The above procedure can then be followed recursively, to support timely oil spill response and risk assessment.
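The recursive procedure above can be sketched as a loop in which every model and field operation is a stand-in stub; the real loop would call a plume model, survey teams, and a Bayesian model such as SOSim, and all numbers here are placeholders.

```python
import random

# Runnable toy of the recursive response loop above; every function is a
# stand-in stub (the real loop would call a plume model, survey teams, and a
# Bayesian model such as SOSim). All numbers are placeholders.

rng = random.Random(0)

def predict_intrusion_depths(day):            # step 1: deterministic plume model
    return (900, 1200)                        # assumed layer bounds [m]

def collect_samples(depths, n=5):             # step 2: field detection (stub)
    lo, hi = depths
    return [(rng.uniform(lo, hi), rng.random()) for _ in range(n)]

def bayesian_update(prior, samples):          # step 3: probabilistic update (stub)
    mean_c = sum(c for _, c in samples) / len(samples)
    return 0.5 * (prior + mean_c)             # blended relative concentration

def design_plan(forecast, depths):            # step 4: next day's sampling plan
    return depths if forecast > 0.3 else (800, 1300)

forecast = 0.5
depths = predict_intrusion_depths(day=25)
for day in range(25, 28):
    samples = collect_samples(depths)
    forecast = bayesian_update(forecast, samples)
    depths = design_plan(forecast, depths)
    print(f"day {day}: relative conc. forecast {forecast:.2f}; "
          f"next sampling depths {depths} m")
```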

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Results of the Probabilistic Model: SOSim
The predictions of SOSim using the available field submerged oil concentration data on 26 May and 4 June are shown in Figure A1. On both dates, predicted concentrations were consistent with available data, indicating movement towards the southwest and west [1]. Results indicated that submerged oil was at ~900-1200 m depth on both May 26 and June 4. The prediction results of SOSim are used to guide the sampling procedures of the AUV; the idea is to use SOSim results to determine subsequent sampling locations and depths. As shown in Figure A2a, in the initial stage, an adaptive zigzag pattern is to be used by the sampling vessel. The sampled oil concentration data are then input to SOSim to assess the oiled area (Figure A2b), and subsequent sampling is designed in a zigzag pattern within this area, with the AUV sampling uniformly along the blue solid lines in Figure A2b. Sampled locations and concentrations can then be input into SOSim for new predictions for subsequent days.
Figure A2. A sampling plan example for AUVs using SOSim ((a) the initial sampling; and (b) the subsequent sampling) (data from [39]).