Article

Role of Restored Underwater Images in Underwater Imaging Applications

by Jarina Raihan A, Pg Emeroylariffion Abas * and Liyanage C. De Silva
Faculty of Integrated Technologies, Universiti Brunei Darussalam, Gadong BE1410, Brunei
* Author to whom correspondence should be addressed.
Appl. Syst. Innov. 2021, 4(4), 96; https://doi.org/10.3390/asi4040096
Submission received: 7 November 2021 / Revised: 19 November 2021 / Accepted: 22 November 2021 / Published: 25 November 2021
(This article belongs to the Section Information Systems)

Abstract

Underwater images are extremely sensitive to the distortions occurring in an aquatic environment, with absorption, scattering, polarization, diffraction, and low natural light penetration representing common problems caused by sea water. Because of this degradation in quality, the effectiveness of the acquired images for underwater applications may be limited. An effective method of restoring underwater images has been demonstrated, by considering the wavelengths of red, blue, and green light, together with attenuation and backscattering coefficients. The results from the underwater restoration method have been applied to various underwater applications; particularly, edge detection, Speeded Up Robust Feature (SURF) detection, and image classification using machine learning. It has been shown that more edges and more SURF points can be detected as a result of using the method. Applying the method to restore underwater images in image classification tasks on underwater image datasets gives accuracies of up to 89% using a simple machine-learning algorithm. These results are significant, as they demonstrate that the restoration method can be implemented in underwater systems for various purposes.

1. Introduction

A considerable part of the earth is covered with water, with the underwater world consisting of an astounding variety of resources. However, the underwater world is not as friendly as the atmospheric region. In order to explore underwater resources and discover the aquatic world, the use of Remotely Operated Vehicles (ROVs), such as underwater robots and submarines with proper underwater cameras, may be required. The challenges of acquiring undistorted underwater images that focus primarily on the object of interest are well documented, with distortions caused by marine organisms, floating objects, marine snow, bacteria, and algae present in sea water. These sources of distortion make the captured underwater images less informative and, consequently, of limited applicability for underwater applications that require undistorted or less distorted images, such as edge detection, feature point detection, image classification, image stitching, object detection, underwater studies, underwater archaeology, marine ecology, assisting aquatic robots, species recognition, and underwater geology. As such, captured underwater images need to have a high degree of accuracy and quality for proper interpretation of their information.
Underwater images are commonly dominated by blue and green shades, as red light from the visible spectrum is quickly absorbed and loses its strength even in the upper part of the ocean, within the first 10 m of depth. Other colors of the visible spectrum, such as orange, yellow, green, and blue, are also absorbed by the water as we go deeper into the ocean. Figure 1 shows the light absorption property of water and the penetration levels of different colors of light [1] at various depths. Due to these problems, a proper method that is able to restore underwater images is required for various studies and scientific research areas. By gaining more information from the images, the underwater images can be used for different underwater applications.
Many restoration algorithms have been proposed in the literature. Reference [2] provides a review of the available underwater image restoration methods, generally classifying them into hardware-, software-, and network-based approaches.
Hardware-based approaches employ a variety of hardware to process underwater images for restoration purposes. These include range-gated imaging techniques [3], polarisers [4], imaging using stereo cameras [5], and remotely operated vehicles [6]. However, these methods have been shown to suffer from calibration errors of the hardware devices.
Network-based approaches involve the use of deep-learning algorithms to process underwater images. Convolutional neural networks [7,8] and generative adversarial networks [9,10] are some of the neural networks that have been used for this purpose. However, deep-learning methods commonly require a good dataset with a large number of underwater images, together with ground truth images, which are very difficult to acquire in the case of underwater image processing.
Software-based approaches use the Image Formation Model (IFM) to restore captured underwater images, by finding the background light and transmission maps. Dark Channel Prior (DCP), as proposed by He et al. [11], uses the IFM in image restoration, based on the prior that at least one color channel of a haze-free scene point has very low intensity. However, due to the longer wavelength and faster attenuation of red light, the method fails to produce proper estimates, and always ends up choosing the red channel as the darkest of all channels. Variations of DCP have consequently been proposed for underwater images: using green and blue channels only [12,13,14], using the inverse of the red channel [15], and using the maximum intensity prior [16]. The performances of these methods have been shown to vary depending on the lighting conditions and priors chosen. Instead of estimating the transmission map directly, Peng et al. [17] use a depth-estimation strategy to restore underwater images. The proposed method involves the use of depth estimation and transmission map estimation with attenuation coefficient priors, taking the backscattering effect into account, and has been shown to give superior results [18].
Generally, the efficiency of an image-processing algorithm is evaluated by comparing its processed output underwater images to those of other similar algorithms, using quality metrics such as Peak Signal to Noise Ratio (PSNR) and Mean Squared Error (MSE), as well as dedicated underwater performance metrics, including Underwater Color Image Quality Evaluation (UCIQE) [19] and Underwater Image Quality Measure (UIQM) [20]. These are the more common measures for evaluating image-processing methods. However, there is no approach to evaluating an image-processing algorithm based on its applicability in real applications. This is very important, since the motive for developing an image-processing method is not only to restore or enhance underwater images but, ultimately, to help improve the efficiency of the real applications. In this paper, the first approach to evaluating the efficiency of the proposed algorithm based on its usefulness in real underwater applications is presented.
The contributions of the paper are: (1) an underwater image restoration method, which estimates depth maps using a combination of a blurriness map, background light neutralization, and red-light intensity, with the background light estimated using a four-quadrant method that demands lower computation compared with other methods [18]; and (2) a demonstration of the underwater images restored using the proposed method on different underwater applications, to evaluate the efficiency of the algorithm. This represents the first demonstration of a developed restoration algorithm on real underwater applications.
The structure of the rest of the paper is as follows: Section 2 describes the proposed underwater image restoration method, which may be used to effectively recover original images from acquired underwater images. Subsequently, the uses of the recovered underwater images in different underwater applications are explored in Section 3. Section 4 discusses results from the proposed restoration method, as well as its implementation on the selected underwater applications. The last section concludes the paper.

2. Proposed Restoration Method

The image restoration process employs the Image Formation Model (IFM) given in Equation (1), to obtain the original scene from a captured underwater scene, with the process involving estimation of the different parameters of the underwater IFM.
I_c(x) = J_c(x)·t_c(x) + (1 − t_c(x))·B_c,  c ∈ {R, G, B}   (1)
As can be seen, there are two distinct parts to the captured underwater image I_c(x). The term J_c(x)·t_c(x) describes the radiance J_c(x) of the object as it travels in the underwater medium, whilst (1 − t_c(x))·B_c represents the scattering of the background light B_c as it travels towards the camera. The transmission map t_c(x) describes the part of the object radiance that reaches the camera, after accounting for absorption and scattering.
Recovering the original object radiance J_c(x) from the image I_c(x) acquired at the camera requires knowledge of the background light B_c as well as the transmission map t_c(x), with this information commonly estimated. Taking t̃_c(x) and B̃_c as the estimated transmission map and background light, respectively, the recovered scene radiance J̃_c(x) may be estimated as:
J̃_c(x) = (I_c(x) − B̃_c) / max(t̃_c(x), t_0) + B̃_c,  c ∈ {R, G, B}   (2)
t̃_c(x) = e^(−β_c·d̃(x)),  c ∈ {R, G, B}   (3)
where β_c is the spectral attenuation coefficient of the direct signal and d̃(x) is the estimated depth map of the image.
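For illustration, a minimal NumPy sketch of Equations (2) and (3) is given below. The array layout (an H × W × 3 image in [0, 1]), the lower bound t_0 = 0.1, and all function and variable names are assumptions made for the example, not details of the authors' implementation.

```python
import numpy as np

def recover_scene_radiance(I, B, d, beta, t0=0.1):
    """Equations (2) and (3): I is an HxWx3 image in [0, 1], B the per-channel
    background light, d the estimated depth map, and beta the per-channel
    spectral attenuation coefficients."""
    J = np.empty_like(I)
    for c in range(3):
        t = np.exp(-beta[c] * d)                                   # Equation (3)
        J[..., c] = (I[..., c] - B[c]) / np.maximum(t, t0) + B[c]  # Equation (2)
    return np.clip(J, 0.0, 1.0)
```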
Figure 2 depicts the flowchart of the proposed underwater restoration method for estimating the recovered scene radiance J̃_c(x) from the captured underwater image I_c(x). The blurriness-estimated image p_blr and the background light-neutralized image I_BL^c(x) are calculated from the input image I_c(x); these are then used, together with the red-light intensity I_r(x), to estimate the depth d̃(x) of the underwater image. Subsequently, the transmission map t̃_c(x) may be estimated from the estimated depth map d̃(x) by selecting the appropriate spectral attenuation coefficients. The input image I_c(x), estimated background light B̃_c, and transmission map t̃_c(x) are then used to find the final scene radiance-recovered image J̃_c(x), as per Equation (2).

2.1. Depth Estimation and Background Light Estimation

Blurriness map estimation is the first step of the restoration process, in which the refined blurriness map p_blr is estimated from the initial and rough blurriness maps of the image [17]. This is then followed by background light estimation. To determine the background light, the input image I_c(x) is segmented into four quadrants, and the mean value of the pixels in each quadrant is calculated. Equation (4) is then used to estimate the background light B̃_c,
B̃_c = max(I_qBL^c(x))   (4)
where
I_qBL^c = q_mid,  q_mid ∈ {q_i, i = 1, 2, 3, 4} \ {q_max, q_min}   (5)
The selected pixel, which constitutes the estimated background light B̃_c, may not be the brightest of all pixels in the entire input image, as the two quadrants with extreme light intensities have been excluded from the selection process. The estimated background light B̃_c is used for the scene radiance recovery using Equation (2).
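A sketch of this four-quadrant selection (Equations (4) and (5)) is given below; treating each channel independently is an assumption made for the example, as is every name used.

```python
import numpy as np

def estimate_background_light(I):
    """Equations (4) and (5): segment the image into four quadrants, discard
    the brightest and darkest ones, and take the maximum pixel intensity of
    the two remaining (mid) quadrants as the background light."""
    h, w = I.shape[:2]
    quadrants = [I[:h // 2, :w // 2], I[:h // 2, w // 2:],
                 I[h // 2:, :w // 2], I[h // 2:, w // 2:]]
    B = np.empty(3)
    for c in range(3):
        means = np.array([q[..., c].mean() for q in quadrants])
        mid = np.argsort(means)[1:3]                          # drop q_max, q_min
        B[c] = max(quadrants[j][..., c].max() for j in mid)   # Equation (4)
    return B
```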
The background light-neutralized image I_BL^c(x) needs to be estimated in order to find the depth map of the underwater image. Initially, the average light intensity I_qi^c in each of the four quadrants is determined,
I_qi^c = avg_{x∈qi}(I_c(x))   (6)
where q_i denotes the four quadrants, i = 1, 2, 3, 4, with I_qi^c representing the average light intensity in the respective quadrant q_i. The brightest quadrant q_max and the darkest quadrant q_min are neglected, as they are the two extremes of the spectrum. The average light intensity in the remaining two quadrants is then calculated, and taken as the average of the underwater image:
I_qavg^c = avg_{qi∈q_mid}(I_qi^c)   (7)
This average light intensity I_qavg^c is then used to modify all the pixels of the input image I_c(x) to retrieve the contrast-neutralized image I_cn^c(x), as follows:
I_cn^c(x) = I_c(x) + I_qavg^c   (8)
To denoise the image, the discrete wavelet transform (DWT) is applied to the contrast-neutralized image I_cn^c(x) and the gray version of the input image I_g(x). The approximation coefficients are fused by averaging and the detail coefficients by a max rule, and the inverse discrete wavelet transform (IDWT) is finally applied to retrieve the background light-neutralized image.
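This fusion step can be sketched as follows, assuming a single-level 'haar' decomposition and the usual magnitude-based maximum selection rule for the detail coefficients; both choices, as well as the function names, are assumptions, since the text does not fix them.

```python
import numpy as np
import pywt

def background_light_neutralized(I_gray, I_cn):
    """Fuse the gray input image and the contrast-neutralized image: average
    the approximation coefficients, keep the detail coefficient with the
    larger magnitude, then invert the transform."""
    cA1, details1 = pywt.dwt2(I_gray, 'haar')
    cA2, details2 = pywt.dwt2(I_cn, 'haar')
    cA = (cA1 + cA2) / 2.0                                      # average rule
    details = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)  # max rule
                    for d1, d2 in zip(details1, details2))
    return pywt.idwt2((cA, details), 'haar')
```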
The blurriness map, the background light-neutralized image, and the intensity of the red channel can then be used for the depth-estimation process [18]. The maximum intensity of the red channel, known as the red channel map r(x) of the image, is represented by
r(x) = max_{y∈φ(x)} I_r(y)   (9)
where I_r is the intensity of the red channel and φ(x) is a square local patch centred at x. The factors used for estimating depth are passed through the stretching function given by Equation (10) [18]:
d_f(x) = 1 − F_s(f(x)),  f(x) ∈ {r(x), p_blr(x), I_BL^c(x)}   (10)
where f(x) can be the red channel map r(x), the blurriness map p_blr(x), or the background light-neutralized image I_BL^c(x), giving d_r(x), d_pblr(x), and d_IBL^c(x), respectively. F_s(v) is a stretching function, which accepts a vector v as its input:
F_s(v) = (v − min(v)) / (max(v) − min(v))   (11)
The final depth estimation can be found by Equation (12),
d̃(x) = θ_b·[θ_a·d_IBL^c(x) + (1 − θ_a)·d_r(x)] + (1 − θ_b)·d_pblr(x)   (12)
where θ_a = S(avg(I_BL^c), 0.5) and θ_b = S(avg(I_r), 0.1), with avg(·) giving the average of its input and the sigmoid function S(a, v) given as:
S(a, v) = [1 + e^(−s(a − v))]^(−1)   (13)
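Equations (9)-(13) can be combined into the following sketch; the patch size, the sigmoid steepness s, and the use of SciPy's maximum filter for the local patch operation are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def stretch(v):                                    # Equation (11)
    return (v - v.min()) / (v.max() - v.min())

def sigmoid(a, v, s=32.0):                         # Equation (13)
    return 1.0 / (1.0 + np.exp(-s * (a - v)))

def estimate_depth(I_r, p_blr, I_bl, patch=7):
    """I_r: red channel; p_blr: blurriness map; I_bl: background
    light-neutralized image (all HxW arrays in [0, 1])."""
    r = maximum_filter(I_r, size=patch)            # Equation (9)
    d_r, d_blr, d_bl = (1.0 - stretch(f) for f in (r, p_blr, I_bl))  # Eq. (10)
    theta_a = sigmoid(I_bl.mean(), 0.5)
    theta_b = sigmoid(I_r.mean(), 0.1)
    return (theta_b * (theta_a * d_bl + (1 - theta_a) * d_r)
            + (1 - theta_b) * d_blr)               # Equation (12)
```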

2.2. Transmission Map Estimation

The proposed transmission map estimation builds on the depth-based form in Equation (3). Reference [17] estimates the transmission map using only the direct signal, with the effects of backscattered signals neglected. In contrast, the proposed transmission map estimation uses both the direct and backscattered signals, as shown in Equation (14):
t̃_c(x) = t_D^c(x) + t_B^c(x),  c ∈ {R, G, B}   (14)
where t_D^c(x) and t_B^c(x) are the transmission maps of the direct and backscattered signals, computed from the spectral attenuation coefficient β_D^c of the direct signal and the spectral attenuation coefficient β_B^c of the backscattered signal, respectively.

2.2.1. Transmission Map of Direct Signal

The transmission map of the direct signal is estimated using spectral attenuation coefficients calculated for the red, green, and blue channels, together with the calculated depth map d̃(x). The transmission map for the red channel can be calculated using
t_D^r(x) = e^(−β_D^r·d̃(x))   (15)
Restoration results are not sensitive to the spectral attenuation coefficient β_D^r of the red channel [17], with values between [0.125, 0.20] for oceanic water type I [21]; hence, the spectral attenuation coefficient β_D^r of the red channel is set to 0.142.
The transmission maps for the green and blue channels due to the direct signal can be found by utilizing the transmission map and attenuation coefficient of the red channel [22],
t_D^k(x) = t_D^r(x)^(β_D^k / β_D^r),  k ∈ {g, b}   (16)
where the linear relationship between the attenuation coefficients of the green, blue, and red channels is given by Equation (17), with values m = −0.00113 and i = 1.62517 [23]. The wavelengths of red, green, and blue light are taken to be 620 nm, 540 nm, and 450 nm, respectively [17].
β_D^k / β_D^r = (B̃_r·(m·λ_k + i)) / (B̃_k·(m·λ_r + i)),  k ∈ {g, b}   (17)
where B̃_k is the background light estimated using Equation (4) for the respective channel k ∈ {g, b}.
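A sketch of the direct-signal transmission maps (Equations (15)-(17)) is given below; the function signature and parameter defaults are assumptions for the example, with β_D^r, m, i, and the wavelengths taken from the text.

```python
import numpy as np

def direct_transmission(d, B, beta_r=0.142, m=-0.00113, i=1.62517,
                        wavelengths=(620.0, 540.0, 450.0)):
    """d: estimated depth map; B: background light in (R, G, B) order."""
    lam = dict(zip(('r', 'g', 'b'), wavelengths))
    t = {'r': np.exp(-beta_r * d)}                  # Equation (15)
    for k, B_k in (('g', B[1]), ('b', B[2])):
        ratio = (B[0] * (m * lam[k] + i)) / (B_k * (m * lam['r'] + i))  # Eq. (17)
        t[k] = t['r'] ** ratio                      # Equation (16)
    return t
```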

2.2.2. Transmission Map of Backscattered Signal

Comprehensive studies have been conducted on the estimation of spectral backscattering attenuation coefficients, with Mie theory used to predict spectral behavior. Whitmire et al. [24] used a Slow Descent Rate Optical Profiler (Slow DROP) to experimentally calculate the backscattering coefficients of particulate matter during five research cruises, at five different wavelengths covering the visible spectrum, over a period of three years. The values selected based on the wavelengths of interest are shown in Table 1.
The total backscattering coefficient β_B^c is the sum of the pure water backscattering coefficient β_BW^c and the particulate matter backscattering coefficient β_BP^c:
β_B^c(λ) = β_BW^c(λ) + β_BP^c(λ)   (18)
The transmission map due to the backscattered signal may be derived from Equations (3) and (18) as follows:
t_B^c(x) = e^(−β_B^c·d̃(x)),  c ∈ {r, g, b}   (19)
The spectral attenuation coefficients of the direct (β_D^c) and backscattered (β_B^c) signals may then be used to estimate the raw transmission map, using Equation (14). The estimated transmission map is further refined using a guided filter [26], instead of soft matting [11], because of its better refinement properties.
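The backscattered-signal transmission and the combination in Equations (14), (18), and (19) can be sketched as below, using the total backscattering coefficients of Table 1; the guided-filter refinement is shown via OpenCV's ximgproc module (an opencv-contrib dependency), which is a tooling assumption rather than the authors' implementation.

```python
import numpy as np
import cv2

BETA_B = {'r': 1.39e-2, 'g': 1.36e-2, 'b': 1.97e-2}  # Table 1: beta_B^c(lambda)

def total_transmission(t_direct, d):
    """t_direct: per-channel direct transmissions; d: estimated depth map."""
    return {c: t_direct[c] + np.exp(-BETA_B[c] * d)   # Equations (19) and (14)
            for c in ('r', 'g', 'b')}

def refine(t_raw, guide, radius=40, eps=1e-3):
    """Edge-preserving refinement of a raw transmission map [26]."""
    return cv2.ximgproc.guidedFilter(guide.astype(np.float32),
                                     t_raw.astype(np.float32), radius, eps)
```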
Scene radiance recovery involves the use of the estimated background light and transmission map to form the final scene radiance. The refined transmission map is used in Equation (2) to acquire the final restored image.

3. Different Underwater Applications

There are many applications of underwater images, out of which three of the most common have been chosen for the evaluation of the proposed underwater image restoration method: edge detection, Speeded Up Robust Feature (SURF) detection, and image classification using machine learning (ML). One of the main aims of a restoration method is to reduce blurriness in underwater images, among other things, to facilitate edge detection, which may be performed using the Sobel edge-detection operator. Edge detection is mainly used for obstacle detection by unmanned underwater vehicles. Textured details of underwater images may also be improved by a restoration method, and the effectiveness of the restoration method towards this objective may be evaluated by considering the number of feature points detected by SURF. For underwater images with coral reefs and fish of a variety of shapes and sizes, SURF is used to detect features of objects, which may be performed with the help of the SURF function in MATLAB. Finally, image classification is performed to prove the effectiveness of a restoration method in detecting targets. This application is used for, among others, underwater pipeline corrosion detection, marine sea salt detection, subsea terrain classification, and mineral exploration.

3.1. Edge Detection

Since underwater images are used in pattern recognition, image decomposition, visual inspection, and also in important computer vision processing tasks related to underwater segmentation, the output underwater image needs to be clear, with good texture and details. Edge detection is particularly useful in underwater image processing, in order to localize coral reefs and for other related tasks. In this paper, the Sobel edge detector is used to detect the number of edges in an image [27], whereby the output from the proposed restoration method is used as input to the Sobel edge detector, in order to ascertain whether the proposed restoration method actually improves the features and texture details of objects in an underwater image. A comparison of performance is then made with the input image, in terms of the number of edges detected.
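A minimal sketch of this comparison is shown below, assuming scikit-image's Sobel filter and an illustrative binarization threshold; the function and variable names are assumptions made for the example.

```python
from skimage import color, filters

def count_edges(img_rgb, threshold=0.1):
    """Return a binary Sobel edge map and the number of edge pixels."""
    edges = filters.sobel(color.rgb2gray(img_rgb)) > threshold
    return edges, int(edges.sum())

# Hypothetical comparison of a raw and a restored image:
# _, n_raw = count_edges(raw_image)
# _, n_restored = count_edges(restored_image)
```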

3.2. Speeded Up Robust Feature (SURF)

SURF is a detection algorithm used to detect points of interest in an underwater image. One of the basic tasks of computer vision algorithms is local feature point matching, which forms the basis for underwater studies such as the detection of marine animals and fish species recognition [28]. The SURF feature matching provided by MATLAB is used for the performance analysis. For the detection of points of interest, SURF uses an integer approximation of the determinant-of-Hessian blob detector, computed with three integer operations on a precomputed integral image. The feature descriptor is based on the Haar wavelet response. SURF can also be used underwater to detect and locate objects, reconstruct 3D scenes, and extract points of interest.
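An equivalent sketch outside MATLAB is given below, assuming a contrib build of OpenCV with the non-free SURF module enabled; the Hessian threshold is an illustrative default, not a value from the paper.

```python
import cv2

def detect_surf_points(gray_u8, hessian_threshold=400):
    """gray_u8: 8-bit grayscale image; returns the detected SURF keypoints."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    keypoints, _ = surf.detectAndCompute(gray_u8, None)
    return keypoints

# kp_raw = detect_surf_points(cv2.cvtColor(raw, cv2.COLOR_BGR2GRAY))
# kp_restored = detect_surf_points(cv2.cvtColor(restored, cv2.COLOR_BGR2GRAY))
```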

3.3. Image Classification

Image classification is an important application in image processing. It segregates objects in an underwater image based on the object of interest, which can be useful in various fields, such as for the detection of pipeline corrosion, marine salt, fish detection, detection of ship wrecks, mineral exploration, marine animal detection, pollution monitoring, subsea investigation, and sea floor terrain examination.
In machine learning, the model learns a pattern in a dataset, from which prediction of a given situation of interest can be made. The learning process starts by providing a training dataset, which is fed to a designed model to establish the relationship between dependent and independent variables. Once trained, the pretrained model may then be used to predict the output given a set of test inputs.
Machine-learning methods are generally classified as supervised, unsupervised, semisupervised, or reinforcement learning. A supervised machine-learning method is used here for the image classification purpose. The method predicts a new set of outputs based on what has been learned from past training datasets, with the learning process starting from training data consisting of a set of input and target vectors. In supervised learning, it is assumed that the actual output values are known for each input pattern.
In the process of classification, image features of the input images from the training set are extracted using the Discrete Wavelet Transform (DWT) and the Gray Level Co-occurrence Matrix (GLCM), with these features fed into the classification algorithm for training purposes. Two supervised machine-learning algorithms, Support Vector Machines (SVM) and K-Nearest Neighbor (KNN), have been chosen. SVM has evolved into one of the most powerful supervised machine-learning methods for classification problems and for linear and nonlinear regression, whilst KNN is a nonparametric statistical method for classification and regression problems that has been used for pattern recognition and feature detection [29].
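A sketch of this pipeline is given below; the wavelet, the GLCM properties, and the classifier hyperparameters are assumptions for the example, as the paper does not specify them.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def extract_features(gray_u8):
    """DWT and GLCM texture features from an 8-bit grayscale image."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_u8.astype(float), 'haar')
    dwt_feats = [np.abs(c).mean() for c in (cA, cH, cV, cD)]
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0] for p in
                  ('contrast', 'homogeneity', 'energy', 'correlation')]
    return np.array(dwt_feats + glcm_feats)

# Hypothetical training on extracted features:
# X = np.stack([extract_features(img) for img in train_images])
# svm = SVC(kernel='rbf').fit(X, y_train)
# knn = KNeighborsClassifier(n_neighbors=5).fit(X, y_train)
```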

4. Results

The proposed underwater restoration method has been used to process numerous underwater images, with the processed underwater images used as input for the three underwater applications: edge detection, SURF, and image classification. In particular, five underwater test images with various underwater conditions (naturally lit, bluish, greenish, artificial light source, and backscattered) have been selected to appraise the efficiency of the proposed restoration method. Figure 3 depicts the raw underwater images along with the restored underwater images obtained using the proposed method.

4.1. Results for Edge-Detection Application

The Sobel edge-detection operator is used to detect the edges in an image. Both the raw and restored underwater images shown in Figure 3 are passed through the Sobel edge-detection operator, with the results given in Figure 4: the edges detected from the raw underwater images in Figure 4a–e are obtained from the raw underwater images in Figure 3a–e, respectively, whilst the edges detected from the restored images in Figure 4a–e are obtained from the restored images in Figure 3a–e, respectively.
The number of edges detected on the raw image in Figure 4a is lower than on the restored image, with the edges of the coral reefs not properly detected on the raw image, in contrast to their clear detection in the restored underwater image in Figure 4a. The greater number of edges detected when using the restored underwater images points to the necessity of the proposed restoration method for underwater image processing.
In the raw image of Figure 4b, only the edges of the front part of the ship are detected, whilst after processing with the proposed method, the number of edges detected increases. The edges of the front as well as the back part of the ship can be detected using the processed image, showing the efficiency of the proposed method in improving the feature details of the objects in the image by reducing blurriness, as shown in the restored result of Figure 4b.
Almost no edges of the raw image are detected in Figure 4c, due to its blurriness. Since the proposed method is able to reduce blurriness and improve texture details, the edges detected in the recovered image of Figure 4c are remarkably improved; the coral reefs as well as the fish can be clearly detected in the restored underwater image.
For the images in Figure 4d, it can be seen that only a limited number of edges has been detected in the raw image; in particular, the small sand and stone particles cannot be detected due to the poor clarity of the raw image. On the other hand, the edges of the fish, sand, and small particles are clearly detected on the restored image, as can be seen in Figure 4d, proving that the proposed method can be used in underwater feature-detection applications.
The raw image of Figure 4e, which is heavily affected by backscattering, has a very limited number of edges detected, with the turtle in the middle part of the image almost not detected at all. On the other hand, edge detection is very good in the restored image; consequently, it can be said that the proposed method is efficient and can be used for underwater edge-detection applications.

4.2. Results of the SURF Application

For detecting SURF features in underwater images, gray versions of the raw and restored underwater images in Figure 3 have been used, with the SURF function in MATLAB used for this purpose. The SURF points detected in the raw images of Figure 5a–e are found using gray versions of the raw underwater images shown in Figure 3a–e, respectively, whilst the SURF points detected in the processed images of Figure 5a–e are found using gray versions of the restored underwater images shown in Figure 3a–e, respectively.
Visually, all gray versions of the restored images are relatively clear and sharp compared to the gray versions of the input raw images. Consequently, the number of SURF points detected in the restored gray versions of the underwater images is higher than in the input gray versions. This clearly suggests that the proposed method is able to improve underwater images for the purposes of object detection and extraction of points of interest in underwater applications, by improving the texture details of the processed image as well as reducing blurriness. A comparison between the numbers of SURF feature points detected in the gray versions of the raw input and restored underwater images of Figure 3a is shown in Figure 5a. The raw underwater image shown in Figure 3a is a naturally lit image, with the front part receiving more light than the background region. Consequently, in both the raw and restored images in Figure 5a, more features are detected in the front part of the image than in the back part. Fewer feature points are detected in the front part of the restored image of Figure 5a compared with the raw image; additionally, SURF feature points in the coral regions are also detected in the restored underwater image.
Figure 5b depicts the comparison between the raw and restored underwater images of a shipwreck. Fewer SURF points are detected on the raw image of Figure 5b, with the detected features concentrated mainly on the front part of the ship. On the other hand, SURF feature points on the front part as well as the top part of the ship can be clearly seen on the restored image in Figure 5b. This points to the effectiveness of the proposed restoration method.
Figure 5c shows a comparison of the SURF features of an image with a greenish cast, between the raw and restored underwater images. On the raw image of Figure 5c, the detected feature points are confined to the coral reefs and miss the fish, whereas features are detected on the coral reefs as well as on some of the fish in the restored image of Figure 5c. Thus, the restored image performs better than the input raw image.
Figure 5d shows an image with an artificial light source. Generally, images that are lit artificially have more distortions due to the reflection of the artificial light source on floating particles in the water. The proposed method handles the artificial light source problems gracefully by reducing the effects of these distortions, as seen in Figure 3d. Consequently, more SURF points are detected on the sand particles as well as on the fish in the gray version of the restored image of Figure 5d, compared with the input raw image of Figure 5d.
Finally, Figure 5e shows a comparison of the raw input and restored underwater images for an image affected by backscattering. Only four SURF features are detected, on the upper part of the sea turtle, in the raw image of Figure 5e. On the other hand, noticeably more SURF features are detected on the restored image of Figure 5e, since the restoration deals with backscattering effects effectively, by using proper priors for the backscattering attenuation coefficients.

4.3. Results for Image Classification Application

To investigate the performance of the images restored using the proposed restoration method in image classification applications, four general classes are used: fish, ships, statues, and humans. A supervised machine-learning approach is used for designing the classification algorithm.
For training purposes, 890 raw underwater images were chosen from the Liu et al. [30] dataset, with one model trained using the raw underwater images and another model trained using underwater images restored with the proposed restoration method. Sensitivity, specificity, and accuracy are used as performance measures. Sensitivity is the ability of the classifier to correctly identify images belonging to a class, while specificity is its ability to correctly reject images that do not belong to the class. Accuracy is the proportion of test images classified correctly. In total, 75 underwater images were used for testing the trained models.
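These metrics can be computed per class from a one-vs-rest confusion matrix, as in the following sketch (assuming scikit-learn); this mirrors the definitions above rather than the authors' exact evaluation script.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, labels):
    """Per-class sensitivity/specificity and overall accuracy."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    for i, label in enumerate(labels):
        tp = cm[i, i]
        fn = cm[i].sum() - tp
        fp = cm[:, i].sum() - tp
        tn = cm.sum() - tp - fn - fp
        print(label, 'sensitivity:', tp / (tp + fn),
              'specificity:', tn / (tn + fp))
    return np.trace(cm) / cm.sum()   # accuracy
```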
Unprocessed images and images restored using the proposed restoration method are used for testing, with the results tabulated separately for SVM and KNN in Table 2. From the table, it can be seen that the proposed method improves sensitivity, specificity, and accuracy compared with using the raw unprocessed underwater images. Using the proposed restoration method improves accuracies from 72% and 82% to 83% and 89% for SVM and KNN, respectively.

5. Conclusions

A restoration method for underwater images is proposed in this paper, involving depth estimation from blurriness estimation and a background light neutralization process, as well as transmission map estimation using direct and backscattered signals. Any method designed for restoring images has to provide efficient results not only in quantitative and qualitative terms, but also in practical applications, which is the whole purpose of developing the method in the first place. For this reason, the proposed underwater image restoration method has been tested on real underwater applications, chosen in such a way that the claims of the proposed method are proven: edge detection, SURF, and image classification. Edge detection has been performed using the Sobel edge-detection operator on raw images and on images restored using the proposed method, and it has been shown that the number of edges detected on the restored images is always higher than on the original raw images, demonstrating that the proposed restoration method is able to reduce blurriness in an image. Similarly, the SURF function from MATLAB has been used on the gray versions of the raw and restored underwater images, and it has been shown that the number of SURF points detected on the restored underwater images increases, which implies the ability of the proposed method to improve the texture details of an underwater image. Finally, image classification using a supervised machine-learning approach has been performed on a set of 75 test images before and after restoration, whereby the restored images give better classification results in terms of accuracy for both SVM and KNN.
These results on different underwater imaging applications show that the images recovered using the proposed method provide good and efficient results when compared to unprocessed raw input images. As such, the proposed method can be considered an important step for recovering underwater images before applying them to underwater applications, with an efficient outcome.

Author Contributions

Conceptualization, J.R.A. and P.E.A.; methodology, J.R.A. and P.E.A.; software, J.R.A.; validation, J.R.A., P.E.A. and L.C.D.S.; formal analysis, J.R.A.; investigation, J.R.A.; resources, P.E.A.; data curation, J.R.A.; writing—original draft preparation, J.R.A.; writing—review and editing, J.R.A. and P.E.A.; visualization, J.R.A.; supervision, P.E.A. and L.C.D.S.; project administration, P.E.A. and L.C.D.S.; funding acquisition, P.E.A. and L.C.D.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by UBD Faculty Research Grant (UBD/RSCH/1.3/FICBF(b)/2018/001).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chiang, J.Y.; Chen, Y.-C. Underwater Image Enhancement by Wavelength Compensation and Dehazing. IEEE Trans. Image Process. 2012, 21, 1756–1769. [Google Scholar] [CrossRef] [PubMed]
  2. Raihan, J.A.; Abas, P.E.; De Silva, L.C. Review of underwater image restoration algorithms. IET Image Process. 2019, 13, 1587–1596. [Google Scholar] [CrossRef]
  3. Tan, C.S.; Sluzek, A.; Seet, G.G.L.; Jiang, T.Y. Range Gated Imaging System for Underwater Robotic Vehicle. In Proceedings of the OCEANS 2006–Asia Pacific, Singapore, 16–19 May 2006; pp. 1–6. [Google Scholar]
  4. Schechner, Y.; Karpel, N. Recovery of Underwater Visibility and Structure by Polarization Analysis. IEEE J. Ocean. Eng. 2005, 30, 570–587. [Google Scholar] [CrossRef] [Green Version]
  5. Roser, M.; Dunbabin, M.; Geiger, A. Simultaneous underwater visibility assessment, enhancement and improved stereo. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–4 June 2014; pp. 3840–3847. [Google Scholar]
  6. Zhishen, L.; Tianfu, D.; Gang, W. ROV based underwater blurred image restoration. J. Ocean Univ. Qingdao 2003, 2, 85–88. [Google Scholar] [CrossRef]
  7. Li, C.; Saeed, A.; Porikli, F. Deep Underwater Image Enhancement; Cornell University Library: Ithaca, NY, USA, 2018. [Google Scholar]
  8. Lu, H.; Kim, H.; Serikawa, S. Underwater Light Field Depth Map Restoration Using Deep Convolutional Neural Fields. In Artificial Intelligence and Robotics; Springer: Berlin/Heidelberg, Germany, 2018; pp. 305–312. [Google Scholar]
  9. Fabbri, C.; Islam, J.; Sattar, J. Enhancing Underwater Imagery Using Generative Adversarial Networks. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 7159–7165. [Google Scholar]
  10. Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised Generative Network to Enable Real-time Color Correction of Monocular Underwater Images. IEEE Robot. Autom. Lett. 2017, 3, 387–394. [Google Scholar] [CrossRef] [Green Version]
  11. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar] [PubMed]
  12. Wen, H.; Tian, Y.; Huang, T.; Gao, W. Single underwater image enhancement with a new optical model. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS2013), Beijing, China, 19–23 May 2013; pp. 753–756. [Google Scholar]
  13. Drews, P., Jr.; Nascimento, E.D.; Moraes, F.; Botelho, S.; Campos, M. Transmission Estimation in Underwater Single Images. In Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 2–8 December 2013; pp. 825–830. [Google Scholar]
  14. Emberton, S.; Chittka, L.; Cavallaro, A. Hierarchical rank-based veiling light estimation for underwater dehazing. In Proceedings of the British Machine Vision Conference (BMVC), Swansea, UK, 7–10 September 2015; pp. 125.1–125.12. [Google Scholar]
  15. Galdran, A.; Pardo, D.; Picon, A.; Alvarez-Gila, A. Automatic Red-Channel underwater image restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145. [Google Scholar] [CrossRef] [Green Version]
  16. Carlevaris-Bianco, N.; Mohan, A.; Eustice, R.M. Initial results in underwater single image dehazing. In Proceedings of the OCEANS 2010 MTS/IEEE, Seattle, WA, USA, 20–23 September 2010; pp. 1–8. [Google Scholar]
  17. Peng, Y.-T.; Cosman, P. Underwater Image Restoration Based on Image Blurriness and Light Absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594. [Google Scholar] [CrossRef] [PubMed]
  18. Raihan, J.A.; Abas, P.E.; De Silva, L.C. Depth estimation for underwater images from single view image. IET Image Process. 2020, 14, 4188–4197. [Google Scholar] [CrossRef]
  19. Yang, M.; Sowmya, A. An Underwater Color Image Quality Evaluation Metric. IEEE Trans. Image Process. 2015, 24, 6062–6071. [Google Scholar] [CrossRef] [PubMed]
  20. Panetta, K.; Gao, C.; Agaian, S. Human-Visual-System-Inspired Underwater Image Quality Measures. IEEE J. Ocean. Eng. 2016, 41, 541–551. [Google Scholar] [CrossRef]
  21. Solonenko, M.G.; Mobley, C.D. Inherent optical properties of Jerlov water types. Appl. Opt. 2015, 54, 5392–5401. [Google Scholar] [CrossRef] [PubMed]
  22. Zhao, X.; Jin, T.; Qu, S. Deriving inherent optical properties from background color and underwater image enhancement. Ocean Eng. 2015, 94, 163–172. [Google Scholar] [CrossRef]
  23. Gould, R.W.; Arnone, R.A.; Martinolich, P.M. Spectral dependence of the scattering coefficient in case 1 and case 2 waters. Appl. Opt. 1999, 38, 2377–2383. [Google Scholar] [CrossRef] [PubMed]
  24. Whitmire, A.; Boss, E.; Cowles, T.J.; Pegau, W.S. Spectral variability of the particulate backscattering ratio. Opt. Express 2007, 15, 7019–7031. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Smith, R.C.; Baker, K.S. Optical properties of the clearest natural waters (200–800 nm). Appl. Opt. 1981, 20, 177–184. [Google Scholar] [CrossRef] [PubMed]
  26. He, K.; Sun, J.; Tang, X. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  27. Saini, A.; Biswas, M. Object Detection in Underwater Image by Detecting Edges using Adaptive Thresholding. In Proceedings of the 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 23–25 April 2019; pp. 628–632. [Google Scholar]
  28. Garcia, R.; Gracias, N. Detection of interest points in turbid underwater images. In Proceedings of the OCEANS 2011 IEEE, Santander, Spain, 6–9 June 2011; pp. 1–9. [Google Scholar]
  29. Kim, J.; Kim, B.-S.; Savarese, S. Comparing image classification methods: K-nearest-neighbor and support-vector-machines. In Proceedings of the 6th WSEAS International Conference on Computer Engineering and Applications, and Proceedings of the 2012 American Conference on Applied Mathematics, Stevens Point, WI, USA, 25–27 January 2012. [Google Scholar]
  30. Liu, X.; Chen, B.M. A Systematic Approach to Synthesize Underwater Images Benchmark Dataset and Beyond. In Proceedings of the 2019 IEEE 15th International Conference on Control and Automation (ICCA), Edinburgh, Scotland, 16–19 July 2019; pp. 1517–1522. [Google Scholar]
Figure 1. Light absorption in water and penetration levels of different colors.
Figure 2. Flow chart of the proposed method.
Figure 3. Test images: (a) naturally lit, (b) bluish, (c) greenish, (d) artificial light source, and (e) backscattered.
Figure 4. Edge detection on unprocessed and restored images using the proposed method, for images that are (a) naturally lit, (b) bluish, (c) greenish, (d) lit by an artificial light source, and (e) affected by backscattering.
Figure 5. SURF points detected from images that are (a) naturally lit, (b) bluish, (c) greenish, (d) lit by an artificial light source, and (e) affected by backscattering.
Table 1. Determining the total backscattering attenuation coefficient β_B^c(λ).

Wavelength | β_BW^c(λ) [25] | β_BP^c(λ) [24] | β_B^c(λ)
450 nm | 2.2 × 10⁻³ | 1.75 × 10⁻² | 1.97 × 10⁻²
540 nm | 1.0 × 10⁻³ | 1.26 × 10⁻² | 1.36 × 10⁻²
620 nm | 0.6 × 10⁻³ | 1.33 × 10⁻² | 1.39 × 10⁻²
Table 2. Sensitivity, specificity, and accuracy of SVM and KNN on unprocessed and restored underwater images.

Metric | Unprocessed (SVM) | Unprocessed (KNN) | Restored (SVM) | Restored (KNN)
Sensitivity (%) | 73 | 69 | 85 | 79
Specificity (%) | 90 | 77 | 91 | 84
Accuracy (%) | 72 | 82 | 83 | 89
