Article

A Stereoscopic System to Measure Water Waves in Laboratories

1 Department of Civil, Environmental, Building Engineering and Chemistry, Polytechnic University of Bari, Via E. Orabona 4, 70125 Bari, Italy
2 Geophysical Applications Processing-GAP srl, 70125 Bari, Italy
3 Department of Physics, Polytechnic University of Bari, Via E. Orabona 4, 70125 Bari, Italy
4 Department of Civil, Construction-Architectural and Environmental Engineering (DICEAA), Environmental and Maritime Hydraulic Laboratory (LIam), University of L’Aquila, 67100 L’Aquila, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(14), 2288; https://doi.org/10.3390/rs12142288
Submission received: 21 May 2020 / Revised: 9 July 2020 / Accepted: 13 July 2020 / Published: 16 July 2020

Abstract

A new system for estimating the synthetic parameters of sea states during physical investigations has been implemented. The technique proposed herein is based on stereographic analysis of digital images acquired with optical sensors. A series of ad hoc floating markers has been made and properly moored to the bottom of a large wave tank to estimate the synthetic parameters of the generated waves. The implemented acquisition system and the proposed algorithm provide automatic recognition of all markers by a pair of optical sensors that synchronously capture their instantaneous locations and track their movements over time. After transformation from image to real-world coordinates, water surface elevation time series have been obtained. Several experimental tests have been carried out to assess the feasibility and reliability of the proposed approach. The estimated synthetic wave parameters have then been compared with those obtained by employing standard resistive probes. The deviations were found to be equal to ~6% for the significant wave height and ~1% for the peak, mean, and significant wave periods.

Graphical Abstract

1. Introduction

In coastal physical modeling, the estimation of the synthetic parameters of sea states, such as the significant wave height, the peak or significant wave period, and the propagation direction, is one of the main tasks (see, e.g., in [1,2,3]). Their estimation is a preliminary action for all activities typically carried out within the frame of experimental investigations. Indeed, whatever the aims of the tests (either related to coastal risk assessment, see, e.g., in [4,5,6], or to coastal structure testing, see, e.g., in [3]), the incident waves represent the main forcing to be reproduced.
Wave parameters are usually inferred from data collected by means of either capacitive or resistive Eulerian probes (see, e.g., in [7]), characterized by simple operating principles and easy-to-build supporting frames. They are mainly used because they allow a reliable measurement of the free surface elevation in a Eulerian reference frame and because of their relative simplicity. Nevertheless, their intrusiveness is not negligible, as they could perturb the investigated wave field, especially in large wave tanks, due to both the supporting frames used for instrument positioning and the sensors themselves.
To address these problems, research is turning to remote sensing techniques in laboratory environments as well. These techniques are non-intrusive and allow wave field measurement over wide areas, bringing into indoor laboratories instruments already well established in the open sea. Allis et al. [8] measured waves generated in a wave flume (40 m long and 0.6 m wide) by using lasers, validating the data with a set of eight capacitive probes installed along the central axis of the wave flume. Water was seeded with kaolinite powder to increase the water turbidity and enhance the reflectance of the laser in water. Blenkinsopp et al. [9] found that the differences between the measurements performed by the laser and those by capacitive electrodes were of the order of 5 mm, in terms of root mean square error (RMSE), at least for wave heights up to 0.3 m.
Generally speaking, past research works highlighted the need for additives to adjust water turbidity. This is still a limitation, especially when the technique is applied in large wave tanks that would require large quantities of such additives. Furthermore, kaolinite powder settles in a relatively short time, losing its usefulness as a turbidity intensifier and reducing the time window available for wave measurements. After sedimentation, the additive powder contaminates materials lying on the bottom of the wave tank and could affect the model behavior in sediment transport studies. The use of some additives could also damage important elements of the hydraulic systems of the experimental facilities (e.g., pumps or seals).
Stereoscopy could be a good alternative: it has been used for sea state characterization since 1939, when Schumacher and Defant [10] performed some acquisitions from moving boats. The technique was soon abandoned due to the high computational effort required to obtain significant results. Some decades later, the advent of digital photography stimulated the development of this field of applied research. Shemdin et al. [11] obtained wave direction from images captured by optical sensors installed on an oceanographic offshore tower, using image processing and spectral analysis techniques. Optical sensors have also been exploited to estimate sea current direction and speed [12], shoreline position [13,14], run-up levels [15], and bathymetric data [16], for bathymetry inversion [17,18], and for beach resilience evaluation [19].
A critical step was found to be the passage from the acquisition of static stereo-pairs to image pairs captured at a high frame rate. In this way, it is possible to add time information to the images, obtaining free surface elevation time series similar to those provided by standard sensors. The advantage is that, using a single system, measurements may be collected at a very large number of points, reducing problems of intrusiveness and time-consuming calibration procedures. Indeed, standard sensors need to be recalibrated at short intervals, at least once a day during experimental testing (see, e.g., in [20]), because their operating principles strictly depend on the micro-climatic conditions, which can vary significantly from day to day. Especially in large wave tanks, the calibration procedures can be very expensive, both in terms of time and human resources, because researchers have to enter and operate in an uncomfortable environment. It can be observed that in these cases repeated calibration is one of the bottlenecks of experimental investigations.
An important extension to the time domain analysis has been obtained by Santel et al. [21], who reconstructed a digital surface model of the surf zone and validated the results by a detailed comparison with a battery of probes. De Vries et al. [22] also investigated surf zone waves at both medium (~50 m²) and large (~10³–10⁴ m²) scales, comparing results with a pressure sensor (in a laboratory) and a wave buoy (in the field) and obtaining satisfactory results. Furthermore, a trinocular system has been tested to reduce the error due to false matches between homologous points (i.e., the corresponding points on the two sensors, representative of the same object in the framed scene) by providing three independent stereo-pairs [23]. Benetazzo [24] developed an innovative algorithm for 3D sea surface reconstruction, based on the successive approximations method, validating the results through data collected during several oceanographic measurement campaigns. Bechle and Wu [25] developed an algorithm able to return the water elevation at any point (virtual wave gauge) of a selected portion of the sea surface through the analysis of a stereoscopic pair of movies. More recently, Benetazzo et al. [26] successfully extended stereo video techniques to larger areas (approximately 1000 m²). Stereoscopy has also been successfully used to study particular types of waves, such as extreme waves during storms [27,28]. An open data set of sea surface stereo images is now available for an ocean region of 10⁴ m² [29].
The approach proposed herein is based on the stereoscopic imaging technique. Even though it is a well-established method, this research work aims to apply it to a challenging environment. In open sea conditions, the lighting of the scene makes it easy to distinguish the objects that belong to the surface of the sea from those that are reflected by it. Especially on cloudy days, when the daylight creates a very uniform lighting condition, the open seawater surface can be considered a Lambertian surface, at least to some approximation (see, e.g., in [26,30]). In large wave tanks deployed in indoor laboratories, the directions of the incident light, either natural or artificial, and of the reflected light cannot always be definitely identified. In these conditions, it can be hard to discriminate between scene points belonging to the water surface and specular reflections of the surroundings on the water surface. This issue, known as radiometric distortion, greatly complicates the use of stereoscopic techniques for free surface elevation measurement. An attempt to provide a solution to this issue has recently been proposed by Aubourg et al. [31], who combined stereoscopic surface mapping with stereoscopic three-component Particle Imaging Velocimetry (PIV), seeding the free surface with small polystyrene particles in a confined cylindrical tank. In the field, Sutherland and Melville [32] used stereo infrared particle image velocimetry (PIV).
This paper aims to assess the feasibility of a noninvasive approach to obtain a reliable estimation of synthetic wave parameters when large wave tanks deployed in indoor laboratories are used. A new approach based on stereoscopic vision techniques, named the Wave Lab Stereoscopic System (WLSS), is proposed herein. To solve the issues related to water reflectance, and thus to remove radiometric distortion, colored low-inertia floating markers have been deployed in a wave tank within the field of view of two identical cameras. Irregular waves have been reproduced and stereo movie pairs recorded. The frame-by-frame analysis allows extraction of the marker positions and of the local and instantaneous free surface elevation, at least if the marker dynamics are neglected. The free surface elevation is then characterized in both the time and space domains. As this paper aims to assess the reliability of the method, and the time series collected by means of a resistive probe is used to reach this goal, only the heave (i.e., the linear motion of the floaters along the vertical direction) time series is discussed hereafter.
The activities carried out in this research can be summarized in the following steps. First, the hardware experimental setup has been prepared, consisting of manufacturing the floating markers and assembling the entire acquisition system. Then, data processing algorithms have been implemented to perform camera calibration, marker recognition, stereo-matching, geometric triangulation, marker coordinates extraction, and transformation to a global reference frame. The validation of the method has been performed by comparing the results obtained from the proposed stereoscopic technique to data collected by means of standard probes.
The next section describes the method used, in terms of both the hardware and the software implemented for the work; the following section describes the experimental steps in the laboratory and discusses the main results. Concluding remarks close the paper.

2. Main Features of 3D Stereo Systems

A stereoscopic vision system is composed of two (or more) devices for the acquisition of images (photo- or video-cameras), suitably positioned, as well as a data processing system. This long-established technique, developed in the 19th century for images taken by analog cameras, has improved rapidly in recent decades. Even though the method is well established, this section aims to detail its main features.
The reconstruction of three-dimensional reality by means of computer vision techniques requires knowledge of how the image is formed on a camera sensor. For this purpose, analytic modeling is typically used, due to its capability to effectively approximate real-world phenomena. In this work, a standard perspective model, i.e., the pinhole camera model [33], has been used. Under the assumption of infinitesimal lens thickness (i.e., within the frame of the “thin lens” approximation), the model allows the transformation from the image coordinates (u, v) to the real-world reference coordinate system through a perspective projection matrix. The transformation takes into account the coordinates of the optical center and parameters for (i) the translation and rotation between the camera and world systems, (ii) the scale factors related to the u and v axes, and (iii) the non-perfect orthogonality between the u-v axes. As the thin lens approximation is a mathematical abstraction, the Brown–Conrady model has been used to take into account the radial and tangential lens distortions [34,35].
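As an illustration, the pinhole projection described above can be sketched as follows. All numerical values (focal lengths, optical center, pose) are hypothetical and not those of the experimental setup; the sketch only shows how a perspective projection matrix maps a 3D world point to pixel coordinates.

```python
import numpy as np

# Minimal pinhole-camera sketch (hypothetical parameters, not the authors' code).
# Intrinsic matrix: focal lengths fx, fy in pixels, optical center (cx, cy),
# zero skew (perfectly orthogonal u-v axes).
fx, fy, cx, cy = 1200.0, 1200.0, 960.0, 540.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Extrinsics: identity rotation and zero translation (camera frame = world frame).
R = np.eye(3)
t = np.zeros(3)

def project(P_world):
    """Project a 3D world point to pixel coordinates (u, v)."""
    P_cam = R @ P_world + t          # world -> camera frame
    p = K @ P_cam                    # perspective projection (homogeneous)
    return p[:2] / p[2]              # homogeneous -> pixel coordinates

# A point 5 m in front of the camera on the optical axis maps to the optical center.
u, v = project(np.array([0.0, 0.0, 5.0]))
print(u, v)  # 960.0 540.0
```

In a real calibration, K, R, and t (plus the Brown–Conrady distortion coefficients) are the unknowns estimated from images of the chessboard target.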
The parameters in the perspective projection matrix are generally estimated empirically, through appropriate calibration procedures, exploiting images of a structured grid with known mesh size [36]. The calibration procedure proposed by Zhang has been applied herein [37]. A chessboard has been used as the calibration target: it has been photographed by the cameras being calibrated in several different positions, since the greater the number of acquisitions of the target, the greater the accuracy in the estimation of the parameters. Once this procedure has been applied to both cameras, a further process has been used to estimate the calibration parameters of the whole stereoscopic acquisition system [38].
Even if this type of calibration can be considered onerous, it is strictly needed only once if the baseline (i.e., the distance between the two cameras) is not modified. Furthermore, the calibration procedure is very easy to carry out because it is not performed in water. Nevertheless, it can be considered as a possible source of error: an imperfect calibration directly affects the system precision that depends on the reliable estimation of the baseline length and of the optical parameters. The calibration process is sensitive to the flatness and accuracy of the target pattern and to the care used to fix the cameras to the supporting frame. Indeed, any movement that may occur in the period between the calibration process and the experimental acquisitions should be avoided.
Another crucial point of stereoscopic techniques is the stereo-matching, i.e., the search of homologous points in a stereo-couple of images: this phase is preparatory to the extraction of the spatial coordinates of the homologous points.
To simplify the calculations, it is advisable to proceed first with the rectification of the stereo-pair. Substantial support for rectification is provided by the principles of epipolar geometry, which reduce the search for homologous points to a one-dimensional problem [39]. Given a point on one of the two images, the epipolar constraint restricts the search for its homologous point to a segment (the epipolar line) instead of an area. The rectification procedure requires that the cameras have been previously calibrated [40]. It aims to obtain two new perspective projection matrices, which allow the transition from the real-world coordinates to those of two new ideal image planes. These new image planes are then rotated around their optical centers until the focal planes are coplanar and contain the baseline. Then, to obtain horizontal epipolar lines, the abscissa of the new reference frame has to be chosen parallel to the baseline. The two intrinsic parameter sets (optical center coordinates and focal lengths) of the new projection matrices are required to be equal to each other, imposing the collinearity condition of the two cameras. This is equivalent to equalizing the horizontal and vertical scale coefficients in the two projection matrices.
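The key property of a rectified pair (homologous points lie on the same image row, so the search becomes one-dimensional) can be illustrated with a minimal sketch. Two ideally rectified pinhole cameras share the same intrinsics and differ only by a translation along the baseline; all parameter values are hypothetical.

```python
# Sketch of the rectified-pair epipolar property (hypothetical values): two
# ideally rectified cameras share intrinsics and differ only by a horizontal
# translation B, so a 3D point projects onto the same row (v) in both images
# and differs only in column (u) by the disparity f*B/Z.
f, cx, cy = 1000.0, 640.0, 480.0   # focal length and optical center, pixels
B = 0.7                            # baseline, meters

def project(P, cam_x):
    """Project P=(X, Y, Z) for a camera whose optical center sits at x = cam_x."""
    X, Y, Z = P
    u = f * (X - cam_x) / Z + cx
    v = f * Y / Z + cy
    return u, v

P = (0.3, 0.1, 4.0)                # a point 4 m in front of the pair
uL, vL = project(P, 0.0)           # left camera at the origin
uR, vR = project(P, B)             # right camera shifted by the baseline
print(vL == vR)                    # True: homologous points share the same row
print(uL - uR)                     # disparity = f*B/Z = 175.0
```

This same-row property is what makes the later stereo-matching step a search along a single horizontal segment rather than over the whole image.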
The digital camera sensors are made of a finite number of pixels: this implies a discrete determination of the target localization in metric coordinates in the real-world reference system. It is then useful to estimate the resolution of the three-dimensional coordinates of a point in the real world for the proposed system. This resolution defines the smallest object that the system is able to identify.
The first step is the estimation of the resolution in the camera reference frame whose origin lies typically in the optical center of one of the cameras, as described in the pinhole camera model [33]. In this reference frame, it is preferable to estimate separately the contribution to the resolution in the image plane (x and y components) and along the optical axis (z component).
As the contribution of the x and y resolution components is independent of both the disparity (defined, for each space point P, as the difference between the abscissas of the projections of P on the two camera sensors) and the stereoscopic system layout, it is possible to refer to a single camera geometry.
A generic real point A, placed on the optical axis at distance D (hereinafter referred to as the shooting distance) from the optical center (Figure 1, left), generates an image in C, usually the central pixel of the sensor. A possible miscalculation of the image point in C′, say n pixels away from C either along x or along y, involves a displacement of the real point to the wrong position A′ (pinhole camera model). From the similar triangles AA′O and CC′O it reads

$$\frac{\overline{AA'}}{D} = \frac{n}{f} \qquad (1)$$
where f is the focal length in pixel units, as obtained from the calibration process. In differential form, it reads as follows,

$$\delta(\overline{AA'}) \equiv E_{x,y} = \frac{D}{f}\,\delta n \qquad (2)$$
where $E_{x,y}$ is a location error along either x or y, expressed in metric units. As expected, $E_{x,y}$ is proportional to the shooting distance D. When $\delta n = 1$, i.e., considering a positioning error of only one pixel, Equation (2) gives the pixel spatial resolution. The estimation of this quantity can be useful in determining the size of the marker to be used, according to the camera resolution and the experimental layout: once the minimum number c of consecutive aligned pixels necessary for a marker to be easily visible and segmented has been experimentally established, it is possible to obtain the minimum size that can be resolved by the system by multiplying c by the spatial resolution of the pixel.
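The marker-sizing rule above can be worked through numerically. The shooting distance, focal length, and minimum pixel count below are hypothetical values chosen only to show the order of magnitude of the computation.

```python
# Worked sketch of Equation (2) with hypothetical values: in-plane pixel
# resolution E_xy = D/f (for a one-pixel error) and the minimum marker size.
D = 9.0          # shooting distance, m (assumed)
f = 2800.0       # focal length, pixels (assumed, from calibration)
c = 8            # minimum consecutive aligned pixels for reliable segmentation (assumed)

E_xy = D / f                 # in-plane resolution of one pixel, m
min_marker = c * E_xy        # smallest marker size the system can resolve, m
print(round(E_xy * 1000, 2), "mm/pixel")   # 3.21 mm/pixel
print(round(min_marker * 1000, 1), "mm")   # 25.7 mm
```

With these assumed values, a 40 mm marker (the table tennis ball used later in the paper) would comfortably exceed the minimum resolvable size.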
The contribution along the z axis is due to the assessment of the value of the disparity and is the only component strictly inherent to the stereo system. In the right panel of Figure 1, the real point A, placed on the optical axis of the left camera at a distance D from the optical center $O_L$, has images in the left sensor center $C_L$ and in the point G on the right sensor (pinhole camera model). Point G is n pixels away from the right sensor center $C_R$, so that n is just the disparity of the real point A in pixel units. Exploiting the similarity criteria, a location error along z ($E_z$) can be expressed in differential form as

$$\delta D \equiv E_z = \frac{D^2}{B f}\,\delta n \qquad (3)$$
where $E_z$ is expressed in meters. $E_z$ is proportional to the square of the shooting distance D and to the reciprocals of the baseline length (B, see Figure 1) and of the focal length (f).
The resolution of the proposed method can be projected to the real-world reference frame. In the following, only the projection along z axis is discussed, because it is useful to estimate the accuracy of the vertical water surface elevation, which can then be compared with measures taken with traditional instruments. This projection is the result of a geometric composition of E x , E y , and E z when δ n is assumed to be equal to 1.
Figure 2 (upper panel) shows the reference frame linked to the cameras and the one linked to the laboratory (world reference frame), whose plane $x_{lab}$–$y_{lab}$ is the still water surface. Without loss of generality, it can be assumed that the plane $z_{lab}$–$x_{lab}$ contains the baseline in the $x_{cam}$ direction. Let α be the angle formed by $x_{cam}$ with $x_{lab}$ and β the angle formed by $y_{cam}$ with the $x_{lab}$–$z_{lab}$ plane (Figure 2, lower panel). The projection along $z_{lab}$ of the three components $E_x$, $E_y$, and $E_z$, with no regard to signs, gives

$$E_{z_{lab}} = \sqrt{(E_x \sin\alpha)^2 + (E_y \cos\beta\cos\alpha)^2 + (E_z \sin\beta\cos\alpha)^2} \qquad (4)$$
Under the usually accepted hypothesis that α = 0 (horizontal baseline), and considering the expressions of $E_y$ and $E_z$ given in Equations (2) and (3), it reduces to

$$E_{z_{lab}}(1\ \mathrm{pixel}) = \sqrt{\frac{D^2}{f^2}\cos^2\beta + \frac{D^4}{B^2 f^2}\sin^2\beta} \qquad (5)$$

It follows that the vertical pixel resolution is directly proportional to the shooting distance (D) and inversely proportional to the focal length (f) and the baseline length (B).
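Equation (5) is straightforward to evaluate for a candidate layout. The values below (shooting distance, focal length, camera tilt) are hypothetical; the baseline matches the ~0.7 m used in the experimental setup described later.

```python
import math

# Worked sketch of Equation (5) with hypothetical values: vertical per-pixel
# resolution in the laboratory frame, for a horizontal baseline (alpha = 0).
D = 9.0                      # shooting distance, m (assumed)
f = 2800.0                   # focal length, pixels (assumed)
B = 0.7                      # baseline, m (as in the described setup)
beta = math.radians(40.0)    # camera tilt toward the water surface (assumed)

E_z_lab = math.sqrt((D**2 / f**2) * math.cos(beta)**2
                    + (D**4 / (B**2 * f**2)) * math.sin(beta)**2)
print(round(E_z_lab * 1000, 1), "mm")  # vertical resolution for a 1-pixel error
```

Note that this is a worst-case one-pixel bound: the sub-pixel centroid estimation described in Section 3.2 brings the effective error well below it.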

3. Methodology

In this paper, an innovative procedure called Wave Lab Stereo System (WLSS) has been tested for synthetic wave parameters estimation within the frame of experimental investigations. The feasibility and reliability of the proposed method have been assessed by comparing the estimated wave parameters to those estimated by using free surface elevation time series collected using a conventional wave probe. As part of the experimental investigation, a data processing chain has been implemented to perform free surface elevation time series extraction. The implemented procedure consists of (i) system implementation and positioning of low inertia floating markers, (ii) markers detection and tracking in the images extracted from the video (segmentation phase) and search for the correspondences in the stereo-pairs, (iii) stereoscopic reconstruction of markers positions, and (iv) extraction of time series of markers displacements. Following these steps, wave parameters estimation can be achieved. This section is aimed to outline the main processing steps.

3.1. System Implementation

To justify some choices made during the experimental steps, it is useful to detail the issues of the stereo-matching step. A necessary condition for obtaining the geometric coordinates of the points belonging to a scene is that the same points have to be filmed by at least two different sensors located in two different positions. Finding the homologous points on two different images can be difficult. First, not all the points of a scene may be seen by both cameras (the “occlusion problem”). This issue can be minimized by using short baselines, so that the scenes framed from the two different points of view are quite similar. Even if all the points are visible from both sensors, there may be the problem of radiometric distortions typically related to indoor laboratories, i.e., the case tackled by this study. Finally, the uniformity of some surfaces (such as a water surface), which return images consisting of recurring patterns when in focus, can create difficulties for any matching algorithm.
To overcome these critical issues, low-inertia floating markers have been used so that their instantaneous location can be related to the elevation of the water surface: the dynamics of the floaters is considered negligible, and their motion can be directly used to infer the free surface elevation. On one hand, the dynamics of the floaters is shape- and frequency-dependent; on the other hand, the markers must follow the free surface. In other words, their Response Amplitude Operator (RAO; see, e.g., in [41]), which relates the amplitude of the motion of the floater to the forcing frequency, must be as similar as possible to the identity matrix. Therefore, the amplitude of the motion must be very similar to the amplitude of the wave forcing, at least within the frequency range of interest. This does not apply in the surf zone, where a significant mass transport occurs and the markers would be advected toward the shore.
Table tennis plastic balls (diameter equal to 40 mm; weight equal to 2.4 g) have been used as markers. The spheres have been previously treated with a red color paint to improve the performance of the segmentation phase: both the color and the size of the markers have been chosen based on the results of a sensitivity analysis performed ad hoc. The painted markers have been equipped with a mooring system, similar to the wave-buoys mooring layout, made of a stiff plastic cord, and an anchor weight, i.e., a lead fishing weight (Figure 3).
This type of mooring system allows the markers to follow the water surface oscillations, just like a wave buoy, while remaining in the neighborhood of their initial location. To define the mooring line length, the wave-driven Lagrangian trajectory has been estimated by using small-amplitude linear wave theory. The length of the wire used to moor the floaters has then been selected to be longer than the sum of the local water depth and the major axis of the estimated Lagrangian orbits. For the case at hand, a wire length of 0.45 m has been selected (with a water depth equal to 0.32 m). An empirical check that the mooring system does not influence the motion of the markers was carried out by observing their movements in the tank for a while at the early stage of the tests, before shooting started. The mooring system design has taken into account the need to make the whole system as non-intrusive as possible: the fishing line used has a thickness of 0.3 mm, and the lead fishing weight is 5 mm thick with a mass of 60 g. The line has been attached to the weight with a simple knot and glued to the marker. Therefore, the perturbations of the liquid surface induced by the floating markers and their mooring systems can be considered negligible when compared with those of conventional wave gauges. Indeed, the intrusiveness of standard wave gauges deployed in a large wave tank is related to both the gauge wire and the supporting frame that must rest on the tank bottom.
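The mooring-line sizing criterion can be sketched under linear (Airy) wave theory. The water depth matches the value given in the text; the wave height and period below are hypothetical design values, so the resulting length is only indicative of the procedure, not a reproduction of the authors' 0.45 m choice.

```python
import math

# Sketch of the mooring-line sizing criterion under linear (Airy) wave theory.
g = 9.81       # gravity, m/s^2
h = 0.32       # local water depth, m (from the text)
H = 0.08       # design wave height, m (assumed)
T = 1.5        # wave period, s (assumed)

# Solve the dispersion relation w^2 = g*k*tanh(k*h) by damped fixed-point iteration.
w = 2.0 * math.pi / T
k = w * w / g                       # deep-water first guess
for _ in range(200):
    k = 0.5 * (k + w * w / (g * math.tanh(k * h)))

# Major (horizontal) axis of the surface Lagrangian orbit at z = 0.
major_axis = H / math.tanh(k * h)

# The wire must be longer than depth + orbit axis so it never goes taut.
line_length = h + major_axis
print(round(line_length, 3), "m")
```

With these assumed wave conditions the criterion yields a length of the same order as the 0.45 m wire actually selected.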

3.2. Markers Detection and Stereoscopic Procedures

In the experimental investigation, a processing algorithm able to detect markers in the images has been developed. It aims to find the correspondences in the stereo-pairs, and then to track the markers from frame to frame. The sequence of the procedures is outlined in the flow chart of Figure 4.
To further facilitate the solution of the stereo-matching problem, a method has been conceived to simplify both the search for correspondences and the detection of the markers in the rectified images.
The qualitative analysis of some of the collected movies revealed that the movements of every single marker are confined to a definite neighborhood. For this reason, a preliminary “multi-search box” procedure has been implemented. The user is allowed to define a window for each marker in the image (by manually selecting each marker in the first frame of the movie), so that the search for markers in the stereo-pair is limited to these areas (Figure 5). In the literature, there are several examples of the use of Regions Of Interest (ROI) [42] or bounding boxes [43] in computer vision. They aim to limit the operations to be carried out to defined portions of the scene framed by the cameras. Issues arise when the frames of a movie have to be processed: it is possible that two or more boxes intersect each other as time elapses, or that markers leave their windows. It is not difficult, however, to avoid these conditions in a laboratory-like environment: the image acquisition point can be properly moved. The “multi-search box” method greatly simplifies the correspondence problem and also reduces the computational cost of the segmentation step. For these reasons, it has been applied before the segmentation procedures in the processing chain (Figure 4).
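The multi-search-box idea reduces to cropping each frame to the user-defined windows before any detection runs. A minimal sketch on a synthetic frame (box coordinates and marker position are hypothetical):

```python
import numpy as np

# Sketch of the "multi-search box" procedure: one window per marker, defined on
# the first frame; later frames are searched only inside those windows.
# Box format (hypothetical): (row_min, row_max, col_min, col_max).
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100, 205] = 255                      # a lone bright "marker" pixel
boxes = {"marker_1": (80, 140, 180, 240),  # window around the expected position
         "marker_2": (300, 360, 400, 460)}

def search_in_box(img, box):
    """Return full-frame coordinates of marker pixels found inside one window."""
    r0, r1, c0, c1 = box
    rows, cols = np.nonzero(img[r0:r1, c0:c1])   # search only the cropped window
    return [(int(r) + r0, int(c) + c0) for r, c in zip(rows, cols)]

print(search_in_box(frame, boxes["marker_1"]))  # [(100, 205)]
print(search_in_box(frame, boxes["marker_2"]))  # []
```

Restricting every per-frame operation to these small windows is what yields the computational saving mentioned above.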
After the “multi-search box” procedure, the algorithm continues with the segmentation step. As a first attempt, a color-based segmentation procedure is applied. This type of segmentation is very useful if features are characterized by a uniform color; in general, it is very effective because colors typically do not change significantly with light conditions. To facilitate the color segmentation operations, in accordance with Bora et al. [44], HSV color space images (“H” is Hue, “S” is Saturation, and “V” is brightness Value) are more useful than standard color spaces such as RGB (“R” is Red, “G” is Green, and “B” is Blue) or Lab (“L” is Lightness, “a” is the Red/Green value, and “b” is the Blue/Yellow value). It is possible to directly set the threshold values needed to perform the segmentation on the H and S channels (see, e.g., in [44]). At the end of this step, the result is a binary image: pixels of a given color (the color of the floating markers) are tagged as “0”, whereas the remaining pixels of the image are tagged as “1”.
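The H/S thresholding step can be sketched on a tiny synthetic image. The threshold values below are hypothetical, and the tagging convention follows the text (marker pixels are “0”, everything else “1”):

```python
import colorsys
import numpy as np

# Sketch of the color-based segmentation: convert RGB to HSV and threshold on
# hue and saturation to isolate red markers. Thresholds are hypothetical.
img = np.zeros((2, 2, 3), dtype=float)
img[0, 0] = (0.9, 0.1, 0.1)   # strongly red pixel (a marker)
img[1, 1] = (0.4, 0.5, 0.6)   # bluish "water" pixel

def red_mask(rgb_img, sat_min=0.5, hue_band=0.05):
    """Binary mask: 0 where the pixel is red (marker), 1 elsewhere."""
    mask = np.ones(rgb_img.shape[:2], dtype=np.uint8)
    for i in range(rgb_img.shape[0]):
        for j in range(rgb_img.shape[1]):
            h, s, v = colorsys.rgb_to_hsv(*rgb_img[i, j])
            # red hues wrap around 0 in the HSV circle
            if s >= sat_min and (h <= hue_band or h >= 1.0 - hue_band):
                mask[i, j] = 0
    return mask

print(red_mask(img))
# [[0 1]
#  [1 1]]
```

Thresholding saturation as well as hue is what rejects desaturated reddish glints that share the markers' hue but not their color purity.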
If the color-based procedure identifies exactly one marker per box, segmentation can be considered successfully completed. If not, the image processing continues with the application of the Hough transform [45]. An evolution of the Hough transform [46] is normally employed for the segmentation of curvilinear forms with special properties (in particular, circles and ellipses): the application of these algorithms should extract all the red spherical shapes visible in the images (i.e., the floating markers). Nevertheless, in very noisy images, other features (due to reflections on the water surface with a high red color content and a circular shape) could be identified as markers by the segmentation algorithm. To discard these small features (almost points), a morphological model based on convolution with a raster operator (3 × 3 pixels) has been used [47].
If no marker is found after these attempts, the segmentation is considered failed for the specific marker in the analyzed frame.
Once the segmentation phase is completed, the implemented procedure calculates and stores the coordinates of the centroid of each identified marker. It is important to stress that this step is carried out with a sub-pixel accuracy: the evaluation of the centroid coordinates of the markers (i.e., the centroid of the red pixel clusters) is not performed by calculating the average of the bidimensional sensor coordinates of the edge points of the marker. Rather, it is carried out by a weighted algorithm that takes into account the gray level of the marker pixels, especially near their edges, as described in [48]. The result is the detection of an intensity weighted marker centroid with sub-pixel accuracy.
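The intensity-weighted centroid described above can be sketched on a small gray-level patch. The patch values are hypothetical; the point is that weighting by gray level shifts the estimate off the pixel grid, giving sub-pixel accuracy:

```python
import numpy as np

# Sketch of the sub-pixel centroid: instead of averaging the coordinates of the
# marker's edge pixels, weight every pixel by its gray level so bright interior
# pixels dominate the estimate. The patch values are hypothetical.
gray = np.array([[0,  10,  0],
                 [10, 200, 60],
                 [0,  10,  0]], dtype=float)

def weighted_centroid(patch):
    """Intensity-weighted centroid (row, col) with sub-pixel accuracy."""
    total = patch.sum()
    rows = np.arange(patch.shape[0])
    cols = np.arange(patch.shape[1])
    r = (patch.sum(axis=1) * rows).sum() / total
    c = (patch.sum(axis=0) * cols).sum() / total
    return r, c

r, c = weighted_centroid(gray)
print(round(r, 3), round(c, 3))  # 1.0 1.172
```

The asymmetric bright pixel on the right pulls the column estimate off the integer grid, which is exactly the sub-pixel refinement the unweighted edge average cannot provide.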
Once all the markers have been identified in each frame, a procedure aimed to track the markers from frame to frame has been implemented using an “optimal association” method. The displacement of each marker between two successive frames is expected to be reasonably small (depending on the frame rate, equal to 30 Hz for the tested configuration), certainly smaller than the distances between the marker itself and adjacent ones (“hypothesis of locality”). Then, a check matrix ( C m ) has been built to minimize the possibility of the wrong estimation of markers paths (for both cameras, see Figure 4). For a given number (N) of the markers, it reads as follows,
$$C_m = \begin{pmatrix} \Delta r_{11} & \Delta r_{12} & \cdots & \Delta r_{1N} \\ \Delta r_{21} & \Delta r_{22} & \cdots & \Delta r_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ \Delta r_{N1} & \Delta r_{N2} & \cdots & \Delta r_{NN} \end{pmatrix} \qquad (6)$$
where the generic element $\Delta r_{jk}$ is the distance between the instantaneous location of marker $M_j$ at time $t_i$ and the instantaneous location of marker $M_k$ at time $t_i + \Delta t$ ($\Delta t$ being the time interval between two frames, related to the frame rate). The minimum value of the elements in the matrix identifies the first “optimal association” between markers. A second matrix can then be extracted from the first by deleting the row and column corresponding to the first minimum value found, and the second minimum can be calculated. The procedure is carried out until the last optimal association has been found. Before proceeding to the next step, the procedure foresees a further attempt to recover data “lost” due to segmentation failure, by means of bilinear interpolation. This attempt is performed only when data are missing for a small number of consecutive frames (less than a given threshold).
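The greedy minimum-and-delete loop on the check matrix can be sketched as follows. The marker positions are hypothetical; the pairing logic (global minimum, then row/column deletion, repeated N times) follows the description above.

```python
import numpy as np

# Sketch of the "optimal association" tracking step: build the check matrix of
# distances between markers at t_i and detections at t_i + dt, then repeatedly
# pick the global minimum and delete its row and column. Positions hypothetical.
prev = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])       # markers at t_i
curr = np.array([[0.95, 0.05], [0.05, 1.05], [0.05, 0.0]])  # detections at t_i + dt

def associate(prev_pts, curr_pts):
    """Return {previous index: current index} by greedy minimum-distance pairing."""
    C = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=2)
    pairing = {}
    while len(pairing) < len(prev_pts):
        j, k = np.unravel_index(np.argmin(C), C.shape)
        pairing[int(j)] = int(k)
        C[j, :] = np.inf   # delete row j  (marker j is now assigned)
        C[:, k] = np.inf   # delete column k (detection k is now taken)
    return pairing

print(associate(prev, curr))  # {0: 2, 1: 0, 2: 1}
```

Under the hypothesis of locality, each detection lies much closer to its own marker's previous position than to any neighbor's, so this greedy pairing recovers the correct tracks.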
Some preliminary tests of the segmentation and tracking procedures were performed before the wave tank experiments in order to test the proposed system. A pendulum with a single red plastic sphere attached to the end of a wire was built, and the marker trajectory was tracked, checking that the radius of the circular arc described during the oscillation was equal to the length of the pendulum wire.
After the image segmentation, the stereoscopic triangulation model described by Faugeras et al. [49] has been applied to the images of the virtually rectified cameras to extract the 3D coordinates of all the detected markers. The model allows calculating, in particular, the Z coordinate of every single point along the optical axis:
$$ Z = \frac{f \cdot B}{X_L(p_L) - X_R(p_R)} $$
where B is the baseline, f is the focal length (both obtained from the calibration procedure), $X_L(p_L)$ is the abscissa of the projection of a generic point "P" on the left camera sensor, and $X_R(p_R)$ is the abscissa of the projection of the same point on the right camera sensor (both obtained from the marker detection procedure), each in its own reference system.
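The triangulation formula can be sketched as a small helper (the function name is an assumption; the projections must be expressed in the same units as the focal length, and the disparity $X_L - X_R$ is positive for points in front of a rectified stereo pair):

```python
def triangulate_depth(x_left, x_right, f, B):
    """Depth along the optical axis for a rectified stereo pair:
    Z = f * B / (x_L - x_R). Illustrative sketch of Equation above."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point not seen in front of both cameras")
    return f * B / disparity
```

For example, with a 50 mm focal length and the ~0.7 m baseline used in the experiments, a disparity of about 2.33 mm on the sensors corresponds to a depth of 15 m, which illustrates how quickly the depth resolution degrades as the disparity shrinks.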
The obtained point coordinates are transformed to the real-world reference frame through the pinhole camera model. Once these computations have been carried out for each extracted movie frame, the vertical displacement time series of the markers are obtained. As the marker dynamics have been considered negligible, the vertical displacement time series of each marker has been taken as the free surface elevation time series.

4. Method Validation

4.1. Experimental Set-up and Preliminary Tests

All the experiments have been carried out in a large wave tank at the Coastal Engineering Laboratory (LIC) of the Polytechnic University of Bari (Italy). The wave tank is 50.00 m wide, 90.00 m long, and 1.20 m deep, and it is equipped with a paddle wave maker able to generate a wavefront approximately 30 m long. The layout of the experiments can be summarized as follows. Nine floating markers have been deployed at the nodes of a grid in the wave tank; two cameras, whose technical features are reported in Table 1, have been placed on a steel bar (physical baseline) at a mutual distance of ~70 cm so that both of them can frame the whole markers grid. Irregular wave trains have been generated in the tank and wave-induced movements of the floating markers have been recorded by the WLSS.
Free surface elevation time series have also been collected with a standard resistive probe deployed very close to the markers to assess the reliability of the proposed method (Figure 6).
For the experimental investigation, given the distance of the cameras from the scene, a calibration target of appropriate size has been built [37].
An important preliminary operation is the determination of the mean water level, needed to compare the results obtained with the proposed method against the estimates based on the data collected with standard resistive probes. Working in a controlled environment (i.e., the laboratory), it was not difficult to determine the zero water level as the one that best fitted the marker heights measured in still water by the stereoscopic system.
All the instruments started acquiring at the same time. This has been achieved through a physical connection between the cables of the camera remote controls and the acquisition PC connected to the standard resistive probe.
As a further check of the synchronicity of the stereo system, an operator with a light source has been filmed by both cameras before the wave motion started. This procedure made it possible to identify a specific, easy-to-match moment in the two movies (the frames in which the light was turned on by the operator).

4.2. Experimental Tests and Discussion of the Results

Some initial tests were carried out to evaluate the camera image resolution. By employing the methods described in Section 2, the minimum dimension c (in pixel units) that an object should have to be distinguishable in an image has been estimated to be equal to 4. Given the size of the markers, this information was crucial for establishing the correct distance between the sensors and the markers.
Several tests have been carried out in the wave tank, changing both the sensor layout (D and H in Figure 7) and the marker arrangement, with the aim of identifying the best configuration for marker detection and tracking. The table in Figure 7 reports three of the most significant experimental trials with the varying parameters.
Test A has been carried out to check the segmentation processing chain and to evaluate the algorithm accuracy: the markers have been deployed at a very small distance from the cameras and regular waves have been generated.
Figure 8 shows a screenshot of the segmentation step: images taken by the two cameras (upper panel) and the result of the markers segmentation (lower panel) are shown.
This test revealed a satisfactory performance of the segmentation algorithm, as reported in Figure 9, which shows that the percentage of marker detection failures (i.e., lack of detection as defined in Section 3.2) was very low (~1%) in both the left (upper panel) and right (lower panel) images.
Test B has been carried out by changing the position of the markers, which have been deployed on a regular 3×3 grid with rows at 9, 17, and 40 m from the cameras, respectively. This test has been performed to assess the ability of the algorithm to segment markers deployed throughout the tank. Despite the use of a new, higher camera support, various failures and problems were encountered in the segmentation. The markers on the most distant row (M1, M2, and M3) were not visible, and the markers on the intermediate row were also often hidden from the cameras by the wave crests. Figure 10 shows that the percentage of segmentation failures for these markers (M4, M5, and M6) is ~13%. This issue, known as the "occlusion problem", can be bypassed only by raising the cameras.
For test C, a new steel baseline with a double hinge has been built ad hoc, so that the cameras could be oriented freely to capture the wave field in the tank.
The acquisition system has been placed 15 m from the most distant of the 9 floating markers, deployed at the nodes of a regular grid, and 4 m above the mean water level. The water depth at the marker locations was ~0.32 m (Figure 11). The markers have been placed approximately 2 m apart, and a resistive gauge for water level measurements has been deployed near the markers named "M2" and "M5" (Figure 6).
This experimental layout overcame the segmentation problems due to the occlusion issues of the previous tests by raising the cameras and moving them away from the region of interest, thus ensuring the visibility of the markers. Applying Equation (2) with the camera configuration of test C, $E_{x,y}$ is ~5.6 mm and, consequently, the markers take up ~7 pixels in the images, more than the minimum of 4 pixels required for their reliable identification. In this configuration, $E_{z_{lab}}$, corresponding to the spatial resolution along the $z_{lab}$ direction (Equation (5)), is equal to 3.4 cm, smaller than the floating marker diameter.
Several other experimental tests were carried out; in the following, the results of one representative test are detailed, as similar results were obtained throughout the investigation. Irregular wave trains have been generated according to a JONSWAP spectrum, with a significant wave height of 10 cm and a peak wave period of 1 s. The elevation of the cameras was selected to minimize the occlusion problem due to the wave motion. The histograms presented in Figure 12 (one panel for each camera) show the low rate of segmentation failure for each marker (~0.5%).
Statistical parameters of the free surface elevation time series, such as the maximum and minimum values, standard deviation, skewness, and kurtosis, have been estimated (Table 2). Then, a standard zero-upcrossing analysis [50] has been performed in the time domain, identifying the empirical probability density function of the individual wave height and period (Table 3).
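The zero-upcrossing analysis can be sketched as follows. This is a minimal illustrative implementation (not the authors' code): the zero-mean elevation record is split at upward zero crossings, and each segment yields one individual wave, with the height taken as the crest-to-trough excursion and the period as the segment duration.

```python
import numpy as np

def zero_upcrossing(eta, fs):
    """Standard zero-upcrossing analysis of an elevation series sampled
    at fs [Hz]: returns individual wave heights and periods."""
    eta = np.asarray(eta, float) - np.mean(eta)   # zero-mean record
    # indices where the signal crosses zero going upward
    up = np.nonzero((eta[:-1] < 0) & (eta[1:] >= 0))[0] + 1
    heights = np.array([eta[a:b].max() - eta[a:b].min()
                        for a, b in zip(up[:-1], up[1:])])
    periods = np.diff(up) / fs
    return heights, periods
```

Applied to a 1 Hz sine of 10 cm height sampled at the 30 Hz frame rate used in the tests, the procedure returns individual heights of ~0.1 m and periods of ~1 s, as expected.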
In order to assess the reliability of the proposed system, the WLSS-estimated parameters have been compared with those computed from the time series collected with the standard wave probe (WG in Table 2 and Table 3).
As can be seen from Table 2, the standard deviations of the water elevation time series are very similar for all the markers and are consistent with the results from the resistive probe WG. On the other hand, an overall underestimation of the maximum elevation can be observed. The best results are obtained for marker "M5", with an error of ~18% on the maximum elevation.
The water elevation distributions, represented in Figure 13, show similar features for all the markers and the probe, with a tail in the positive elevation values, except for the "M4" marker, which shows a negative skewness.
It can be observed from Table 3 that the number of waves is almost the same for the markers and the probe; specifically, "M5", the marker closest to the control point (i.e., the location of the standard probe), reproduces the same number of waves as the probe. In general, the "M2" and "M5" markers better approximate the wave parameters calculated from the standard probe data, as they are the markers closest to the probe (Figure 6).
The error in the significant wave height $H_{1/3}$ is ~6% for the "M5" marker, lower than that estimated for the "M2" marker (7%). A possible explanation for these slight differences is the different distance of each marker from the wave gauge (Figure 14). Similarly to what was observed in the elevation time series analysis, the values of $H_{max}$ are underestimated for all the markers. For the two closest markers, the errors in $H_{max}$ amount to approximately 21% and 25% for "M2" and "M5", respectively.
The large deviations in the estimation of the maximum elevation can be explained as follows. Basically, the proposed method estimates the sea state synthetic parameters from the time series of the displacements of a Lagrangian floater. It is well known that the use of floaters to measure free surface elevation suffers from some drawbacks. For example, when waves are measured with floating accelerometric buoys (a case very similar to the one described herein, except for the mooring system), the differences at the crests are amplified because of the quasi-Lagrangian behavior of these instruments. As pointed out by Magnusson et al. [51], a moored floater is advected by the wave orbital currents, so the measured free surface elevation does not refer to a fixed point on the horizontal plane. The floater moves in the wave direction when a crest occurs and against it when a trough occurs. Therefore, the wave profile is distorted: the crests are flattened, and the troughs sharpened. This induces an apparent increase of the mean water level and reduces the amplitude of the crests. Figure 15 shows an example of the free surface elevation estimated by means of the proposed method.
This aspect has to be considered when discussing the deviation of $H_{max}$. Nevertheless, the unreliable estimate of the maximum individual wave height does not significantly affect the estimation of the other quantiles of the empirical cumulative probability distribution of the individual wave height. It was observed that the higher the quantile, the larger the deviation of its estimate. For marker "M5", the relative error decreases from −10.3% for $H_{1/10}$ (quantile 0.90) to −5.6% for $H_{1/3}$ (quantile 0.66) and to −1.2% for $H_{mean}$ (quantile 0.5). This drawback can, however, be accepted, as in laboratory experiments the measurements of interest typically (even if not always) concern the average heights of the highest waves ($H_{1/3}$) rather than the extremes of free surface elevation.
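The statistics $H_{mean}$, $H_{1/3}$, and $H_{1/10}$ compared above are means of the highest 1/n-th of the individual wave heights; a short illustrative sketch (function name assumed) is:

```python
import numpy as np

def height_statistics(heights):
    """Mean of the highest 1/n-th of the individual wave heights:
    n = 1 gives H_mean, n = 3 gives H_1/3, n = 10 gives H_1/10."""
    h = np.sort(np.asarray(heights, float))[::-1]  # descending
    stats = {}
    for n in (1, 3, 10):
        m = max(1, len(h) // n)       # size of the highest 1/n-th
        stats[n] = h[:m].mean()
    return stats
```

Since higher statistics average fewer, more extreme waves, the crest flattening discussed above biases $H_{1/10}$ more strongly than $H_{1/3}$ or $H_{mean}$, consistent with the error pattern reported for marker "M5".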
The error in wave period estimation is small for all the markers (the average error is ~4%) and is ~1% for the “M5” marker.
Moreover, frequency spectra have been obtained by applying a standard Fast Fourier Transform (FFT) to the elevation time series. Figure 16 shows the comparison of the wave spectra from probe WG (black line) and from the "M5" marker (red line).
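A minimal sketch of the spectral estimation is given below. The paper does not specify the exact estimator, so a plain one-sided periodogram is assumed, scaled so that integrating the energy density over frequency returns the variance of the record (i.e., the zeroth spectral moment $m_0$, from which the spectral wave height $H_{m0} = 4\sqrt{m_0}$ follows).

```python
import numpy as np

def energy_density_spectrum(eta, fs):
    """One-sided energy density spectrum of an elevation series via FFT
    (plain periodogram; scaling chosen so that sum(S) * df = var(eta))."""
    eta = np.asarray(eta, float) - np.mean(eta)
    n = len(eta)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amp = np.fft.rfft(eta)
    S = 2.0 * np.abs(amp) ** 2 / (fs * n)   # one-sided density
    S[0] /= 2.0                              # DC bin is not doubled
    if n % 2 == 0:
        S[-1] /= 2.0                         # Nyquist bin is not doubled
    return freqs, S
```

For a 1 Hz sine sampled at 30 Hz over an integer number of periods, the spectrum peaks exactly at 1 Hz and its integral recovers the signal variance, which is the property exploited when computing the synthetic parameters from the spectral moments.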
The spectra estimated from the "M5" marker and the probe have a similar shape, with the peak frequency at ~0.97 Hz. Both spectra deviate significantly from the standard JONSWAP shape, as the measurements are performed in intermediate water, where the waves are likely to be affected by shoaling and nonlinearities. When this happens, several deformations appear in the spectra: the second harmonic is visible in Figure 16 at a frequency of 1.93 Hz. The energy content at low frequencies observed in Figure 16 can be explained by the vertical concrete wall located about 10 m downdrift of the marker grid (Figure 11), which causes total reflection of the waves, so that the markers are forced by both incoming and reflected waves. In more detail, the inspection of the two spectra reveals that the reconstruction in the frequency domain based on the proposed approach underestimates the peak energy and the energy content in the frequency band around 2 Hz. As far as the peak is concerned, the underestimation of the maximum energy density is related to the underestimation of the highest waves already discussed. Nevertheless, it should be observed that the synthetic parameters are estimated from the moments of the spectrum, i.e., the energy density is integrated over the frequencies. The underestimation, at least for the reproduced cases, is confined to very narrow frequency bands and does not significantly affect the estimation of the synthetic parameters (e.g., the spectral wave height). On the other hand, the differences observed around 2 Hz must also be addressed. First, it should be underlined that these deviations are not related to the dynamics of the floaters.
Indeed, the frequencies of the normal modes of the spheres are well outside the frequency band of interest, owing to their low inertia (a rough estimate indicates normal-mode frequencies higher than ~7 Hz). The differences in the highest band are therefore likely due to the different locations of the wave gauge and the marker, and to the crest flattening and trough sharpening associated with Lagrangian measurements of wave profiles (see, e.g., [51]).

5. Concluding Remarks

This paper illustrates a new technique for estimating synthetic wave parameters during experimental investigations performed in indoor large wave tanks, and is particularly devoted to assessing the feasibility and reliability of the proposed method.
The technique is based on the processing of digital images collected through a stereoscopic system, consisting of a pair of optical sensors and several low-inertia markers floating on the water surface.
Both the technical steps (preparation of the markers, mooring systems, special supports for the camera baseline, and pattern for camera calibration) and the computational steps (specific procedures for camera calibration, image segmentation, matching of homologous points in the stereo pairs, triangulation of such homologous points, and restitution of their world coordinates) are detailed. The critical aspect of the system is the marker mooring system, which cannot be used in the surf zone, where mass transport is significant.
The system has been tested by several experimental trials that highlighted both pros and cons of the proposed approach.
The technique offers the possibility of acquiring data simultaneously at a large number of points in large wave tanks, with no intrusiveness problems and with fair measurement reliability. Although this reliability depends on some experimental conditions (such as the shooting distance, i.e., the distance between a real point and the camera lens, or the brightness) and on the quality of the optical sensors, the laboratory experiments have shown that, even in the worst conditions, the difference between the results obtained with the standard measuring system and those derived from the new system is at most 6% in terms of significant wave height and 1% in terms of wave periods. For the maximum wave height, the error increases, with an underestimation of ~25%. This is likely due to the quasi-Lagrangian nature of the method, for which it was shown that the crests are flattened and the troughs sharpened.

Author Contributions

Conceptualization, M.G.M. and M.F.B.; Data curation, M.G.M., M.F.B., L.P., and A.R.; Formal analysis, M.G.M., M.F.B., D.P., and M.D.R.; Funding acquisition, M.M.; Investigation, M.G.M., M.F.B., L.P., A.R., G.P., and D.P.; Methodology, M.G.M., M.F.B., L.P., A.R., A.M., G.P., D.P., M.D.R., and M.M.; Resources, M.G.M., M.F.B., L.P., and A.R.; Software, M.G.M., M.F.B., A.R., and A.M.; Supervision, M.D.R. and M.M.; Validation, M.G.M., M.F.B., L.P., A.R., G.P., and D.P.; Visualization, M.G.M., M.F.B., A.M., and D.P.; Writing—original draft, M.G.M. and M.F.B.; Writing—review and editing, M.G.M., M.F.B., L.P.; A.R., A.M., G.P., D.P., M.D.R., and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors thank Antonio Mongelli for the fruitful collaboration.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Romano, A.; Di Risio, M.; Molfetta, M.G.; Bellotti, G.; Pasquali, D.; Sammarco, P.; Damiani, L.; De Girolamo, P. 3D physical modeling of tsunamis generated by submerged landslides at a conical island: The role of initial acceleration. In Proceedings of the Coastal Engineering Conference, Antalya, Turkey, 17–20 November 2016; Volume 35. [Google Scholar]
  2. Saponieri, A.; Valentini, N.; Di Risio, M.; Pasquali, D.; Damiani, L. Laboratory investigation on the evolution of a sandy beach nourishment protected by a mixed soft-hard system. Water 2018, 10, 1171. [Google Scholar] [CrossRef] [Green Version]
  3. Celli, D.; Pasquali, D.; De Girolamo, P.; Di Risio, M. Effects of submerged berms on the stability of conventional rubble mound breakwaters. Coast. Eng. 2018, 136, 16–25. [Google Scholar] [CrossRef]
  4. Di Risio, M.; Bruschi, A.; Lisi, I.; Pesarino, V.; Pasquali, D. Comparative analysis of coastal flooding vulnerability and hazard assessment at national scale. J. Mar. Sci. Eng. 2017, 5, 51. [Google Scholar] [CrossRef] [Green Version]
  5. Archetti, R.; Damiani, L.; Bianchini, A.; Romagnoli, C.; Abbiati, M.; Addona, F.; Airoldi, L.; Cantelli, L.; Gaeta, M.G.; Guerrero, M.; et al. Innovative strategies, monitoring and analysis of the coastal erosion risk: The stimare project. In Proceedings of the 29th International Offshore and Polar Engineering Conference, Honolulu, HI, USA, 16–21 June 2019; Volume 3, pp. 3836–3841. [Google Scholar]
  6. Bruno, M.F.; Molfetta, M.G.; Pratola, L.; Mossa, M.; Nutricato, R.; Morea, A.; Nitti, D.O.; Chiaradia, M.T. A combined approach of field data and earth observation for coastal risk assessment. Sensors 2019, 19, 1399. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Whittenbury, C.; Huber, E.; Newell, G. Instrument for measuring water waves. Rev. Sci. Instrum. 1959, 30, 674–676. [Google Scholar] [CrossRef]
  8. Allis, M.J.; Peirson, W.L.; Banner, M.L. Application of LiDAR as a measurement tool for waves. In Proceedings of the The Twenty-first International Offshore and Polar Engineering Conference, Maui, HI, USA, 19–24 June 2011; pp. 19–24. [Google Scholar]
  9. Blenkinsopp, C.E.; Turner, I.L.; Allis, M.J.; Peirson, W.L.; Garden, L.E. Application of LiDAR technology for measurement of time-varying free-surface profiles in a laboratory wave flume. Coast. Eng. 2012, 68, 1–5. [Google Scholar] [CrossRef]
  10. Schumacher, A.; Defant, A. Stereophotogrammetrische Wellenaufnahmen; W. de Gruyter: Hamburg, Germany, 1939; Volume 7. [Google Scholar]
  11. Shemdin, O.H.; Tran, H.M.; Wu, S. Directional measurement of short ocean waves with stereophotography. J. Geophys. Res. Ocean. (1978–2012) 1988, 93, 13891–13901. [Google Scholar] [CrossRef]
  12. Dugan, J.; Piotrowski, C.; Williams, J. Water depth and surface current retrievals from airborne optical measurements of surface gravity wave dispersion. J. Geophys. Res. Ocean. (1978–2012) 2001, 106, 16903–16915. [Google Scholar] [CrossRef] [Green Version]
  13. Valentini, N.; Damiani, L.; Molfetta, M.G.; Saponieri, A. New coastal video-monitoring system achievement and development. Coast. Eng. Proc. 2017, 1, 11. [Google Scholar] [CrossRef]
  14. Valentini, N.; Saponieri, A.; Molfetta, M.G.; Damiani, L. New algorithms for shoreline monitoring from coastal video systems. Earth Sci. Inform. 2017, 10, 495–506. [Google Scholar] [CrossRef]
  15. Valentini, N.; Saponieri, A.; Danisi, A.; Pratola, L.; Damiani, L. Exploiting remote imagery in an embayed sandy beach for the validation of a runup model framework. Estuar. Coast. Shelf Sci. 2019, 225, 106244. [Google Scholar] [CrossRef] [Green Version]
  16. Stockdon, H.F.; Holman, R.A. Estimation of wave phase speed and nearshore bathymetry from video imagery. J. Geophys. Res. Ocean. (1978–2012) 2000, 105, 22015–22033. [Google Scholar] [CrossRef]
  17. Bouvier, C.; Balouin, Y.; Castelle, B.; Valentini, N. Video Depth Inversion at a Microtidal Site Exposed to Prevailing Low-energy Short-period Waves and Episodic Severe Storms. J. Coast. Res. 2020, 95, 1021–1026. [Google Scholar] [CrossRef]
  18. Simarro, G.; Calvete, D.; Luque, P.; Orfila, A.; Ribas, F. UBathy: A New Approach for Bathymetric Inversion from Video Imagery. Remote Sens. 2019, 11, 2722. [Google Scholar] [CrossRef] [Green Version]
  19. Melito, L.; Parlagreco, L.; Perugini, E.; Postacchini, M.; Zitti, G.; Brocchini, M. Monitoring for coastal resilience: A project for five Italian beaches. In Proceedings of the 2018 IEEE International Workshop on Metrology for the Sea; Learning to Measure Sea Health Parameters (MetroSea), Bari, Italy, 8–10 October 2018; pp. 76–81. [Google Scholar]
  20. Hughes, S.A. Physical Models and Laboratory Techniques in Coastal Engineering; World Scientific: Singapore, 1993; Volume 7. [Google Scholar]
  21. Santel, F.; Linder, W.; Heipke, C. Stereoscopic 3D-image sequence analysis of sea surfaces. In Proceedings of the ISPRS Commission V Symposium, Istanbul, Turkey, 12–23 July 2004; Volume 35, pp. 708–712. [Google Scholar]
  22. De Vries, S.; Hill, D.; De Schipper, M.; Stive, M. Remote sensing of surf zone waves using stereo imaging. Coast. Eng. 2011, 58, 239–250. [Google Scholar] [CrossRef]
  23. Wanek, J.M.; Wu, C.H. Automated trinocular stereo imaging system for three-dimensional surface wave measurements. Ocean Eng. 2006, 33, 723–747. [Google Scholar] [CrossRef]
  24. Benetazzo, A. Measurements of short water waves using stereo matched image sequences. Coast. Eng. 2006, 53, 1013–1032. [Google Scholar] [CrossRef]
  25. Bechle, A.J.; Wu, C.H. Virtual wave gauges based upon stereo imaging for measuring surface wave characteristics. Coast. Eng. 2011, 58, 305–316. [Google Scholar] [CrossRef]
  26. Benetazzo, A.; Fedele, F.; Gallego, G.; Shih, P.C.; Yezzi, A. Offshore stereo measurements of gravity waves. Coast. Eng. 2012, 64, 127–138. [Google Scholar] [CrossRef] [Green Version]
  27. Benetazzo, A.; Ardhuin, F.; Bergamasco, F.; Cavaleri, L.; Guimarães, P.V.; Schwendeman, M.; Sclavo, M.; Thomson, J.; Torsello, A. On the shape and likelihood of oceanic rogue waves. Sci. Rep. 2017, 7, 1–11. [Google Scholar] [CrossRef]
  28. Schwendeman, M.S.; Thomson, J. Sharp-crested breaking surface waves observed from a ship-based stereo video system. J. Phys. Oceanogr. 2017, 47, 775–792. [Google Scholar] [CrossRef]
  29. Guimarães, P.V.; Ardhuin, F.; Bergamasco, F.; Leckler, F.; Filipot, J.F.; Shim, J.S.; Dulov, V.; Benetazzo, A. A data set of sea surface stereo images to resolve space-time wave fields. Sci. Data 2020, 7, 1–12. [Google Scholar] [CrossRef]
  30. Bergamasco, F.; Torsello, A.; Sclavo, M.; Barbariol, F.; Benetazzo, A. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves. Comput. Geosci. 2017, 107, 28–36. [Google Scholar] [CrossRef]
  31. Aubourg, Q.; Sommeria, J.; Viboud, S.; Mordant, N. Combined stereoscopic wave mapping and particle image velocimetry. arXiv 2019, arXiv:1902.05819. [Google Scholar]
  32. Sutherland, P.; Melville, W.K. Field measurements of surface and near-surface turbulence in the presence of breaking waves. J. Phys. Oceanogr. 2015, 45, 943–965. [Google Scholar] [CrossRef]
  33. Jähne, B. Spatio-Temporal Image Processing. Comput. Sci. 1993, 751, 143–152. [Google Scholar]
  34. Brown, D.C. Decentering distortion of lenses. Photome. Eng. 1966, 32, 444–462. [Google Scholar]
  35. Conrady, A. Lens-systems, decentered. Mon. Not. R. Astron. Soc. 1919, 79, 384–390. [Google Scholar] [CrossRef] [Green Version]
  36. Tsai, R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [Google Scholar] [CrossRef] [Green Version]
  37. Zhang, Z. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 1, pp. 666–673. [Google Scholar]
  38. Furukawa, Y.; Ponce, J. Accurate camera calibration from multi-view stereo and bundle adjustment. Int. J. Comput. Vis. 2009, 84, 257–268. [Google Scholar] [CrossRef]
  39. Xu, G.; Zhang, Z. Epipolar Geometry in Stereo, Motion and Object Recognition: A Unified Approach; Springer Science & Business Media: Berlin, Germany, 2013; Volume 6. [Google Scholar]
  40. Fusiello, A.; Trucco, E.; Verri, A. A compact algorithm for rectification of stereo pairs. Mach. Vis. Appl. 2000, 12, 16–22. [Google Scholar] [CrossRef]
  41. Faltinsen, O. Sea Loads on Ships and Offshore Structures; Cambridge University Press: Cambridge, UK, 1993; Volume 1. [Google Scholar]
  42. Lee, S.H.; Moon, J.; Lee, M. A region of interest based image segmentation method using a biologically motivated selective attention model. In Proceedings of the 2006 IEEE International Joint Conference on Neural Network Proceedings, Vancouver, BC, Canada, 16–21 July 2006; pp. 1413–1420. [Google Scholar]
  43. Lempitsky, V.; Kohli, P.; Rother, C.; Sharp, T. Image segmentation with a bounding box prior. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 277–284. [Google Scholar]
  44. Bora, D.J.; Gupta, A.K.; Khan, F.A. Comparing the performance of L* A* B* and HSV color spaces with respect to color image segmentation. arXiv 2015, arXiv:1506.01472. [Google Scholar]
  45. Hough, P.V. Machine analysis of bubble chamber pictures. In Proceedings of the International Conference on High Energy Accelerators and Instrumentation, Geneva, Switzerland, 14–19 September 1959; Volume 73. [Google Scholar]
  46. Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15. [Google Scholar] [CrossRef]
  47. Malge, P.; Nadaf, R. PCB defect detection, classification and localization using mathematical morphology and image processing tools. Int. J. Comput. Appl. 2014, 87. [Google Scholar]
  48. Zhao, Y. Research of star pattern recognition centroid extraction algorithm. Space Electron. Technol. 2004, 4, 5–8. [Google Scholar]
  49. Faugeras, O.; Luong, Q.T.; Papadopoulo, T. The Geometry of Multiple Images: The Laws That Govern the Formation of Multiple Images of a Scene and Some of Their Applications; MIT Press: Boston, MA, USA, 2004. [Google Scholar]
  50. Longuet-Higgins, M. On the joint distribution of the periods and amplitudes of sea waves. J. Geophys. Res. 1975, 80, 2688–2694. [Google Scholar] [CrossRef]
  51. Magnusson, A.; Donelan, M.; Drennan, W.M. On estimating extremes in an evolving wave field. Coast. Eng. 1999, 36, 147–163. [Google Scholar] [CrossRef]
Figure 1. Evaluation of the x and y (on the left panel) and z (on the right panel) components of the resolution in camera reference system.
Figure 2. Evaluation of the “real-world system error”: representation of camera and world reference frames (a) and of vertical component of motion in camera reference frame (b).
Figure 3. A marker (red-painted table tennis plastic ball, on the right) and its mooring system.
Figure 4. Flow chart of the procedures applied to the stereo-pair images.
Figure 5. “Multi-search box” procedure: bounding boxes superimposed, for each marker, on the left frame (left panel) and on the right frame (right panel).
Figure 6. Experimental layout: Wave Lab Stereo System (WLSS) instrumentation (cameras and markers from M1 to M9) and the resistive probe (WG).
Figure 7. Sketch of cameras and markers positions: the table reports the varying experimental parameters (D is the distance from the markers to the cameras; H is the height of the cameras above the still water level).
Figure 8. The segmentation step: stereo-pair images (upper panels) and markers detection (lower panels).
Figure 9. Test A: segmentation error histograms for each marker in the stereo-pair images.
Figure 10. Test B: segmentation error histograms for each marker in the stereo-pair images. Markers M1, M2, and M3 are placed at 40 m far from the cameras; M4, M5, and M6 at 17 m; and M7, M8, and M9 at 9 m.
Figure 11. Test C experimental layout: section of the wave tank and moored markers.
Figure 12. Test C: segmentation error histograms for each marker in the stereo-pair images. Markers M1, M2, and M3 are placed at 15 m far from the cameras; M4, M5, and M6 at 13 m; and M7, M8, and M9 at 11 m.
Figure 13. Histograms of the water surface elevation time series estimated from the wave gauge (WG) and markers (from M1 to M9).
Figure 14. Qualitative sketches of the marker configuration (circles: WG refers to the Eulerian wave gauges and M to the markers) along with the relative error distributions for all the synthetic parameters of the sea state (see Table 3).
Figure 15. One of the estimated free surface elevation (z) time series. The shaded areas highlight the effects of Lagrangian measurement in flattening the crests.
Figure 16. Comparison between energy density spectra estimated from marker M5 and wave gauge WG. Note the log-scale of the vertical axis.
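The spectra compared in Figure 16 can be illustrated with a minimal one-sided periodogram. The sketch below is written under assumed conventions (plain FFT estimate, no windowing or segment averaging) and is not the spectral estimator actually used by the authors:

```python
import numpy as np

def energy_density_spectrum(eta, dt):
    """One-sided energy density spectrum E(f) of an elevation record,
    normalised so that sum(E) * df equals the variance of the signal."""
    eta = np.asarray(eta, dtype=float)
    eta = eta - eta.mean()
    n = len(eta)
    freqs = np.fft.rfftfreq(n, d=dt)        # 0 ... Nyquist [Hz]
    X = np.fft.rfft(eta)
    E = (2.0 * dt / n) * np.abs(X) ** 2     # one-sided periodogram
    E[0] = 0.0                              # mean already removed
    if n % 2 == 0:
        E[-1] /= 2.0                        # Nyquist bin is not doubled
    return freqs, E

# 0.5 Hz sine sampled at 10 Hz for 100 s: all energy in one frequency bin
t = np.arange(1000) * 0.1
eta = 30.0 * np.sin(2.0 * np.pi * 0.5 * t)
f, E = energy_density_spectrum(eta, 0.1)
```

The normalisation is chosen so that the integral of E(f) over frequency returns the variance of the record, which allows a direct consistency check against the standard deviations reported in Table 2.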
Table 1. Main technical features of the camera.
Model: Canon EOS 5D Mark III
Camera sensor: CMOS, 36 × 24 mm
Effective pixels: 22.3 MP
Total pixels: 23.4 MP
Low-pass filter: integrated/fixed, with fluorine coating
Lens type: Canon EF 50 mm f/1.4 USM
Field angle (horizontal, vertical, diagonal): 40°, 27°, 46°
Minimum focusing distance: 0.45 m
Max image size: JPEG 5760 × 3840; RAW 5760 × 3840
Movie type: MOV (video: intra-frame/inter-frame, 8-bit uncompressed)
Max movie dimensions: 1920 × 1080 (29 fps), intra or inter frame
Table 2. Statistics of water surface elevations estimated from the wave gauge (WG) and markers (from M1 to M9). Min. el. indicates the lowest observed free surface elevation, Max. el. the highest observed free surface elevation, and St. Dev. the standard deviation.
STATISTICS OF WATER SURFACE ELEVATIONS

Instrument | Min. el. [mm] | Max. el. [mm] | St. Dev. [mm] | Skewness | Kurtosis
WG | −36.32 | 77.41 | 14.29 | 0.77 | 4.67
M1 | −38.51 | 47.49 | 13.00 | 0.10 | 3.31
M2 | −36.23 | 57.25 | 13.51 | 0.48 | 3.34
M3 | −38.82 | 54.28 | 12.60 | 0.51 | 3.47
M4 | −44.98 | 40.38 | 12.45 | −0.02 | 3.16
M5 | −34.90 | 63.38 | 13.95 | 0.47 | 3.64
M6 | −43.36 | 53.33 | 12.76 | 0.26 | 3.90
M7 | −37.74 | 53.98 | 11.95 | 0.18 | 3.52
M8 | −32.72 | 54.33 | 12.38 | 0.44 | 3.50
M9 | −41.28 | 66.09 | 13.47 | 0.41 | 3.88
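The columns of Table 2 are standard sample moments and can be reproduced from any elevation record. The sketch below is illustrative only (the function name is an assumption); the kurtosis is the non-excess moment ratio, consistent with the values close to 3 reported above:

```python
import numpy as np

def elevation_statistics(eta):
    """Sample statistics matching the columns of Table 2.

    Skewness and kurtosis are the usual moment ratios; the kurtosis is
    non-excess (a Gaussian record gives 3), consistent with the table.
    """
    eta = np.asarray(eta, dtype=float)
    eta = eta - eta.mean()               # refer to the mean water level
    std = eta.std()
    return {
        "min_el": eta.min(),
        "max_el": eta.max(),
        "st_dev": std,
        "skewness": np.mean(eta ** 3) / std ** 3,
        "kurtosis": np.mean(eta ** 4) / std ** 4,
    }

# Sanity check on a pure sine wave: zero skewness, kurtosis 1.5
t = np.linspace(0.0, 10.0, 2000, endpoint=False)
stats = elevation_statistics(20.0 * np.sin(2.0 * np.pi * t))
```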
Table 3. Statistics of wave parameters estimated from the wave gauge (WG) and markers (from M1 to M9). N_waves indicates the number of observed individual waves, H_max is the highest individual wave height, H_mean is the average individual wave height, T_mean is the average individual wave period, H_rms is the root mean square individual wave height, H_1/10 and T_1/10 are the average height and period of the 1/10 highest individual waves, and H_1/3 and T_1/3 are the average height and period of the 1/3 highest individual waves.
WAVE PARAMETERS STATISTICS

Instrument | N_waves | H_max [mm] | H_mean [mm] | T_mean [s] | H_rms [mm] | H_1/10 [mm] | T_1/10 [s] | H_1/3 [mm] | T_1/3 [s]
WG | 255 | 105.32 | 32.65 | 0.80 | 38.35 | 72.40 | 1.02 | 55.89 | 0.97
M1 | 266 | 74.96 | 29.83 | 0.77 | 33.90 | 61.47 | 1.00 | 48.48 | 0.96
M2 | 220 | 82.76 | 33.09 | 0.93 | 36.84 | 64.59 | 1.05 | 51.99 | 1.03
M3 | 237 | 74.08 | 28.78 | 0.80 | 33.02 | 59.46 | 1.06 | 47.98 | 0.93
M4 | 265 | 76.97 | 28.56 | 0.76 | 32.51 | 57.99 | 1.06 | 46.39 | 1.01
M5 | 255 | 79.08 | 32.26 | 0.86 | 36.83 | 64.91 | 1.11 | 52.74 | 0.98
M6 | 256 | 93.09 | 28.49 | 0.80 | 33.49 | 63.89 | 1.03 | 49.33 | 1.00
M7 | 262 | 85.08 | 27.44 | 0.78 | 31.43 | 57.59 | 1.14 | 45.23 | 1.01
M8 | 255 | 72.66 | 28.47 | 0.80 | 32.34 | 58.81 | 0.99 | 46.64 | 0.96
M9 | 240 | 79.80 | 31.75 | 0.85 | 36.69 | 68.18 | 1.06 | 53.29 | 1.02
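Individual-wave statistics such as those of Table 3 are conventionally extracted with a zero-up-crossing analysis. The following sketch implements that standard procedure as an illustration (the function name and the handling of the record ends are assumptions, not the authors' code):

```python
import numpy as np

def zero_upcrossing_waves(eta, dt):
    """Zero-up-crossing analysis of a surface elevation record.

    eta : elevation samples (any units), dt : sampling interval [s].
    Returns the individual-wave statistics used in Table 3 (H_1/10 and
    T_1/10 would follow analogously from the highest tenth of the waves).
    """
    eta = np.asarray(eta, dtype=float)
    eta = eta - eta.mean()                   # refer to the mean water level
    # sample indices where the record crosses zero going upward
    up = np.where((eta[:-1] < 0.0) & (eta[1:] >= 0.0))[0]
    heights, periods = [], []
    for i0, i1 in zip(up[:-1], up[1:]):      # one wave per crossing pair
        segment = eta[i0:i1 + 1]
        heights.append(segment.max() - segment.min())
        periods.append((i1 - i0) * dt)
    order = np.argsort(heights)[::-1]        # waves sorted by height
    H = np.asarray(heights)[order]
    T = np.asarray(periods)[order]
    n3 = max(1, len(H) // 3)                 # the highest third
    return {
        "N_waves": len(H),
        "H_max": H[0],
        "H_mean": H.mean(),
        "T_mean": np.mean(periods),
        "H_rms": np.sqrt(np.mean(H ** 2)),
        "H_1_3": H[:n3].mean(),
        "T_1_3": T[:n3].mean(),
    }

# Regular sine check: a 100-unit-high, 2 s wave train over a 20 s record
t = np.arange(0.0, 20.0, 0.01)
res = zero_upcrossing_waves(50.0 * np.sin(np.pi * t + 0.3), 0.01)
```

For a regular wave train all height statistics coincide; the spread among them in Table 3 reflects the irregularity of the generated sea states.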

Share and Cite

MDPI and ACS Style

Molfetta, M.G.; Bruno, M.F.; Pratola, L.; Rinaldi, A.; Morea, A.; Preziosa, G.; Pasquali, D.; Di Risio, M.; Mossa, M. A Sterescopic System to Measure Water Waves in Laboratories. Remote Sens. 2020, 12, 2288. https://doi.org/10.3390/rs12142288

AMA Style

Molfetta MG, Bruno MF, Pratola L, Rinaldi A, Morea A, Preziosa G, Pasquali D, Di Risio M, Mossa M. A Sterescopic System to Measure Water Waves in Laboratories. Remote Sensing. 2020; 12(14):2288. https://doi.org/10.3390/rs12142288

Chicago/Turabian Style

Molfetta, Matteo Gianluca, Maria Francesca Bruno, Luigi Pratola, Antonio Rinaldi, Alberto Morea, Giovanni Preziosa, Davide Pasquali, Marcello Di Risio, and Michele Mossa. 2020. "A Sterescopic System to Measure Water Waves in Laboratories" Remote Sensing 12, no. 14: 2288. https://doi.org/10.3390/rs12142288

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
