Technical Note

UAV Remote Sensing of Submerged Marine Heritage: The Tirpitz Wreck Site, Håkøya, Norway

1 Scott Polar Research Institute, University of Cambridge, Lensfield Road, Cambridge CB2 1ER, UK
2 Department of Technology and Safety, UiT Arctic University of Norway, 9037 Tromsø, Norway
3 Biosciences, Fisheries and Economics, UiT Arctic University of Norway, 9037 Tromsø, Norway
4 Arctic Museum of Norway, UiT Arctic University of Norway, 9037 Tromsø, Norway
* Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(1), 45; https://doi.org/10.3390/rs18010045
Submission received: 17 October 2025 / Revised: 17 December 2025 / Accepted: 18 December 2025 / Published: 23 December 2025

Highlights

What are the main findings?
  • UAV-based through-surface photogrammetry successfully produced accurate (<10 cm) reconstructions of shallow submerged structures.
  • Bathymetric estimation in shallow (<10 m) water using structure from motion applied to through-surface UAV data proved more effective than spectral ratio methods such as the Stumpf model.
What is the implication of the main finding?
  • Low-cost, easily implementable UAV systems can be used effectively to document underwater cultural heritage in shallow, clear water, providing a practical alternative to surveys using divers or vessels.
  • This highlights the potential of SfM for shallow-water bathymetry in situations where the sea floor is optically heterogeneous.

Abstract

This study evaluates the use of UAV-based photogrammetry to document shallow submerged cultural heritage, focusing on the Tirpitz wreck salvage site near Håkøya, Norway. Using a DJI Phantom 4 Multispectral drone, we acquired RGB and multispectral imagery over structures located at depths of up to 5–10 m. Structure-from-motion (SfM) processing enabled the three-dimensional reconstruction of submerged features, including a 52 × 10 m wharf and adjacent debris piles, with an accuracy of the order of 10 cm. Our data represent the first accurate mapping of the site yet carried out, with an absolute position uncertainty estimated to be no greater than 3 m. The volume of imaged debris could be estimated, using a background subtraction method to allow for variable bathymetry, at around 350 m³. Bathymetric data for the sea floor could be derived effectively from an SfM point cloud, but less effectively by applying the Stumpf model to the multispectral data, as a result of significant spectral variation in sea-floor reflectance. Our results show that UAV-based through-surface SfM is a viable, low-cost method for reconstructing submerged heritage with high spatial accuracy. These findings support the integration of UAV-based remote sensing into heritage and environmental monitoring frameworks for shallow aquatic environments.

1. Introduction

The use of UAVs (unmanned/uncrewed/unoccupied aerial vehicles, or, in popular usage, ‘drones’) as platforms for collecting scientifically valid data has increased enormously over the last few years in many fields of study, including marine environments [1,2,3]. UAV platforms, and the unmanned aerial systems (UASs) of which they form a part, provide a number of advantages compared with satellite remote sensing data, including an increased flexibility of operation and the potential for centimetric spatial resolution, though with some disadvantages, including the need for higher technical skill on the part of the operator and a more restrictive legislative environment [4]. UAVs provide a platform from which different types of sensors can be deployed, though the commonest such sensor remains the three-band-colour digital camera (‘RGB imager’). Multispectral imagers, typically also providing one or more spectral bands in the near-infrared part of the spectrum, are also often carried via UAVs.
In general, the digital images acquired from such systems, viewing downwards towards the earth’s surface, can be analysed quantitatively using the methods of multispectral image classification [5,6,7] to classify different surface materials on the basis of their spectral signature (characteristic variation of reflectance with wavelength). Such methods have been very extensively developed and refined for digital RGB and multispectral imagery acquired from spaceborne, airborne, and handheld platforms [8,9,10], and object-based classification, again well established for satellite-borne imagery, is increasingly finding its way into the analysis of data collected from UAV platforms [11].
The flexibility of image collection from UAVs has also led to their increasing popularity for generating three-dimensional datasets in which the spatial coordinates of imaged objects are deduced from the different geometrical perspectives present in overlapping images acquired from different viewpoints, using the structure-from-motion technique [12,13,14]. This can be used qualitatively to visualise objects as they would be seen from vantage points not physically realised, and quantitatively for the mensuration of remote objects.
UAV-based imaging at a very high spatial resolution is routinely possible for objects located within the aerial space. A search of the Web of Science database (https://www.webofscience.com/wos/woscc/basic-search accessed 15 July 2025) at the time of writing (July 2025) showed 1080 documents with author-supplied keywords including both ‘UAV’ and ‘photogrammetry’, dating from 1998. However, the UAV-based imaging of objects located below a water surface is a much less well-established technique. A similar Web of Science search for the terms ‘UAV’ and ‘bathymetry’ located 54 documents, with the earliest being dated to 2013, and a search for ‘UAV’ and ‘two-media’ found only six documents, dating from 2014 onwards. Photogrammetric bathymetry from a UAV platform was demonstrated in 2016 [15], and some of the technical difficulties have been explored since then [1,16,17]. Recent publications suggest that two-medium imaging and geometric reconstruction can be achieved to depths of up to 5–10 m and with spatial resolutions approaching 10 cm [18,19], although no systematic investigation of this potential, and of the factors affecting performance, has yet been reported to the best of the authors’ knowledge.
One of the most challenging aspects of two-media photogrammetry is that of refraction at the air–water interface. The two-medium problem in photogrammetry has been recognised and addressed for three-quarters of a century [20]. The usual SfM process for reconstructing three-dimensional geometry assumes a rectilinear propagation of light from the target object to the imaging device [21,22], whereas refraction at the interface introduces nonlinearity to the propagation. The deleterious effect on imaging quality is known to increase with depth [21]. The rigorous modification of the ray-tracing procedures employed in SfM geometrical reconstruction is complicated (although the physics is well understood) and slows an already computationally intense process [16,23], so there is a clear interest in testing whether simpler methods of approximately correcting for refractive effects can be successful [24].
This study investigates the feasibility of using UAV-based imaging, including SfM, to document shallow submerged cultural heritage features. The study location is the historic Tirpitz wreck salvage site near Håkøya, Norway. Our aim is to test whether centimetric-scale reconstructions of submerged structures are possible using off-the-shelf UAV hardware and standard processing software and to explore the implications of such techniques for archaeological and environmental monitoring. After demonstrating that successful two-medium retrievals are possible, we pay particular attention to the scope for determining 2- and 3-dimensional geometries of materials lying on the sea floor, and we also consider the scope offered by UAV multispectral imagery for extracting bathymetric data for the sea floor itself.
Our analytical approach prioritises the understanding and validation of the data processing chain over computational efficiency. We, therefore, adopt a ‘tiered’ approach combining GIS-based spatial processing with lower-level image analysis and numerical tools that allow us to interrogate individual steps in isolation before integrating them into a unified workflow. This gives us greater confidence in the robustness of our results. However, we emphasise that the final workflow is not dependent on the use of these latter tools and can be implemented in standard GIS software such as QGIS. It is also our aim to generate documentary evidence of the wreck site.

2. Materials and Methods

2.1. Study Site

The study site is located off the southern shore of Håkøya Island in Troms County, Norway, at approximately 69.647°N, 18.806°E (Figure 1). The site was the last location of the Second World War German battleship Tirpitz, deployed to Norway in 1942 and finally sunk at anchor by Royal Air Force bombers on 12 November 1944, following earlier attempts by the Royal Navy, the Soviet Red Army Air Force, and the RAF (Figure 2). During the subsequent salvage (1947 to 1958), many pollutants were released, and commercially worthless material was dumped on the seabed. The site continues to be of significance and to attract political debate. As a Second World War shipwreck site, it is representative of the main category of submerged cultural heritage in northern Norway, with vessels built over 100 years ago automatically protected by the Norwegian Cultural Heritage Act. The ship itself has been removed, but the site remains popular with divers. An attempt to protect the site under the Cultural Heritage Act, based in part on the threat from the widespread unauthorised removal of artefacts and as a political initiative to provide a more balanced representation of WWII heritage, was abandoned in 2016 as a result of opposition from the diving community and others. For these reasons, the site is unusually attractive as a location in which to investigate the possibilities offered by robotic investigation. It has been the focus of the Tirpitz Site Project of the Arctic Legacy of War programme [25] (https://en.uit.no/project/Arcticlow, accessed 15 December 2025) of the UiT Arctic University of Norway since 2023. However, it appears that no detailed mapping of the site has been undertaken.
While this statement is difficult to substantiate formally since a lack of documentation is not usually something commented upon, we note that the report ‘Tirpitz ved Tromsø: Krigshistorie og opphogging Rapport’ [Tirpitz near Tromsø: war history and scrapping report] 02/00848 of the Tromsø kommune in 2002 mentions existing investigations and documentation limited to environmental surveys.
The principal visible manifestations of the site, and the objects of this study, are piles of salvage debris and the remains of a wharf that was constructed as part of the salvage operation. These lie in shallow water at a nominal depth of around 5 m. At low tide, the wharf structure can be partially exposed above the surface.

2.2. UAV Platform and Data Acquisition

We deployed a DJI Phantom 4 Multispectral UAV, equipped with both RGB and multispectral sensors. The multispectral sensor has five bands, at 450 ± 16 nm, 560 ± 16 nm, 650 ± 16 nm, 730 ± 16 nm, and 840 ± 26 nm (blue, green, red, red edge, and near-infrared, respectively), with a 1600 × 1300 pixel resolution and an angular resolution of 0.43 milliradians per pixel. The RGB camera has wider coverage (5472 × 4104 pixel resolution) and a finer angular resolution (0.23 milliradians).
We determined that the optimal time of year for acquiring data would be after the return of sunlight at this arctic location, but before the spring algal bloom [26]. In consultation with members of the local diving community, who have dived on the site for several decades, late March was chosen for the initial survey and data collection took place on 28 March 2023. The flying altitude was around 60–70 m, giving pixel resolutions of around 3 cm (multispectral) and 1.6 cm (RGB). Flight paths were programmed to maintain a constant orientation of solar illumination and shadows, in order to simplify image processing. The image overlap was set to 80%. UAV operations were coordinated with local Air Traffic Control due to the site’s proximity to Tromsø airport, and the UAV was maintained in line of sight, and personally observed, throughout the operations. Although data collection took place around low tide, the wharf was fully submerged.

2.3. Image Preprocessing

In this section, and subsequently, bold letters in parentheses, (a) to (h), are used to identify steps in the data processing summarised in Figure 3.
A total of 976 images were collected. Preprocessing was conducted using the Pix4D software (https://www.pix4d.com/ accessed 15 December 2025), version 4.9.0, without any special precautions or refraction correction for the two-medium imaging of submerged objects (this is discussed in Section 2.4). We followed the standard processing procedures with default values, including filtering the images for quality, assessing the quality of alignment and frame overlap, and producing dense point clouds, orthomosaics, and DEMs. For the multispectral images, radiometric calibration to reflectance was also carried out, using preflight images of calibration targets. Two sets of images were analysed: one covering the entire site (828 images) and another focused on the wharf (148 images). This resulted in two principal data products: a 5-band georeferenced multispectral orthomosaic (a), with a pixel size of 2.2 cm, clipped to the area of interest defined by the wharf and the debris piles, and a point cloud (b) of 494,176 georeferenced points at an average density of around 16 points per square metre (calculated from the area of the convex hull). Point densities around the debris piles and wharf were typically above 25 per square metre (Figure 4). The point cloud was visualised within the Pix4D software, giving the first indication of the success of the SfM processing and the quality of the data (Figure 5).
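The quoted average point density can be checked directly from an exported point cloud. The following is a minimal sketch; the filename and the use of SciPy's convex hull are our illustrative assumptions, not part of the Pix4D workflow:

```python
import numpy as np
from scipy.spatial import ConvexHull

def mean_point_density(xy):
    """Points per square metre over the convex hull of the planimetric
    (x, y) coordinates. For a 2D hull, ConvexHull.volume is the enclosed
    area (ConvexHull.area would be the perimeter)."""
    hull = ConvexHull(xy)
    return len(xy) / hull.volume

# Hypothetical export of the SfM point cloud (x y z, one point per row):
# xyz = np.loadtxt("pointcloud.xyz")
# print(mean_point_density(xyz[:, :2]))
```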

2.4. Refraction Model

As noted above, one serious concern regarding two-medium imaging is whether refraction effects at the surface are large enough to invalidate the assumptions made in applying normal ray-tracing methods for three-dimensional reconstruction from images. Here, we consider the potential influence of refraction on geometrical reconstruction by developing a simple model, following [18,24] and assuming that photogrammetric image processing does not otherwise allow for the effects of refraction at the water–air interface. To obtain a quantitative understanding of the effect of refraction, we consider the case where a point object is imaged just twice, at angles of φ1 and φ2 to the vertical. The object is assumed to be located at depth d below the water surface, and its apparent position (defined as the intersection of the imaging rays in air) is located at depth d′ below the surface and displaced by horizontal distance x (Figure 6). The refractive index of water relative to air is n.
A straightforward application of Snell’s law of refraction shows that
\[ \frac{x}{d} = \frac{\dfrac{\cos\phi_1}{\sqrt{n^2 - \sin^2\phi_1}} - \dfrac{\cos\phi_2}{\sqrt{n^2 - \sin^2\phi_2}}}{\cot\phi_1 - \cot\phi_2} \tag{1} \]
and that
\[ \frac{d'}{d} = \frac{\cos\phi}{\sqrt{n^2 - \sin^2\phi}} - \frac{x}{d}\cot\phi \tag{2} \]
where φ can denote either φ1 or φ2. As is well known, when values of |φ| are sufficiently small, (2) is approximated as
\[ \frac{d'}{d} \approx \frac{1}{n} \tag{3} \]
Equations (1) and (2) are plotted in Figure 7, for a nominal refractive index n = 1.33. The figure shows that, for example, for viewing angles up to about 30° from the nadir, the assumption that the imaged point is not displaced horizontally and that Equation (3) is valid is accurate to about 0.05 d. It is thus reasonable to assume that the reconstruction of a point cloud for a submerged object will be successful for images acquired reasonably close to vertical observation and that the only correction that would need to be applied would be to compensate for the effect of Equation (3) on the position of submerged points. This optimistic conclusion forms the basis of the UAV-based studies described below.
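As a check on this reasoning, Equations (1) and (2) can be evaluated numerically for a pair of viewing angles. The following is a minimal sketch (the function name is ours; the two-ray geometry follows Figure 6):

```python
import numpy as np

N_WATER = 1.33  # nominal refractive index of water relative to air

def apparent_position(phi1, phi2, n=N_WATER):
    """Normalised apparent position (x/d, d'/d) of a submerged point
    imaged at angles phi1, phi2 (radians) from the vertical, following
    Equations (1) and (2)."""
    def g(phi):
        # cos(phi) / sqrt(n^2 - sin^2(phi)), the repeated term in (1)-(2)
        return np.cos(phi) / np.sqrt(n**2 - np.sin(phi)**2)
    x_over_d = (g(phi1) - g(phi2)) / (1 / np.tan(phi1) - 1 / np.tan(phi2))
    dprime_over_d = g(phi1) - x_over_d / np.tan(phi1)   # Equation (2)
    return x_over_d, dprime_over_d

# Near-nadir viewing: the apparent depth tends to d/n (Equation (3))
x, dp = apparent_position(np.radians(5.0), np.radians(10.0))
print(x, dp)  # small horizontal shift; dp close to 1/1.33 ~ 0.75
```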

2.5. Bathymetric Analysis

It is well known that multispectral data from shallow water can be used to estimate the depth of the water column to the sea floor, as a consequence of the different absorption coefficients in water for light of different wavelengths. Although Lyzenga’s [27] approach is closely based on Beer’s law of absorption, the ‘Stumpf model’ [28] is usually preferred since it requires fewer parameters to be tuned and is less sensitive to variations in bottom reflectance. According to the latter model, the depth z is estimated by a function
\[ z = m_1 \frac{\ln(n r_1)}{\ln(n r_2)} + m_0 \tag{4} \]
where r1 and r2 are the measured reflectances at two different wavelengths, while m0, m1 and n are tuneable parameters to fit suitable calibration data. The parameter n also performs the role of ensuring that neither of the logarithm terms is negative; i.e., it must satisfy
\[ n > \max\left( r_1^{-1}, r_2^{-1} \right) \tag{5} \]
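For illustration, Equation (4), with the constraint of Equation (5) enforced, might be implemented as follows (a sketch; the function name and interface are our own):

```python
import numpy as np

def stumpf_depth(r1, r2, m0, m1, n):
    """Stumpf log-ratio depth estimate (Equation (4)).

    r1, r2 : reflectances in the two chosen bands (scalars or arrays)
    m0, m1 : offset and gain, tuned against calibration depths
    n      : scaling constant; must exceed max(1/r1, 1/r2) so that
             both logarithms are positive (Equation (5))
    """
    r1 = np.asarray(r1, dtype=float)
    r2 = np.asarray(r2, dtype=float)
    if np.any(n * r1 <= 1) or np.any(n * r2 <= 1):
        raise ValueError("n violates the constraint n > max(1/r1, 1/r2)")
    return m1 * np.log(n * r1) / np.log(n * r2) + m0
```

With equal reflectances in both bands, the log ratio is unity and the estimate reduces to m1 + m0, which is a convenient sanity check on any implementation.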

2.6. Data Processing

Data products (point cloud and orthomosaic) were inspected visually for quality, confirming that the SfM processing had succeeded despite the through-surface imaging geometry (Figure 5 and Figure 8), although the infrared (band 5) channel of the multispectral orthomosaic displayed some irregularities in calibration across the mosaic. Inspection of the orthomosaic showed no instances of sun-glint, and visualisation of the point cloud (Figure 5) confirmed the fidelity of the 3D structure.
The two-dimensional georeferenced geometry of the wharf structure was determined by viewing and digitising the orthomosaic using QGIS (c). Estimates of three-dimensional geometry were made from the point-cloud data. The principal data product from which 3D geometry was determined was a map of the ‘depth below wharf’ (DBW). This was generated from the point-cloud data by manually determining the z-coordinate of the highest point on the (submerged) wharf and then subtracting this from all other z-coordinates, followed by multiplication by a factor of 1.33 to account for refraction (Equation (3)). The data were then gridded to a raster data product at a pixel size of 0.25 m, using the XYZ2DEM plugin (https://imagej.net/ij/plugins/xyz2dem-importer.html, accessed 15 July 2025) in the ImageJ (v. 2.16.0/1.54p) processing environment [29,30]. This process, which uses Delaunay triangulation as an intermediate step, followed by linear interpolation, will undoubtedly introduce errors around the wharf with its relatively open structure but, elsewhere, will map the solid surface (d). We note that an equivalent functionality is provided via the GDAL grid operation, accessible through QGIS.
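The DBW computation (datum shift, refraction scaling via Equation (3), Delaunay triangulation followed by linear interpolation) can be sketched with SciPy's griddata, which performs an equivalent triangulate-and-interpolate operation. This is an illustrative reimplementation under our own naming, not the XYZ2DEM plugin itself:

```python
import numpy as np
from scipy.interpolate import griddata

N_WATER = 1.33   # refraction correction factor, Equation (3)
PIXEL = 0.25     # output raster pixel size, metres

def depth_below_wharf(xyz, z_wharf_top):
    """Grid 'depth below wharf' from SfM points: subtract each point's z
    from the z of the highest wharf point, scale by 1.33 for refraction,
    then interpolate onto a regular raster (Delaunay + linear)."""
    dbw = (z_wharf_top - xyz[:, 2]) * N_WATER   # positive downwards
    xg = np.arange(xyz[:, 0].min(), xyz[:, 0].max(), PIXEL)
    yg = np.arange(xyz[:, 1].min(), xyz[:, 1].max(), PIXEL)
    XX, YY = np.meshgrid(xg, yg)
    # Cells outside the convex hull of the points are returned as NaN
    return griddata(xyz[:, :2], dbw, (XX, YY), method="linear")
```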
To estimate the volume of debris material lying above the sea floor, we constructed the equivalent of a digital terrain model to represent the bottom topography, using the ‘rolling-ball’ background subtraction method [31] in ImageJ (e). This was chosen in preference to multispectral classification of the point cloud because of spectral variability, partly as a result of depth variations, noted in the imagery of the sea floor (see also results, Section 3.3). We used a radius of 20 m, chosen empirically but with regard to the observed size of the debris piles (e.g., Figure 11), for the rolling ball. Again, we note that an equivalent functionality to this method of surface fitting could be achieved within the QGIS environment, for example, using a combination of focal minimum filtering, followed by smoothing.
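A surface fit of this kind (focal minimum followed by smoothing, as noted above for the QGIS alternative) can be sketched with a grey morphological opening, which behaves similarly to, though not identically with, the ImageJ rolling ball. The parameters mirror those quoted above, but the implementation is our own illustration:

```python
import numpy as np
from scipy import ndimage

PIXEL = 0.25        # raster resolution, m
RADIUS_M = 20.0     # rolling-ball radius used in the study

def object_height(dbw, pixel=PIXEL, radius_m=RADIUS_M):
    """Height of debris above the estimated seafloor: treat -DBW as an
    elevation surface, estimate the seafloor by a grey opening (a focal
    minimum followed by a focal maximum) over a disc-shaped footprint,
    subtract, and smooth with a 1 m (sigma) Gaussian kernel."""
    elev = -np.asarray(dbw, dtype=float)
    r = int(radius_m / pixel)
    yy, xx = np.ogrid[-r:r + 1, -r:r + 1]
    disc = xx**2 + yy**2 <= r**2
    seafloor = ndimage.grey_opening(elev, footprint=disc)
    return ndimage.gaussian_filter(elev - seafloor, sigma=1.0 / pixel)
```

The opening removes any bump narrower than the footprint, so piles much smaller than the 20 m radius are excluded from the fitted seafloor, which is the same qualitative behaviour as the rolling ball.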
Subtraction from the DBW data was performed in QGIS, followed by smoothing with a Gaussian kernel with a 1-metre standard deviation. This generated a data product showing the height above the sea floor (f). As noted above, this was expected to be less reliable for the wharf structure than the debris piles, so a second method of estimating the 3D characteristics of the former was employed (g). This approach capitalises on the open structure of the wharf, meaning that several points in the point cloud can be expected to be found at approximately the same planimetric (x-y) coordinates, similar to the imaging of an open vegetation canopy [32]. In fact, as observed in Figure 4, the point-cloud densities around the wharf structure exceeded 50 m⁻². We thus gridded the point-cloud data and calculated the range between the minimum and maximum z-coordinates (again, with a factor of 1.33 for refraction) to estimate the height above the sea floor in the wharf structure. This step, implemented using GNU Octave, also allowed a straightforward estimate of point-cloud noise to be made from areas not forming part of the wharf structure (result noted in Section 3.2).
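The per-cell height-range calculation might be sketched as follows (an illustrative Python reimplementation of the Octave step, assuming the point cloud is available as an n × 3 array):

```python
import numpy as np

N_WATER = 1.33
CELL = 1.0   # grid cell size, metres

def height_range_per_cell(xyz):
    """For each 1 m x 1 m cell, the refraction-corrected range between
    the minimum and maximum point z-coordinates, as used for the open
    wharf structure (and, off-structure, as a point-cloud noise proxy)."""
    ij = np.floor((xyz[:, :2] - xyz[:, :2].min(axis=0)) / CELL).astype(int)
    ny, nx = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    zmin = np.full((ny, nx), np.inf)
    zmax = np.full((ny, nx), -np.inf)
    # Unbuffered per-cell min/max accumulation
    np.minimum.at(zmin, (ij[:, 1], ij[:, 0]), xyz[:, 2])
    np.maximum.at(zmax, (ij[:, 1], ij[:, 0]), xyz[:, 2])
    rng = (zmax - zmin) * N_WATER
    rng[~np.isfinite(rng)] = np.nan   # cells containing no points
    return rng
```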
The bathymetric variation across the study site was most directly determined from the DBW data product. However, we also tested the extent to which the Stumpf model, applied to the multispectral orthomosaic, could be tuned to accurately represent variations in the bottom topography. Successful retrieval using this method would represent an approximately 10-fold improvement in the linear spatial resolution of the bathymetric model. The Stumpf model, while less sensitive to variations in bottom reflectance, is not immune to them, especially when they arise from different spectral characteristics. Since the debris piles were clearly spectrally different from the sea floor (e.g., Figure 11), we chose to fit the model of Equation (4) to the DBW data, following manual masking to remove areas of debris and the wharf structure. We extracted DBW values and reflectances in all five spectral bands of the multispectral orthomosaic. Model fitting was performed in GNU Octave, considering all possible pairwise combinations of the multispectral data and choosing the combination that gave the best model fit (h).
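The pairwise model search can be sketched as a least-squares fit of Equation (4) over all ordered band pairs. This is an illustration of the approach with n held fixed, whereas the study treated n as a tuneable parameter; function and variable names are ours:

```python
import numpy as np
from itertools import combinations

def fit_stumpf(bands, depth, n=2000.0):
    """Fit z = m1 * ln(n*r1)/ln(n*r2) + m0 by least squares for every
    ordered pair of bands, returning the best-fitting combination.
    `bands` is an (npix, nbands) reflectance array and `depth` the
    matching calibration depths (here, DBW values)."""
    best = None
    for i, j in combinations(range(bands.shape[1]), 2):
        for r1, r2, pair in ((bands[:, i], bands[:, j], (i, j)),
                             (bands[:, j], bands[:, i], (j, i))):
            ok = (n * r1 > 1) & (n * r2 > 1)          # Equation (5)
            ratio = np.log(n * r1[ok]) / np.log(n * r2[ok])
            A = np.column_stack([ratio, np.ones(ratio.size)])
            (m1, m0), *_ = np.linalg.lstsq(A, depth[ok], rcond=None)
            rmse = np.sqrt(np.mean((A @ np.array([m1, m0]) - depth[ok])**2))
            if best is None or rmse < best[0]:
                best = (rmse, pair, m0, m1)
    return best   # (rmse, (band index for r1, band index for r2), m0, m1)
```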

3. Results

3.1. Wharf Planimetric Geometry

Figure 8 shows an extract of the multispectral composite detailing the wharf structure. Fifteen points, representative of the overall geometry, were manually digitised. Their coordinates are listed in Table 1, and the corresponding dimensions are sketched in Figure 9. The estimated accuracy of this manual digitisation is around 0.05 m, based on the appearance in the orthomosaic of parts of the wharf structure that are clearly approximately linear. Many dimensions can be determined. For example, the wharf can be measured as approximately 52 m long and 10 m wide. It is divided into four equally spaced sections longitudinally. Approximately 4 m at each end of the wharf has lateral reinforcements spaced around one metre apart, while the main length of the wharf has lateral reinforcements around two metres apart. The width of the structural members is between about 0.15 and 0.25 m. Since no previous measurements of the wharf’s dimensions are known to us, we are unable to verify their accuracy.
The absolute accuracy of the geographical locations noted in Table 1 cannot be determined properly since we are unaware of any remotely comparable mapping of the area. However, we can compare some of the coordinates with recent high-resolution Maxar imagery accessible through Google Earth. Two images are useful in this respect, from 20 April 2022 and 20 July 2022. Manual digitisation of some of the points reported in Table 1 shows that the apparent coordinates within the Maxar images vary by around 1 metre between images, while the difference between the coordinates reported in Table 1 and those in either of the Maxar images is consistent to around 0.5 m. We also performed a comparison against aerial orthophotos from 2024 found at https://www.norgeibilder.no and https://norgeskart.no (accessed on 15 December 2025). Although coordinates could be extracted for these photos with a precision of only 1 m, we did not find any discrepancies from our estimated coordinates exceeding 3 m. We thus propose that the absolute uncertainty in our coordinates is, at most, 3 m.

3.2. Depth Below Wharf, Debris Volume, and Wharf Height

The ‘depth below wharf’ dataset derived from the point-cloud data is visualised in Figure 10. As noted before, this has a spatial resolution of 0.25 m, and values range from 0 to around 15 m. The presence of the wharf structure is very obvious, while the debris piles are more indistinct.
The smoothed ‘object height model’, resulting from the subtraction of the estimated sea-floor background from the DBW, is visualised as a contour map in Figure 11. We identified more or less discrete debris piles and estimated their volumes through a local integration of the object height model (Table 2). The total estimated volume of debris within the evaluated area is 701 m³. We estimated the uncertainty in the height estimates as around 5 cm, based on the modal standard deviation of heights within 1 m² cells within the point cloud. This is around 10% of the average height of a debris pile, and we thus also estimate the uncertainty of our volume estimate as around 10%.
From Table 2 and Figure 11, it can be inferred that typical, more or less discrete piles of debris reach around one metre above the sea floor, with volumes of a few tens of cubic metres and, hence, base areas of the order of 100 square metres. Taking the packing fraction (the ratio of the volume of solid material to the total pile volume) of a randomly arranged debris pile to be ~0.5 [33], the total volume of solid debris would be around 350 m³; taking the density of this solid material to be 7000 kg m⁻³, on the assumption that it is predominantly steel, the total mass of material would be around 2400 tonnes. We note that these are minimum estimates, since there may well be more debris material not covered by the multispectral imagery.
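The chain of estimates from integrated pile volume to mass can be made explicit, with the assumed packing fraction and density labelled as such:

```python
# Debris mass estimate from the integrated object height model.
total_pile_volume = 701.0   # m^3, integrated over the identified piles
packing_fraction = 0.5      # assumed, for a randomly arranged pile [33]
steel_density = 7000.0      # kg m^-3, assumed predominantly steel

solid_volume = total_pile_volume * packing_fraction   # ~350 m^3
mass_tonnes = solid_volume * steel_density / 1000.0   # ~2400 t
print(solid_volume, mass_tonnes)
```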
The wharf height above the sea floor, estimated both through the rolling-ball seafloor removal and through the vegetation-canopy-style method (the range of point-cloud height values within a single 1 m² grid cell), ranged from around 1 metre to a maximum of 2.65 metres.

3.3. Bathymetry

As noted earlier, the best bathymetric product from our data is the depth-below-wharf (DBW) dataset shown in Figure 10. However, we also investigated the extent to which bathymetric variation in the sea floor could be extracted from the multispectral composite. Testing all possible pairwise band combinations showed that the strongest statistical relationship of the form represented in Equation (4) was found for bands 1 and 3 (i.e., blue and red). The optimum selected model had parameters of m0 = 0.6615, m1 = 4.5597, and n = 2000. It was fitted after rejecting approximately 40% of the data points, which evidently did not fit the distribution. Figure 12 shows the depth map constructed using this model and its difference from the DBW map of Figure 10. Inspection of Figure 12 shows that the Stumpf model performed reasonably well in the part of the image lying to the west of the wharf (the RMS error here is typically around 0.25 m) but increasingly poorly to the east, where the systematic errors increase to 4–5 m. This points to a spatial gradient in the spectral reflectance characteristics of the sea floor.

4. Discussion

The principal aim of this study was to investigate whether UAV-based SfM and multispectral imaging are capable of providing better-than-decimetric-resolution reconstructions of submerged structures in shallow water. Despite the challenges posed by refraction, the results were positive and strongly suggest that this approach is viable for documenting and monitoring submerged heritage. Since the existing literature in this area is not extensive, this is a significant finding, especially to the extent that it shows that off-the-shelf hardware and software can be adequate to the task.
The spatially coherent structure of the wharf, the visible parts of which were submerged to a maximum depth of around 5 m, proved relatively simple to quantify, with an accuracy of around 5 cm (implied both by the dispersion in the heights represented in the point cloud and by the physical integrity of the objects reconstructed from the point cloud, as visualised in Figure 5). This is comparable to the few other reported accuracies for UAV-based two-medium SfM in shallow water. The open structure of the wharf allowed it to be analysed in a manner similar to that for a vegetation canopy, with relatively complete three-dimensional imaging of the geometry. Although our investigation did not include any validation activity, it would be relatively straightforward to confirm the dimensions of the wharf structure using in situ measurements made by divers. This is one of the proposed public engagement activities described below.
The reconstructed three-dimensional geometry also proved effective for quantifying the less ordered material that composed the debris piles. In contrast to the wharf, the piles could be characterised as having a closed structure, with little to no visibility through them. The vegetation canopy analogy was inappropriate in this case, and instead, we treated the problem of shape characterisation as similar to that of determining both a Digital Surface Model (DSM, directly deduced from the UAV data) and a Digital Terrain Model (DTM, representing the sea floor). The approach used here to estimate the DTM is a background subtraction based on a simple rolling-ball algorithm and could undoubtedly be improved upon through the use of a more sophisticated approach [34], though the evident and expected smoothness of the seafloor geometry encourages us to believe that the approach is unlikely to be seriously inaccurate. The estimated accuracy of heights in the point cloud is very similar to the horizontal accuracy, around 5 cm. We can note that the estimated mass contained in the debris piles identified in this study, around 2400 tonnes, would constitute around 5% of the mass of the entire Tirpitz.
Bathymetry using the multispectral imagery showed some promise but requires further validation and correction for variations in seafloor reflectance. When the SfM method succeeds in imaging the bottom surface, which it has done in the present instance to a depth of at least 10 m, this would seem to be a much more reliable method of determining bathymetry.
Perhaps the least satisfactory result from this investigation is the residual uncertainty in planimetric coordinates. We estimate this as not greater than 3 m, which is not likely to be sufficiently precise for future archaeological studies. We are, however, much more confident in the relative planimetric positions, and any lateral offset in the coordinates we have estimated could be reduced through surveying, for example, through a terrestrial RTK measurement of the wharf when exposed due to the tide.
Through-surface imaging is not the only, or necessarily the most obvious, means of characterising the structure of submerged objects. Robotic equipment operating purely underwater (Autonomous Underwater Vehicles, or AUVs) can use SONAR, LiDAR, or conventional photography to capture information that can, in principle, be integrated with through-surface imagery, though accurately locating the underwater data and co-registering them with the aerially acquired data remains challenging. The concept of an Internet of Underwater Things (IoUT) has recently been proposed [35] as a framework for linking the underwater components of data collection, and UAVs will have a role to play in maintaining the connectivity of this network [36]. There are useful synergies between aerial and underwater detection systems [37], though difficulties remain in both the communication and navigation of AUVs [38]. As part of the Tirpitz Site Project, we are currently investigating the use of an AUV for data collection and for identifying some of the objects comprising the debris piles.
Environmental conditions for data collection in this project were chosen with care, in particular the sea state, sun geometry, water turbidity, and state of the tide. Neither the work reported here nor any other work in the literature has yet provided a comprehensive understanding of optimum environmental conditions. We note, however, that some useful environmental information is potentially encoded in the measurements themselves: spectral variation in the optical absorption coefficient of the water column carries information about turbidity, and hence, for example, about algal blooms or glacial runoff, while sun-glint conveys information about the sea and wind state. These possibilities will be explored in future work.
Other aims for the future development of this work include UAV platform upgrades, integration with underwater robotic data, and the deployment of geolocated seabed markers to improve registration and support long-term monitoring. Broader goals include the development of a spatial transect for pollutant monitoring and a web-based GIS platform [39] for community engagement.

5. Conclusions

The principal aim of this research was to investigate the potential of two-media structure-from-motion and multispectral imagery for documenting shallow submerged structures, using off-the-shelf components and processing software. The approach was shown to be successful, achieving accuracies of a few centimetres at depths of 5–10 m. The specific characteristics of the Tirpitz wreck site were determined with high precision: we measured both the two-dimensional projected form of the wharf and its three-dimensional structure, as well as that of the nearby debris piles. The volume of these debris piles is estimated at around 350 m3, with a somewhat less confident estimate of their mass at around 2400 tonnes. The Tirpitz salvage site offers a challenging but informative testbed for the remote sensing of underwater cultural heritage. Our results support further exploration of hybrid aerial–underwater sensing frameworks and community-engaged monitoring initiatives.

Author Contributions

Project conception: S.W., B.L. and G.R.; data collection and preprocessing: M.K.D., M.B. and E.V.; fieldwork logistics: M.K.D., M.B., B.L. and S.W.; data analysis: G.R. and O.T.; writing: all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported via internal funds from the UiT Arctic University of Norway and the Scott Polar Research Institute, the University of Cambridge.

Data Availability Statement

The data presented in this study are openly available at https://doi.org/10.17863/CAM.122777.

Acknowledgments

Map data are copyrighted by OpenStreetMap contributors and available from https://www.openstreetmap.org (accessed 15 December 2025). We gratefully acknowledge the contributions of Endre Grimsbø, Bernt Inge Hansen, Vegard Nergård, and Sophie Weeks. This research was supported by UiT The Arctic University of Norway and the University of Cambridge.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yang, Z.; Yu, X.; Dedman, S.; Rosso, M.; Zhu, J.; Yang, J.; Xia, Y.; Tian, Y.; Zhang, G.; Wang, J. UAV Remote Sensing Applications in Marine Monitoring: Knowledge Visualization and Review. Sci. Total Environ. 2022, 838, 155939. [Google Scholar] [CrossRef] [PubMed]
  2. Milas, A.S.; Sousa, J.J.; Warner, T.A.; Teodoro, A.C.; Peres, E.; Gonçalves, J.A.; Garcia, J.D.; Bento, R.; Phinn, S.; Woodget, A. Unmanned Aerial Systems (UAS) for Environmental Applications Special Issue Preface. Int. J. Remote Sens. 2018, 39, 4845–4851. [Google Scholar] [CrossRef]
  3. Tmušić, G.; Manfreda, S.; Aasen, H.; James, M.R.; Gonçalves, G.; Ben-Dor, E.; Brook, A.; Polinova, M.; Arranz, J.J.; Mészáros, J.; et al. Current Practices in UAS-Based Environmental Monitoring. Remote Sens. 2020, 12, 1001. [Google Scholar] [CrossRef]
  4. Alvarez-Vanhard, E.; Corpetti, T.; Houet, T. UAV & Satellite Synergies for Optical Remote Sensing Applications: A Literature Review. Sci. Remote Sens. 2021, 3, 100019. [Google Scholar] [CrossRef]
  5. Jiang, X.; Gao, M.; Gao, Z. A Novel Index to Detect Green-Tide Using UAV-Based RGB Imagery. Estuar. Coast. Shelf Sci. 2020, 245, 106943. [Google Scholar] [CrossRef]
  6. Rossi, L.; Mammi, I.; Pelliccia, F. UAV-Derived Multispectral Bathymetry. Remote Sens. 2020, 12, 3897. [Google Scholar] [CrossRef]
  7. Lim, J.S.; Gleason, S.; Williams, M.; Matás, G.J.L.; Marsden, D.; Jones, W. UAV-Based Remote Sensing for Managing Alaskan Native Heritage Landscapes in the Yukon-Kuskokwim Delta. Remote Sens. 2022, 14, 728. [Google Scholar] [CrossRef]
  8. Mather, P.M.; Koch, M. Computer Processing of Remotely-Sensed Images, 5th ed.; John Wiley and Sons: Hoboken, NJ, USA, 2022. [Google Scholar]
  9. Marcial-Pablo, M.d.J.; Gonzalez-Sanchez, A.; Jimenez-Jimenez, S.I.; Ontiveros-Capurata, R.E.; Ojeda-Bustamante, W. Estimation of Vegetation Fraction Using RGB and Multispectral Images from UAV. Int. J. Remote Sens. 2019, 40, 420–438. [Google Scholar] [CrossRef]
  10. Zhang, A.; Hu, S.; Zhang, X.; Zhang, T.; Li, M.; Tao, H.; Hou, Y. A Handheld Grassland Vegetation Monitoring System Based on Multispectral Imaging. Agriculture 2021, 11, 1262. [Google Scholar] [CrossRef]
  11. Chabot, D.; Dillon, C.; Shemrock, A.; Weissflog, N.; Sager, E.P.S. An Object-Based Image Analysis Workflow for Monitoring Shallow-Water Aquatic Vegetation in Multispectral Drone Imagery. ISPRS Int. J. Geo-Inf. 2018, 7, 294. [Google Scholar] [CrossRef]
  12. Ullman, S. The Interpretation of Structure from Motion. Proc. R. Soc. Lond. B 1979, 203, 405–426. [Google Scholar] [PubMed]
  13. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  14. Westoby, M.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. “Structure-from-Motion” Photogrammetry: A Low-Cost, Effective Tool for Geoscience Applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef]
  15. Ye, D.; Liao, M.; Nan, A.; Wang, E.; Zhou, G. Research on Reef Bathymetric Survey of UAV Stereopair Based on Two-Medium Photogrammetry. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B1, 407–412. [Google Scholar] [CrossRef]
  16. Skarlatos, D.; Agrafiotis, P. A Novel Iterative Water Refraction Correction Algorithm for Use in Structure from Motion Photogrammetric Pipeline. J. Mar. Sci. Eng. 2018, 6, 77. [Google Scholar] [CrossRef]
  17. Carrivick, J.L.; Smith, M.W. Fluvial and Aquatic Applications of Structure from Motion Photogrammetry and Unmanned Aerial Vehicle/Drone Technology. WIREs Water 2019, 6, e1328. [Google Scholar] [CrossRef]
  18. Mulsow, C.; Kenner, R.; Bühler, Y.; Stoffel, A.; Maas, H.-G. Subaquatic Digital Elevation Models from UAV-Imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII-2, 739–744. [Google Scholar] [CrossRef]
  19. Ventura, D.; Bonifazi, A.; Gravina, M.F.; Belluscio, A.; Ardizzone, G. Mapping and Classification of Ecologically Sensitive Marine Habitats Using Unmanned Aerial Vehicle (UAV) Imagery and Object-Based Image Analysis (OBIA). Remote Sens. 2018, 10, 1331. [Google Scholar] [CrossRef]
  20. Rinner, K. Problems of Two-Medium Photogrammetry. Photogramm. Eng. 1969, 35, 275–282. [Google Scholar]
  21. David, C.G.; Kohl, N.; Casella, E.; Rovere, A.; Ballesteros, P.; Schlurmann, T. Structure-from-Motion on Shallow Reefs and Beaches: Potential and Limitations of Consumer-Grade Drones to Reconstruct Topography and Bathymetry. Coral Reefs 2021, 40, 835–851. [Google Scholar] [CrossRef]
  22. Agrafiotis, P.; Karantzalos, K.; Georgopoulos, A.; Skarlatos, D. Correcting Image Refraction: Towards Accurate Aerial Image-Based Bathymetry Mapping in Shallow Waters. Remote Sens. 2020, 12, 322. [Google Scholar] [CrossRef]
  23. Dietrich, J.T. Bathymetric Structure-from-Motion: Extracting Shallow Stream Bathymetry from Multi-View Stereo Photogrammetry. Earth Surf. Process. Landf. 2017, 42, 355–364. [Google Scholar] [CrossRef]
  24. Woodget, A.S.; Carbonneau, P.E.; Visser, F.; Maddock, I.P. Quantifying Submerged Fluvial Topography Using Hyperspatial Resolution UAS Imagery and Structure from Motion Photogrammetry. Earth Surf. Process. Landf. 2015, 40, 47–64. [Google Scholar] [CrossRef]
  25. Lintott, B.; Rees, G. Arctic Legacy of War: Tirpitz Site Project. J. Ocean Technol. 2024, 19, 43–47. [Google Scholar]
  26. Degerlund, M.; Eilertsen, H.C. Main Species Characteristics of Phytoplankton Spring Blooms in NE Atlantic and Arctic Waters (68–80° N). Estuaries Coasts 2010, 33, 242–269. [Google Scholar] [CrossRef]
  27. Lyzenga, D.R. Passive Remote Sensing Techniques for Mapping Water Depth and Bottom Features. Appl. Opt. 1978, 17, 379–383. [Google Scholar] [CrossRef]
  28. Stumpf, R.P.; Holderied, K.; Sinclair, M. Determination of Water Depth with High-Resolution Satellite Imagery over Variable Bottom Types. Limnol. Oceanogr. 2003, 48, 547–556. [Google Scholar] [CrossRef]
  29. Rasband, W.S. ImageJ. 1997. Available online: http://rsb.info.nih.gov/ij/ (accessed on 15 December 2025).
  30. Schneider, C.A.; Rasband, W.S.; Eliceiri, K.W. NIH Image to ImageJ: 25 Years of Image Analysis. Nat. Methods 2012, 9, 671–675. [Google Scholar] [CrossRef]
  31. Hall, P.; Park, B.U.; Turlach, B.A. Rolling-Ball Method for Estimating the Boundary of the Support of a Point-Process Intensity. Ann. Inst. Henri Poincaré (B) Probab. Stat. 2002, 38, 959–971. [Google Scholar] [CrossRef]
  32. Nasiri, V.; Darvishsefat, A.A.; Arefi, H.; Pierrot-Deseilligny, M.; Namiranian, M.; Le Bris, A. Unmanned Aerial Vehicles (UAV)-Based Canopy Height Modeling under Leaf-on and Leaf-off Conditions for Determining Tree Height and Crown Diameter (Case Study: Hyrcanian Mixed Forest). Can. J. For. Res. 2021, 51, 962–971. [Google Scholar] [CrossRef]
  33. Aste, T.; Weaire, D. The Pursuit of Perfect Packing, 2nd ed.; Taylor & Francis Ltd.: Abingdon-on-Thames, UK, 2008. [Google Scholar]
  34. Amirkolaee, H.A.; Arefi, H.; Ahmadlou, M.; Raikwar, V. DTM Extraction from DSM Using a Multi-Scale DTM Fusion Strategy Based on Deep Learning. Remote Sens. Environ. 2022, 274, 113014. [Google Scholar] [CrossRef]
  35. Su, R.; Zhang, D.; Li, C.; Gong, Z.; Venkatesan, R.; Jiang, F. Localization and Data Collection in AUV-Aided Underwater Sensor Networks: Challenges and Opportunities. IEEE Netw. 2019, 33, 86–93. [Google Scholar] [CrossRef]
  36. Wang, Q.; Dai, H.-N.; Wang, Q.; Shukla, M.K.; Zhang, W.; Soares, C.G. On Connectivity of UAV-Assisted Data Acquisition for Underwater Internet of Things. IEEE Internet Things J. 2020, 7, 5371–5385. [Google Scholar] [CrossRef]
  37. Villalpando, F.; Tuxpan, J.; Ramos-Leal, J.A.; Marin, A.E. Towards of Multi-Source Data Fusion Framework of Geo-Referenced and Non-Georeferenced Data: Prospects for Use in Surface Water Bodies. Geocarto Int. 2023, 38, 2172215. [Google Scholar] [CrossRef]
  38. Wibisono, A.; Piran, J.; Song, H.-K.; Lee, B.M. A Survey on Unmanned Underwater Vehicles: Challenges, Enabling Technologies, and Future Research Directions. Sensors 2023, 23, 7321. [Google Scholar] [CrossRef]
  39. Nowak, M.M.; Dziób, K.; Ludwisiak, Ł.; Chmiel, J. Mobile GIS Applications for Environmental Field Surveys: A State of the Art. Glob. Ecol. Conserv. 2020, 23, e01089. [Google Scholar] [CrossRef]
Figure 1. Study site located offshore of Håkøya, Norway. Wharf (beige) and debris (dark red) locations digitised from data collected during this project. Grid coordinates use the UTM zone 34 WGS84 projection. Background mapping © OpenStreetMap and its contributors.
Figure 2. Wreck of the Tirpitz off Håkøya, Norway. View towards the south from 600 feet altitude, photographed by the Royal Air Force on 23 May 1945 as part of the official investigation into the sinking and the effectiveness of the bombs. Photograph reproduced from UK National Archives collection AIR 14/994 with permission.
Figure 3. Flow diagram of data processing. The black arrows denote the processing chain. Red double-headed arrows denote comparisons. The green double-headed arrows denote visual inspection of initially generated datasets. Processes are referenced in the text according to the letter symbols in this figure. (a,b) Preprocessing (Pix4D). (c) Visual interpretation in QGIS and comparison with Maxar imagery through Google Earth. (d) Gridding to a regular (x,y) grid, performed using the XYZ2DEM plugin in ImageJ (v 2.16.0/1.54p), though QGIS provides equivalent functionality. (e) Application of a ‘rolling ball’ filter to isolate the smooth background, implemented in ImageJ, although similar functionality could be achieved using tools in QGIS. (f) Image subtraction to obtain the height of objects above the sea floor, implemented in QGIS. (g) Object height determination for open structures via the calculation of a range of height values within a single grid cell, implemented in GNU Octave. (h) Bathymetric estimation using the Stumpf model, implemented in QGIS using parameters calculated in GNU Octave.
Figure 4. Point-cloud density (number of points per 1 m2 grid cell) achieved across the study site. Projection: UTM zone 34, WGS84.
Figure 5. Oblique visualisation (Pix4D) of the point cloud of the wharf and debris piles reconstructed from structure-from-motion point-cloud analysis of aerial UAV RGB imagery. Insets show details of the wharf structure and the debris piles.
Figure 6. Refraction geometry at the water–air interface. An object is located underwater at coordinates (0, −d) and is imaged at location (x, −d′). φ is the incidence angle at the surface. Solid lines represent real rectilinear light rays; dashed lines are virtual rays. The horizontal dash-dot line is the water surface, to which depths are referred.
Figure 7. Contours of (a) x/d and (b) nd′/d as functions of φ1 and φ2 (both specified in degrees) for a refractive index n = 1.33.
Figure 8. Location of manually digitised coordinates superimposed on true-colour (RGB) orthomosaic of the wharf.
Figure 9. Dimensions and orthographic coordinates of the main structural elements of the wharf. Coordinates and dimensions are in metres (UTM zone 34 N, WGS84 ellipsoid).
Figure 10. Depth below the highest point of the wharf structure, determined from SfM at a spatial resolution of 0.25 m.
Figure 11. Smoothed object height model represented by 0.2 m contours (heavier contours are at 1 m height). The contours are superimposed on a band 4-3-2 false-colour composite image. Annotations represent specific debris piles whose volumes are estimated in Table 2.
Figure 12. (a) Best-fitting bathymetric model using Equation (4) with bands 1 and 3 of the multispectral mosaic. The data have been fitted to the DBW model of Figure 10. (b) Difference between the bathymetric models of Figure 10 and Figure 12a.
Table 1. Coordinates (WGS84, UTM zone 34 N) of the representative points identified in Figure 8.

Point Number    X/m            Y/m
1               414,890.54     7,728,046.29
2               414,939.66     7,728,063.75
3               414,943.14     7,728,054.56
4               414,894.20     7,728,037.08
5               414,893.23     7,728,039.40
6               414,892.39     7,728,041.53
7               414,892.26     7,728,041.88
8               414,891.40     7,728,044.02
9               414,940.51     7,728,061.44
10              414,941.30     7,728,059.31
11              414,942.27     7,728,056.78
12              414,939.32     7,728,053.72
13              414,935.77     7,728,062.37
14              414,894.67     7,728,047.67
15              414,897.17     7,728,040.67
Table 2. Estimated volumes of the debris piles identified in Figure 11.

Pile    Volume (m3)
A       102
B       65
C       90
D       120
E       53
F       53
G       38
H       96
J       61
K       52
L       24

Share and Cite

MDPI and ACS Style

Rees, G.; Tutubalina, O.; Bjørndahl, M.; Dreyer, M.K.; Lintott, B.; Venables, E.; Wickler, S. UAV Remote Sensing of Submerged Marine Heritage: The Tirpitz Wreck Site, Håkøya, Norway. Remote Sens. 2026, 18, 45. https://doi.org/10.3390/rs18010045
