Polymodal Method of Improving the Quality of Photogrammetric Images and Models

Photogrammetry using unmanned aerial vehicles has become very popular and is already commonly used. The most frequent photogrammetry products are an orthoimage, a digital terrain model and a 3D object model. When executing measurement flights, it may happen that the lighting conditions are unsuitable and the flight itself is fast and not very stable. As a result, noise and blur appear on the images, and the images themselves can have too low a resolution to satisfy the quality requirements for a photogrammetric product. In such cases, the obtained images are useless or will significantly reduce the quality of the end-product of low-level photogrammetry. A new polymodal method of improving measurement image quality has been proposed to avoid such issues. The method discussed in this article removes degrading factors from the images and, as a consequence, improves the geometric and interpretative quality of a photogrammetric product. The author analyzed 17 various image degradation cases, developed 34 models based on degraded and recovered images, and conducted an objective analysis of the quality of the recovered images and models. As evidenced, the result was a significant improvement in the interpretative quality of the images themselves and better model geometry.


Introduction
Photogrammetry using unmanned aerial vehicles, understood as a measurement tool, combines the possibilities of ground, aerial and even suborbital photogrammetric measurements [1], while being a low-cost competitor to conventional aerial photogrammetry or satellite observation. The well-established photogrammetric techniques and technologies, already used with classic aircraft, were quickly adapted to low-level solutions with unmanned aerial vehicles (UAVs). Acquiring data from a low level using unmanned aerial vehicles, although in principle the same process as in classic aerial photogrammetry, generates new problems encountered only in UAV photogrammetry, owing to obvious differences in equipment and flight capabilities [2].
Commercial UAVs used in photogrammetry have a low maximum take-off mass (MTOM) of up to 25 kg, although the most commonly used models weigh up to 5 kg. Limited payload capacity and restrictions on UAV mass force a reduction in the weight of all components carried by the vehicle. Miniaturization involves, among others, global navigation satellite system (GNSS) receivers, inertial units (INS), and optoelectronic devices (visible light, thermal imaging and multispectral cameras), often making these devices less sophisticated and accurate. The digital cameras used on UAVs are also usually small structures. Commercial UAVs usually use integrated cameras with a sensor from 1/2.3" (DJI Mavic Pro) through 1" (DJI Mavic Pro 2, DJI Phantom 4 Pro) to APS-C (DJI Zenmuse X7) (Shenzhen DJI Sciences and Technologies Ltd., Shenzhen, China). Such structures do not utilize the image compensation systems used in aerial photogrammetry such as time-delayed integration (TDI) [3,4], bright lenses with

constant internal orientation parameters and low distortion. Such a situation may lead to a number of errors in the data acquisition process, for example, blur and noise, which affect the quality of photogrammetric processing. Nowadays, with UAVs being used so frequently for measurement during various construction projects and for monitoring natural environment phenomena, data acquisition can sometimes be forced by the schedule of a given project or the uniqueness of an individual natural phenomenon. In such cases, there are often no suitable measurement conditions, the lighting is insufficient, and there is not much time for the UAV flight itself. Such a forced flight schedule and time can lead to image degradation. Most frequently, the sensor ISO sensitivity is increased to avoid photo underexposure, which generates higher noise visible in the images [5]. Limited time forces the operator to fly at higher speeds, which, combined with extended shutter speed, generates blur. Such phenomena are particularly intensified with small CMOS (Complementary Metal-Oxide-Semiconductor) image sensors, frequently used in commercial UAVs. As a result, the quality-related requirements of photogrammetric processing might not be satisfied.
The primary determinants of a photogrammetric process are its qualitative requirements. They are usually specified by the end user of the product and can take various forms, e.g., specifications in a given contract, certain minimum official requirements, or adopted standards. In this context, a photogrammetric process can be defined as a set of interconnected activities, the execution of which is necessary to obtain a specific result: the required image quality. The concept of quality has numerous definitions, with one of them defining quality as the adaptability of a process to set requirements. Therefore, reaching the required quality will strictly depend on the main factors of a given process. A factor is defined as a certain process activity impacting quality. The quality of the photogrammetric process can be built on three main pillars (Figure 1) [6]:
• Procedures: every aspect of the image data collection process, stemming from the execution method and its correctness. In other words, within this group, the following process elements can be distinguished: applied flight plan (altitude, coverage, selected flight path, and camera position), GNSS measurement accuracy, selected time of day, scenery illumination quality, etc.;

• Technical elements: all technical devices and their quality used to collect data, for instance, the technical capabilities and accuracy of the lenses, cameras, stabilization systems, satellite navigation system receivers, etc.;
• Numerical methods: the capabilities and characteristics of the algorithms and mathematical methods applied for data processing.
Each of the aforementioned factors significantly impacts image quality, and their skillful balancing and matching to the existing measurement conditions and set requirements enables reaching an assumed objective. Importantly, there is no single path to achieving the required quality. For example, a required ground sampling distance (GSD) for a given image can be obtained by changing the UAV's flight altitude (procedural factor) or changing the camera (technical factor), or alternatively by applying numerical methods for increasing image resolution, e.g., a super-resolution algorithm [7] (numerical factor). Several interesting recent publications can be presented regarding procedural factors. The authors of [8] discussed a new approach to planning a photogrammetric mission, especially in terms of complex scenery. Complex sceneries are ones where the terrain is of variable elevation, with scattered terrain obstacles and objects. Such terrain requires an unconventional approach to flight planning. Traditional flight plans, well-established and frequently used, are widely discussed by Eisenbeiss et al. in [9].
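The trade-off between altitude and ground resolution mentioned above can be illustrated with a short calculation. The camera parameters below (focal length, pixel pitch) are assumed for illustration only and are not taken from this article.

```python
def ground_sampling_distance(altitude_m, focal_length_mm, pixel_size_um):
    """Ground sampling distance (cm/pixel) for a nadir image:
    GSD = altitude * pixel_size / focal_length, with unit conversions."""
    return (altitude_m * 100.0) * (pixel_size_um * 1e-4) / (focal_length_mm / 10.0)

# Assumed example values (not from the article): 8.8 mm focal length,
# 2.4 um pixel pitch, flights at 100 m and 50 m AGL.
gsd_100m = ground_sampling_distance(100.0, 8.8, 2.4)
gsd_50m = ground_sampling_distance(50.0, 8.8, 2.4)

# Halving the altitude (a procedural factor) halves the GSD, i.e.,
# doubles the effective ground resolution.
print(round(gsd_100m, 2), round(gsd_50m, 2))  # 2.73 1.36
```

The same GSD improvement could instead come from a camera with a longer focal length or smaller pixels (technical factor), or from a super-resolution algorithm (numerical factor), which is the path explored in this article.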
In the works [10,11], the authors discuss flight planning procedures for analyzing changes in a coastal zone. The authors of [12] address the issues associated with beach measurements and show a highly interesting procedural factor: the recommended time of day for beach measurements was early morning. Owing to a change in the time of day, the mean error was reduced twofold. The issue of the impact of sun position on image quality has been further studied in [13]. Its authors also show the effect of forward overlap on the root mean square error (RMSE), while indicating recommended values. The studies in [14,15] present the impact of ground control point (GCP) arrangement and number on image accuracy.
Of course, photogrammetric processing accuracy depends on the quality of the equipment used. In terms of technical factors, the greatest impact is that of the quality and type of the UAV navigation, orientation and stability systems, camera type, shutter type and lens type [16,17]. UAVs equipped with simple RTK (real-time kinematic) receivers are becoming popular today. Using this navigation sensor significantly improves the accuracy of direct determination of external orientation elements [18][19][20][21]. Some authors even state that using RTK receivers on UAVs makes it possible to dispense with GCPs [22]. Vautherin et al. [23] showed how the shutter type affects image quality. Global shutters still dominate over cheaper rolling shutter solutions. The publications [24][25][26] describe the impact of GCP measurement accuracy and arrangement on image quality.
Numerical methods used in processing digital images significantly affect the quality of a photogrammetric image [27]. One example is the issue of nonmetric camera calibration algorithms and their impact on the geometry of photogrammetric processing [28][29][30]. Authors are constantly developing new calibration methods, reaching a significant improvement in the geometry of the end images [31]. The authors of [32] presented a new numerical way of improving the geometric quality of an image with single-strip blocks using the Levenberg-Marquardt-Powell method. The article [33] discusses a method for eliminating the impact of weather conditions on the quality of photogrammetric images. Some authors also suggest comprehensive solutions to this issue, noting that certain factors significantly impact the quality of a photogrammetric process, while designing and constructing UAVs with their own calibration and processing algorithms [34]. Neural networks, especially ones based on deep models, are also widely used in photogrammetry. Such cases include, for example, the following methods of improving photogrammetric processing quality [6,7][35][36][37][38][39].
Due to the data in contemporary photogrammetry having a fully digital form, the algorithms and numerical methods used to process them highly affect the end result [40]. It can be concluded that each numerical method used within the data processing chain, starting from processing single values of digital sensor pixels [41], through writing them to a memory card or transferring the data to a server, to the full range of digital software-implemented photogrammetry methods, will impact the final result. Therefore, the art is to select those methods that lead to the lowest quality losses. In practice, software-implemented numerical methods are already properly selected, and the user has no influence on changing them, only on certain processing parameters. Furthermore, as observed in research [6], through the application of advanced processing algorithms, modern photogrammetric software is remarkably resistant to image-degrading factors and is able to generate a model, although the final result usually has low geometric quality.
In photogrammetric practice, especially when using UAVs in commercial tasks, there may be situations when the correct selection of procedural and technical factors is insufficient. As a result, achieving the required photogrammetric product quality can be unfeasible. Consequently, the following approach can be formulated. In the event of UAV images containing typical quality-degrading elements, such as noise, blur, and low resolution, one can apply an additional process to eliminate these factors, hence improving the final quality of a photogrammetric product. This additional process interferes only with the image data directly prior to their processing; therefore, it does not change the elements of the software itself. Modern image restoration methods were used to confirm this thesis and develop a new method of improving photogrammetric image quality. These methods were also tested in terms of their impact on image quality, processing, and the final photogrammetric models. The outcome of the conducted research was the development and presentation of new solutions in the field of low-level photogrammetry:
• the impact of three basic image quality-degrading factors (noise, blur, and low resolution) on processing in modern photogrammetry software and on the quality of models based on such images was assessed;
• a polymodal algorithm for improving measurement image quality, based on neural networks and numerical methods, was developed;
• image-degrading factors were eliminated, their quality was objectively assessed, and basic photogrammetric products were developed.
Models developed from the recovered images, which are images after elimination of degradation factors, were compared with a reference model. Figure 2 shows the applied research and data processing processes. The process commences with a data acquisition block. Image data was collected using a typical commercial UAV, a DJI Mavic Pro.
Unmodified images were used as reference data (ground truth). Copies of the images were then subjected to degradation simulating noise, blur, and low resolution. One dataset with added noise, 8 sets with added blur, and 8 sets with simultaneous blur and reduced resolution were generated. All these images were used to create photogrammetric models. The next stage involved subjecting the modified images to the polymodal method for improving image quality; these were then once again used to develop models. The research process involved comparing the image quality at individual processing stages and evaluating the quality of the models generated from these images. The study utilized various software packages and software environments, which are shown in Figure 2.

Image Degradation Model
The objective of image restoration (IR) methods is to recover a latent pure image x based on its degraded form y, which can be expressed by the equation:

y = D(x) + n, (1)

where D is a noise-independent degradation operation and n represents additive white Gaussian noise (AWGN) with standard deviation σ. This paper assumes that noise, blur, and low resolution degrade the measurement images and, consequently, lead to degraded photogrammetric processing. The opposing operations (denoising, deblurring, and super resolution, respectively) will improve the quality of degraded images and lead to improved quality of photogrammetric processing. They can be classified as numerical methods of improving photogrammetric processing quality, as described in the Introduction.
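The general degradation model above can be sketched numerically. The flat synthetic test image and the σ value below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(x, degradation, sigma):
    """Apply y = D(x) + n, where n is AWGN of standard deviation sigma
    matching the shape of the degraded signal D(x)."""
    dx = degradation(x)
    n = rng.normal(0.0, sigma, size=dx.shape)
    return dx + n

# Pure denoising case: D is the identity, so the model reduces to y = x + n.
x = np.full((64, 64), 128.0)            # flat synthetic "image" (0-255 range)
y = degrade(x, lambda img: img, sigma=15.0)

residual = y - x                        # should look like the injected AWGN
print(round(float(residual.std()), 1))  # close to sigma = 15
```

Swapping the identity for a blur or downsampling operator yields the other two degradation cases considered below.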

For a typical blur, the degradation model can be expressed as follows:

y = x * k + n, (2)

where x * k is the two-dimensional convolution of the pure image x and the blur kernel k. More information on blur in low-level photogrammetry can be found in [6]. Unlike the study in [6], this paper uses 8 different blur kernels (Figure 3), adopted as in [42,43]. The proposed blur kernels, in connection with noise and low resolution, led to the development of degraded test data. The blur kernels presented here were chosen to complement the kernels presented in the study [6], where very intense motion blur was simulated.
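The noise-free part of the blur model can be sketched with circular (periodic) convolution, which is also the boundary assumption used later for the FFT-based solution. The 1 × 5 horizontal kernel below is a simple stand-in for the eight kernels of Figure 3, which are not reproduced here.

```python
import numpy as np

def psf_to_otf(k, shape):
    """Zero-pad kernel k to the image shape and centre it at the origin so
    that multiplication in the Fourier domain realizes circular convolution."""
    big = np.zeros(shape)
    big[:k.shape[0], :k.shape[1]] = k
    big = np.roll(big, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(big)

def blur(x, k):
    """Circular two-dimensional convolution x * k (noise-free blur model)."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * psf_to_otf(k, x.shape)))

# Illustrative horizontal motion-like kernel, normalized to sum to 1.
k = np.ones((1, 5)) / 5.0
x = np.zeros((32, 32))
x[16, 16] = 1.0                     # single bright pixel
y = blur(x, k)
print(np.round(y[16, 14:19], 2))    # smeared evenly over five columns
```

Because the kernel sums to one, the total image energy is preserved; only its spatial distribution is smeared, which is exactly what motion blur does to fine detail.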
The degradation model for an image with reduced resolution is expressed by the following equation:

y = (x * k) ↓s + n, (3)

where ↓s means simple direct image downsampling [44] at every s × s pixel, with s being a downscaling factor, here s = 2. In the case of super-resolution algorithms, such an image degradation model is deemed most correct [43,45]. More information on how low resolution impacts a photogrammetric model, and on other models of increasing resolution, can be found in [7]. Noise can be defined as a certain accidental and unwanted signal. In the context of captured images, noise is an undesirable byproduct of image capture and recording, and constitutes redundant information. The noise source in digital photography is primarily the image capturing, recording, and transmission channel. Outcomes of noise include undesirable effects like artifacts, unreal generated individual pixels of random quality, lines, emphasized object edges, etc. [46]. This paper adopts the Gaussian model of additive noise. This noise, also called electronic noise, is generated primarily as a result of signal amplification in the recording channel and directly in CCD (Charge Coupled Device) and CMOS sensors, as a result of thermal atomic vibration [46]. It should be stressed that an elevated noise or distortion level in input photogrammetric images can lead to significant degradation of the stereo-matching process. This applies to all stereo-vision algorithms, although to a varying extent [47]. Such a situation can directly impact the developed model.
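The direct downsampling operator ↓s described above keeps one pixel from every s × s block and can be sketched in a single line (downsampling step only, without blur or noise):

```python
import numpy as np

def downsample(x, s=2):
    """Direct downsampling x ↓ s: keep the upper-left pixel of every s x s block."""
    return x[::s, ::s]

x = np.arange(16.0).reshape(4, 4)   # tiny synthetic 4 x 4 "image"
y = downsample(x, 2)
print(y.shape)                      # (2, 2): each dimension shrinks by s
print(y)                            # [[ 0.  2.] [ 8. 10.]]
```

For s = 2, as used in this study, three out of every four pixels are discarded, which is why super resolution has to infer the missing detail rather than merely interpolate it.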
Assuming that in Equation (1) D(x) is the identity function, the random variable n adopts the Gaussian density function p_n(x_p), which can be expressed as [47]:

p_n(x_p) = 1 / (σ √(2π)) · exp(−(x_p − µ)² / (2σ²)), (4)

where x_p means the value of a single pixel of image x, µ is the mean value (0 adopted herein), and σ is the standard deviation, adopted herein in the range from 0 to 50.
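The Gaussian density above can be checked numerically. The sketch below verifies that it integrates to one over a sufficiently wide interval, using σ = 15 as an illustrative value from the stated 0-50 range.

```python
import math
import numpy as np

SIGMA = 15.0   # illustrative noise standard deviation from the 0-50 range
MU = 0.0       # zero mean, as adopted in the article

def gaussian_pdf(x_p, mu=MU, sigma=SIGMA):
    """Gaussian density p_n(x_p) of the additive noise model."""
    return math.exp(-((x_p - mu) ** 2) / (2.0 * sigma ** 2)) / (
        sigma * math.sqrt(2.0 * math.pi))

# A proper density integrates to 1; check with a simple Riemann sum over
# an interval wide enough to contain virtually all of the mass.
grid = np.linspace(-200.0, 200.0, 20001)
dx = float(grid[1] - grid[0])
integral = float(sum(gaussian_pdf(v) for v in grid) * dx)
print(round(integral, 4))  # 1.0
```

The same density also describes the per-pixel statistics of the noise injected when generating the degraded test sets.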

Restoration
As mentioned in the Introduction, the development of a method for improving image quality was based on already existing numerical methods functioning in other fields. The assumption also was that the method used in the research had to have a plug-and-play capability [42][48][49][50][51][52][53][54][55][56][57][58][59], meaning it is functional without the need for user interference. The plug-and-play methods used for image restoration problems can perform generic image restoration independent of the degradation type. That capability is especially essential in real applications, because during UAV data acquisition the degradation factors can be very different and random. Moreover, a common feature of these methods is that they are relatively simple to use, which means that they can be easily implemented within available environments or integrated with existing software, and that they are based on well-known numerical methods.
To put it simply, all the aforementioned methods solve Equation (1) in different ways. One approach is Bayesian inference; Equation (1) can then be solved using the rule of maximum a posteriori probability (MAP), which can be formally expressed as [60]:

x̂ = arg min_x (1/(2σ²)) ‖y − D(x)‖² + λ R(x), (5)

where the solution minimizes an energy function composed of a data term (1/(2σ²))‖y − D(x)‖² and a prior term λR(x) with regularization parameter λ. As stipulated by the source literature [60][61][62][63], the methods for solving Equation (5) can be divided into two groups, namely, model-based methods and learning-based methods. Both have their pros and cons. As a rule of thumb, model-based methods are rather flexible and can handle numerous tasks (different D), but they unfortunately need more calculation time. On the other hand, learning-based methods can provide results very quickly but require a long learning time and are not as flexible.
Learning is limited to a specific task (a specific D) only. For photogrammetric purposes, the solution presented herein was adopted directly after [60]; therefore, denoising will be conducted using a learning-based model, while deblurring and resolution improvement will use model-based methods. Readers who want to further explore deblurring and resolution improvement through learning-based models are referred to [6,7], where learning-based methods were applied for photogrammetric purposes. As presented there, the methods were very effective and were able to restore even very blurry images and, in the case of super resolution, generate high-quality high-resolution images; nevertheless, the methods were applied only to one separate degradation problem. Moreover, the approach presented in those works uses neural networks, which require a long learning process and a large training dataset, which is generally not problematic for one task only. The polymodal image restoring method presented here should solve 3 degradation factors at one time; therefore, a different methodology is required.
As already mentioned, denoising was conducted using a DRUNet neural network [53]. This network is classified as a convolutional neural network (CNN) and is able to remove noise of various levels based on a single model. The backbone of DRUNet is the well-known U-Net network [64]; it consists of four scales, with each of them having an identity skip connection between a 2 × 2 strided convolution (SConv) downscaling operation and a 2 × 2 transposed convolution (TConv) upscaling operation. The number of channels in each layer, from the first to the fourth, is 64, 128, 256, and 512, respectively. No activation function appears before the first and last convolutions or before the SConv and TConv layers. Additionally, every residual block has only one ReLU activation function. The training database consists of 8794 images acquired from the four datasets of [65][66][67][68].
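The four-scale layout of the denoiser backbone described above can be sketched as a simple shape calculation. This traces only tensor sizes through the encoder (2× striding per scale, channel widths 64-512), not the actual layers or weights; the 256 × 256 input size is hypothetical.

```python
def drunet_shapes(height, width, channels=(64, 128, 256, 512)):
    """Trace feature-map sizes through a four-scale U-Net-style encoder:
    each 2 x 2 strided convolution (SConv) halves the spatial resolution,
    while the channel count follows the given per-scale widths."""
    shapes = []
    for c in channels:
        shapes.append((c, height, width))
        height, width = height // 2, width // 2   # SConv downscaling
    return shapes

# A hypothetical 256 x 256 input; the decoder mirrors these scales with
# 2 x 2 transposed convolutions (TConv) and identity skip connections.
for shape in drunet_shapes(256, 256):
    print(shape)
```

The skip connections at each of the four scales let fine detail from the encoder bypass the bottleneck, which is what makes this family of architectures effective for denoising.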
The data term and prior term in Equation (5) can be decoupled using the half quadratic splitting (HQS) algorithm [69], as introduced in [53]. HQS introduces an auxiliary variable z, resulting in:

x̂ = arg min_x (1/(2σ²)) ‖y − D(x)‖² + λ R(z), subject to z = x, (6)

which can be solved by minimizing the following problem:

L_µ(x, z) = (1/(2σ²)) ‖y − D(x)‖² + λ R(z) + (µ/2) ‖z − x‖², (7)

where µ is the penalty parameter. This problem can be solved by iterating two sub-problems for x and z, with the other variable fixed:

x_k = arg min_x ‖y − D(x)‖² + µσ² ‖x − z_{k−1}‖²,
z_k = arg min_z (µ/2) ‖z − x_k‖² + λ R(z). (8)

Therefore, the x_k solution task comes down to finding a proximal point of z_{k−1} and usually has a closed-form solution dependent on D, while the z_k sub-problem corresponds to denoising x_k. For the deblurring task, assuming that the convolution in Equation (2) is executed under circular boundary conditions, a fast solution to x_k is:

x_k = F⁻¹( ( F̄(k) F(y) + µσ² F(z_{k−1}) ) / ( F̄(k) F(k) + µσ² ) ), (9)

where F(·) and F⁻¹(·) mean the Fast Fourier Transform (FFT) and the inverse FFT, respectively, and F̄(·) denotes the complex conjugate of F(·). The solution to x_k for the super-resolution task, assuming that the convolution in Equation (3) is executed under circular boundary conditions, can be taken from [53,70]:

x_k = F⁻¹( (1/(µσ²)) ( d − F̄(k) ⊙ ( (F(k) d) ⇓s / ( (F̄(k) F(k)) ⇓s + µσ² ) ) ) ), with d = F̄(k) F(y ↑s) + µσ² F(z_{k−1}), (10)

where ⊙ means a distinct block processing operator with element-wise multiplication, ⇓s denotes a distinct block downsampler, and ↑s upsampling by the factor s. Degraded images were subjected to the presented method. The polymodal method of improving the quality of photogrammetric images therefore involves solving three sub-problems, namely, denoising, deblurring, and super resolution.
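The closed-form FFT data step for the deblurring sub-problem described above can be sketched in a few lines. The kernel handling and parameter values below are illustrative; α stands for µσ², and circular boundary conditions are assumed, as in the text.

```python
import numpy as np

def psf_to_otf(k, shape):
    """Embed kernel k in an image-sized array, centred at the origin,
    so its FFT acts as the optical transfer function F(k)."""
    big = np.zeros(shape)
    big[:k.shape[0], :k.shape[1]] = k
    big = np.roll(big, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(big)

def deblur_data_step(y, k, z_prev, alpha):
    """One closed-form FFT data step for deblurring:
    x_k = F^-1( (conj(F(k)) F(y) + alpha F(z_prev)) / (|F(k)|^2 + alpha) ),
    where alpha = mu * sigma^2 and circular boundaries are assumed."""
    Fk = psf_to_otf(k, y.shape)
    num = np.conj(Fk) * np.fft.fft2(y) + alpha * np.fft.fft2(z_prev)
    den = np.abs(Fk) ** 2 + alpha
    return np.real(np.fft.ifft2(num / den))

# Sanity check: with an identity (delta) kernel and z_prev = y, the step
# must return y itself, since both terms then agree on the solution.
rng = np.random.default_rng(1)
y = rng.random((16, 16))
k_identity = np.zeros((3, 3))
k_identity[1, 1] = 1.0
x_k = deblur_data_step(y, k_identity, y, alpha=0.1)
print(bool(np.allclose(x_k, y)))  # True
```

In a full plug-and-play loop this data step alternates with a denoiser applied to x_k, which plays the role of the prior sub-problem.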

Reference Data Acquisition
The reference images were acquired using a DJI Mavic Pro (Shenzhen DJI Sciences and Technologies Ltd., Shenzhen, China) UAV. The UAV is a typical representative of commercial aerial vehicles, designed and intended mainly for amateur filmmakers. The flexibility and reliability of these platforms were quickly appreciated by the photogrammetric community.
The flight was planned and executed as a single grid [71] over a fragment of urban infrastructure. The test area covers 0.192 km², with the flight conducted at an altitude of 100 m above ground level (AGL) and a longitudinal and transverse coverage of 75%. A total of 129 images were taken and supplemented with metadata and the actual UAV position. In addition, a photogrammetric network was established, consisting of 16 ground control points (GCPs) evenly arranged throughout the entire study area. The GCPs' positions were measured with the GNSS RTK accurate satellite positioning method and determined relative to the PL-2000 Polish state grid coordinate system, with their altitude relative to the quasigeoid. Commercial Agisoft Metashape ver. 1.6.5 (Agisoft LLC, St. Petersburg, Russia) software was used to process the data. The results for the reference model, read from a report generated by the software, are shown in Table 1, and the visualizations are presented in Figure 4.


Degraded Models
The aforementioned relationships were used to create the research data. Image data acquired during the reference flight were noised, blurred, and reduced in resolution. One dataset with added Gaussian noise of σ = 15, 8 sets with added Gaussian noise of σ = 7.65 and variable blur, and 8 sets with low resolution (s = 2) and variable blur were generated. Blur was changed for each set by selecting the blur kernel (Figure 3), so that a given kernel is invariant for all images in a set. This enabled generating a total of 17 complete sets of degraded images (Table 2). Next, in accordance with the rules of the art, the same process of generating typical low-level photogrammetry products was conducted using the photogrammetric software. This process followed the same procedure and processing settings as the ones applied for generating the reference products. The result was 17 products generated from degraded images. Table 2 presents the basic data of the surveys based on degraded data. It should be noted that the real flight altitude (ca. 100 m) was fixed, and the one shown in the table was calculated by the software. Table 3 shows the root mean square error (RMSE) calculated for control point locations. During image processing, the control points were manually indicated by the operator for each dataset. It is noteworthy that it was possible to develop models from all degraded data, and that the software selected for this task completed the process without significant disturbance.
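The bookkeeping of the 17 degraded datasets can be sketched as follows. The kernel names are placeholders for the eight kernels of Figure 3, and only the set structure mirrors the description above; a noise level is not stated in the text for the low-resolution sets, so it is left unset here.

```python
# Placeholders standing in for the eight blur kernels of Figure 3.
kernels = [f"kernel_{i}" for i in range(1, 9)]

# 1 noise-only set, 8 blur + noise sets, 8 blur + downsampling sets.
datasets = [{"task": "denoise", "sigma": 15.0, "kernel": None, "scale": 1}]
for k in kernels:
    datasets.append({"task": "deblur", "sigma": 7.65, "kernel": k, "scale": 1})
for k in kernels:
    # Noise level for the low-resolution sets is not specified in the text.
    datasets.append({"task": "sr", "sigma": None, "kernel": k, "scale": 2})

print(len(datasets))  # 17 complete degraded sets, matching Table 2
```

Keeping one kernel fixed per set, as done here, ensures each of the 17 models isolates a single, reproducible degradation condition.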

Image Restoration and Model Processing
All 17 complete sets of degraded images (Table 2) were subjected to degradation elimination. The resulting images for each set were used to generate further sets of photogrammetric products. This process was similar to the one involving the reference models, with the same processing software settings being used. Figure 5, as well as Figures 6 and 7, present examples of the degraded and restored images. A visual analysis of these images indicates that the method significantly restores the image, enabling significant denoising, deblurring, and resolution improvement. In practice, the noise has been completely eliminated, and the images exhibit a significantly higher interpretative quality. Furthermore, the visual assessment of all restored images (c) indicates that they have a very similar or even identical quality. It is practically impossible to assess their degradation extent, which enables the conclusion that all products based on these images will also exhibit similar interpretative and geometric quality, regardless of the problem source.
The assessment of image quality and the evaluation of the results of the presented polymodal method, in comparison with the degraded images, were conducted on the basis of four different image quality metrics (IQM): the blind/referenceless image spatial quality evaluator (BRISQUE) [71], the natural image quality evaluator (NIQE) [72], the perception-based image quality evaluator (PIQE) [73] and the peak signal-to-noise ratio (PSNR) [74]. The chosen no-reference image quality scores generally return a nonnegative scalar. The BRISQUE score is in the range from 0 to 100; lower score values reflect better perceptive quality. The NIQE model is trained on a database of pristine images and can measure the quality of images with arbitrary distortion. NIQE is opinion-unaware and does not use subjective quality scores. The trade-off is that the NIQE score of an image might not correlate as well as the BRISQUE score with human perception of quality. Lower score values reflect better perceptive quality with respect to the input model. The PIQE score is a no-reference image quality score inversely correlated with the perceptual quality of an image: a low score value indicates high perceptive quality, and a high score value indicates low perceptive quality. A higher PSNR value indicates higher image quality, while a small PSNR value indicates large numerical differences between images. Figure 8 presents the calculated results of the aforementioned image quality evaluators in graphical form.
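Of the four metrics, PSNR is the only full-reference one and is simple to compute directly. The sketch below assumes 8-bit images (peak value 255); the test images are synthetic.

```python
import math
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    Higher values mean the test image is numerically closer to the reference."""
    mse = float(np.mean((reference.astype(float) - test.astype(float)) ** 2))
    if mse == 0.0:
        return math.inf              # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 10.0                   # constant error of 10 grey levels
print(round(psnr(ref, noisy), 2))    # 10*log10(255^2 / 100) = 28.13 dB
```

Unlike BRISQUE, NIQE and PIQE, this metric requires the ground-truth image, which is why it can only be reported for the simulated degradations, where the unmodified reference exists.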
An analysis of the aforementioned results shows that in terms of perceptive quality improvement (BRISQUE index), significant improvement was achieved in the resolution improvement subtask (task: sr), and minor improvement in denoising. The BRISQUE index values remain clearly high for the deblurring task, although the visual analysis clearly indicates significant quality improvement. On the other hand, the NIQE (natural image quality evaluator) index correctly indicates image quality improvement in every task. This means that, objectively, the quality of each image has been clearly improved, and the NIQE index values in certain cases are very similar to the reference values. Interestingly, the NIQE value for the denoise task indicates even better image quality after noise reduction than that of the ground truth image (obtained straight from a camera and unmodified).
This means that the noise present on ground truth images was minor, as is natural for the sensors of small digital cameras. Residual noise reduction on the ground truth image made it possible to eliminate the noise fully, which translated into an improved NIQE index value. The PIQE index, similarly to BRISQUE, indicates a general improvement; however, the values are clearly overstated for the deblurring task. The popular PSNR index indicated a significant improvement of image quality in all tasks, with the highest value observed for deblurring, for which BRISQUE and PIQE showed quite the opposite.
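The three degradation types addressed by the sr, denoise, and deblur tasks can be emulated with a minimal sketch. Gaussian noise, a uniform box blur, and decimation are simplifying assumptions made here for illustration; the study's actual degradation pipeline is described earlier in the article:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(42)

def add_noise(img: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Additive Gaussian sensor noise (the sigma level is an assumption)."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)

def blur(img: np.ndarray, kernel: int = 4) -> np.ndarray:
    """Uniform box blur as a crude stand-in for motion-blur kernels
    such as those in the Blur-4 / Blur-7 cases."""
    return uniform_filter(img.astype(float), size=kernel)

def downsample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Resolution reduction by decimation (the 'Low' cases)."""
    return img[::factor, ::factor]
```

Restoration then amounts to inverting these operators, which is what the polymodal method's denoise, deblur, and sr subtasks attempt on real imagery.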
Images subjected to the quality improvement method were used as a base to develop successive, typical photogrammetric products. This process followed the same procedure and processing settings as those applied for generating the reference products and degraded images. The result was 17 products generated from images with improved quality. Table 4 presents the basic study data based on images without blurring. Table 5 shows the RMSE calculated for the control point locations in the restored dataset.

Results
This section analyzes and discusses the geometry of all developed photogrammetric products based on restored images. It should be noted that all processes ran correctly and without disturbances, and the applied photogrammetric software did not indicate significant difficulties in generating the products. Figure 9 shows a full summary of the basic quality parameters relating to the photogrammetric product, namely, reprojection error, total RMSE for GCPs, and number of key points.
The reprojection error (RE) for models based on restored images takes higher values in all tasks than both the reference values and the values generated for degraded images, although the difference relative to the degraded images is minor. The total RMSE for GCPs is similar to the reference values, which means that significant improvement is observed in this respect. The number of key points is close to the values generated for the degraded models. All the aforementioned values differ considerably from the ground truth values, but it is possible to identify certain dependencies: RE values are not improved, RMSE for GCPs is improved, and the number of key points increases. The rather subtle differences in this respect support the conclusion that the model geometry will be preserved.
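One common convention for the total RMSE over ground control points is sketched below, under the assumption that it is the root of the mean squared 3-D Euclidean error; the article does not spell out the exact formula used by the photogrammetric software:

```python
import numpy as np

def total_rmse_gcp(measured: np.ndarray, reference: np.ndarray) -> float:
    """Total RMSE over GCPs: root of the mean squared Euclidean error,
    with measured and reference as (n, 3) arrays of X, Y, Z coordinates."""
    squared_errors = np.sum((measured - reference) ** 2, axis=1)
    return float(np.sqrt(np.mean(squared_errors)))
```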
The geometric quality of the developed topography models was evaluated using the methods described in [75,76], similarly to the analyses performed in [6]. An M3C2 (Multiscale Model to Model Cloud Comparison) distance map was developed for each point cloud. The M3C2 distance map computation process utilized 3D point precision estimates stored in scalar fields. Appropriate scalar fields were selected for both point clouds (reference and tested) to describe measurement precision in X, Y, and Z (σX, σY, σZ). The results for sample cases are shown in Figure 10.
The statistical distribution of M3C2 distances is close to normal, which means that a significant part of the observations is concentrated around the mean. The means (μ) for all cases took negative values, which means that each model, both degraded and restored, was displaced on average by approximately 20 cm relative to the reference model. It should be noted that the degraded models exhibited greater differences from the reference model: every model based on restored images exhibited a lower mean (μ) than the equivalent model with the same degradation parameters. The standard deviation was about 1 m. The M3C2 distance was approximately 1 m for the models in their eastern part (blue color) near the quay, where the water surface is recorded. Furthermore, one can notice a significant number of random points of extreme deviation in the northern and southern parts. For extremely damaged cases (Low-7), even a 1 m difference can be observed in the flyover area (central model part, blue color). This directly means that the flyover altitude (object altitude above ground level) was incorrectly calculated by the software. After restoring the images, this difference decreases to around zero (green points, SuperRes-7). A similar situation is observed with the noised model. This proves that the model geometry is significantly improved.
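The full M3C2 algorithm estimates local surface normals and averages points at multiple scales. As a much-simplified illustration of a signed model-to-model comparison, the sketch below computes only a vertical nearest-neighbour distance; it is a stand-in for, not an implementation of, M3C2:

```python
import numpy as np
from scipy.spatial import cKDTree

def signed_vertical_distance(reference: np.ndarray, test: np.ndarray) -> np.ndarray:
    """For each point of the test cloud, find the nearest reference point
    in the XY plane and return the signed Z difference (test - reference).
    Clouds are (n, 3) arrays; no normal estimation or multiscale averaging
    is performed, unlike in real M3C2."""
    tree = cKDTree(reference[:, :2])   # index the reference cloud by XY
    _, idx = tree.query(test[:, :2])   # nearest reference point per test point
    return test[:, 2] - reference[idx, 2]
```

In such a distance field, a negative mean would correspond to a systematic downward displacement of the tested model relative to the reference, of the kind reported above.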
In areas where the M3C2 distance takes higher values, it was observed that tie points exhibited lower measurement precision, which was manifested by higher values of σX, σY, σZ. These values are calculated in millimeters (mm). Therefore, it was decided to additionally assess the quality of all products by conducting a statistical analysis of tie point precision (dσ), expressed by Formula (12). The statistical analysis included the median and standard deviation of tie point precision (dσ), calculated for each case. The numerical results are shown in Table 6 for the median value and Table 7 for the standard deviation. A graphical comparison of the data from the tables is shown in Figure 11. An analysis of the results clearly shows that the precision of tie point position determination was improved in each case, which consequently translates to improved geometric quality of the product. The median improvement for the noise reduction task is approx. 20 mm. The improvement for the blur reduction task depended on the kernel and amounted to 15 to 20 mm, while in the case of resolution improvement, this value varied from 40 to about 60 mm. It should be noted that the values of the "Low" task are converted from the ground resolution value, which was approximately twice as high for this task.
Therefore, when comparing the results of this task with the "Super" task, the calculated precision values should be multiplied by 2. A reduction of the standard deviation was also noted for all cases, which also means improved geometric quality and precision of the product.
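Formula (12) itself is not reproduced in this excerpt. Assuming the common choice of the Euclidean norm of the per-axis precisions — an assumption, not the article's stated definition — the per-point measure and the statistics reported in Tables 6 and 7 can be sketched as:

```python
import numpy as np

def tie_point_precision(sigma_xyz: np.ndarray) -> np.ndarray:
    """Combined tie point precision per point, here assumed to be the
    Euclidean norm of (sigma_x, sigma_y, sigma_z) in mm; the exact form
    of Formula (12) is not given in this excerpt."""
    return np.linalg.norm(sigma_xyz, axis=1)

def precision_stats(sigma_xyz: np.ndarray) -> tuple[float, float]:
    """Median and standard deviation of the combined precision."""
    d = tie_point_precision(sigma_xyz)
    return float(np.median(d)), float(np.std(d))
```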
The last element of the results' analysis was a visual analysis of orthophoto maps, digital elevation models, and dense point clouds. Figures 12 and 13 contain several representative cases showing orthophoto map fragments and digital elevation model (DEM) fragments.

[Figures 12 and 13: case panels — Noise, Blur-4, Blur-7, Low-7, Denoise, Deblur-4, Deblur-7, SuperRes-7.]
The visual assessment supports the conclusion that a significant improvement in the interpretative quality of the products was achieved in each case. The improved image quality, evidenced objectively in the previous section, clearly contributes to the improved orthoimage quality. More details can be distinguished on products based on restored images; these details are also clearer and exhibit less noise. The geometric improvement demonstrated above also translates to DEM quality: DEMs based on restored images show clearly less spurious terrain unevenness. Products developed using degraded images exhibit minor but clear unevenness in places where no such object exists in reality, the source of this situation being the imprecise determination of tie points.

Conclusions
The presented method supports the photogrammetric process by eliminating image-degrading factors, while allowing accurate photogrammetric models to be generated correctly. As shown by the analysis, the geometric and interpretative quality of the models is similar to that of the reference models, and is significantly higher than that of models based on degraded images. The discussed image quality improvement method comprehensively removes three factors that degrade photogrammetric models and improves the quality of the end products.
The geometric accuracy of the models generated from the restored images was maintained, which is evidenced by the low standard deviation of the compared models. This deviation is stable for different blur kernels and various combinations of degradation factors. Degradation factors can appear in pairs or as a simultaneous cluster of all of the above. Such cases are particularly common for small sensors, in poor lighting (e.g., an overcast sky), and during fast UAV flight. The discussed method allows images from such measurements, which are not fully correct, to be used to ultimately develop a correct model.
The interpretative quality of textured products and images clearly increased. It has been shown, beyond any doubt, that reducing the degrading factors significantly improves image perception, and the objects depicted in an orthoimage are clearer.
The polymodal method of improving the quality of degraded images applied within these studies has been tested using typical photogrammetric software. Surprisingly, the software turned out to be rather resistant to these factors and enabled generating models based on all test data, even the ones with the highest degradation factors.

Degraded images should be eliminated from a typical, unmodified photogrammetric process. In specific cases, it may turn out that all images within an entire photogrammetric flight have various defects. Contrary to appearances, such situations are not rare. The camera's instrumentation and control system can adjust the exposure for each image, and in the case of dynamic scenery with changing lighting, blur and noise can appear on images from a single flight. The presented method harmonizes all images, eliminating the degrading factors.
Commonly used photogrammetric software, especially cloud computing versions, could incorporate this additional option to eliminate undesirable degradation. The method is fast enough that a user would hardly notice any significant slowdown of the photogrammetric model construction process. Furthermore, the versatility of the method and its independence from the character of the degradation mean that its practical application will significantly expand the capabilities of photogrammetric software.