Analysis of UAV Flight Patterns for Road Accident Site Investigation

Abstract: Unmanned Aerial Vehicles (UAVs) offer a promising solution for road accident scene documentation. This study investigates the occurrence of systematic deformations, such as bowling and doming, in the 3D point cloud and orthomosaic generated from images captured by UAVs along a horizontal road segment, while exploring how adjustments in flight patterns can rectify these errors. Four consumer-grade UAVs were deployed, all flying at an altitude of 10 m while acquiring images along two different routes. Processing solely nadir images resulted in significant deformations in the outputs. However, when additional images from a circular flight around a designated Point of Interest (POI), captured with an oblique camera axis, were incorporated into the dataset, these errors were notably reduced. The resulting measurement errors remained within the 0-5 cm range, well below the customary error margins in accident reconstruction. Remarkably, the entire procedure was completed within 15 min, which is half the estimated minimum duration for scene investigation. This approach demonstrates the potential of UAVs to efficiently record road accident sites for official documentation, obviating the need for pre-established Ground Control Points (GCPs) or the adoption of Real-Time Kinematic (RTK) drones or Post Processed Kinematic (PPK) technology.


Introduction
Although the safety of road traffic is expected to increase with the advancement of autonomous vehicles [1], road accidents continue to occur. Apart from minor collisions, the police investigate the accident site [2] in order to document the available data, including skid marks and the final resting positions of the vehicles [3,4]. The accuracy of the records depends on a multitude of factors, ranging from the tools and methods applied to the professional knowledge of the police personnel [5]. Upon completion of the on-site investigation, a final report is compiled. This report encompasses various essential details, including a detailed sketch of the accident scene, complete with measurements. Subsequently, the comprehensive documentation compiled by the police is handed over to the accident reconstructionist, a forensic expert well-versed in discerning the accident process. The accident reconstructionist employs specialized software to simulate the accident process [6]. The reconstructed representation of the accident serves as one of the most important pillars of the legal procedure aiming to establish liability for the accident.
In the course of a standard investigative procedure, the relevant road section is partially or entirely closed. This inevitably leads to traffic congestion, which has negative psychological effects on drivers [7], increases the amount of emissions [8] and, in some cases, leads to further, secondary accidents [9,10]. Thus, it is of primary importance to shorten incident duration [11] and also clearance time, i.e., the time required for the collision investigator to complete the on-site accident investigation procedure [12]. The reported values for average clearance times vary notably and span a wide range, being significantly influenced by the specific regulatory requirements of the respective country [11][12][13]. No official statistics on clearance time were found for Hungary. However, reference [14] provides a substantial estimate, indicating a broad range from 30 to 180 min.
To expedite clearance time, it is imperative to streamline the on-site data recording process. Currently, certain steps of the on-site investigation procedure cannot be sped up or omitted (such as closing off the accident scene or highlighting certain marks). Taking photographs at accident scenes is now part of the standard procedure [15], which is a step forward in reducing the duration of the recording process compared with traditional methods that solely apply mechanical measurement tools and sketch the scene manually [16]. Also, camera footage recorded by highway surveillance systems [17] or video recordings obtained by the scene investigator [18] may be added to the data set the forensic expert relies on. However, the addition of such recordings does not shorten the on-site investigation procedure.
Through the application of an unmanned aerial vehicle (UAV), i.e., a drone [19][20][21][22][23], accident data recording time can be decreased. This is because certain phases of the traditional data recording process can be completely replaced by a UAV flight. Thus, the time required for the recording procedure can be diminished significantly [24], even compared with the methodology applying Close Range Photogrammetry [25] without drones [26]. Ideally, the result of the UAV recording procedure and subsequent image processing is an accurate, high-resolution 2D orthomosaic, which correctly reflects true distances. Thus, the orthomosaic can replace the site sketch, which is traditionally prepared manually by the police officer. Moreover, the photos taken by the UAV are also used to create an accurate 3D point cloud of the scene. This 3D model of the original accident site can be used by the accident reconstructionist as a simulation environment (e.g., [27]).
Our ongoing research project is dedicated to investigating the feasibility of employing UAVs for the precise documentation of road accident sites. UAVs have been applied in diverse fields, including mapping [28], seeding [29] and traffic monitoring [30], and have also been reported to record road accident sites accurately [31]. Our primary goal is to find the simplest possible method for this task, one which is cost- and time-effective and, at the same time, does not require a high level of professional expertise. The widespread application of such a method would enable police officers to speed up the on-site crash investigation procedure and open the affected road section for traffic sooner. As time is of primary concern, the methods developed here do not apply Ground Control Points (GCPs), which are frequently utilized to increase accuracy [32,33]. Also, for the sake of simplicity, the default camera settings are applied, as opposed to [34]. RTK drones [35,36] or Post Processed Kinematic (PPK) technology [37] are not used in our experiments either. Thus, our calculations rely only on the Global Navigation Satellite System (GNSS) locations of the camera positions recorded by the UAV to georeference the results [38,39].
Earlier studies have shown that neither the use of RTK drones [40] nor the application of GCPs improved horizontal accuracy significantly [41], with the differences typically remaining below 5 cm. This level of accuracy is satisfactory for road accident investigation [31] (also see the calculation in Section 3.4).
Accuracy with respect to flight pattern was investigated in [42]. Data obtained of a sample road accident site by a UAV following a circular path (point of interest (POI) technique), and a single grid path (waypoint technique) combined with a circular path, were compared. The results suggest that the accuracy (which was in the cm range) did not improve if the circular path was combined with the grid technique. On the contrary, Sanz-Ablanedo et al. [43] suggest that while nadir images are an excellent source of data for the orthomosaic, if photos are taken solely with a vertical camera axis, serious deformations of the point cloud result, especially along the vertical axis. To solve this problem, [43] suggest that the camera should face the center of the investigated site while the UAV follows a grid pattern. Similarly, [44,45] propose that an oblique camera angle can reduce systematic deformations.
Road alignments can be very diverse, thus road accidents also occur in very diverse environments [46], e.g., at road junctions or straight road sections. In the latter case, the skid marks and the affected vehicles are typically found in a long stretch of the road, i.e., in a long and narrow rectangle. In the preliminary stages of our project, it was found that when recording long straight road sections, the output is not accurate enough if the UAV flies above the road section in parallel lines (in single grid mode), taking nadir images. Even with high percentages of overlap between individual images, vertical alignment was faulty: the horizontal road section seemed to tilt (i.e., the road surface was not horizontal but sloped to one side) or to bend (convex image: doming; concave image: bowling), as illustrated in Figure 1.
As discussed in [43,47], such systematic deformations tend to occur in the 3D model for a variety of reasons, including, among other factors, the calibration of the camera, the geometry of the image acquisition, the number and overlap of the images and the flight patterns.
The aim of the present study is to explore whether doming and bowling deformations occur irrespective of the UAV type applied, and to test a simple method to eliminate these vertical alignment errors through experiments with parallel grid paths with the nadir camera setting and circular flight patterns with oblique camera settings (POI), similarly to [43,48]. The overall aim of the research is to find a simple and cost-effective method for road accident site recording; thus, the output point cloud and orthomosaic should be suitable for forensic road accident analysis.
The experiments were conducted on a long, horizontal, public road segment in Hungary, applying four different consumer-grade non-RTK UAVs. Data processing was standardized, employing identical Python software and settings and applying the relevant functions of the Agisoft Metashape 2.0.1 program [49]. It was found that systematic deformations occurred if the test site was photographed solely with nadir camera positions, irrespective of the type of the drone. However, if photos taken along a circular path over a section of the test site, with the camera aimed at the center of the circle, were added to the nadir images, the resulting 2D orthomosaic and 3D point cloud accurately corresponded to the real terrain. In the latter scenario, horizontal and vertical accuracy was satisfactory, which makes the method suitable for road accident site investigation.
The remainder of this article is organized as follows. Section 2 provides a detailed description of the experimental setting, encompassing the employed UAVs, the associated software and a comprehensive exposition of the flight patterns. Section 3 presents the experimental results. Section 4 discusses the findings and offers insights into potential directions for future research. Section 5 concludes the article.

Materials and Methods
For the test flights, four different, commercially available, non-RTK DJI drones were used (Table 1, Figure 2). All drones flew along the same paths at an altitude of 10 m, so that differences between them could be revealed. Where possible (UAVs b and c), the flight route was pre-programmed with the Litchi software [50]. In the other cases, the flight was carried out under manual control, with photos taken at approximately the same positions as in the programmed flights. Images were processed in an identical program environment, using the Python module [51] of the Agisoft Metashape 2.0.1 program [49]. The experiment was conducted along a straight, horizontal section of the lower-ranking Road 5207 (47.23501099292472, 19.11899903515487) in Hungary. The site was selected because the road runs on a straight embankment, no trees interfere with the view and there are road markings (Figure 3). At the end of the section there is a junction with a dirt road, but the junction itself is paved (Figure 4). Ground resolution was expected to be 3.3 mm/px, based on our preliminary tests; this image resolution ensures that the resulting orthomosaic and point cloud can be used well for accident reconstruction purposes. Flight altitudes that ensure this ground resolution were calculated using the following Formulas (1) and (2) [56] (p. 289).
AGL1 = (GSD · f · imW) / sW (1)
AGL2 = (GSD · f · imH) / sH (2)
where AGL1, AGL2-flight altitudes above ground level; GSD-target ground sample distance; f-focal length; imW, imH-image width and height in pixels; sW, sH-sensor width and height. Thus, the flight altitudes AGL1 and AGL2 could be calculated from the relevant data of the UAVs applied (Table 2). As the values are between 7.0 and 11.5 m, the uniform altitude of the test flights was set to 10 m. In order to check vertical alignment, two techniques were applied. First, measurements of a given horizontal distance on the orthomosaic and the corresponding point cloud were compared. If the two measurements were equal, there was no height difference in the point cloud. However, if the point-cloud distance was longer than the corresponding distance on the orthomosaic, then one of the points in the point cloud was at a different elevation.
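The altitude calculation can be sketched with the standard ground-sample-distance relation; the focal length, sensor width and image width used below are illustrative placeholder values, not the specifications of the four UAVs in Table 2.

```python
def flight_altitude_agl(gsd_m_per_px, focal_mm, sensor_width_mm, image_width_px):
    """Altitude above ground level (m) that yields the target ground
    resolution, from AGL = GSD * f * imW / sW (the millimeters of the
    focal length and sensor width cancel, so the result is in meters)."""
    return gsd_m_per_px * focal_mm * image_width_px / sensor_width_mm

# Target ground resolution of 3.3 mm/px with assumed camera parameters:
agl = flight_altitude_agl(0.0033, focal_mm=4.5, sensor_width_mm=6.3,
                          image_width_px=4000)
print(round(agl, 1))  # roughly 9.4 m, within the 7.0-11.5 m range reported
```

Running this for each camera in Table 2 is how the per-UAV altitude range would be obtained; the uniform 10 m mission altitude is a round value inside that range.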
Also, a custom-made experimental device was set up, on which two plumb bobs were suspended. Colored plastic balls were threaded through their centers onto the strings of the plumb bobs in order to make the position of each string visible in the photos. The distance between the centers of the large purple balls was 1 m. The device was set up on the side road, in a location visible to the drone in all flight patterns (Figure 5). The device could not be set up on the main road itself, as it would have disturbed the traffic.
After data processing was complete, spheres were fitted to the points representing the large balls in the point cloud, and the positions of their centers were compared with the vertical axis (Figure 6).
The centers of the balls were on a vertical line in reality. If the point cloud is accurate, the centers of the fitted virtual spheres should also be positioned on the same vertical line. However, if the point cloud is not accurate vertically, an angle error α arises (Figure 7).
The angle error α can be calculated with the following Formula (3), if the distance between the two center points is regarded as 1 unit, and x1 = 0 and y1 = 0:
α = arcsin(√(x2² + y2²)) (3)
where α-angle error; x2-deviation of the center of the lower sphere from the origin along axis x; y2-deviation of the center of the lower sphere from the origin along axis y.
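As a minimal sketch, Formula (3) can be implemented directly; this assumes, as above, that the upper sphere center sits at the origin and that the center-to-center distance is normalized to 1 unit.

```python
import math

def angle_error_deg(x2, y2):
    """Angle error (degrees) between the vertical axis and the line
    joining the two fitted sphere centers, with the upper center at the
    origin and a unit center-to-center distance."""
    horizontal_offset = math.hypot(x2, y2)  # deviation in the x-y plane
    return math.degrees(math.asin(horizontal_offset))

# A 10 mm horizontal offset over the 1 m baseline is roughly 0.57 degrees:
print(round(angle_error_deg(0.01, 0.0), 2))  # 0.57
```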
To check horizontal accuracy, white arrows were placed at the side of the main road, so that they were visible on both the point cloud and the orthomosaic. The distance between the points of the arrows was measured with a laser distance meter (7.42 m). The results were compared with the measurements on the point cloud and the orthomosaic. Furthermore, the distances between the road markings along the sides of the road were measured in the same way.
Each UAV was started from the side road and followed the same flight pattern (Figure 8). Block 1 consisted of long parallel L-shaped routes along the main road and above a small section of the side road (path length: 371 m), with vertical (nadir) camera alignment. In Block 2, the UAV followed a circular path around a virtual point of interest (POI) in the junction (path length: 52 m), with the camera facing a point 1 m above ground level in the center of the circle (camera angle: 38°, nadir: 0°). The two routes (Blocks 1 + 2) were combined, thus adding up to a 423 m long path (Figure 6).

Results
Each UAV followed the same path, at the planned altitude of 10 m above the road section. The flying time was 10 min in each case. Table 3 shows the actual flight altitudes, the number of images captured during each mission and the size of the image set. Block 1 images were processed separately from Block 2 images. Also, images from both Block 1 and Block 2 were processed together.

General Properties of Orthomosaics and Point Clouds
The photos were processed, resulting in an orthomosaic and a point cloud for each part of the mission: (1) Block 1; (2) Block 2; (3) Blocks 1 + 2. Table 4 shows the main characteristics of the processing and its results. Photos were taken approximately every 2 m, which ensured that each designated point was visible on at least 9 different photos within a mission (blue area in Figures 9 and 10).
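The along-track coverage implied by the 2 m capture spacing can be sketched numerically; the sensor dimension and focal length below are illustrative placeholders, not the specifications of the UAVs in Table 1, so the result only indicates the order of magnitude for a single flight line.

```python
def images_seeing_point(agl_m, sensor_dim_mm, focal_mm, spacing_m):
    """Number of consecutive images along one flight line whose ground
    footprint contains a given point: footprint length / capture spacing."""
    footprint_m = agl_m * sensor_dim_mm / focal_mm  # pinhole-camera footprint
    return int(footprint_m // spacing_m)

# 10 m altitude, assumed 4.7 mm along-track sensor size and 4.5 mm focal
# length, photos every 2 m: about 5 images per line; overlapping parallel
# lines of the grid raise the total count per point further.
print(images_seeing_point(10.0, 4.7, 4.5, 2.0))  # 5
```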

Block 1 (Nadir Images)-Deformations
As shown in Figure 11, the expected deformations occurred in the point clouds generated from Block 1 images, i.e., solely nadir images taken along a long stretch of a horizontal road section, irrespective of the type of the UAV. Although the deformations varied, none of the resulting point clouds could have been used for accident reconstruction due to the lack of accuracy.


Blocks 1 + 2
In the next step of the experiment, Block 1 images and Block 2 images taken during the same mission were combined. Figure 12 shows the point cloud and the corresponding camera images. Block 2 images were also processed separately for further analysis. However, as those did not include the whole of the road section discussed in the present study, the results of the circular missions alone (i.e., Block 2) are not discussed here.

As shown in Figure 13, if images from the two blocks were combined, the resulting orthomosaic and point cloud were correctly oriented in space. Bowling, doming and tilting deformations were minimized.

Accuracy
The vertical accuracy assessment in this experiment, which did not depend on a GNSS receiver, involved measuring the discrepancies between points assumed to be at identical altitudes. These points were situated on opposing sides of the horizontal road, and a comparison was made between the orthomosaic and the corresponding point cloud.

The processing software shows two distance values between two marked points of the point cloud (Figure 14, A-B and C-D). The values marked on the red line as "3D" show the distance measured by the software between the respective points in the point cloud, using their coordinates. The values marked as "2D" show the distance between the images of the two points projected onto a common horizontal surface. This is illustrated in Figure 14. The distance between points C and D in the point cloud is 2.14 m (3D value). When these two points are projected onto a horizontal plane (points C and E), the distance between C and E is 1.92 m (2D value). The processing software provides both measurements for each distance measured in the point cloud. If the 3D and 2D values are the same (as in the case of the A-B distance), the 3D distance and the distance of the projection are equal, i.e., the two points lie on the same horizontal plane. In the present case, if the point cloud was generated solely from nadir images, a tilting effect was detected (Figure 11c). However, if oblique images were added to the processed image set, the vertical orientation was corrected (Figure 14). In all four examined cases, the vertical alignment is satisfactory (Table 5). Another method was also tested to check the vertical accuracy of the point cloud. The custom-made device (see Figure 5 in Section 2 above) had two vertical strings with plumb bobs, one fitted with small and the other with larger balls. Only the larger balls were clearly visible on the point clouds, thus the string with the smaller balls could not be taken into account.
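The height difference implied by a pair of 2D/3D readings follows from the Pythagorean relation between a distance and its horizontal projection; a minimal sketch:

```python
import math

def elevation_difference(d3d_m, d2d_m):
    """Height difference (m) between two points, given their point-cloud
    (3D) distance and its horizontal (2D) projection; 0 means the points
    lie on the same horizontal plane."""
    if d2d_m > d3d_m:
        raise ValueError("a horizontal projection cannot exceed the 3D distance")
    return math.sqrt(d3d_m ** 2 - d2d_m ** 2)

# The C-D example from Figure 14 (3D = 2.14 m, 2D = 1.92 m):
print(round(elevation_difference(2.14, 1.92), 2))  # about 0.95 m
```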
The fitting of spheres to the images of the balls on the point cloud was successful. In spite of this, this method could not be applied for determining vertical accuracy. The reason for this was that the ground resolution values of the emerging point clouds were between 2.8 and 5.9 mm/px. This means that if the sphere is fitted with a 1 px error, the resulting angle error would correspond to a width difference of 20.72-43.66 mm, respectively, when examining the width of the road (7.42 m). This error value is much greater than the one produced by the first method (Table 5). Thus, the error margin of the second method is too large for this experiment.
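This error budget can be reproduced with a short calculation; the figures below follow the reasoning in the text (a 1 px fitting error over the 1 m baseline between the sphere centers, projected to the 7.42 m road width), so small rounding differences from the published 20.72-43.66 mm values are expected.

```python
import math

def projected_width_error_mm(gsd_mm_per_px, px_error=1.0,
                             baseline_m=1.0, span_m=7.42):
    """Horizontal error (mm) at `span_m` caused by fitting a sphere
    center `px_error` pixels off over a vertical baseline of `baseline_m`."""
    angle_rad = math.atan((gsd_mm_per_px * px_error / 1000.0) / baseline_m)
    return math.tan(angle_rad) * span_m * 1000.0

# Ground resolutions of the point clouds ranged from 2.8 to 5.9 mm/px:
print(round(projected_width_error_mm(2.8), 1))  # about 20.8 mm
print(round(projected_width_error_mm(5.9), 1))  # about 43.8 mm
```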
Horizontal accuracy was checked on both the orthomosaics and the point clouds (Figure 15). On the site, the distance between the heads of the arrows was 7.42 m according to our measurements with a laser distance meter. Table 6 shows the errors in the horizontal measurements. The error is below 1% in each case, which is much lower than the customary error margin of an accident reconstruction diagram. The reason for this relatively large error margin in accident reconstruction is practical. For example, the beginning of a skid mark can usually be determined only with an error of several centimeters, as it is not always visible. However, such an error only minimally modifies the results of a forensic investigation. To illustrate this, let us have a look at the speed calculation. The speed of a vehicle at the beginning of a skid mark can be determined with the following Formula (4).
v = √(2 · a · s) (4)
where v-speed at the beginning of the skid mark; a-average deceleration; s-length of the skid mark. If a skid mark is measured to be 15 m at the accident scene, and the average deceleration of the vehicle is 7.5 m/s², the speed of the vehicle at the beginning of the skid mark is 54 km/h. Table 7 gives the resulting speed values if we suppose a ±1% error in the length measurement. As the data in Table 7 show, a 1% difference in the length of the skid mark results in only a 0.5% difference in the calculated speed (0.3 km/h in the above example). The forensic expert, however, determines the speed with a much greater margin, generally as 53-55 km/h in this case.
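Formula (4) and the sensitivity figures in Table 7 can be reproduced directly:

```python
import math

def skid_speed_kmh(skid_length_m, deceleration_ms2=7.5):
    """Speed (km/h) at the start of a skid mark, v = sqrt(2 * a * s)."""
    return math.sqrt(2 * deceleration_ms2 * skid_length_m) * 3.6

# A 15 m skid mark with 7.5 m/s^2 average deceleration gives 54 km/h,
# and a +/-1% error in the measured length shifts the result by ~0.5%:
print(round(skid_speed_kmh(15.0), 1))         # 54.0
print(round(skid_speed_kmh(15.0 * 1.01), 1))  # 54.3
print(round(skid_speed_kmh(15.0 * 0.99), 1))  # 53.7
```

The square root in Formula (4) is what damps the error: a 1% change in s propagates as only about 0.5% in v.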

Discussion
This study presented the outcomes of an experimental investigation involving four consumer-grade non-RTK UAVs, which took photos along the same path over the same public road section, in an imitation of road accident scene documentation. The study pursued a two-fold objective. The first aim was to test whether systematic deformations occur in the point clouds created from the images for all UAVs, as expected based on the literature (i.e., [43][44][45][47]) and our earlier experience. Secondly, if such deformations were identified, the research aimed to test a straightforward and expedient method for their mitigation.
Concerning deformations observed in point clouds, as illustrated by Figure 11, it became evident that irrespective of the type of UAV employed, the processing of solely nadir images captured in single grid mode yields point clouds characterized by pronounced deformations. Consequently, doming, bowling and tilting effects could be reproduced in the experiment. This result confirms our earlier experience, when such distortions occurred during accident scene documentation (Figure 1). Furthermore, these findings corroborate the widely discussed findings in the literature. If photos are acquired in the conventionally applied single grid flight pattern, i.e., when the drone flies in parallel lines over an elongated rectangular area with the camera directed vertically (i.e., capturing solely nadir images), the resulting point cloud will exhibit systematic deformations [43][44][45][47].
Furthermore, our results underpinned earlier findings [43,45] that with flight pattern modification, especially with the combination of images taken with vertical and oblique camera axes, the above effects can be minimized. If the nadir image set was complemented with images taken during the same mission with oblique camera angles around a POI on a circular route (Figures 8 and 12), deformations such as doming, bowling and tilting were successfully diminished (Figure 13). However, this result contradicts the claim of [42], in whose experiment "combining POI and waypoint techniques did not improve accuracy" (p. 12). The reasons for this difference cannot be established, as [42] did not provide data about the camera angle during their circular mission.
On the topic of camera angles, [45] suggested gentle forward inclinations between −5° and 15° (nadir: 0°), which resulted in accurate point clouds. The authors of [48] experimented with camera angles between 0° and 35°. We applied a 38° angle for the circular mission (diameter: 20 m, altitude: 10 m). This inclination also proved successful in mitigating deformations and thus increasing accuracy. The angle ensured that the POI around which the circular mission was carried out was at 1 m above the ground. This POI altitude is ideal for road accident recording, as car deformations typically occur at around this height.
As regards flight altitudes, following the calculation proposed in [56], the flights in our experiment were conducted at 10 m. This contrasts with the suggestions of [31], who surveyed road accidents from much higher altitudes in urban settings (20, 25 or 30 m) in order to avoid obstacles such as public illumination poles. Even higher altitudes (60 and 80 m) were tested by [38], while [42] suggested flying the drone at 5, 7 and 10 m, respectively. Our experiment was conducted in an "ideal" setting, where no road-side obstacles were present. Thus, the lower flight altitude (10 m) posed no problems, and this altitude also ensured a sufficiently high resolution of the point cloud and the orthomosaic. Shadowing effects did not cause serious distortions either.
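The altitude calculation of the kind referenced from [56] can be illustrated with the standard pinhole ground-sample-distance relation, GSD = SW · AGL / (f · HR), solved for the altitude. The sensor values below are hypothetical placeholders, not the specifications of the tested drones.

```python
def altitude_for_gsd(gsd_m_per_px: float, focal_mm: float,
                     sensor_width_mm: float, horiz_res_px: float) -> float:
    """Flight altitude AGL [m] that yields the requested ground sample
    distance, from the pinhole relation GSD = SW * AGL / (f * HR)."""
    return gsd_m_per_px * horiz_res_px * focal_mm / sensor_width_mm

# Hypothetical camera: 8.4 mm focal length, 6.3 mm sensor width, 4000 px wide.
# A 3 mm/px ground sample distance then calls for roughly a 16 m altitude.
print(round(altitude_for_gsd(0.003, 8.4, 6.3, 4000), 1))
```

Computing the altitude separately from the horizontal and vertical sensor resolutions, as the variable list accompanying the tables does (AGL1 and AGL2), simply repeats this relation with (SW, HR) replaced by (SH, VR).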
While [38] demonstrated that the direct georeferencing (DG) approach can successfully be applied to gain data for topographic surveying, our study reaffirms the conclusion reached by [42] that utilizing UAVs for accident scene recording provides feasible and accurate results, characterized by errors at the centimeter scale. This accuracy is satisfactory for police documentation and accident reconstruction purposes. The error margin arising from this method is considerably lower than the error thresholds commonly employed by forensic experts in road accident reconstruction.
Similar to the present study, [34] also aimed at devising a quick method for applying UAVs to accident reconstruction. However, their study focused on camera calibration, in a similar manner to [38]. In the present study, by contrast, default camera settings were applied and no pre-calibration of the camera was performed. The reason for this is that the long-term objective of our research is to create a methodology that can be used by police personnel to record accident data. To be applied reliably and easily, the system should be as automatic as possible, requiring minimal intervention from the operator of the UAV.
As for the length of the recording procedure, in the present investigation, flying time was 10 min. In the proposed system, when the UAV lands, the photos are uploaded to a server, where a preliminary, low-quality processing is carried out. The quality of processing can be set by choosing low values for Accuracy and Depth map quality. The aim of such low-quality processing is solely to determine whether the photos can be used to create an orthomosaic and a point cloud. Should the processed images prove inadequate, a new mission with a higher overlap percentage can be carried out before the accident scene is modified or cleared. High-quality processing of the same image set takes 3-4 times longer and can be carried out any time after data upload. Our experiments revealed that low-quality processing of the acquired image set lasted 3-6 min. Consequently, the entire process, including data capture and preliminary analysis, consumes approximately a quarter of an hour. This time frame is notably shorter than the minimum estimate (30 min) for accident scene clearance in Hungary [14]. However, it is essential to acknowledge that some additional minutes are also necessary both before and after the UAV flight to complete the on-site investigation procedures comprehensively.
The proposed scheme for road accident site recording with a UAV is presented in a flowchart (Figure 16). As Figure 16 illustrates, the following steps should be followed when documenting an accident scene with a UAV.

•	Step 1: Delineate the area to be photographed. The boundaries of the relevant accident site must be identified. Establish the POI, i.e., the point around which the circular part of the mission is to be executed. Typically, this corresponds to the location where the vehicles involved in the accident are situated.
•	Step 2: Obtain nadir images with the UAV following a grid path. The images should cover the whole area, with suitable longitudinal and transversal overlap (generally 60%). The number of images depends on their resolution and the dimensions of the site; in an average case, 60-100 images should be taken. A flight altitude of 10 m results in a point cloud and an orthomosaic with adequate resolution.
•	Step 3: Obtain oblique images around a POI. Images should be taken following a circular path, with the camera facing a characteristic point of the scene (e.g., a vehicle in its final resting position) at an oblique camera angle.
•	Step 4: Upload data; process images. The images are uploaded to the processing site, where preliminary processing takes place.
•	Step 5: Quality check of point cloud and orthomosaic. The resulting point cloud and orthomosaic should be inspected.
•	Step 6: Modify parameters. If the quality of the point cloud and orthomosaic is not satisfactory (e.g., distorted, fragmented or with faulty spatial orientation), the flight path should be modified to increase overlap, and a new image set should be obtained.
•	Step 7: Mission complete. If the quality of the resulting point cloud and orthomosaic is satisfactory, the accident documentation process is complete.
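The image-count estimate in Step 2 can be approximated from the image footprint and the overlap percentages. The sketch below is a rough planning aid under simplifying assumptions (rectangular site, nadir images, a fixed footprint per image); the footprint and site dimensions used are hypothetical, not the tool or exact values used in the study.

```python
import math

def grid_image_count(site_len_m, site_wid_m, footprint_len_m, footprint_wid_m,
                     overlap_long=0.6, overlap_trans=0.6):
    """Rough number of nadir images for a grid mission: the spacing between
    exposures shrinks with overlap; count exposures per line, then lines."""
    step_long = footprint_len_m * (1.0 - overlap_long)    # along-track spacing
    step_trans = footprint_wid_m * (1.0 - overlap_trans)  # between flight lines
    per_line = math.ceil(site_len_m / step_long) + 1
    lines = math.ceil(site_wid_m / step_trans) + 1
    return per_line * lines

# Hypothetical 100 m x 10 m road segment with a ~13 m x 10 m image footprint:
print(grid_image_count(100, 10, 13.0, 10.0))  # 84, within the 60-100 range above
```

Doubling the overlap fraction roughly doubles the image count in each direction, which is why Step 6 prescribes increasing overlap (and thus a denser image set) when the first processing attempt fails.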
For the purpose of accident scene documentation, any UAV equipped with a camera featuring a resolution similar to that of the drones tested here is deemed suitable. It is essential to ensure that the GNSS coordinates of the drone are recorded and saved within the metadata of the captured images. All four UAVs under evaluation in this experiment were found to be applicable for accident scene documentation. However, considering costs and ease of operation, small and economically viable drones, such as the DJI Mavic Air 2 and DJI Air 2S, are recommended for this specific application.
In the future, analogous experiments at other (higher and lower) altitudes are planned. This approach is essential because the presence of illumination poles, overhead electricity and telecommunication lines, as well as road-side trees, may present challenges for accident investigators, especially in urban areas. However, higher altitudes result in lower image resolution, potentially diminishing the overall accuracy of the data. Hence, we anticipate the need for an optimized solution for such urban accident scenes in the long term. Given that road accidents frequently occur in urban settings, this research direction carries substantial relevance. Additionally, exploring the employment of RTK drones is another potential avenue for increasing accuracy in accident scene documentation.

Conclusions
This article has presented the findings of an Unmanned Aerial Vehicle (UAV) experiment conducted along a lengthy, straight, horizontal section of road featuring a crossing at one end, emblematic of a common road accident site. The UAVs maintained a consistent altitude of 10 m while capturing images with default camera settings. The flight path encompassed two segments: for Block 1 images, the camera was oriented vertically (0°) and the UAV followed parallel L-shaped trajectories, whereas for Block 2 images, the drones circled a virtual Point of Interest (POI) positioned 1 m above ground level, employing an oblique camera setting (38°). The number of images captured during the 10 min flying time of each mission was 162 or 163. The total size of images per mission varied between 639 MB and 1.8 GB. The preliminary processing time ranged from 3 min 2 s to 6 min. Consequently, the documentation of the scene with a UAV took approximately 15 min.
The experiments described here have revealed that taking solely nadir images in single grid mode with a UAV along an extensive road section (Block 1) does not yield satisfactory or accurate results for accident reconstruction. This is due to the presence of systematic errors in the point cloud, resulting in deformations characterized by bowling, doming and tilting. These deformations occurred despite the high number of images taken and the substantial overlap between the photos; notably, relevant road points were represented in nine different images (Figure 9).
Conversely, when images taken with an oblique camera setting around a point of interest (POI) on the same road section within the same flight mission were added to the processed image set (Blocks 1 + 2), these deformations were effectively minimized. Thus, combining nadir images taken along the relevant road section with images around a POI may be a successful, accurate and quick method for recording road accidents via UAV technology.
In this test, the circular mission was executed at one end of the relevant road section at 10 m altitude.Nevertheless, positioning the POI at any other part of the road section affected by the accident is also expected to produce equally satisfactory results.To capture additional vehicle details relevant to the accident, selecting the final resting positions of the involved vehicles as the center of the circular mission is a rational choice.
As regards accuracy, the error of the horizontal measurements remained in the 0-5 cm range (Table 6), which is well below the error margin generally employed in accident reconstruction. Vertical accuracy was also satisfactory (Table 5). Consequently, the method delineated in this study can be applied to document road accidents accurately and quickly, while offering immediate feedback on the success of the UAV flight. Considering that the entire recording process, encompassing preliminary image processing, consumed approximately 15 min, it is anticipated that the utilization of UAVs in road accident reconstruction can substantially reduce on-site investigation times and consequently expedite road clearance procedures.

Figure 1. Deformations detected in the results of image processing. (a) Three-dimensional point cloud of a real accident scene taken in a single grid path with vertical camera: tilted image (Source: Varga, P.); (b) Three-dimensional point cloud of a real accident scene taken in a single grid path with vertical camera: seemingly correct (Source: Harmat, I.); (c) Point cloud (b) from above (Source: Harmat, I.); (d) Deformed orthomosaic corresponding to (c) (Source: Harmat, I.).

Figure 3. The experiment site: a long straight road section (photo taken from the ground).

Figure 4. The experiment site: junction at the end of the straight section (photo taken from the ground).

Figure 5. Custom-made device for showing the vertical set-up at the experiment site (photo taken from the ground).

Figure 6. (a) The device in the point cloud; (b) Spheres (green) fit to the points representing the large balls (purple) in the point cloud.

Figure 7. Calculation of angle error. (a) If the balls are not on the same vertical axis, the angle error is α; (b) View from above.
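The angle-error computation illustrated in Figure 7 can be sketched as follows. The ball-center coordinates below are hypothetical values standing in for the centers of the spheres fit to the point cloud (Figure 6), not measurements from the experiment.

```python
import math

def tilt_angle_deg(lower_center, upper_center):
    """Angle α between the line joining the two ball centers and the true
    vertical: horizontal offset of the centers versus their height difference."""
    dx = upper_center[0] - lower_center[0]
    dy = upper_center[1] - lower_center[1]
    dz = upper_center[2] - lower_center[2]
    horizontal_offset = math.hypot(dx, dy)  # offset seen "from above" (Fig. 7b)
    return math.degrees(math.atan2(horizontal_offset, dz))

# Hypothetical sphere-fit centers 1 m apart vertically, offset 2 cm horizontally:
alpha = tilt_angle_deg((0.0, 0.0, 0.0), (0.02, 0.0, 1.0))
print(f"{alpha:.2f} deg")  # ~1.15 deg
```

A perfectly reconstructed vertical device gives α = 0; any residual tilt of the point cloud shows up directly as a nonzero α between the fitted ball centers.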

Figure 8. Flight paths. Yellow lines indicate the route of the UAV. Purple markers mark the positions where the UAV turned. (a) Block 1: L-shaped path (vertical camera setting); (b) Blocks 1 + 2: L-shaped (vertical camera setting) and circular (camera facing the POI) paths combined.

Figure 9. Image overlap analysis in the processing report of Block 1 images. The black dots represent the camera positions along the flight path. Colors indicate the number of images on which a given data point is captured.

Figure 10. Image overlap analysis in the processing report of Block 1 + 2 images. The black dots represent the camera positions along the flight path. Colors indicate the number of images on which a given data point is captured.

Figure 11. Deformations detected on the point clouds created from the nadir images from the experiment. (a) DJI Mavic: strong bowling; (b) DJI Air2S: slight doming; (c) DJI Phantom: tilting and doming (red line: distance measured on point cloud; green line: vertical and horizontal components); (d) DJI Inspire: strong doming and tilting.

Figure 12. Results of processing the Block 1 + 2 images of the DJI Mavic. (a) Three-dimensional point cloud; (b) the same point cloud with camera positions.

Figure 14. Distances measured on the 2D orthomosaic and the 3D point cloud. A, B, C: points on the outer side of the road marking line. D: point at the top of the verge marker post. E: projection of Point D onto the horizontal plane of Point C.

Figure 15. Distances measured between the two arrow heads on the 2D orthomosaic and the 3D point cloud. (a) Left side; (b) right side; (c) whole distance.

Figure 16. Flowchart for road accident site recording with a UAV.
AGL1 is the flight altitude above ground level (AGL) [m], calculated from the horizontal resolution; AGL2 is the flight altitude above ground level (AGL) [m], calculated from the vertical resolution; f is the focal distance [mm]; GSD is the ground sample distance [m/px]; HR is the horizontal resolution of the sensor [px]; VR is the vertical resolution of the sensor [px]; SW is the sensor width [mm]; SH is the sensor height [mm].

Table 4. Main characteristics of the orthomosaic and the point cloud for Block 1 + 2 images, with high- and low-quality processing.

Table 5. Road width measured on the point cloud and its horizontal projection compared.

Table 6. Horizontal accuracy. Distance measured by a laser distance meter: 7.42 m.

Table 7. Effect of a 1% measurement error of the skid mark length on the calculated speed of the vehicle at the beginning of the skid mark.
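The sensitivity summarized in Table 7 follows from the standard skid-to-stop relation v = sqrt(2·μ·g·s): because speed grows with the square root of the skid length, a 1% length error translates into roughly a 0.5% speed error. A minimal sketch, with an illustrative friction coefficient of 0.7 (not a value taken from the study):

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def speed_from_skid(skid_len_m: float, mu: float = 0.7) -> float:
    """Speed [m/s] at the start of a skid mark, skid-to-stop model v = sqrt(2*mu*g*s)."""
    return math.sqrt(2.0 * mu * G * skid_len_m)

s = 20.0                           # measured skid mark length [m]
v = speed_from_skid(s)
v_err = speed_from_skid(s * 1.01)  # same mark measured 1% too long
print(f"{(v_err / v - 1.0) * 100:.2f}% speed error")  # ~0.50%
```

This square-root damping is why the 0-5 cm measurement errors reported above are comfortably below the error margins customary in accident reconstruction.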