Constructing a Virtual Environment for Multibody Simulation Software Using Photogrammetry

Real-time simulation models based on multibody system dynamics can replicate reality with high accuracy. As real-time models typically describe machines that interact with a complicated environment, it is important to have an accurate environment model in which the simulation model operates. Photogrammetry provides a set of tools that can be used to create a three-dimensional environment from planar images. The created environment and a multibody-based simulation model can then be combined in a Unity environment. This paper introduces a procedure to generate an accurate spatial working environment based on an existing real environment. As a numerical example, a detailed environment model is created from a university campus area.


Introduction
In general, real-time simulation models based on multibody system dynamics interact with a graphical environment to illustrate the feasibility of the model and its functionality. Figure 1 shows examples of real-time simulation models and their environments, in which the environments vary from simple, see Figure 1a, to highly detailed and complex, see Figure 1d. A simulation model interacts with its environment via tires, tracks, or other bodies that can come into contact with and move environmental objects; Figure 1c,d shows typical examples. Graphical software, such as Blender, and game engine software, such as Unity, Unreal Engine, and CryEngine, can be utilized to generate working environments [1][2][3][4][5]. Each software package has its own advantages and disadvantages. In Unity, the C# programming language is used, whereas Unreal Engine and CryEngine use the C++ language. Unity has the capability to compile games for different platforms [6], and offers preprogrammed three-dimensional models, cameras, and lights [7]. Unreal Engine is free-to-use software.
Graphical software makes it possible to generate environments for use in electronic games, virtual reality applications [8], and simulations. A realistic environment is an important aspect of simulations used, for example, to train operators of industrial vehicles [9][10][11]. Such training can help operators perform more efficiently, prevent accidents, and increase safety [12][13][14]. Graphical software is also widely used in the development of high-quality three-dimensional environments for educational purposes [15][16][17].
Photogrammetry is the estimation of the geometric and semantic properties of objects based on image analysis [18]. In other words, it is an approach used to generate three-dimensional models from detailed images of an object or area [19]. Digital photogrammetry collects data about an environment by calculating the locations of objects with respect to a predefined coordinate system [20]. It has been applied in many different fields and studies, such as material testing [21], recognition of the deformation of beam elements and structures in fire tests [22,23], measurement of vertical deflections in large constructions such as bridges [24], and measurement of soil surface roughness for a better understanding of erosion processes [25]. Researchers have also generated precise three-dimensional models of large assets such as museums and historical sites by employing photogrammetry and laser scanning approaches [26][27][28][29]. Many scholars have studied how to generate 3D models using point cloud data. Cielos et al. introduced a methodology for the creation of 3D models. The method employs a laser scanning concept with light detection. With the help of this method, point cloud data can be collected, even in harsh weather conditions, to construct 3D models. In addition, the collected point data can be used directly in photogrammetric software [30]. Laser scanning is widely used in building construction applications to collect point cloud data and utilize them for the creation of 3D outdoor and indoor models of buildings [31]. Laser scanning has also been used to extract accurate data from rock surfaces [32].
In recent years, the photogrammetry concept has also been applied on mobile phones to construct 3D models of indoor places at close range [33,34]. Many studies have been conducted to increase the accuracy of the photogrammetry method in city planning and building recognition applications. Wang et al. have introduced a procedure based on online matching to increase the accuracy of the 3D models produced with photogrammetry. The procedure has been applied to recognizing buildings in aerial images [35]. In addition, the photogrammetry method is currently being used for reconstructing historical and cultural buildings [36]. Scholars have utilized photogrammetry to create 3D models of historical sites, which were then brought into graphical software (Unity) for virtual reality use [37,38]. Topographic methods and linear variable differential transducers (LVDTs) are two examples of alternative approaches for the creation of a three-dimensional model of an object or area. However, they have major disadvantages, such as requiring long processing times, intensive manual work, and limitations in positioning points on a structure [39].
Laser scanning is also an alternative approach to photogrammetry. A notable advantage of the laser scanning approach is that it allows data on low-textured objects to be collected, a situation in which the photogrammetry matching approach often fails. Laser scanning uses a Global Positioning System (GPS) and an Inertial Navigation System (INS) for sensor orientation [40], and derives three-dimensional coordinates using Time of Flight (TOF) [41]. The laser scanner transmits pulses towards an object and estimates the distance between the scanner and the object. In addition, the laser scanner sends a laser line to an object and records the laser line reflection to obtain the geometry of the object [42].
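The time-of-flight principle mentioned above reduces to a single relation: the pulse travels to the object and back, so the one-way distance is half the round-trip distance at the speed of light. A minimal sketch (the 467 ns example value is illustrative, chosen to match the roughly 70 m upper range of a typical terrestrial scanner):

```python
# Time-of-flight ranging: a pulse travels to the object and back,
# so the one-way distance is half the round-trip distance.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the object from the pulse round-trip time."""
    return C * round_trip_time_s / 2.0

# A pulse returning after ~467 ns corresponds to roughly 70 m.
print(round(tof_distance(467e-9), 1))
```

This also makes clear why timing precision dominates ranging accuracy: a millimeter of range corresponds to only a few picoseconds of round-trip time.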
Laser scanning, mainly airborne laser scanning (ALS), and photogrammetry have some differences and similarities. For example, they both use GPS and digital sensors. On the other hand, ALS uses point sensors, whereas photogrammetry uses line sensors. ALS samples points from an area, whereas photogrammetry covers the whole area. The production time in ALS is typically longer than in photogrammetry [40]. Photogrammetry is an inexpensive, easy-to-set-up method. On the other hand, in photogrammetry, an experienced operator is needed to capture images with an appropriate map, mainly in harsh weather conditions. For some laser scanners, an extra digital camera is needed to capture the RGB colors of surfaces. The disadvantage of scanners that have their own digital camera is their low geometric resolution [43]. Laser scanning has a higher measurement accuracy than photogrammetry; however, laser scanning procedures mostly cost more than photogrammetry [44]. When the material of the object absorbs or diffuses the laser, the photogrammetry method usually still works properly [45]. A number of studies have been conducted to identify materials during 3D model construction. One material recognition method is to classify the images and laser scanning data based on spectral categorization. In this method, the image is classified and analyzed based on its wavelength. Furthermore, through image analysis, the effect of environmental phenomena on buildings' surfaces can also be identified [46]. In some cases, laser scanning usage might be limited to short distances [47].
The laser scanning method also has some disadvantages. To collect accurate point cloud data via a laser scanner, the precision of the operator plays a crucial role. Furthermore, converting the collected point cloud data into a 3D model of buildings requires intensive work [48]. In addition, the laser scanner has to be relocated several times during the process; therefore, the buildings' deformation should be considered in the collected point cloud data [49]. A number of scholars have conducted studies on collecting point cloud data using various procedures. Wang et al. have carried out a comprehensive study on different applications, such as photogrammetry, LiDAR, and laser scanning, to collect 3D point clouds in the construction industry. Point cloud data can be used for different purposes and areas, such as civil engineering, the construction industry, and tracking progress in building construction [50].
The objective of this paper is to generate a working environment for real-time multibody-based simulation models. The environment is created from an existing area in the real world. The campus area of Lappeenranta-Lahti University of Technology (LUT University), Finland, is selected as the case study. In this study, Unity software was used to develop the campus environment.

Methodology
This section introduces a procedure to create a three-dimensional environment using photogrammetry and graphical software.
From a graphical point of view, a multibody simulation model consists of the graphics of bodies and the working environment that the machine interacts with. Multibody simulation software usually offers the possibility to create a simple environment. However, as will be shown in this paper, a multibody model can be represented in graphical software. The use of graphical software allows a detailed description of a working environment.

Photogrammetry Approach
Photogrammetry uses contact-free sensors, which makes it possible to create three-dimensional models of objects that are expensive, fragile, toxic, or visible but inaccessible. It also makes it possible to document changes to an object or area, such as a building under construction.
Photogrammetry suffers from a number of shortcomings, such as sensitivity to lighting conditions. The light source can be optimized for small objects, but in the case of outdoor objects and environments, optimization of lighting conditions remains a challenge.
To create a three-dimensional model with high precision, a large number of images is needed. In general, to create an initial three-dimensional model of a single object, a minimum of two planar images with a known offset is necessary [7]. A functional and affordable method for taking thousands of images of a wide area (including tall structures such as buildings) is the use of Unmanned Aerial Vehicles (UAVs) [51,52]. Assisted by UAVs, photogrammetry can be extended to cover areas on the scale of square kilometers.
Figure 2 presents a procedure for using photogrammetry to create a three-dimensional model of an area or object. As Figure 2 illustrates, the obtained images should be calibrated to calculate the distance between the camera and the object [53]. In the figure, exterior orientation means the calculation of the exterior coordinates, which are the location of the projection center and the rotation angles of the object with respect to the considered global coordinate system. Surface modeling can be applied to the object to visualize the texture. In the final step, postprocessing is done to create a three-dimensional model of the object.

Calibration
To put it simply, photogrammetry creates three-dimensional models out of planar images. To this end, postprocessing software compares two images taken of an object or area and recognizes identical points, see Figure 3. "Overlapping" between images helps to simplify the identical point recognition process and increases the quality of the object's texture. In addition, "shape matching" can be used to match corresponding points in two overlapping images. The shape matching technique comes in several varieties. One common technique compares the shapes of objects in two images without considering color, which simplifies the process and reduces computational time. By considering the corresponding points and the orientation of the cameras, the locations of points in the three-dimensional environment can be estimated.
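Once corresponding points and camera orientations are known, the last step above is triangulation. A standard way to do it is the linear (DLT) method: each image observation contributes two linear constraints on the homogeneous 3D point, and the point is recovered as the null vector of the stacked system. A sketch with two toy cameras (identity intrinsics and a unit baseline, chosen only for illustration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point whose
    projections through camera matrices P1 and P2 are x1 and x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]               # null vector of A
    return X[:3] / X[3]      # dehomogenize

# Two toy cameras offset along x (a known baseline, as in an
# overlapping image pair).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # recovers the original point
```

With noisy real matches, the same least-squares structure applies, but the recovered points are then refined jointly with the camera orientations (bundle adjustment) by the postprocessing software.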
Even though the photogrammetry approach is extensively used to generate realistic virtual environments, it still faces some barriers and limitations. In most cases, the geometry and exact location of the object under investigation must be estimated. Small, shiny, transparent, and dark-colored objects pose challenges for both photogrammetry and laser scanning and cannot be accurately captured [27,54,55]. In addition, materials that absorb or diffuse the laser beams are a barrier to accurately collecting point cloud data during the laser scanning procedure.

Example Case
In this study, a photogrammetry approach is used to create an environment model of the campus area of Lappeenranta-Lahti University of Technology (LUT University). The university is located in southern Finland, see Figure 4. The area covered by photogrammetry in this study is approximately 40,000 square meters.

Equipment
In this study, a drone (as a UAV) and a laser scanner were used to collect three-dimensional data for photogrammetry. The drone used is a Phantom 4 RTK from DJI Technology Inc., see Figure 5. The Phantom 4 RTK has location, communication, and propulsion systems, as well as a flight controller and a battery. The maximum flight speed is approximately 50 km per hour and the drone weighs 1391 g. The battery life is sufficient for a 30 min flight. The horizontal accuracy of the Phantom 4 RTK is three centimeters, and it stores three-dimensional observational data, which are used with the postprocessing software. The three-centimeter horizontal accuracy is a relative accuracy, measured with respect to the ground reference check point from which the drone started to fly. Note that the reference spot was inside the campus area. The drone carries a 20-megapixel camera with a CMOS sensor. A three-axis gimbal is attached to the drone to stabilize the camera and enable high-resolution, clear images. The drone is also equipped with obstacle sensors to prevent crashes during flight. The remote flight controller uses the GS RTK app to generate a flight plan. The controller has a built-in 5.5 inch (13.97 cm) screen, which shows the flight map of the drone, see Figure 6.

A laser scanner, a FARO S70, was used to obtain a high-quality and accurate three-dimensional environment, see Figure 7. The laser scanner collects points by the millions (a point cloud) to convert planar images into a three-dimensional model. The laser scanner also captures the textures of the surfaces of buildings and other objects. It can be used both indoors and outdoors and is suitable for distances from 0.6 m to 70 m. It can determine point locations with an accuracy of ±1 mm and can provide one million points per second. For the postprocessing step, the FARO S70 laser scanner uses the FARO SCENE software or the Autodesk ReCap (Reality Capture) software.

Procedure for Three Dimensional Environment
Figure 8 shows the process steps of a photogrammetry approach using a drone and a laser scanner to generate a three-dimensional environment for use in real-time simulation. As Figure 8 illustrates, the photogrammetry starts with the drone taking thousands of images. The drone flies at a specific height and takes images with a specified overlap between each two consecutive images. Simultaneously, a laser scanner scans the environment and generates a point cloud of structures, the ground, trees, and other objects. The images and point cloud are exported to postprocessing software to generate an initial three-dimensional environment. ReCap software was utilized as the postprocessing software in this study. The generated initial environment is exported to graphical software to create a detailed environment that can be employed in a real-time multibody simulation. In the procedure used, the drone captures the images and the laser scanner collects the point clouds. The laser scanner has an altimeter, an inclinometer, a compass, and a color recognition feature.
Prior to starting photogrammetry and the laser scanning processes, a ground reference point for the laser scanner and the drone are defined.Based on the ground reference point, the postprocessing software identifies the corresponding points on surfaces in both methods.Prior to the operation, the laser scanner locations were predefined during its operation.This pre-definition helps with point matching and line matching between images and point cloud data.
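Registering the scanner's point cloud to the image-derived model against shared reference points is, at its core, a rigid alignment problem: find the rotation and translation that best map one set of corresponding points onto the other. The standard least-squares solution is the Kabsch (orthogonal Procrustes) algorithm, sketched below on synthetic data; this is an illustration of the principle, not the algorithm used internally by ReCap:

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch/Procrustes: least-squares rotation R and translation t
    such that dst ~ src @ R.T + t for corresponding point sets."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)   # centroids
    H = (src - cs).T @ (dst - cd)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard vs. reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known 30-degree rotation about z plus a translation.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([2.0, -1.0, 0.5])
pts = np.random.default_rng(0).normal(size=(5, 3))
R, t = rigid_align(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With noise-free correspondences the transform is recovered exactly; with real check points the same formula gives the best-fit alignment in the least-squares sense.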
The point cloud data collected by the laser scanner are exported to the photogrammetry software, ReCap, where the alignment between points is accomplished. Afterwards, the images are exported to the software, where the alignment between images is done. In the final stage, the alignment between the point cloud data and the images is accomplished based on the check points. As mentioned previously, the laser scanning process is performed to capture the precise textures of the walls.

At this stage, multibody simulation software models can interact with the generated environment on the graphical software platform. Accordingly, there is no need to export the generated environment from the graphical software to the real-time simulation software. Instead, a model and its environment can be illustrated on the graphical software platform and controlled by the simulation software. Figure 9 shows an example of a model and an environment in the graphical software that can be controlled by the real-time simulation software. The graphics of the forklift model in Figure 9 were created in Blender software, and the environment was generated in Unity software.

Figure 6 shows the flight map of the drone for the targeted area. The drone started flying from a specified spot and, after arriving at a specific height, it flew horizontally with a velocity of ~20 km per hour while taking images. During its flight, the drone took approximately 1900 images with 80 percent overlap between consecutive images. The percentage of overlap can be defined in the controller before the flight. The maximum height at which the drone flew was 50 m. Although in some photogrammetry procedures drones capture both nadir and oblique images, in this case study the drone captured only nadir images. The constructed 3D environment model is based on the nadir images and the point cloud, which was collected by the laser scanner.
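The overlap percentage set before the flight directly determines the distance between consecutive exposures: the ground footprint of one nadir image follows from the pinhole model, and the exposure spacing is the footprint scaled by one minus the overlap. A small sketch; the sensor width and focal length below are illustrative stand-in values, not the measured calibration of the camera used in this study:

```python
def photo_spacing(height_m, sensor_mm, focal_mm, overlap):
    """Distance between consecutive nadir exposures for a given
    forward overlap. Ground footprint along one axis follows from
    the pinhole model: footprint = height * sensor / focal."""
    footprint = height_m * sensor_mm / focal_mm
    return footprint * (1.0 - overlap)

# Illustrative values: 50 m altitude (as in this study), a 13.2 mm
# sensor width with an 8.8 mm lens (assumed), and 80% overlap.
footprint = 50.0 * 13.2 / 8.8   # roughly 75 m of ground coverage
spacing = photo_spacing(50.0, 13.2, 8.8, 0.80)  # roughly 15 m
print(footprint, spacing)
```

Dividing the flight-line length by this spacing (and the strip width by the side-overlap spacing) explains why a 40,000-square-meter area at 80% overlap yields images by the thousand.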
To collect the point cloud data, the operators relocated the laser scanner to certain predefined locations. The whole process took nearly three hours. The obtained images and the point cloud data were transferred to the ReCap software to build the three-dimensional environment. The ReCap software created a three-dimensional model, placed markers as georeferences, and took measurements pertaining to height. Finally, the Unity software prepared the environment for use in the simulation software.

Discussion
As already stated, nearly 1900 images were taken of the campus area. In this section, a number of views of the campus area have been chosen for comparison to illustrate the similarities and differences between the environment in the real world and the corresponding environment in Unity software, see Figures 10-13. Figures 10a and 11a depict the campus area of LUT University in the real world, and Figures 10b and 11b illustrate the area as created using photogrammetry. As the figures show, the created environment reflects the real-world environment appropriately. Cloudy weather facilitated proper matching of the points collected by the laser scanner with the corresponding points in the images. As Figures 10b and 11b demonstrate, the physical dimensions of the buildings, their locations, and the distances between the structures have been correctly generated. The paths and streets are created without notable failures. Figures 12a,b and 13a,b show the main entrance of the university in the real world and in the graphical software, respectively. To compare the points of view, the distances between points were measured in both the real world and the graphical software. Table 1 shows the values for the point distances. As the table shows, the 3D graphical environment constructed by the photogrammetry procedure reflects the real-world environment appropriately. Figures 12a and 13a show the main entrance area of the LUT University main building. Figures 12b and 13b are the corresponding scenes generated using the photogrammetry approach. As the figures show, the postprocessing phase of the photogrammetry is accomplished with acceptable accuracy. Nearly all points (generated by the laser scanner) match properly with the corresponding points in the images taken by the drone. The buildings are created appropriately and there is no major distortion in the structures. Comparisons of the dimensions, colors, and textures in Figure 12a,b show that Figures 12b and 13b provide a highly accurate reflection of the real world. As pointed out, photogrammetry was used to construct the 3D environment model out of the planar images.
In this study, a laser scanner has been utilized to increase the visibility and accuracy of the building surfaces, such as walls and windows. In addition, the environment will be used in physics-based real-time simulation models. Therefore, both methods have been utilized to keep the environment as realistic as possible. As mentioned earlier, the graphical software can be connected to the simulation software to run simulation models. The environment of the LUT University campus area can be used for real-time simulation models. Figure 14 shows a real-time simulation model of an excavator that has been imported into the university campus environment. The model is of a real excavator with an operating weight of 22 tons. The simulated excavator model consists of nine bodies. Its hydraulic circuit system is modeled using lumped fluid theory [56]. The definitions of the bodies and their constraints, as well as the interactions between them, are accomplished in real-time simulation software that employs the semi-recursive multibody method [57]. The resulting equations of motion can be solved using Runge-Kutta time integration [58]. The excavator graphical model was created using the Blender software. The graphics for the excavator consist of the graphics to illustrate the model and the collision graphics. The collision graphics define the contacts between the bodies as well as with the ground. By collecting the data that real-time simulation models provide, designers can analyze dynamic behavior and consequently improve the performance of the models. In this study, a 3D environment based on the real world has been constructed such that it interacts with real-time simulation models. In practice, the model interacts with the environment; for example, an excavator driver can drive and excavate soil and have the experience of both working with a real excavator and working in an environment based on the real world. Furthermore, data collected from real-time simulation models operating in such an environment are more reliable than data from an imaginary environment. From an educational point of view, the use of digital twin models in an environment based on the real world helps operators learn more quickly and more precisely. In addition, with the help of the virtual reality concept, the education process can be made more functional and closer to the real world.
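The Runge-Kutta integration mentioned above advances the equations of motion one time step at a time from their first-order form. The sketch below shows the classical fourth-order scheme on a one-degree-of-freedom stand-in (an undamped oscillator), purely to illustrate the stepping scheme; it is not the authors' multibody solver:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# One-DOF stand-in for a multibody state: undamped oscillator
# x'' = -x, written in first-order form with state (x, v).
def oscillator(t, y):
    x, v = y
    return [v, -x]

y, t, h = [1.0, 0.0], 0.0, 0.01
for _ in range(628):        # integrate over roughly one period (2*pi)
    y = rk4_step(oscillator, t, y, h)
    t += h
print(y)  # close to the initial state [1.0, 0.0]
```

In a real-time setting the fixed step size h is chosen so that one integration step completes within one rendering/control interval, which is what lets the simulated excavator respond to operator input without lag.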

Conclusions
In this paper, a procedure for generating a three-dimensional environment based on a photogrammetry approach was introduced. To create the environment, a drone and a laser scanner were used to take images and collect point cloud data, respectively. Using a photogrammetry-based approach, it is possible to generate virtual environments that correspond to areas existing in the real world. Furthermore, the graphical software can be connected to the simulation software, which makes it possible to operate physics-based simulation models in their real environments.
The introduced procedure was applied to create an environment model of the campus area of LUT University in Finland. A multibody simulation model of an excavator was imported into the campus environment. The real-time simulation model runs in a dynamic environment, which means the generated three-dimensional environment can be updated based on renovations in the corresponding real environment.

Figure 2. Overview of the photogrammetry procedure.

Figure 3. Estimation of the three-dimensional specifications of a vehicle by comparing two planar images.

Figure 4. Lappeenranta-Lahti University campus area selected for the photogrammetry.

Figure 6. Flight map of the chosen area for photogrammetry.

Figure 8. Process steps to generate an environment for real-time simulation software using a photogrammetry approach and Unity software.

Figure 9. A forklift model in its environment as an example of a combination of real-time simulation software and Unity software.

Figure 10. Comparison between the LUT University campus area in the real world and in Unity software using photogrammetry: (a) Real world and (b) Unity environment.


Figure 11. A comparison between the LUT campus buildings in the real world and in the graphical software using the photogrammetry approach: (a) Real world and (b) Unity environment.

Figure 12. LUT University main entrance in the real world and in the graphical software using the photogrammetry-based approach: (a) Real world and (b) Unity environment.

Figure 13. Comparison between the textures in the real world and those created in the graphical software using the photogrammetry-based approach: (a) Real world and (b) Unity environment.

Figure 14. Excavator model in the LUT campus area.

Table 1. The distance values shown in Figures 12a,b and 13a,b.