Article

Implementation of Digital Geotwin-Based Mobile Crowdsensing to Support Monitoring System in Smart City

by Suhono H. Supangkat 1,2,*, Rohullah Ragajaya 1,2 and Agustinus Bambang Setyadji 3

1 Smart City and Community Innovation Center, Bandung Institute of Technology, Bandung 40132, Indonesia
2 School of Electrical and Informatics Engineering, Bandung Institute of Technology, Bandung 40132, Indonesia
3 Faculty of Earth Science and Technology, Bandung Institute of Technology, Bandung 40132, Indonesia
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(5), 3942; https://doi.org/10.3390/su15053942
Submission received: 22 December 2022 / Revised: 10 January 2023 / Accepted: 10 January 2023 / Published: 21 February 2023
(This article belongs to the Special Issue Remote Sensing, Sustainable Land Use and Smart City)

Abstract

According to United Nations (UN) data released in 2018, the world's urban population is growing every year. This growth drives increasingly dynamic infrastructure development in cities, which affects social, economic, and environmental conditions and, at the same time, raises the potential for new problems in urban areas. Overcoming these potential problems requires a smart, effective, and efficient urban monitoring system. One solution is the Smart City concept, which utilizes sensor technology, IoT, and Cloud Computing to monitor and obtain data on urban problems in real time. However, installing sensors and IoT devices throughout a city takes a long time and is relatively expensive. Therefore, this study proposes the Mobile Crowdsensing (MCS) method to retrieve and collect data on urban problems from citizen reports submitted through mobile devices. Collecting field data with MCS is relatively inexpensive and fast because all data and information are sent by citizens or the community. The data and information collected from the community are then integrated and visualized using a Digital Geotwin-based platform. Compared with other platforms, most of which are still based on text and 2D GIS, the advantage of Digital Geotwin is its ability to represent and simulate real urban conditions from the physical world in a 3D virtual world. Furthermore, the Digital Geotwin-based platform is expected to improve the quality of planning and policy making for stakeholders. This study aims to implement the MCS method for retrieving and collecting data on problem objects and events in the field and integrating them into the Digital Geotwin-based platform. The data collected through MCS are coordinates and images of problem objects. The contributions of this study are as follows: first, increasing the accuracy of determining the coordinates of a distant object by adding a parameter in the form of the object's approximate coordinates; second, visualizing the problem object in 3D using image data obtained through the MCS method and integrating it into the Digital Geotwin-based platform. The results show a fairly good increase in accuracy in determining the coordinates of distant objects. The evaluation of the 3D visualization of problem objects also shows improved public understanding of, and satisfaction with, the information presented.

1. Introduction

The world's urban population is increasing every year. According to data released by the UN (2018), the urban population will reach 4.7 billion people by 2025. This makes urban infrastructure development more dynamic but, on the other hand, increases the potential for new problems in the city [1,2,3,4,5,6,7,8,9,10]. Therefore, a smart and effective city monitoring system is needed to overcome them. Smart City is a smart solution for monitoring various dynamics and problems in urban areas by utilizing IoT, sensors, and Cloud Computing [10,11,12,13]. Although sensors and IoT are quite effective for collecting data and monitoring a city, installing and deploying sensor equipment throughout the city requires a lot of time and is relatively expensive. As a substitute for sensors and IoT, a new sensing paradigm has emerged: Mobile Crowdsensing (MCS).
There are many terms and definitions of MCS [14,15,16,17,18,19,20]; the definition that best fits the context of this paper is that of Guo et al. [17]: MCS is “a new sensing paradigm that empowers ordinary citizens to contribute data sensed or generated from their mobile devices, aggregates, and fuses the data in the cloud for crowd intelligence extraction and people-centric service delivery”. Data and information collected from the field are combined in the cloud for data collection, storage, and processing. This makes MCS increasingly popular as a replacement for static, traditional sensors. MCS can combine the advantages of traditional information technology with mobile communications to provide cost-effective and high-quality services in various fields [21]. The most prominent characteristics of MCS are the use of mobile devices (e.g., smartphones) and community participation. In addition, MCS can increase the community's active role in urban planning and management, in line with the UN SDGs [22].
Currently, smartphones are equipped with various types of sensors such as cameras, microphones, GPS receivers, accelerometers, and temperature sensors. This gives MCS added value compared with traditional sensors (e.g., a Wireless Sensor Network/WSN), which usually carry only one type of sensor. The comparison between MCS and WSN and the MCS taxonomy are described in detail in [23].
MCS has been widely applied in various fields such as environmental monitoring; transportation; urban dynamic sensing; location services; health; and so on [18,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45]. However, this paper will focus on an in-depth study of the implementation of MCS for a Digital Geotwin-based city monitoring system [46] using data in the form of the coordinate location of problem objects in the field, and the construction of these objects in 3D form.
Digital Twin is one of the technologies that play an important role in the development of the Smart City concept [47,48,49,50]. Digital Twins can make city management and operations smarter and more sustainable [51]. According to [52], a Digital Twin is defined as a virtual representation of a physical object or process capable of gathering information from the real world to represent, validate, and simulate the behavior of the current and future physical twin. A Digital Twin consists of three main parts: its physical form, its digital form, and the connection between the two [48]. In simple terms, a Digital Twin can be interpreted as a “digital twin” of a physical entity that exists in the real world [53].
Based on the level of integration between the Digital Twin and the real world, Digital Twins are classified into three classes: Digital Model, Digital Shadow, and Digital Twin [54]. A Digital Model is a digital version of a pre-existing or planned physical object with no automatic data exchange between the physical object and the digital model, so changes to the physical object do not affect the digital model. A Digital Shadow is a digital representation of an object with a one-way flow of data from the physical object to the digital object: changes in the physical object can cause changes to the digital object, but not vice versa. A system can be called a Digital Twin only if data are exchanged automatically and are fully integrated in both directions, so that changes in the physical object automatically cause changes in the digital object and vice versa. In this research study, the Digital Twin system is limited to the Digital Shadow integration level. In addition, this study emphasizes technologies that utilize spatial data and focuses on geodetic aspects, so the term “Digital Twin” in the developed platform is extended with a “geo” element to become “Digital Geotwin” [46].
To build a Digital Twin of a city, at least a 3D model and the location coordinates of the city's physical objects are needed [55,56,57]. This is one of the Digital Twin's advantages: it can provide 3D visualization of data and information collected from the field, giving a more realistic experience [51]. Visualizing city models in 3D can improve analysis and interaction and can model human activities more intuitively and efficiently [57,58,59]. In [60], a Digital Twin is used to visualize data collected in the field. Likewise, the study conducted by [31] uses a Digital Twin to better understand spatiotemporal information in the field and thus support sound decisions.
Currently, many research studies examine platforms that involve the community and community reports in monitoring and evaluating urban management [32,61,62]. Unfortunately, not all of them provide a 3D visual display integrated into a Digital Geotwin platform. In addition, the location coordinates sent are usually the coordinates of the mobile device, not the coordinates of the problem object, which makes it difficult for officers to determine the location of the problem object reported by the community. Therefore, the objectives of this study are as follows: (1) utilizing data and community reports from MCS to determine the coordinates of problem objects more accurately and to build 3D models of those objects; (2) integrating the problem objects into a Digital Geotwin-based platform.

2. Materials and Methods

2.1. Research Areas

This research was conducted in Bandung city, West Java, Indonesia (Figure 1). The city was chosen as the research location because it adequately represents the characteristics of a big city. Bandung is the capital of West Java province, covers an area of 167.31 km², and had a population of 2.5 million in 2020 [63]. From this profile, it can be assumed that Bandung has a fairly high potential for changing urban dynamics. In addition, Bandung has implemented the Smart City concept, and a maturity level assessment has been carried out [64]. In terms of data availability, Bandung is one of the cities with sufficient 3D geospatial data to be used and developed into a Digital Geotwin.

2.2. Data

The data needed in this research study are divided into two categories. The first category is data from mobile devices owned by citizens. The data collected from citizens through MCS must include at least the following: (1) photo or image data of the problem object of interest in the field; (2) the citizen's coordinates generated by the mobile device's GPS sensor; (3) the approximate coordinates of the target object, taken from Google Maps; (4) the magnetic-north compass bearing indicating the direction toward the problem object; (5) textual information explaining the event or problem object. The text-based information serves to clarify the reported incident and describe the contents of the photos taken.
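For illustration, the sketch below shows how a single citizen report covering these five required fields could be represented as a data record; the field names and types are assumptions for this example, not the actual schema used in the platform.

```python
# A minimal sketch of one MCS citizen report; field names/types are
# illustrative assumptions, not the platform's actual schema.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CitizenReport:
    photo_paths: List[str]             # (1) photos of the problem object
    device_lat: float                  # (2) GPS latitude of the citizen's mobile device
    device_lon: float                  #     GPS longitude of the citizen's mobile device
    approx_lat: float                  # (3) approximate object latitude (from Google Maps)
    approx_lon: float                  #     approximate object longitude (from Google Maps)
    compass_deg: float                 # (4) magnetic-north compass bearing toward the object
    description: Optional[str] = None  # (5) free-text description of the event
```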
The second category is geospatial data. Geospatial data play a very important role in building 3D city models [65]. Conceptually, geospatial data take the form of spatial objects and can also be represented by geometric symbols [66]. The spatial data used consist of spatial objects such as buildings, building heights, and road/transport networks in vector form. In addition, spatial data in raster form (e.g., satellite imagery) are also used in this study. A detailed description of the data types and their uses can be seen in Table 1.

2.2.1. Mobile Device Data

Mobile device data are data generated from sensors embedded in mobile devices. The mobile device data used in this research study are photos, the user's coordinates (i.e., the reporting citizen's coordinates), the compass direction, and the approximate coordinates of the problem object to be reported. The approximate coordinates of the problem object are obtained from Google Maps.
Photo data obtained from MCS are used to build a 3D model of the object. From the collected photos, a 3D model is reconstructed using a cloud-based SfM (Structure from Motion) algorithm [67,68,69,70,71,72]. Furthermore, the GPS coordinates from the user's mobile device, the approximate coordinates, and the compass directions are used to calculate the coordinates of the problem object. A more detailed explanation is given in Section 2.3.

2.2.2. Geospatial Data

The second category of data needed to implement this research is geospatial data. These data are obtained from the Geospatial Information Agency in Indonesia, whose function is to carry out government tasks in the field of geospatial information. Geospatial data are produced by survey and mapping technologies (e.g., FU, Lidar, and laser scanning) [73]. Geospatial data such as buildings/structures, road networks, terrain, and utility networks are the main components used to build a Digital Geotwin [48]. These data are the static base objects of a city to be represented as a 3D city model in a Digital Geotwin [74].

2.3. Methodology

Figure 2 illustrates the proposed MCS implementation framework for a Digital Geotwin-based monitoring system. The framework aims to monitor problems that occur in cities by utilizing data sent or reported by citizens using mobile devices. In this research study, problem objects are defined as hazardous things or anomalies that occur in urban areas, for example garbage accumulation, damaged/potholed roads, tilted electric poles, illegal parking, and traffic jams. Apart from hazardous things or anomalies, this system can also be used to update city geospatial databases [75]: if there is a new object or building, an indicative update can be carried out on the database.
After the data are collected, they are stored and processed in the cloud. Cloud systems are used because they can increase IT functionality and capacity without adding software, personnel, investment in additional training, or new infrastructure [76]. Data processing is generally divided into two parts. The first is determining the location coordinates of the problem object; the coordinates determined are those of the problem object, not those of the user or citizen. The second is building a 3D model of the object. The 3D models are reconstructed using a collaborative 3D modeling system [67], which requires more than one photo; ideally, the collected photos cover all angles and parts of the problem object in the field. The collaborative development system was chosen because it is assumed that photos of a problem object will be sent by more than one person; sometimes one problem object is reported by several people.
The coordinate location and 3D model of the problem object that has been obtained are then integrated and visualized into the Digital Geotwin platform. In this research study, Digital Twin level 2 was used [52]. Digital Geotwin is built from geospatial data, which are building objects, building height information, road or transportation networks, and terrain. Then, Digital Geotwin is used as a 3D map base for the Smart City platform or dashboard [77], so that monitoring can be carried out to support a better planning and decision-making system for stakeholders.

2.3.1. Digital Geotwin Development

Digital Twin has a maturity level from 0 to 5 (Table 2). In this research study, the Digital Twin that was built is at level 2 (two), in which there is a relationship between the 3D city model and static data in the field but not in real-time [52]. The 3D City Model is the main basis for building a city’s Digital Twin.
The development of 3D city models needs to consider the LOD (Level of Detail). In 2012, the Open Geospatial Consortium defined five levels, LOD0 to LOD4 (Figure 3). The LOD concept covers several thematic classes but is mainly implemented for building spatial data [78]. LOD0 is the footprint and polygon of the building object in two dimensions. LOD1 is the result of extruding the LOD0 footprint into a simple prismatic block that has height (3D). LOD2 is a building model with a simplified roof. LOD3 is a building model with more detailed architecture, including windows and doors, and is more complex than the previous LODs. LOD4 is a more complete version of LOD3 that includes room information and interior features.
The LODs used in this study to develop the Digital Geotwin platform are LOD1 and LOD2. As described in Section 2.2.2, the geospatial data used consist of building objects, building heights, and the transportation road network, all in vector form. The Digital Geotwin was developed using Blender tools with the CityGML standard [56,79,80]. A more detailed explanation of building the Digital Geotwin is given in Section 3.1.

2.3.2. Determination of the Coordinate Location of the Problem Object from MCS

Most current MCS-based coordinate determinations extract the user's coordinates from the GNSS sensor embedded in the mobile device [15,35,81], so the coordinates obtained are those of the user or citizen. In practice, however, citizens often collect data at locations far from the object of interest; an illustration is shown in Figure 4. In many cases, for safety reasons and to avoid danger, citizens capture data on problem objects from a considerable distance, for example electric poles that are prone to collapse, broken traffic lights, potholes in the middle of the road, or flooding. Therefore, a process and method are needed to determine the coordinates of problem objects whose data are captured from a distance while still achieving good accuracy. For clarity and consistency, a problem object whose coordinate data are taken from a distance is called a “distant object”.
There are several research studies related to determining the coordinates of distant objects. The research in [82] proposes determining the coordinates of distant objects using a method called the OPS (Object Positioning System), built from a combination of triangulation and trilateration methods complemented by computer vision techniques and multi-modal sensor information. Another study [25] used participatory sensing based on mobile phones.
The process of determining the coordinates of distant objects in this research study consists of several parts: converting the geographic coordinate system into the UTM coordinate system, determining the line equations obtained from the users' mobile coordinates and compass directions, and adding a parameter in the form of the Google Maps-based approximate coordinates. From the intersections of these line equations, a set of coordinate points is generated. The mean-shift algorithm [83,84,85,86,87] is then applied to the distribution of these intersection coordinates to obtain the final coordinates of the distant object. The workflow for determining the coordinates of distant objects is shown in Figure 5, and the detailed process and mathematical model of the geospatial localization are explained in Appendix A.
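As an illustration of this workflow, the sketch below converts the reported device coordinates to UTM, builds the observation lines defined in Appendix A, intersects each pair of lines, appends the Google Maps approximate coordinate, and applies mean-shift clustering. The library choices (pyproj, scikit-learn) and the UTM zone (48S, EPSG:32748, which covers Bandung) are assumptions for this example, not the authors' implementation.

```python
# A minimal sketch of the distant-object localization workflow (Section 2.3.2,
# Appendix A); assumed libraries, not the authors' code.
import math
from itertools import combinations

import numpy as np
from pyproj import Transformer
from sklearn.cluster import MeanShift

# WGS84 -> UTM zone 48S (EPSG:32748), covering Bandung; the zone is an assumption.
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32748", always_xy=True)

def observation_line(lat, lon, compass_deg):
    """One citizen report -> (m, a, b) for the line y = m(x - a) + b, Eqs. (A1)-(A2)."""
    a, b = to_utm.transform(lon, lat)          # Easting (a), Northing (b) of the device
    m = math.tan(math.radians(compass_deg))    # gradient from the compass angle
    return m, a, b

def intersect(line1, line2):
    """Intersection of two observation lines in UTM coordinates."""
    m1, a1, b1 = line1
    m2, a2, b2 = line2
    x = (m1 * a1 - m2 * a2 + b2 - b1) / (m1 - m2)
    y = m1 * (x - a1) + b1
    return x, y

def locate_distant_object(reports, approx_xy, bandwidth=30.0):
    """reports: list of (lat, lon, compass_deg); approx_xy: Google Maps coordinate in UTM."""
    lines = [observation_line(*r) for r in reports]
    points = [intersect(l1, l2) for l1, l2 in combinations(lines, 2)]
    points.append(approx_xy)                   # additional parameter proposed in this study
    ms = MeanShift(bandwidth=bandwidth).fit(np.array(points))
    labels, counts = np.unique(ms.labels_, return_counts=True)
    return ms.cluster_centers_[labels[np.argmax(counts)]]  # final estimated coordinates
```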
The method of determining the coordinates of distant objects from [25] is adopted in this research study, with an additional parameter in the form of approximate coordinates from Google Maps to speed up the calculation and improve accuracy. To this day, Google Maps has been widely used in various fields, especially for geospatial data such as locations/coordinates, satellite imagery, and maps [88,89,90,91,92,93].

2.3.3. Reconstruction of 3D Model of Problem Object from Image Data

Over the last decade, camera technology on mobile phones has improved significantly in terms of image resolution, sharpness, and color accuracy. This has increased the use of photo/image data generated from mobile phone camera sensors to construct 3D models using photogrammetric techniques [69,72,94,95], and their use is currently expanding from 3D models to mixed reality [96]. Geometric evaluations of positional accuracy and 3D model development using mobile phone cameras yield good and acceptable values [97,98]. Many research studies have utilized 3D models built from mobile phone photographs in various fields. One example is the research in [69] on standards for the acquisition process to accurately visualize and display museum or cultural heritage objects in 3D using a camera. Reconstruction of 3D models using mobile phones is also used in the health sector to create 3D models of young children's heads so that skull deformities can be examined in more detail [99]. Mobile phone cameras are widely used for building 3D models and are the best choice at a relatively low cost. In the context of this research, 3D models of urban problem objects are constructed so that they can be integrated and visualized in the Digital Geotwin platform.
The 3D model reconstruction process in this study uses a collaborative system based on photos/images from MCS. Many studies have investigated creating 3D models from collections of crowdsourced photos [43,67,68,100,101]. In this study, the 3D model development process adopts a Cloud Computing-based workflow [101]. The process flow of the 3D model development adopted in this study can be seen in Figure 6.
The first step is to collect photos that refer to the same problem object. For example, if there are reports of a damaged and potholed road at a location, the photos are collected from the various citizens' reports. It is assumed that there is more than one report about the damaged and potholed road, so photos are obtained from various shooting angles. From these photos, the camera poses are reconstructed using the SfM algorithm to produce a sparse 3D point cloud. Next, a dense point cloud is generated using the Multi-View Stereo (MVS) algorithm. The dense point cloud has a rough appearance and texture, so editing is needed to make the 3D model look smoother. After that, mesh generation, also called polygon surface formation, is carried out so that the surface of the 3D model is close to the original. The final step is editing and optimizing the surface texture of the 3D model. The resulting 3D model has dimensions and sizes that match the scale of the original shape/object.
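As a small illustration of the mesh-generation step only (dense point cloud to polygon surface), the sketch below uses Open3D's Poisson surface reconstruction. The library, parameter values, and file names are assumptions for this example; the study itself relies on a cloud-based collaborative SfM/MVS pipeline and PIX4D tools rather than this code.

```python
# Mesh generation from an MVS dense point cloud; a sketch assuming Open3D,
# not the pipeline actually used in this study.
import open3d as o3d

pcd = o3d.io.read_point_cloud("dense_cloud.ply")   # hypothetical MVS output file
pcd.estimate_normals()                             # normals are required by Poisson reconstruction
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
o3d.io.write_triangle_mesh("problem_object_mesh.ply", mesh)  # ready for texture editing
```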

3. Results

3.1. Digital Geotwin Development

Survey and mapping technology is needed to build a Digital Geotwin of a city. It is a fundamental technology used to generate geospatial data, both in 2D and 3D, of objects in urban areas. These urban objects can be buildings/structures, road networks, topography, and urban spatial structures [48]. The objects are then integrated into the Smart City platform so that they represent the physical world of the city in the digital world with dimensions and sizes that match the scale. Representing the physical world in the virtual world as a Digital Geotwin makes it easier for stakeholders to carry out inferential analysis, from which new insights emerge; with these insights, stakeholders can make decisions quickly and accurately [102].
Digital Geotwin development on the Smart City platform utilizes the geospatial data described in Section 2.2.2. The geospatial data used are vector data in (.shp) format, and Blender software was used for data processing. The objects in the Digital Geotwin are built in LOD1, while some important buildings are built in the more detailed LOD2. The resulting Digital Geotwin can be seen in Figure 7.

3.2. Results of Determining Distant Object Coordinates

After collecting data in the form of user/citizen coordinates, compass directions, and approximate coordinates of distant objects through MCS, a calculation process is carried out to estimate the coordinates of the distant object. In the experiments, we used three benchmarks, i.e., survey monuments whose definitive coordinates have already been determined, as distant objects against which the calculated coordinates are validated. The benchmarks are located around ITB (Bandung Institute of Technology) and are coded ITB092, ITB059A, and ITB075A.
From each benchmark object, we collected GNSS coordinates from the mobile device, the compass angle, the approximate coordinates, and photos. The photos of the benchmark objects were deliberately excluded from the calculation of the distant-object coordinates and were used only for constructing the 3D models. For each benchmark, data were collected in three categories based on the observer's distance from the object: 10 m, 20 m, and 30 m. These distances were chosen on the assumption that the average person photographs an object from a range of 10 m to 30 m; if the photo is taken from too far away, it becomes difficult to identify objects, especially relatively small ones.
Data for each category were collected four times from various angles, using a Samsung Galaxy S10+ mobile device and the GPS Map Camera Lite application, which can record photos, mobile device coordinates, and compass directions simultaneously [103]. Figure 8 shows an example of data retrieved from a benchmark object using the application. Next, the approximate coordinates are retrieved from Google Maps as an additional parameter for the coordinate estimation process.
The collected mobile device coordinates and compass directions are then turned into straight-line equations, so each category has four line equations. The four line equations yield six combinations of intersection points, from which six pairs of X and Y coordinates are obtained. The detailed process can be seen in Appendix A.
The total number of coordinate points for each line intersection is six coordinate points, and then one approximation coordinate of the benchmark object taken from Google Maps is added. Hence, the number of coordinate data processed is seven coordinate points. Furthermore, to determine the final coordinate estimation, a calculation process was carried out using the mean-shift algorithm with a kernel bandwidth of 30. An example of the results can be seen in Figure 9 as follows.
Figure 9 shows the estimated position of the benchmark object's coordinates, marked with a “+” sign. The x-axis represents the x or Easting coordinate and the y-axis the y or Northing coordinate in the UTM projected coordinate system. The seven green circles in the graph are the intersection points of the combinations of line equations plus the approximate coordinate. Based on this, it can be assumed that the object's coordinates lie in the middle of these intersection points and the approximate coordinate (the “+” sign). Using the mean-shift algorithm, the mean value of the cluster can be determined; this mean value is the final estimate of the coordinates of the distant object. The final calculated coordinates of the distant objects are given in the tables below: Table 3 shows the estimated coordinates for the ITB092 benchmark, Table 4 for the ITB059A benchmark, and Table 5 for the ITB075A benchmark.

3.3. Results of 3D Model Reconstruction

To build a 3D model of a problem object in the city, photo/image data of the object are needed. The photo data are taken from citizen reports using the camera sensor on their phones. The photos of problem objects collected from MCS are then processed to build a 3D model.
The photo data were processed using the PIX4Dcatch software [104]. This software is designed for iOS and Android devices and can transform 2D photographs into 3D objects with real-world coordinates (geolocation) using photogrammetry [105]. Data were captured by scanning the problem object using the camera sensor and the GNSS sensor on the mobile device. The scanning process essentially captures images of the object frame by frame. The photos of the problem object must share several overlapping parts so that the matching/stereo process can be carried out.
In this experiment, we took four examples of problem objects: (1) a non-formal area; (2) a damaged and potholed road; (3) a potholed sidewalk/pavement; and (4) a broken sidewalk/pavement. We chose these objects because such phenomena occur frequently in the city of Bandung, and MCS is an efficient method for monitoring them. We used applications and software from PIX4D to construct the 3D models of these problem objects. Table 6 shows photos of the problem objects and the number of photos taken to construct each 3D model in this study.
The results of the 3D model reconstruction of the problem objects can be seen in the images below. Figure 10 is a 3D model of the non-formal area around the ITB campus, precisely in Tamansari Village. The non-formal area has an inadequate environment and is a slum area: very dense settlements, narrow roads, and buildings unfit for habitation, with low construction quality and a lack of access to infrastructure [106]. These problems certainly affect the aesthetics of the city. Therefore, stakeholders or the government need to act to improve the quality of buildings in slum areas in line with a good urban spatial plan. By collecting citizen reports and visualizing them in 3D, it is hoped that the handling and improvement of this non-formal area can be followed up on quickly and precisely.
Figure 11 shows the 3D model of the damaged and potholed road. Damaged and potholed roads are among the problems that often occur in Bandung city: according to data released by BPS in 2018, the total length of damaged roads in Bandung reached 81.82 km [107]. With the 3D model visualization, the area of damaged and potholed road sections can be calculated directly, so that planning and budgeting can be carried out immediately and followed up with repairs. This makes it easier to monitor roads throughout Bandung city and provides efficiency in terms of time, effort, and cost.
Figure 12 and Figure 13 show damaged and potholed sidewalks located near the Bandung Institute of Technology. In Figure 12, the sidewalk has a hole right on the access path for people with disabilities, which is potentially hazardous and raises the risk of pedestrians falling into the hole when crossing the sidewalk. Likewise, Figure 13 shows a sidewalk that is damaged and broken, endangering pedestrians and disrupting the aesthetics of the city. By using the MCS method, information on infrastructure damage such as this can be received quickly by stakeholders.
As seen in Figure 14 below, the left image is a 3D visualization of a damaged and potholed road, and the right image is the calculation of the area of the potholed section using the software's polygon tools, which directly produce the area value. There are two sections of damaged road, with areas of approximately 0.82 m² and 0.73 m². This information on the damaged road area can be used by stakeholders to plan the follow-up action and the budget for road improvement.
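Conceptually, the area reported by the polygon tool is a planar polygon-area (shoelace) computation over the digitized outline of the damaged patch. The short sketch below illustrates this with placeholder vertices, not measured data from this study.

```python
# Shoelace formula for the area of a digitized damage outline; the vertex
# coordinates below are illustrative placeholders, not survey data.
def polygon_area(vertices):
    """Planar area (m^2) of a simple polygon given (x, y) vertices in metres."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# A roughly 1.05 m x 0.78 m rectangular patch gives about 0.82 m^2.
print(round(polygon_area([(0.0, 0.0), (1.05, 0.0), (1.05, 0.78), (0.0, 0.78)]), 2))
```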
As previously explained, the 3D reconstruction and visualization of problem objects in the city can help stakeholders see sidewalk conditions more closely and more intuitively, matching actual conditions on the ground. In addition, with 3D visualization, stakeholders can also measure the length, width, and depth of the holes in the pavement, so they can make decisions quickly and accurately. More details can be seen in Figure 15 below.
Figure 15 shows the 3D model of the potholed pavement from the top view and the measurement of the length, width, and depth of the hole. The length of the sidewalk hole is 63.8 cm, the width is 63.2 cm, and the depth is 60.8 cm. This information can assist stakeholders in making better plans and decisions and thus improve the quality of life of urban communities.

3.4. The Result of Integration with the Digital GeoTwin Platforms

After the 3D model of the problem object is constructed, the 3D object is integrated into the Digital Geotwin platform. The integration is carried out by entering the coordinate values obtained from the process of determining the coordinates of distant objects discussed in Section 3.2. The results of integrating the 3D model objects into the Digital Geotwin platform can be seen in Figure 16 below.
By integrating the problem object, stakeholders can understand the condition of the problem more holistically and can know the position and orientation of the problem object more intuitively. For example, stakeholders can assess the location of the problem, the conditions around the problem object, its impact on the surrounding environment, possible solutions, whether it needs immediate follow-up, the estimated cost of solving it, and how to evaluate it in the future, among various other analyses.

4. Evaluation

4.1. Evaluation of Determining the Coordinates of Distant Objects

The evaluation in this study was carried out by comparing the distance error of the proposed method for determining the coordinates of distant objects with the distance error of the previous research [25]. However, in this study, no adjustments were made for the compass direction and magnetic declination at each geographic location using the International Geomagnetic Reference Field model. A comparison of the coordinates and distance error values of the previous method and the method proposed in this study is presented in Table 7.
To compare the distance error values of the two methods, a paired t-test was used. The test was carried out with SPSS software [108,109], and the output is presented in the following tables.
Table 8 shows the summary descriptive statistics of the two methods. The mean distance error of the previous method is 11.7767 m, while that of the proposed method is 7.6911 m. Furthermore, the results of the paired samples test of the distance error values of the two methods are presented in Table 9.
Table 9 shows the significance value (Sig., 2-tailed) between the two data samples with a 95% confidence interval. If the Sig. (2-tailed) value is ≤ 0.05, there is a significant difference between the coordinates calculated using the previous research method and those calculated using the proposed method. Conversely, if the Sig. (2-tailed) value is > 0.05, there is no significant difference between the two methods.
The Sig. (2-tailed) value in the table is 0.003, which is smaller than 0.05. Based on this result, it can be concluded that there is a significant difference in the distance error after implementing the newly proposed method. The mean value in the table shows an average improvement in accuracy of 4.08556 m.
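For readers who prefer a scripted check, the snippet below performs the same paired t-test in Python; the error arrays are placeholders standing in for the per-measurement distance errors of Table 7, not the actual values.

```python
# Paired t-test on distance errors; an equivalent of the SPSS analysis.
# The values below are placeholders, not the measured errors from Table 7.
import numpy as np
from scipy import stats

prev_errors = np.array([12.1, 10.4, 13.0, 11.5, 9.8, 14.2])  # previous method [25], metres
prop_errors = np.array([7.9, 6.8, 8.4, 7.6, 6.5, 9.0])       # proposed method, metres

t_stat, p_value = stats.ttest_rel(prev_errors, prop_errors)
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.4f}")
# A p-value <= 0.05 indicates a significant difference between the methods,
# mirroring the Sig. (2-tailed) = 0.003 reported in Table 9.
```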
The evaluation above shows that the proposed method can increase the accuracy of the coordinates of distant objects. The Google Maps coordinates use the same coordinate reference system as the object, and their positional quality is generally better than the GNSS quality of mobile devices. This makes the resulting object coordinates considerably better than those of the previous method.

4.2. Evaluation of the Utilization of 3D Models in the Digital Geotwin Platform

To evaluate the integration of the 3D problem object model into the Digital Geotwin platform, we used a questionnaire to quantitatively assess the results of 3D utilization. We assessed three aspects: the comparative aspect, the satisfaction aspect, and the usability aspect [59]. In the comparative aspect, we compare 2D maps or visualizations with 3D visualizations. In the satisfaction aspect, we measure respondents' satisfaction with the 3D maps/visualization in terms of comfort, convenience, and the level of detail of the information obtained. In the usability aspect, we evaluate the use of features generated from the 3D visualization and the 3D dimensional measurement functions. The questionnaire can be accessed via the link at [110].
A total of 42 (forty-two) respondents filled out the questionnaire. The results for the comparative aspect show that a combined 83% of respondents stated “strongly agree” or “agree” that maps and 3D visualization are better than 2D visualization. A total of 10% of respondents were unsure or “neutral”, while 2% and 5% of respondents stated “disagree” and “strongly disagree”, respectively. More details can be seen in Figure 17.
For the satisfaction aspect, 67% of respondents stated that they were satisfied with the visualization and utilization of the 3D model, i.e., that it provided satisfaction in terms of comfort, convenience, and the level of detail of the information obtained. A total of 22% of respondents were unsure or “neutral”, while 6% stated “disagree” and 5% stated “strongly disagree”. The results of the questionnaire for the satisfaction aspect can be seen in Figure 18.
Figure 19 shows the respondents' results for the usability aspect. A combined 83% of respondents stated “strongly agree” or “agree” that the use of 3D models can be useful in city management and can support better planning and policy making. A total of 9% of respondents were unsure or “neutral”, and the remaining 3% and 5% stated “disagree” and “strongly disagree”.

5. Conclusions

From the results and evaluations of this research, it can be concluded that a monitoring system for urban problems can be implemented by utilizing MCS (Mobile Crowdsensing) integrated with Digital Geotwin technology. Urban problems such as damaged roads, potholes on sidewalks, and so on can be identified quickly and acted upon immediately. MCS is a method of collecting data and information (sensing) that involves the community/citizens using mobile devices. The MCS method has developed rapidly in recent years because of its many advantages over traditional sensors. For city monitoring systems, these advantages include the following: (1) the city does not need to install and distribute many sensors in every corner of the city, because the public is dynamic and highly mobile; (2) it has very broad reach and coverage; (3) the number of sensors and the computational capabilities are higher, because mobile devices are used to collect data and information; (4) it has a wide choice of platforms and applications compared with traditional sensors; (5) it is more efficient in terms of time, effort, and cost.
The combination of the MCS method and Digital Geotwin technology can produce a better monitoring system for urban problems. Using the MCS method, information can be obtained in the form of the coordinate position of the problem object along with the object's 3D model. The reconstruction of 3D objects is not limited to visualization; dimensional measurements of the object can also be carried out. Furthermore, the position coordinates and 3D shapes resulting from MCS can be integrated into the Digital Geotwin platform. Digital Geotwin technology can represent 3D objects and city conditions virtually, making it easier for stakeholders to monitor urban problems.
In this paper, the research focuses on two things. The first is determining the coordinates of a problem object in an urban area from a distance (a distant object) using a mobile device. The second is constructing 3D models of urban problem objects (such as potholed sidewalks, damaged roads, and garbage dumps) from camera-sensor data collected from citizens' mobile devices through the MCS method and then integrating them into the Digital Geotwin.
The evaluation of the statistical tests for determining the coordinates of distant objects shows quite good results, namely an average increase in accuracy of about 4 (four) meters. Meanwhile, the evaluation of the development and use of 3D models to represent urban problem objects shows better results in three aspects. The first is the comparative aspect, which compares 2D map-based visualization of objects with 3D visualization; the questionnaire results show that more than 80% of respondents agreed that 3D visualization is better than 2D visualization. The second is the satisfaction aspect, measured by the convenience, comfort, and interest of respondents in 3D visualization; the results show that more than 60% of respondents are satisfied with the use of 3D visualization. The third is the usability aspect: more than 80% of respondents agreed that the use and visualization of 3D objects help stakeholders plan development, make policies, manage the city, and provide more detailed and intuitive information.

Author Contributions

Conceptualization, S.H.S., R.R. and A.B.S.; Methodology, R.R.; Validation, A.B.S.; Formal analysis, S.H.S.; Writing—original draft, R.R.; Visualization, R.R.; Supervision, S.H.S. and A.B.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data sets generated during and/or analyzed in this study are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

To find the equation of a line in the process of determining the coordinates of a distant object, two pieces of data are needed: (1) the compass direction relative to magnetic north and (2) the GPS coordinates of the user's mobile device. The formulation used to create the equation of the line is as follows.
y = m(x - a) + b   (A1)
m = \tan \Phi   (A2)
where
y: Northing (UTM coordinate system)
x: Easting (UTM coordinate system)
a: Easting of the mobile device (UTM coordinate system)
b: Northing of the mobile device (UTM coordinate system)
m: gradient
Φ: compass angle
Equation (A1) is the formulation used to find the equation of the line for each point or position captured by the user with their mobile device, and Equation (A2) gives the gradient of that line, obtained from the compass direction taken by the user/citizen. After several line equations have been obtained, the intersection point between each pair of line equations is determined. In this research, four line equations were used, giving six combinations of intersection points. The number of combinations can be determined using the combination formula below.
{}_{n}C_{r} = \frac{n!}{(n - r)!\, r!}   (A3)
where
C: combination
n: amount of data
r: the amount of data to be combined
!: factorial symbol
The next step is to enter the coordinates resulting from the line intersections and the approximate coordinates from Google Maps into the mean-shift algorithm to determine the final estimate. The mean-shift algorithm is an average-shift method that carries out a simple iterative procedure, shifting each data point to the average of the surrounding data points [84]. The algorithm is used to estimate the density gradient; in this study, Kernel Density Estimation (KDE) was used as the non-parametric density estimator [83,86].
\hat{f}(x) = \frac{1}{nh} \sum_{i=1}^{n} K\left(\frac{x - x_{i}}{h}\right)   (A4)
where
K: Gaussian kernel used
n: amount of data
h: bandwidth or window radius
The kernel density estimate used in the mean-shift procedure is given in Equation (A4) above. The bandwidth value used in this study was 30.
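A minimal sketch of the iterative procedure behind Equations (A3) and (A4) is given below: it checks the number of line-pair combinations and then repeatedly shifts each point to the Gaussian-kernel weighted mean of all points with bandwidth h = 30. The implementation details are assumptions for illustration, not the authors' code.

```python
# Mean-shift with a Gaussian kernel (Eq. (A4)) over the line intersection points
# plus the approximate coordinate; a sketch, not the authors' implementation.
import math

import numpy as np

assert math.comb(4, 2) == 6  # four lines give six intersection combinations, Eq. (A3)

def mean_shift_estimate(points, h=30.0, iterations=100, tol=1e-3):
    """points: (n, 2) array of UTM coordinates; returns the converged mode."""
    pts = np.asarray(points, dtype=float)
    shifted = pts.copy()
    for _ in range(iterations):
        new = np.empty_like(shifted)
        for i, x in enumerate(shifted):
            # Gaussian kernel weights K((x - x_i) / h)
            w = np.exp(-np.sum((pts - x) ** 2, axis=1) / (2.0 * h ** 2))
            new[i] = (w[:, None] * pts).sum(axis=0) / w.sum()
        if np.linalg.norm(new - shifted) < tol:
            shifted = new
            break
        shifted = new
    return shifted.mean(axis=0)  # final coordinate estimate of the distant object
```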

References

  1. Gössling, S. Integrating E-Scooters in Urban Transportation: Problems, Policies, and the Prospect of System Change. Transp. Res. Part D Transp. Environ. 2020, 79, 102230. [Google Scholar] [CrossRef]
  2. Nguyen, T.T.; Ngo, H.H.; Guo, W.; Wang, X.C.; Ren, N.; Li, G.; Ding, J.; Liang, H. Implementation of a Specific Urban Water Management—Sponge City. Sci. Total Environ. 2019, 652, 147–162. [Google Scholar] [CrossRef] [PubMed]
  3. Nowakowski, P. A Proposal to Improve E-Waste Collection Efficiency in Urban Mining: Container Loading and Vehicle Routing Problems—A Case Study of Poland. Waste Manag. 2017, 60, 494–504. [Google Scholar] [CrossRef]
  4. Liu, H.; Jia, Y.; Niu, C. “Sponge City” Concept Helps Solve China’s Urban Water Problems. Environ. Earth Sci. 2017, 76, 473. [Google Scholar] [CrossRef]
  5. Cui, J.X.; Liu, F.; Janssens, D.; An, S.; Wets, G.; Cools, M. Detecting Urban Road Network Accessibility Problems Using Taxi GPS Data. J. Transp. Geogr. 2016, 51, 147–157. [Google Scholar] [CrossRef]
  6. Broere, W. Urban Underground Space: Solving the Problems of Today’s Cities. Tunn. Undergr. Sp. Technol. 2016, 55, 245–248. [Google Scholar] [CrossRef]
  7. Haaland, C.; Konijnendijk van den Bosch, C. Challenges and Strategies for Urban Green-Space Planning in Cities Undergoing Densification: A Review. Urban For. Urban Green. 2015, 14, 760–771. [CrossRef]
  8. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep Learning in Remote Sensing Applications: A Meta-Analysis and Review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
  9. Chuantao, Y.I.N.; Zhang, X.; Hui, C.; Jingyuan, W.; Daven, C.; Bertrand, D. A Literature Survey on Smart Cities. Sci. China Inf. Sci. 2015, 58, 1–18. [Google Scholar] [CrossRef]
  10. Jalali, R.; El-Khatib, K.; McGregor, C. Smart City Architecture for Community Level Services through the Internet of Things. In Proceedings of the 2015 18th International Conference on Intelligence in Next Generation Networks, ICIN 2015, Paris, France, 17–19 February 2015; pp. 108–113. [Google Scholar] [CrossRef]
  11. Jiang, D. The Construction of Smart City Information System Based on the Internet of Things and Cloud Computing. Comput. Commun. 2020, 150, 158–166. [Google Scholar] [CrossRef]
  12. Laufs, J.; Borrion, H.; Bradford, B. Security and the Smart City: A Systematic Review. Sustain. Cities Soc. 2020, 55, 102023. [Google Scholar] [CrossRef]
  13. Kim, T.-H.; Ramos, C.; Mohammed, S. Smart City and IoT. Future Gener. Comput. Syst. 2017, 76, 159–162. [Google Scholar] [CrossRef]
  14. Li, G.; Zheng, Y.; Fan, J.; Wang, J.; Cheng, R. Crowdsourced Data Management: Overview and Challenges. In Proceedings of the 2017 ACM International Conference on Management of Data, Chicago, IL, USA, 14–19 May 2017; pp. 1711–1716. [Google Scholar]
  15. Tong, Y.; Zhou, Z.; Zeng, Y.; Chen, L.; Shahabi, C. Spatial Crowdsourcing: A Survey. VLDB J. 2020, 29, 217–250. [Google Scholar] [CrossRef]
  16. Capponi, A.; Fiandrino, C.; Kantarci, B.; Foschini, L.; Kliazovich, D.; Bouvry, P. A Survey on Mobile Crowdsensing Systems: Challenges, Solutions, and Opportunities. IEEE Commun. Surv. Tutorials 2019, 21, 2419–2465. [Google Scholar] [CrossRef]
  17. Guo, B.; Yu, Z.; Zhou, X. From Participatory Sensing to Mobile Crowd Sensing. In Proceedings of the 2014 IEEE International Conference on Pervasive Computing and Communication Workshops (PERCOM WORKSHOPS), Budapest, Hungary, 24–28 March 2014; pp. 593–598. [Google Scholar]
  18. Chatzimilioudis, G.; Konstantinidis, A.; Laoudias, C. Crowdsourcing with Smartphones. IEEE Internet Comput. 2012, 16, 36–44. [Google Scholar] [CrossRef]
  19. Ganti, R.K.; Ye, F.; Lei, H. Mobile Crowdsensing: Current State and Future Challenges. IEEE Commun. Mag. 2011, 49, 32–39. [Google Scholar] [CrossRef]
  20. Christin, D.; Reinhardt, A.; Kanhere, S.S.; Hollick, M. A Survey on Privacy in Mobile Participatory Sensing Applications. J. Syst. Softw. 2011, 84, 1928–1946. [Google Scholar] [CrossRef]
  21. Kong, X.; Liu, X.; Jedari, B.; Li, M.; Wan, L.; Xia, F. Mobile Crowdsourcing in Smart Cities: Technologies, Applications, and Future Challenges. IEEE Internet Things J. 2019, 6, 8095–8113. [Google Scholar] [CrossRef]
  22. United Nations Sustainable Development. Available online: https://sdgs.un.org/goals (accessed on 10 October 2022).
  23. Boubiche, D.E.; Imran, M.; Maqsood, A.; Shoaib, M. Mobile Crowd Sensing—Taxonomy, Applications, Challenges, and Solutions. Comput. Human Behav. 2019, 101, 352–370. [Google Scholar] [CrossRef]
  24. Guo, B.I.N.; Wang, Z.H.U.; Yu, Z.; Wang, Y.U.; Yen, N.Y.; Huang, R.; Zhou, X. Mobile Crowd Sensing and Computing: The Review of an Emerging Human-Powered Sensing Paradigm. ACM Comput. Surv. (CSUR) 2015, 48, 1–31. [Google Scholar] [CrossRef]
  25. Kotovirta, V.; Toivanen, T.; Tergujeff, R.; Huttunen, M. Participatory Sensing in Environmental Monitoring—Experiences. In Proceedings of the 6th International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing IMIS 2012, Palermo, Italy, 4–6 July 2012; pp. 155–162. [Google Scholar] [CrossRef]
  26. Haltofová, B. Implementation of Geo-Crowdsourcing Mobile Applications in e-Government of V4 Countries: A State-of-the-Art Survey. World Acad. Sci. Eng. Technol. Int. J. Comput. Electr. Autom. Control Inf. Eng. 2017, 11, 568–572. [Google Scholar]
  27. Ariya Sanjaya, I.M.; Supangkat, S.H.; Sembiring, J. Citizen Reporting Through Mobile Crowdsensing: A Smart City Case of Bekasi. In Proceedings of the 2018 International Conference on ICT for Smart Society: Innovation Toward Smart Society and Society 5.0, ICISS 2018, Semarang, Indonesia, 10–11 October 2018; pp. 11–14. [Google Scholar] [CrossRef]
  28. Calle-Jimenez, T.; Luján-Mora, S. Using Crowdsourcing to Improve Accessibility of Geographic Maps on Mobile Devices. In Proceedings of the ACHI 2015: The 8th International Conference on Advances in Computer-Human Interactions, Lisbon, Portugal, 22–27 February 2015; pp. 150–154. [Google Scholar]
  29. Jones, P.; Layard, A.; Speed, C.; Lorne, C. MapLocal: Use of Smartphones for Crowdsourced Planning. Plan. Pract. Res. 2015, 30, 322–336. [Google Scholar] [CrossRef]
  30. Ch’ng, E.; Cai, S.; Zhang, T.E.; Leow, F.T. Crowdsourcing 3D Cultural Heritage: Best Practice for Mass Photogrammetry. J. Cult. Herit. Manag. Sustain. Dev. 2019, 9, 24–42. [Google Scholar] [CrossRef]
  31. Ham, Y.; Kim, J. Participatory Sensing and Digital Twin City: Updating Virtual City Models for Enhanced Risk-Informed Decision-Making. J. Manag. Eng. 2020, 36, 04020005. [Google Scholar] [CrossRef]
  32. Hamrouni, A.; Member, S.; Ghazzai, H.; Frikha, M.; Massoud, Y. A Spatial Mobile Crowdsourcing Framework for Event Reporting. IEEE Trans. Comput. Soc. Syst. 2020, 7, 477–491. [Google Scholar] [CrossRef]
  33. Kim, H.; Ham, Y. Participatory Sensing-Based Geospatial Localization of Distant Objects for Disaster Preparedness in Urban Built Environments. Autom. Constr. 2019, 107, 102960. [Google Scholar] [CrossRef]
  34. Abbondati, F.; Antonio, S.; Veropalumbo, R.; Dell, G. Surface Monitoring of Road Pavements Using Mobile Crowdsensing Technology. Measurement 2021, 171, 108763. [Google Scholar] [CrossRef]
  35. Cecilla, J.M.; Cano, J.; Hernandez-Orallo, E.; Calafate, C.T.; Manzoni, P. Mobile Crowdsensing Approaches to Address the COVID-19 Pandemic in Spain. IET Smart Cities 2020, 2, 58–63. [Google Scholar] [CrossRef]
  36. Li, X.; Goldberg, D.W. Computers, Environment and Urban Systems Toward a Mobile Crowdsensing System for Road Surface Assessment. Comput. Environ. Urban Syst. 2018, 69, 51–62. [Google Scholar] [CrossRef]
  37. Chen, H.; Guo, B.; Member, S.; Yu, Z.; Member, S. CrowdTracking: Real-Time Vehicle Tracking Through Mobile Crowdsensing. IEEE Internet Things J. 2019, 6, 7570–7583. [Google Scholar] [CrossRef]
  38. Zhao, X.; Wang, N.; Han, R.; Xie, B. International Journal of Disaster Risk Reduction Urban Infrastructure Safety System Based on Mobile Crowdsensing. Int. J. Disaster Risk Reduct. 2018, 27, 427–438. [Google Scholar] [CrossRef]
  39. Marakkalage, S.H.; Sarica, S.; Pik, B.; Lau, L.; Viswanath, S.K.; Balasubramaniam, T.; Yuen, C.; Member, S.; Yuen, B.; Luo, J.; et al. Understanding the Lifestyle of Older Population: Mobile Crowdsensing Approach. IEEE Trans. Comput. Soc. Syst. 2019, 6, 82–95. [Google Scholar] [CrossRef]
  40. Marjanovic, M.; Grubeša, S.; Žarko, I.P. Air and Noise Pollution Monitoring in the City of Zagreb by Using Mobile Crowdsensing. In Proceedings of the 2017 25th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, 21–23 September 2017. [Google Scholar] [CrossRef]
  41. Peng, Z.; Gao, S.; Xiao, B.; Guo, S.; Yang, Y. CrowdGIS: Updating Digital Maps via Mobile Crowdsensing. IEEE Trans. Autom. Sci. Eng. 2018, 15, 369–380. [Google Scholar] [CrossRef]
  42. Dhonju, H.K.; Xiao, W.; Shakya, B.; Mills, J.P.; Sarhosis, V. Documentation of heritage structures through geo-crowdsourcing and web-mapping. ISPRS Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2017, XLII, 18–22. [Google Scholar] [CrossRef]
  43. Xiao, Z.; Lim, H.; Ponnambalam, L. Participatory Sensing for Smart Cities: A Case Study on Transport Trip Quality Measurement. IEEE Trans. Ind. Informatics 2017, 13, 759–770. [Google Scholar] [CrossRef]
  44. Cheng, L.; Yuan, Y.; Xia, N.; Chen, S.; Chen, Y.; Yang, K.; Ma, L. ISPRS Journal of Photogrammetry and Remote Sensing Crowd-Sourced Pictures Geo-Localization Method Based on Street View Images and 3D Reconstruction. ISPRS J. Photogramm. Remote Sens. 2018, 141, 72–85. [Google Scholar] [CrossRef]
  45. Matarazzo, T.J.; Santi, P.; Pakzad, S.N.; Carter, K.; Ratti, C.; Moaveni, B.; Osgood, C.; Jacob, N. Crowdsensing Framework for Monitoring Bridge Vibrations Using Moving Smartphones. Proc. IEEE 2018, 106, 577–593. [Google Scholar] [CrossRef]
  46. Lehner, H.; Dorffner, L. Digital GeoTwin Vienna: Towards a Digital Twin City as Geodata Hub. PFG J. Photogramm. Remote Sens. Geoinf. Sci. 2020, 88, 63–75. [Google Scholar] [CrossRef]
  47. Mohammadi, N.; Taylor, J.E. Smart City Digital Twins. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 21 November–1 December 2017; pp. 1–5. [Google Scholar] [CrossRef]
  48. Deng, T.; Zhang, K.; Shen, Z.J. A Systematic Review of a Digital Twin City: A New Pattern of Urban Governance toward Smart Cities. J. Manag. Sci. Eng. 2021, 6, 125–134. [Google Scholar] [CrossRef]
  49. Xia, H.; Liu, Z.; Efremochkina, M.; Liu, X.; Lin, C. Study on City Digital Twin Technologies for Sustainable Smart City Design: A Review and Bibliometric Analysis of Geographic Information System and Building Information Modeling Integration. Sustain. Cities Soc. 2022, 84, 104009. [Google Scholar] [CrossRef]
  50. El Saddik, A.; Laamarti, F.; Alja’Afreh, M. The Potential of Digital Twins. IEEE Instrum. Meas. Mag. 2021, 24, 36–41. [Google Scholar] [CrossRef]
  51. Shahat, E.; Hyun, C.T.; Yeom, C. City Digital Twin Potentials: A Review and Research Agenda. Sustainability 2021, 13, 3386. [Google Scholar] [CrossRef]
  52. Botín-Sanabria, D.M.; Mihaita, S.; Peimbert-García, R.E.; Ramírez-Moreno, M.A.; Ramírez-Mendoza, R.A.; Lozoya-Santos, J.D.J. Digital Twin Technology Challenges and Applications: A Comprehensive Review. Remote Sens. 2022, 14, 1335. [Google Scholar] [CrossRef]
  53. Sharma, A.; Kosasih, E.; Zhang, J.; Brintrup, A.; Calinescu, A. Digital Twins: State of the Art Theory and Practice, Challenges, and Open Research Questions. J. Ind. Inf. Integr. 2022, 30, 100383. [Google Scholar] [CrossRef]
  54. Fuller, A.; Fan, Z.; Day, C.; Barlow, C. Digital Twin: Enabling Technologies, Challenges and Open Research. IEEE Access 2020, 8, 108952–108971. [Google Scholar] [CrossRef]
  55. Yan, J.; Zlatanova, S.; Aleksandrov, M.; Diakite, A.A.; Pettit, C. Integration of 3D Objects and Terrain for 3D Modelling Supporting the Digital Twin. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, IV, 24–27. [Google Scholar] [CrossRef] [Green Version]
  56. Ruohomaki, T.; Airaksinen, E.; Huuska, P.; Kesäniemi, O.; Martikka, M.; Suomisto, J. Smart City Platform Enabling Digital Twin. In Proceedings of the 2018 International Conference on Intelligent Systems (IS), Funchal, Portugal, 25–27 September 2018; pp. 155–161. [Google Scholar]
  57. Xiaojing, H.; Leong, K.K.; Bo, Y.; Yong, K.T. An Efficient Platform for 3D City Model Visualization. In Proceedings of the 2006 IEEE International Symposium on Geoscience and Remote Sensing, Denver, CO, USA, 31 July–4 August 2006; pp. 917–920. [Google Scholar] [CrossRef]
  58. Lv, Z.; Li, X.; Wang, W.; Zhang, B.; Hu, J. Government Affairs Service Platform for Smart City. Future Gener. Comput. Syst. 2018, 81, 443–451. [Google Scholar] [CrossRef]
  59. Lv, Z.; Yin, T.; Zhang, X.; Song, H.; Chen, G. Virtual Reality Smart City Based on WebVRGIS. IEEE Internet Things J. 2016, 3, 1015–1024. [Google Scholar] [CrossRef]
  60. Major, P.; Li, G.; Hildre, H.P.; Zhang, H. The Use of a Data-Driven Digital Twin of a Smart City: A Case Study of Ålesund, Norway. IEEE Instrum. Meas. Mag. 2021, 24, 39–49. [Google Scholar] [CrossRef]
  61. Aljoufie, M.; Tiwari, A. Citizen Sensors for Smart City Planning and Traffic Management: Crowdsourcing Geospatial Data through Smartphones in Jeddah, Saudi Arabia. GeoJournal 2022, 87, 3149–3168. [Google Scholar] [CrossRef]
  62. Khedher, I.; Faiz, S.; Gazah, S. R-Safety: A Mobile Crowdsourcing Platform for Road Safety in Smart Cities. In Proceedings of the 2022 8th International Conference on Control, Decision and Information Technologies CoDIT 2022, Istanbul, Turkey, 17–20 May 2022; pp. 950–955. [Google Scholar] [CrossRef]
  63. BPS-Statistics of Bandung Municipality. Bandung Municipality in Figures; BPS Kota: Bandung, Indonesia, 2020.
  64. Nuraeni, A.; Munandar, A. Smart City Evaluation Model in Bandung, West Java, Indonesia. In Proceedings of the 2019 IEEE 13th International Conference on Telecommunication Systems, Services, and Applications (TSSA), Piscataway, NJ, USA, 3–4 October 2019. [Google Scholar]
  65. Breunig, M.; Bradley, P.E.; Jahn, M.; Kuper, P.; Mazroob, N.; Rösch, N.; Al-doori, M.; Stefanakis, E.; Jadidi, M. Geospatial Data Management Research: Progress and Future Directions. ISPRS Int. J. Geo-Inf. 2020, 9, 95. [Google Scholar] [CrossRef] [Green Version]
  66. Sun, K.; Zhu, Y.; Pan, P.; Hou, Z.; Wang, D.; Li, W. Geospatial Data Ontology: The Semantic Foundation of Geospatial Data Integration and Sharing. Big Earth Data 2019, 3, 269–296. [Google Scholar] [CrossRef] [Green Version]
  67. Poiesi, F.; Locher, A.; Nocerino, E.; Remondino, F. Cloud-Based Collaborative 3D Reconstruction Using Smartphones. In Proceedings of the 14th European Conference on Visual Media Production (CVMP 2017), London, UK, 11–13 December 2017. [Google Scholar]
  68. Paper, C. 3D Reconstruction with a Collaborative Approach Based on Smartphones and a Cloud-Based. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W8, 187–194. [Google Scholar] [CrossRef] [Green Version]
  69. Apollonio, F.I.; Fantini, F.; Garagnani, S.; Gaiani, M. A Photogrammetry-Based Workflow for the Accurate 3D Construction and Visualization of Museums Assets. Remote Sens. 2021, 13, 486. [Google Scholar] [CrossRef]
  70. Iglhaut, J.; Cabo, C.; Puliti, S.; Piermattei, L.; O’Connor, J.; Rosette, J. Structure from Motion Photogrammetry in Forestry: A Review. Curr. For. Rep. 2019, 5, 155–168. [Google Scholar] [CrossRef] [Green Version]
  71. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. “Structure-from-Motion” Photogrammetry: A Low-Cost, Effective Tool for Geoscience Applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  72. Tavani, S.; Pignalosa, A.; Corradetti, A.; Mercuri, M.; Smeraglia, L.; Riccardi, U.; Seers, T.; Pavlis, T.; Billi, A. Photogrammetric 3D Model via Smartphone GNSS Sensor: Workflow, Error Estimate, and Best Practices. Remote Sens. 2020, 12, 3616. [Google Scholar] [CrossRef]
  73. Badan Informasi Geospasial. Available online: https://www.big.go.id/ (accessed on 24 November 2022).
  74. Adreani, L.; Colombo, C.; Fanfani, M.; Nesi, P.; Pantaleo, G.; Pisanu, R. Rendering 3D City for Smart City Digital Twin. In Proceedings of the 2022 IEEE International Conference on Smart Computing (SMARTCOMP), Helsinki, Finland, 20–24 June 2022; pp. 183–185. [Google Scholar] [CrossRef]
  75. Zhang, H.; Jiang, J.; Huang, W.; Yang, L. Design and implementation of crowdsourcing based China’s national public geospatial information collection system. Remote Sens. Spat. Inf. Sci. 2019, XLII, 10–14. [Google Scholar] [CrossRef] [Green Version]
  76. Rashid, A.; Chaturvedi, A. Cloud Computing Characteristics and Services A Brief Review. Int. J. Comput. Sci. Eng. 2019, 7, 421–426. [Google Scholar] [CrossRef]
  77. Jing, C.; Du, M.; Li, S.; Liu, S. Geospatial Dashboards for Monitoring Smart City Performance. Sustainability 2019, 11, 5648. [Google Scholar] [CrossRef] [Green Version]
  78. Biljecki, F.; Ledoux, H.; Stoter, J. An Improved LOD Specification for 3D Building Models. Comput. Environ. Urban Syst. 2016, 59, 25–37. [Google Scholar] [CrossRef] [Green Version]
  79. Yap, W.; Janssen, P.; Biljecki, F. Free and Open Source Urbanism: Software for Urban Planning Practice. Comput. Environ. Urban Syst. 2022, 96, 101825. [Google Scholar] [CrossRef]
  80. El Haje, N.; Jessel, J.-P.; Gaildrat, V.; Sanza, C. 3D Cities Rendering and Visualisation: A Web-Based Solution. In Proceedings of the Eurographics Workshop on Urban Data Modelling and Visualisation (UDMV 2016), Liege, Belgium, 8 December 2016; pp. 95–100. [Google Scholar] [CrossRef]
  81. Li, Y.; Yi, G. Spatial Task Management Method for Location Privacy Aware Crowdsourcing. Clust. Comput. 2019, 22, 1797–1803. [Google Scholar] [CrossRef]
  82. Manweiler, J.; Choudhury, R.R. Satellites in Our Pockets: An Object Positioning System Using Smartphones. In Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services, Windermere, UK, 25–29 June 2012; pp. 211–224. [Google Scholar]
  83. Fukunaga, K.; Hostetler, L. The Estimation of the Gradient of a Density Function, with Applications in Pattern—Recognition. IEEE Trans. Inf. Theory 1975, 21, 32–40. [Google Scholar] [CrossRef] [Green Version]
  84. Cheng, Y. Mean Shift, Mode Seeking, and Clustering. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 790–799. [Google Scholar] [CrossRef] [Green Version]
  85. Comaniciu, D.; Meer, P. Mean Shift: A Robust Approach Toward Feature Space Analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 603–619. [Google Scholar] [CrossRef] [Green Version]
  86. Comaniciu, D.; Meer, P. Mean Shift Analysis and Applications. In Proceedings of the 7th IEEE International Conference on Computer Vision, Corfu, Greece, 20–27 September 1999. [Google Scholar]
  87. Carreira-Perpiñán, M.Á. A Review of Mean-Shift Algorithms for Clustering. arXiv 2015, arXiv:1503.00687. [Google Scholar]
  88. Dodsworth, E.; Nicholson, A. Academic Uses of Google Earth and Google Maps in a Library Setting. Inf. Technol. Libr. 2012, 31, 102–117. [Google Scholar] [CrossRef] [Green Version]
  89. Pokorný, P. Determining Traffic Levels in Cities Using Google Maps. In Proceedings of the 2017 Fourth International Conference on Mathematics and Computers in Sciences and in Industry MCSI 2017, Corfu, Greece, 24–27 August 2017; pp. 144–147. [Google Scholar] [CrossRef]
  90. Dewi, S.S.; Satria, D.; Yusiban, E.; Sugiyanto, D. Prototipe Sistem Informasi Monitoring Kebakaran Bangunan Berbasis Google Maps Dan Modul GSM [Prototype of a Building Fire Monitoring Information System Based on Google Maps and a GSM Module]. J. JTIK (J. Teknol. Inf. Dan Komun.) 2017, 1, 33–38. [Google Scholar] [CrossRef] [Green Version]
  91. Mishra, S.; Bhattacharya, D.; Gupta, A. Congestion Adaptive Traffic Light Control and Notification Architecture Using Google Maps APIs. Data 2018, 3, 67. [Google Scholar] [CrossRef] [Green Version]
  92. McQuire, S. One Map to Rule Them All? Google Maps as Digital Technical Object. Commun. Public 2019, 4, 150–165. [Google Scholar] [CrossRef]
  93. Mehta, H.; Kanani, P.; Lande, P. Google Maps. Int. J. Comput. Appl. 2019, 178, 41–46. [Google Scholar] [CrossRef]
  94. Reljić, I.; Dunđer, I. Application of Photogrammetry in 3D Scanning of Physical Objects. TEM J. 2019, 8, 94–101. [Google Scholar] [CrossRef]
  95. Pepe, M. Techniques, Tools, Platforms and Algorithms in Close Range Photogrammetry in Building 3D Model and 2D Representation of Objects and Complex Architectures. Comput. Aided Des. Appl. 2020, 18, 42–65. [Google Scholar] [CrossRef]
  96. Champion, E.; Bekele, M. From Photo to 3D to Mixed Reality: A Complete Workflow for Cultural Heritage Visualisation and Experience. Digit. Appl. Archaeol. Cult. Heritage 2019, 13, e00102. [Google Scholar] [CrossRef]
  97. Yilmazturk, F.; Gurbak, A.E. Geometric Evaluation of Mobile-Phone Camera Images for 3D Information. Int. J. Opt. 2019, 2019, 1–10. [Google Scholar] [CrossRef]
  98. Iheaturu, C.J.; Ayodele, E.G.; Okolie, C.J. An Assessment of the Accuracy of Structure-from-Motion (SfM) Photogrammetry for 3D Terrain Mapping. Geomat. Land Manag. Landsc. 2020, 65–82. [Google Scholar]
  99. Barbero-García, I.; Lerma, J.L.; Marqués-Mateu, Á.; Miranda, P. Low-Cost Smartphone-Based Photogrammetry for The Analysis of Cranial Deformation in Infants. World Neurosurg. 2017, 102, 545–554. [Google Scholar] [CrossRef]
  100. Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T. Crowdsourcing Based 3d Modeling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2016, 41, 587–590. [Google Scholar] [CrossRef] [Green Version]
  101. Paper, C. ETH Library A Smartphone-Based 3D Pipeline for the Creative Industry—The Replicate Eu Project. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII, 535–541. [Google Scholar] [CrossRef] [Green Version]
  102. Petrova-Antonova, D.; Ilieva, S. Methodological Framework for Digital Transition and Performance Assessment of Smart Cities. In Proceedings of the 2019 4th International Conference on Smart and Sustainable Technologies (SpliTech), Split, Croatia, 18–21 June 2019. [Google Scholar]
  103. Apps, S. GPS Map Camera Lite. Available online: https://play.google.com/store/apps/details?id=com.gpsmapcamerastamplite.gpsmaplocationstamponphotos&hl=id&gl=US (accessed on 10 November 2022).
  104. Pix4D PIX4DCatch: 3D Scanner. Available online: https://play.google.com/store/apps/details?id=com.pix4dcatch&hl=id&gl=US (accessed on 10 November 2022).
  105. PIX4D PIX4D Catch Webpage. Available online: https://www.pix4d.com/product/pix4dcatch (accessed on 10 November 2022).
  106. Febrion, C.; Wijaya, K.; Sugandi, D. Identifikasi Bangunan Kumuh Yang Mempengaruhi Kualitas Lingkungan Permukiman Tamansari Kota Bandung [Identification of Slum Buildings Affecting the Environmental Quality of the Tamansari Settlement, Bandung City]. J. Arsit. ARCADE 2020, 4, 314. [Google Scholar] [CrossRef]
  107. BPS (Badan Pusat Statistik). Keadaan Panjang Jalan Menurut Kondisi (Km) [Road Length by Condition (km)]. Available online: https://bandungkota.bps.go.id/indicator/17/135/1/keadaan-panjang-jalan-menurut-kondisi.html (accessed on 17 November 2022).
  108. BPS (Badan Pusat Statistik). Pengertian SPSS [Understanding SPSS]. Available online: https://pusdiklat.bps.go.id/diklat/bahan_diklat/BA_Paket%20Program%20Komputer%20(SPSS)%20-%20Deskriptif%20Statistik_Budiyanto,%20S.Si.,%20M.S.E_2117.pdf (accessed on 23 November 2022).
  109. IBM. Available online: https://www.ibm.com/products/spss-statistics (accessed on 23 November 2022).
  110. Ragajaya, R. Questionnaire: Visualization and Utilization of 3D Maps. Available online: https://forms.gle/65A1dou4vsEZePUv7 (accessed on 25 November 2022).
Figure 1. Research Area in Bandung City.
Figure 2. Proposed MCS framework for monitoring system in Smart City.
Figure 3. Level of Detail (LOD) on standard CityGML (source: [78]).
Figure 4. Illustration of taking coordinate data for distant objects.
Figure 5. Workflow for determining the coordinates of distant objects.
Figure 6. Three-dimensional model reconstruction workflow.
Figure 7. Digital Geotwin development from 3D models of geospatial data.
Figure 8. Example of data collection results.
Figure 9. An example of the results of estimating object coordinates using the mean-shift algorithm.
Figure 10. Three-dimensional model of the non-formal area in point cloud: (1) front view; (2) seen from a horizontal angle; (3) rear view; (4) seen from a vertical angle.
Figure 11. Three-dimensional model of broken and potholed roads: (1) front view; (2) right side view; (3) top view; (4) left side view.
Figure 12. Three-dimensional model of the perforated/potholed pavement: (1) front view; (2) top view; (3) bottom view; (4) side view.
Figure 13. Three-dimensional model of broken pavement: (1) front view; (2) side view; (3) rear view; (4) top view.
Figure 14. Three-dimensional model of the pothole (left image); calculation of the damaged/potholed road area (right image).
Figure 15. Three-dimensional model shape and dimension calculation of the sidewalk potholes: (1) visualization of the 3D shape of the perforated sidewalk model; (2) calculation of the length and width of the hole; (3) calculation of the depth of the pavement pits.
Figure 16. The result of integrating the 3D model of a problem object into the Digital Geotwin platform.
Figure 17. The results of the respondents’ assessment on the comparative aspect.
Figure 18. The results of the respondents’ assessment on the satisfaction aspect.
Figure 19. The results of the respondents’ assessment on the usability aspect.
Table 1. Data requirements and specifications.
Data | Data Type | Objects/Functions | Technology
Mobile Device Data
Photo/Image | Raster | Reconstruction of 3D models; determination of the problem object | Camera sensors
User/Citizen coordinates | Vector | Determination of the location of the coordinates of the problem object | Mobile GNSS (Global Navigation Satellite System) sensors
Approximate coordinates | Vector | Determination of the location of the coordinates of the problem object | Google Maps
Compass directions | Vector | Determination of the location of the coordinates of the problem object | Magnetometer and accelerometer sensors
Geospatial Data
Buildings | Vector | Reconstruction of 3D object models for the Digital Geotwin | Aerial photo/LIDAR
Building height | Vector | Reconstruction of 3D object models for the Digital Geotwin | Aerial photo/LIDAR
Road/Transport network | Vector | Reconstruction of 3D object models for the Digital Geotwin | Aerial photo/LIDAR
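For illustration, the mobile-device fields listed in Table 1 could be bundled into a single crowdsensing report before upload. The sketch below is only a hypothetical structure: the class and field names (CrowdsensingReport, device_coord_utm, and so on) are assumptions for illustration and are not the schema of the system described in this paper; the sample values are taken loosely from Table 3.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical report structure bundling the Table 1 mobile-device data;
# names and layout are illustrative only, not the published system's schema.
@dataclass
class CrowdsensingReport:
    photos: List[str]                              # file paths of problem-object photos (raster)
    device_coord_utm: Tuple[float, float]          # observer position from the mobile GNSS sensor (E, N)
    approx_object_coord_utm: Tuple[float, float]   # approximate object position picked on Google Maps (E, N)
    compass_bearing_deg: float                     # direction to the object from magnetometer/accelerometer
    description: str = ""                          # optional free-text note from the citizen

# Example report for a pothole observation (values are illustrative).
report = CrowdsensingReport(
    photos=["pothole_01.jpg", "pothole_02.jpg"],
    device_coord_utm=(788569.431, 9237324.370),
    approx_object_coord_utm=(788560.0, 9237324.0),
    compass_bearing_deg=261.0,
)
print(report)
```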
Table 2. Maturity levels for Digital Twin [52].
Level | Principle | Usage
0 | Physical world capture (e.g., point cloud, photogrammetry, drones, etc.) | Brownfield (existing) as-built survey
1 | 2D map/system or 3D model (e.g., object-based, with no metadata or building information models) | Design/asset optimization and coordination
2 | Connect model to static data, metadata, and building information model | 4D/5D simulation, design/asset management, BIM stage 2
3 | Enrich with real-time data (IoT, sensors) | Operational efficiency
4 | Two-way data integration and interaction | Remote and immersive operations; controlling the physical from the digital
5 | Autonomous operations and maintenance | Complete self-governance with total oversight and transparency
Table 3. Estimation of ITB092 Distant Object Coordinates.
Object Benchmark | Observer-to-Object Distance (m) | Compass Angle (°) | Mobile Device Coordinates (UTM E; N) | Estimated Coordinates (UTM E; N)
#ITB092
Data-1 | 10 | 261 | 788569.431; 9237324.370 | 788564.158; 9237322.625
Data-2 | 10 | 302 | 788567.816; 9237319.350 |
Data-3 | 10 | 339 | 788563.289; 9237316.021 |
Data-4 | 10 | 66 | 788554.437; 9237320.285 |
Data-1 | 20 | 263 | 788577.282; 9237324.367 | 788567.779; 9237318.499
Data-2 | 20 | 294 | 788574.958; 9237314.383 |
Data-3 | 20 | 331 | 788567.081; 9237309.012 |
Data-4 | 20 | 60 | 788547.799; 9237314.216 |
Data-1 | 30 | 264 | 788583.136; 9237324.171 | 788571.183; 9237317.062
Data-2 | 30 | 308 | 788582.003; 9237309.314 |
Data-3 | 30 | 330 | 788571.419; 9237302.782 |
Data-4 | 30 | 49 | 788542.414; 9237306.670 |
Table 4. Estimation of ITB059A Distant Object Coordinates.
Object Benchmark | Observer-to-Object Distance (m) | Compass Angle (°) | Mobile Device Coordinates (UTM E; N) | Estimated Coordinates (UTM E; N)
#ITB059A
Data-1 | 10 | 24 | 788592.843; 9237292.080 | 788599.541; 9237304.373
Data-2 | 10 | 275 | 788604.835; 9237302.504 |
Data-3 | 10 | 185 | 788597.207; 9237312.300 |
Data-4 | 10 | 102 | 788590.129; 9237304.857 |
Data-1 | 20 | 24 | 788588.561; 9237285.338 | 788596.534; 9237305.937
Data-2 | 20 | 274 | 788611.462; 9237301.634 |
Data-3 | 20 | 181 | 788598.182; 9237324.151 |
Data-4 | 20 | 105 | 788579.973; 9237306.771 |
Data-1 | 30 | 27 | 788587.938; 9237279.729 | 788596.257; 9237295.732
Data-2 | 30 | 252 | 788614.883; 9237308.851 |
Data-3 | 30 | 182 | 788597.685; 9237325.220 |
Data-4 | 30 | 111 | 788572.774; 9237311.704 |
Table 5. Estimation of ITB075A Distant Object Coordinates.
Object Benchmark | Observer-to-Object Distance (m) | Compass Angle (°) | Mobile Device Coordinates (UTM E; N) | Estimated Coordinates (UTM E; N)
#ITB075A
Data-1 | 10 | 291 | 788362.230; 9237365.817 | 788350.454; 9237367.109
Data-2 | 10 | 345 | 788359.726; 9237359.007 |
Data-3 | 10 | 64 | 788347.256; 9237363.226 |
Data-4 | 10 | 93 | 788342.485; 9237368.918 |
Data-1 | 20 | 296 | 788371.293; 9237359.768 | 788346.945; 9237371.119
Data-2 | 20 | 316 | 788363.573; 9237350.492 |
Data-3 | 20 | 66 | 788341.037; 9237356.798 |
Data-4 | 20 | 103 | 788336.780; 9237368.962 |
Data-1 | 30 | 303 | 788377.837; 9237357.465 | 788340.141; 9237360.596
Data-2 | 30 | 343 | 788369.585; 9237343.519 |
Data-3 | 30 | 62 | 788332.985; 9237351.497 |
Data-4 | 30 | 96 | 788326.458; 9237365.999 |
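Tables 3–5 illustrate the localization step of the workflow in Figure 5: each report contributes an observer position, a compass bearing, and an estimated distance, from which a candidate object position can be projected; the candidates, together with the approximate coordinate picked on Google Maps, are then clustered with the mean-shift algorithm, and the densest cluster centre is taken as the estimate (Figure 9). The sketch below is a minimal illustration of that idea only, assuming bearings measured clockwise from grid north, metre-based UTM coordinates, scikit-learn's MeanShift with an assumed bandwidth, and an assumed Google Maps coordinate; it is not the authors' implementation.

```python
import math
import numpy as np
from sklearn.cluster import MeanShift

def project_candidate(e, n, bearing_deg, distance_m):
    """Project a candidate object position from an observer position (UTM E, N),
    a compass bearing (degrees clockwise from grid north), and a distance in metres."""
    rad = math.radians(bearing_deg)
    return e + distance_m * math.sin(rad), n + distance_m * math.cos(rad)

# Observer coordinates and bearings for benchmark #ITB092 at 10 m (Table 3).
observations = [
    (788569.431, 9237324.370, 261),
    (788567.816, 9237319.350, 302),
    (788563.289, 9237316.021, 339),
    (788554.437, 9237320.285, 66),
]
distance_m = 10.0
approx_coord = (788560.0, 9237324.0)  # assumed Google Maps estimate, not taken from the paper

candidates = [project_candidate(e, n, b, distance_m) for e, n, b in observations]
candidates.append(approx_coord)  # the additional approximate-coordinate parameter

# Mean-shift clustering of the candidate points; the centre of the densest cluster
# would be taken as the estimated object coordinate. Bandwidth is an assumed value.
ms = MeanShift(bandwidth=8.0).fit(np.array(candidates))
print("Cluster centres (UTM E, N):")
print(ms.cluster_centers_)
```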
Table 6. Photo and number of photos of problem objects.
No | Problem Object | Photo of the Problem Object | Number of Photos Taken (Images)
1 | Informal area | (photo) | 181
2 | Broken and potholed roads | (photo) | 103
3 | Potholed pavement | (photo) | 88
4 | Broken pavement | (photo) | 116
Table 7. Comparison of estimated coordinates of experimental distant objects.
Object | Definitive Coordinates (UTM E; N) | Observer-to-Object Distance (m) | Methods in Previous Studies: Estimate (UTM E; N) | Distance Error (m) | Proposed Methods: Estimate (UTM E; N) | Distance Error (m)
#ITB092 | 788558.993; 9237325.283 | 10 | 788565.140; 9237322.273 | 6.84 | 788564.158; 9237322.625 | 4.90
#ITB092 | | 20 | 788570.024; 9237316.090 | 14.36 | 788567.779; 9237318.499 | 8.39
#ITB092 | | 30 | 788575.175; 9237313.273 | 20.15 | 788571.183; 9237317.062 | 11.85
#ITB059A | 788595.217; 9237303.210 | 10 | 788601.413; 9237304.494 | 6.33 | 788599.541; 9237304.373 | 4.48
#ITB059A | | 20 | 788597.912; 9237306.780 | 4.47 | 788596.534; 9237305.937 | 3.03
#ITB059A | | 30 | 788596.976; 9237288.260 | 15.05 | 788596.257; 9237295.732 | 7.55
#ITB075A | 788353.712; 9237371.251 | 10 | 788350.138; 9237365.535 | 6.74 | 788350.454; 9237367.109 | 5.27
#ITB075A | | 20 | 788345.413; 9237370.460 | 8.34 | 788346.945; 9237371.119 | 6.77
#ITB075A | | 30 | 788335.739; 9237355.794 | 23.71 | 788340.141; 9237360.596 | 17.25
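Because both the definitive and the estimated coordinates in Table 7 are expressed in UTM metres, each distance error can be checked as a planimetric (2D) offset. A minimal check for the #ITB059A row at 10 m, assuming a plain Euclidean distance in the UTM plane (other rows may differ slightly due to rounding of the tabulated coordinates):

```python
import math

def planimetric_error(est, definitive):
    """2D distance (in metres) between an estimated and a definitive UTM coordinate."""
    return math.hypot(est[0] - definitive[0], est[1] - definitive[1])

definitive = (788595.217, 9237303.210)   # #ITB059A definitive coordinate (Table 7)
previous   = (788601.413, 9237304.494)   # previous-method estimate at 10 m
proposed   = (788599.541, 9237304.373)   # proposed-method estimate at 10 m

print(round(planimetric_error(previous, definitive), 2))   # ~6.33 m, as tabulated
print(round(planimetric_error(proposed, definitive), 2))   # ~4.48 m, as tabulated
```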
Table 8. Paired sample statistics.
Pair | Mean | N | Std. Deviation | Std. Error Mean
Pair 1: PREVIOUS | 11.7767 | 9 | 6.84204 | 2.28068
Pair 1: PROPOSED | 7.6911 | 9 | 4.38095 | 1.46032
Table 9. Paired sample test.
Pair | Paired Differences: Mean | Std. Deviation | Std. Error Mean | 95% Confidence Interval of the Difference (Lower; Upper) | t | df | Sig. (2-Tailed)
Pair 1: PREVIOUS-PROPOSED | 4.08556 | 2.97336 | 0.99112 | 1.80003; 6.37108 | 4.122 | 8 | 0.003
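Tables 8 and 9 report SPSS output for a paired-samples t-test over the nine distance errors of Table 7 (previous vs. proposed method). The sketch below is a minimal re-computation using SciPy rather than the SPSS workflow used in the study; small differences from the tabulated values are expected because the inputs here are the rounded errors from Table 7.

```python
import numpy as np
from scipy import stats

# Distance errors (m) from Table 7, previous vs. proposed method (nine paired observations).
previous = np.array([6.84, 14.36, 20.15, 6.33, 4.47, 15.05, 6.74, 8.34, 23.71])
proposed = np.array([4.90, 8.39, 11.85, 4.48, 3.03, 7.55, 5.27, 6.77, 17.25])

# Paired-samples t-test (df = n - 1 = 8); roughly t ≈ 4.1, p ≈ 0.003, cf. Table 9.
t_stat, p_value = stats.ttest_rel(previous, proposed)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# Mean paired difference and its 95% confidence interval, cf. Table 9.
diff = previous - proposed
ci = stats.t.interval(0.95, len(diff) - 1, loc=diff.mean(), scale=stats.sem(diff))
print(f"mean difference = {diff.mean():.5f}, 95% CI = ({ci[0]:.5f}, {ci[1]:.5f})")
```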
