# Machine-Learning-Enhanced Procedural Modeling for 4D Historical Cities Reconstruction


## Abstract


## 1. Introduction

**Data Incompleteness.** First, it is crucial to acknowledge that historical information is inherently incomplete. Historical data are usually fragmented, missing, or conflicting, compelling researchers to amalgamate and interpret multiple sources, while occasionally resorting to educated guesses to fill in the gaps.

**Cultural specificity.** One of the main challenges of historical city reconstruction is that every case study is unique. Not only does the architecture change across cultures and regions, but the available data contributing to the remodeling can be very disparate. The solution to this challenge is two-fold. First, to achieve generality, an approach must prioritize easy customization and flexibility. Our proposed framework, founded on procedural modeling, offers complete parameterization, allowing for seamless adaptation and extension to address a wide range of modeling challenges. We also provide strategies to deal with parameters that might be totally unavailable for a specific city. Second, any purposely generic solution should be open-source. Indeed, the unique nature of cities and the multiplicity of research project objectives compel technical solutions to constant adaptation and evolution. In this regard, we consider closed commercial solutions to be a dead end, as no tool is anywhere close to comprehensiveness. The ability to alter and adapt the code to the needs of multiple projects and approaches is fundamental.

**Iterative nature of scientific projects.** When working on historical data, we are not only confronted with the initial problem of incompleteness; we must also consider the possibility that additional information will become available in the future. Indeed, the collection of urban historical data is generally iterative, and the ability to incorporate new data into the system dynamically is therefore essential.

**Subjectivity of the reconstruction and interpretation.** Recent research in cartography often considers the map as a culturally constrained artifact, rather than a mere projection of the territory [16]. The process of mapping itself is made of arbitrary choices, based on cultural factors and intentional decisions. Maps and other sources can thus offer different or even contradictory descriptions of the city.

## 2. Methodology and Approach

#### 2.1. From Cartographic Sources to a GIS Dataset

- Thinning. First, a thinning or skeletonization algorithm is used to obtain single-pixel-wide contours [41].
- Identification of the connected components. The identification of the connected components makes it possible to assign an index to each closed geometry.
- Removal of non-delimiting lines. Lines surrounded by the same connected component on both sides are non-delimiting. They constitute noise in the sense that they do not allow the demarcation of distinct building instances. Based on this criterion, they are automatically removed.
- Corners detection. The Harris Corner Detector is used to precisely locate prominent building corners [42]. It is supplemented by OpenCV’s cornerSubPix function, which refines the position to subpixel accuracy.
- Vectorization. The vectorization algorithm treats the map as a network. First, it detects all the nodes in the thinned image (i.e., the intersections and corners detected at the previous step). Second, it follows the edge paths to reconstruct the segment paths. At this stage, the geometry of each segment is simplified using the Douglas–Peucker algorithm [43]. The level of simplification is controlled by the parameter $\epsilon$.
- Removing duplicated segments. This step checks that detected segments are not duplicated.
- Reconstitution of the polygons. To reconstruct the polygons, the nodes directly adjacent to each connected component are retrieved. The segments are adjacent to the polygon if both their starting and their terminal nodes are adjacent to the connected component. Then, simply following the adjacent segments one after the other makes it possible to reconstruct the cycle of each polygon.
- Polygon hierarchy and orientation. Polygons can encompass “donut holes”, which should be oriented counterclockwise, according to the shapefile format convention, while the outer cycle should be oriented clockwise. To distinguish between both, the approximate inner area of the cycles is computed using the Shoelace formula (Equation (1)). $$A=\frac{1}{2}\left|\sum_{i=1}^{n}\left(x_{i}\cdot y_{i+1}-y_{i}\cdot x_{i+1}\right)\right|$$ The orientation of each polygon is then calculated using Equation (2), so that the cycles can be reoriented clockwise, or counterclockwise, accordingly. $$O=\sum_{i=1}^{n}\left(x_{i+1}-x_{i}\right)\left(y_{i+1}+y_{i}\right)$$
- Semantic allocation. The area covered by each connected component is spatially combined with semantic data from the built mask layer in order to add this information as an attribute to the polygon.
- Projection. Finally, the vector data are reprojected, according to the geographic projection of the georeferenced map image, and exported in shapefile format.
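The area and orientation computations from the polygon hierarchy step can be sketched in plain Python (the square rings below are hypothetical test geometries, not data from the pipeline):

```python
def shoelace_area(ring):
    """Approximate inner area of a cycle via the Shoelace formula."""
    n = len(ring)
    total = 0.0
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]  # wrap around to close the cycle
        total += x1 * y2 - y1 * x2
    return abs(total) / 2.0

def is_clockwise(ring):
    """Orientation test: a positive sum of (x2 - x1)(y2 + y1) indicates
    a clockwise cycle in a y-up coordinate system."""
    n = len(ring)
    total = 0.0
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        total += (x2 - x1) * (y2 + y1)
    return total > 0

outer = [(0, 0), (4, 0), (4, 4), (0, 4)]  # counterclockwise square
assert shoelace_area(outer) == 16.0
if not is_clockwise(outer):
    outer = outer[::-1]  # reorient clockwise, per the shapefile convention
assert is_clockwise(outer)
```

Hole cycles would be reoriented the opposite way, so that “donut holes” stay counterclockwise while outer cycles are clockwise.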

#### 2.2. Parameters Employed for the Procedural Modeling

- Height (H);
- Floor Height (${H}_{f}$);
- Number of Floors (${N}_{f}$).
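These three parameters are tied together by the relation $H = H_f \cdot N_f$, so any one of them can be derived when the other two are known. A minimal sketch (the function name is illustrative, not part of the framework):

```python
def complete_height_params(H=None, Hf=None, Nf=None):
    """Derive the missing one of Height (H), Floor Height (Hf), and
    Number of Floors (Nf), using the relation H = Hf * Nf."""
    if H is None and None not in (Hf, Nf):
        H = Hf * Nf
    elif Hf is None and None not in (H, Nf):
        Hf = H / Nf
    elif Nf is None and None not in (H, Hf):
        Nf = round(H / Hf)  # floor count must be an integer
    return H, Hf, Nf

assert complete_height_params(Hf=3.0, Nf=4) == (12.0, 3.0, 4)
assert complete_height_params(H=12.0, Nf=4) == (12.0, 3.0, 4)
assert complete_height_params(H=12.0, Hf=3.0) == (12.0, 3.0, 4)
```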

#### 2.3. Addressing the Issue of Missing Parameters

#### 2.4. Filling Gaps

- **Random Forest** with (a) max_depth = 5 and n_estimators = 10, (b) max_depth = 5 and n_estimators = 200, (c) max_depth = 50 and n_estimators = 200;
- **Decision Tree** with (a) max_depth = 5, (b) max_depth = 50, (c) max_depth = 100;
- **AdaBoost**;
- **Gradient Boosting Classifier**.
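A sketch of such a model comparison with scikit-learn is shown below; the dataset is a synthetic stand-in (features, target, and sizes are illustrative, not the paper's data):

```python
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data: 4 numeric features per building, binary target.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)

models = {
    "RandomForest (a)": RandomForestClassifier(max_depth=5, n_estimators=10, random_state=0),
    "RandomForest (b)": RandomForestClassifier(max_depth=5, n_estimators=200, random_state=0),
    "RandomForest (c)": RandomForestClassifier(max_depth=50, n_estimators=200, random_state=0),
    "DecisionTree (a)": DecisionTreeClassifier(max_depth=5, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
}

# Naive baseline: always predict the majority class.
baseline = max(np.mean(y_te), 1 - np.mean(y_te))
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
for name, score in scores.items():
    print(f"{name}: {score:.3f} (baseline {baseline:.3f})")
```

The baseline accuracy plays the role of the paper's naive solution: a model is only useful if it beats the majority-class prediction.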

#### 2.5. Transforming 2D Geodata into a 3D CityJSON Model with dhCityModeler

Four roof types are modeled: (1) **hip** roofs, that is, a roof that presents sloped surfaces on every side; (2) **gable** roofs, i.e., a roof that presents at least one vertical side (constituted by a portion of a wall); (3) **flat** roofs, which may or may not have a balustrade around their perimeter; and (4) **domed** roofs, i.e., flat roofs that also have a lowered dome on their surface (a category that was added to take into account the specificities of the case study).
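In the resulting CityJSON model, the roof type becomes one attribute among the building's parameters. The fragment below is a hand-written illustration; the attribute names are assumptions and may differ from dhCityModeler's actual schema:

```python
import json

ROOF_TYPES = ("hip", "gable", "flat", "domed")  # the four modeled categories

# Illustrative CityJSON-style building object; attribute names are assumptions.
building = {
    "type": "Building",
    "attributes": {
        "numberOfFloors": 3,
        "floorHeight": 3.2,
        "roofType": "domed",
    },
    "geometry": [],  # filled later by the procedural modeling step
}

assert building["attributes"]["roofType"] in ROOF_TYPES
print(json.dumps(building, indent=2))
```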

## 3. Resulting Model

## 4. Discussion

## 5. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## Abbreviations

Abbreviation | Definition |
---|---|
LOD | Level of Detail |
GIS | Geographic Information Systems |

## Appendix A

**Figure A1.** Mean classification accuracy and standard deviation for the various machine learning models, computed over 200 experiments. The x-axis shows the number of rows that were employed for the training process (in logarithmic scale). For each experiment, we picked n random samples from the full training dataset, and repeated the training process 200 times. Then, we computed the mean and standard deviation of the obtained score. We plot this against the naive solution to see the improvement provided by each method even when the available training data are consistently reduced. In this plot, the AdaBoost model and the Gradient Boosting classifier are also shown. While the Gradient Boosting classifier exhibits good performance, it was excluded due to computational time; the AdaBoost model shows less reliable performance, even when adding data to the training set. One can in fact notice a high standard deviation in the scores, with less stable results when compared with the other predictors.

## References

- Bruschke, J.; Kröber, C.; Messemer, H. Insights into Collections of Spatialized Historical Photographs. In Proceedings of the 25th International Conference on Cultural Heritage and New Technologies, Vienna, Austria, 4–6 November 2020. [Google Scholar]
- Stiller, J.; Wintergrün, D. Digital reconstruction in historical research and its implications for virtual research environments. In 3D Research Challenges in Cultural Heritage II; Münster, S., Pfarr-Harfst, M., Kuroczyński, P., Ioannides, M., Eds.; Springer: Cham, Switzerland, 2016; pp. 47–61. [Google Scholar]
- Biljecki, F.; Arroyo Ohori, K.; Ledoux, H.; Peters, R.Y.; Stoter, J.E. Population Estimation Using a 3D City Model: A Multi-Scale Country-Wide Study in the Netherlands. PLoS ONE
**2016**, 11, e0156808. [Google Scholar] [CrossRef] [PubMed][Green Version] - Wu, Z.; Wang, Y.; Gan, W.; Zou, Y.; Dong, W.; Zhou, S.; Wang, M. A Survey of the Landscape Visibility Analysis Tools and Technical Improvements. Int. J. Environ. Res. Public Health
**2023**, 20, 1788. [Google Scholar] [CrossRef] [PubMed] - Biljecki, F.; Stoter, J.E.; Ledoux, H.; Zlatanova, S.; Çöltekin, A. Applications of 3D City Models: State of the Art Review. ISPRS Int. J. Geo Inf.
**2015**, 4, 2842–2889. [Google Scholar] [CrossRef][Green Version] - Buyukdemircioglu, M.; Kocaman, S.; Kada, M. Deep learning for 3D building reconstruction: A review. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.
**2022**, XLIII-B2-2022, 359–366. [Google Scholar] [CrossRef] - Pepe, M.; Costantino, D.; Alfio, V.S.; Vozza, G.; Cartellino, E. A Novel Method Based on Deep Learning, GIS and Geomatics Software for Building a 3D City Model from VHR Satellite Stereo Imagery. ISPRS Int. J. Geo Inf.
**2021**, 10, 697. [Google Scholar] [CrossRef] - Biljecki, F.; Ledoux, H.; Stoter, J. Generating 3D city models without elevation data. Comput. Environ. Urban Syst.
**2017**, 64, 1–18. [Google Scholar] [CrossRef][Green Version] - Roy, E.; Pronk, M.; Agugiaro, G.; Ledoux, H.T. Inferring the number of floors for residential buildings. Int. J. Geogr. Inf. Sci.
**2022**, 37, 938–962. [Google Scholar] [CrossRef] - Biljecki, F.; Dehbi, Y. Raise the roof: Towards generating Lod2 models without aerial surveys using machine learning. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci.
**2019**, IV-4/W8, 27–34. [Google Scholar] [CrossRef][Green Version] - Hecht, R.; Meinel, G.; Buchroithner, M. Automatic identification of building types based on topographic databases—A comparison of different data sources. Int. J. Cartogr.
**2015**, 1, 18–31. [Google Scholar] [CrossRef][Green Version] - Zhou, P.; Chang, Y. Automated classification of building structures for urban built environment identification using machine learning. J. Build. Eng.
**2021**, 43, 103008. [Google Scholar] [CrossRef] - Biljecki, F.; Sindram, M. Estimating Building Age with 3D GIS. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci.
**2017**, IV-4/W5, 17–24. [Google Scholar] [CrossRef][Green Version] - Rosser, J.F.; Boyd, D.S.; Long, G.; Zakhary, S.; Mao, Y.; Robinson, D. Predicting residential building age from map data. Comput. Environ. Urban Syst.
**2019**, 73, 56–67. [Google Scholar] [CrossRef][Green Version] - Farella, E.M.; Özdemir, E.; Remondino, F. 4D Building Reconstruction with Machine Learning and Historical Maps. Appl. Sci.
**2021**, 11, 1445. [Google Scholar] [CrossRef] - Harley, J.B. Deconstructing the map. Cartographica
**1989**, 26, 1–20. [Google Scholar] [CrossRef][Green Version] - The London Charter Organisation. The London Charter for the Computer-Based Visualisation of Cultural Heritage; The London Charter Organisation: London, UK, 2009. [Google Scholar]
- Beacham, R. Defining our Terms in Heritage Visualization. In Paradata: Intellectual Transparency in Historical Visualization; Bentkowska-Kafel, K., Denard, H., Eds.; Research in the Arts and Humanities Series: Ashgate, UK, 2012; pp. 7–11. [Google Scholar]
- Börjesson, L.; Sköld, O.; Huvila, I. Paradata in Documentation Standards and Recommendations for Digital Archaeological Visualisations. DCS
**2020**, 6, 191–220. [Google Scholar] [CrossRef] - Denard, H. A New Introduction to the London Charter. In Paradata: Intellectual Transparency in Historical Visualization; Bentkowska-Kafel, K., Denard, H., Eds.; Research in the Arts and Humanities Series: Ashgate, UK, 2012; pp. 57–71. [Google Scholar]
- Gregory, I.N.; Geddes, A. Introduction: From Historical GIS to Spatial Humanities: Deepening Scholarship and Broadening Technology. In Toward Spatial Humanities: Historical GIS and Spatial History; Gregory, I.N., Geddes, A., Eds.; Indiana University Press: Bloomington, IN, USA, 2014; pp. ix–xxii. [Google Scholar]
- Gröger, G.; Plümer, L. CityGML—Interoperable semantic 3D city models. ISPRS J. Photogramm. Remote Sens.
**2012**, 71, 12–33. [Google Scholar] [CrossRef] - Ledoux, H.; Ohori, K.; Kumar, K.; Dukai, B.; Labetski, A.; Vitalis, S. CityJSON: A compact and easy-to-use encoding of the CityGML data model. Open Geospat. Data Softw. Stand.
**2019**, 4, 4. [Google Scholar] [CrossRef] - Biljecki, F.; Ledoux, H.; Stoter, J. An improved LOD specification for 3D building models. Comput. Environ. Urban Syst.
**2016**, 59, 25–37. [Google Scholar] [CrossRef][Green Version] - Vaienti, B.; Guhennec, P.; di Lenardo, I. A Data Structure for Scientific Models of Historical Cities: Extending the CityJSON Format. In Proceedings of the 6th ACM SIGSPATIAL International Workshop on Geospatial Humanities, GeoHumanities’22, Seattle, DC, USA, 1 November 2022. [Google Scholar]
- Morlighem, C.; Labetski, A.; Ledoux, H. Reconstructing historical 3D city models. Urban Inform.
**2022**, 1, 11. [Google Scholar] [CrossRef] - Saldaña, M. An Integrated Approach to the Procedural modeling of Ancient Cities and Buildings. Digit. Scholar. Humanit.
**2015**, 30, 148–163. [Google Scholar] [CrossRef][Green Version] - Girindran, R.; Boyd, D.S.; Rosser, J.; Vijayan, D.; Long, G.; Robinson, D. On the Reliable Generation of 3D City Models from Open Data. Urban Sci.
**2020**, 4, 47. [Google Scholar] [CrossRef] - Badwi, I.; Ellaithy, H.; Youssef, H. 3D-GIS Parametric modelling for Virtual Urban Simulation Using CityEngine. Ann. GIS
**2022**, 28, 325–341. [Google Scholar] [CrossRef] - Adão, T.; Magalhães, L.; Peres, E. Ontology-Based Procedural Modelling of Traversable Buildings Composed by Arbitrary Shapes, 1st ed.; Springer Briefs in Computer Science; Springer International Publishing: Cham, Switzerland, 2016; Volume 1. [Google Scholar]
- Biljecki, F.; Ledoux, H.; Stoter, J.E.; Vosselman, G. The variants of an LOD of a 3D building model and their influence on spatial analyses. ISPRS J. Photogramm. Remote Sens.
**2016**, 116, 42–54. [Google Scholar] [CrossRef][Green Version] - Amiran, D.H.K.; Karmon, M. The Hebrew University of Jerusalem. Department of Geography. In Atlas of Jerusalem, 1st ed.; De Gruyter: Berlin, Germany, 1973. [Google Scholar]
- Jerusalem, S.F.J. The Old City Compiled, Drawn Printed under the Direction of F.J. Salmon, Commissioner for Lands Surveys, Palestine. 1936. Revised from Information Supplied by Dept. of Antiquities 1945. Modified Reprint May 1947. 69.5 × 58 cm. 1936. Available online: https://www.nli.org.il/en/maps/NNL_MAPS_JER002654902/NLI (accessed on 1 May 2023).
- Survey of Palestine. Jerusalem. Survey of Palestine. 1938. 71 × 61 cm. Available online: https://www.nli.org.il/en/maps/NNL_MAPS_JER002366984/NLI (accessed on 15 May 2023).
- Petitpierre, R. Historical City Maps Semantic Segmentation Dataset. Version 1.0. Zenodo
**2021**. [Google Scholar] [CrossRef] - Petitpierre, R.; Guhennec, P. Effective annotation for the automatic vectorization of cadastral maps. Digit. Scholar. Humanit.
**2022**, fqaq006. [Google Scholar] [CrossRef] - Petitpierre, R.; Kaplan, F.; di Lenardo, I. Generic Semantic Segmentation of Historical Maps. In Proceedings of the CEUR Workshop, CHR 2021: Computational Humanities Research Conference, Amsterdam, The Netherlands, 17–19 November 2021; pp. 228–248. [Google Scholar]
- Oliveira, S.A.; Seguin, B.; Kaplan, F. dhSegment: A generic deep-learning approach for document segmentation. In Proceedings of the 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), Niagara Falls, NY, USA, 5–8 August 2018; pp. 7–12. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Zhang, T.Y.; Suen, C.Y. A fast parallel algorithm for thinning digital patterns. Commun. ACM
**1984**, 27, 236–239. [Google Scholar] [CrossRef] - Harris, C.; Stephens, M. A Combined Corner and Edge Detector. Alvey Vision Conf.
**1988**, 15, 147–151. [Google Scholar] - Douglas, D.; Peucker, T. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Can. Cartogr.
**1973**, 10, 112–122. [Google Scholar] [CrossRef][Green Version] - Polylabel: A fast Algorithm for Finding the Pole of Inaccessibility of a Polygon. Available online: https://github.com/mapbox/polylabel (accessed on 30 May 2023).
- García-Castellanos, D.; Lombardo, U. Poles of inaccessibility: A calculation algorithm for the remotest places on Earth. Scott. Geogr. J.
**2007**, 123, 227–233. [Google Scholar] [CrossRef] - Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. JMLR
**2011**, 12, 2825–2830. [Google Scholar] - Breiman, L. Random Forests. Mach. Learn.
**2001**, 45, 5–32. [Google Scholar] [CrossRef][Green Version] - Zhu, J.; Rosset, S.; Zou, H.; Hastie, T. Multi-class AdaBoost. Stats. Interface
**2006**, 2, 349–360. [Google Scholar] - Friedman, J. Greedy Function Approximation: A Gradient Boosting Machine. Ann. Stats.
**2001**, 29, 1189–1232. [Google Scholar] [CrossRef] - Pydelatin. Python Bindings to ‘Hmm’ for Fast Terrain Mesh Generation. Available online: https://github.com/kylebarron/pydelatin (accessed on 30 May 2023).
- Garland, M.; Heckbert, P. Fast Polygonal Approximation of Terrains and Height Fields; Tech. Rep. CMU-CS-95-181; Carnegie Mellon University: Pittsburgh, PA, USA, 1995. [Google Scholar]
- Cadquery. A Python Parametric CAD Scripting Framework Based on OCCT. Available online: https://github.com/CadQuery/cadquery (accessed on 30 May 2023).
- Sugihara, K. Straight Skeleton Computation Optimized for Roof Model Generation. In Proceedings of the WSCG’2019—27 International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2019, Pilsen, Czech Republic, 27–31 May 2019; Volume 27, pp. 101–109. [Google Scholar]
- Vitalis, S.; Labetski, A.; Boersma, F.; Dahle, F.; Li, X.; Ohori, K.; Ledoux, H.; Stoter, J. CityJSON + Web = Ninja. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci.
**2020**, VI-4/W1-2020, 167–173. [Google Scholar] [CrossRef] - Vitalis, S.; Labetski, A.; Arroyo Ohori, K.; Ledoux, H.; Stoter, J. A data structure to incorporate versioning in 3D city models. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci.
**2019**, IV-4/W8, 123–130. [Google Scholar] [CrossRef][Green Version]

**Figure 1.** Illustration of the vectorization pipeline: (**a**) Output of the semantic segmentation of contours. (**b**) Thinned contours, step 1. (**c**) Connected components as color gradient, step 2. (**d**) Detection of corners (in yellow) and intersections (in red), steps 4 and 5. (**e**) Result of the vectorization (buildings in carmine). (**f**) Close-up of the resulting vector data in QGIS (some polygons, in yellow, are selected to show their points).

**Figure 2.** The plots are organized according to the target parameter: (**a**) On the top, results for the type of roof. (**b**) On the bottom, results for the number of floors. Each plot reports the confusion matrix, score, and computation time for each tested model. Rows represent the true value, while columns represent the predicted value. Values on the diagonal going from the top-left to the bottom-right corner are correctly predicted. The last confusion matrix in each of the two plots represents the result that we would obtain with the naive solution (baseline accuracy). A darker colour corresponds to a higher number of values.

**Figure 3.** Accuracy (in light blue) and calculation time (in pink), for each experiment (x-axis). The score is compared to the baseline accuracy. The lines at the top of the bars represent the standard deviation.

**Figure 4.** Mean classification accuracy and standard deviation for the various machine learning models, computed over 200 experiments. The x-axis shows the number of rows that were employed for training (in logarithmic scale). For each experiment, we picked n random samples from the full training dataset, and repeated the training process 200 times. Then, we computed the mean accuracy and standard deviation. We plot this against the naive solution to see the improvement provided by each method when the available training data are consistently reduced. The results obtained for the Adaptive Boosting model and the Gradient Boosting classifier are shown in Appendix A.

**Figure 5.** Illustration of the process followed to create the subtraction shape corresponding to each vertex. The top row illustrates what happens when the angle in B is convex, while the bottom row follows the situation where the angle is concave. The iterative process, using B as the middle point, follows these steps: (1) We select the three consecutive points (A, B, C) and the side segments joining them. (2) For each side segment (AB and BC), we calculate the length of the segments from A and C that are perpendicular to the side segments and intersect the bisector passing through B (AA', CC'). This length can be calculated by multiplying the length of the considered side by the tangent of $\beta/2$, where $\beta$ is the angle in B ($AA' = AB \cdot \tan(\beta/2)$). We employ the minimum of the two lengths as the offset distance. Note that when the angle is concave, we employ an arbitrarily large distance M, since self-intersection is not an issue in this case. (3) By applying the obtained length as the magnitude of $v_1$ and $v_2$ (vectors that are perpendicular to AB and BC), we retrieve the length of the bisector in B that should be used as an offset vector for the sides. However, we use the vectors $v_1$ and $v_2$ to trim this offset, and finally find the shape of our subtraction. (4) The subtraction volume is obtained through a loft operation. (5) The subtraction volume is finally employed against the initial extrusion of the roof base shape with a Boolean subtraction.
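The offset length $AA' = AB \cdot \tan(\beta/2)$ from step (2) can be verified numerically; the three points below form a hypothetical right-angled corner at B, so $\tan(\beta/2) = 1$ and the offset equals $|AB|$:

```python
import math

def offset_length(a, b, c):
    """AA' = |AB| * tan(beta/2), where beta is the interior angle at B."""
    ab = (a[0] - b[0], a[1] - b[1])
    cb = (c[0] - b[0], c[1] - b[1])
    cos_beta = (ab[0] * cb[0] + ab[1] * cb[1]) / (math.hypot(*ab) * math.hypot(*cb))
    beta = math.acos(cos_beta)  # interior angle at B
    return math.hypot(*ab) * math.tan(beta / 2)

# Right angle at B: tan(90deg / 2) = 1, so the offset equals |AB| = 2
a, b, c = (0.0, 2.0), (0.0, 0.0), (3.0, 0.0)
assert abs(offset_length(a, b, c) - 2.0) < 1e-9
```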

**Figure 6.** Illustration of the process of creating the base shape for gable roofs. (1) First, we detect the faces that are positioned along the perimeter and present three vertices (highlighted in pink). (2) Then, we create the solid shapes required to fill the gable roof, by finding its vertices and then creating the faces joining them. (3) We apply a Boolean union and obtain the desired shape.

**Figure 7.** Illustration of the generation of the shell for gable and hip roofs. By adding the shell (highlighted in pink), we push the LOD of our models from LOD 2.0 to LOD 2.1.

**Figure 8.** Detailed view of the LOD 2.1 model resulting from the application of the described methodology to every building of the dataset. Image rendered using Blender.

**Figure 9.** In the image, we can see part of the attributes that characterize a selected building (in yellow). The visualization was performed using the CityJSON web viewer Ninja [54].

**Figure 10.** Complete view of the nine temporal phases corresponding to our 4D model of the city of Jerusalem.

Hip | Gable | Flat | Domed |
---|---|---|---|
Base floor thickness | Base floor thickness | Base floor thickness | Base floor thickness |
Slope | Slope | Railing height | Railing height ^{1} |
Upper floor thickness | Upper floor thickness | Railing width | Railing width ^{1} |
Eaves overhang | Eaves overhang | | Dome horizontal radius (%) |
 | | | Dome vertical radius (%) |

(Total) | Number of Floors | Roof Type | Material | Start Year | End Year |
---|---|---|---|---|---|
Entries (10,833) | 8405 | 8607 | 8563 | 10,817 | 10,817 |
Percentage (100%) | 77.59% | 79.45% | 79.05% | 99.85% | 99.85% |

**Table 3.**Rows available for each of the two target parameters and number of rows resulting from the subdivision between training and test sets.

 | #Rows (Number of Floors) | #Rows (Roof Type) |
---|---|---|
Total (100%) | 8405 | 8607 |
Training set (60%) | 5043 | 5164 |
Test set (40%) | 3362 | 3443 |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Vaienti, B.; Petitpierre, R.; di Lenardo, I.; Kaplan, F.
Machine-Learning-Enhanced Procedural Modeling for 4D Historical Cities Reconstruction. *Remote Sens.* **2023**, *15*, 3352.
https://doi.org/10.3390/rs15133352
