Article

Quality Assessment of Digital 3D Models of Museum Artefacts from the Mobile LiDAR iPhone and Structured Light Scanners

1 Department of Computer Science, Lublin University of Technology, 36B Nadbystrzycka Str., 20-618 Lublin, Poland
2 Scientific-Experimental Museum-Laboratory, Samarkand State University named after Sharof Rashidov, 15 University Blv., Samarkand 703004, Uzbekistan
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(4), 2100; https://doi.org/10.3390/app16042100
Submission received: 2 January 2026 / Revised: 13 February 2026 / Accepted: 15 February 2026 / Published: 21 February 2026

Abstract

Creating digital 3D models of museum artefacts has been common practice for many years. Such models can be used for archiving, research, and marketing purposes, as well as to counteract various types of exclusion. A digital copy created with professional 3D scanners based on structured-light scanning (3D SLS) or terrestrial laser scanning technology requires expensive equipment, specialised postprocessing software, and a trained team. The introduction of mobile phones with Light Detection and Ranging (LiDAR) sensors and the development of appropriate open-access software have made it possible to generate digital 3D models with a phone. This study compares the quality of 3D models created with 3D SLS and mobile LiDAR technologies using the same three small museum artefacts from the Silk Road area, held by the Samarkand State University museum in Uzbekistan. They were digitised in 2017 and 2025. The results indicate that digital 3D models generated with an iPhone 16 Pro Max using Scaniverse LiDAR software are incomplete and thus less versatile; therefore, they cannot serve as archival models. However, their accuracy and quality (mesh density, size, and texture quality), as well as the speed with which they can be generated, make them ideal for marketing purposes and digital tourism.

1. Introduction

In a world torn by wars, terrorism, climate change, and increasing urbanisation, the protection of cultural heritage is a key element of our civilisation. Accurate digitisation and conservation of cultural artefacts have become inherent components of contemporary museum and heritage practice. Standard three-dimensional (3D) digitisation techniques utilise several technologies: terrestrial laser scanning (TLS) and Close-Range Photogrammetry (CRP) for architectural objects, as well as structured-light scanning (SLS) and CRP for small- and medium-sized objects. As numerous collections face constant threats of deterioration, environmental risks, and limited public access, the creation of precise 3D digital replicas enables both physical conservation and broader dissemination through virtual contexts. Studies [1,2,3,4,5,6] address issues related to the methodology of digitising and archiving cultural heritage in various situations and at varying levels of detail, while studies [7,8,9,10] present interesting practical solutions. Beyond contributing to conservation planning, such digital twins (also converted into 3D-printed copies) also enhance educational and exhibition possibilities, fostering new forms of engagement with cultural heritage. In [11,12,13], the authors conducted research on the functioning of platforms and their users, while in [14,15], the use of 3D printing was presented in the creation of exhibitions for the blind and in a cultural heritage game, examining the users’ gameplay.
For objects that form urban layouts and extensive engineering structures (fortified settlements, defence systems, and irrigation systems), Light Detection and Ranging (LiDAR) technology is commonly used, e.g., in archaeological exploration [16,17,18]. Historically, LiDAR technology has been used almost exclusively for aerial mapping and topographic surveys, with the sensors carried aboard aircraft or Unmanned Aerial Vehicles (UAVs). Airborne LiDAR instruments can survey extensive land areas at an extremely high level of detail, enabling Digital Elevation Model (DEM) creation, forest canopy characterisation, and the planning of urban infrastructure. Its ability to penetrate the vegetation canopy and record elevation measurements of the ground beneath made LiDAR uniquely revolutionary in the geospatial and environmental sciences.
LiDAR is an active form of remote sensing that determines the distance to a target surface by transmitting laser pulses and measuring the time each pulse takes to travel back to the sensor. From the measured travel time and the known speed of light, LiDAR derives the precise distance between the sensor and the object. By repeating this measurement at high speed over millions of points, LiDAR creates dense 3D models of surfaces at high spatial resolution.
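The time-of-flight relation described above can be sketched in a few lines. A minimal illustration (the function name and the example pulse time are ours, not taken from any vendor SDK):

```python
# Time-of-flight distance estimation, as used by LiDAR sensors.
# The pulse travels to the target and back, so the one-way distance
# is half of the total path length covered at the speed of light.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the target from a pulse's round-trip travel time."""
    return C * round_trip_time_s / 2.0

# A pulse returning after roughly 33 ns corresponds to a target about
# 5 m away (the stated maximum range of the iPhone LiDAR sensor).
print(round(tof_distance(33.356e-9), 3))
```

Repeating this computation for millions of returns per second is what turns a single ranging principle into a dense point cloud.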
The conventional LiDAR system comprises three basic components: (i) Laser emitter, which generates the light pulses, typically in the near-infrared range (e.g., 905 or 1550 nm). (ii) Photodetector or receiver, which captures the reflected light and records the time-of-flight information; Avalanche PhotoDiodes (APDs) or single-photon detectors are generally used because of their high sensitivity. (iii) Inertial Measurement Unit (IMU) and Global Navigation Satellite System (GNSS), which provide the precise position and attitude of the LiDAR sensor, enabling accurate georeferencing of the recorded point cloud.
Recent developments in consumer-grade depth sensing technologies have opened up new possibilities for cultural heritage documentation. Apple's more advanced products, the iPhone Pro models (from the iPhone 12 Pro) and the iPad Pro, have been equipped with a compact Solid-State LiDAR (SSL) sensor since 2020, capable of generating point clouds and meshes through direct time-of-flight (dToF) measurement. Although designed primarily for Augmented Reality (AR) applications, the sensor's potential for rapid and low-cost 3D documentation has attracted broad attention from researchers across various fields.
This easy access to the technology has allowed for the rapid development of e-tourism. Scanners have been used to digitise objects that can later serve as a way for people to discover the scanned locations. Numerous studies have sought to maximise the performance of LiDAR scanning for the geosciences, and many papers agree on its usefulness in this field [19,20,21].
This study aims to extend the research of Barszcz et al. (2021) [22] by exploring the aptness of Apple’s LiDAR sensor for digitisation of museum objects. Through comparative testing of the iPhone 16 Pro Max mobile LiDAR system with Scaniverse software against reference models obtained with 3D SLS techniques, this study quantifies the accuracy, efficiency, and visual fidelity of 3D digitisation based on a mobile LiDAR system. The possibility of obtaining a so-called complete digital 3D model and the suitability of the generated objects for quick sharing on websites are also examined.
The authors hope that the paper will contribute to the current discourse of democratising 3D documentation and fostering sustainable, low-budget digitisation workflows for cultural heritage institutions.

2. Study Context

Among current documentation techniques, SLS and Structure-from-Motion (SfM) photogrammetry have been widely applied to generate high-resolution 3D models of artefacts. Barszcz et al. [22] provided a comparative analysis of the two techniques, demonstrating that while both yield accurate results, they differ in terms of workflow efficiency, cost, and quality of surface representation. Their research, addressing archaeological and museum objects, illustrated the trade-offs between metric accuracy and usability, against the growing demand for faster, less expensive, and more flexible digitisation techniques.
Since 2020, Apple Inc. has incorporated LiDAR technology in consumer-grade devices, such as the iPad Pro and Pro models of the iPhone (from version 12), to provide spatial awareness and depth sensing. The onboard LiDAR scanner operates on the same time-of-flight principle as standard systems, although in a compact form designed for close-range sensing. The technology, in addition to various other useful applications, facilitates 3D object scanning. In the Apple ecosystem, LiDAR is central to the Augmented Reality Kit (ARKit) software framework. ARKit, in conjunction with cameras and specialised software, allows point cloud and texture data to be integrated to generate photorealistic 3D mesh models.
These properties of the iPhone LiDAR system, combined with the availability of the device and software, have led scientists and specialists in various industries to adopt it as an affordable, mobile alternative to a variety of conventional and commercial solutions.
Research on the LiDAR technology offered by Apple Inc., and comparisons of its results with other technologies, has focused mainly (but not exclusively) on the following application areas:
  • Criminology;
  • Architecture and construction;
  • Agriculture and forestry;
  • Cultural heritage and archaeology.

2.1. LiDAR in Criminology

Ref. [23] presents the identification of bloodstain patterns using Recon-3D software, which uses the iPhone LiDAR sensor combined with video data to create three-dimensional point clouds of crime scenes. The results, compared in CloudCompare software against a reference point cloud from a FARO Focus S350 scanner, showed that on average 95% of the Recon-3D points fell below a 3.6 mm deviation threshold with respect to the reference. Ref. [24] presents the results of research using LiDAR to reconstruct 3D fingerprint evidence through Fused Deposition Modelling (FDM) printing based on digital models generated with five different 3D scanning technologies (laser scanning, structured-light scanning, smartphone photogrammetry, a Microsoft Kinect v2 RGB-D camera, and the iPhone 13 Pro LiDAR sensor with the Polycam app). The largest deviations in the print (approximately 2.5 mm), compared to laser scanning taken as the reference standard, were obtained from the 3D models created with the iPhone LiDAR system.
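The CloudCompare-style metric cited above, i.e., the share of test points whose distance to the nearest reference point falls below a threshold, can be illustrated with a small sketch. The point sets and helper names below are synthetic examples of ours; real tools accelerate the nearest-neighbour search with k-d trees:

```python
# Cloud-to-cloud deviation check: for each point of a test cloud, find the
# distance to the nearest reference point and report the fraction of points
# below a tolerance. Brute-force version for illustration only.
import math

def nearest_distance(p, reference):
    """Distance from point p to its nearest neighbour in the reference cloud."""
    return min(math.dist(p, q) for q in reference)

def fraction_below(test_cloud, reference, threshold):
    """Fraction of test points within `threshold` of the reference cloud."""
    hits = sum(1 for p in test_cloud if nearest_distance(p, reference) <= threshold)
    return hits / len(test_cloud)

# Synthetic example: a reference grid (units in metres) and a test cloud
# perturbed by ~2.2 mm, i.e., within a 3.6 mm tolerance.
reference = [(x * 0.01, y * 0.01, 0.0) for x in range(10) for y in range(10)]
test = [(x + 0.002, y, z + 0.001) for x, y, z in reference]

print(fraction_below(test, reference, threshold=0.0036))  # threshold = 3.6 mm
```

With every synthetic test point displaced by about 2.2 mm, the reported fraction here is 1.0; a real scan comparison yields values such as the 95% reported in [23].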

2.2. LiDAR in Architecture and Construction

In [25], iPhone LiDAR scanning using the Scaniverse application was compared with CRP data, using the example of the deformation of plastic-concrete specimens with fractal-based cross-sections. Depending on the tested sample, the differences in the obtained measurements did not, in most cases, exceed 7%. Another application of the LiDAR system is presented in [26]. The possibility of using the LiDAR system in the iPhone 12 Pro (as well as the iPad Pro) for 3D scanning of interior rooms and the exterior surfaces of buildings and their surroundings was investigated using the 3D Scanner App. Reference data came from TLS (FARO Focus X 330), which was also used to create 3D Building Information Modelling (BIM) models manually. The test results showed that the iPhone LiDAR system is suitable for mapping small environments (a maximum of two rooms), with deviations not exceeding 6 mm. Similar studies were conducted in [27]. That work examined the results of 3D scanning with the iPhone 13 Pro Max LiDAR system and the Scaniverse application for a 6 × 13 m room. The results were compared with those from CRP and a Leica TCR 405 ultra electronic total station. The accuracy of the LiDAR model was 4–5 cm, which was considered too low for object documentation. However, given its speed and low cost, it can be used to approximate the size or volume of an object and to make preliminary decisions before professional engineering work is undertaken.

2.3. LiDAR in Agriculture and Forestry

A comparison of the scanning results of the LiDAR system in an iPhone 12 Pro with those from CRP and a standard pin profiler was presented in [28]. It was shown that the LiDAR system in the iPhone 12 Pro can be used as an alternative tool for measuring soil surface roughness, especially in rough, wide, and inaccessible areas. A comparison of the LiDAR system in the iPhone 14 Pro with the 3D Scanner App against the professional Trimble TX8 and GeoSLAM ZEB Horizon devices for measuring tree diameter under in situ conditions was presented in [29]. Although the obtained Diameter at Breast Height (DBH) estimation rates were approximately 78%, with a Root Mean Squared Error (RMSE) of less than 20%, it was found that the device can be used for trial scanning of small forest areas owing to its low cost and ease of use. In [30], the integration of aircraft-based LiDAR, handheld LiDAR based on Simultaneous Localisation and Mapping (SLAM) technology, and the iPhone 13 Pro Max LiDAR with the 3D Scanner App for post-forest road network inspection was investigated. Positive results of the integration process were obtained, indicating that the choice of technology should take into account trade-offs between accuracy, cost, and operational constraints.

2.4. LiDAR in Cultural Heritage and Archaeology

The scanner is typically used in rapid 3D scanning of complex architectural elements such as historical façades, internal monastery spaces, and even complex underground archaeospeleological areas where Global Positioning System (GPS) information is unavailable [31,32,33,34,35,36]. These applications also include accurate delineation of what appears to be damage, such as cracks and wood rot, in historical architectural woodwork.
In [31], an iPhone 14 Pro Max LiDAR system (on a monopod mount with two LED lamps) with the Polycam Pro application was used on the underground portion (approximately 25 × 7 m) of the Temple of Zeus in the ancient city of Aizanoi in Turkey. Compared to baseline measurements, a 3D model was obtained with an accuracy of 3 cm. In [32], an iPhone 14 Pro Max LiDAR system and Scaniverse v2.1.4 software in area data acquisition mode were used to 3D scan archaeological-speleological objects, i.e., a water cistern in the Flavian amphitheatre in Pozzuoli and the Augustan aqueduct of Campania in the Fuorigrotta Coroglio section, built in ancient Rome. These studies were a creative continuation of the studies presented in [33,34]. Verification of the LiDAR measurements against direct measurements showed that the difference was only a few centimetres, which is sufficiently accurate for archaeological speleology. In [35], the iPhone 13 Pro LiDAR system was tested as a measurement element for establishing control points for UAV photogrammetry, using the example of megalithic archaeological sites in southern Iberia (Spain). The results show a deviation of ±5 mm in the capture of Ground Control Points (GCPs) compared to reference data obtained from a Leica Flexline TS02 total station. This was considered sufficient for quick preliminary studies. A LiDAR scan was performed on an iPhone 15 Pro mounted on a three-axis gimbal (DJI Osmo Mobile) with the 3D Scanner App v2.1.3 software to digitise the rooms of the 1940 Shuangmei Mansion building [36]. The data were integrated with a point cloud acquired from a DJI Mini 4 Pro camera, producing a 3D model with an overall relative error of less than 5%. Research has also been conducted on the practical integration of data from the iPhone LiDAR system with photogrammetry data obtained from a UAV or CRP.
It should be noted that, regardless of the application used, the maximum range of the iPhone LiDAR system is 5 m. In [37], data from a UAV were integrated with data from the iPhone 13 Pro system using the 3D Scanner App at the medieval ruins of the Sourp Magar monastery, located on the forested slopes of the Kyrenia Range (Cyprus). The obtained data were verified against data from TLS. Most of the points from the studied LiDAR system were located within 10 cm of the TLS points and BIM surfaces. Successful integration of scanned data from the Vaulted Tomb, the Truncated Cone Tomb, and an ancient jug (15 cm high) from Iran was presented in [38]. The height of the tombs exceeded 5 m. The total scanning and 3D model generation time using the iPhone was 8 times shorter for the tombs and 24 times shorter for the jug compared to TLS.
Very few studies have examined iPhone LiDAR scanning of small- and medium-sized museum artefacts. One study employed an iPhone 12 Pro LiDAR system with Scaniverse v2.1.2 software to precisely detect and diagnose visible damage in wooden elements of objects from the Damiaojiao district of Datong, Shanxi Province (China), built from the late Qing Dynasty (19th century) to the mid-20th century [39]. Both dismantled, scattered wooden elements and existing structural elements were scanned, primarily to measure geometric features and analyse cracks in beam surfaces. Because the beam cross-sections did not exceed 40 cm, such elements can be considered small- or medium-sized objects (allowing for their length). The performance of the LiDAR system used was found to be essentially equivalent to that of professional 3D scanning devices in terms of accuracy and point cloud quality. In [40], a 3D LiDAR system was used on an iPhone 14 Pro, with the 3D Scanner App software, to scan a wooden box with an inlaid and carved lid (approximately 15 × 10 × 5 cm). The study was conducted using the three available techniques: ‘PHOTOS’, ‘LIDAR normal’, and ‘LIDAR advance’. The lowest error rate was observed for the ‘PHOTOS’ technique, and the differences between the dimensions of the digital 3D model and manual measurements of the object did not exceed 3 mm. The quality of the final model was influenced both by reflections caused by excessive light falling on the object and by dark areas caused by poor lighting. In [41], a LiDAR system on an iPhone 13 Pro Max and an iPad Pro with the Scaniverse application was used to scan 5 sculptures ranging in size from 28 to 73 cm. The results were compared with CRP. Across all test objects, scale deviations between the LiDAR-scale alignment models and the reference models remained within ±3.4%, with an overall mean deviation of less than 1.9%.
The feasibility of using the iPhone 13 Pro LiDAR system with the Polycam app to create digital twins for digital exhibition on the Sketchfab platform is presented in [42]. A wooden box (approximately 30 × 10 × 15 cm) was scanned, and the data acquisition and processing procedures were compared with those of the Mantis F6 scanner (LiDAR) and the Nikon Z5 camera (photogrammetry). Although the number of points comprising the digital model (initial and postprocessed) was reported, these data were not discussed, and the file sizes of the point cloud and mesh models were not presented.

2.5. Applications for Data Acquisition and Postprocessing in the LiDAR iPhone System

A large number of different data acquisition and processing apps supporting the iPhone LiDAR system are available on the market. The selection of these apps for a given application is crucial.
The results of a comparison of the performance of three apps, SiteScape, EveryPoint, and the 3D Scanner App (selected from 12 available on the market), for the iPhone 12 Pro LiDAR system are presented in [43]. The tests were conducted on three objects: a sculpture measuring approximately 2 × 1 × 1 m, a richly decorated hall (approximately 9 × 8 × 7 m), and the interior corner of a building (approximately 18 × 3 × 6 m). Each tested app produced different results, demonstrating the key role of the software component. The SiteScape app allowed more points to be acquired than the other apps, and the 3D Scanner App did not increase the number of model points when the operator repeated the scanning process, because it reduces noise in the point cloud. Comparative studies (using data from the TLS Faro Focus X 330 as a benchmark) showed that differences in size did not exceed a few centimetres, and the resulting 3D models were characterised by a high level of detail. None of the tested applications allowed the data-processing phase of digital 3D model generation to be customised.
The iPhone 12 Pro LiDAR system, equipped with three different software programs, was studied in [44]: Polycam, Scaniverse, and the 3D Scanner App. Research was conducted to optimise the selection of scanning parameters, such as distance from the object, the angle of the sensors relative to the object’s surface, scanning time, and lighting quality. The studies were conducted on three small objects: a rectangular box (35.4 × 25.0 × 5.7 cm) with uniform colours on all faces, a block consisting of three Rubik’s Cubes (with different edge lengths: 5.2, 5.6, and 5.0 cm), and a statuette of a kneeling Buddha (19.8 cm high). The obtained results showed that the most important scanning parameters were short distance from the object, high scanning speed, and natural lighting. This was confirmed by performing 3D scans of the Miguel Maraña gate of Santa Ana Church in Seville, Spain.
The results of testing four applications, Polycam, SiteScape, 3D Scanner App, and Scaniverse, supporting the iPhone 13 Pro LiDAR system are presented in [45]. Five objects (a vault, a column, a façade, a small fountain, and a violin) were scanned, varying in size, shape, level of detail, surface type, and texture. The resulting digital models were compared with models generated from data acquired with TLSs (Faro Focus 3D and Leica HDS 7000). Polycam performed best with the vault and façade, while Scaniverse performed best with the column, fountain, and violin. SiteScape and the 3D Scanner App failed completely with the violin. The results indicate that no universal application is equally effective at scanning all test objects.
The effectiveness of the Apple LiDAR sensor in the iPhone 13 Pro was demonstrated in architectural surveys (the interior of a room approximately 35 × 9 m in size), which can be useful for scanning historic buildings [46]. Four applications were tested: Polycam, SiteScape, 3D Scanner App, and Scaniverse. The results were verified against TLS data. The study showed that the iPhone 13 Pro LiDAR can generate geometrically consistent models for cultural heritage resources of medium complexity. The Scaniverse application produced an RMSE value of as much as 55 cm, while the other applications remained below 10 cm. The SiteScape application is less practical for scanning large areas than the others and only allows point clouds to be exported in the .e57 format. Useful information regarding these applications can also be found in [37,46].
Study [47] compares two applications, Pix4DCatch v1.18.0 and Scaniverse v2.0.3, supporting the iPhone 12 Pro LiDAR system and CRP using a specialised DJI Osmo Pocket video camera with a gimbal, with reference to data obtained from TLS, based on the example of archaeological excavations in the Kerava region in Finland. The scanning time using an iPhone was approximately two times shorter than with the other technologies. A comparison of point clouds performed in CloudCompare software showed that the data from both applications were less precise than that from CRP and that the data from the Scaniverse application were slightly worse.
Taken together, these studies attest to a technological movement toward increasingly open and flexible documentation methods, bridging the gap between high-end metrological tools and consumer-grade devices. However, while LiDAR-based scanning of buildings and outdoor monuments has been the subject of much research, it remains relatively rare for small-scale museum objects. The complex geometries, shiny surfaces, and fine details typical of museum exhibits present unique difficulties for consumer-grade sensors. The present study focuses on this underexplored use case, using the open-access Scaniverse software to scan small objects and export the resulting models. Few research papers have addressed the use of this application for museum artefacts and their particular challenges, making it an interesting topic to explore.
Small museum artefacts are usually not permanently fixed in place. In many cases, the part of an object that is not clearly visible, or is partially or completely covered, is also of interest, as with a candlestick, cup, or plate. Unlike professional scanners, iPhone LiDAR scanning can acquire data for only a single object orientation. Therefore, it is impossible to create a so-called Complete Digital 3D Model (CD3DM), which would allow such a digital twin to hover in digital space and be viewed from all sides. Published works have not attempted to combine two digital models of the same object to obtain a CD3DM. Furthermore, there is no research on the suitability of the resulting 3D models for sharing via online portals (their size, texture quality, and colour fidelity to the originals).

3. Method and Materials

3.1. Mobile LiDAR iPhone System

The LiDAR acquisition process yields a point cloud: a very dense set of XYZ coordinates representing the object or area being scanned. Additional information, such as intensity values (reflectivity) and colour (where RGB cameras are used), can further enrich the dataset for both visual interpretation and quantitative analysis.
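As an illustration of such an enriched dataset, the sketch below writes a few synthetic points with intensity and RGB attributes to an ASCII PLY file (one of the export formats listed in Table 1); the coordinates and values are invented for the example:

```python
# Writing a minimal point cloud with intensity and colour attributes
# to an ASCII PLY file. All point values below are synthetic.

points = [
    # x, y, z (m),  intensity, r, g, b
    (0.012, 0.034, 0.500, 200, 181, 131, 90),
    (0.013, 0.035, 0.501, 180, 176, 128, 88),
]

header = "\n".join([
    "ply",
    "format ascii 1.0",
    f"element vertex {len(points)}",
    "property float x",
    "property float y",
    "property float z",
    "property uchar intensity",
    "property uchar red",
    "property uchar green",
    "property uchar blue",
    "end_header",
])

with open("cloud.ply", "w") as f:
    f.write(header + "\n")
    for x, y, z, i, r, g, b in points:
        f.write(f"{x} {y} {z} {i} {r} {g} {b}\n")
```

The per-vertex property lines in the header are what let PLY carry colour and reflectivity alongside geometry, which is why the format is popular in heritage recording.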
Although Apple's hardware provides the basis for depth sensing, the usefulness of the LiDAR scanner in these devices is typically realised through mobile applications such as Scaniverse. This software is a platform for scanning, processing, and exporting high-quality 3D models directly from an Apple device. Scaniverse is designed to create accurate, dense point clouds and optimised 3D mesh models almost instantly, removing the need for years of expertise in producing digital models of objects, architectural elements, and even entire rooms. Importantly, the software also includes dedicated algorithms for data clean-up and geometric accuracy, producing watertight coloured mesh models suitable for CAD and BIM software.

3.2. Scanning Procedure in the Scaniverse Application

LiDAR technology with Scaniverse software available on smartphones allows for a single scan of an artefact by enabling continuous recording (Figure 1). Object size can be selected: small object, medium object, or large object/area. Sensors mounted in the device collect surface information, while cameras collect texture data. It is possible to pause the recording by pressing the ‘Pause’ button and resume digitisation by pressing the ‘Play’ button. However, it should be noted that this action does not allow for repositioning the scanned object on the digitising stand (Figure 1d).
After stopping the recording, the program generates a digital 3D mesh model. During processing, the user can select one of the following modes: speed (10 mm resolution), area (5 mm resolution, suitable for rooms and spaces), and detail (for textured objects). After the data are processed, the result is displayed directly on the smartphone screen. If the result is not satisfactory, the user can go back to the data processing mode selection and change the initial selection. In the Detail mode used, the program performs the following operations in sequence: ALIGNING FRAMES, COMPUTING DEPTH, GENERATING POINT CLOUD, BUILDING MESH, and TEXTURING. After generating a three-dimensional scene, it is saved in the 3D model library. To obtain only the scanned object, the scene is edited by cropping unnecessary areas (the ‘Crop’ button). Other possible editing operations on the generated 3D model include FILTER, EXPOSURE, CONTRAST, and SHARPNESS (Figure 2).
A finished 3D model can be shared (SHARE), for example, by exporting it in various formats (Table 1) and saving it to the smartphone file system. The user can also generate videos along predefined camera paths (Figure 3a), perform measurements on digital 3D models (the MEASURE option) (Figure 3c), and share them with other users, for example, on the Sketchfab platform.
Table 1. Available formats in the Scaniverse application.
File Format | Properties
.fbx | Autodesk's proprietary format that supports meshes, textures, materials, and animations. Widely used in visualisation, AR/VR, and gaming engines. Binary-compact but not an open standard.
.obj | Simple, highly compatible 3D mesh format. Supports geometry and textures (through an .mtl file), but no metadata or advanced scene information. Recommended for general-purpose 3D interchange.
.glb | Open standard for compact 3D asset exchange; supports geometry, textures, and metadata. Highly efficient and ready for web use and fast visualisation.
.usdz | Scene-based format developed by Pixar and Apple. Natively supports materials, animations, and textures. Optimised for AR apps and iOS-based software.
.stl | Geometry-only mesh format used mostly for 3D printing and rapid prototyping. Does not include colour, texture, or metadata.
.ply | Common in academic literature and in the recording of cultural heritage sites. Stores mesh or point-cloud data together with colour and other per-vertex attributes.
.las | LiDAR industry standard format for point clouds. Natively supports intensity, RGB, and GPS time, as well as classification metadata.
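Which of these formats to choose depends on the downstream use. A minimal sketch of such a decision, read off the table above (the mapping and names are illustrative, not part of the Scaniverse API):

```python
# Choosing an export format based on intended use, following the format
# properties summarised in Table 1. This mapping is our illustrative
# reading of the table, not a feature of any scanning application.

FORMAT_BY_USE = {
    "3d_printing": ".stl",       # geometry only, no colour or texture
    "web_viewer": ".glb",        # compact open standard, web-ready
    "ar_ios": ".usdz",           # optimised for Apple AR apps
    "heritage_archive": ".ply",  # per-vertex colour, common in research
    "point_cloud_gis": ".las",   # LiDAR industry standard for point clouds
    "game_engine": ".fbx",       # meshes, materials, and animations
    "generic_exchange": ".obj",  # simple and widely compatible
}

def pick_format(use_case: str) -> str:
    """Return the table's suggested extension for a given use case."""
    return FORMAT_BY_USE[use_case]

print(pick_format("web_viewer"))
```

For museum digitisation, this suggests .ply for archival point data and .glb or .usdz for online and AR presentation.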
Figure 3. Operations on the digital 3D model: (a,b) principles from the video of object no. 3 (Table 2), (c) measurement of object no. 2 (Table 2).
Table 2. Objects from the Scientific-Experimental Museum-Laboratory, Samarkand, Uzbekistan.
No. | Name/Description | Object Size (Bounding Box) in mm
1 | Glazed ceramics. Registan. XV century. Used to eat hot and cold dishes. | 99 × 85 × 24
2 | Pitcher (Flask). Ceramics. Registan. XII–XIII century. Used to store wine and other drinks. | 108 × 97 × 178
3 | Ceramics. Afrasiyab. X–XI century. Used to store and transport water and drinks. | 180 × 210 × 204

3.3. Structured-Light 3D Scanning

Scanners using SLS technology emit a light pattern from their own source (various patterns varying over time); they are therefore active devices. The geometric shape of the analysed surface is captured by comparing the reflected light with the matrix of the original pattern and converting the observed deformations into depth. Scanning devices also have cameras and detectors that allow them to calculate the distances between surface points in the field of view and to capture information about the surface's colour and texture. These scanners are difficult to use outdoors, as natural sunlight can be too strong for the projected pattern to remain sufficiently visible. Artec scanners (Artec 3D, Senningerberg, Luxembourg), of which the Spider model was used by the authors of this paper, are characterised by relatively high data acquisition speeds and, despite being handheld (weighing 0.85 kg), offer high measurement accuracy (up to 0.05 mm) and 3D resolution (up to 0.1 mm). They require neither calibration, which significantly speeds up work, nor positioning markers, making them ideal for scanning museum artefacts. The blue light emitted during scanning is harmless to delicate, old, and valuable objects. The device must be connected to a computer (recommended requirements: i7, 16 GB RAM, NVIDIA GeForce 400 series [22]).
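The pattern-deformation-to-depth conversion described above amounts to triangulation. A simplified pinhole-camera sketch (all numbers are illustrative; real SLS devices obtain these quantities through calibration):

```python
# Structured-light depth by triangulation: a pattern projected from a source
# offset by a known baseline shifts on the camera image in proportion to
# surface depth. Simplified pinhole model; values are illustrative only.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic triangulation relation: z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: a 1400 px focal length, an 8 cm projector-camera baseline, and a
# 280 px observed pattern shift place the surface 0.4 m from the scanner.
print(depth_from_disparity(1400.0, 0.08, 280.0))
```

Because depth is inversely proportional to the observed shift, small disparity errors at long range translate into large depth errors, which is one reason SLS scanners such as the Artec Spider are optimised for close-range work.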

3.4. Description of the Objects of the Experiment

The Scientific-Experimental Museum-Laboratory (SEML) is a facility of Samarkand State University in Samarkand, Uzbekistan. The objects collected there come from the Samarkand region, whose settlement history dates back several thousand years, and the city’s history spans over 2500 years. The city’s name has changed over the centuries. Some of the exhibits come from Afrasiyab—an ancient trading city located on the Silk Road, known as early as the 5th century BC, which was destroyed by the invasion of Genghis Khan in 1220 [48]. Currently, the excavation site, covering an area of nearly 200 hectares, is located within the administrative boundaries of Samarkand. Selected exhibits from the SEML museum are presented as 3D models in a digital exhibition on the website [49]. Three objects were selected for comparative analysis of digital 3D models, presented in Table 2.

3.5. Comparison of Data Acquisition Methods and Creating 3D Models

Using equipment operating in two different technologies requires the user to understand the basic operating principles of these devices. Assuming that modern museum professionals will have widespread access to smartphone versions with sensors capable of generating digital 3D models, several additional comments have been added to the description of scanning using the LiDAR system. A summary is provided in Table 3.

4. Results

4.1. Digital 3D Models with 3D SLS and Postprocessing

The objects listed in Table 2 were scanned with the Artec Spider scanner during the First Scientific Expedition of the Lublin University of Technology to Central Asia, carried out by a team from the Institute of Computer Science in late May and early June 2017 [49]. The objects were placed on a manually rotated plate. Each object was scanned twice; the change in object position was aimed at acquiring the surface not visible in the first position (Figure 4).
After postprocessing both scans, CD3DM models of individual objects were obtained, which means that they can be viewed from all sides (the lack of visibility of some parts of the surface from the first scan was compensated for by matching the point cloud from the second scan).

4.2. Digital 3D Models with LiDAR System

Scanning using LiDAR technology available in the iPhone 16 PRO MAX smartphone was performed in May 2025 during the 10th Scientific Expedition of the Lublin University of Technology to Central Asia by a team of employees of the Department of Computer Science of the Faculty of Electrical Engineering and Computer Science. Thanks to nearly 10 years of scientific collaboration with the Scientific-Experimental Museum-Laboratory, it was possible to gain access to museum exhibits, including the same museum artefacts scanned in 2017. Digitisation was performed in situ by setting up a temporary scanning station. It was crucial to provide 360-degree access to the object placed on a table. The smartphone operates as a standalone device, so it did not need to be connected to a power source (Figure 5).
During a third visit to the museum, 23 artefacts were scanned without recharging the smartphone. Access to selected objects was limited to a single session, so the scanning process for a single artefact took approximately 60 to 90 s. This undoubtedly resulted in the acquisition of redundant data, ensuring the generation of good-quality 3D models. It should be noted that generating the final 3D model consumes more smartphone battery power than data acquisition. An unconventional artefact support was sometimes used during scanning (Figure 6a; ceramic bowl). Digital models of the scanned objects are shown in Figure 6.
Object no. 2 (Table 2) was scanned twice in different positions on the stage to test the possibility of assembling two independent mesh models (Figure 7) into CD3DM.

4.3. Comparison of SLS and LiDAR Technologies

The 3D model comparison was conducted both qualitatively and quantitatively. First, the three-dimensional models generated by the two technologies were compared by compiling data from all scanned museum artefacts in the form of digital mesh models (Table 4). For the models obtained using SLS technology, two versions were provided. The base models used 4096 × 4096-pixel textures with a horizontal and vertical resolution of 96 dpi, saved in .jpg format, for surface mapping. To make the digital 3D models of the scanned objects available online, the number of vertices and faces and the size of the mapped textures were reduced. This decreased the overall model size by a factor of 2.5 to 7.
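The vertex and face reduction mentioned above was performed in the scanner's own software. Purely as an illustration, the idea behind one common simplification scheme, grid-based vertex clustering, can be sketched as follows; the point data and cell size are hypothetical, and real decimators also re-index faces and preserve texture coordinates.

```python
from collections import defaultdict

def cluster_vertices(vertices, cell=1.0):
    """Vertex-clustering simplification sketch: all vertices falling into the
    same grid cell are replaced by their centroid. In a full pipeline, faces
    are then re-indexed and degenerate faces dropped."""
    cells = defaultdict(list)
    for v in vertices:
        key = tuple(int(c // cell) for c in v)
        cells[key].append(v)
    simplified = []
    for pts in cells.values():
        n = len(pts)
        simplified.append(tuple(sum(p[i] for p in pts) / n for i in range(3)))
    return simplified

dense = [(x * 0.1, 0.0, 0.0) for x in range(100)]  # 100 vertices along 10 mm
coarse = cluster_vertices(dense, cell=1.0)
print(len(dense), len(coarse))  # 100 10
```

Larger cells give stronger reduction at the cost of detail, which is the same trade-off as between the base and dissemination versions in Table 4.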

5. Discussion

5.1. Three-Dimensional Model Size Comparison

The results in Table 4 indicate that the file sizes of 3D models generated with LiDAR technology and exported to .obj format are between 34% and 54% smaller than those of the baseline models obtained with SLS technology, although the LiDAR models are larger than the reduced SLS models. Owing to their size, LiDAR models are suitable for immediate sharing on websites and mobile devices.
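The quoted percentages follow directly from the file sizes in Table 4; for example, for object no. 1 (8.44 MB SLS base model vs. 5.54 MB LiDAR model):

```python
def size_reduction(base_mb: float, other_mb: float) -> float:
    """Relative size difference of a model vs. a baseline, in percent."""
    return (1 - other_mb / base_mb) * 100

# Object no. 1 from Table 4: 8.44 MB (SLS base) vs. 5.54 MB (LiDAR).
print(round(size_reduction(8.44, 5.54)))  # 34
```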

5.2. Texture Comparison

The analysis of the data in Table 4 shows that the LiDAR system generates larger textures (8192 × 8192 or 8192 × 4096 px) than SLS, even compared with the baseline models (4096 × 4096 px), and that their fragmentation and arrangement are completely different. A comparison of the colours of the textured objects indicates that the self-emitted light of SLS scanners guarantees better colour reproduction than scanning with the LiDAR system under insufficient lighting (Figure 8). When reference images of the scanned object are available, proper editing of the texture file can significantly improve the final visualisation (Figure 8e,f).
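In this study the correction was done by editing the texture file against reference images. As an illustration of a simple automatic alternative (not the method used here), a grey-world colour balance scales each channel so the mean colour becomes neutral grey; the pixel values below are hypothetical.

```python
def gray_world(pixels):
    """Grey-world colour correction sketch: scale each RGB channel so that
    the image's mean colour becomes neutral grey."""
    n = len(pixels)
    means = [sum(p[i] for p in pixels) / n for i in range(3)]
    grey = sum(means) / 3
    gains = [grey / m for m in means]
    return [tuple(min(255, round(p[i] * gains[i])) for i in range(3)) for p in pixels]

# A warm-tinted (yellowish) patch, as might result from scanning under
# insufficient indoor lighting.
patch = [(200, 180, 120), (210, 190, 130)]
print(gray_world(patch))
```

After correction, the three channels of each pixel are nearly equal, removing the colour cast while preserving brightness differences.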

5.3. Comparison of 3D Model Meshes

A comparison of the meshes forming the 3D models was performed using object no. 1 as an example. The 3D model in the dissemination version (SLS) was selected because, as shown in Table 4, it has an almost identical number of faces to the LiDAR model (23,972 vs. 23,980, respectively). Figure 9a,b show views of the CD3DM model obtained from SLS scanning, while Figure 9c shows the incomplete LiDAR model. A comparison of the meshes forming the surfaces of the digital models shows that the SLS mesh is more uniform, with faces of similar size, unlike the LiDAR mesh, in which some areas contain faces 2–3 times larger than the others (Figure 9d vs. Figure 9e). Overall, the SLS mesh reflects the object's details better than the LiDAR mesh. The comparison leads to the conclusion that the iPhone LiDAR system generates less uniform meshes that represent the surface shape of the scanned object less precisely.
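The observed 2–3× spread in face sizes can be quantified by computing triangle areas and their max/min ratio; a minimal sketch with hypothetical triangles (not the actual mesh data) follows.

```python
import math

def tri_area(a, b, c):
    """Area of a 3D triangle via the cross-product magnitude."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def area_spread(faces):
    """Max/min face-area ratio: close to 1 for a uniform (SLS-like) mesh,
    2-3 or more for an irregular (LiDAR-like) one."""
    areas = [tri_area(*f) for f in faces]
    return max(areas) / min(areas)

uniform = [((0, 0, 0), (1, 0, 0), (0, 1, 0)), ((1, 0, 0), (1, 1, 0), (0, 1, 0))]
print(area_spread(uniform))  # 1.0
```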

5.4. Evaluating 3D Model Quality After Creating a Complete Model

Creating a CD3DM model using LiDAR technology requires merging two or more independent 3D models. This can be accomplished by exporting the generated models as meshes in .obj format or as point cloud models in .las format.
The two mesh models were merged using the free MeshLab v2016.12 software. The first step was to cut off from both models the unscanned parts that had rested on the table, and then to align and combine them. The merged object shows that the combined 3D mesh models retain their textures (different shades of the individual surfaces are visible, Figure 10a,b), while the constituent meshes do not integrate into a single mesh: the surfaces from the individual scans are not identical and do not overlap (Figure 10c,d). For the purposes of making such a model available online, this solution appears sufficient. The size of the object did increase to 10.4 MB, because two texture files are required and the overall mesh is larger, yet the complete model is still smaller than the base 3D model from SLS.
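The behaviour observed above (textures retained, meshes not fused) follows from how indexed meshes are concatenated: the vertex lists are appended, the face indices of the second mesh are offset, and each part keeps its own material. A simplified sketch of this operation follows; it is not the MeshLab internals, and the tiny meshes are hypothetical.

```python
def merge_obj(mesh_a, mesh_b):
    """Concatenate two indexed meshes, OBJ-style. Face indices of the second
    mesh are offset by the first mesh's vertex count. Each part keeps its own
    texture/material, so colour seams between scans remain visible."""
    vertices = mesh_a["v"] + mesh_b["v"]
    offset = len(mesh_a["v"])
    faces = mesh_a["f"] + [tuple(i + offset for i in f) for f in mesh_b["f"]]
    return {"v": vertices, "f": faces}

a = {"v": [(0, 0, 0), (1, 0, 0), (0, 1, 0)], "f": [(0, 1, 2)]}
b = {"v": [(0, 0, 1), (1, 0, 1), (0, 1, 1)], "f": [(0, 1, 2)]}
m = merge_obj(a, b)
print(len(m["v"]), m["f"][1])  # 6 (3, 4, 5)
```

Because no vertices are shared between the two halves, the result is a single file containing two disjoint surfaces, exactly the non-overlapping seams visible in Figure 10c,d.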
The two 3D models exported as point clouds (.las format) were merged using CloudCompare v2.13.2. As before, the parts of the models that were not visible during scanning had to be cut off. Next, the complete point cloud model was generated, followed by a mesh model. The file sizes of the individual models are provided in Table 5, and visualisations of these models are presented in Figure 11.
The file sizes indicate that the CD3DM model (Figure 11c) used all the component points of the partial models (Figure 11a,b). The visualisations in Figure 11 show that the point model looks quite good (Figure 11c), while the mesh model has very poor texturing quality (Figure 11d). This follows from the way colour information is stored in cloud models: exporting to the .las format captures colour only at the mesh model's vertices, and when this information is mapped back onto the mesh, there is too little of it. Such a model is certainly not suitable for sharing, and its size in the .obj format is too large (Table 5, item 6).
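The texturing loss can be explained by simple counting: a .las file stores one RGB value per point, whereas an image texture stores one per texel. Using the vertex count of the LiDAR model of object no. 1 from Table 4 and its texture resolution gives an order-of-magnitude illustration (not an exact measure of the information loss):

```python
# One RGB sample per vertex vs. one per texel. With ~12,000 coloured vertices
# against an 8192 x 4096 texture, per-vertex colouring carries roughly three
# orders of magnitude less colour information -- hence the blurry result.
vertex_colour_samples = 12_205       # vertices in the LiDAR model (Table 4)
texel_samples = 8192 * 4096          # texels in its image texture
print(texel_samples // vertex_colour_samples)  # 2749
```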

5.5. Comparison of Geometric Quality of 3D Models

A comparison of the geometric quality (shape fidelity) of the LiDAR model was performed using CloudCompare v. 2.13.2. SLS was used as the reference technology, so the SLS base model served as the reference object for the comparison.
The geometric quality of the reference models (developed using SLS) and the LiDAR model was compared using the following approach:
  • Removing unnecessary objects from the LiDAR model (typically the base on which the artifact was placed) and removing parts of the SLS model that were inaccessible in the LiDAR model (due to its incompleteness).
  • Bringing both models to a common coordinate system.
  • Aligning the models’ positions.
  • Measuring the distances between the models.
  • Visualizing and statistically processing the distances between the models.
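The last two steps can be sketched with brute-force nearest-neighbour distances and their statistics. CloudCompare computes signed distances to the reference and uses accelerated search structures; this unsigned sketch with hypothetical points merely illustrates how the RMSE and mean deviation are obtained.

```python
import math

def nn_distances(cloud, reference):
    """Unsigned nearest-neighbour distance from each point of `cloud` to the
    `reference` point set (brute force, for illustration only)."""
    return [min(math.dist(p, q) for q in reference) for p in cloud]

def rmse(d):
    """Root Mean Square Error of a list of distances."""
    return math.sqrt(sum(x * x for x in d) / len(d))

def mean_deviation(d):
    """Mean of the distances (signed in CloudCompare, unsigned here)."""
    return sum(d) / len(d)

ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]       # hypothetical reference points
test = [(0.0, 0.0, 0.3), (1.0, 0.0, 0.4)]      # hypothetical compared points
d = nn_distances(test, ref)
print(round(rmse(d), 3), round(mean_deviation(d), 3))  # 0.354 0.35
```

Because squaring weights large errors more heavily, the RMSE exceeding the magnitude of the mean deviation signals a distribution with a few large outliers, the pattern reported below for the compared models.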
The evaluation of geometric quality is illustrated using model no. 3. Its LiDAR model was not a CD3DM model (it also included the table on which the object was scanned), so the first step involved removing the image of the table from the LiDAR model and removing from the SLS base model the part representing the object's bottom. The two models were then compared at 14,211 points. Figure 12 presents the results obtained: a heat map and a histogram of the differences between the models for 163 variability intervals.
The results presented in Figure 12 indicate that the shape fidelity is very high. The shift of the distribution into the negative region indicates that the LiDAR model is slightly smaller than the SLS model. Surface deviations exceeding ±3 mm occur at fewer than 0.1% of the compared points. The Root Mean Square Error (RMSE), a typical measure of 3D model accuracy, is approximately 1.4 mm for model no. 3. The mean deviation (MD) was almost −0.64 mm; the gap between |MD| and the RMSE indicates that some deviations are quite large but few in number. A detailed analysis of the heat map shows that a large portion of the red area (the largest deviation values) is located at the bottom of the object, i.e., where the table top on which the object was scanned and the bottom of the SLS CD3DM object were cut off from the 3D models.
For model no. 1, the RMSE is about 1.1 mm, and the mean deviation is −0.53 mm. These values for model no. 2 are 1.7 mm and −0.78 mm, respectively.

6. Conclusions

Based on the research conducted on 3D scanning of small museum artefacts using the iPhone 16 Pro Max LiDAR system with the Scaniverse application, and on comparative studies with 3D models of these objects from the SLS system, the following conclusions can be drawn:
  • The iPhone LiDAR system is suitable for mapping historical museum artefacts and offers a good compromise between functionality (ease of use), speed (model generation time many times shorter than with professional devices [38]), versatility (the ability to scan objects of various sizes), quality (minor differences in the surface reproduction of small museum artefacts), and price (new iPhone models cost less than USD 1500).
  • The LiDAR scanning process is energy efficient: on a single smartphone charge, it is possible to acquire data for and generate 3D mesh models of over 25 museum artefacts.
  • A significant feature of this scanning technology is fast operation (60–90 s per object), which allows for immediate verification of the quality of the generated 3D model and, if unsatisfactory results are obtained, a decision to repeat the scan.
  • Due to the size of the digital 3D models (approximately 5–8 MB—Table 4), they are ideal for sharing on websites and mobile devices, particularly for marketing and digital tourism purposes.
  • Scanning small- and medium-sized museum artefacts should not be performed using gimbals [36,47], as holding the smartphone directly in the hand allows for free wrist rotation, making it much easier to acquire data from surfaces with various shapes.
  • The colour textures of 3D models obtained using LiDAR scanning are sensitive to poor lighting conditions on digitized museum artefacts, which partially contradicts the information contained in [26,43,44].
  • The research conducted has shown that it is possible to generate a complete digital 3D model from digital models obtained from separate scanning processes with the iPhone LiDAR system through additional pre-processing in free, publicly available 3D graphics programs.
  • The LiDAR system available on Apple smartphones and tablets enables the so-called social documentation of historical artefacts (democratisation of activities) and the rapid sharing of results with the public, both by exporting models in formats supported by free, publicly available programs, such as .obj, and by generating short videos of objects rotating along predefined motion paths. This confirms the observations made in [39].
  • The social nature of the activities leads to the distributed storage of digital 3D models, which ensures that data on museum artefacts (although not fully meeting archival requirements) will be permanent and accessible not only to ordinary people but also to art historians and scientists.
  • Since the iPhone LiDAR system does not generate CD3DM models (the underside of an object is never captured), it is particularly well suited to scanning mounted objects such as busts, sculptures, tombstones, bas-reliefs, column decorations with their bases and capitals, hanging bells, and objects placed on exhibition stands.
  • Digital 3D models generated with the iPhone 16 PRO MAX using Scaniverse LiDAR software are incomplete and therefore less versatile. They also have less accurate dimensions and lower quality (mesh density, size, and texture quality), practically eliminating their use for museum archiving purposes.
  • A significant limitation of LiDAR technology using mobile phones is the inability to scan large and very large objects (including architectural ones).

Author Contributions

Conceptualization, J.M. and M.M.; methodology, J.M. and M.M.; software, J.M. and M.M.; validation, J.M., M.M., and W.S.; formal analysis, J.M.; investigation, M.M. and R.K.; resources, W.S. and R.K.; data curation, J.M., M.M., and R.K.; writing—original draft, J.M., M.M., W.S., and R.K.; writing—review and editing, J.M., M.M., and W.S.; visualization, J.M. and M.M.; supervision, J.M.; project administration, J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data can be obtained from the authors upon receipt of a reasoned written request. Three-dimensional models for online viewing are available at: https://silkroad3d.com/.

Acknowledgments

The authors would like to thank the Samarkand State University authorities for providing the museum artifacts.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Barsanti, S.G.; Remondino, F.; Fernández-Palacios, B.J.; Visintini, D. Critical factors and guidelines for 3D surveying and modelling in Cultural Heritage. Int. J. Herit. Digit. Era 2014, 3, 141–158. [Google Scholar] [CrossRef]
  2. Munumer, E.; Lerma, J.L. Fusion of 3D data from different image-based and range-based sources for efficient heritage recording. Digit. Herit. 2015, 304, 83–86. [Google Scholar] [CrossRef]
  3. Parrinello, S.; Dell’Amico, A. Experience of Documentation for the Accessibility of Widespread Cultural Heritage. Heritage 2019, 2, 1032–1044. [Google Scholar] [CrossRef]
  4. Vacca, G.; Dessi, A. Geomatics Supporting Knowledge of Cultural Heritage Aimed at Recovery and Restoration. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 909–915. [Google Scholar] [CrossRef]
  5. Brandolini, F.; Patrucco, G. Structure-from-Motion (SFM) Photogrammetry as a Non-Invasive Methodology to Digitalize Historical Documents: A Highly Flexible and Low-Cost Approach? Heritage 2019, 2, 2124–2136. [Google Scholar] [CrossRef]
  6. Masciotta, M.G.; Sanchez Aparicio, L.J.; Oliveira, D.V.; Gonzalez Aguilera, D. Integration of laser scanning technologies and 360º photography for the digital documentation and management of cultural heritage buildings. Int. J. Archit. Herit. 2023, 17, 56–75. [Google Scholar] [CrossRef]
  7. Milosz, M.; Kęsik, J.; Abdullaev, U. 3D scanning and modeling of highly detailed and geometrically complex historical architectural objects: The example of the Juma Mosque in Khiva (Uzbekistan). Herit. Sci. 2024, 12, 1–14. [Google Scholar] [CrossRef]
  8. Aita, D.; Barsotti, R.; Bennati, S.; Caroti, G.; Piemonte, A. 3-Dimensional geometric survey and structural modelling of the dome of Pisa cathedral. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 39–46. [Google Scholar] [CrossRef]
  9. Milosz, M.; Kęsik, J.; Montusiewicz, J. Three-dimensional digitization of documentation and perpetual preservation of cultural heritage buildings at risk of liquidation and loss—The methodology and case study of St Adalbert’s Church in Chicago. Electronics 2024, 13, 561. [Google Scholar] [CrossRef]
  10. Andrianov, A.; Muradyan, G.; Andrianova, Z.; Sarvazyan, N. Bringing Stones to Life: The First Digital 3D Library of Ancient Armenian Gravestones. In Proceedings of the Eurographics Workshop on Digital Heritage, Siena, Italy, 8–13 September 2025. [Google Scholar] [CrossRef]
  11. Kęsik, J.; Montusiewicz, J.; Kayumov, R. An approach to computer-aided reconstruction of museum exhibits. Adv. Sci. Technol. Res. J. 2017, 11, 87–94. [Google Scholar] [CrossRef] [PubMed]
  12. Champion, E.; Rahaman, H. Survey of 3D digital heritage repositories and platforms. Virtual Archaeol. Rev. 2020, 11, 1–15. [Google Scholar] [CrossRef]
  13. He, T.-L.; Qin, F. Exploring how the metaverse of cultural heritage (MCH) influences users’ intentions to experience offline: A two-stage SEM-ANN analysis. Herit. Sci. 2024, 12, 193. [Google Scholar] [CrossRef]
  14. Montusiewicz, J.; Milosz, M.; Kesik, J. Technical Aspects of Museum Exposition for Visually Impaired Preparation Using Modern 3D Technologies. In 2018 IEEE Global Engineering Education Conference (EDUCON); IEEE: Piscataway, NJ, USA, 2018; pp. 768–773. [Google Scholar] [CrossRef]
  15. Montusiewicz, J.; Milosz, M. Architectural Jewels of Lublin: A Modern Computerized Board Game in Cultural Heritage Education. J. Comput. Cult. Herit. 2021, 14, 1–21. [Google Scholar] [CrossRef]
  16. Poirier, N.; Baleux, F.; Calastrenc, C. The mapping of forested archaeological sites using UAV LiDAR. ISTE OpenScience 2020, 1–24. [Google Scholar] [CrossRef]
  17. Masini, N.; Abate, N.; Gizzi, F.T.; Vitale, V.; Minervino Amodio, A.; Sileo, M.; Biscione, M.; Lasaponara, R.; Bentivenga, M.; Cavalcante, F. UAV LiDAR Based Approach for Archaeological Micro Topography Under Canopy. Remote Sens. 2022, 14, 6074. [Google Scholar] [CrossRef]
  18. Vinci, G.; Vanzani, F.; Fontana, A.; Campana, S. LiDAR Applications in Archaeology: A Systematic Review. Archaeol. Prospect. 2024, 32, 81–101. [Google Scholar] [CrossRef]
  19. García-Gómez, P.; Royo, S.; Rodrigo, N.; Casas, J.R. Geometric model and calibration method for a solid-state LiDAR. Sensors 2020, 20, 2898. [Google Scholar] [CrossRef] [PubMed]
  20. Aijazi, A.K.; Malaterre, L.; Trassoudaine, L.; Checchin, P. Systematic evaluation of 3D solid state LiDAR sensors. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 199–203. [Google Scholar] [CrossRef]
  21. Luetzenburg, G.; Kroon, A.; Bjørk, A.A. Evaluation of the Apple iPhone 12 Pro LiDAR for an Application in Geosciences. Sci. Rep. 2021, 11, 22221. [Google Scholar] [CrossRef] [PubMed]
  22. Barszcz, M.; Montusiewicz, J.; Paśnikowska-Łukaszuk, M.; Sałamacha, A. Comparative Analysis of Digital Models of Objects of Cultural Heritage Obtained by the “3D SLS” and “SfM” Methods. Appl. Sci. 2021, 11, 5321. [Google Scholar] [CrossRef]
  23. Stevenson, S.; Liscio, E. Assessing iPhone LiDAR & Recon-3D for determining area of origin in bloodstain pattern analysis. J. Forensic Sci. 2024, 69, 1045–1060. [Google Scholar] [CrossRef]
  24. Abdelaal, O.; Aldahash, S.A. Realization of Impression Evidence with Reverse Engineering and Additive Manufacturing. Appl. Sci. 2024, 14, 5444. [Google Scholar] [CrossRef]
  25. Kędziorski, P.; Skoratko, A.; Katzer, J.; Tysiąc, P.; Jagoda, M.; Zawidzki, M. Low-cost LiDAR scanning data for geometric and volume analysis of 3D-printed concrete-plastic elements. Data Brief 2025, 61, 111799. [Google Scholar] [CrossRef] [PubMed]
  26. Díaz-Vilariño, L.; Tran, H.; Frías, E.; Balado, J.; Khoshelham, K. 3D mapping of indoor and outdoor environments using Apple smart devices. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 303–308. [Google Scholar]
  27. Yanchuk, O.; Shulgan, R.; Trokhymets, S.; Sheremet, N. Accuracy assessment of a three-dimensional model obtained using the LiDAR sensor of the iPhone 13 Pro Max. Geod. Cartogr. 2025, 51, 11–17. [Google Scholar] [CrossRef]
  28. Alijani, Z.; Meloche, J.; McLaren, A.; Lindsay, J.; Roy, A.; Berg, A. A comparison of three surface roughness characterization techniques: Photogrammetry, pin profiler, and smartphone-based LiDAR. Int. J. Digit. Earth 2022, 15, 2422–2439. [Google Scholar] [CrossRef]
  29. Hrdina, M.; Molina-Valero, J.A.; Kuželka, K.; Tatsumi, S.; Yamaguchi, K.; Melichová, Z.; Mokroš, M.; Surový, P. Obtaining the Highest Quality from a Low-Cost Mobile Scanner: A Comparison of Several Pipelines with a New Scanning Device. Remote Sens. 2025, 17, 2564. [Google Scholar] [CrossRef]
  30. Siafali, E.; Polychronos, V.; Tsioras, P.A. Fusion of Airborne, SLAM-Based, and iPhone LiDAR for Accurate Forest Road Mapping in Harvesting Areas. Land 2025, 14, 1553. [Google Scholar] [CrossRef]
  31. Özbay, F.; Sarıkaya, Y.Ç. Indoor Space Measurement With a Mobile Phone Integrated LIDAR Sensor: The Case of the Aizanoi Ancient City Zeus Temple Vaulted Gallery. Stud. Digit. Herit. 2024, 8, 97–109. [Google Scholar]
  32. De Simone, D.; Ferrari, G.W. 3D LiDAR Modeling with iPhone Pro in an Archaeo-Spelaeologic Context. Results and Prospects. Archeol. Calc. 2024, 35, 421–430. [Google Scholar] [CrossRef]
  33. Fiorini, A. Scansioni dinamiche in archeologia dell’architettura: Test e valutazioni metriche del sensore LiDAR di Apple. Archeol. Calc. 2022, 33, 35–54. [Google Scholar] [CrossRef]
  34. Ferrari, G.W. The Pozzuoli (Naples, Italy) Flavian Amphitheatre Cisterns: A Basic Experience in 3D Modelling with LiDAR. In Proceedings of the Fourth IC of Speleology in Artificial Cavities Hypogea 2023. pp. 319–324. Available online: https://anyflip.com/msnet/wici/basic/301-350 (accessed on 3 December 2025).
  35. Moyano, J.; Nieto-Julián, J.E.; Fernández-Alconchel, M.; Oreni, D.; Estévez-Pardal, R. Analysis and Precision of Light Detection and Ranging Sensors Integrated in Mobile Phones as a Framework for Registration of Ground Control Points for Unmanned Aerial Vehicles in the Scanning Technique for Building Information Modelling in Archaeological Sites. Drones 2023, 7, 477. [Google Scholar] [CrossRef]
  36. Zhang, N.; Lan, X. Everyday-Carry Equipment Mapping: A Portable and Low-Cost Method for 3D Digital Documentation of Architectural Heritage by Integrated iPhone and Microdrone. Buildings 2025, 15, 89. [Google Scholar] [CrossRef]
  37. Soyluoğlu, M.; Orabi, R.; Hermon, S.; Bakirtzis, N. Digitizing Challenging Heritage Sites with the Use of iPhone LiDAR and Photogrammetry: The Case-Study of Sourp Magar Monastery in Cyprus. Geomatics 2025, 5, 44. [Google Scholar] [CrossRef]
  38. Ulvi, A.; Hamal, S.N.G. Fusion of IPAD Pro LiDAR and SfM-Based Photogrammetry for 3D Documentation of Cultural Heritage. Iran J. Sci. Technol. Trans. Civ. Eng. 2025, 1–17. [Google Scholar] [CrossRef]
  39. Liu, H.; Wu, Y.; Li, A.; Deng, Y. Precision Detection and Identification Method for Apparent Damage in Timber Components of Historic Buildings Based on Portable LiDAR Equipment. J. Build. Eng. 2024, 98, 111050. [Google Scholar] [CrossRef]
  40. Pulat, F.; Yakar, M. Comparison of Techniques Used in Three-Dimensional Modelling of Small-Sized Objects with Mobile Phones. Mersin Photogramm. J. 2024, 6, 79–86. [Google Scholar] [CrossRef]
  41. Hegarty, Z.; Saari, M. A Proposal for Proactive Quality Assurance in Photogrammetry Workflows: Using Smart-Device LiDAR for Scaling. Digit. Herit. 2025. [Google Scholar] [CrossRef]
  42. Bocconcino, M.M.; Piras, M.; Vozzola, M.; Pavignano, M.; Gioberti, L. Giovanni Curioni’s Digital Museum (1/2): Comparative Survey Techniques for the Definition of a 3D Data Collection Procedure with Low-Cost Systems. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2023, 48, 235–242. [Google Scholar] [CrossRef]
  43. Teppati Losè, L.; Spreafico, A.; Chiabrando, F.; Giulio Tonolo, F. Apple LiDAR Sensor for 3D Surveying: Tests and Results in the Cultural Heritage Domain. Remote Sens. 2022, 14, 4157. [Google Scholar] [CrossRef]
  44. Atencio, E.; Muñoz, A.; Lozano, F.; González-Arteaga, J.; Lozano-Galant, J.A. Calibration of iPad Pro LiDAR Scanning Parameters for the Scanning of Heritage Structures Using Orthogonal Arrays. Appl. Sci. 2024, 14, 11814. [Google Scholar] [CrossRef]
  45. Vacca, G. 3D Survey with Apple LiDAR Sensor—Test and Assessment for Architectural and Cultural Heritage. Heritage 2023, 6, 1476–1501. [Google Scholar]
  46. Askar, C.; Sternberg, H. Use of Smartphone LiDAR Technology for Low-Cost 3D Building Documentation with iPhone 13 Pro: A Comparative Analysis of Mobile Scanning Applications. Geomatics 2023, 3, 563–579. [Google Scholar] [CrossRef]
  47. Paukkonen, N. Towards a Mobile 3D Documentation Solution. Video-Based Photogrammetry and iPhone 12 Pro as Fieldwork Documentation Tools. J. Comput. Appl. Archaeol. 2023, 6, 143–154. [Google Scholar] [CrossRef]
  48. Fedorov-Davydov, G.A. Archaeological Research in Central Asia of the Muslim Period. World Archaeol. 1983, 14, 393–405. [Google Scholar] [CrossRef]
  49. 3D Digital Silk Road Portal. 2025. Available online: https://silkroad3d.com (accessed on 3 December 2025).
Figure 1. LiDAR scanning process using Scaniverse on an iPhone 16 Pro Max: (a) image of the object, (b,c) scanning process, (d) 3D model of the object after repositioning it during the scanning process.
Figure 2. The 3D scene cropping process in Scaniverse on an iPhone 16 PRO MAX smartphone.
Figure 4. 3D scanning of object no. 2 (Table 2) using the Artec Spider device (2017).
Figure 5. 3D scanning of object no. 3 (Table 2) using an iPhone 16 PRO MAX smartphone (2025).
Figure 6. Digital 3D models: (a) object no. 1 on the support, (b) object after the support was removed, (c) bottom of the object that could not be scanned, (d) object no. 3 with size measurement and tape measure, (e) object no. 3 with a visible grid.
Figure 7. Digital 3D models of object no. 2 (Table 2): (a,b) model from two different scans.
Figure 8. Comparison of 3D models of object no. 3 (Table 2); image layout in texture files: (a) SLS technology, (b) LiDAR system; model appearance: (c) SLS technology, (d) LiDAR system; texture colour correction for the LiDAR system: (e) improved texture, (f) appearance of the jug.
Figure 9. Object no. 1 (Table 2): (a,b) view of CD3DM models with SLS, (c) model with LiDAR system without base closure; mesh with visible vertices: (d) with SLS, (e) with LiDAR system.
Figure 10. Digital assembly of 3D mesh models of object no. 2 from the LiDAR system: (a,b) textured models after merging, (c,d) preview of fragments of the mesh model after merging.
Figure 11. Digital assembly of two 3D models of object no. 2 (Table 2) from the LiDAR system in the form of point clouds: (a,b) models in the form of a point cloud, scan 1 without cutting off and scan 2 after cutting off the unnecessary part of the model, (c) model of the CD3DM type—in the cloud version, (d) CD3DM—in the mesh version with texturing.
Figure 12. Comparison of the geometric quality of object no. 3 (Table 2): (a) heat map of differences between the SLS and LiDAR models (dimensions in mm), (b) histogram of differences between the models (163 difference intervals).
Table 3. Comparison of procedures for generating 3D models by 3D SLS and LiDAR.

No. | Feature | 3D SLS/Artec Spider | LiDAR/iPhone 16 PRO MAX | Comments
1 | Shape data acquisition | 3D scanning | Video recording | Sensors, camera
2 | Number of activities | Many, depending on needs | Only 1 * | * Once the video recording is finished, the 3D model is generated
3 | Texture data acquisition | RGB information * | Video | * Selective collection
4 | Digitisation of transparent objects | No | Selective * | * E.g., dark glass
5 | Data format | Point cloud | Point cloud |
6 | Data size | A few GB | Several hundred MB |
7 | Software | Specialised, device-dedicated | Dedicated to the device, open source * | * There are many programs
8 | Programs used | Artec Studio v. 12 | Scaniverse |
9 | Hardware | High computing power | Smartphone processor |
10 | 3D model building method | Triangulation | Triangulation |
11 | How the 3D model is generated | Manual, computer-assisted | Completely automated * | * Creating the cloud model, mesh model, and texture mapping
12 | Cost | Large/very large | Medium |
13 | Availability | Small | Universal |
14 | Time consumption | Small/medium * | Small | * Depends on the size of the object
15 | Size of scanned objects | Only small | Small/medium/large * | * Declared before starting
16 | Dimension acquisition | Yes | Yes |
17 | Model quality | Very good | Medium |
18 | Achieving photorealism | Texture mapping | Texture mapping |
19 | Export to standard formats | Yes | Yes |
20 | Model modification | Yes | No * | * After export to external programs
21 | Competence in using equipment | Specialised/high | Common/medium |
22 | IT competencies | Specialised/high | Common/medium |
* indicates which entry the Comment refers to.
Table 4. Information about the scans and models obtained. The base and dissemination versions of the 3D models come from the Artec Spider (SLS); the final version comes from the LiDAR system of the iPhone 16 Pro Max.

Object no. 1, scanning parameters:

| Parameter | Artec Spider | LiDAR iPhone 16 Pro Max |
|-----------|--------------|-------------------------|
| Maximal frame alignment error, mm | 0.2 | --- |
| Number of partial scans conducted | 2 | 1 |
| Scan frames count | 1193 | --- |
| Texturised scan frame count | 64 | --- |

Object no. 1, 3D model features:

| Feature | Base version | Dissemination version | Final version |
|---------|--------------|-----------------------|---------------|
| Vertex count | 23,974 | 11,988 | 12,205 |
| Face count | 47,944 | 23,972 | 23,980 |
| File size, MB, .obj format (model + texture) | 8.44 (2.94 + 5.5) | 2.34 (1.99 + 0.35) | 5.54 (1.37 + 4.17) |
| Texture size, px | 4096 × 4096 | 2048 × 2048 | 8192 × 4096 |

Object no. 2, scanning parameters:

| Parameter | Artec Spider | LiDAR iPhone 16 Pro Max |
|-----------|--------------|-------------------------|
| Maximal frame alignment error, mm | 1.6 | --- |
| Number of partial scans conducted | 2 | 1 |
| Scan frames count | 1915 | --- |
| Texturised scan frame count | 108 | --- |

Object no. 2, 3D model features:

| Feature | Base version | Dissemination version | Final version |
|---------|--------------|-----------------------|---------------|
| Vertex count | 61,010 | 30,506 | 17,732 |
| Face count | 122,016 | 61,008 | 34,772 |
| File size, MB, .obj format (model + texture) | 14.71 (7.68 + 7.03) | 5.81 (5.34 + 0.47) | 6.74 (2.04 + 4.7) |
| Texture size, px | 4096 × 4096 | 2048 × 2048 | 8192 × 8192 |

Object no. 3, scanning parameters:

| Parameter | Artec Spider | LiDAR iPhone 16 Pro Max |
|-----------|--------------|-------------------------|
| Maximal frame alignment error, mm | 0.4 | --- |
| Number of partial scans conducted | 2 | 1 |
| Scan frames count | 2497 | --- |
| Texturised scan frame count | 138 | --- |

Object no. 3, 3D model features:

| Feature | Base version | Dissemination version | Final version |
|---------|--------------|-----------------------|---------------|
| Vertex count | 12,659 | 6330 | 27,933 |
| Face count | 25,314 | 12,656 | 54,774 |
| File size, MB, .obj format (model + texture) | 11.02 (1.5 + 9.52) | 1.64 (1.03 + 0.61) | 7.16 (3.27 + 3.89) |
| Texture size, px | 4096 × 4096 | 2048 × 2048 | 8192 × 4096 |
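The vertex and face counts in Table 4 can be cross-checked with Euler's formula: for a closed (watertight) triangulated surface of genus g, V − E + F = 2 − 2g together with E = 3F/2 gives F = 2(V − 2 + 2g). The helper function below is our own illustration; applying it shows that the SLS base and dissemination models satisfy F = 2(V − 2) exactly, while the LiDAR final versions fall short of it, consistent with the missing base closure noted for the LiDAR meshes.

```python
def faces_of_closed_triangulation(vertices: int, genus: int = 0) -> int:
    """Euler's formula V - E + F = 2 - 2g with E = 3F/2 for a closed
    triangle mesh yields F = 2(V - 2 + 2g)."""
    return 2 * (vertices - 2 + 2 * genus)

# Sanity checks on Platonic solids: tetrahedron and octahedron.
print(faces_of_closed_triangulation(4))      # 4 faces
print(faces_of_closed_triangulation(6))      # 8 faces

# Object no. 1, SLS base version (Table 4): 23,974 vertices.
print(faces_of_closed_triangulation(23974))  # 47944, matching the table

# LiDAR final version of object no. 1: 12,205 vertices would require
# 24,406 faces if closed, but Table 4 lists 23,980: the mesh is open.
print(faces_of_closed_triangulation(12205))  # 24406
```

The same check holds for objects no. 2 and 3: 2 × (61,010 − 2) = 122,016 and 2 × (12,659 − 2) = 25,314 faces, exactly as tabulated for the SLS base versions.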
Table 5. Detailed information on merging object no. 2 into a CD3DM model from scans in .las format.

| No. | Object | Type | Format | Size, MB | Number of points/triangles | Comments |
|-----|--------|------|--------|----------|----------------------------|----------|
| 1 | Scan 1 | Point | .las | 2.04 | 82,447 pts | .las is a binary format |
| 2 | Scan 2 | Point | .las | 7.21 | 290,839 pts | Large table top |
| 3 | Scan 1 cut | Point | .las | 1.36 | 53,712 pts | After cutting off the table top |
| 4 | Scan 2 cut | Point | .las | 2.77 | 109,104 pts | After cutting off the table top |
| 5 | Complete | Point | .las | 4.13 | 162,816 pts | Sum of the points from the models after cutting off the table top |
| 6 | Complete | Mesh | .obj | 51.66 | 272,537 pts / 538,174 tri | |

.obj is a text format containing vertices, normals and faces; saved in the binary .ply format, the same mesh takes up 13.5 MB.
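The size gap noted under Table 5 (51.66 MB as a text .obj versus 13.5 MB as a binary .ply) follows directly from the encodings: an ASCII .obj spends roughly 30 or more bytes per "v x y z" line, while a binary .ply stores the same vertex as three 4-byte floats. A small self-contained comparison (the vertex values are arbitrary toy data):

```python
import struct

# One vertex = three float coordinates (arbitrary toy values).
vertices = [(float(i), i / 3.0, i / 7.0) for i in range(1000)]

# ASCII .obj style: one "v x y z" text line per vertex.
obj_text = "".join(f"v {x:.6f} {y:.6f} {z:.6f}\n" for x, y, z in vertices)
obj_bytes = len(obj_text.encode("ascii"))

# Binary .ply style: three little-endian 4-byte floats per vertex.
ply_bytes = len(b"".join(struct.pack("<3f", x, y, z) for x, y, z in vertices))

print(obj_bytes, ply_bytes)  # the text form is roughly 3x the binary size
```

Faces and normals widen the gap further, since .obj repeats vertex indices as decimal text while binary formats pack them as fixed-width integers.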

Share and Cite

MDPI and ACS Style

Montusiewicz, J.; Milosz, M.; Sarnowski, W.; Kayumov, R. Quality Assessment of Digital 3D Models of Museum Artefacts from the Mobile LiDAR iPhone and Structured Light Scanners. Appl. Sci. 2026, 16, 2100. https://doi.org/10.3390/app16042100
