Review

A Taxonomy of Sensors, Calibration and Computational Methods, and Applications of Mobile Mapping Systems: A Comprehensive Review

1 Remote-Sensing and Photogrammetry Department, Visionium Oy, 00940 Helsinki, Finland
2 Department of Geosciences and Geography, University of Helsinki, 00014 Helsinki, Finland
3 Remote Sensing and Photogrammetry Department, Finnish Geospatial Research Institute (FGI), National Land Survey of Finland (NLS), 02150 Espoo, Finland
4 Advanced Laser Technology Laboratory of Anhui Province, Hefei 230000, China
5 Hangzhou Institute for Advanced Studies, University of Chinese Academy of Sciences, Hangzhou 310027, China
6 Lyles School of Civil and Construction Engineering, Purdue University, West Lafayette, IN 47907-2051, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(9), 1502; https://doi.org/10.3390/rs17091502
Submission received: 14 March 2025 / Revised: 17 April 2025 / Accepted: 22 April 2025 / Published: 24 April 2025
(This article belongs to the Special Issue Advances in Deep Learning Approaches: UAV Data Analysis)

Abstract
Innovative geospatial solutions are necessary to tackle complex environmental challenges, and mobile mapping systems (MMSs) are among the key innovations emerging in this effort. With a wide range of applications, MMSs advance modern data-collection technology by enhancing our understanding of the environment, enabling more detailed models of natural resources, and optimizing the way we live on Earth. In this paper, we present and analyze recent advancements in MMS technologies, focusing on computational and modeling aspects as well as the latest state-of-the-art sensor, hardware, and software developments. Special attention is given to the trends observed over the past decade, supported by a review of the foundational literature. Finally, we outline our vision for the future of MMSs, offering insights into the potential for further research and the exciting possibilities that lie ahead in this rapidly evolving field of science and technology.

1. Introduction

Our living environments are intricate ecosystems that encompass various elements, including people, infrastructure (such as roads and buildings), living organisms (like trees, forests, and lakes), and diverse non-homogeneous components (such as street furniture and light poles). These dynamic ecosystems require continuous observation, planning, and management (OPM) to ensure sustainable development and effective functioning. Established principles for preserving natural resources [1] necessitate the integration of specific geospatial technologies that provide a suite of scientific and engineering tools essential for monitoring and maintaining ecosystems. Among geospatial tools, mobile mapping systems (MMSs) stand out as advanced solutions capable of efficiently collecting large volumes of diverse, non-homogeneous data, which are crucial for various OPM applications. By integrating MMSs into geospatial workflows, we can significantly enhance our ability to manage and sustain the complex environments in which we live.
The evolution of MMS technologies began nearly three decades ago, pioneered by research organizations and technology companies worldwide. A key milestone in this evolution was the global positioning system (GPS), which became fully operational in 1993 [2], followed by other global navigation satellite systems (GNSS), including GLONASS, Galileo, and BeiDou [3]. The earliest MMS solutions [4] typically consisted of a GNSS receiver, an inertial measurement unit (IMU), and 360° cameras utilizing direct-georeferencing techniques [5]. Initially, the outputs of these systems were wide-angle (360-degree) stitched photographs [6] tagged with location data. These images were displayed on online platforms, allowing users to virtually explore unseen locations, a trend that continues today. Figure 1 shows a recent iteration of one of the most successful MMSs developed to capture 360-degree images [7].
Virtual tours have gradually become one of the most accessible technologies for virtually navigating in major cities worldwide. As MMS technologies have advanced, they now offer far more than just 360-degree imagery; they provide highly detailed spatial data and models that enhance applications in urban planning [8], infrastructure management [9], and environmental monitoring [10]. This ongoing evolution continues to push the boundaries of how we explore and manage our ecosystems.
Development of MMSs requires vast knowledge about sensors, calibration methods, hardware, and software. The evolution of MMSs has been driven by the integration of advanced sensors such as multi-camera setups, LiDARs, and GNSS and IMUs, which have enabled the collection of high-resolution, multi-spectral, and spatially accurate data.
Among the suite of sensors used in modern MMSs, cameras play an indispensable role [11]. The advent of digital photography has revolutionized this field, making digital cameras essential sensory elements in any MMS. Photogrammetry and computer vision have developed in tandem, addressing similar challenges such as camera lens distortion modeling, structure from motion (SfM), and pattern recognition and classification problems. These two fields are deeply intertwined, with numerous methods created under different terminologies yet targeting common problems [12].
Today, MMSs integrate complex sensor models for multi-camera setups and other sophisticated settings, enhancing their functionality. Typically, multiple cameras are employed in an MMS to capture a comprehensive view of the surrounding environment [13]. These setups often include a combination of projective, fisheye, or wide-angle cameras, together providing 360-degree coverage. The number and arrangement of cameras can be customized to meet specific challenges and requirements, enabling innovative camera placement scenarios.
Different camera types are designed to capture specific portions of the electromagnetic spectrum. While red, green, and blue (RGB) cameras are the most common and affordable type used, specialized cameras—such as hyperspectral, multi-spectral, and infrared—are employed for specific applications like material classification or thermal measurement [14]. Calibrating spectral measurements is discussed in radiometric photogrammetry, which deals with the complexities of light emission, reflection, and absorption, alongside the technical challenges of calibrating devices that measure various spectral bands [15]. This field is crucial for ensuring the accuracy of data collected by MMS, particularly when working with spectral information.
Another key measurement method used in MMSs is based on laser technology [16], first demonstrated in 1960. This breakthrough was soon followed by the invention of light detection and ranging (LiDAR) in 1962. Within a decade, LiDAR found significant real-world applications; notably, it was utilized during the Apollo 15–17 missions in 1971–1972 for planetary measurements and for estimating the spacecraft’s distance from the Moon [17]. Over the next two decades, increasingly sophisticated LiDAR systems emerged for scientific and engineering applications. Despite their high costs, LiDAR sensors revolutionized MMS technologies by providing precise, high-quality 3D data of the environment [18]. This breakthrough enabled the development of complex MMS setups that addressed specific needs and paved the way for the upcoming digital revolution [19].
The evolution of MMSs can be divided into several key phases: the discovery of pulsed laser technology, the advancement of digital photogrammetry and computer vision techniques, maturation of GNSS and LiDAR sensors, and breakthroughs in computational resources and machine-learning methods. Together, these advancements have enabled MMSs to become powerful tools for capturing and analyzing spatial data, transforming fields such as urban planning, environmental monitoring, and infrastructure management. The integration of LiDAR into MMSs has profoundly impacted the geospatial industry, allowing for detailed modeling of objects and landscapes and expanding possibilities for future innovations.
One of the key direct outputs of employing MMS is its ability to provide location awareness through the fusion of diverse sensors, a fundamental concept required by many modern industries and technologies, such as robotics and autonomous vehicles [20]. Location awareness refers to a computer system’s ability to understand the geometric complexities of its operating environment. Even when the sensors and related software are not explicitly labeled as part of an MMS, these technologies often function as critical subsystems within larger, more complex systems. Consequently, advancements in the MMS field—such as data standardization, new sensor development, innovative calibration techniques, sensor fusion, and efficient computational hardware—can significantly benefit a wide range of applications. These improvements enhance the precision, speed, and consistency of systems reliant on spatial awareness, driving progress in sectors such as robotics, navigation, and automation.
The development of low-cost sensors and miniaturized platforms has democratized access to MMS technology, making it available for a broader range of users, from researchers to everyday consumers. One notable trend is the miniaturized MMSs integrated into consumer mobile phones. This innovation has attracted significant investments and is expected to rapidly bring complex MMS technology into the hands of millions of everyday users. As this trend continues, it promises to revolutionize how individuals interact with spatial data, opening new possibilities in navigation, augmented reality, and personal mapping applications.
A useful classification of MMSs can be made by platform. An MMS can be installed on a variety of carriers, each tailored for a specific environment; e.g., a backpack- or trolley-based MMS is designed to operate in indoor settings or natural environments such as forests, where specific mobility is essential [21,22,23]. Vehicle-mounted MMS platforms are optimized for large-scale data collection in urban areas, facilitating efficient mapping of cities. UAV MMSs are designed to be lightweight for large-scale aerial data capture. Miniaturized platforms, such as smartphones equipped with MMS sensor technologies [24,25], are developed for general-purpose applications, making spatial data collection accessible to everyday users across various contexts. While MMSs conceptually encompass all sensor-mounted mobile platforms (including UAVs, handheld SLAM devices, and shoulder-borne systems), this review primarily focuses on vehicle-borne MMSs due to their maturity, widespread adoption in geospatial industries, and capacity to integrate multi-sensor suites (e.g., LiDAR, GNSS, IMU, and multi-camera arrays). Where relevant, comparisons are drawn to UAV or portable systems to highlight technological synergies or trade-offs. This scope allows for an in-depth analysis of calibration, sensor fusion, and standardization challenges specific to large-scale, high-accuracy applications in urban and environmental domains.
A significant recent development in the MMS industry is related to the advancement of machine learning (ML) methods, particularly as part of artificial intelligence (AI) technologies, which have seen substantial progress since the early 2000s. Deep learning methods have provided MMS developers with powerful ML tools, unlocking new possibilities for real-time data processing and analysis [26]. The integration of AI and ML methods has opened new frontiers in real-time data processing, object detection, and autonomous navigation, further expanding the potential applications of MMSs. Applications such as real-time pedestrian detection, autonomous vehicle recognition, high-speed image matching, and road sign detection represent just a few of the many potential uses [27,28]. MMSs now serve as critical data providers for these AI-driven applications, offering high-quality spatial data that enhance the accuracy and functionality of ML models. The integration of AI and ML into MMS technologies is expected to continue driving innovation, transforming industries such as autonomous driving, urban planning, and smart city development.
Currently, numerous commercial and research prototypes of MMSs are tailored for specific applications; however, a significant challenge remains: the lack of standardization in MMS design and protocols [29]. This inconsistency hinders the scientific community’s ability to fully leverage customizable MMS technologies. The objective of this work is to review and compare existing state-of-the-art MMS technologies, highlight some important recent advancements, and propose a standardized framework for customizable MMS. The discussions in this work will hopefully facilitate the development of more advanced MMS systems, enhancing their functionality and broadening their applications.
This paper is structured into four main sections. Section 2 provides a brief overview of recent technological and methodological advancements in sensor development. Section 3 reviews and compares the latest experimental, scientific, and commercial MMS technologies. The most important recent applications of MMS technologies are reviewed in Section 4. Finally, key open issues, such as the lack of standardization, and possible future directions of MMSs are discussed in Section 5. This comprehensive approach seeks to enhance the usability and effectiveness of MMSs across various research contexts.

2. Sensors

An MMS contains a variety of sensors, each dedicated to a specific task, including data acquisition, localization, or environmental observation. Therefore, creating an MMS requires extensive knowledge about sensors. The aim of this section is to summarize key recent advancements in sensor development that are most beneficial for creating an MMS. Table 1 lists the most important sensors employed in an MMS, together with the relevant technologies for each sensor. In the following subsections, we review each sensor in detail.

2.1. Global Positioning

Real-time precise positioning is a fundamental requirement for any MMS. This capability relies on global positioning, which is achieved by measuring distances to satellites of the primary constellations: GPS, GLONASS, Galileo, and BeiDou. The GPS constellation, fully operational since 1993, consists of 31 satellites orbiting approximately 20,200 km above the Earth’s surface. GLONASS contains 24 operational satellites [30]. Since 2011, the GLONASS constellation has provided full global coverage with satellites positioned at around 19,100 km [31]. The Galileo system includes 24 active satellites, with plans to expand to 26 in the future. Lastly, BeiDou operates with 35 satellites located at an altitude of 35,800 km. Most commercial GNSS receivers are capable of utilizing all four constellations, significantly enhancing the accuracy of global localization.
An observer needs signals from at least four satellites to estimate their position accurately; however, obtaining signals from more satellites enhances the precision of global positioning. Two primary types of observations used in GNSS signal processing are carrier phase and pseudorange measurements, both of which involve unknown biases. Achieving high accuracy and precision in location awareness is facilitated by techniques such as real-time kinematic (RTK) positioning, precise point positioning (PPP), and the hybrid PPP-RTK method.
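To make the four-satellite requirement concrete, the following minimal sketch solves the single-epoch pseudorange equations for position and receiver clock bias with Gauss-Newton least squares. The satellite coordinates in the example and the error-free measurement model are illustrative assumptions (no atmospheric, orbit, or multipath errors), and the function name is ours, not from any GNSS library.

```python
import numpy as np

def solve_position(sat_pos, pseudoranges, iters=15):
    """Estimate receiver position (x, y, z) and clock bias b (metres)
    from satellite positions and pseudoranges via Gauss-Newton,
    assuming the model rho_i = ||sat_i - x|| + b."""
    x = np.zeros(4)  # initial guess: Earth centre, zero clock bias
    for _ in range(iters):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        residual = pseudoranges - (ranges + x[3])
        # Jacobian: negated unit line-of-sight vectors, and d(rho)/db = 1
        J = np.hstack([-(sat_pos - x[:3]) / ranges[:, None],
                       np.ones((len(sat_pos), 1))])
        dx, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x += dx
    return x[:3], x[3]
```

With noise-free synthetic pseudoranges from five satellites, the solver recovers the simulated receiver position and clock bias to sub-centimetre level; more satellites simply add rows to the least-squares system, which is why extra signals improve precision.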
In RTK, one or more reference stations are used, typically located within 15 km, to provide atmospheric corrections, resulting in high positional accuracy. Higher accuracies are possible by employing multi-constellation RTK [32]. To address the limitation of short distances between a rover and reference station, network RTK (NRTK) extends the operational range to hundreds of kilometers [33]. On the other hand, PPP achieves centimeter-level accuracy by recording measurements over longer durations without relying on ground stations [34,35]. The PPP-RTK method combines the advantages of both RTK and PPP, allowing for real-time high-accuracy localization [36]. Furthermore, the multi-constellation PPP-RTK technique demonstrates extremely low initialization times of less than one second while still achieving centimeter-level positioning accuracy [37].
In addition to 3D localization, GNSS could be employed in an MMS for other tasks such as estimating its attitude or multi-sensor synchronization. Multi-antenna GNSS configurations offer the capability for accurate attitude estimation [38]. It has been demonstrated that a dual-antenna setup with a 1 m baseline can achieve attitude estimations with an accuracy of 0.6 degrees [39]. Sophisticated installations have been developed to meet specific requirements, e.g., Zhang et al. [38] utilized a four-antenna configuration with a 0.3 m baseline designed for smaller vehicles and unmanned aerial vehicles (UAVs). Additionally, a multi-antenna GNSS integrated with an IMU has been reported to provide azimuth estimation quality comparable to that of differential GNSS/IMU configurations [40]. A particularly efficient setup, utilizing three antennas with a double differencing (DD) technique, demonstrated a standard deviation of 0.02 degrees for the heading angle [41]. These advancements highlight the potential of multi-antenna GNSS systems in enhancing navigation accuracy in various MMS applications.
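In its simplest form, the dual-antenna attitude idea reduces to computing the direction of the baseline vector between two antenna positions. The sketch below illustrates heading and pitch from a single baseline; the helper names are ours, and the antenna positions are assumed to be given in a local east-north-up (ENU) frame.

```python
import numpy as np

def heading_from_baseline(ant_rear, ant_front):
    """Heading in degrees, clockwise from north, of the baseline from the
    rear antenna to the front antenna (ENU coordinates)."""
    d = np.asarray(ant_front, float) - np.asarray(ant_rear, float)
    return np.degrees(np.arctan2(d[0], d[1])) % 360.0

def pitch_from_baseline(ant_rear, ant_front):
    """Pitch in degrees: elevation of the baseline above the horizontal."""
    d = np.asarray(ant_front, float) - np.asarray(ant_rear, float)
    return np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))
```

With only two antennas, roll about the baseline axis remains unobservable, which is one reason three- and four-antenna configurations such as those cited above are used when full attitude is required.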

2.2. Inertial Measurement Unit (IMU)

An IMU works based on gyroscopic principles. A gyroscope is a solid body rotating at a high angular velocity around an axis passing through it. A gyroscope is usually bundled with a mechanical instrument called a Cardan suspension that allows it to rotate freely without applying noticeable force to the rotating body. If the internal frictions of a gyroscope are ignored, its rotation will last forever owing to the conservation of angular momentum. The rotating body preserves its orientation in space as long as no external torque is applied; therefore, the rotation angles (Euler angles) can be measured by observing the Cardan suspension mechanism.
Four main classes of IMU have been developed and employed in the MMS industry. The first class is mechanical IMUs, which are based on a gyroscope supported by the suspension mechanism described above [42]. The second class is based on the Sagnac effect [43]. In 1913, French physicist Georges Sagnac observed an interesting phenomenon: the phase between two light beams traveling in opposite directions around a ring changes when an angular velocity is applied. This phenomenon, called the Sagnac effect, is the basis for fiber optic gyroscope (FOG) IMUs [44]. In a FOG IMU, two light beams travel through a medium such as a mirror-based path or optical fibers, and the phase shift between the two beams is measured. This shift relates directly to the angular velocity. Usually, three optical fiber loops are mounted perpendicular to each other to measure three independent rotation angles.
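For a fibre coil with N turns of area A rotating at angular rate Ω, the textbook Sagnac phase shift is Δφ = 8πNAΩ/(λc). The short sketch below simply evaluates this relation; the coil parameters in the test are arbitrary illustrative values, not the specification of any real FOG.

```python
import math

def sagnac_phase_shift(area_m2, turns, omega_rad_s, wavelength_m=850e-9):
    """Sagnac phase shift (radians) for a fibre coil:
    delta_phi = 8 * pi * N * A * Omega / (lambda * c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 8.0 * math.pi * turns * area_m2 * omega_rad_s / (wavelength_m * c)
```

The linearity in Ω is what makes the FOG usable as a rate gyro: doubling the angular velocity doubles the measured phase shift, and many fibre turns multiply the tiny effect into a measurable signal.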
The third class of IMUs is based on micro-electro-mechanical system (MEMS) technology [42,45], which combines electrical and mechanical parts into very small-scale devices. MEMS IMUs require significantly less space than mechanical and FOG IMUs. MEMS IMUs need to be accurately calibrated for different temperatures, since their behavior changes with environmental conditions. MEMS IMUs are also noticeably less expensive than the aforementioned classes, which has opened up many novel applications; examples of MEMS IMUs can be seen in works such as Kyynäräinen et al. [46]. The fourth class of IMU is based on the ring laser gyroscope (RLG), which relies on the Sagnac effect, similar to FOG IMUs [47].
In addition to a gyroscope, a modern IMU usually contains other components such as accelerometers, barometers, and magnetometers.
An accelerometer measures the force applied to the sensor in a specific direction. Two main classes of accelerometers exist: closed loop [48,49] and open loop [50]. In open-loop accelerometers, a proof mass is suspended in the sensor, and its displacement under acceleration is observed directly; open-loop designs are therefore less expensive and easier to build. Closed-loop accelerometers instead maintain the position of the proof mass by applying a feedback current that generates a counteracting (e.g., magnetic) force; the required current is a measure of the acceleration, which makes closed-loop accelerometers more accurate but more expensive to build.
A barometer is a sub-component of an IMU that measures air pressure, which is used to estimate the approximate altitude of the device. The altitude is also required for magnetic declination estimation. There are different sensing principles for measuring pressure; comprehensive reviews can be found in works such as Kumar and Tanwar [51] or Fiorillo et al. [52].
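As an illustration of the pressure-to-altitude step, the sketch below applies the international barometric formula for the troposphere. The sea-level reference pressure is the standard-atmosphere value; a real system would substitute a locally calibrated reference.

```python
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """Approximate altitude (m) from air pressure (hPa) via the
    international barometric formula, valid in the troposphere:
    h = 44330 * (1 - (P / P0)^(1/5.255))."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))
```

Because the formula is only an approximation of a standard atmosphere, barometric altitude in an IMU is typically fused with GNSS height rather than used on its own.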
A magnetometer is employed when the absolute orientation of an IMU is required. MEMS magnetometers are based on the Lorentz force principle [53]. There is an offset between geographic north and magnetic north, known as the magnetic declination, which is compensated by employing a suitable magnetic model such as the world magnetic model [54].
Calibrating an IMU usually requires a precise rotating table whose orientations and applied forces are precisely controlled by electro-mechanical servo motors. The standard calibration process involves reading highly accurate reference values from the table and subsequently adjusting the IMU readings to those reference values [55]. The calibration is usually performed for different temperature settings, since IMU behavior (especially for MEMS IMUs) can be temperature dependent. Despite the robustness of such calibration schemes, they require access to expensive calibration mechanisms; therefore, other practical calibration methods, such as calibration with a fixed location [56], have been developed by researchers.
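A heavily simplified version of the reference-table adjustment is a per-axis linear fit of scale factor and bias against the table's known rates. The sketch below (plain least squares, hypothetical function names, linear error model only) deliberately ignores cross-axis coupling, g-sensitivity, and the temperature terms that a full calibration would include.

```python
import numpy as np

def calibrate_axis(table_rates, sensor_readings):
    """Least-squares fit of scale factor and bias for one gyro axis,
    assuming the linear model: reading = scale * true_rate + bias."""
    A = np.column_stack([table_rates, np.ones_like(table_rates)])
    (scale, bias), *_ = np.linalg.lstsq(A, sensor_readings, rcond=None)
    return scale, bias

def correct(readings, scale, bias):
    """Invert the calibrated model to recover true rates."""
    return (readings - bias) / scale
```

In practice the fit would be repeated at several controlled temperatures and the resulting scale/bias pairs stored as a temperature-dependent lookup, which is exactly why MEMS IMU calibration is performed in a climate chamber.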

2.3. Camera

Cameras are central to almost any MMS design. A camera senses one or multiple parts of the electromagnetic spectrum. Cameras can be classified based on different technical criteria, one of which is the capture configuration, which yields two classes: global shutter and rolling shutter cameras [57]. The first category reads the whole pixel grid simultaneously, while the second reads it sequentially. Global shutters are more desirable for MMS applications since they are less sensitive to platform movement [58]. Most MMS cameras are equipped with an external trigger mechanism (wired or wireless) that enables time-accurate image capture [59]. Accurate triggering is absolutely necessary for precise synchronization of MMS components. A camera's optical system design deals with the placement and configuration of a set of lenses to adjust optical parameters such as the field of view. Usually, a fixed-lens camera is more suitable for an MMS, since the geometric properties of the perspective geometry are preserved, in contrast to a variable focal-length lens system, which is more complex and less accurate [60].
In camera systems used in MMSs, usually several electromagnetic bands are observed. Multi-band cameras follow three fundamental design approaches. In the first approach, a series of exposures is captured through a tunable spectral filter [61]. The second approach uses a camera array, where each camera records a specific band of the electromagnetic spectrum [62]. In the third approach, neighboring pixels are designed to capture different bands at approximately the same pixel location [63].
In addition to multi-band settings, designing a suitable viewing angle is an important factor in camera-based MMSs. One way to achieve wide viewing angles is by employing multi-camera systems, which are used when a single camera is not sufficient to cover the designed field of view or the designed electromagnetic spectrum. Instead, a set of cameras is employed for applications such as wide-area imaging [64]. These capabilities have enabled MMS designers to employ complex multi-band, multi-directional cameras, opening up new possibilities for tasks such as environmental monitoring and material classification.
Geometric calibration of a general multi-camera system is possible by employing the following collinearity-based condition equation:

$$F\!\left(\mathrm{IOP}_{i=1:n},\; R_{i=2:n}^{\zeta,\eta,\psi},\; \Delta_{i=2:n},\; R_{t=t_1:t_m}^{\omega,\varphi,\kappa},\; X_{0}^{t=t_1:t_m},\; X_{1:n}^{O},\; x_{1:n}^{I}\right) = 0,$$

where $\mathrm{IOP}_i$ denotes the interior orientation parameters of the $i$-th camera, $R_i^{\zeta,\eta,\psi}$ the relative orientation of the $i$-th camera in the camera frame, $\Delta_i$ its relative position in the camera frame, $R_t^{\omega,\varphi,\kappa}$ the orientation of the camera system at time $t$, $X_0^t$ the position of the origin of the local coordinate system defined for the camera system at time $t$, and finally, $X$ and $x$ the object and image coordinates, respectively.
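As a concrete, heavily simplified instance of the collinearity condition for a single camera, the sketch below projects object points through a pinhole model. The computer-vision sign convention is assumed (photogrammetric formulations often negate the focal term), and the rotation, position, and intrinsic values in the test are illustrative.

```python
import numpy as np

def project(X_obj, R, X0, focal, principal_point):
    """Project 3D object points (N x 3) into image coordinates via the
    collinearity idea: rotate into the camera frame, apply the
    perspective division, then scale by the focal length and shift by
    the principal point.  Lens distortion is omitted."""
    Xc = (R @ (X_obj - X0).T).T       # object points in the camera frame
    xy = Xc[:, :2] / Xc[:, 2:3]       # perspective division by depth
    return focal * xy + principal_point
```

In the multi-camera case, the per-camera rotation and offset (the R and Δ terms above) are composed with the time-dependent pose of the whole rig before this projection step.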
Using cameras in MMSs provides a form of location awareness, since images can be processed to estimate a local position. In this sense, cameras can be placed in the same category as IMUs, because image locations and orientations can be estimated with some degree of uncertainty. Techniques such as SfM, aerial triangulation, or stitching can be employed to recover the trajectory of a sequence of images.

2.3.1. Camera Calibration

Camera calibration is the process of estimating the internal, structural, external, radiometric, and systematic characteristics of a single or multi-camera system. These calibration methods can be classified into two separate schemes: geometric [65] and radiometric [66].
Geometric camera calibration is the process of determining intrinsic, structural, extrinsic, and systematic parameters of a camera to accurately connect 3D object points to their corresponding 2D image points [67]. Intrinsic parameters involve the camera’s lens characteristics, such as focal length, principal point, and distortion coefficients, which account for lens distortion and other optical aberrations [68]. Structural parameters deal with the internal configuration of a multi-camera system. Stability analysis of multi-cameras deals with studying structural changes of a multi-camera system over time [69]. This analysis is essential for efficiently employing a multi-camera system in an MMS. Extrinsic parameters describe the camera’s position and orientation in the environment, defining how it is situated relative to the objects it captures. Systematic calibration involves aligning the cameras with respect to other MMS components. This aspect of the calibration ensures a robust fusion of different information, allowing data from MMS sensors to be fused into a cohesive representation. Accurate calibration is critical because it ensures that each data point captured by other components aligns with its corresponding pixel location in the captured images. Geometric camera calibration usually involves capturing images of a known calibration body, such as a checkerboard or coded targets, from different angles [70]. By analyzing these images, algorithms can calculate the parameters of the transformations that map 3D points in the world coordinate system to their corresponding 2D projections in the images.
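A minimal version of estimating such a 3D-to-2D mapping is the direct linear transform (DLT), which recovers a 3x4 projection matrix from known 3D-2D correspondences. The NumPy sketch below is an illustration of the idea rather than a full calibration pipeline: it assumes distortion-free points and synthetic correspondences, and the function name is ours.

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Estimate the 3x4 projection matrix P (up to scale) from >= 6
    3D-2D point correspondences via the direct linear transform (DLT).
    Each correspondence contributes two homogeneous linear equations."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        p = np.array([Xw, Yw, Zw, 1.0])
        rows.append(np.concatenate([p, np.zeros(4), -u * p]))
        rows.append(np.concatenate([np.zeros(4), p, -v * p]))
    A = np.array(rows)
    # The solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```

Real calibration routines refine such a linear estimate with a non-linear bundle adjustment that also solves for the distortion coefficients mentioned above.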
Radiometric calibration is the process of adjusting and modeling a camera’s response to light to accurately represent the intensity values of the scene [66]. Unlike geometric calibration, which focuses on the spatial arrangement of pixels, radiometric calibration addresses how pixel intensities correspond to actual physical radiance values [71]. Radiometric camera calibration includes correcting for non-linearities in the camera’s response, exposure settings, lens vignetting, and sensor noise. By modeling these factors, radiometric calibration ensures that each pixel’s intensity in an image accurately reflects the real-world brightness, allowing for consistent comparisons across images taken with different exposure settings or cameras. Radiometric calibration usually requires a special test field that brings the calibration process under a constraint-controlled environment [72].
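One common building block of such radiometric correction is flat-field compensation, which removes dark current and divides out vignetting and per-pixel gain using a dark frame and a flat frame captured in a controlled setting. The sketch below assumes a simple linear sensor model; the function name and the synthetic frames in the test are illustrative.

```python
import numpy as np

def flat_field_correct(image, flat, dark):
    """Radiometric flat-field correction under a linear sensor model:
    subtract the dark frame, then divide by the normalised flat-field
    to compensate vignetting and pixel-gain variation."""
    gain = (flat - dark) / np.mean(flat - dark)
    return (image - dark) / gain
```

After this step, pixel values from different parts of the sensor (and, with an absolute reference target, from different cameras) become comparable, which is the prerequisite for the cross-image consistency described above.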

2.3.2. Simultaneous Localization and Mapping (SLAM)

Photogrammetric and 3D-vision principles provide a mathematical formulation to estimate the locations and orientations of images in different settings without necessarily requiring additional information from other sensors. Simultaneous localization and mapping (SLAM) broadly refers to finding the camera position and orientation while collecting geometric information, usually as a sparse point cloud of the scene [73]. It enables cameras to act as localization sensors, especially in situations where no other reliable source of location information is accessible. Image-based SLAM can be implemented in different ways; e.g., visual odometry methods such as bundle block adjustment (BBA) can be used to estimate camera and scene parameters through an optimization process [74]. In general, a SLAM problem can be addressed with diverse camera configurations and other sensors. In monocular (single-camera, image-based) SLAM, a set of images from only one camera is given, while the trajectory of the moving camera is in question [75]. Camera-based SLAM can also be enabled with other camera settings, including stereo, RGB-D, and multi-camera configurations. Knowing parameters such as the interior orientation of the cameras, or the internal structure in the case of multi-camera systems, is highly important and can significantly increase the reliability of the SLAM output. Examples of camera-based SLAM can be seen, e.g., in the works of Campos et al. [76] or Murai et al. [77].
Image-based SLAM is usually implemented by sequentially finding tie points between images, creating stereo pairs, solving the relative orientation between pairs of images, and combining image pairs to efficiently build a network of images. Usually, the image-based tie points are translated into a sparse point cloud that represents the environment of the SLAM system. Local optimization by employing BBA helps to reduce the effect of noise and build a robust estimation; moreover, geometric constraints such as loop closures can significantly help the SLAM system mitigate the effect of accumulated errors.
Deep learning is gradually finding its way into image-based SLAM; e.g., DROID-SLAM, proposed by Teed and Deng [78], employs a neural network for monocular, stereo, and RGB-D SLAM. An overview of deep-learning SLAM methods can be found, e.g., in Li et al. [79].

2.3.3. Automatic Image Registration and Image-Based Point Cloud Generation

Automatically connecting two or more images is a fundamental photogrammetry and computer vision problem [80]. Two-dimensional image-point connections are called tie points. Automatic tie-point extraction is a process involving finding optimum image point positions that are less susceptible to affine or projective transformations (localization) and finding correct matches between subsequent or overlapping images [81]. Multi-modal image registration is a more demanding task that has recently received attention [82]. Most photogrammetric processing steps require robust image registration; therefore, the existence of blunders could significantly affect the subsequent processing steps. In image registration, filtering of false matches is performed by employing techniques such as random sampling consensus (RANSAC), where a wide variety of kernel functions could be selected to adapt the solution to a specific situation of a problem (see, e.g., [83,84]).
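As a minimal illustration of RANSAC-style blunder filtering, the sketch below estimates a pure 2D translation between matched tie-point sets and returns the inlier mask. The translation model is deliberately simple for clarity; real registration pipelines plug fundamental-matrix, homography, or affine kernels into the same hypothesize-and-verify loop, and the function name is ours.

```python
import numpy as np

def ransac_translation(src, dst, threshold=1.0, iters=200, rng=None):
    """RANSAC for tie-point filtering: estimate a 2D translation between
    matched point sets (N x 2 each).  A minimal sample is a single
    correspondence; the best hypothesis is refit on its inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                              # one-match hypothesis
        inliers = np.linalg.norm(dst - (src + t), axis=1) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)  # inlier refit
    return t, best_inliers
```

The key property illustrated here is robustness: a handful of gross mismatches cannot pull the estimate, because hypotheses built from them gather almost no support.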
At the other end of the photogrammetric processing chain lie image-based point clouds, which are considered the foremost output of this process; a chain of analysis (e.g., applying classifiers) is expected to follow. Point-cloud generation requires dense matching between all image pairs that have some level of overlap. Therefore, the quality of a photogrammetric point cloud directly depends on factors such as the geometric strength of intersections [85] and the correctness of image matches. Several techniques can be used to produce image-based point clouds: e.g., global and semi-global matching [86] use network optimization strategies to find matches between stereo pairs, while ML-based methods use approaches such as hidden Markov models (HMMs) to address this problem. The problem can also be cast as a semi-supervised classification task addressed with neural networks [87].
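To make the dense-matching step concrete, the toy sketch below computes a disparity map with naive winner-takes-all block matching using the sum of absolute differences (SAD); semi-global matching extends this local cost with smoothness penalties aggregated along image paths. The synthetic images and window size are illustrative assumptions.

```python
import numpy as np

def disparity_sad(left, right, max_disp=4, win=1):
    """Naive block matching along epipolar lines: for each left pixel, pick
    the disparity whose right-image patch minimizes the SAD cost."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    pad = win
    L = np.pad(left.astype(float), pad)
    R = np.pad(right.astype(float), pad)
    for y in range(h):
        for x in range(w):
            best_cost, best_d = np.inf, 0
            patch = L[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + 2 * pad + 1, x - d:x - d + 2 * pad + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# The right image is the left image shifted by 2 pixels: expected disparity 2.
left = np.tile(np.arange(16), (8, 1))
right = np.roll(left, -2, axis=1)
d = disparity_sad(left, right, max_disp=4)   # interior pixels recover disparity 2
```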
Despite the clarity of the photogrammetric intersection concept, depth extraction from a single image without using the intersection principle is also possible; e.g., techniques such as shape from shading [88] or deep learning can be employed to approximate building heights in aerial photos. Related research outputs, such as cross-domain diffusion [89], use the same principle to create neural networks that generate 3D information by employing highly trained non-linear classifiers.

2.3.4. Why Are Cameras Almost Always Required?

Cameras, as lightweight and cost-effective sensors, play a vital role in MMSs due to their versatility and the wide range of data that can be extracted from images. For instance, image classification methods are significantly easier to implement for object recognition compared to processing LiDAR point clouds. Additionally, cameras complement LiDAR data by adding rich visual information, such as color and texture, to the point clouds. This integration enhances the overall quality and usability of the collected data, making cameras an essential component of most MMSs. Their ability to provide both visual and spatial context ensures that MMSs can deliver comprehensive and detailed representations of the environment.

2.4. Light Detection and Ranging (LiDAR)

LiDAR is one of the most important components of most MMSs, as it efficiently collects 3D information from the surrounding environment. Current applications of LiDAR are vast, since it provides highly accurate distance measurements. Employing LiDAR in different architectures, such as mirror-based configurations [90] or arrays of rotating sensors [91], enables 2D and 3D measurements. The measurement principle is based on emitting laser beams toward a target and measuring the time it takes for the light to travel and bounce back after hitting a surface, providing a precise distance measurement [18]. These 3D points can be combined to form detailed point clouds, and a scanned area can be represented in high resolution by digital twins such as LiDAR point clouds [92].
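The ranging principle can be summarized in a few lines: range follows from half the round-trip travel time of the pulse, and a 3D point follows from the range plus the beam's scan angles. The numeric values below are illustrative.

```python
import math

C = 299_792_458.0  # speed of light in vacuum (m/s)

def tof_range(round_trip_s):
    """Range from a pulse's round-trip time: the light travels out and back."""
    return C * round_trip_s / 2.0

def polar_to_xyz(r, azimuth, elevation):
    """Convert a range plus the beam's scan angles (radians) to a 3D point."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z

r = tof_range(667e-9)                              # a ~667 ns round trip is ~100 m
p = polar_to_xyz(r, math.radians(45), math.radians(10))
```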
LiDAR systems can operate at various wavelengths, typically in the near-infrared spectrum for terrestrial mapping [93], but green (532 nm) wavelengths are also used for bathymetric LiDAR [94], which penetrates water bodies. The first demonstrated LiDAR application was in meteorology [95]. It has applications across multiple domains, such as autonomous vehicle navigation [96], where it aids in object detection and environment mapping, and topographic and forest canopy studies [97,98]; it is also used in many other applications, such as urban planning [99], enabling infrastructure assessment and site analysis [100,101]. The accuracy of LiDAR data is influenced by factors such as the laser pulse frequency, angle of incidence, atmospheric conditions, and the reflectivity of the scanned surfaces, which necessitates advanced calibration and processing techniques to maximize its utility in complex environments [102].
Generating a photogrammetric point cloud usually needs considerable computational power and a certain amount of post-processing time. LiDAR fills this computational-cost gap, since collecting dense point clouds does not require high computational power. Despite this advantage, LiDAR can in no way be considered a replacement for the camera systems in MMSs: cameras provide a practical form of information that is better suited to many applications, such as object detection and visual presentation. If the relative orientations and translations of the LiDAR with respect to the cameras are known, and an accurate synchronization between LiDAR points and camera shots exists, a data fusion approach can bring the power of both data sources under a unique 2D–3D presentation umbrella, where every image point is connected to its 3D object point. In the post-processing phase, many sophisticated geometric operations are applied to a point cloud; e.g., the number of points is efficiently reduced [103], or the point clouds are converted into mesh forms that are more suitable for visualization [104].
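As an example of such a geometric post-processing operation, the sketch below reduces the point count with voxel-grid downsampling (one centroid per occupied voxel). This is a common thinning strategy, though not necessarily the method of [103]; the voxel size and synthetic cloud are assumptions.

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Keep one centroid per occupied voxel, thinning a dense cloud."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    inv = inv.reshape(-1)                 # flat indices into the voxel list
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inv, points)          # accumulate the points of each voxel
    return sums / counts[:, None]         # centroid per voxel

rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 2.0, size=(10_000, 3))   # synthetic dense cloud, 2 m cube
thin = voxel_downsample(cloud, voxel=0.5)         # at most 4**3 = 64 voxels remain
```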

2.4.1. Geometric Calibration of LiDARs

Three calibration schemes can be considered for a LiDAR system: internal calibration of the laser components, relative calibration of the LiDAR with respect to other sensors, and time synchronization. Usually, the first scheme is performed by the sensor manufacturer; however, the second and third are essential for MMS developers who integrate and employ LiDARs.

2.4.2. LiDAR–Camera Calibration

Two main strategies exist to find the relative orientation and position of a LiDAR with respect to a camera. The first strategy relies on targets that can be extracted from both LiDAR point clouds and images. The second relies on optimizing the time-dependent data from both sources with automatic registration schemes; e.g., a photogrammetric point cloud can be registered to a LiDAR point cloud by employing point cloud registration methods [105]. Alternatively, a photo-realistic rendering of the point cloud can be used to generate a synthetic image [106], and image registration methods [107] can subsequently be applied to find common points between images and LiDAR point clouds.
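Once corresponding targets are available in both sensors' frames, the LiDAR-to-camera rotation and translation can be estimated in closed form. The sketch below uses the Kabsch (SVD-based) least-squares solution on synthetic correspondences; the algorithm choice and data are illustrative assumptions, not drawn from the cited works.

```python
import numpy as np

def rigid_align(A, B):
    """Least-squares rotation R and translation t with B_i ≈ R @ A_i + t.
    A, B are (N, 3) corresponding points, e.g. calibration targets seen
    in the LiDAR frame (A) and the camera frame (B)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic check: recover a known 30-degree yaw and an offset.
th = np.radians(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 1.5])
A = np.random.default_rng(0).normal(size=(20, 3))
B = A @ R_true.T + t_true
R, t = rigid_align(A, B)    # recovers R_true and t_true
```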

2.4.3. LiDAR-SLAM

Similar to cameras, LiDARs can be employed to find the trajectory of an MMS. One approach is to sequentially register LiDAR point clouds (see, e.g., [108]) to form a robust path of the moving platform. LiDAR SLAM, despite being extensively successful, suffers from drift and does not preserve global accuracy over long baselines; however, it is a valuable alternative localization solution under specific circumstances, e.g., when a GNSS signal is temporarily lost for a short period of time or in situations where no other localization method exists at all. In a more comprehensive scenario, a combination of GNSS, IMU, LiDAR, and cameras can be used for complex multi-sensor location awareness.
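The drift behaviour mentioned above can be illustrated by chaining slightly biased scan-to-scan estimates: a tiny constant heading error in each registration compounds into a large trajectory error, which is exactly what loop closures or GNSS fixes correct. The planar (SE(2)) poses and the 0.2° bias are toy assumptions.

```python
import numpy as np

def se2(theta, x, y):
    """Homogeneous 2D pose (rotation + translation) for a planar trajectory."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

# Each scan-to-scan registration moves 1 m forward with a small heading bias
# (0.2 degrees), mimicking the additive error of sequential LiDAR SLAM.
step = se2(np.radians(0.2), 1.0, 0.0)
pose = np.eye(3)
for _ in range(100):
    pose = pose @ step                     # chain 100 relative estimates

heading_deg = np.degrees(np.arctan2(pose[1, 0], pose[0, 0]))  # accumulated 20 deg
lateral_drift_m = pose[1, 2]               # sideways error after 100 m of travel
```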

2.5. Robot Operating System (ROS)

The Robot Operating System (ROS) is the most prominent effort to create a modular software platform that addresses the need for integrating the sensor components of a robotic system. ROS can serve as the software foundation of any MMS. Despite its wide application in robotics, its capacities are still unknown to many MMS developers. Numerous studies have demonstrated the applicability of ROS in handling large volumes of data in an integrated sensor configuration. There are two generations of ROS. The first generation (ROS 1) was primarily developed around basic quality and performance requirements [109]; however, challenges in areas such as visualization, control, and navigation led to the second ROS project (ROS 2), which started from scratch to address ROS 1's shortcomings. ROS 2 is now the foundation of most technological projects involving robot-level complexity [110].

3. Mobile Mapping Systems

3.1. MMS Research Outputs

An MMS can encompass any of the aforementioned sensors in an integrated hardware and software environment. A precise synchronization mechanism must ensure the timely capture of data frames from non-homogeneous sensors. In this sense, an MMS acts like a body whose sensor components are its organs: it needs a brain that analyzes the data arriving from this nervous system at the central processing unit, filters outliers, merges data, adds value to the sensor outputs by fusing non-homogeneous components, and produces the required higher-level outputs with its ML methods.
Many attempts have been made to create commercial and research-based MMSs, all following relatively similar design and implementation concepts while varying widely in the philosophy they are built upon. It therefore seems that MMS designers need to follow standard manufacturing protocols to build compatible systems.

3.2. Custom MMS Configurations

Numerous studies have demonstrated custom MMS configurations for specific applications; e.g., a multi-camera MMS [85] was developed by the Finnish Geospatial Research Institute (FGI) for urban mapping. A customized MMS with one rotary LiDAR on the side, one MEMS LiDAR in front, a LadyBug V5 multi-camera, GNSS, and an IMU is shown in Figure 2a. Two custom MMSs developed by FGI, both LiDAR-based and without cameras, are demonstrated in Figure 2b,c. Portable MMS designs that an operator can use flexibly without a car platform are an interesting topic pursued by many research works; e.g., the FGI Roamer can be installed on a backpack. A portable LiDAR-based MMS was demonstrated by Trybała et al. [111], a camera-based MMS built on the visual SLAM principle by Menna et al. [112], and an open-source handheld MMS solution by Będkowski [113]. These are just a few examples of the vast number of custom MMSs.

3.3. Commercial MMSs

Commercial MMSs cover a wide range of data-capturing applications with diverse settings that are easy for their operators to handle. Despite this relative ease of use, high price tags make them unavailable for many potential applications that need geospatial information. In this section, a brief selection of commercial MMSs is listed and discussed. The Viametris MS-96 is a modular MMS with two LiDARs, four cameras, GNSS, and an IMU: one LiDAR is mounted horizontally on top of the system and the second in front of the system at a tilt angle. Its localization accuracy is reported as ±1 cm, which is typical for most commercial MMSs, and the system supports plug-and-play addition of new sensors. Leica has developed several MMSs, two of which are demonstrated in Figure 2; in this setup, two LiDARs are placed on top of the system, both at tilt angles. The GNSS antenna of the MS-96 seems better located than those of the TRK 100 and TRK Neo, for which point position accuracies of 11 mm and 19 mm, respectively, were reported. Leica has also developed trolley-based MMSs, such as the ProScan, which an operator can move on the ground. A colorized point cloud captured by the TRK 100 is demonstrated in Figure 3, which shows the benefits of integrating cameras with LiDARs in an MMS.
A commercial MMS making extensive use of cameras is shown in Figure 4 (RIEGL VMX-2HA). In this MMS, two tilted LiDARs are integrated with several cameras in a complex configuration; such a dual tilted-LiDAR arrangement can be seen in many other commercial MMSs. A separate six-head multi-camera is included for 360° image generation, demonstrating the usability of multi-cameras in complex MMS setups. A simpler arrangement of a six-head multi-camera with two tilted LiDARs is employed in the Trimble MX50 (Figure 5), demonstrating a simple yet efficient setup by the MMS designer.
Despite the vast design differences demonstrated in commercial MMSs, some structural similarities show that optimum configurations are achievable, at least for general-purpose MMSs.

3.4. Alternative MMS Platforms

Beyond vehicle-borne systems, MMS platforms vary significantly in design and application. UAV-mounted LiDAR excels in aerial surveys and precision agriculture due to its flexibility and nadir perspective, though it faces payload and endurance limitations. Shoulder-borne and handheld SLAM systems (e.g., GeoSLAM or iPhone LiDAR) enable rapid indoor mapping or forestry surveys but trade absolute accuracy for portability. While these platforms share core technologies (e.g., LiDAR–camera fusion, SLAM), their operational constraints—such as power budgets, kinematic stability, and spatial coverage—warrant distinct calibration and processing approaches. For brevity, this review emphasizes vehicle-borne systems as a benchmark, though many principles (e.g., Section 2’s sensor calibrations) are transferable.

4. Recent Advancements in Applications

Researchers and engineers have employed MMS technologies in a vast number of applications. Different aspects of MMSs widen this range; e.g., an MMS can easily capture time-lapsed information on target objects such as roads, buildings, and cultural heritage. An MMS can also be customized to capture additional data layers whenever needed; e.g., a thermal camera can be attached to a modular MMS to capture heat leaks in buildings, or a multi-spectral camera can be employed for complex surface classification tasks. A comprehensive review of MMS applications can be found, e.g., in Elhashash et al. [23]. Here, we mainly aim to complement that review with a summary of research published in 2022–2025. Table 2 briefly lists some of the recent applications, grouped into the categories that cover most of the work of the past few years. Four major categories are considered: (1) natural resource management (NRM), such as forest inventory or water monitoring; (2) road and tunnel mapping and asset management; (3) data; and (4) building and cultural heritage documentation. For each selected category, a list of applications is stated and briefly discussed.

4.1. Natural Resource Monitoring and Management

Natural resource monitoring (NRM) covers the observation of a broad range of entities, such as forests, lakes, sea shorelines, and rivers, using the technologies integrated in MMSs. There is usually a constant requirement to observe natural entities at short intervals, or regularly, to track biological and environmental parameters. It is therefore essential for environmental observatory organizations and individual NRM parties to have access to MMSs to collect timely, high-quality, and accurate geospatial data. A large number of studies cover different aspects of NRM using MMSs; here, we briefly cover some of the most important applications receiving the widest attention, since compiling a comprehensive list would require separate research outside the scope of this work.
The NRM literature discussed in this section is divided into a selected set of observational applications on forests, agricultural fields, water bodies such as lakes, and green areas such as parks and gardens; it therefore covers only a portion of NRM applications.

4.2. Forest Management

Trees, as the fundamental building blocks of forests, are the primary focus of studies in MMS forest and green area applications. Recognizing individual trees from point clouds is an important task that requires special attention [114]. Counting the number of trees can help estimate forest biomass and is also beneficial for wood production industries [115]. Identifying tree species is equally crucial, as it provides insights into biodiversity and ecosystem health [116]. Investigating the spectral characteristics of trees can enhance our understanding of biophysical and biochemical properties of forests [117].
Many different aspects related to individual trees can be measured remotely by MMSs. For instance, indicators such as irrigation status, dehydration levels, and leaf area index can be assessed by analyzing the spectral data of trees [118,119]. Additionally, pest detection is possible using spectral analysis, which can help in early intervention and forest management [120]. These applications underscore the importance of integrating MMSs with spectrometry sensors, as they enable comprehensive and efficient monitoring of forest ecosystems, contributing to sustainable management and conservation efforts.

4.3. Precision Agriculture

MMSs are highly useful for monitoring and maintaining agricultural production. Data collected by MMSs can be analyzed to address a variety of applications, including harvest estimation [121], yield mapping and monitoring [122], crop classification [123], irrigation planning and monitoring [124], water stress monitoring [125], and pest detection [120]. In precision agriculture applications, similar to NRM, UAVs are often the most favorable platform. This is because an orthogonal (nadir) view is typically the best direction to collect detailed and comprehensive information from a higher perspective, enabling more accurate and efficient data analysis for improved agricultural management. The integration of MMSs and UAVs enhances the ability to monitor crops, optimize resource use, and support sustainable farming practices.

4.4. Mapping and Map Updating

One of the primary applications of MMSs is in creating 2D and 3D maps [126]. The valuable potential of MMSs for providing accurate 3D spatial information makes them appropriate tools in the map generation chain. Usually, the initial outputs compiled from MMS raw data do not comply with the typical cartographic and mapping standards used in 2D map production; however, these outputs are precise sources of information for either producing or updating 2D cartographic maps. Three-dimensional maps usually result from processing point clouds into simplified 3D meshes usable in applications such as virtual reality, 3D GIS, and games.
Three-dimensional maps gradually find their way into real estate, where detailed virtualizations help the sector to maintain buildings and present venues for selling or renting [127]. These maps are also invaluable resources of geometric data for other applications, such as wireless network coverage analysis that requires optimizing placements of antennas [128].
On the civil side, many construction projects use MMSs to monitor and document the progress of projects, e.g., in the mining sector, 3D maps are used to ensure the correctness of projects [129]. In road construction projects, MMSs are employed to address tasks such as estimating cutting/filling [130] and project monitoring and management [131].
Documenting cultural heritage is another important application of MMSs. Besides the artistic values that MMSs provide, precise measurements could help collect invaluable information regarding small- to large-scale sites that are prone to dangerous situations such as natural disasters. Therefore, it seems necessary to employ precise 3D maps generated from MMSs for documenting historical sites [132].

4.5. Road Inventory and City Asset Management

Mapping and monitoring road assets is a critical task primarily managed by various stakeholders, including municipalities, local governments, and contractors [133]. MMSs have proven to be highly efficient tools for optimizing this process. To achieve the objectives of road asset management, car-based MMS platforms are commonly utilized. These systems are equipped with cameras and LiDAR sensors to collect comprehensive data about the surrounding environment, including road surfaces, signage, and infrastructure. The captured images and LiDAR point clouds enable detailed analysis of road conditions, such as surface smoothness, and facilitate the detection of flaws like cracks, potholes, and other anomalies. This data-driven approach enhances the accuracy and efficiency of road maintenance and planning, ultimately contributing to safer and more sustainable urban infrastructure.

4.6. Gamification and Virtual Reality

Many advanced applications of employing data captured by MMSs are now possible, thanks to powerful 3D engines such as Unity and Unreal engines. These frameworks can efficiently generate highly realistic renders of urban environments [134] for use in applications such as gaming, simulation, and various engineering applications. MMSs are among the most valuable sources of data for creating detailed models within these engines. The incorporation of physics-based simulations in these platforms enables the creation of virtual environments that can be utilized for innovative applications, such as generating synthetic training data for robotic and autonomous systems [135]. This capability is particularly useful for developing and testing algorithms in controlled, yet highly realistic, virtual environments.
Table 2. Summary of important literature on mobile mapping systems and key applications.

Category | Selected literature | Key applications
Natural resource monitoring | Individual tree segmentation [114], forest digital twin [136] | Individual tree detection; trunk recognition from point clouds
Forest management and monitoring | Forest parameter estimation at the single-tree level [137], 3D mapping [138], biomass and CO2 estimation [139], individual tree detection [140] | Biomass estimation; forest digital twin; CO2 estimation
Environment monitoring | Rocky landslide monitoring [141], water body monitoring [142] | Camera-based MMS; hazardous site mapping; image-based point clouds; UAV MMS platforms
Precision agriculture | Production estimation [121], crop classification [123], irrigation [143] | Harvest estimation; yield mapping and monitoring; crop classification; irrigation planning and monitoring; water stress monitoring; pest detection
Mapping | Map updating [144,145], robotic 3D mapping [146] | Car-mounted MMS; deep-learning classification
Real estate | Flood risk mapping [147] | Per-building flood risk modeling
Mining industries | Outdoor and indoor mapping [129] | Geotechnical and geological studies
Road mapping, inventory, and asset management | Traffic infrastructure and road property survey [148], highway infrastructure survey and road inventory [133], evaluation of road infrastructure in urban and rural corridors [149], road refurbishment [150], rockfall risk management [151], road boundary extraction [152] | Road property survey; spatial accuracy investigation; traffic sign detection; lighting pole detection; road centerline and building corner detection; enhanced road safety by monitoring hazardous objects
Construction | Large-scale project monitoring [153], tunnel inspection [154] | MMS with ground-penetrating radar (GPR); site mapping
Underwater | Underwater mapping [155] | Ocean depth mapping
Low-cost developments | Combining low-cost UAV footage with MMS point cloud data [156] | Combining terrestrial MMS and UAV; 3D urban map generation
Gamification | Identification of road assets [157] | Applications of game engines
Data | SLAM dataset for urban mobile mapping [114], semantic segmentation [158], map updating using autonomous vehicles [159] | Large open datasets; 3D point cloud annotation; MMS data capture by autonomous vehicles
Localization | SLAM on an MMS with multi-camera and tilted LiDAR [160] | Customized SLAM by sensor fusion
Custom MMS | Ground-penetrating radar MMS [153] | Additional sensors
Cultural heritage documentation | Forgotten cultural heritage under forest environments [161], continuous monitoring [132] | Indoor and outdoor mapping; site geometric documentation
Wireless networks | Network coverage estimation [128] | Ray tracing using 3D maps; wireless propagation models
Beyond the primary domains discussed, MMS technologies have found a wide range of miscellaneous applications across various fields. For instance, MMS has been successfully employed for underwater monitoring [155], enabling the collection of high-resolution spatial data in aquatic environments. This application is particularly valuable for marine research, underwater infrastructure inspection, and environmental monitoring. Additionally, MMS has been utilized in areas such as autonomous driving [97,159], indoor collision avoidance [162], archaeology, disaster response, and even entertainment, where precise 3D mapping is required for virtual reality experiences or film production. The versatility of MMS continues to expand as new use cases emerge, demonstrating its potential to address diverse challenges across multiple industries.

4.7. Adoptability of Different MMSs in Various Applications

MMSs exhibit significant adaptability across diverse application scenarios, with their effectiveness largely determined by sensor integration, platform choice, and environmental demands. In urban environments, vehicle-mounted MMSs equipped with LiDAR and multi-camera arrays excel at creating detailed 3D models for infrastructure management, though GNSS signal occlusion in dense areas often necessitates SLAM-based corrections. Conversely, natural resource management relies on UAV-based systems with multi-spectral cameras and LiDAR to monitor vegetation health and biomass, while portable backpack MMSs dominate indoor and confined spaces, leveraging visual-inertial SLAM to overcome GNSS-denied conditions. Precision agriculture benefits from UAVs with thermal and multi-spectral sensors for real-time crop analysis, whereas disaster response and underwater mapping demand ruggedized LiDAR or sonar systems tailored to harsh, dynamic environments. The modularity of MMS designs, enabling sensor interchangeability, enhances their versatility, though challenges like foliage obstruction, variable lighting, and data synchronization persist.
The adaptability of MMSs is further amplified by advancements in AI and standardization efforts. Machine learning algorithms improve object detection and data fusion, enabling systems to dynamically adjust to environmental variability, such as changing crop stages or urban clutter. However, the lack of universal protocols for sensor calibration and data processing remains a barrier to seamless deployment across heterogeneous scenarios. Future developments should prioritize modular, scalable architectures and robust real-time processing capabilities to address the unique demands of each application—from high-accuracy urban mapping to rapid disaster assessments. By balancing cost, precision, and operational constraints, MMSs can continue to evolve as indispensable tools for geospatial innovation.

5. Discussion and Future Directions

Despite considerable progress in various directions of MMS development, more unified developments still seem to be required to close the significant gap between commercial and research MMSs. This requires open-sourcing MMS designs so that hardware and software platforms can be gradually optimized.
While the significant advancements in MMS technologies have been highlighted, several areas remain for further exploration. First, the integration of MMSs with emerging technologies such as 5G, edge computing, and the Internet of Things (IoT) could further enhance real-time data processing and communication capabilities. Additionally, the development of more robust calibration techniques and error correction algorithms will be crucial in improving the accuracy and reliability of MMS data, particularly in challenging environments such as urban canyons or dense forests.
Another area for future research is the ethical and societal implications of widespread MMS adoption. As MMS technologies become more pervasive, issues related to data privacy, security, and equitable access must be addressed. Ensuring that MMS technologies are used responsibly and ethically will be essential to maintaining public trust and maximizing their societal benefits.
One of the most important ethical concerns is the potential for MMSs to infringe on individual privacy. MMSs can capture detailed 3D models of urban environments, including private properties, vehicles, and even individuals, which raises questions about consent and data ownership. In many cases, individuals are unaware that their properties or activities are being recorded by MMSs. This lack of transparency can lead to privacy violations, especially when data are used for purposes beyond their original intent (e.g., commercial use or law enforcement). While anonymization techniques can be applied to MMS data, the high resolution and spatial accuracy of the data make it difficult to fully anonymize individuals or properties. This is particularly problematic in urban environments, where individuals can be identified through contextual information (e.g., vehicle license plates or building details). The use of MMSs for surveillance by governments or private entities further exacerbates these concerns: MMS data could be used to track individuals' movements or monitor public spaces without their knowledge, raising ethical questions about the balance between public safety and individual privacy.
An essential aspect of MMSs relates to real-time processing of collected data to produce timely analytics. Restriction in computational resources is a significant burden in this effort. Therefore, it is worth investing in solutions such as efficient implementations, parallel computing, and the use of computation infrastructures such as local and cloud graphical processing unit (GPU) clusters.
Although the fusion of homogeneous and non-homogeneous data has progressed considerably, especially in methods such as SLAM, noticeable scientific challenges remain, e.g., in merging photogrammetric data with other data sources such as multi-spectral and hyper-spectral cameras and LiDARs in an MMS. More accurate and smarter calibration and data fusion schemes could potentially address such shortcomings.
Choosing between photogrammetric and LiDAR point clouds needs to be considered in the MMS design phase. Photogrammetric point clouds, despite being comparable to LiDAR point clouds in terms of accuracy, need a significant post-processing time. Photogrammetric point clouds could also suffer from a lack of density, especially in cases where a lack of sufficient texture information exists. On the other hand, LiDARs are more expensive than cameras and sometimes heavy. The weight aspect of LiDAR becomes important, especially when MMS is placed on platforms such as UAVs. This trade-off is worth investigating carefully based on their principal similarities, differences, costs, and requirements. In many cases, it is possible and even preferable to combine those point clouds if the elements of the MMS design allow.
The role of MMSs in addressing global challenges such as climate change, disaster management, and sustainable development cannot be overstated. By providing detailed, real-time spatial data, MMS can support evidence-based decision making and enable more effective resource management. Collaborative efforts between researchers, industry, and policymakers will be essential in harnessing the full potential of MMSs to address these pressing global issues.
The comprehensive analysis of current MMS research reveals several critical trajectories for future development. First, the integration of neuromorphic computing [163] and next-generation sensors such as event-based cameras [164], complex multi-cameras, and multi-spectral sensors promises to revolutionize real-time processing in dynamic environments, particularly for autonomous navigation applications where low latency is paramount. Second, the convergence of digital twin technologies with mobile mapping will likely enable persistent, centimeter-accurate 3D world models that update continuously through distributed MMS networks. Our analysis suggests that these advancements will be accelerated by three key enablers: (1) the maturation of 5G/6G edge computing infrastructures that alleviate current bandwidth bottlenecks, (2) standardized APIs for federated learning across heterogeneous MMS fleets, and (3) quantum-enhanced LiDAR systems [165] now entering laboratory testing phases. Particularly in urban environments, we anticipate a paradigm shift toward “always-on” mapping systems that combine vehicle-borne, UAV, and fixed sensor nodes into unified geospatial intelligence networks.
Emerging challenges will require focused research attention in the coming decade. The environmental sustainability of large-scale MMS deployments—particularly the energy footprint of high-resolution LiDAR arrays—demands new approaches to sensor power management and green computing architectures. Our review identifies an urgent need for adaptive calibration protocols that maintain accuracy amid increasing sensor heterogeneity while minimizing human intervention. Ethical frameworks for privacy-preserving mobile mapping must evolve in parallel, potentially leveraging differential privacy algorithms and blockchain-based data governance. The most transformative opportunities may emerge from cross-fertilization between disciplines, such as applying bio-inspired navigation principles from animal migration studies to SLAM systems or adapting astrophysics-based uncertainty quantification methods to urban mapping. These directions suggest that future MMS research should prioritize not just technological innovation but also the development of new theoretical frameworks for understanding complex, multi-scale geospatial dynamics.
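To make the privacy-preservation point concrete, the snippet below applies the textbook Laplace mechanism to trajectory coordinates. It is a minimal sketch under stated assumptions: per-coordinate noise with scale sensitivity/epsilon, which yields epsilon-differential privacy for a single release; a deployed MMS pipeline would more plausibly use planar-Laplace geo-indistinguishability and account for correlation between successive poses.

```python
import numpy as np

def laplace_perturb(coords, epsilon, sensitivity=1.0, rng=None):
    """Perturb coordinates with the Laplace mechanism (illustrative only).

    Adds i.i.d. Laplace noise with scale sensitivity/epsilon to each
    coordinate; smaller epsilon means a stronger privacy guarantee and
    therefore larger noise. `coords` is any array-like of map coordinates
    in meters.
    """
    rng = np.random.default_rng(rng)
    scale = sensitivity / epsilon
    return coords + rng.laplace(loc=0.0, scale=scale, size=np.shape(coords))

# Usage: the same seed makes the scaling visible — epsilon = 0.1 produces
# noise 100x larger than epsilon = 10 for identical random draws.
points = np.zeros((4, 2))
strong_privacy = laplace_perturb(points, epsilon=0.1, rng=42)
weak_privacy = laplace_perturb(points, epsilon=10.0, rng=42)
```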
One of the key challenges in the field remains the lack of standardization in MMS design and protocols. While numerous custom and commercial MMS configurations exist, the absence of unified standards hinders the interoperability and scalability of these systems. This paper has proposed a standardized framework for customizable MMSs, aiming to bridge the gap between research and commercial applications. By adopting standardized protocols, the geospatial community can more effectively leverage the full potential of MMS technologies, enabling more efficient data collection, processing, and analysis.
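As a toy illustration of what standardization could look like at the software level, the sketch below defines a minimal sensor contract that any module (camera, LiDAR, GNSS/IMU) would implement, so that platforms and processing pipelines can be mixed freely. The names `MMSSensor`, `SensorFrame`, and `StubGNSS` are hypothetical, invented here for illustration; they are not part of any existing standard or of the framework proposed in this paper.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any

@dataclass
class SensorFrame:
    """One time-stamped observation in a platform-agnostic envelope."""
    timestamp: float   # seconds in a common time base, e.g., GNSS time
    sensor_id: str     # stable identifier used in calibration files
    payload: Any       # image array, point block, NMEA record, ...

class MMSSensor(ABC):
    """Minimal contract every interchangeable MMS module implements."""

    @abstractmethod
    def configure(self, settings: dict) -> None:
        """Apply module settings (rates, exposure, scan pattern, ...)."""

    @abstractmethod
    def read(self) -> SensorFrame:
        """Return the next available observation."""

class StubGNSS(MMSSensor):
    """Trivial stand-in used only to exercise the interface."""

    def configure(self, settings: dict) -> None:
        self.rate_hz = settings.get("rate_hz", 10)

    def read(self) -> SensorFrame:
        # Fixed dummy fix (lat, lon, height) standing in for a receiver.
        return SensorFrame(timestamp=0.0, sensor_id="gnss0",
                           payload=(60.17, 24.94, 12.0))

gnss = StubGNSS()
gnss.configure({"rate_hz": 5})
frame = gnss.read()
```

A shared envelope like this is what lets a pipeline swap a vehicle-borne LiDAR for a backpack unit without touching downstream georeferencing code.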
This review has centered on vehicle-borne MMSs for their foundational role in geospatial data collection, yet future work should systematically compare platform-specific trade-offs (e.g., UAV vs. handheld SLAM) in accuracy, cost, and scalability. Standardizing modular MMS designs across platforms could bridge these gaps, enabling seamless transitions between aerial, terrestrial, and portable mapping.
The applications of MMSs are vast and continue to grow, spanning natural resource management, urban planning, precision agriculture, cultural heritage documentation, and autonomous vehicle navigation. The ability of MMSs to capture time-lapsed information and integrate additional data layers, such as thermal or multi-spectral imaging, further enhances their utility in addressing complex environmental and societal challenges. As MMS technology continues to evolve, it is expected to play an increasingly critical role in the development of smart cities, sustainable resource management, and autonomous systems.
Integration of AI and ML methods into MMSs still requires more investment from the scientific and engineering communities. Despite considerable attention to advanced topics such as high-accuracy or real-time ML methods, MMS developers still appear to lag in deploying such smart implementations. Challenges such as runtime performance, matching hardware to method, efficient use of cloud computing, high-speed wireless communications, and low-cost, low-energy in-house implementations should be considered by the researchers and developers of future generations of MMSs.
In conclusion, the future of MMSs is bright, with immense potential for innovation and impact. It lies in the continued integration of advanced sensors, computational methods, and AI-driven analytics. The development of standardized, modular, and scalable MMS platforms will be essential in unlocking the full potential of this technology. As we move forward, interdisciplinary collaboration and open-source initiatives will be key to driving innovation and ensuring that MMS technologies remain at the forefront of geospatial science and engineering. By addressing the challenges and opportunities outlined in this paper, the geospatial community can continue to push the boundaries of what is possible with MMSs, driving progress in science, industry, and society.

Author Contributions

Conceptualization, E.K.; methodology, E.K. and S.N.; investigation, E.K. and S.N.; resources, E.K.; writing—original draft preparation, E.K.; writing—review and editing, E.K., S.N., P.P., Y.C., E.H. and A.H.; visualization, E.K.; supervision, E.K.; project administration, E.K.; funding acquisition, E.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Visionium Oy (Helsinki, Finland) through project number 2025-1000313-1, “Promote and Facilitate Civil Applications of Mobile Mapping Systems in Finland”.

Conflicts of Interest

Authors Ehsan Khoramshahi and Somayeh Nezami are co-founders of the company Visionium Oy. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Lockwood, M.; Davidson, J.; Curtis, A.; Stratford, E.; Griffith, R. Governance Principles for Natural Resource Management. Soc. Nat. Resour. 2010, 23, 986–1001. [Google Scholar] [CrossRef]
  2. Hofmann-Wellenhof, B.; Lichtenegger, H.; Collins, J. Global Positioning System: Theory and Practice; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  3. Li, X.; Zhang, X.; Ren, X.; Fritsche, M.; Wickert, J.; Schuh, H. Precise positioning with current multi-constellation global navigation satellite systems: GPS, GLONASS, Galileo and BeiDou. Sci. Rep. 2015, 5, 8328. [Google Scholar] [CrossRef] [PubMed]
  4. Olanoff, D. Inside Google Street View: From Larry Page’s Car To The Depths Of The Grand Canyon. 2013. Retrieved 29 July 2019. [Google Scholar]
  5. Rizaldy, A.; Firdaus, W. Direct georeferencing: A new standard in photogrammetry for high accuracy mapping. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 5–9. [Google Scholar] [CrossRef]
  6. Gledhill, D.; Tian, G.Y.; Taylor, D.; Clarke, D. Panoramic imaging—A review. Comput. Graph. 2003, 27, 435–445. [Google Scholar] [CrossRef]
  7. Explore Street View and Add Your Own 360 Images to Google Maps. Available online: https://www.google.com/streetview/ (accessed on 1 April 2025).
  8. Yang, B. Developing a mobile mapping system for 3D GIS and smart city planning. Sustainability 2019, 11, 3713. [Google Scholar] [CrossRef]
  9. Sofia, H.; Anas, E.; Faïz, O. Mobile mapping, machine learning and digital twin for road infrastructure monitoring and maintenance: Case study of mohammed VI bridge in Morocco. In Proceedings of the 2020 IEEE International Conference of Moroccan Geomatics (Morgeo), Casablanca, Morocco, 11–13 May 2020; pp. 1–6. [Google Scholar]
  10. Zhao, G.; Lian, M.; Li, Y.; Duan, Z.; Zhu, S.; Mei, L.; Svanberg, S. Mobile lidar system for environmental monitoring. Appl. Opt. 2017, 56, 1506–1516. [Google Scholar] [CrossRef]
  11. Blaser, S.; Nebiker, S.; Wisler, D. Portable image-based high performance mobile mapping system in underground environments–system configuration and performance evaluation. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 4, 255–262. [Google Scholar] [CrossRef]
  12. Barazzetti, L.; Remondino, F.; Scaioni, M. Combined use of photogrammetric and computer vision techniques for fully automated and accurate 3D modeling of terrestrial objects. In Proceedings of the Videometrics, Range Imaging, and Applications X, San Diego, CA, USA, 2–3 August 2009; Volume 7447, pp. 183–194. [Google Scholar]
  13. Ravi, R.; Lin, Y.-J.; Elbahnasawy, M.; Shamseldin, T.; Habib, A. Simultaneous system calibration of a multi-lidar multicamera mobile mapping platform. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1694–1714. [Google Scholar] [CrossRef]
  14. Hennessy, A.; Clarke, K.; Lewis, M. Hyperspectral classification of plants: A review of waveband selection generalisability. Remote Sens. 2020, 12, 113. [Google Scholar] [CrossRef]
  15. Manish, R.; Lin, Y.-C.; Ravi, R.; Hasheminasab, S.M.; Zhou, T.; Habib, A. Development of a miniaturized mobile mapping system for in-row, under-canopy phenotyping. Remote Sens. 2021, 13, 276. [Google Scholar] [CrossRef]
  16. Spinhirne, J.D. Micro pulse lidar. IEEE Trans. Geosci. Remote Sens. 1993, 31, 48–55. [Google Scholar] [CrossRef]
  17. Garrett, D. Apollo 17. 1972. Available online: https://ntrs.nasa.gov/citations/20090012049 (accessed on 1 April 2025).
  18. Raj, T.; Hanim Hashim, F.; Baseri Huddin, A.; Ibrahim, M.F.; Hussain, A. A survey on LiDAR scanning mechanisms. Electronics 2020, 9, 741. [Google Scholar] [CrossRef]
  19. Di Stefano, F.; Chiappini, S.; Gorreja, A.; Balestra, M.; Pierdicca, R. Mobile 3D scan LiDAR: A literature review. Geomat. Nat. Hazards Risk 2021, 12, 2387–2429. [Google Scholar] [CrossRef]
  20. Kaasinen, E. User needs for location-aware mobile services. Pers. Ubiquitous Comput. 2003, 7, 70–79. [Google Scholar] [CrossRef]
  21. Karam, S.; Vosselman, G.; Peter, M.; Hosseinyalamdary, S.; Lehtola, V. Design, calibration, and evaluation of a backpack indoor mobile mapping system. Remote Sens. 2019, 11, 905. [Google Scholar] [CrossRef]
  22. Tachi, T.; Wang, Y.; Abe, R.; Kato, T.; Maebashi, N.; Kishimoto, N. Development of versatile mobile mapping system on a small scale. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 271–275. [Google Scholar] [CrossRef]
  23. Elhashash, M.; Albanwan, H.; Qin, R. A review of mobile mapping systems: From sensors to applications. Sensors 2022, 22, 4262. [Google Scholar] [CrossRef]
  24. Luetzenburg, G.; Kroon, A.; Bjørk, A.A. Evaluation of the Apple iPhone 12 Pro LiDAR for an Application in Geosciences. Sci. Rep. 2021, 11, 22221. [Google Scholar] [CrossRef]
  25. Chase, P.; Clarke, K.; Hawkes, A.; Jabari, S.; Jakus, J. Apple iPhone 13 Pro LiDAR accuracy assessment for engineering applications. In Proceedings of the Transforming Construction with Reality Capture Technologies: The Digital Reality of Tomorrow, Fredericton, NB, Canada, 23–25 August 2022. [Google Scholar]
  26. Balado, J.; González, E.; Arias, P.; Castro, D. Novel approach to automatic traffic sign inventory based on mobile mapping system data and deep learning. Remote Sens. 2020, 12, 442. [Google Scholar] [CrossRef]
  27. Niu, X.; Liu, T.; Kuang, J.; Li, Y. A novel position and orientation system for pedestrian indoor mobile mapping system. IEEE Sens. J. 2020, 21, 2104–2114. [Google Scholar] [CrossRef]
  28. Nebiker, S.; Meyer, J.; Blaser, S.; Ammann, M.; Rhyner, S. Outdoor mobile mapping and AI-based 3D object detection with low-cost RGB-D cameras: The use case of on-street parking statistics. Remote Sens. 2021, 13, 3099. [Google Scholar] [CrossRef]
  29. Deng, Y.; Ai, H.; Deng, Z.; Gao, W.; Shang, J. An overview of indoor positioning and mapping technology standards. Standards 2022, 2, 157–183. [Google Scholar] [CrossRef]
  30. Xie, J.; Wang, H.; Li, P.; Meng, Y. Overview of Navigation Satellite Systems. In Satellite Navigation Systems and Technologies; Space Science and Technologies; Springer: Singapore, 2021; pp. 35–66. [Google Scholar] [CrossRef]
  31. Zaminpardaz, S.; Teunissen, P.J.; Khodabandeh, A. GLONASS–only FDMA+ CDMA RTK: Performance and outlook. GPS Solut. 2021, 25, 96. [Google Scholar] [CrossRef]
  32. Henkel, P.; Mittmann, U.; Iafrancesco, M. Real-time kinematic positioning with GPS and GLONASS. In Proceedings of the 2016 24th European Signal Processing Conference (EUSIPCO), Budapest, Hungary, 29 August–2 September 2016; pp. 1063–1067. [Google Scholar]
  33. Landau, H.; Vollath, U.; Chen, X. Virtual reference station systems. J. Glob. Position. Syst. 2002, 1, 137–143. [Google Scholar] [CrossRef]
  34. Kouba, J.; Lahaye, F.; Tétreault, P. Precise Point Positioning. In Springer Handbook of Global Navigation Satellite Systems; Teunissen, P.J.G., Montenbruck, O., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 723–751. [Google Scholar] [CrossRef]
  35. Elsheikh, M.; Iqbal, U.; Noureldin, A.; Korenberg, M. The implementation of precise point positioning (PPP): A comprehensive review. Sensors 2023, 23, 8874. [Google Scholar] [CrossRef]
  36. Li, X.; Huang, J.; Li, X.; Shen, Z.; Han, J.; Li, L.; Wang, B. Review of PPP–RTK: Achievements, challenges, and opportunities. Satell. Navig. 2022, 3, 28. [Google Scholar] [CrossRef]
  37. Zhang, W.; Wang, J.; El-Mowafy, A.; Rizos, C. Integrity monitoring scheme for undifferenced and uncombined multi-frequency multi-constellation PPP-RTK. GPS Solut. 2023, 27, 68. [Google Scholar] [CrossRef]
  38. Zhang, P.; Zhao, Y.; Lin, H.; Zou, J.; Wang, X.; Yang, F. A novel GNSS attitude determination method based on primary baseline switching for a multi-antenna platform. Remote Sens. 2020, 12, 747. [Google Scholar] [CrossRef]
  39. Grove, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation System; Artech House: Norwood, MA, USA, 2013. [Google Scholar]
  40. Cai, X.; Hsu, H.; Chai, H.; Ding, L.; Wang, Y. Multi-antenna GNSS and INS integrated position and attitude determination without base station for land vehicles. J. Navig. 2019, 72, 342–358. [Google Scholar] [CrossRef]
  41. Zhang, C.; Dong, D.; Chen, W.; Cai, M.; Peng, Y.; Yu, C.; Wu, J. High-accuracy attitude determination using single-difference observables based on multi-antenna GNSS receiver with a common clock. Remote Sens. 2021, 13, 3977. [Google Scholar] [CrossRef]
  42. Mechanical Design of MEMS Gyroscopes. In MEMS Vibratory Gyroscopes; Springer: Boston, MA, USA, 2009; pp. 1–38. [CrossRef]
  43. Malykin, G.B. The Sagnac effect: Correct and incorrect explanations. Phys.-Uspekhi 2000, 43, 1229. [Google Scholar] [CrossRef]
  44. Lu, J.; Ye, L.; Zhang, J.; Luo, W.; Liu, H. A new calibration method of MEMS IMU plus FOG IMU. IEEE Sens. J. 2022, 22, 8728–8737. [Google Scholar] [CrossRef]
  45. Niu, W. Summary of research status and application of MEMS accelerometers. J. Comput. Commun. 2018, 6, 215. [Google Scholar] [CrossRef]
  46. Kyynäräinen, J.; Saarilahti, J.; Kattelus, H.; Kärkkäinen, A.; Meinander, T.; Oja, A.; Pekko, P.; Seppä, H.; Suhonen, M.; Kuisma, H. A 3D micromechanical compass. Sens. Actuators Phys. 2008, 142, 561–568. [Google Scholar] [CrossRef]
  47. Chow, W.W.; Gea-Banacloche, J.; Pedrotti, L.M.; Sanders, V.E.; Schleich, W.; Scully, M.O. The ring laser gyro. Rev. Mod. Phys. 1985, 57, 61–104. [Google Scholar] [CrossRef]
  48. Kang, H.; Yang, J.; Chang, H. A closed-loop accelerometer based on three degree-of-freedom weakly coupled resonator with self-elimination of feedthrough signal. IEEE Sens. J. 2018, 18, 3960–3967. [Google Scholar] [CrossRef]
  49. Grinberg, B.; Feingold, A.; Koenigsberg, L.; Furman, L. Closed-loop MEMS accelerometer: From design to production. In Proceedings of the 2016 DGON Intertial Sensors and Systems (ISS), Karlsruhe, Germany, 20–21 September 2016; pp. 1–16. [Google Scholar]
  50. Zwahlen, P.; Balmain, D.; Habibi, S.; Etter, P.; Rudolf, F.; Brisson, R.; Ullah, P.; Ragot, V. Open-loop and closed-loop high-end accelerometer platforms for high demanding applications. In Proceedings of the 2016 IEEE/ION Position, Location and Navigation Symposium (PLANS), Savannah, Georgia, USA, 11–14 April 2016; pp. 932–937. [Google Scholar]
  51. Santosh Kumar, S.; Tanwar, A. Development of a MEMS-based barometric pressure sensor for micro air vehicle (MAV) altitude measurement. Microsyst. Technol. 2020, 26, 901–912. [Google Scholar] [CrossRef]
  52. Fiorillo, A.S.; Critello, C.D.; Pullano, S.A. Theory, technology and applications of piezoresistive sensors: A review. Sens. Actuators Phys. 2018, 281, 156–175. [Google Scholar] [CrossRef]
  53. Thess, A.; Votyakov, E.V.; Kolesnikov, Y. Lorentz Force Velocimetry. Phys. Rev. Lett. 2006, 96, 164501. [Google Scholar] [CrossRef]
  54. Chulliat, A.; Macmillan, S.; Alken, P.; Beggan, C.; Nair, M.; Hamilton, B.; Woods, A.; Ridley, V.; Maus, S.; Thomson, A. The US/UK World Magnetic Model for 2015–2020. 2015. Available online: https://www.ngdc.noaa.gov/geomag/WMM/data/WMM2015/WMM2015_Report.pdf (accessed on 1 April 2025).
  55. Skog, I.; Händel, P. Calibration of a MEMS inertial measurement unit. In Proceedings of the XVII IMEKO World Congress, Rio de Janeiro, Brazil, 17–22 September 2006; pp. 1–6. [Google Scholar]
  56. Tedaldi, D.; Pretto, A.; Menegatti, E. A robust and easy to implement method for IMU calibration without external equipments. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 3042–3049. [Google Scholar]
  57. Albl, C.; Kukelova, Z.; Larsson, V.; Polic, M.; Pajdla, T.; Schindler, K. From two rolling shutters to one global shutter. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 2505–2513. [Google Scholar]
  58. Vautherin, J.; Rutishauser, S.; Schneider-Zapp, K.; Choi, H.F.; Chovancova, V.; Glass, A.; Strecha, C. Photogrammetric accuracy and modeling of rolling shutter cameras. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 139–146. [Google Scholar]
  59. Kauhanen, H.; Rönnholm, P. Wired and wireless camera triggering with Arduino. In Frontiers in Spectral Imaging and 3D Technologies for Geospatial Solutions; International Society for Photogrammetry and Remote Sensing (ISPRS): Vienna, Austria, 2017; pp. 101–106. [Google Scholar]
  60. Chowdhury, S.A.H.; Nguyen, C.; Li, H.; Hartley, R. Fixed-Lens camera setup and calibrated image registration for multifocus multiview 3D reconstruction. Neural Comput. Appl. 2021, 33, 7421–7440. [Google Scholar] [CrossRef]
  61. Lapray, P.-J.; Wang, X.; Thomas, J.-B.; Gouton, P. Multispectral filter arrays: Recent advances and practical implementation. Sensors 2014, 14, 21626–21659. [Google Scholar] [CrossRef]
  62. Genser, N.; Seiler, J.; Kaup, A. Camera array for multi-spectral imaging. IEEE Trans. Image Process. 2020, 29, 9234–9249. [Google Scholar] [CrossRef]
  63. Park, J.-I.; Lee, M.-H.; Grossberg, M.D.; Nayar, S.K. Multispectral imaging using multiplexed illumination. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio De Janeiro, Brazil, 14–21 October 2007; pp. 1–8. [Google Scholar]
  64. Olagoke, A.S.; Ibrahim, H.; Teoh, S.S. Literature survey on multi-camera system and its application. IEEE Access 2020, 8, 172892–172922. [Google Scholar] [CrossRef]
  65. Ramírez-Hernández, L.R.; Rodríguez-Quiñonez, J.C.; Castro-Toscano, M.J.; Hernández-Balbuena, D.; Flores-Fuentes, W.; Rascón-Carmona, R.; Lindner, L.; Sergiyenko, O. Improve three-dimensional point localization accuracy in stereo vision systems using a novel camera calibration method. Int. J. Adv. Robot. Syst. 2020, 17, 1729881419896717. [Google Scholar] [CrossRef]
  66. Porto, L.R.; Imai, N.N.; Berveglieri, A.; Miyoshi, G.T.; Moriya, É.A.; Tommaselli, A.M.G.; Honkavaara, E. Comparison between two radiometric calibration methods applied to UAV multispectral images. In Image and Signal Processing for Remote Sensing XXVI; SPIE: Bellingham, WA, USA, 2020; Volume 11533, pp. 362–369. [Google Scholar]
  67. Huang, B.; Tang, Y.; Ozdemir, S.; Ling, H. A fast and flexible projector-camera calibration system. IEEE Trans. Autom. Sci. Eng. 2020, 18, 1049–1063. [Google Scholar] [CrossRef]
  68. Zhang, Z. Camera Parameters (Intrinsic, Extrinsic). In Computer Vision; Ikeuchi, K., Ed.; Springer International Publishing: Cham, Switzerland, 2021; pp. 135–140. [Google Scholar] [CrossRef]
  69. Habib, A.; Detchev, I.; Kwak, E. Stability analysis for a multi-camera photogrammetric system. Sensors 2014, 14, 15084–15112. [Google Scholar] [CrossRef]
  70. Huai, J.; Shao, Y.; Jozkow, G.; Wang, B.; Chen, D.; He, Y.; Yilmaz, A. Geometric Wide-Angle Camera Calibration: A Review and Comparative Study. Sensors 2024, 24, 6595. [Google Scholar] [CrossRef]
  71. Honkavaara, E.; Hakala, T.; Markelin, L.; Rosnell, T.; Saari, H.; Mäkynen, J. A process for radiometric correction of UAV image blocks. Photogramm. Fernerkund. Geoinf. 2012, 2, 115–127. [Google Scholar] [CrossRef]
  72. Suomalainen, J.; Oliveira, R.A.; Hakala, T.; Koivumäki, N.; Markelin, L.; Näsi, R.; Honkavaara, E. Direct reflectance transformation methodology for drone-based hyperspectral imaging. Remote Sens. Environ. 2021, 266, 112691. [Google Scholar] [CrossRef]
  73. Josep, A.; Yvan, P.; Joaquim, S.; Lladó, X. The SLAM problem: A survey. In Frontiers in Artificial Intelligence and Applications; IOS Press: Amsterdam, The Netherlands, 2008. [Google Scholar]
  74. He, M.; Zhu, C.; Huang, Q.; Ren, B.; Liu, J. A review of monocular visual odometry. Vis. Comput. 2020, 36, 1053–1065. [Google Scholar] [CrossRef]
  75. Yang, C.; Chen, Q.; Yang, Y.; Zhang, J.; Wu, M.; Mei, K. SDF-SLAM: A deep learning based highly accurate SLAM using monocular camera aiming at indoor map reconstruction with semantic and depth fusion. IEEE Access 2022, 10, 10259–10272. [Google Scholar] [CrossRef]
  76. Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. Orb-slam3: An accurate open-source library for visual, visual–inertial, and multimap slam. IEEE Trans. Robot. 2021, 37, 1874–1890. [Google Scholar] [CrossRef]
  77. Murai, R.; Dexheimer, E.; Davison, A.J. MASt3R-SLAM: Real-Time Dense SLAM with 3D Reconstruction Priors. arXiv 2024, arXiv:2412.12392. [Google Scholar] [CrossRef]
  78. Teed, Z.; Deng, J. Droid-slam: Deep visual slam for monocular, stereo, and rgb-d cameras. Adv. Neural Inf. Process. Syst. 2021, 34, 16558–16569. [Google Scholar]
  79. Li, S.; Zhang, D.; Xian, Y.; Li, B.; Zhang, T.; Zhong, C. Overview of deep learning application on visual SLAM. Displays 2022, 74, 102298. [Google Scholar] [CrossRef]
  80. Zitova, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000. [Google Scholar] [CrossRef]
  81. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  82. Ye, Y.; Zhu, B.; Tang, T.; Yang, C.; Xu, Q.; Zhang, G. A robust multimodal remote sensing image registration method and system using steerable filters with first-and second-order gradients. ISPRS J. Photogramm. Remote Sens. 2022, 188, 331–350. [Google Scholar] [CrossRef]
  83. Xiao, X.; Guo, B.; Shi, Y.; Gong, W.; Li, J.; Zhang, C. Robust and rapid matching of oblique UAV images of urban area. In MIPPR 2013: Pattern Recognition and Computer Vision; SPIE: Bellingham, WA, USA, 2013; Volume 8919, pp. 223–230. [Google Scholar]
  84. Hossein-Nejad, Z.; Nasri, M. An adaptive image registration method based on SIFT features and RANSAC transform. Comput. Electr. Eng. 2017, 62, 524–537. [Google Scholar] [CrossRef]
  85. Khoramshahi, E.; Campos, M.B.; Tommaselli, A.M.G.; Vilijanen, N.; Mielonen, T.; Kaartinen, H.; Kukko, A.; Honkavaara, E. Accurate calibration scheme for a multi-camera mobile mapping system. Remote Sens. 2019, 11, 2778. [Google Scholar] [CrossRef]
  86. Hirschmuller, H. Accurate and efficient stereo processing by semi-global matching and mutual information. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 21–23 September 2005; Volume 2, pp. 807–814. [Google Scholar]
  87. Seki, A.; Pollefeys, M. Sgm-nets: Semi-global matching with neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 231–240. [Google Scholar]
  88. Cao, Z.; Wang, Y.; Zheng, W.; Yin, L.; Tang, Y.; Miao, W.; Liu, S.; Yang, B. The algorithm of stereo vision and shape from shading based on endoscope imaging. Biomed. Signal Process. Control 2022, 76, 103658. [Google Scholar] [CrossRef]
  89. Long, X.; Guo, Y.-C.; Lin, C.; Liu, Y.; Dou, Z.; Liu, L.; Ma, Y.; Zhang, S.-H.; Habermann, M.; Theobalt, C. Wonder3d: Single image to 3d using cross-domain diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 9970–9980. [Google Scholar]
  90. Wang, D.; Watkins, C.; Xie, H. MEMS mirrors for LiDAR: A review. Micromachines 2020, 11, 456. [Google Scholar] [CrossRef] [PubMed]
  91. Zheng, H.; Ma, R.; Liu, M.; Zhu, Z. A linear-array receiver analog front-end circuit for rotating scanner LiDAR application. IEEE Sens. J. 2019, 19, 5053–5061. [Google Scholar] [CrossRef]
  92. Xue, F.; Lu, W.; Chen, Z.; Webster, C.J. From LiDAR point cloud towards digital twin city: Clustering city objects based on Gestalt principles. ISPRS J. Photogramm. Remote Sens. 2020, 167, 418–431. [Google Scholar] [CrossRef]
  93. Allouis, T.; Bailly, J.; Pastol, Y.; Le Roux, C. Comparison of LiDAR waveform processing methods for very shallow water bathymetry using Raman, near-infrared and green signals. Earth Surf. Process. Landf. 2010, 35, 640–650. [Google Scholar] [CrossRef]
  94. Szafarczyk, A.; Toś, C. The use of green laser in LiDAR bathymetry: State of the art and recent advancements. Sensors 2022, 23, 292. [Google Scholar] [CrossRef]
  95. Goyer, G.G.; Watson, R. The laser and its application to meteorology. Bull. Am. Meteorol. Soc. 1963, 44, 564–570. [Google Scholar] [CrossRef]
  96. Zou, Q.; Sun, Q.; Chen, L.; Nie, B.; Li, Q. A comparative analysis of LiDAR SLAM-based indoor navigation for autonomous vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6907–6921. [Google Scholar] [CrossRef]
  97. Rahman, M.F.; Onoda, Y.; Kitajima, K. Forest canopy height variation in relation to topography and forest types in central Japan with LiDAR. For. Ecol. Manag. 2022, 503, 119792. [Google Scholar] [CrossRef]
  98. Du, L.; Pang, Y.; Ni, W.; Liang, X.; Li, Z.; Suarez, J.; Wei, W. Forest terrain and canopy height estimation using stereo images and spaceborne LiDAR data from GF-7 satellite. Geo-Spat. Inf. Sci. 2024, 27, 811–821. [Google Scholar] [CrossRef]
  99. Uciechowska-Grakowicz, A.; Herrera-Granados, O.; Biernat, S.; Bac-Bronowicz, J. Usage of Airborne LiDAR Data and High-Resolution Remote Sensing Images in Implementing the Smart City Concept. Remote Sens. 2023, 15, 5776. [Google Scholar] [CrossRef]
  100. Dhanani, N.; Vignesh, V.P.; Venkatachalam, S. Demonstration of LiDAR on Accurate Surface Damage Measurement: A Case of Transportation Infrastructure. In ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction; IAARC Publications: Chennai, India, 2023; Volume 40, pp. 553–560. [Google Scholar]
  101. Wang, H.; Feng, D. Rapid Geometric Evaluation of Transportation Infrastructure Based on a Proposed Low-Cost Portable Mobile Laser Scanning System. Sensors 2024, 24, 425. [Google Scholar] [CrossRef]
  102. Viswanath, K.; Jiang, P.; Saripalli, S. Reflectivity Is All You Need!: Advancing LiDAR Semantic Segmentation. arXiv 2024, arXiv:2403.13188. [Google Scholar]
  103. Zang, Y.; Yang, B.; Liang, F.; Xiao, X. Novel adaptive laser scanning method for point clouds of free-form objects. Sensors 2018, 18, 2239. [Google Scholar] [CrossRef] [PubMed]
  104. Hashimoto, T.; Saito, M. Normal Estimation for Accurate 3D Mesh Reconstruction with Point Cloud Model Incorporating Spatial Structure. In Proceedings of the CVPR Workshops, Long Beach, CA, USA, 16–20 June 2019; Volume 1, pp. 1–10. [Google Scholar]
  105. Huang, X.; Mei, G.; Zhang, J.; Abbas, R. A comprehensive survey on point cloud registration. arXiv 2021, arXiv:2103.02690. [Google Scholar] [CrossRef]
  106. Jones, K.; Lichti, D.D.; Radovanovic, R. Synthetic Images for Georeferencing Camera Images in Mobile Mapping Point-clouds. Can. J. Remote Sens. 2024, 50, 2300328. [Google Scholar] [CrossRef]
  107. Ihmeida, M.; Wei, H. Image registration techniques and applications: Comparative study on remote sensing imagery. In Proceedings of the 2021 14th International Conference on Developments in Esystems Engineering (DeSE), Sharjah, United Arab Emirates, 7–10 December 2021; pp. 142–148. [Google Scholar]
  108. Yang, H.; Shi, J.; Carlone, L. Teaser: Fast and certifiable point cloud registration. IEEE Trans. Robot. 2020, 37, 314–333. [Google Scholar] [CrossRef]
  109. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In IEEE ICRA Workshop on Open Source Software; IEEE: Kobe, Japan, 2009; Volume 3, p. 5. [Google Scholar]
  110. Macenski, S.; Foote, T.; Gerkey, B.; Lalancette, C.; Woodall, W. Robot Operating System 2: Design, architecture, and uses in the wild. Sci. Robot. 2022, 7, eabm6074. [Google Scholar] [CrossRef]
  111. Trybała, P.; Kujawa, P.; Romańczukiewicz, K.; Szrek, A.; Remondino, F. Designing and evaluating a portable LiDAR-based SLAM system. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 191–198. [Google Scholar] [CrossRef]
  112. Menna, F.; Torresani, A.; Battisti, R.; Nocerino, E.; Remondino, F. A modular and low-cost portable VSLAM system for real-time 3D mapping: From indoor and outdoor spaces to underwater environments. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 48, 153–162. [Google Scholar] [CrossRef]
  113. Będkowski, J. Open source, open hardware hand-held mobile mapping system for large scale surveys. SoftwareX 2024, 25, 101618. [Google Scholar] [CrossRef]
  114. Zhang, Y.; Ahmadi, S.; Kang, J.; Arjmandi, Z.; Sohn, G. YUTO MMS: A comprehensive SLAM dataset for urban mobile mapping with tilted LiDAR and panoramic camera integration. Int. J. Robot. Res. 2025, 44, 3–21. [Google Scholar] [CrossRef]
  115. Xu, D.; Wang, H.; Xu, W.; Luan, Z.; Xu, X. LiDAR applications to estimate forest biomass at individual tree scale: Opportunities, challenges and future perspectives. Forests 2021, 12, 550. [Google Scholar] [CrossRef]
  116. Zhang, M.; Li, W.; Zhao, X.; Liu, H.; Tao, R.; Du, Q. Morphological transformation and spatial-logical aggregation for tree species classification using hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–12. [Google Scholar] [CrossRef]
  117. Asner, G.P.; Ustin, S.L.; Townsend, P.; Martin, R.E. Forest biophysical and biochemical properties from hyperspectral and LiDAR remote sensing. In Remote Sensing Handbook, Volume IV; CRC Press: Boca Raton, FL, USA, 2024; pp. 96–124. [Google Scholar]
  118. Ahmad, U.; Alvino, A.; Marino, S. A review of crop water stress assessment using remote sensing. Remote Sens. 2021, 13, 4155. [Google Scholar] [CrossRef]
  119. Guo, Y.; Chen, W.Y. Monitoring tree canopy dynamics across heterogeneous urban habitats: A longitudinal study using multi-source remote sensing data. J. Environ. Manag. 2024, 356, 120542. [Google Scholar] [CrossRef]
  120. Zhu, H.; Lin, C.; Liu, G.; Wang, D.; Qin, S.; Li, A.; Xu, J.-L.; He, Y. Intelligent agriculture: Deep learning in UAV-based remote sensing imagery for crop diseases and pests detection. Front. Plant Sci. 2024, 15, 1435016. [Google Scholar] [CrossRef]
  121. Li, F.; Bai, J.; Zhang, M.; Zhang, R. Yield estimation of high-density cotton fields using low-altitude UAV imaging and deep learning. Plant Methods 2022, 18, 55. [Google Scholar] [CrossRef]
  122. Lambertini, A.; Mandanici, E.; Tini, M.A.; Vittuari, L. Technical challenges for multi-temporal and multi-sensor image processing surveyed by UAV for mapping and monitoring in precision agriculture. Remote Sens. 2022, 14, 4954. [Google Scholar] [CrossRef]
  123. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Deep learning techniques to classify agricultural crops through UAV imagery: A review. Neural Comput. Appl. 2022, 34, 9511–9536. [Google Scholar] [CrossRef] [PubMed]
  124. Moradi, S.; Bokani, A.; Hassan, J. UAV-based smart agriculture: A review of UAV sensing and applications. In Proceedings of the 2022 32nd International Telecommunication Networks and Applications Conference (ITNAC), Wellington, New Zealand, 30 November–2 December 2022; pp. 181–184. [Google Scholar]
  125. Dong, H.; Dong, J.; Sun, S.; Bai, T.; Zhao, D.; Yin, Y.; Shen, X.; Wang, Y.; Zhang, Z.; Wang, Y. Crop water stress detection based on UAV remote sensing systems. Agric. Water Manag. 2024, 303, 109059. [Google Scholar] [CrossRef]
  126. Treccani, D.; Adami, A.; Brunelli, V.; Fregonese, L. Mobile mapping system for historic built heritage and GIS integration: A challenging case study. Appl. Geomat. 2024, 16, 293–312. [Google Scholar] [CrossRef]
  127. Naeem, N.; Rana, I.A.; Nasir, A.R. Digital real estate: A review of the technologies and tools transforming the industry and society. Smart Constr. Sustain. Cities 2023, 1, 15. [Google Scholar] [CrossRef]
  128. Hoydis, J.; Cammerer, S.; Aoudia, F.A.; Vem, A.; Binder, N.; Marcus, G.; Keller, A. Sionna: An Open-Source Library for Next-Generation Physical Layer Research. arXiv 2023, arXiv:2203.11854. [Google Scholar] [CrossRef]
  129. Vassena, G. Outdoor and indoor mapping of a mining site by indoor mobile mapping and georeferenced Ground Control Scans. In Proceedings of the XXVII FIG Congress, Warsaw, Poland, 11–15 September 2022; Volume 1, pp. 1–10. [Google Scholar]
  130. Hasegawa, H.; Sujaswara, A.A.; Kanemoto, T.; Tsubota, K. Possibilities of using UAV for estimating earthwork volumes during process of repairing a small-scale forest road, case study from Kyoto Prefecture, Japan. Forests 2023, 14, 677. [Google Scholar] [CrossRef]
  131. Yıldız, S.; Kıvrak, S.; Arslan, G. Using drone technologies for construction project management: A narrative review. J. Constr. Eng. Manag. Innov. 2021, 4, 229–244. [Google Scholar] [CrossRef]
  132. Campi, M.; Falcone, M.; Sabbatini, S. Towards continuous monitoring of architecture. Terrestrial laser scanning and mobile mapping system for the diagnostic phases of the cultural heritage. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 46, 121–127. [Google Scholar] [CrossRef]
  133. Kurşun, H. Accuracy comparison of mobile mapping system for road inventory. Mersin Photogramm. J. 2023, 5, 55–66. [Google Scholar] [CrossRef]
  134. Agarwal, D.; Kucukpinar, T.; Fraser, J.; Kerley, J.; Buck, A.R.; Anderson, D.T.; Palaniappan, K. Simulating city-scale aerial data collection using unreal engine. In Proceedings of the 2023 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), St. Louis, MO, USA, 27–29 September 2023; pp. 1–9. [Google Scholar]
  135. Sabir, A.; Hussain, R.; Pedro, A.; Soltani, M.; Lee, D.; Park, C.; Pyeon, J.-H. Synthetic Data Generation with Unity 3D and Unreal Engine for Construction Hazard Scenarios: A Comparative Analysis. In International Conference on Construction Engineering and Project Management; Korea Institute of Construction Engineering and Management: Seoul, Republic of Korea, 2024; pp. 1286–1288. [Google Scholar]
  136. Iwaszczuk, D.; Goebel, M.; Du, Y.; Schmidt, J.; Weinmann, M. Potential of Mobile Mapping To Create Digital Twins of Forests. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 199–206. [Google Scholar] [CrossRef]
  137. Spadavecchia, C.; Belcore, E.; Grasso, N.; Piras, M. A fully automatic forest parameters extraction at single-tree level: A comparison of MLS and TLS applications. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 457–463. [Google Scholar] [CrossRef]
  138. Fol, C.; Murtiyoso, A.; Kükenbrink, D.; Remondino, F.; Griess, V. Terrestrial 3D Mapping of Forests: Georeferencing Challenges and Sensors Comparisons. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 55–61. [Google Scholar] [CrossRef]
  139. Spadavecchia, C. Innovative LiDAR-Based Solution for Automatic Assessment of Forest Parameters for Estimating Aboveground Biomass and CO2 Storage. Doctoral Dissertation, Politecnico di Torino, Torino, Italy, July 2024. Available online: https://tesidottorato.depositolegale.it/bitstream/20.500.14242/170710/1/conv_final_phd_thesis_spadavecchia.pdf (accessed on 1 April 2025).
  140. Yang, T.; Ryu, Y.; Kwon, R.; Choi, C.; Zhong, Z.; Nam, Y.; Jo, S. Mapping Carbon Stock of Individual Street Trees Using Lidar-Camera Fusion-Based Mobile Mapping System. Available online: https://ssrn.com/abstract=4762402 (accessed on 1 April 2025).
  141. Cabrelles, M.; Lerma, J.; García-Asenjo, L.; Garrigues, P.; Martínez, L. Long and Close-Range Terrestrial Photogrammetry for Rocky Landscape Deformation Monitoring. In Proceedings of the 5th Joint International Symposium on Deformation Monitoring (JISDM 2022), Valencia, Spain, 20–22 June 2022; pp. 485–491. [Google Scholar]
  142. Șmuleac, A.; Șmuleac, L.; Popescu, C.A.; Herban, S.; Man, T.E.; Imbrea, F.; Horablaga, A.; Mihai, S.; Paşcalău, R.; Safar, T. Geospatial Technologies Used in the Management of Water Resources in West of Romania. Water 2022, 14, 3729. [Google Scholar] [CrossRef]
  143. Bwambale, E.; Abagale, F.K.; Anornu, G.K. Smart irrigation monitoring and control strategies for improving water use efficiency in precision agriculture: A review. Agric. Water Manag. 2022, 260, 107324. [Google Scholar] [CrossRef]
  144. Hwang, J.; Yun, H.; Jeong, T.; Suh, Y.; Huang, H. Frequent unscheduled updates of the national base map using the land-based mobile mapping system. Remote Sens. 2013, 5, 2513–2533. [Google Scholar] [CrossRef]
  145. Fryskowska, A.; Wroblewski, P. Mobile Laser Scanning accuracy assessment for the purpose of base-map updating. Geod. Cartogr. 2018, 67, 35–55. [Google Scholar]
  146. Maset, E.; Scalera, L.; Beinat, A.; Visintini, D.; Gasparetto, A. Performance investigation and repeatability assessment of a mobile robotic system for 3D mapping. Robotics 2022, 11, 54. [Google Scholar] [CrossRef]
  147. Feng, Y.; Xiao, Q.; Brenner, C.; Peche, A.; Yang, J.; Feuerhake, U.; Sester, M. Determination of building flood risk maps from LiDAR mobile mapping data. Comput. Environ. Urban Syst. 2022, 93, 101759. [Google Scholar] [CrossRef]
  148. Yuan, Q.; Yao, L.; Xu, Z.; Liu, H. Survey of expressway infrastructure based on vehicle-borne mobile mapping system. In International Conference on Intelligent Traffic Systems and Smart City (ITSSC 2021); SPIE: Bellingham, WA, USA, 2022; Volume 12165, pp. 404–410. [Google Scholar]
  149. Espinel-Gomez, D.; Fernandez-Gomez, W.; Moreno-Moreno, J.; Carranza-Leguizamo, D.; Marrugo, C. A Smart Mobile Mapping Application for the Evaluation of Road Infrastructure in Urban and Rural Corridors. In Applied Computer Sciences in Engineering; Communications in Computer and Information Science; Figueroa-García, J.C., Hernández, G., Suero Pérez, D.F., Gaona García, E.E., Eds.; Springer Nature: Cham, Switzerland, 2025; Volume 2222, pp. 175–185. [Google Scholar]
  150. Simon, M.; Neo, O.; Șmuleac, L.; Șmuleac, A. The use of LiDAR technology-mobile mapping in urban road infrastructure. Res. J. Agric. Sci. 2023, 55, 229. [Google Scholar]
  151. Simeoni, L.; Vitti, A.; Ferro, E.; Corsini, A.; Ronchetti, F.; Lelli, F.; Costa, C.; Quattrociocchi, D.; Rover, S.; Beltrami, A. Mobile Terrestrial LiDAR survey for rockfall risk management along a highway. Procedia Struct. Integr. 2024, 62, 499–505. [Google Scholar] [CrossRef]
  152. Suleymanoglu, B.; Soycan, M.; Toth, C. 3D road boundary extraction based on machine learning strategy using LiDAR and image-derived MMS point clouds. Sensors 2024, 24, 503. [Google Scholar] [CrossRef] [PubMed]
  153. Lisjak, J.; Petrinović, M.; Keleminec, S. Harnessing Remote Sensing Technologies for Successful Large-Scale Projects. Teh. Glas. 2024, 18, 104–109. [Google Scholar] [CrossRef]
  154. Sjölander, A.; Belloni, V.; Ansell, A.; Nordström, E. Towards automated inspections of tunnels: A review of optical inspections and autonomous assessment of concrete tunnel linings. Sensors 2023, 23, 3189. [Google Scholar] [CrossRef]
  155. Menna, F.; Battisti, R.; Nocerino, E.; Remondino, F. FROG: A portable underwater mobile mapping system. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 295–302. [Google Scholar] [CrossRef]
  156. Farkoushi, M.G.; Hong, S.; Sohn, H.-G. Generating Seamless Three-Dimensional Maps by Integrating Low-Cost Unmanned Aerial Vehicle Imagery and Mobile Mapping System Data. Sensors 2025, 25, 822. [Google Scholar] [CrossRef]
  157. Barros-Sobrín, Á.; Balado, J.; Soilán, M.; Mingueza-Bauzá, E. Gamification for road asset inspection from Mobile Mapping System data. J. Spat. Sci. 2024, 69, 443–466. [Google Scholar] [CrossRef]
  158. Peters, T.; Brenner, C.; Schindler, K. Semantic segmentation of mobile mapping point clouds via multi-view label transfer. ISPRS J. Photogramm. Remote Sens. 2023, 202, 30–39. [Google Scholar] [CrossRef]
  159. Hyyppä, E.; Manninen, P.; Maanpää, J.; Taher, J.; Litkey, P.; Hyyti, H.; Kukko, A.; Kaartinen, H.; Ahokas, E.; Yu, X. Can the perception data of autonomous vehicles be used to replace mobile mapping surveys?—A case study surveying roadside city trees. Remote Sens. 2023, 15, 1790. [Google Scholar] [CrossRef]
  160. Zhang, Y.; Kang, J.; Sohn, G. PVL-Cartographer: Panoramic vision-aided lidar cartographer-based slam for maverick mobile mapping system. Remote Sens. 2023, 15, 3383. [Google Scholar] [CrossRef]
  161. Maté-González, M.Á.; Di Pietra, V.; Piras, M. Evaluation of different LiDAR technologies for the documentation of forgotten cultural heritage under forest environments. Sensors 2022, 22, 6314. [Google Scholar] [CrossRef] [PubMed]
  162. Oleiwi, B.K.; Mahfuz, A.; Roth, H. Application of fuzzy logic for collision avoidance of mobile robots in dynamic-indoor environments. In Proceedings of the 2021 2nd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), Dhaka, Bangladesh, 5–7 January 2021; pp. 131–136. [Google Scholar]
  163. Schuman, C.D.; Kulkarni, S.R.; Parsa, M.; Mitchell, J.P.; Date, P.; Kay, B. Opportunities for neuromorphic computing algorithms and applications. Nat. Comput. Sci. 2022, 2, 10–19. [Google Scholar] [CrossRef] [PubMed]
  164. Gallego, G.; Delbrück, T.; Orchard, G.; Bartolozzi, C.; Taba, B.; Censi, A.; Leutenegger, S.; Davison, A.J.; Conradt, J.; Daniilidis, K. Event-based vision: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 154–180. [Google Scholar] [CrossRef] [PubMed]
  165. Reichert, M.; Di Candia, R.; Win, M.Z.; Sanz, M. Quantum-enhanced Doppler lidar. npj Quantum Inf. 2022, 8, 147. [Google Scholar] [CrossRef]
Figure 1. A mobile mapping system developed by Google, mounted on a car. The customized LiDAR configuration on Google's car shows how the system is designed to collect data suitable for a 360° virtual-tour web portal.
Figure 2. (a) Customized mobile mapping system developed by FGI (Dr. Yuwei Chen); (b) FGI Roamer mounted on a car; (c) FGI Sensei (images (b) and (c) courtesy of NLS).
Figure 3. (a) Viametris MS-96 mobile mapping system (image courtesy of Viametris); (b) Leica Pegasus TRK 100; (c) Leica Pegasus TRK Neo (images (b) and (c) courtesy of Leica).
Figure 4. Leica Pegasus TRK 100, painted point cloud example. Image courtesy of Leica.
Figure 5. (Left): RIEGL VMX-2HA (image courtesy of RIEGL); (right): Trimble MX50.
Table 1. Technologies employed in the sensors used in a mobile mapping system. Within each section, similar technologies share the same color.

Sensor: Technologies
- Compass, magnetometer, accelerometer, pressure, temperature: Mechanical; MEMS; Ultrasonic; Electrical; Flash light
- GNSS: Differential; RTK; PPP; PPP-RTK; NRTK; Multi-constellation; Multi-antenna
- IMU: Mechanical; FOG; MEMS; RLG; Kalman filter
- LiDAR: Non-scanning (Non-mechanical; Multi-temporal; Hyper-temporal; MEMS); Scanning (Mechanical; OPA; Motorized optomechanical); SLAM
- Camera: RGB; Infrared; Multi-spectral; Multi-camera; Multi-fisheye; RGB-D; Hyper-spectral; Multi-projective; Geometric calibration; Radiometric calibration; SLAM
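Readers who want to query this taxonomy programmatically can encode the sensor-to-technology mapping of Table 1 as a simple data structure. The sketch below is an illustrative Python encoding only: the dictionary name, the helper function, and the grouping of SLAM under its own key are our assumptions, not part of any published software.

```python
# Illustrative encoding of Table 1: MMS sensor families mapped to the
# technology variants reviewed in this paper.
MMS_SENSOR_TAXONOMY = {
    "Compass/Magnetometer/Accelerometer/Pressure/Temperature": [
        "Mechanical", "MEMS", "Ultrasonic", "Electrical", "Flash light",
    ],
    "GNSS": [
        "Differential", "RTK", "PPP", "PPP-RTK", "NRTK",
        "Multi-constellation", "Multi-antenna",
    ],
    "IMU": ["Mechanical", "FOG", "MEMS", "RLG", "Kalman filter"],
    # LiDAR is subdivided as in the table; placing SLAM in its own
    # group is a presentational choice made here.
    "LiDAR": {
        "Non-scanning": ["Non-mechanical", "Multi-temporal",
                         "Hyper-temporal", "MEMS"],
        "Scanning": ["Mechanical", "OPA", "Motorized optomechanical"],
        "Processing": ["SLAM"],
    },
    "Camera": [
        "RGB", "Infrared", "Multi-spectral", "Multi-camera",
        "Multi-fisheye", "RGB-D", "Hyper-spectral", "Multi-projective",
        "Geometric calibration", "Radiometric calibration", "SLAM",
    ],
}

def technologies_for(sensor: str) -> list[str]:
    """Flatten the technology entries recorded for a given sensor family."""
    entry = MMS_SENSOR_TAXONOMY[sensor]
    if isinstance(entry, dict):
        return [tech for group in entry.values() for tech in group]
    return list(entry)

# Example: list all LiDAR technology variants, regardless of subgroup.
print(technologies_for("LiDAR"))
```

Such an encoding makes the taxonomy easy to filter or cross-reference, e.g., when tabulating which technologies appear under more than one sensor family.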

Share and Cite

Khoramshahi, E.; Nezami, S.; Pellikka, P.; Honkavaara, E.; Chen, Y.; Habib, A. A Taxonomy of Sensors, Calibration and Computational Methods, and Applications of Mobile Mapping Systems: A Comprehensive Review. Remote Sens. 2025, 17, 1502. https://doi.org/10.3390/rs17091502

