Search Results (6)

Search Parameters:
Authors = Tania Landes

23 pages, 2042 KiB  
Article
StructScan3D v1: A First RGB-D Dataset for Indoor Building Elements Segmentation and BIM Modeling
by Ishraq Rached, Rafika Hajji, Tania Landes and Rashid Haffadi
Sensors 2025, 25(11), 3461; https://doi.org/10.3390/s25113461 - 30 May 2025
Viewed by 940
Abstract
The integration of computer vision and deep learning into Building Information Modeling (BIM) workflows has created a growing need for structured datasets that enable the semantic segmentation of indoor building elements. This paper presents StructScan3D v1, the first version of an RGB-D dataset specifically designed to facilitate the automated segmentation and modeling of architectural and structural components. Captured using the Kinect Azure sensor, StructScan3D v1 comprises 2594 annotated frames from diverse indoor environments, including residential and office spaces. The dataset focuses on six key building elements: walls, floors, ceilings, windows, doors, and miscellaneous objects. To establish a benchmark for indoor RGB-D semantic segmentation, we evaluate D-Former, a transformer-based model that leverages self-attention mechanisms for enhanced spatial understanding. Additionally, we compare its performance against state-of-the-art models such as Gemini and TokenFusion, providing a comprehensive analysis of segmentation accuracy. Experimental results show that D-Former achieves a mean Intersection over Union (mIoU) of 67.5%, demonstrating strong segmentation capabilities despite challenges like occlusions and depth variations. As an evolving dataset, StructScan3D v1 lays the foundation for future expansions, including increased scene diversity and refined annotations. By bridging the gap between deep learning-driven segmentation and real-world BIM applications, this dataset provides researchers and practitioners with a valuable resource for advancing indoor scene reconstruction, robotics, and augmented reality.
(This article belongs to the Section Sensing and Imaging)
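
The segmentation quality above is reported as mean Intersection over Union (mIoU). As a minimal sketch of how this metric is typically computed for a labelled frame, assuming integer label maps and the six classes named in the abstract (the arrays below are synthetic, not StructScan3D data):

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Mean Intersection over Union across classes, skipping classes absent from both maps."""
    ious = []
    for c in range(num_classes):
        pred_c = (pred == c)
        gt_c = (gt == c)
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:          # class not present in prediction or ground truth
            continue
        inter = np.logical_and(pred_c, gt_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy example with the six classes mentioned in the abstract
# (0 wall, 1 floor, 2 ceiling, 3 window, 4 door, 5 miscellaneous).
pred = np.random.randint(0, 6, size=(480, 640))
gt = np.random.randint(0, 6, size=(480, 640))
print(f"mIoU = {mean_iou(pred, gt, num_classes=6):.3f}")
```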

19 pages, 7980 KiB  
Article
Indoor 3D Reconstruction of Buildings via Azure Kinect RGB-D Camera
by Chaimaa Delasse, Hamza Lafkiri, Rafika Hajji, Ishraq Rached and Tania Landes
Sensors 2022, 22(23), 9222; https://doi.org/10.3390/s22239222 - 27 Nov 2022
Cited by 8 | Viewed by 3958
Abstract
With the development of 3D vision techniques, RGB-D cameras are increasingly used to allow easier and cheaper access to the third dimension. In this paper, we focus on testing the potential of the Kinect Azure RGB-D camera in the 3D reconstruction of indoor scenes. First, a series of investigations of the hardware was performed to evaluate its accuracy and precision. The results show that the measurements made with the Azure could be exploited for close-range survey applications. Second, we developed a methodological workflow for indoor reconstruction based on the Open3D framework and applied it to two different indoor scenes. Based on the results, we can state that the quality of 3D reconstruction significantly depends on the architecture of the captured scene. This was supported by a comparison of the point cloud from the Kinect Azure with that from a terrestrial laser scanner and another from a mobile laser scanner. The results show that the average differences do not exceed 8 mm, which confirms that the Kinect Azure can be considered a 3D measurement system at least as reliable as a mobile laser scanner.
(This article belongs to the Section Intelligent Sensors)
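
The abstract describes an Open3D-based workflow and a comparison of the Kinect Azure point cloud with laser scanner references. A minimal sketch of such a cloud-to-cloud comparison is given below; it is not the authors' exact pipeline, and the file names and thresholds are placeholders:

```python
import numpy as np
import open3d as o3d

# Hypothetical file names; the data from the paper are not distributed here.
kinect = o3d.io.read_point_cloud("kinect_azure_scene.ply")
reference = o3d.io.read_point_cloud("tls_reference.ply")

# Fine alignment with point-to-plane ICP (assumes the clouds are roughly pre-aligned).
reference.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
icp = o3d.pipelines.registration.registration_icp(
    kinect, reference, max_correspondence_distance=0.05,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
kinect.transform(icp.transformation)

# Cloud-to-cloud differences: nearest-neighbour distance from each Kinect point
# to the reference cloud, reported in millimetres.
distances = np.asarray(kinect.compute_point_cloud_distance(reference))
print(f"mean difference: {distances.mean() * 1000:.1f} mm, "
      f"95th percentile: {np.percentile(distances, 95) * 1000:.1f} mm")
```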

18 pages, 3414 KiB  
Article
Towards Semantic Photogrammetry: Generating Semantically Rich Point Clouds from Architectural Close-Range Photogrammetry
by Arnadi Murtiyoso, Eugenio Pellis, Pierre Grussenmeyer, Tania Landes and Andrea Masiero
Sensors 2022, 22(3), 966; https://doi.org/10.3390/s22030966 - 26 Jan 2022
Cited by 31 | Viewed by 4844
Abstract
Developments in artificial intelligence have made great strides in automatic semantic segmentation, both in the 2D (image) and 3D spaces. Within the context of 3D recording technology, it has also seen application in several areas, most notably in creating semantically rich point clouds, a task that is usually performed manually. In this paper, we propose the introduction of deep learning-based semantic image segmentation into the photogrammetric 3D reconstruction and classification workflow. The main objective is to introduce semantic classification at the beginning of the classical photogrammetric workflow in order to automatically create classified dense point clouds by the end of that workflow. In this regard, automatic image masking based on pre-determined classes was performed using a previously trained neural network. The image masks were then employed during dense image matching in order to constrain the process to the respective classes, thus automatically creating semantically classified point clouds as the final output. Results show that the developed method is promising, with automation of the whole process feasible from input (images) to output (labelled point clouds). Quantitative assessment gave good results for specific classes, e.g., building facades and windows, with IoU scores of 0.79 and 0.77, respectively.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
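
The abstract describes masking the input images per class before dense image matching. A minimal sketch of producing such per-class binary masks from a predicted label map is shown below; file names and class ids are hypothetical, and the dense matching itself is left to the photogrammetric software:

```python
import numpy as np
import cv2  # OpenCV, used here only for image output

# Hypothetical input: a per-pixel class map produced by a trained segmentation
# network for one photogrammetric image (class ids are illustrative).
CLASSES = {1: "facade", 2: "window"}
label_map = np.load("img_0001_labels.npy")  # shape (H, W), integer class ids

# Write one binary mask per class; dense matching can then be restricted to the
# pixels of a single class, yielding per-class dense point clouds.
for class_id, name in CLASSES.items():
    mask = np.where(label_map == class_id, 255, 0).astype(np.uint8)
    cv2.imwrite(f"img_0001_mask_{name}.png", mask)
```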

30 pages, 3892 KiB  
Article
From Point Clouds to Building Information Models: 3D Semi-Automatic Reconstruction of Indoors of Existing Buildings
by Hélène Macher, Tania Landes and Pierre Grussenmeyer
Appl. Sci. 2017, 7(10), 1030; https://doi.org/10.3390/app7101030 - 12 Oct 2017
Cited by 244 | Viewed by 16110
Abstract
The creation of as-built Building Information Models requires the acquisition of the as-is state of existing buildings. Laser scanners are widely used to achieve this goal since they make it possible to collect information about object geometry in the form of point clouds, providing a large amount of accurate data very quickly and with a high level of detail. Unfortunately, the scan-to-BIM (Building Information Model) process currently remains largely a manual process, which is time-consuming and error-prone. In this paper, a semi-automatic approach is presented for the 3D reconstruction of the indoors of existing buildings from point clouds. Several segmentations are performed so that point clouds corresponding to grounds, ceilings and walls are extracted. Based on these point clouds, walls and slabs of buildings are reconstructed and described in the IFC format in order to be integrated into BIM software. The approach is assessed on two datasets. The evaluation items are the degree of automation, the transferability of the approach and the geometric quality of the 3D reconstruction results. Additionally, quality indexes are introduced to inspect the results and detect potential reconstruction errors.
(This article belongs to the Special Issue Laser Scanning)
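
The abstract describes extracting grounds, ceilings and walls from point clouds before reconstruction. The sketch below illustrates one common way to approach this (iterative RANSAC plane extraction with Open3D, classified by plane orientation); it is not the authors' algorithm, and the thresholds and the floor/ceiling height heuristic are arbitrary:

```python
import numpy as np
import open3d as o3d

# Hypothetical input file; the paper's datasets are not reproduced here.
pcd = o3d.io.read_point_cloud("indoor_scan.ply")

remaining = pcd
segments = []
for _ in range(10):  # extract up to 10 dominant planes
    if len(remaining.points) < 1000:
        break
    model, inliers = remaining.segment_plane(distance_threshold=0.02,
                                             ransac_n=3,
                                             num_iterations=1000)
    nz = abs(model[2])  # z component of the plane normal (z-up assumed)
    segment = remaining.select_by_index(inliers)
    # Crude classification: horizontal planes are floor/ceiling candidates,
    # vertical planes are wall candidates; the 1.0 m split is an arbitrary heuristic.
    if nz > 0.9:
        label = "ground" if np.asarray(segment.points)[:, 2].mean() < 1.0 else "ceiling"
    elif nz < 0.1:
        label = "wall"
    else:
        label = "other"
    segments.append((label, segment))
    remaining = remaining.select_by_index(inliers, invert=True)

for label, seg in segments:
    print(label, len(seg.points))
```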

23 pages, 12155 KiB  
Article
Investigation of a Combined Surveying and Scanning Device: The Trimble SX10 Scanning Total Station
by Elise Lachat, Tania Landes and Pierre Grussenmeyer
Sensors 2017, 17(4), 730; https://doi.org/10.3390/s17040730 - 31 Mar 2017
Cited by 35 | Viewed by 9757
Abstract
Surveying fields from geosciences to infrastructure monitoring make use of a wide range of instruments for accurate 3D geometry acquisition. In many cases, the Terrestrial Laser Scanner (TLS) tends to become an optimal alternative to total station measurements thanks to its high point acquisition rate, but also to increasingly capable data processing software. Nevertheless, traditional surveying techniques remain valuable in some kinds of projects. Nowadays, a few modern total stations combine their conventional capabilities with those of a laser scanner in a single device. The recent Trimble SX10 scanning total station is a survey instrument merging high-speed 3D scanning with the capabilities of an image-assisted total station. In this paper, this new instrument is introduced and first compared to state-of-the-art image-assisted total stations. The paper also addresses various laser scanning projects, and the delivered point clouds are compared with those of other TLSs. Directly and indirectly georeferenced projects have been carried out and are investigated in this paper, and a polygonal traverse is performed through a building. Comparisons with the results delivered by well-established survey instruments show the reliability of the Trimble SX10 for geodetic work as well as for scanning projects.
(This article belongs to the Section Remote Sensors)
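
The abstract mentions that a polygonal traverse was performed through a building with the SX10. As a reminder of the underlying computation, here is a minimal open-traverse coordinate sketch; all angles, distances and starting coordinates are invented, not values from the paper:

```python
import math

# Illustrative open traverse: (azimuth in gon, horizontal distance in m) per leg.
legs = [(50.0, 12.34), (150.0, 8.21), (275.0, 15.07)]

def gon_to_rad(angle_gon: float) -> float:
    """Convert gon (400 gon = full circle) to radians."""
    return angle_gon * math.pi / 200.0

east, north = 1000.0, 2000.0  # assumed coordinates of the starting station
for azimuth, dist in legs:
    # Azimuth is reckoned from north, so easting uses sin and northing uses cos.
    east += dist * math.sin(gon_to_rad(azimuth))
    north += dist * math.cos(gon_to_rad(azimuth))
    print(f"E = {east:.3f} m, N = {north:.3f} m")
```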

28 pages, 5593 KiB  
Article
Assessment and Calibration of a RGB-D Camera (Kinect v2 Sensor) Towards a Potential Use for Close-Range 3D Modeling
by Elise Lachat, Hélène Macher, Tania Landes and Pierre Grussenmeyer
Remote Sens. 2015, 7(10), 13070-13097; https://doi.org/10.3390/rs71013070 - 1 Oct 2015
Cited by 188 | Viewed by 19220
Abstract
In the last decade, RGB-D cameras, also called range imaging cameras, have evolved continuously. Because of their limited cost and their ability to measure distances at a high frame rate, such sensors are especially appreciated for applications in robotics or computer vision. The release of the Kinect v1 (Microsoft) in November 2010 promoted the use of RGB-D cameras, and a second version of the sensor arrived on the market in July 2014. Since point clouds of an observed scene can be obtained at a high frequency, this type of sensor could be applied to answer the need for 3D acquisition. However, due to the technology involved, some questions have to be considered, such as the suitability and accuracy of RGB-D cameras for close-range 3D modeling. In this respect, the quality of the acquired data is a major concern. In this paper, the use of the recent Kinect v2 sensor to reconstruct small objects in three dimensions is investigated. To achieve this goal, a survey of the sensor characteristics as well as a calibration approach are presented. After an accuracy assessment of the produced models, the benefits and drawbacks of the Kinect v2 compared to the first version of the sensor and then to photogrammetry are discussed.
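
One common way to assess the depth accuracy of such a sensor, in the spirit of the accuracy assessment mentioned in the abstract, is to scan a planar target, fit a plane and inspect the residuals. A minimal sketch with synthetic data follows; the numbers are illustrative, not results from the paper:

```python
import numpy as np

def plane_fit_residuals(points: np.ndarray) -> np.ndarray:
    """Fit a best plane (total least squares via SVD) and return signed residuals in metres."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of smallest variance = plane normal
    return (points - centroid) @ normal   # orthogonal distances to the fitted plane

# Synthetic "flat wall" with 2 mm Gaussian depth noise, standing in for points
# measured on a planar target with an RGB-D camera.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(5000, 2))
z = 1.5 + rng.normal(0, 0.002, size=5000)
points = np.column_stack([xy, z])

res = plane_fit_residuals(points)
print(f"RMS plane residual: {res.std() * 1000:.2f} mm")
```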