Open Access Article

Indoor Scene Change Captioning Based on Multimodality Data

1 Graduate School of Science and Technology, University of Tsukuba, Tsukuba 305-8577, Japan
2 National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan
* Author to whom correspondence should be addressed.
Sensors 2020, 20(17), 4761; https://doi.org/10.3390/s20174761
Received: 31 July 2020 / Revised: 17 August 2020 / Accepted: 20 August 2020 / Published: 23 August 2020
(This article belongs to the Special Issue Sensor Signal and Information Processing III)
This study proposes a framework for describing a scene change in natural language text based on indoor scene observations made before and after the change. Recognizing scene changes plays an essential role in a variety of real-world applications, such as scene anomaly detection. Most scene understanding research has focused on static scenes, and most existing scene change captioning methods detect changes from single-view RGB images, neglecting the underlying three-dimensional structure. Previous three-dimensional scene change captioning methods rely on simulated scenes composed of geometric primitives, making them unsuitable for real-world applications. To address these problems, we automatically generated large-scale indoor scene change caption datasets. We propose an end-to-end framework for describing scene changes from various input modalities, namely RGB images, depth images, and point cloud data, which are available in most robot applications. We conducted experiments with various input modalities and models and evaluated model performance on datasets with varying levels of complexity. Experimental results show that models combining RGB images and point cloud data as input achieve high performance in sentence generation and caption correctness and are robust in change type understanding on datasets with high complexity. The developed datasets and models contribute to the study of indoor scene change understanding.
Keywords: image captioning; three-dimensional (3D) vision; deep learning; human-robot interaction
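
To make the multimodal encoder-decoder idea in the abstract concrete, the following is a minimal PyTorch sketch of a change captioner that encodes before/after RGB images and point clouds, fuses the features, and decodes a caption with an LSTM. All module names, dimensions, and the fusion scheme are illustrative assumptions; they do not reproduce the architecture described in the paper.

# Illustrative sketch only: a minimal multimodal change-captioning model in PyTorch.
# Names, dimensions, and the fusion scheme are assumptions for demonstration.
import torch
import torch.nn as nn


class RGBEncoder(nn.Module):
    """Encodes an RGB image into a fixed-length feature vector."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, img):                       # img: (B, 3, H, W)
        return self.fc(self.conv(img).flatten(1))


class PointCloudEncoder(nn.Module):
    """PointNet-style encoder: shared MLP over points, then max pooling."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, pts):                       # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values    # (B, feat_dim)


class ChangeCaptioner(nn.Module):
    """Fuses before/after RGB and point-cloud features, then decodes a caption."""
    def __init__(self, vocab_size, feat_dim=256, hidden=512):
        super().__init__()
        self.rgb_enc = RGBEncoder(feat_dim)
        self.pcd_enc = PointCloudEncoder(feat_dim)
        self.fuse = nn.Linear(4 * feat_dim, hidden)   # before/after x two modalities
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, rgb_before, rgb_after, pcd_before, pcd_after, captions):
        scene = torch.cat([
            self.rgb_enc(rgb_before), self.rgb_enc(rgb_after),
            self.pcd_enc(pcd_before), self.pcd_enc(pcd_after),
        ], dim=1)
        h0 = torch.tanh(self.fuse(scene)).unsqueeze(0)   # (1, B, hidden)
        c0 = torch.zeros_like(h0)
        emb = self.embed(captions)                       # (B, T, hidden)
        dec_out, _ = self.decoder(emb, (h0, c0))
        return self.out(dec_out)                         # (B, T, vocab)


if __name__ == "__main__":
    model = ChangeCaptioner(vocab_size=1000)
    logits = model(
        torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64),
        torch.randn(2, 2048, 3), torch.randn(2, 2048, 3),
        torch.randint(0, 1000, (2, 12)),
    )
    print(logits.shape)  # torch.Size([2, 12, 1000])

In practice, the RGB branch would typically be a pretrained backbone and the decoder would be trained with teacher forcing on the before/after caption pairs; this skeleton only fixes the input/output shapes for the three modalities the paper discusses.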

MDPI and ACS Style

Qiu, Y.; Satoh, Y.; Suzuki, R.; Iwata, K.; Kataoka, H. Indoor Scene Change Captioning Based on Multimodality Data. Sensors 2020, 20, 4761.

AMA Style

Qiu Y, Satoh Y, Suzuki R, Iwata K, Kataoka H. Indoor Scene Change Captioning Based on Multimodality Data. Sensors. 2020; 20(17):4761.

Chicago/Turabian Style

Qiu, Yue, Yutaka Satoh, Ryota Suzuki, Kenji Iwata, and Hirokatsu Kataoka. 2020. "Indoor Scene Change Captioning Based on Multimodality Data" Sensors 20, no. 17: 4761.

