Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI
Abstract
Recent advances in technologies for capturing video data have opened a vast number of new application areas in visual sensor networks. Among them, the incorporation of light wave cameras into Ambient Intelligence (AmI) environments provides more accurate tracking capabilities for activity recognition. Although the performance of tracking algorithms has improved rapidly, the symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This lack of representation prevents systems from taking advantage of the semantic quality of the information provided by new sensors. This paper advocates the introduction of a part-based representational level in cognitive-based systems in order to accurately represent the knowledge delivered by these novel sensors. The paper also reviews the theoretical and practical issues in part-whole relationships, proposing a specific taxonomy for computer vision approaches. General part-based patterns for the human body, together with transitive part-based representation and inference, are incorporated into a previously developed ontology-based framework to enhance scene interpretation in the area of video-based AmI. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application for conducting live market research.
Cite This Article
Serrano, M.Á.; Gómez-Romero, J.; Patricio, M.Á.; García, J.; Molina, J.M. Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI. Sensors 2012, 12, 12126-12152.