Open Access Article

Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI

1 Applied Artificial Intelligence Group, Universidad Carlos III de Madrid, Avd. de la Universidad Carlos III, 22, Colmenarejo, Spain
* Author to whom correspondence should be addressed.
Sensors 2012, 12(9), 12126-12152; https://doi.org/10.3390/s120912126
Received: 2 May 2012 / Revised: 31 July 2012 / Accepted: 21 August 2012 / Published: 5 September 2012
Recent advances in technologies for capturing video data have opened up a vast number of new application areas in visual sensor networks. Among them, the incorporation of light wave cameras into Ambient Intelligence (AmI) environments provides more accurate tracking capabilities for activity recognition. Although the performance of tracking algorithms has improved rapidly, the symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This lack of representation prevents systems from taking advantage of the semantic quality of the information provided by the new sensors. This paper advocates the introduction of a part-based representational level in cognitive-based systems in order to accurately represent the knowledge captured by these novel sensors. The paper also reviews the theoretical and practical issues of part-whole relationships and proposes a specific taxonomy for computer vision approaches. General part-based patterns for the human body, together with transitive part-based representation and inference, are incorporated into a previous ontology-based framework to enhance scene interpretation in the area of video-based AmI. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application for the elaboration of live market research.
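The transitive part-based inference mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's ontology: the body-part names and the simple dictionary model below are assumptions chosen for illustration, mimicking the kind of reasoning an OWL transitive `partOf` property would provide.

```python
# Hypothetical direct part-of assertions for a human body model
# (each part is declared a direct part of exactly one whole).
PART_OF = {
    "Hand": "Arm",
    "Arm": "UpperBody",
    "Head": "UpperBody",
    "UpperBody": "Body",
    "Leg": "LowerBody",
    "LowerBody": "Body",
}

def wholes(part: str) -> list[str]:
    """Return every whole that transitively contains `part`,
    ordered from the nearest enclosing whole outward."""
    chain = []
    while part in PART_OF:
        part = PART_OF[part]
        chain.append(part)
    return chain

def is_part_of(part: str, whole: str) -> bool:
    """Transitive part-of check: True if `whole` contains `part`
    either directly or through a chain of intermediate parts."""
    return whole in wholes(part)
```

For example, `is_part_of("Hand", "Body")` returns `True` even though only `Hand → Arm` is asserted directly, because the chain `Arm → UpperBody → Body` is followed transitively — the same entailment a reasoner derives from a transitive object property.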
Keywords: visual sensor networks; light wave; structured light; time-of-flight; cognitive vision; ontology-based; ambient intelligence; social signal processing
MDPI and ACS Style

Serrano, M.Á.; Gómez-Romero, J.; Patricio, M.Á.; García, J.; Molina, J.M. Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI. Sensors 2012, 12, 12126-12152. https://doi.org/10.3390/s120912126
