
Special Issue "Computational Intelligence and Intelligent Contents (CIIC)"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: 30 November 2020.

Special Issue Editors

Prof. Dr. Chang Choi
Guest Editor
Computer Engineering, Gachon University, Sungnam, Korea
Interests: Intelligent Information Processing; Information Security; Smart Sensor Networks
Prof. Dr. Hoon Ko
Guest Editor
IT Research Institute, Chosun University, Gwangju, Korea
Interests: Cyber-Security; Smart Cities; Human Relationships Based on Context
Prof. Dr. Xin Su
Guest Editor
College of IoT Engineering, Hohai University, Changzhou, China
Interests: mobile communication; 5G systems; edge computing/fog computing; Internet of Things applications; sensor networks
Prof. Dr. Christian Esposito
Guest Editor
University of Salerno, Italy
Interests: Distributed Systems; Middleware; Dependability; Ubiquitous Computing and Artificial Intelligence; approaches and techniques to guarantee event deliveries in Internet-scale publish/subscribe services

Special Issue Information

There are various mobility scenarios for users, such as moving along streets or within buildings, where context-related data are of key importance for offering a personalized use of ICT solutions. As a concrete example, a user approaching a specific building on a university campus may want to receive notifications of the classes being held there, or a person moving within a museum may want to be advised of the points of interest in the approaching rooms so as to plan their visit according to their interests. However, such ubiquitous and pervasive systems face problems caused by the presence of unreliable information, characterized by a certain degree of uncertainty and imprecision. Position information is typically computed by a positioning system and carries a certain amount of error. Such error can lead to an incorrect computation of the user's trajectory, causing the wrong contextual information to be used and incorrect data to be provided to the user, who may then make wrong decisions based on them. Moreover, user inputs to such systems are rarely precise and numerical; they are often vague and expressed in natural language. This demands a certain intelligence within the system to manage subjective user inputs and untrustworthy contextual information. Such computational intelligence may encompass fuzzy and rough set theory, evolutionary and smart computing, and approximate reasoning. However, these methods should not be hosted only within the cloud but also closer to the user devices, following the fog/edge computing model, in order to devise intelligent environments that deal with a large number of interacting nodes and/or large volumes of generated data.
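
A minimal, self-contained Python sketch of this kind of computational intelligence follows. It is not taken from any paper in this issue, and every membership parameter and threshold is invented for illustration; it only shows how fuzzy set theory could map a vague natural-language notion such as "nearby" onto a graded relevance score that also discounts for positioning error:

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    if b <= x <= c:
        return 1.0
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def nearby(distance_m):
    """Degree to which a distance (metres) counts as "nearby" (illustrative parameters)."""
    return trapezoid(distance_m, 0.0, 0.0, 30.0, 80.0)

def position_confidence(error_m, max_error_m=15.0):
    """Discount linearly as the positioning system's reported error grows."""
    return max(0.0, 1.0 - error_m / max_error_m)

def relevance(distance_m, error_m):
    """Combine both degrees with a product t-norm (one common fuzzy conjunction)."""
    return nearby(distance_m) * position_confidence(error_m)

for d, e in [(10, 2), (50, 5), (70, 12)]:
    print(f"distance={d} m, error={e} m -> relevance={relevance(d, e):.2f}")

A deployed system would calibrate such membership functions from data; the point is only that vague inputs and noisy positions become graded, composable quantities rather than hard thresholds.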

Computational Intelligence and Intelligent Contents (CIIC) intends to bring together academic researchers and industrial practitioners to report progress in the development of computational intelligence and intelligent contents. As such, this Special Issue focuses on methods of Computational Intelligence and Intelligent Contents applied to fog/edge computing, and on their applications in intelligent content paradigms. We are soliciting contributions on (but not limited to) the following topics of interest, which may be extended and/or modified:

  • Computational Intelligence
  • Applied Soft Computing, Fuzzy Logic, and Artificial Neural Networks
  • Intelligent Contents Security and Cyber-Security
  • Model Driven Architecture and Meta-Modeling
  • Multimedia Contents Processing and Retrieval
  • Sensors and Networks (Wireless Ad Hoc Networks, Vehicular Networks)
  • Big Data and Intelligent Information Processing
  • Convergence/Complex Contents and Smart Learning
  • Culture Design, Universal Design, UI/UX, Interaction Design, and Information Theory
  • Intelligent Contents Design Management, Methodology, and Design Theory
  • Intelligent Media Contents Convergence/Complex Media
  • Social Media and Collective Intelligence
  • Social Media Big Data Analytics
  • Bio-Information Management and Security
Prof. Dr. Chang Choi
Prof. Dr. Hoon Ko
Prof. Dr. Xin Su
Prof. Dr. Christian Esposito
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


Research

Open Access Article
Spatio-Temporal Representation of an Electroencephalogram for Emotion Recognition Using a Three-Dimensional Convolutional Neural Network
Sensors 2020, 20(12), 3491; https://doi.org/10.3390/s20123491 - 20 Jun 2020
Abstract
Emotion recognition plays an important role in the field of human–computer interaction (HCI). An electroencephalogram (EEG) is widely used to estimate human emotion owing to its convenience and mobility. Deep neural network (DNN) approaches using an EEG for emotion recognition have recently shown remarkable improvement in terms of their recognition accuracy. However, most studies in this field still require a separate process for extracting handcrafted features, despite the ability of a DNN to extract meaningful features by itself. In this paper, we propose a novel method for recognizing an emotion based on the use of three-dimensional convolutional neural networks (3D CNNs), with an efficient spatio-temporal representation of EEG signals. First, we spatially reconstruct raw EEG signals, represented as stacks of one-dimensional (1D) time series data, into two-dimensional (2D) EEG frames according to the original electrode positions. We then represent a 3D EEG stream by concatenating the 2D EEG frames along the time axis. These 3D reconstructions of the raw EEG signals can be efficiently combined with 3D CNNs, which have shown a remarkable ability to learn feature representations from spatio-temporal data. Herein, we demonstrate the accuracy of the emotional classification of the proposed method through extensive experiments on the DEAP (a Dataset for Emotion Analysis using EEG, Physiological, and video signals) dataset. Experimental results show that the proposed method achieves classification accuracies of 99.11%, 99.74%, and 99.73% in the binary classification of valence, the binary classification of arousal, and the four-class classification, respectively. We investigate the spatio-temporal effectiveness of the proposed method by comparing it to several types of input methods with 2D/3D CNNs. We then experimentally verify the best-performing shapes of both the kernel and the input data. We verify that an efficient representation of an EEG and a network that fully takes advantage of the data characteristics can outperform methods that apply handcrafted features.
(This article belongs to the Special Issue Computational Intelligence and Intelligent Contents (CIIC))
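
To make the reshaping step concrete, below is a minimal NumPy sketch of the 1D-to-2D-to-3D reconstruction the abstract describes. The grid size and the channel-to-position mapping are toy placeholders, not the electrode montage actually used in the paper:

import numpy as np

# Toy mapping from channel index to a (row, col) cell on a 2D scalp grid.
# A real mapping would follow the physical electrode montage (e.g., the 10-20 system).
ELECTRODE_POS = {0: (0, 1), 1: (1, 0), 2: (1, 2), 3: (2, 1)}
GRID = (3, 3)

def to_3d_stream(eeg):
    """eeg: (channels, samples) raw signals -> (samples, rows, cols) 3D stream."""
    n_ch, n_t = eeg.shape
    assert max(ELECTRODE_POS) < n_ch
    frames = np.zeros((n_t, *GRID), dtype=eeg.dtype)
    for ch, (r, c) in ELECTRODE_POS.items():
        frames[:, r, c] = eeg[ch]  # place each 1D time series at its electrode's cell
    return frames  # 2D EEG frames concatenated along the time axis

rng = np.random.default_rng(0)
raw = rng.standard_normal((4, 128))  # 4 channels, 128 samples
print(to_3d_stream(raw).shape)       # (128, 3, 3): ready as 3D CNN input

A 3D CNN can then convolve jointly over the two spatial axes and the time axis of this stream, which is what lets it learn spatio-temporal features without handcrafted preprocessing.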

Open Access Feature Paper Article
Learning Hierarchical Representations of Stories by Using Multi-Layered Structures in Narrative Multimedia
Sensors 2020, 20(7), 1978; https://doi.org/10.3390/s20071978 - 01 Apr 2020
Abstract
Narrative works (e.g., novels and movies) consist of various utterances (e.g., scenes and episodes) with multi-layered structures. However, existing studies have aimed to embed only whole stories in a narrative work. By covering other granularity levels, we can easily compare narrative utterances that are coarser (e.g., movie series) or finer (e.g., scenes) than a narrative work. We apply these multi-layered structures to learning hierarchical representations of the narrative utterances. To represent coarser utterances, we consider the adjacency and appearance of finer utterances within the coarser ones. For movies, we suppose a four-layered structure (character roles ∈ characters ∈ scenes ∈ movies) and propose three learning methods bridging the layers: Char2Vec, Scene2Vec, and Hierarchical Story2Vec. Char2Vec represents a character by using dynamic changes in the character's roles. To find the character roles, we use substructures of character networks (i.e., dynamic social networks of characters). A scene describes an event, and interactions between the characters in the scene are designed to describe the event. Scene2Vec learns representations of a scene from the interactions between characters in the scene. A story is a series of events, and the meaning of the story is affected by the order of the events as well as their content. Hierarchical Story2Vec uses the sequential order of scenes to represent stories. The proposed model has been evaluated by estimating the similarity between narrative utterances in real movies.
(This article belongs to the Special Issue Computational Intelligence and Intelligent Contents (CIIC))
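
The layering idea can be sketched in a few lines of Python. Everything below (the embedding dimension, the character vectors, the interaction weights, the decay-based ordering) is an invented stand-in for the paper's Char2Vec, Scene2Vec, and Hierarchical Story2Vec models; it only shows how finer-level vectors can roll up into coarser ones:

import numpy as np

# Stand-in character embeddings (in the paper these come from Char2Vec,
# learned from substructures of dynamic character networks).
rng = np.random.default_rng(42)
char_vec = {name: rng.standard_normal(8) for name in ("alice", "bob", "carol")}

def scene_vec(interactions):
    """A scene as a weighted mean over interacting character pairs.
    interactions: list of (char_a, char_b, weight) tuples."""
    total = sum(w for _, _, w in interactions)
    return sum(w * (char_vec[a] + char_vec[b]) / 2 for a, b, w in interactions) / total

def story_vec(scenes, decay=0.8):
    """A story as an order-aware, exponentially decayed sum of scene vectors
    (a crude proxy for the sequential modelling in Hierarchical Story2Vec)."""
    acc = np.zeros(8)
    for t, s in enumerate(scenes):
        acc += (decay ** (len(scenes) - 1 - t)) * s  # later scenes weigh more
    return acc / np.linalg.norm(acc)

scenes = [scene_vec([("alice", "bob", 3.0)]),
          scene_vec([("bob", "carol", 1.0), ("alice", "carol", 2.0)])]
print(story_vec(scenes).round(2))

Swapping the order of the two scenes changes the story vector, which is exactly the order sensitivity the hierarchical model is designed to capture.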
