Special Issue "Advanced Computational Intelligence for Object Detection, Feature Extraction and Recognition in Smart Sensor Environments"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 30 January 2020.

Special Issue Editor

Guest Editor
Dr. Marcin Woźniak

Faculty of Applied Mathematics, Silesian University of Technology, Gliwice, Poland
Phone: 0048 032 237 13 41
Interests: computational intelligence; heuristic methods; neural networks

Special Issue Information

Dear Colleagues,

Recent years have brought vast developments in methodologies for object detection, feature extraction, and recognition, both in theory and in practice. When processing images, video, or other multimedia, we need efficient solutions that perform fast and reliable processing. Computational intelligence is used in medical screening, where the detection of disease symptoms is carried out; in preventive monitoring, to detect suspicious behavior; in agricultural systems, to assist with plant cultivation and animal breeding; in transportation systems, to control incoming and outgoing traffic; in unmanned vehicles, to detect obstacles and avoid collisions; in optics and materials science, to detect surface damage; and so on. In many cases, such techniques help us to recognize particular features of interest. In the context of this innovative research on computational intelligence, it is my pleasure to invite you to contribute to this Special Issue, which presents an excellent opportunity for the dissemination of your recent results and for cooperation toward further innovations.

Topics of interest:

  • Bio-inspired methods, deep learning, convolutional neural networks, hybrid architectures, etc.
  • Time series, fractional-order controllers, gradient field methods, surface reconstruction and other mathematical models for intelligent feature detection, extraction and recognition.
  • Embedded intelligent computer vision algorithms.
  • Activity recognition models for humans, nature, technology, and other objects.
  • Hyper-parameter learning and tuning, automatic calibration, hybrid and surrogate learning for computational intelligence in vision systems.
  • Intelligent video and image acquisition techniques.

Assoc. Prof. Marcin Woźniak
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research


Open Access Article
HEMIGEN: Human Embryo Image Generator Based on Generative Adversarial Networks
Sensors 2019, 19(16), 3578; https://doi.org/10.3390/s19163578
Received: 26 July 2019 / Revised: 13 August 2019 / Accepted: 14 August 2019 / Published: 16 August 2019
Abstract
We propose a method for generating synthetic images of human embryo cells that can later be used for classification, analysis, and training, thus creating new synthetic image datasets for research areas lacking real-world data. Our aim was not only to generate a generic image of a cell, but to ensure that it has all the necessary attributes of a real cell image, yielding a fully realistic synthetic version. We use human embryo images obtained during cell development processes to train a deep neural network (DNN). The proposed algorithm uses a generative adversarial network (GAN) to generate one-, two-, and four-cell stage images. We achieved a misclassification rate of 12.3% for the generated images, while expert evaluation showed true recognition rates (TRR) of 80.0% (four-cell images), 86.8% (two-cell images), and 96.2% (one-cell images). Texture-based comparison using Haralick features showed no statistically significant differences (Student's t-test, p < 0.01) between the real and synthetic embryo images, except for the sum of variance (one-cell and four-cell images) and the variance and sum of average (two-cell images) features. The obtained synthetic images can later be adapted to facilitate the development, training, and evaluation of new algorithms for embryo image processing tasks.
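
A minimal sketch of the texture check described above may help readers reproduce it: Haralick features are extracted from real and synthetic images and compared per feature with Student's t-test. The image loading and feature indexing are illustrative assumptions, not code from the paper.

```python
# Illustrative sketch (not code from the paper): compare one Haralick
# texture feature between real and synthetic embryo images with
# Student's t-test, as in the evaluation described above.
import mahotas
from scipy import stats

def haralick_vector(gray_uint8):
    # 13 Haralick features, averaged over the four co-occurrence directions
    return mahotas.features.haralick(gray_uint8).mean(axis=0)

def feature_differs(real_imgs, synth_imgs, feature_idx, alpha=0.01):
    real = [haralick_vector(im)[feature_idx] for im in real_imgs]
    synth = [haralick_vector(im)[feature_idx] for im in synth_imgs]
    _, p = stats.ttest_ind(real, synth)
    return p < alpha    # True: statistically significant difference at alpha
```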
Open Access Article
Multi-Scale Vehicle Detection for Foreground-Background Class Imbalance with Improved YOLOv2
Sensors 2019, 19(15), 3336; https://doi.org/10.3390/s19153336
Received: 22 June 2019 / Revised: 26 July 2019 / Accepted: 26 July 2019 / Published: 30 July 2019
Abstract
Vehicle detection is a challenging task in computer vision, and numerous vehicle detection methods have been proposed in recent years. Because vehicles may appear at varying sizes in a scene, and because the vehicles (foreground) and the background are imbalanced, detection performance suffers. To obtain better performance, this paper proposes a multi-scale vehicle detection method that improves YOLOv2. The main contributions are: (1) a new anchor-box generation method, Rk-means++, proposed to better adapt to vehicles of varying sizes and achieve multi-scale detection; (2) Focal Loss, introduced into YOLOv2 to reduce the negative influence on training caused by the imbalance between vehicles and background. Experimental results on the Beijing Institute of Technology (BIT)-Vehicle public dataset demonstrate that the proposed method obtains better vehicle localization and recognition performance than other existing methods.
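
The focal loss introduced here is the standard formulation of Lin et al.; a numpy sketch for the binary vehicle-vs-background case follows. The alpha/gamma defaults are the commonly used values, assumed rather than taken from the paper.

```python
# Focal loss sketch for binary vehicle-vs-background objectness (numpy).
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """p: predicted vehicle probabilities; y: 1 for vehicle, 0 for background."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)           # probability of the true class
    at = np.where(y == 1, alpha, 1.0 - alpha)   # class-balance weight
    # (1 - pt)^gamma down-weights easy, well-classified (mostly background) examples
    return -(at * (1.0 - pt) ** gamma * np.log(pt)).mean()
```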
Open Access Article
Citrus Pests and Diseases Recognition Model Using Weakly Dense Connected Convolution Network
Sensors 2019, 19(14), 3195; https://doi.org/10.3390/s19143195
Received: 18 June 2019 / Revised: 16 July 2019 / Accepted: 16 July 2019 / Published: 19 July 2019
Abstract
Pests and diseases can cause severe damage to citrus fruits. Farmers used to rely on experienced experts to recognize them, a time-consuming and costly process. With the popularity of image sensors and the development of computer vision technology, using convolutional neural network (CNN) models to identify pests and diseases has become a recent trend in agriculture. However, many researchers adopt models pre-trained on ImageNet for different recognition tasks without considering the scale of their own datasets, resulting in wasted computational resources. In this paper, a simple but effective CNN model was developed for our image dataset, designed with parameter efficiency in mind: the complexity of cross-channel operations was increased and the frequency of feature reuse was adapted to the network depth. Experimental results showed that Weakly DenseNet-16 achieved the highest classification accuracy with fewer parameters. Because the network is lightweight, it can be deployed on mobile devices.
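
The abstract does not give the exact topology, but the idea of adapting feature reuse to depth can be sketched as a dense block in which each layer concatenates only every second stored feature map instead of all of them. A hypothetical PyTorch sketch, with sizes and reuse rule assumed for illustration:

```python
# Hypothetical "weakly dense" block: sparse feature reuse (every
# `stride`-th stored map), rather than DenseNet's full connectivity.
import torch
import torch.nn as nn

class WeaklyDenseBlock(nn.Module):
    def __init__(self, in_ch, growth=32, n_layers=4, stride=2):
        super().__init__()
        self.stride = stride
        chans = [in_ch]
        self.layers = nn.ModuleList()
        for _ in range(n_layers):
            fan_in = sum(chans[::stride])   # only every `stride`-th feature map
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(fan_in), nn.ReLU(inplace=True),
                nn.Conv2d(fan_in, growth, kernel_size=3, padding=1)))
            chans.append(growth)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats[::self.stride], dim=1)))
        return feats[-1]

out = WeaklyDenseBlock(64)(torch.randn(1, 64, 32, 32))  # -> (1, 32, 32, 32)
```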
Open Access Article
Automatic Classification Using Machine Learning for Non-Conventional Vessels on Inland Waters
Sensors 2019, 19(14), 3051; https://doi.org/10.3390/s19143051
Received: 23 May 2019 / Revised: 2 July 2019 / Accepted: 8 July 2019 / Published: 10 July 2019
Abstract
The prevalent methods for monitoring ships are based on automatic identification and radar systems, which apply mainly to large vessels. Additional sensors include video cameras of various resolutions. Such systems feature cameras that capture images and software that analyzes selected video frames. The analysis involves detecting a ship and extracting features to identify it. This article proposes a technique to detect and categorize ships through image processing methods based on convolutional neural networks. Tests verifying the proposed method were carried out on a database containing 200 images of four classes of ships. The advantages and disadvantages of the proposed method are also discussed in light of the results. The system is designed to use multiple existing video streams to identify passing ships on inland waters, especially non-conventional vessels.
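
With only around 200 images over four classes, fine-tuning a pretrained backbone is one plausible reading of "image processing methods that use convolutional neural networks"; the sketch below is under that assumption, and the paper's actual network is not specified here.

```python
# Hypothetical sketch: fine-tune a pretrained CNN for four ship classes
# (torchvision >= 0.13 weights API).
import torch
import torch.nn as nn
from torchvision import models

def build_ship_classifier(n_classes=4):
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in net.parameters():          # freeze pretrained feature extractor
        p.requires_grad = False
    net.fc = nn.Linear(net.fc.in_features, n_classes)  # new trainable head
    return net

model = build_ship_classifier()
logits = model(torch.randn(1, 3, 224, 224))  # one RGB video frame
```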
Open Access Article
Vision-Based Novelty Detection Using Deep Features and Evolved Novelty Filters for Specific Robotic Exploration and Inspection Tasks
Sensors 2019, 19(13), 2965; https://doi.org/10.3390/s19132965
Received: 8 May 2019 / Revised: 18 June 2019 / Accepted: 28 June 2019 / Published: 5 July 2019
Abstract
One of the essential abilities of animals is detecting novelties in their environment. From a computational point of view, novelty detection consists of finding data that differ in some respect from known data. In robotics, researchers have incorporated novelty modules into robots to support automatic exploration and inspection tasks, with vision being one of the preferred sensing modalities. However, problems such as illumination changes, occlusion, and scale variation arise, and novelty detectors vary in performance depending on the application scenario. In this work, we propose a visual novelty detection framework for specific exploration and inspection tasks based on evolved novelty detectors. The system uses deep features to represent the visual information captured by the robots and applies a global optimization technique to design novelty detectors for specific robotic applications. We verified the proposed system against well-established state-of-the-art methods in a challenging outdoor scenario covering typical computer vision problems such as illumination changes, occlusion, and geometric transformations. The proposed framework achieved high novelty-detection accuracy, with results competitive with or better than the baseline methods.
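
A minimal sketch of the two ingredients, deep features plus an evolved novelty filter: here the filter is reduced to a single distance threshold tuned by differential evolution (a global optimizer), an assumption standing in for the paper's actual detector design.

```python
# Sketch under assumptions: deep features come from any pretrained CNN;
# the "evolved" filter is one distance threshold found by a global optimizer.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.distance import cdist

def novelty_scores(known_feats, query_feats):
    # distance from each query to its closest already-seen feature vector
    return cdist(query_feats, known_feats).min(axis=1)

def evolve_threshold(known_feats, val_feats, val_is_novel):
    scores = novelty_scores(known_feats, val_feats)
    def neg_accuracy(theta):            # fitness for the evolutionary search
        return -np.mean((scores > theta[0]) == val_is_novel)
    res = differential_evolution(neg_accuracy,
                                 bounds=[(scores.min(), scores.max())])
    return res.x[0]                     # evolved novelty threshold
```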
Open Access Article
Image Thresholding Improves 3-Dimensional Convolutional Neural Network Diagnosis of Different Acute Brain Hemorrhages on Computed Tomography Scans
Sensors 2019, 19(9), 2167; https://doi.org/10.3390/s19092167
Received: 1 April 2019 / Revised: 1 May 2019 / Accepted: 4 May 2019 / Published: 10 May 2019
Abstract
Intracranial hemorrhage is a medical emergency that requires urgent diagnosis and immediate treatment to improve patient outcomes. Machine learning algorithms can perform medical image classification and assist clinicians in diagnosing radiological scans. In this paper, we apply 3-dimensional convolutional neural networks (3D CNN) to classify computed tomography (CT) brain scans into normal scans (N) and abnormal scans containing subarachnoid hemorrhage (SAH), intraparenchymal hemorrhage (IPH), acute subdural hemorrhage (ASDH), and brain polytrauma hemorrhage (BPH). The dataset consists of 399 volumetric CT brain images, representing approximately 12,000 images, from the National Neuroscience Institute, Singapore. We used a 3D CNN to perform both 2-class (normal versus a specific abnormal class) and 4-class (normal, SAH, IPH, ASDH) classification. We apply image thresholding at the pre-processing step, which improves 3D CNN classification accuracy and performance by accentuating the pixel intensities that contribute most to feature discrimination. For 2-class classification, the F1 scores for various pairs of medical diagnoses ranged from 0.706 to 0.902 without thresholding; with thresholding, they improved to between 0.919 and 0.952. Our results are comparable to, and in some cases exceed, results published in other work applying 3D CNNs to CT or magnetic resonance imaging (MRI) brain scan classification. This work represents a direct application of a 3D CNN to a real hospital scenario involving medically emergent CT brain diagnosis.
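
The thresholding step can be illustrated with a Hounsfield-unit window around the range where acute blood is conspicuous; the 40-90 HU window below is a common radiology convention, not the paper's reported values.

```python
# Sketch of the pre-processing idea with an assumed HU window.
import numpy as np

def threshold_ct(volume_hu, lo=40.0, hi=90.0):
    """volume_hu: 3D CT volume in Hounsfield units (depth, height, width)."""
    clipped = np.clip(volume_hu, lo, hi)      # suppress out-of-window voxels
    return (clipped - lo) / (hi - lo)         # normalize to [0, 1] for the 3D CNN
```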
Review


Open Access Review
Vision-Based Traffic Sign Detection and Recognition Systems: Current Trends and Challenges
Sensors 2019, 19(9), 2093; https://doi.org/10.3390/s19092093
Received: 2 April 2019 / Revised: 24 April 2019 / Accepted: 26 April 2019 / Published: 6 May 2019
Abstract
Automatic traffic sign detection and recognition (TSDR) is an important research area in the development of advanced driver assistance systems (ADAS). Vision-based TSDR has received substantial interest in the research community, motivated mainly by three tasks: detection, tracking, and classification. During the last decade, a substantial number of techniques have been reported for TSDR. This paper provides a comprehensive survey of traffic sign detection, tracking, and classification. The details of the algorithms and methods, and their specifications for detection, tracking, and classification, are investigated and summarized in tables along with the corresponding key references. A comparative study of each section evaluates the TSDR data, performance metrics, and their availability. Current issues and challenges of the existing technologies are illustrated, with brief suggestions and a discussion of future progress in driver assistance system research. This review will hopefully encourage further efforts toward the development of future vision-based TSDR systems.
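
As a concrete taste of the detection stage such surveys cover, here is a classic color segmentation pass for red-rimmed signs in OpenCV; the HSV bounds and area filter are illustrative and tuned per dataset in practice, and real systems combine color with shape cues and learned classifiers.

```python
# Illustrative sketch of one classic TSDR detection stage: HSV color
# segmentation proposing candidate regions for red-rimmed signs.
import cv2

def red_sign_candidates(bgr_frame, min_area=100):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # red wraps around hue 0, so two hue bands are combined
    m1 = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))
    m2 = cv2.inRange(hsv, (170, 70, 50), (180, 255, 255))
    mask = cv2.bitwise_or(m1, m2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area]
```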