
Advanced Computational Intelligence for Object Detection, Feature Extraction and Recognition in Smart Sensor Environments

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 April 2020) | Viewed by 137121

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editor


Assoc. Prof. Marcin Woźniak
Guest Editor
Faculty of Applied Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland
Interests: computational intelligence; neural networks; image processing; expert systems

Special Issue Information

Dear Colleagues,

Recent years have brought vast developments in methodologies for object detection, feature extraction and recognition, both in theory and in practice. When processing images, video or other multimedia, we need efficient solutions that perform fast and reliable processing. Computational intelligence is used in medical screening, where the detection of disease symptoms is carried out; in preventive monitoring, to detect suspicious behavior; in agricultural systems, to assist with plant growing and animal breeding; in transportation systems, to control incoming and outgoing traffic; in unmanned vehicles, to detect obstacles and avoid collisions; in optics and materials science, to detect surface damage; and so on. In many cases, purpose-developed techniques help us to recognize particular features. In the context of this innovative research on computational intelligence, it is my pleasure to invite you to contribute to this Special Issue, which presents an excellent opportunity for the dissemination of your recent results and for cooperation on further innovations.

Topics of interest:

  • Bio-inspired methods, deep learning, convolutional neural networks, hybrid architectures, etc.
  • Time series, fractional-order controllers, gradient field methods, surface reconstruction and other mathematical models for intelligent feature detection, extraction and recognition.
  • Embedded intelligent computer vision algorithms.
  • Human, nature, technology and various object activity recognition models.
  • Hyper-parameter learning and tuning, automatic calibration, hybrid and surrogate learning for computational intelligence in vision systems.
  • Intelligent video and image acquisition techniques.

Assoc. Prof. Marcin Woźniak
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (25 papers)


Editorial

Jump to: Research, Review

4 pages, 153 KiB  
Editorial
Advanced Computational Intelligence for Object Detection, Feature Extraction and Recognition in Smart Sensor Environments
by Marcin Woźniak
Sensors 2021, 21(1), 45; https://doi.org/10.3390/s21010045 - 24 Dec 2020
Cited by 3 | Viewed by 2260
Abstract
The recent years have seen a vast development in various methodologies for object detection and feature extraction and recognition, both in theory and in practice [...]

Research

Jump to: Editorial, Review

18 pages, 4339 KiB  
Article
Real-Time and Accurate Drone Detection in a Video with a Static Background
by Ulzhalgas Seidaliyeva, Daryn Akhmetov, Lyazzat Ilipbayeva and Eric T. Matson
Sensors 2020, 20(14), 3856; https://doi.org/10.3390/s20143856 - 10 Jul 2020
Cited by 100 | Viewed by 11638
Abstract
With the increasing number of drones, the danger of their illegal use has become relevant. This has necessitated the creation of automatic drone protection systems. One of the important tasks solved by these systems is the reliable detection of drones near guarded objects. This problem can be solved using various methods. From the point of view of the price–quality ratio, the use of video cameras for drone detection is of great interest. However, drone detection using visual information is hampered by the large similarity of drones to other objects, such as birds or airplanes. In addition, drones can reach very high speeds, so detection should be done in real time. This paper addresses the problem of real-time drone detection with high accuracy. We divided the drone detection task into two separate tasks: the detection of moving objects and the classification of the detected object as a drone, bird, or background. Moving object detection is based on background subtraction, while classification is performed using a convolutional neural network (CNN). The experimental results showed that the proposed approach can achieve accuracy comparable to existing approaches at high processing speed. We also concluded that the main limitation of our detector is the dependence of its performance on the presence of a moving background.
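As a rough illustration of the paper's split into motion detection and classification, here is a minimal Python/OpenCV sketch: a MOG2 background subtractor proposes moving regions, and a caller-supplied CNN classifier labels each crop. The classifier interface, crop size, and class order are assumptions, not the authors' code.

```python
# Sketch of a two-stage detector: background subtraction proposes moving
# regions, a CNN classifies each crop as drone / bird / background.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
CLASSES = ["background", "bird", "drone"]  # assumed label order

def detect_drones(frame, classify):
    """classify: callable mapping a 64x64 BGR crop to class probabilities."""
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    detections = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 100:  # drop tiny blobs (noise)
            continue
        crop = cv2.resize(frame[y:y + h, x:x + w], (64, 64))
        if CLASSES[int(np.argmax(classify(crop)))] == "drone":
            detections.append((x, y, w, h))
    return detections
```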

17 pages, 4794 KiB  
Article
Rapid Multi-Sensor Feature Fusion Based on Non-Stationary Kernel JADE for the Small-Amplitude Hunting Monitoring of High-Speed Trains
by Jing Ning, Mingkuan Fang, Wei Ran, Chunjun Chen and Yanping Li
Sensors 2020, 20(12), 3457; https://doi.org/10.3390/s20123457 - 18 Jun 2020
Cited by 9 | Viewed by 2440
Abstract
Joint Approximate Diagonalization of Eigen-matrices (JADE) cannot deal with non-stationary data. Therefore, in this paper, a method called Non-stationary Kernel JADE (NKJADE) is proposed, which can extract non-stationary features and fuse multi-sensor features precisely and rapidly. In this method, the non-stationarity of the data is considered, and data from multiple sensors are used to fuse features efficiently. The method is compared with EEMD-SVD-LTSA and EEMD-JADE using the bearing fault data of CWRU, and its validity is verified. Considering that the vibration signals of high-speed trains are typically non-stationary, it is necessary to utilize a rapid feature fusion method to identify the evolutionary trends of hunting motions before the phenomenon is fully manifested. In this paper, the proposed method is applied to identify the evolutionary trend of hunting motions quickly and accurately. Results verify that the accuracy of this method is much higher than that of the EEMD-JADE and EEMD-SVD-LTSA methods. The method can also be used to rapidly fuse multi-sensor features of non-stationary data.
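The following sketch illustrates only the general shape of blind-source-separation-based multi-sensor feature fusion; scikit-learn's FastICA stands in for JADE, and the paper's kernel treatment of non-stationarity is omitted entirely.

```python
# Schematic multi-sensor feature fusion via blind source separation.
# FastICA is a stand-in for JADE; NKJADE's kernel handling of
# non-stationary data is not reproduced here.
import numpy as np
from sklearn.decomposition import FastICA

def fuse_features(signals, n_components=4):
    """signals: (n_sensors, n_samples) array of synchronized channels."""
    X = np.asarray(signals).T                   # samples x sensors
    sources = FastICA(n_components=n_components,
                      random_state=0).fit_transform(X)
    # Simple fused feature vector: per-component energy and excess kurtosis.
    energy = (sources ** 2).mean(axis=0)
    kurt = ((sources - sources.mean(0)) ** 4).mean(0) / sources.var(0) ** 2 - 3
    return np.concatenate([energy, kurt])

rng = np.random.default_rng(0)
features = fuse_features(rng.standard_normal((6, 2000)))  # 6 sensors
```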

17 pages, 4762 KiB  
Article
A Cognitive Method for Automatically Retrieving Complex Information on a Large Scale
by Yongyue Wang, Beitong Yao, Tianbo Wang, Chunhe Xia and Xianghui Zhao
Sensors 2020, 20(11), 3057; https://doi.org/10.3390/s20113057 - 28 May 2020
Cited by 1 | Viewed by 2326
Abstract
Modern retrieval systems tend to deteriorate because of their large output of useless and even misleading information, especially for complex search requests on a large scale. Complex information retrieval (IR) tasks requiring multi-hop reasoning need to fuse multiple scattered pieces of text across two or more documents. However, there are two challenges for multi-hop retrieval. Specifically, the first challenge is that since some important supporting facts have little lexical or semantic relationship with the retrieval query, the retriever often omits them; the second challenge is that once a retriever chooses misinformation related to the query as the entities of cognitive graphs, the retriever will fail. In this study, in order to improve the performance of retrievers in complex tasks, an intelligent sensor technique was proposed based on a sub-scope with cognitive reasoning (2SCR-IR), a novel method of retrieving reasoning paths over the cognitive graph to provide users with verified multi-hop reasoning chains. Inspired by users' process of step-by-step searching online, 2SCR-IR includes a dynamic fusion layer that starts from the entities mentioned in the given query, explores the cognitive graph dynamically built from the query and contexts, gradually finds relevant supporting entities mentioned in the given documents, and verifies the rationality of the retrieved facts. Our experimental results show that 2SCR-IR achieves competitive results on the HotpotQA full wiki and distractor settings, and outperforms the previous state-of-the-art methods by an absolute gain of more than two points on the full wiki setting.
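A toy sketch of the underlying multi-hop idea, not the authors' learned model: starting from query entities, expand a graph hop by hop and keep only hops that a linking passage can verify. All data structures here are hypothetical.

```python
# Toy multi-hop retrieval by iterative graph expansion with a crude
# verification step; 2SCR-IR replaces this heuristic check with learned
# scoring over a dynamically built cognitive graph.
def multi_hop_retrieve(query_entities, graph, passages, hops=2):
    """graph: dict entity -> set of linked entities.
    passages: dict entity -> passage text associated with that entity."""
    frontier, visited, chains = set(query_entities), set(query_entities), []
    for _ in range(hops):
        next_frontier = set()
        for entity in frontier:
            for neighbor in graph.get(entity, ()):
                if neighbor in visited:
                    continue
                visited.add(neighbor)
                next_frontier.add(neighbor)
                # Keep the hop only if the linking passage actually
                # mentions the neighbor (a stand-in rationality check).
                text = passages.get(entity, "")
                if neighbor.lower() in text.lower():
                    chains.append((entity, neighbor, text))
        frontier = next_frontier
    return chains
```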

19 pages, 10464 KiB  
Article
WatchPose: A View-Aware Approach for Camera Pose Data Collection in Industrial Environments
by Cong Yang, Gilles Simon, John See, Marie-Odile Berger and Wenyong Wang
Sensors 2020, 20(11), 3045; https://doi.org/10.3390/s20113045 - 27 May 2020
Cited by 4 | Viewed by 2611
Abstract
Collecting correlated scene images and camera poses is an essential step towards learning absolute camera pose regression models. While the acquisition of such data in living environments is relatively easy by following regular roads and paths, it is still a challenging task in constricted industrial environments. This is because industrial objects have varied sizes and inspections are usually carried out with non-constant motions. As a result, regression models are more sensitive to scene images with respect to viewpoints and distances. Motivated by this, we present a simple but efficient camera pose data collection method, WatchPose, to improve the generalization and robustness of camera pose regression models. Specifically, WatchPose tracks nested markers and visualizes viewpoints in an Augmented Reality (AR)-based manner to properly guide users to collect training data from broader camera–object distances and more diverse views around the objects. Experiments show that WatchPose can effectively improve the accuracy of existing camera pose regression models compared to the traditional data acquisition method. We also introduce a new dataset, Industrial10, to encourage the community to adapt camera pose regression methods for more complex environments.

22 pages, 5372 KiB  
Article
RFI Artefacts Detection in Sentinel-1 Level-1 SLC Data Based On Image Processing Techniques
by Agnieszka Chojka, Piotr Artiemjew and Jacek Rapiński
Sensors 2020, 20(10), 2919; https://doi.org/10.3390/s20102919 - 21 May 2020
Cited by 15 | Viewed by 3364
Abstract
Interferometric Synthetic Aperture Radar (InSAR) data are often contaminated by Radio-Frequency Interference (RFI) artefacts that make processing them more challenging. Therefore, easy-to-implement techniques for artefact recognition have the potential to support the automatic Permanent Scatterers InSAR (PSInSAR) processing workflow, during which faulty input data can lead to misinterpretation of the final outcomes. To address this issue, an efficient methodology was developed to mark images with RFI artefacts and, as a consequence, remove them from the stack of Synthetic Aperture Radar (SAR) images required in the PSInSAR processing workflow to calculate ground displacements. The techniques presented in this paper for RFI detection are based on image processing methods, with feature extraction involving pixel convolution, thresholding and nearest-neighbor structure filtering. As the reference classifier, a convolutional neural network was used.
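A minimal sketch of the classical image-processing route described here: convolve with an edge-like kernel, threshold the response, and flag images with too many strong-response pixels. The kernel and cutoffs are illustrative assumptions, not the paper's parameters.

```python
# Flag a SAR amplitude image as RFI-suspect via pixel convolution and
# thresholding; streak-like interference yields many strong responses.
import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[-1, -1, -1],
                   [ 2,  2,  2],
                   [-1, -1, -1]], dtype=float)  # responds to horizontal streaks

def has_rfi(amplitude, response_thr=4.0, pixel_ratio=0.01):
    """amplitude: 2D array of roughly normalized SAR amplitude values."""
    response = np.abs(convolve2d(amplitude, KERNEL, mode="same", boundary="symm"))
    flagged = response > response_thr
    return flagged.mean() > pixel_ratio  # True -> drop image from the stack
```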

25 pages, 1624 KiB  
Article
Training Data Extraction and Object Detection in Surveillance Scenario
by Artur Wilkowski, Maciej Stefańczyk and Włodzimierz Kasprzak
Sensors 2020, 20(9), 2689; https://doi.org/10.3390/s20092689 - 08 May 2020
Cited by 9 | Viewed by 2798
Abstract
Police and various security services use video analysis for securing public space, mass events, and when investigating criminal activity. Due to the huge amount of data supplied to surveillance systems, some automatic data processing is a necessity. In one typical scenario, an operator marks an object in an image frame and searches for all occurrences of the object in other frames or even image sequences. This problem is hard in general. Algorithms supporting this scenario must reconcile several seemingly contradictory factors: training and detection speed, detection reliability, and learning from small data sets. In the system proposed here, we use a two-stage detector. The first, region proposal stage is based on a Cascade Classifier, while the second, classification stage is based on either Support Vector Machines (SVMs) or Convolutional Neural Networks (CNNs). The proposed configuration ensures both speed and detection reliability. In addition, an object tracking and background–foreground separation algorithm is used, supported by the GrabCut algorithm and a sample synthesis procedure, in order to collect rich training data for the detector. Experiments show that the system is effective, useful, and applicable to practical surveillance tasks.
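A minimal sketch of the two-stage idea, assuming a trained cascade XML file and a trained SVM over HOG features; both are placeholders, not the authors' artifacts.

```python
# Stage 1: OpenCV cascade proposes regions. Stage 2: a linear SVM over
# HOG features accepts or rejects each proposal.
import cv2
import numpy as np
from sklearn.svm import LinearSVC

hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def hog_features(gray_crop):
    """gray_crop: uint8 grayscale image patch."""
    return hog.compute(cv2.resize(gray_crop, (64, 64))).ravel()

def detect(gray, cascade_path, svm: LinearSVC):
    cascade = cv2.CascadeClassifier(cascade_path)   # stage 1: proposals
    proposals = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    accepted = []
    for (x, y, w, h) in proposals:
        f = hog_features(gray[y:y + h, x:x + w])
        if svm.predict(f.reshape(1, -1))[0] == 1:   # stage 2: verification
            accepted.append((x, y, w, h))
    return accepted
```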

14 pages, 6625 KiB  
Article
Multi-View Visual Question Answering with Active Viewpoint Selection
by Yue Qiu, Yutaka Satoh, Ryota Suzuki, Kenji Iwata and Hirokatsu Kataoka
Sensors 2020, 20(8), 2281; https://doi.org/10.3390/s20082281 - 17 Apr 2020
Cited by 7 | Viewed by 4387
Abstract
This paper proposes a framework that observes a scene iteratively to answer a given question about the scene. Conventional visual question answering (VQA) methods are designed to answer given questions based on single-view images. However, in real-world applications such as human–robot interaction (HRI), in which camera angles and occluded scenes must be considered, answering questions based on single-view images might be difficult. Since HRI applications make it possible to observe a scene from multiple viewpoints, it is reasonable to discuss the VQA task in multi-view settings. In addition, because it is usually challenging to observe a scene from arbitrary viewpoints, we designed a framework that actively observes a scene until the information necessary to answer a given question is obtained. The proposed framework achieves comparable performance to a state-of-the-art method in question answering and simultaneously decreases the number of required observation viewpoints by a significant margin. Additionally, we found that our framework plausibly learned to choose better viewpoints for answering questions, lowering the required number of camera movements. Moreover, we built a multi-view VQA dataset based on real images. The proposed framework shows high accuracy (94.01%) on the unseen real image dataset.

13 pages, 3406 KiB  
Article
Liver Tumor Segmentation in CT Scans Using Modified SegNet
by Sultan Almotairi, Ghada Kareem, Mohamed Aouf, Badr Almutairi and Mohammed A.-M. Salem
Sensors 2020, 20(5), 1516; https://doi.org/10.3390/s20051516 - 10 Mar 2020
Cited by 98 | Viewed by 11709
Abstract
The main cause of cancer-related death worldwide is hepatic cancer. Early detection of hepatic cancer using computed tomography (CT) could prevent millions of patient deaths every year. However, reading hundreds or even tens of those CT scans is an enormous burden for radiologists. Therefore, there is an immediate need to read, detect, and evaluate CT scans automatically, quickly, and accurately. However, liver segmentation and extraction from CT scans is a bottleneck for any such system and is still a challenging problem. In this work, a deep learning-based technique that was originally proposed for semantic pixel-wise classification of road scenes is adopted and modified to fit liver CT segmentation and classification. The architecture of the deep convolutional encoder–decoder is named SegNet and consists of a hierarchical correspondence of encoder–decoder layers. The proposed architecture was tested on a standard dataset of liver CT scans and achieved tumor accuracy of up to 99.9% in the training phase.
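For orientation, a miniature encoder–decoder in the spirit of SegNet, sketched in PyTorch. Real SegNet reuses max-pooling indices in the decoder; this sketch approximates that with transposed convolutions, and the channel sizes are assumptions.

```python
# Minimal conv/pool encoder and upsampling decoder producing per-pixel
# class scores for a single-channel CT slice.
import torch
import torch.nn as nn

class MiniSegNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(32, 32)
        self.up2 = nn.ConvTranspose2d(32, 32, 2, stride=2)
        self.head = nn.Conv2d(32, n_classes, 1)    # per-pixel class scores

    def forward(self, x):                           # x: (B, 1, H, W) CT slice
        x = self.pool(self.enc1(x))
        x = self.pool(self.enc2(x))
        x = self.dec1(self.up1(x))
        return self.head(self.up2(x))

logits = MiniSegNet()(torch.randn(1, 1, 128, 128))  # -> (1, 2, 128, 128)
```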

24 pages, 11451 KiB  
Article
Active Player Detection in Handball Scenes Based on Activity Measures
by Miran Pobar and Marina Ivasic-Kos
Sensors 2020, 20(5), 1475; https://doi.org/10.3390/s20051475 - 08 Mar 2020
Cited by 17 | Viewed by 4693
Abstract
In team sports training scenes, it is common to have many players on the court, each with their own ball, performing different actions. Our goal is to detect all players on the handball court and determine the most active player, i.e., the one performing the given handball technique. This is a very challenging task for which, apart from an accurate object detector able to deal with complex cluttered scenes, additional information is needed to determine the active player. We propose an active player detection method that combines the YOLO object detector, activity measures, and tracking methods to detect and track active players over time. Different ways of computing player activity were considered, and three activity measures are proposed based on optical flow, spatiotemporal interest points, and convolutional neural networks. For tracking, we consider the use of the Hungarian assignment algorithm and the more complex Deep SORT tracker, which uses additional visual appearance features to assist the assignment process. We also propose an evaluation measure to assess the performance of the active player detection method. The method is successfully tested on a custom handball video dataset acquired in the wild and on basketball video sequences. The results are discussed, and some typical cases and issues are shown.
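A minimal sketch of one plausible activity measure and the assignment step: mean optical-flow magnitude inside each player box ranks activity, and the Hungarian algorithm links detections across frames by box-center distance. All parameters are assumptions.

```python
# Optical-flow activity per detection, plus Hungarian frame-to-frame linking.
import cv2
import numpy as np
from scipy.optimize import linear_sum_assignment

def activity_scores(prev_gray, gray, boxes):
    """boxes: list of (x, y, w, h) player detections in the current frame."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)              # per-pixel motion magnitude
    return [mag[y:y + h, x:x + w].mean() for (x, y, w, h) in boxes]

def match(prev_boxes, boxes):
    centers = lambda bs: np.array([[x + w / 2, y + h / 2] for x, y, w, h in bs])
    cost = np.linalg.norm(centers(prev_boxes)[:, None] - centers(boxes)[None],
                          axis=2)
    rows, cols = linear_sum_assignment(cost)        # Hungarian assignment
    return list(zip(rows, cols))                    # (prev index, current index)
```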

16 pages, 5965 KiB  
Article
A Deep-Learning-based 3D Defect Quantitative Inspection System in CC Products Surface
by Liming Zhao, Fangfang Li, Yi Zhang, Xiaodong Xu, Hong Xiao and Yang Feng
Sensors 2020, 20(4), 980; https://doi.org/10.3390/s20040980 - 12 Feb 2020
Cited by 15 | Viewed by 4156
Abstract
To make an intelligent 3D quantitative inspection strategy for surface regions of interest (ROIs) a reality in the continuous casting (CC) production line, an improved 3D laser image scanning system (3D-LDS) was established based on binocular imaging and deep-learning techniques. In 3D-LDS, firstly, to meet the requirements of the industrial application, the CCD laser image scanning method was optimized in high-temperature experiments; secondly, we proposed a novel region proposal method based on 3D ROI initial depth location to effectively suppress redundant candidate bounding boxes generated by pseudo-defects in a real-time inspection process; thirdly, a novel two-step defect inspection strategy was presented by devising a fused deep CNN model that combines fully connected networks (for defect classification/recognition) and fully convolutional networks (for defect delineation). The dichotomous inspection method of 3D-LDS, with its separate defect classification and delineation processes, is helpful in understanding and addressing the challenges of defect inspection on CC product surfaces. The applicability of the presented methods is mainly tied to surface quality inspection for slab, strip and billet products.

17 pages, 4495 KiB  
Article
Automatic Fabric Defect Detection Using Cascaded Mixed Feature Pyramid with Guided Localization
by You Wu, Xiaodong Zhang and Fengzhou Fang
Sensors 2020, 20(3), 871; https://doi.org/10.3390/s20030871 - 06 Feb 2020
Cited by 23 | Viewed by 4029
Abstract
Generic object detection algorithms for natural images have been proven to have excellent performance. In this paper, fabric defect detection on optical image datasets is systematically studied. In contrast to generic datasets, defect images are multi-scale, noise-filled, and blurred, and visual perception is also sensitive to back-light intensity. Large-scale fabric defect datasets are collected, selected, and employed to fulfill the requirements of detection in industrial practice and to address these imbalance issues. An improved two-stage defect detector is constructed to achieve better generalization. In stage one, stacked feature pyramid networks aggregate cross-scale defect patterns over an interpolating mixed depth-wise block. By sharing feature maps, center-ness and shape branches merge cascaded modules with deformable convolution to filter and refine the proposed guided anchors. In stage two, after balanced sampling, the proposals are down-sampled by position-sensitive region-of-interest pooling to characterize interactions among fabric defect images. The experiments show that the end-to-end architecture improves the performance of region-based object detectors on occluded defects compared with current detectors.

15 pages, 5221 KiB  
Article
Hand Gesture Recognition Using Compact CNN via Surface Electromyography Signals
by Lin Chen, Jianting Fu, Yuheng Wu, Haochen Li and Bin Zheng
Sensors 2020, 20(3), 672; https://doi.org/10.3390/s20030672 - 26 Jan 2020
Cited by 132 | Viewed by 10009
Abstract
By training a deep neural network model, the hidden features in Surface Electromyography (sEMG) signals can be extracted, and human motion intention can be predicted by analyzing sEMG. However, the models recently proposed by researchers often have a large number of parameters. Therefore, we designed a compact Convolutional Neural Network (CNN) model, which not only improves the classification accuracy but also reduces the number of parameters in the model. Our proposed model was validated on the Ninapro DB5 dataset and the Myo dataset, and achieved good classification accuracy for gesture recognition.
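A compact 1D CNN of the kind described, sketched in PyTorch; the channel count, window length, and number of gesture classes are assumptions, not the paper's exact architecture.

```python
# Few convolution blocks over multi-channel sEMG windows; global average
# pooling keeps the parameter count small before a linear classifier head.
import torch
import torch.nn as nn

class CompactEMGNet(nn.Module):
    def __init__(self, channels=8, n_gestures=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))         # global pooling: few parameters
        self.head = nn.Linear(64, n_gestures)

    def forward(self, x):                     # x: (B, channels, samples)
        return self.head(self.features(x).squeeze(-1))

out = CompactEMGNet()(torch.randn(4, 8, 200))  # 200-sample sEMG windows
```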

13 pages, 5164 KiB  
Article
Ship Type Classification by Convolutional Neural Networks with Auditory-Like Mechanisms
by Sheng Shen, Honghui Yang, Xiaohui Yao, Junhao Li, Guanghui Xu and Meiping Sheng
Sensors 2020, 20(1), 253; https://doi.org/10.3390/s20010253 - 01 Jan 2020
Cited by 31 | Viewed by 3921
Abstract
Ship type classification with radiated noise helps monitor shipping noise around the hydrophone deployment site. This paper introduces a convolutional neural network with several auditory-like mechanisms for ship type classification. The proposed model mainly includes a cochlea model and an auditory center model. In the cochlea model, acoustic signal decomposition at the basilar membrane is implemented by a time convolutional layer with auditory filters and dilated convolutions, and the transformation of neural patterns at the hair cells is modeled by a time-frequency conversion layer to extract auditory features. In the auditory center model, auditory features are first selectively emphasized in a supervised manner; then, spectro-temporal patterns are extracted by a deep architecture with multistage auditory mechanisms. The whole model is optimized with an objective function of ship type classification to form the plasticity of the auditory system. Compared with an auditory-inspired convolutional neural network, the contributions include improvements in the dilated convolutions, deep architecture and target layer. The proposed model can extract auditory features from a raw hydrophone signal and identify ship types under different working conditions, achieving a classification accuracy of 87.2% on four ship types and ocean background noise.
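A sketch of what a cochlea-like front end could look like in PyTorch: a bank of 1D convolutions with increasing dilation over the raw waveform, followed by a log-magnitude conversion. Filter counts and kernel sizes are assumptions, not the paper's configuration.

```python
# Bank of dilated 1D convolutions over the raw hydrophone waveform; the
# log-magnitude output plays the role of a learned time-frequency map.
import torch
import torch.nn as nn

class AuditoryFrontEnd(nn.Module):
    def __init__(self, n_filters=32):
        super().__init__()
        self.bank = nn.ModuleList([
            nn.Conv1d(1, n_filters, kernel_size=64, dilation=d, padding=32 * d)
            for d in (1, 2, 4)])             # dilation widens the receptive field

    def forward(self, wave):                  # wave: (B, 1, samples)
        outs = [torch.log1p(b(wave).abs()) for b in self.bank]
        L = min(o.shape[-1] for o in outs)    # align lengths before stacking
        return torch.cat([o[..., :L] for o in outs], dim=1)

features = AuditoryFrontEnd()(torch.randn(2, 1, 16000))  # 1 s at 16 kHz
```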

17 pages, 5157 KiB  
Article
Waterfall Atrous Spatial Pooling Architecture for Efficient Semantic Segmentation
by Bruno Artacho and Andreas Savakis
Sensors 2019, 19(24), 5361; https://doi.org/10.3390/s19245361 - 05 Dec 2019
Cited by 40 | Viewed by 5282
Abstract
We propose a new efficient architecture for semantic segmentation, based on a “Waterfall” Atrous Spatial Pooling architecture, that achieves a considerable accuracy increase while decreasing the number of network parameters and the memory footprint. The proposed Waterfall architecture leverages the efficiency of progressive filtering in the cascade architecture while maintaining multiscale fields-of-view comparable to spatial pyramid configurations. Additionally, our method does not rely on a postprocessing stage with Conditional Random Fields, which further reduces complexity and required training time. We demonstrate that the Waterfall approach with a ResNet backbone is a robust and efficient architecture for semantic segmentation, obtaining state-of-the-art results with a significant reduction in the number of parameters on the Pascal VOC and Cityscapes datasets.
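The cascade-with-collection idea can be sketched in a few lines of PyTorch; the dilation rates and channel widths below are assumptions, not the paper's configuration.

```python
# "Waterfall" atrous pooling: dilated convolutions run in sequence (each
# stage feeds the next), but every stage's output is also collected and
# concatenated, unlike a purely parallel spatial pyramid.
import torch
import torch.nn as nn

class WaterfallASP(nn.Module):
    def __init__(self, cin=256, cmid=64, rates=(6, 12, 18)):
        super().__init__()
        self.stages = nn.ModuleList()
        c = cin
        for r in rates:
            self.stages.append(nn.Sequential(
                nn.Conv2d(c, cmid, 3, padding=r, dilation=r),
                nn.ReLU(inplace=True)))
            c = cmid                          # cascade: next stage consumes this
        self.project = nn.Conv2d(cmid * len(rates), cin, 1)

    def forward(self, x):
        collected, out = [], x
        for stage in self.stages:
            out = stage(out)                  # waterfall: sequential flow
            collected.append(out)             # ...but keep each branch
        return self.project(torch.cat(collected, dim=1))

y = WaterfallASP()(torch.randn(1, 256, 32, 32))  # shape preserved
```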

12 pages, 1170 KiB  
Article
Vehicular Traffic Congestion Classification by Visual Features and Deep Learning Approaches: A Comparison
by Donato Impedovo, Fabrizio Balducci, Vincenzo Dentamaro and Giuseppe Pirlo
Sensors 2019, 19(23), 5213; https://doi.org/10.3390/s19235213 - 28 Nov 2019
Cited by 37 | Viewed by 4968
Abstract
Automatic traffic flow classification is useful for revealing road congestion and accidents. Nowadays, roads and highways are equipped with a huge number of surveillance cameras, which can be used for real-time vehicle identification and thus for traffic flow estimation. This research provides a comparative analysis of state-of-the-art object detectors, visual features, and classification models useful for implementing traffic state estimation. More specifically, three different object detectors are compared to identify vehicles. Four machine learning techniques are then employed to explore five visual features for classification purposes. These classic machine learning approaches are compared with deep learning techniques. This research demonstrates that, when methods and resources are properly implemented and tested, results are very encouraging for both families of methods, with the deep learning approach performing most accurately, reaching an accuracy of 99.9% for binary traffic state classification and 98.6% for multiclass classification.

17 pages, 10354 KiB  
Article
Real-Time Vehicle-Detection Method in Bird-View Unmanned-Aerial-Vehicle Imagery
by Seongkyun Han, Jisang Yoo and Soonchul Kwon
Sensors 2019, 19(18), 3958; https://doi.org/10.3390/s19183958 - 13 Sep 2019
Cited by 16 | Viewed by 5100
Abstract
Vehicle detection is an important research area that provides background information for a variety of unmanned-aerial-vehicle (UAV) applications. In this paper, we propose a vehicle-detection method using a convolutional-neural-network (CNN)-based object detector. We design our method, DRFBNet300, with a Deeper Receptive Field Block (DRFB) module that enhances the expressiveness of feature maps to detect small objects in UAV imagery. We also propose the UAV-cars dataset, which captures the composition and angular distortion of vehicles in UAV imagery, to train our DRFBNet300. Lastly, we propose a Split Image Processing (SIP) method to improve the accuracy of the detection model. Our DRFBNet300 achieves 21 mAP at 45 FPS on the MS COCO metric, the highest score among lightweight single-stage methods running in real time. In addition, DRFBNet300, trained on the UAV-cars dataset, obtains the highest AP score at altitudes of 20–50 m, and the accuracy gain from applying the SIP method grows as the altitude increases. DRFBNet300 trained on the UAV-cars dataset with the SIP method operates at 33 FPS, enabling real-time vehicle detection.
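A minimal sketch of the Split Image Processing idea as described: tile the frame, detect per tile, and map boxes back to full-image coordinates. The tile grid and the detector interface are assumptions.

```python
# Run a detector on image tiles so small vehicles occupy more of each
# input, then shift boxes back into full-image coordinates.
import numpy as np

def split_image_detect(image, detect, rows=2, cols=2):
    """detect: callable mapping an image to [(x, y, w, h, score), ...]."""
    H, W = image.shape[:2]
    th, tw = H // rows, W // cols
    boxes = []
    for r in range(rows):
        for c in range(cols):
            tile = image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for (x, y, w, h, s) in detect(tile):
                boxes.append((x + c * tw, y + r * th, w, h, s))  # global coords
    return boxes  # in practice, apply NMS to merge tile-border duplicates
```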

17 pages, 4823 KiB  
Article
Automatic Identification of Tool Wear Based on Convolutional Neural Network in Face Milling Process
by Xuefeng Wu, Yahui Liu, Xianliang Zhou and Aolei Mou
Sensors 2019, 19(18), 3817; https://doi.org/10.3390/s19183817 - 04 Sep 2019
Cited by 89 | Viewed by 4740
Abstract
Monitoring tool wear in the machining process is important for predicting tool life and reducing equipment downtime and tool costs. Traditional visual methods require expert experience and human resources to obtain accurate tool wear information. With the development of charge-coupled device (CCD) image sensors and deep learning algorithms, it has become possible to use a convolutional neural network (CNN) model to automatically identify the wear types of high-temperature alloy tools in the face milling process. In this paper, a CNN model is developed based on our image dataset. A convolutional auto-encoder (CAE) is used to pre-train the network model, and the model parameters are fine-tuned by the back propagation (BP) algorithm combined with stochastic gradient descent (SGD). The established ToolWearnet network model can identify tool wear types, and the experimental results show that its average recognition precision rate reaches 96.20%. At the same time, the automatic detection algorithm for the tool wear value is improved by incorporating the identified tool wear types. In order to verify the feasibility of the method, an experimental system was built on the machine tool. By matching the frame rate of the industrial camera to the machine tool spindle speed, wear images of all the inserts can be obtained in the machining gap. Compared with manual measurement using a high-precision digital optical microscope, the automatic detection method achieves a mean absolute percentage error of 4.76%, which effectively verifies the effectiveness and practicality of the method.

16 pages, 9924 KiB  
Article
HEMIGEN: Human Embryo Image Generator Based on Generative Adversarial Networks
by Darius Dirvanauskas, Rytis Maskeliūnas, Vidas Raudonis, Robertas Damaševičius and Rafal Scherer
Sensors 2019, 19(16), 3578; https://doi.org/10.3390/s19163578 - 16 Aug 2019
Cited by 35 | Viewed by 5191
Abstract
We propose a method for generating synthetic images of human embryo cells that could later be used for classification, analysis, and training, thus resulting in the creation of new synthetic image datasets for research areas lacking real-world data. Our focus was not only to generate a generic image of a cell, but also to make sure that it has all the necessary attributes of a real cell image, providing a fully realistic synthetic version. We use human embryo images obtained during cell development processes to train a deep neural network (DNN). The proposed algorithm uses a generative adversarial network (GAN) to generate one-, two-, and four-cell stage images. We achieved a misclassification rate of 12.3% for the generated images, while expert evaluation showed a true recognition rate (TRR) of 80.00% (for four-cell images), 86.8% (for two-cell images), and 96.2% (for one-cell images). Texture-based comparison using Haralick features showed that there are no statistically significant (Student's t-test, p < 0.01) differences between the real and synthetic embryo images, except for the sum-of-variance feature (for one-cell and four-cell images) and the variance and sum-of-average features (for two-cell images). The obtained synthetic images can later be adapted to facilitate the development, training, and evaluation of new algorithms for embryo image processing tasks.
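For orientation, a minimal DCGAN-style generator in PyTorch of the kind used for synthetic cell images; the paper's actual architecture and image resolution will differ.

```python
# DCGAN-style generator: a latent vector is upsampled through transposed
# convolutions into a single-channel synthetic image.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        def up(cin, cout):
            return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1),
                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0),
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            up(256, 128), up(128, 64),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh())  # grayscale output

    def forward(self, z):                      # z: (B, z_dim, 1, 1)
        return self.net(z)

fake = Generator()(torch.randn(8, 100, 1, 1))  # -> (8, 1, 32, 32)
```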

11 pages, 1924 KiB  
Article
Multi-Scale Vehicle Detection for Foreground-Background Class Imbalance with Improved YOLOv2
by Zhongyuan Wu, Jun Sang, Qian Zhang, Hong Xiang, Bin Cai and Xiaofeng Xia
Sensors 2019, 19(15), 3336; https://doi.org/10.3390/s19153336 - 30 Jul 2019
Cited by 20 | Viewed by 4471
Abstract
Vehicle detection is a challenging task in computer vision. In recent years, numerous vehicle detection methods have been proposed. Since the vehicles in a scene may have varying sizes, and the vehicles and the background may have imbalanced sizes, the performance of vehicle detection is affected. To obtain better performance, a multi-scale vehicle detection method is proposed in this paper by improving YOLOv2. The main contributions of this paper include: (1) a new anchor box generation method, Rk-means++, proposed to enhance adaptation to the varying sizes of vehicles and achieve multi-scale detection; (2) Focal Loss, introduced into YOLOv2 for vehicle detection to reduce the negative influence on training caused by the imbalance between vehicles and background. The experimental results on the Beijing Institute of Technology (BIT)-Vehicle public dataset demonstrate that the proposed method obtains better performance in vehicle localization and recognition than other existing methods.
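Focal Loss itself is standard (Lin et al.); a short PyTorch sketch of its binary form, with the commonly used α and γ defaults rather than this paper's settings:

```python
# Binary focal loss: cross-entropy scaled by (1 - p_t)^gamma, which
# down-weights easy, well-classified examples (e.g., abundant background).
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """logits, targets: tensors of the same shape; targets in {0, 1}."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)          # prob. of true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

# Example: objectness logits for 16 cells x 5 anchors vs. binary targets.
loss = focal_loss(torch.randn(16, 5), torch.randint(0, 2, (16, 5)).float())
```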

18 pages, 7817 KiB  
Article
Citrus Pests and Diseases Recognition Model Using Weakly Dense Connected Convolution Network
by Shuli Xing, Marely Lee and Keun-kwang Lee
Sensors 2019, 19(14), 3195; https://doi.org/10.3390/s19143195 - 19 Jul 2019
Cited by 63 | Viewed by 7492
Abstract
Pests and diseases can cause severe damage to citrus fruits. Farmers used to rely on experienced experts to recognize them, which is a time-consuming and costly process. With the popularity of image sensors and the development of computer vision technology, using convolutional neural network (CNN) models to identify pests and diseases has become a recent trend in the field of agriculture. However, many researchers apply models pre-trained on ImageNet to different recognition tasks without considering the scale of their own dataset, resulting in a waste of computational resources. In this paper, a simple but effective CNN model was developed based on our image dataset. The proposed network was designed from the perspective of parameter efficiency. To achieve this goal, the complexity of cross-channel operations was increased and the frequency of feature reuse was adapted to the network depth. Experimental results showed that Weakly DenseNet-16 achieved the highest classification accuracy with fewer parameters. Because this network is lightweight, it can be used on mobile devices.

17 pages, 2352 KiB  
Article
Automatic Classification Using Machine Learning for Non-Conventional Vessels on Inland Waters
by Marta Wlodarczyk-Sielicka and Dawid Polap
Sensors 2019, 19(14), 3051; https://doi.org/10.3390/s19143051 - 10 Jul 2019
Cited by 11 | Viewed by 2955
Abstract
The prevalent methods for monitoring ships are based on automatic identification and radar systems, which apply mainly to large vessels. Additional sensors that are used include video cameras with different resolutions. Such systems feature cameras that capture images and software that analyzes the selected video frames. The analysis involves the detection of a ship and the extraction of features to identify it. This article proposes a technique to detect and categorize ships through image processing methods that use convolutional neural networks. Tests to verify the proposed method were carried out on a database containing 200 images of four classes of ships. The advantages and disadvantages of implementing the proposed method are also discussed in light of the results. The system is designed to use multiple existing video streams to identify passing ships on inland waters, especially non-conventional vessels.

27 pages, 4826 KiB  
Article
Vision-Based Novelty Detection Using Deep Features and Evolved Novelty Filters for Specific Robotic Exploration and Inspection Tasks
by Marco Antonio Contreras-Cruz, Juan Pablo Ramirez-Paredes, Uriel Haile Hernandez-Belmonte and Victor Ayala-Ramirez
Sensors 2019, 19(13), 2965; https://doi.org/10.3390/s19132965 - 05 Jul 2019
Cited by 8 | Viewed by 4420
Abstract
One of the essential abilities in animals is to detect novelties within their environment. From the computational point of view, novelty detection consists of finding data that differ in some respect from the known data. In robotics, researchers have incorporated novelty modules into robots to develop automatic exploration and inspection tasks. The visual sensor is one of the preferred sensors for this task. However, problems such as illumination changes, occlusion, and scale variation arise, among others. Moreover, novelty detectors vary in performance depending on the specific application scenario. In this work, we propose a visual novelty detection framework for specific exploration and inspection tasks based on evolved novelty detectors. The system uses deep features to represent the visual information captured by the robots and applies a global optimization technique to design novelty detectors for specific robotics applications. We verified the performance of the proposed system against well-established state-of-the-art methods in a challenging scenario: an outdoor environment covering typical computer vision problems such as illumination changes, occlusion, and geometric transformations. The proposed framework achieved high novelty-detection accuracy, with competitive or even better results than the baseline methods.
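A minimal sketch of the core novelty loop, assuming deep features from a pretrained CNN: score novelty as distance to the nearest remembered feature. The paper evolves the detector's parameters per task with a global optimizer, which this sketch omits; the threshold here is a placeholder.

```python
# Nearest-neighbor novelty detector over deep feature vectors: an
# observation is novel if it is far (cosine distance) from everything
# the detector has remembered so far.
import numpy as np

class NearestNeighborNovelty:
    def __init__(self, threshold=0.5):
        self.memory, self.threshold = [], threshold

    def observe(self, feature):
        """feature: 1D deep-feature vector (e.g., from a pretrained CNN)."""
        f = feature / (np.linalg.norm(feature) + 1e-8)
        if not self.memory:
            self.memory.append(f)
            return True
        dists = [1.0 - float(f @ m) for m in self.memory]  # cosine distance
        novel = min(dists) > self.threshold
        if novel:
            self.memory.append(f)             # remember what we just saw
        return novel
```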

12 pages, 1067 KiB  
Article
Image Thresholding Improves 3-Dimensional Convolutional Neural Network Diagnosis of Different Acute Brain Hemorrhages on Computed Tomography Scans
by Justin Ker, Satya P. Singh, Yeqi Bai, Jai Rao, Tchoyoson Lim and Lipo Wang
Sensors 2019, 19(9), 2167; https://doi.org/10.3390/s19092167 - 10 May 2019
Cited by 144 | Viewed by 8924
Abstract
Intracranial hemorrhage is a medical emergency that requires urgent diagnosis and immediate treatment to improve patient outcomes. Machine learning algorithms can be used to perform medical image classification and assist clinicians in diagnosing radiological scans. In this paper, we apply 3-dimensional convolutional neural networks (3D CNN) to classify computed tomography (CT) brain scans into normal scans (N) and abnormal scans containing subarachnoid hemorrhage (SAH), intraparenchymal hemorrhage (IPH), acute subdural hemorrhage (ASDH) and brain polytrauma hemorrhage (BPH). The dataset used consists of 399 volumetric CT brain images, representing approximately 12,000 images, from the National Neuroscience Institute, Singapore. We used a 3D CNN to perform both 2-class (normal versus a specific abnormal class) and 4-class classification (between normal, SAH, IPH, and ASDH). We apply image thresholding at the pre-processing step, which improves 3D CNN classification accuracy and performance by accentuating the pixel intensities that contribute most to feature discrimination. For 2-class classification, the F1 scores for various pairs of medical diagnoses ranged from 0.706 to 0.902 without thresholding. With thresholding implemented, the F1 scores improved, ranging from 0.919 to 0.952. Our results are comparable to, and in some cases exceed, the results published in other work applying 3D CNNs to CT or magnetic resonance imaging (MRI) brain scan classification. This work represents a direct application of a 3D CNN to a real hospital scenario involving a medically emergent CT brain diagnosis.
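A minimal sketch of the thresholding step, assuming input in Hounsfield units; the window below is an illustrative choice (acute blood is roughly 50–100 HU on CT), not necessarily the paper's setting.

```python
# Clip CT intensities to a window that accentuates hyperdense blood
# before feeding slices/volumes to the 3D CNN.
import numpy as np

def threshold_ct(volume_hu, lo=40.0, hi=100.0):
    """volume_hu: 3D array in Hounsfield units -> values scaled to [0, 1]."""
    clipped = np.clip(volume_hu, lo, hi)
    return (clipped - lo) / (hi - lo)
```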

Review

Jump to: Editorial, Research

28 pages, 7298 KiB  
Review
Vision-Based Traffic Sign Detection and Recognition Systems: Current Trends and Challenges
by Safat B. Wali, Majid A. Abdullah, Mahammad A. Hannan, Aini Hussain, Salina A. Samad, Pin J. Ker and Muhamad Bin Mansor
Sensors 2019, 19(9), 2093; https://doi.org/10.3390/s19092093 - 06 May 2019
Cited by 110 | Viewed by 12024
Abstract
Automatic traffic sign detection and recognition (TSDR) is a very important research area in the development of advanced driver assistance systems (ADAS). Investigations into vision-based TSDR have received substantial interest in the research community, mainly motivated by three factors: detection, tracking and classification. During the last decade, a substantial number of techniques have been reported for TSDR. This paper provides a comprehensive survey of traffic sign detection, tracking and classification. The details of the algorithms and methods, and their specifications for detection, tracking and classification, are investigated and summarized in tables along with the corresponding key references. A comparative study of each section is provided to evaluate TSDR data, performance metrics and their availability. Current issues and challenges of the existing technologies are illustrated, with brief suggestions and a discussion of the progress of driver assistance system research in the future. This review will hopefully lead to increasing efforts towards the development of future vision-based TSDR systems.
