Special Issue "Intelligent Processing on Image and Optical Information"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Optics and Lasers".

Deadline for manuscript submissions: 31 December 2019.

Special Issue Editor

Prof. Dr. Seokwon Yeom
Guest Editor
School of Computer and Communication Engineering, Daegu University
Interests: intelligent image processing; IR and MMW image processing; three-dimensional information processing; video surveillance; pattern recognition

Special Issue Information

Dear Colleagues,

Intelligent image and optical information processing has contributed significantly to the recent epoch of artificial intelligence and smart cars. Information acquired by various imaging techniques is of tremendous value, and intelligent analysis is necessary to make the best use of it.

This special issue focuses on the vast range of intelligent processing of image and optical information acquired by various imaging methods. Images are commonly formed via visible light; three-dimensional information is acquired by multi-view imaging or digital holography; infrared, terahertz, and millimeter waves are good resources in non-visible environments. Synthetic aperture radar and radiographic or ultrasonic imaging serve military, industrial, and medical domains. The objectives of intelligent processing range from the refinement of raw data to the symbolic representation and visualization of the real world. This is achieved through unsupervised or supervised learning based on statistical and mathematical models or computational algorithms.

Intelligent processing of image and optical information is involved in a wide variety of research fields, such as video surveillance, biometric recognition, non-destructive testing, medical diagnosis, robotic sensing, compressed sensing, autonomous driving, three-dimensional scene reconstruction, and others. The latest technological developments will be shared through this special issue. We invite researchers and investigators to contribute their original research or review articles to this special issue.

Prof. Dr. Seokwon Yeom
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then proceeding to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Intelligent image processing 
  • Machine and robot vision
  • Optical information processing
  • IR, THz, MMW, SAR image analysis 
  • Bio-medical image analysis 
  • Three-dimensional information processing 
  • Image detection, recognition, and tracking 
  • Segmentation and feature extraction 
  • Image registration and fusion 
  • Image enhancement and restoration

Published Papers (10 papers)


Research

Open Access Article
Feature Extraction with Discrete Non-Separable Shearlet Transform and Its Application to Surface Inspection of Continuous Casting Slabs
Appl. Sci. 2019, 9(21), 4668; https://doi.org/10.3390/app9214668 - 01 Nov 2019
Abstract
A new feature extraction technique called DNST-GLCM-KSR (discrete non-separable shearlet transform-gray-level co-occurrence matrix-kernel spectral regression) is presented, based on the direction and texture information of surface defects of continuous casting slabs with complex backgrounds. The discrete non-separable shearlet transform (DNST) is a new multi-scale geometric analysis method that provides excellent localization properties and directional selectivity. The gray-level co-occurrence matrix (GLCM) is a texture feature extraction technology. We combine DNST features with GLCM features to characterize defects of the continuous casting slabs. Since the combined feature is high-dimensional and redundant, the kernel spectral regression (KSR) algorithm is used to remove redundancy. The resulting low-dimensional features and label data are input to a support vector machine (SVM) for classification. The proposed scheme was tested on samples collected from an industrial continuous casting slab production line, including cracks, scales, lighting variation, and slag marks. The test results show that the scheme improves the classification accuracy to 96.37%, which provides a new approach for surface defect recognition of continuous casting slabs.
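
As an illustration of the texture-plus-classifier portion of such a pipeline, the sketch below computes GLCM statistics for image patches and trains an SVM using scikit-image and scikit-learn. The DNST and KSR stages are omitted, and `patches` and `labels` are hypothetical placeholders, so this is only a minimal approximation of the scheme described above.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch, distances=(1, 2), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Gray-level co-occurrence statistics for one 8-bit grayscale patch."""
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# 'patches' (uint8 arrays) and 'labels' (defect classes) are placeholders.
X = np.array([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
```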

Open Access Article
Multi-Sensor Face Registration Based on Global and Local Structures
Appl. Sci. 2019, 9(21), 4623; https://doi.org/10.3390/app9214623 - 30 Oct 2019
Abstract
The work reported in this paper aims at utilizing the global geometrical relationship and local shape features to register multi-spectral images for fusion-based face recognition. We first propose a multi-spectral face image registration method based on both the global and local structures of feature point sets, combining the global geometrical relationship and local shape features in a new Student's t mixture probabilistic model framework. On the one hand, we use the inner-distance shape context as the local shape descriptor of the feature point sets. On the other hand, we formulate the feature point set registration of the multi-spectral face images as Student's t mixture probabilistic model estimation, and local shape descriptors are used to replace the mixing proportions of the prior Student's t mixture model. Furthermore, in order to improve the anti-interference performance of face recognition techniques, a guided filtering and gradient preserving image fusion strategy is used to fuse the registered multi-spectral face images. This allows the fused multi-spectral image to retain more apparent detail from the visible image and thermal radiation information from the infrared image. Subjective and objective registration experiments are conducted with manually selected landmarks and real multi-spectral face images. Qualitative and quantitative comparisons with state-of-the-art methods demonstrate the accuracy and robustness of the proposed method in solving the multi-spectral face image registration problem.
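
The fusion step can be illustrated with a common guided-filtering scheme: each registered image is split into a base layer and a detail layer, the bases are averaged, and the stronger detail is kept. This is a generic sketch using `cv2.ximgproc.guidedFilter` from opencv-contrib, not the authors' exact gradient-preserving formulation.

```python
import cv2
import numpy as np

def guided_fusion(vis, ir, radius=8, eps=1e-3):
    """Fuse registered visible and IR images (float32 in [0,1]) by
    splitting each into a guided-filter base layer and a detail layer."""
    base_v = cv2.ximgproc.guidedFilter(vis, vis, radius, eps)
    base_i = cv2.ximgproc.guidedFilter(ir, ir, radius, eps)
    detail_v, detail_i = vis - base_v, ir - base_i
    base = 0.5 * (base_v + base_i)                 # average smooth structure
    detail = np.where(np.abs(detail_v) >= np.abs(detail_i),
                      detail_v, detail_i)          # keep the stronger detail
    return np.clip(base + detail, 0.0, 1.0)
```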

Open Access Article
Determination of the Optimal State of Dough Fermentation in Bread Production by Using Optical Sensors and Deep Learning
Appl. Sci. 2019, 9(20), 4266; https://doi.org/10.3390/app9204266 - 11 Oct 2019
Cited by 1
Abstract
Dough fermentation plays an essential role in the bread production process, and its success is critical to producing high-quality products. In Germany, the number of stores per bakery chain has increased in recent years, as has the trend toward finishing bakery products locally at the stores. There is an unmet demand for skilled workers, which leads to an increasing number of untrained and inexperienced employees at the stores. This paper proposes a method for the automatic monitoring of the fermentation process based on optical techniques. By combining machine learning and superellipsoid model fitting, we have developed an instance segmentation and parameter estimation method for dough objects positioned inside a fermentation chamber. In our method, we measure the given topography at discrete points in time using a movable laser sensor system located at the back of the fermentation chamber. By applying the superellipsoid model fitting method, we estimated the volume of each object and achieved results with a deviation of approximately 10% on average. Thereby, the volume gradient is monitored continuously and represents the progress of the fermentation state. Exploratory tests show the reliability and potential of our method, which is particularly suitable for local stores but also for high-volume production in bakery plants.
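
The volume estimate at the heart of the method can be computed in closed form once a superellipsoid has been fitted. The sketch below uses the standard superquadric volume formula from the literature (e.g., Jaklic et al.); the semi-axes and shape exponents are assumed to come from a prior fitting step not shown here.

```python
from scipy.special import beta

def superellipsoid_volume(a, b, c, e1, e2):
    """Closed-form volume of a superellipsoid with semi-axes a, b, c and
    shape exponents e1 (north-south) and e2 (east-west)."""
    return 2.0 * a * b * c * e1 * e2 * beta(e1 / 2 + 1, e1) * beta(e2 / 2, e2 / 2)

# Sanity check: e1 = e2 = 1 reduces to an ellipsoid, V = 4/3 * pi * a * b * c.
print(superellipsoid_volume(1, 1, 1, 1, 1))  # ~4.18879
```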

Open Access Article
A Correction Method for Heat Wave Distortion in Digital Image Correlation Measurements Based on Background-Oriented Schlieren
Appl. Sci. 2019, 9(18), 3851; https://doi.org/10.3390/app9183851 - 13 Sep 2019
Abstract
Digital image correlation (DIC) is a displacement and strain measurement technique. It enables non-contact, full-field measurement and is widely used in the testing and research of the mechanical properties of materials at high temperatures. However, many factors affect measurement accuracy; in complex high-temperature environments, the impact of heat waves on DIC is the most significant. In order to correct the disturbance in DIC measurements caused by heat waves, this paper proposes a method based on the background-oriented schlieren (BOS) technique. The speckle pattern on the surface of a specimen in digital image correlation can be used as the background in the background-oriented schlieren technique. The BOS technique measures the distortion of the images caused by the heat flow field, and the specimen images taken through the heat waves can then be corrected using this distortion information. In addition, the characteristics of distortions due to heat waves are also studied in this paper. The experimental results verify that the proposed method can effectively eliminate heat wave disturbances in DIC measurements.
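
A minimal version of the BOS-based correction can be sketched as follows: the apparent displacement of the background pattern is estimated with dense optical flow and then inverted with a remap. Farneback flow stands in here for whatever displacement estimator the authors used, so treat this as an illustration of the principle only.

```python
import cv2
import numpy as np

def bos_correct(reference, distorted):
    """Estimate the apparent background displacement caused by the heat
    flow field (BOS principle) and warp the distorted image back.
    Inputs are 8-bit grayscale images of the same speckle background."""
    flow = cv2.calcOpticalFlowFarneback(reference, distorted, None,
                                        pyr_scale=0.5, levels=4, winsize=21,
                                        iterations=3, poly_n=7,
                                        poly_sigma=1.5, flags=0)
    h, w = reference.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    # remap samples distorted(x + flow) for each pixel x, undoing the shift.
    map_x = (gx + flow[..., 0]).astype(np.float32)
    map_y = (gy + flow[..., 1]).astype(np.float32)
    return cv2.remap(distorted, map_x, map_y, cv2.INTER_LINEAR)
```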

Open Access Article
Automatic Zebrafish Egg Phenotype Recognition from Bright-Field Microscopic Images Using Deep Convolutional Neural Network
Appl. Sci. 2019, 9(16), 3362; https://doi.org/10.3390/app9163362 - 15 Aug 2019
Abstract
Zebrafish eggs are widely used in biological experiments to study the environmental and genetic influences on embryo development. Due to the high throughput of microscopic imaging, automated analysis of zebrafish egg microscopic images is in high demand. However, machine learning algorithms for zebrafish egg image analysis suffer from small, imbalanced training datasets and subtle inter-class differences. In this study, we developed an automated zebrafish egg microscopic image analysis algorithm based on a deep convolutional neural network (CNN). To tackle the problem of insufficient training data, transfer learning and data augmentation strategies were used. We also adopted global average pooling to overcome the subtle phenotype differences between fertilized and unfertilized eggs. Five-fold cross-validation showed that the proposed method yielded a mean classification accuracy of 95.0% and a maximum accuracy of 98.8%. The network also demonstrated higher classification accuracy and better convergence than conventional CNN methods. This study extends deep learning to zebrafish egg phenotype classification and paves the way for automatic bright-field microscopic image analysis.
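
A sketch of the transfer-learning setup with global average pooling, in Keras, might look like the following; MobileNetV2 and the specific augmentation layers are assumptions for illustration, not the network reported in the paper.

```python
import tensorflow as tf

# ImageNet-pretrained backbone; the top is replaced by global average
# pooling and a binary head (fertilized vs. unfertilized).
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False  # transfer learning: freeze pretrained features

model = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),  # data augmentation
    tf.keras.layers.RandomRotation(0.2),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),               # replaces dense top
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```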

Open Access Article
Real-Time Automated Segmentation and Classification of Calcaneal Fractures in CT Images
Appl. Sci. 2019, 9(15), 3011; https://doi.org/10.3390/app9153011 - 26 Jul 2019
Abstract
Calcaneal fractures often occur because of accidents during exercise or other activities. In general, calcaneal fractures are still detected manually through CT image observation, which lacks precision. This paper proposes a computer-aided method for calcaneal fracture detection that enables faster and more detailed observation. First, the anatomical plane orientation of the tarsal bone in the input image is selected to determine the location of the calcaneus. Then, the fragments of the calcaneus are detected and marked by color segmentation. The Sanders system is used to classify fractures in transverse and coronal images into four types based on the number of fragments; in sagittal images, fractures are classified into three types based on the extent of the fracture area. The experimental results show that the proposed method achieves a high precision rate of 86% with a fast computational performance of 133 frames per second (fps) when analyzing the severity of injury to the calcaneus. The results on the test images are validated against the assessment and evaluation carried out by a physician on the reference datasets.
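
The fragment-count-to-Sanders-type step can be illustrated with connected-component analysis on a binary fracture mask; the area threshold and the direct count-to-type mapping below are simplifying assumptions, not the paper's exact rule.

```python
import cv2

def sanders_type(fracture_mask, min_area=50):
    """Map the number of articular fragments in a binary CT slice mask
    (uint8, 0/255) to a rough Sanders class, I-IV by fragment count."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(fracture_mask, connectivity=8)
    # Label 0 is the background; ignore specks below min_area pixels.
    fragments = sum(1 for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area)
    return min(max(fragments, 1), 4)  # clamp to Sanders I..IV
```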

Open Access Article
A Novel Extraction Method for Wildlife Monitoring Images with Wireless Multimedia Sensor Networks (WMSNs)
Appl. Sci. 2019, 9(11), 2276; https://doi.org/10.3390/app9112276 - 02 Jun 2019
Abstract
In remote areas, wireless multimedia sensor networks (WMSNs) have limited energy, and the processing of wildlife monitoring images always suffers from energy consumption limitations. Generally, only part of each wildlife image is valuable, so this issue can be mitigated by transmitting only the target area. Inspired by this transport strategy, we propose an image extraction method with low computational complexity, which can be adapted to extract the target area (i.e., the animal) and its background area according to the characteristics of the image pixels. Specifically, we first reconstruct a color space model via the CIELUV (LUV) color space framework to extract the color parameters. Next, a Hermite filter based on Hermite polynomials is utilized to extract texture features, which ensures the accuracy of the extraction of wildlife images. Then, an adaptive mean-shift algorithm is introduced to cluster the texture features and color space information, realizing the extraction of the foreground area in the monitoring image. To verify the performance of the algorithm, a demonstration on field-captured wildlife images is presented. Further, we conduct comparative experiments with normalized cuts (N-cuts), the segmentation by aggregating superpixels (SAS) algorithm, and the histogram contrast saliency detection (HCS) algorithm. The comparison shows that the proposed algorithm increased the average pixel accuracy by 11.25%, 5.46%, and 10.39%, respectively; improved the relative limit measurement accuracy by 1.83%, 5.28%, and 12.05%, respectively; and increased the mean intersection over union by 7.09%, 14.96%, and 19.14%, respectively.
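
A reduced version of the color-clustering stage (LUV conversion followed by mean shift, with the Hermite texture features omitted) could look like the sketch below; the working resolution, bandwidth quantile, and "largest cluster = background" rule are assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def segment_foreground(bgr):
    """Cluster per-pixel LUV color with mean shift; the dominant cluster
    is treated as background and everything else as the target area."""
    small = cv2.resize(bgr, (160, 120))            # keep clustering tractable
    luv = cv2.cvtColor(small, cv2.COLOR_BGR2LUV).reshape(-1, 3).astype(np.float64)
    bw = estimate_bandwidth(luv, quantile=0.1, n_samples=500)
    labels = MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(luv)
    counts = np.bincount(labels)
    mask = (labels != counts.argmax()).reshape(120, 160)  # non-dominant = target
    return mask.astype(np.uint8) * 255
```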

Open Access Article
A Texture Classification Approach Based on the Integrated Optimization for Parameters and Features of Gabor Filter via Hybrid Ant Lion Optimizer
Appl. Sci. 2019, 9(11), 2173; https://doi.org/10.3390/app9112173 - 28 May 2019
Cited by 1
Abstract
Texture classification is an important topic for many applications in machine vision and image analysis, and the Gabor filter is considered one of the most efficient tools for analyzing texture features at multiple orientations and scales. However, the parameter settings of each filter are crucial for obtaining accurate results, and they may not be adaptable to different kinds of texture features. Moreover, the process of texture feature extraction includes redundant information that contributes little to the classification. In this paper, a new texture classification technique is detailed. The approach is based on the integrated optimization of the parameters and features of the Gabor filter: obtaining satisfactory parameters and the best feature subset is viewed as a combinatorial optimization problem that can be solved by maximizing an objective function using the hybrid ant lion optimizer (HALO). Experimental results, particularly the fitness values, demonstrate that HALO is more effective than the other algorithms discussed in this paper, and that the optimal parameters and features of the Gabor filter balance efficiency and accuracy. The method is feasible and reasonable and can be utilized in practical texture classification applications.
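
To make the optimization concrete, the sketch below evaluates a Gabor filter's response for a given parameter set and searches the parameter space. A generic random search stands in for HALO, and the fitness shown (mean response energy) is a placeholder for the paper's class-separability objective.

```python
import cv2
import numpy as np

def gabor_energy(img, ksize, sigma, theta, lambd, gamma):
    """Mean response energy of one Gabor filter on a grayscale image."""
    kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
    return np.mean(np.abs(cv2.filter2D(np.float32(img), cv2.CV_32F, kern)))

def random_search(fitness, n_iter=200, rng=np.random.default_rng(0)):
    """Generic random search over (sigma, lambda, gamma); a stand-in for
    the paper's hybrid ant lion optimizer (HALO)."""
    best, best_fit = None, -np.inf
    for _ in range(n_iter):
        params = (rng.uniform(1, 8), rng.uniform(4, 32), rng.uniform(0.2, 1.0))
        f = fitness(*params)
        if f > best_fit:
            best, best_fit = params, f
    return best, best_fit

# Usage: fix size/orientation, optimize the remaining filter parameters.
# best, _ = random_search(lambda s, l, g: gabor_energy(img, 31, s, 0.0, l, g))
```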

Open Access Article
IMU-Aided High-Frequency Lidar Odometry for Autonomous Driving
Appl. Sci. 2019, 9(7), 1506; https://doi.org/10.3390/app9071506 - 11 Apr 2019
Cited by 2
Abstract
For autonomous driving, it is important to obtain precise and high-frequency localization information. This paper proposes a novel method in which an Inertial Measurement Unit (IMU), a wheel encoder, and lidar odometry are utilized together to estimate the ego-motion of an unmanned ground vehicle. The IMU is fused with the wheel encoder to obtain a motion prior, and it is involved in three levels of the lidar odometry: first, the IMU information is used to rectify the intra-frame distortion of the lidar scan caused by the vehicle's own movement; second, the IMU provides a better initial guess for the lidar odometry; and third, the IMU is fused with the lidar odometry in an extended Kalman filter framework. In addition, an efficient method for hand-eye calibration between the IMU and the lidar is proposed. To evaluate the performance of our method, extensive experiments were performed; our system outputs stable, accurate, and high-frequency localization results in diverse environments without any prior information.
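
The third fusion level can be illustrated with a minimal planar EKF in which IMU/encoder data drive the prediction and lidar odometry poses arrive as measurements. The 3-state model and noise values below are illustrative assumptions; the paper's filter is not specified at this level of detail in the abstract.

```python
import numpy as np

class PoseEKF:
    """Minimal planar EKF: an IMU/encoder motion model predicts the state
    [x, y, yaw]; lidar odometry poses arrive as direct measurements."""
    def __init__(self):
        self.x = np.zeros(3)          # [x, y, yaw]
        self.P = np.eye(3) * 1e-3

    def predict(self, v, omega, dt, q=1e-4):
        x, y, th = self.x
        self.x = np.array([x + v * dt * np.cos(th),
                           y + v * dt * np.sin(th),
                           th + omega * dt])
        F = np.array([[1, 0, -v * dt * np.sin(th)],   # motion-model Jacobian
                      [0, 1,  v * dt * np.cos(th)],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + q * np.eye(3)

    def update(self, z, r=1e-2):
        # Lidar odometry measures the full state, so H = I.
        S = self.P + r * np.eye(3)
        K = self.P @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(3) - K) @ self.P
```

A production filter would also wrap the yaw innovation to [-pi, pi] and estimate biases; both are omitted here for brevity.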

Open Access Article
Change Detection of Water Resources via Remote Sensing: An L-V-NSCT Approach
Appl. Sci. 2019, 9(6), 1223; https://doi.org/10.3390/app9061223 - 22 Mar 2019
Abstract
For the change detection of water resources via remote sensing, a non-subsampled contourlet transform method combining a log-vari model and the Structural Similarity of Variogram (VSSIM) model, namely the log-vari and VSSIM based non-subsampled contourlet transform (L-V-NSCT) approach, is proposed. First, a difference-image construction method based on non-subsampled contourlet transform (NSCT) texture analysis is designed to extract the low-frequency and high-frequency texture features of the objects in the images. Second, the texture features of rivers, lakes, and other objects in the images are accurately classified. Finally, the change detection results for regions of interest are extracted and evaluated. In the experiments, the L-V-NSCT approach is compared with other methods, and the results show its effectiveness. The change in Dongting Lake is also analyzed, which can serve as a reference for the relevant administrative departments.
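
For contrast with the NSCT-based construction, a common baseline difference image for remote-sensing change detection is the log-ratio map with Otsu thresholding, sketched below; this is not the L-V-NSCT method itself, only a reference point.

```python
import numpy as np
from skimage.filters import threshold_otsu

def change_map(t1, t2, eps=1.0):
    """Binary change map from two co-registered acquisitions: an absolute
    log-ratio difference image, thresholded with Otsu's method."""
    d = np.abs(np.log((t1.astype(np.float64) + eps) /
                      (t2.astype(np.float64) + eps)))
    return d > threshold_otsu(d)
```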
