Special Issue "Intelligent Imaging and Analysis"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Mechanical Engineering".

Deadline for manuscript submissions: closed (31 March 2019).

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors

Prof. Dr. DaeEun Kim
Guest Editor
School of Electrical and Electronic Engineering, Yonsei University, Shinchon, Seoul, 03722, South Korea
Interests: biologically inspired robotics; mobile robots; biosensors; neural networks; evolutionary computation
Prof. Dr. Dosik Hwang
Guest Editor
School of Electrical and Electronic Engineering, Yonsei University, Shinchon, Seoul, 03722, South Korea
Interests: medical image; deep learning; magnetic resonance image; computed tomography; image analysis; signal processing; artificial intelligence

Special Issue Information

Dear Colleagues,

Imaging and analysis are involved in a wide range of research fields, including biomedical applications, medical imaging and diagnosis, computer vision, autonomous driving, and robot control. Both are now undergoing a major intelligent transformation driven by breakthroughs in artificial intelligence, including deep learning. Many difficulties in image generation, reconstruction, de-noising, artifact removal, segmentation, detection, and control tasks are being overcome with the help of advanced artificial intelligence approaches.

This Special Issue focuses on the latest developments in learning-based intelligent imaging techniques and the subsequent analyses, including photographic imaging, medical imaging, detection, segmentation, medical diagnosis, computer vision, and vision-based robot control. Through this Special Issue, these technological developments will be shared with researchers who work on imaging itself or who use image data and analysis for their own purposes. New applications utilizing intelligent imaging and analysis techniques are also welcome.

Potential topics include, but are not limited to:

  • Photographic imaging
  • Medical imaging
  • Magnetic resonance imaging
  • Computed tomography
  • Image reconstruction
  • Image detection
  • Segmentation
  • Diagnosis
  • De-noising
  • Artifact removal
  • Computer vision
  • Vision-based robots

Prof. DaeEun Kim
Prof. Dosik Hwang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Deep learning
  • Photographic imaging
  • Medical imaging
  • Magnetic resonance imaging
  • Computed tomography
  • Image reconstruction
  • Detection
  • Segmentation
  • Diagnosis
  • Denoising
  • Artifact removal
  • Computer vision
  • Robot control

Published Papers (30 papers)


Editorial

Jump to: Research, Review

Open Access Editorial
Special Features on Intelligent Imaging and Analysis
Appl. Sci. 2019, 9(22), 4804; https://doi.org/10.3390/app9224804 - 10 Nov 2019
Cited by 1
Abstract
Intelligent imaging and analysis have been studied in various research fields, including medical imaging, biomedical applications, computer vision, visual inspection and robot systems [...] Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available

Research

Jump to: Editorial, Review

Open Access Article
Image Super-Resolution Algorithm Based on Dual-Channel Convolutional Neural Networks
Appl. Sci. 2019, 9(11), 2316; https://doi.org/10.3390/app9112316 - 05 Jun 2019
Cited by 17
Abstract
For single-channel image super-resolution methods, it is difficult to achieve both fast convergence and high-quality texture restoration. By mitigating the weaknesses of existing methods, the present paper proposes an image super-resolution algorithm based on dual-channel convolutional neural networks (DCCNN). The novel network model is divided into a deep channel and a shallow channel. The deep channel is used to extract detailed texture information from the original image, while the shallow channel mainly recovers the overall outline of the original image. Firstly, the residual block is adjusted in the feature extraction stage, enhancing the nonlinear mapping ability of the network; the feature mapping dimension is reduced, and the effective features of the image are obtained. In the up-sampling stage, the parameters of the deconvolutional kernel are adjusted, decreasing high-frequency signal loss. The high-resolution feature space can be rebuilt recursively using long-term and short-term memory blocks during the reconstruction stage, further enhancing the recovery of texture information. Secondly, the convolutional kernel is adjusted in the shallow channel to reduce the number of parameters, ensuring that the overall outline of the image is restored and that the network converges rapidly. Finally, the dual-channel loss function is jointly optimized to enhance the feature-fitting ability and obtain the final high-resolution image output. Using the improved algorithm, the network converges more rapidly, image edge and texture reconstruction is clearly improved, and the Peak Signal-to-Noise Ratio (PSNR) and structural similarity are superior to those of other solutions. Full article
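The abstract above reports Peak Signal-to-Noise Ratio (PSNR) as one of its quality metrics. PSNR is computed from the mean squared error between the reference image and the reconstruction; a minimal pure-Python sketch for 8-bit images (not the authors' code):

```python
import math

def psnr(reference, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-sized images.

    Images are given as flat lists of pixel intensities; `peak` is the
    maximum possible pixel value (255 for 8-bit images).
    """
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# Example: a reconstruction that is off by 1 at every pixel.
ref = [10, 20, 30, 40]
rec = [11, 21, 31, 41]
print(round(psnr(ref, rec), 2))  # MSE = 1, so PSNR = 10*log10(255^2) ≈ 48.13 dB
```

Higher PSNR means a reconstruction closer to the reference; super-resolution papers typically report it averaged over a benchmark set.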

Open Access Article
Data Balancing Based on Pre-Training Strategy for Liver Segmentation from CT Scans
Appl. Sci. 2019, 9(9), 1825; https://doi.org/10.3390/app9091825 - 02 May 2019
Cited by 1
Abstract
Data imbalance is often encountered in the deep learning process and is harmful to model training. An imbalance of hard and easy samples in the training dataset often occurs in segmentation tasks on computed tomography (CT) scans. However, due to the strong similarity between adjacent slices in volumes and the differences between segmentation tasks (the same slice may be a hard sample in the liver segmentation task but an easy sample in the kidney or spleen segmentation task), it is hard to resolve this imbalance with traditional methods. In this work, we use a pre-training strategy to distinguish hard and easy samples and then increase the proportion of hard slices in the training dataset, which mitigates the imbalance of hard and easy samples and enhances the contribution of hard samples to the training process. Our experiments on liver, kidney and spleen segmentation show that increasing the ratio of hard samples in the training dataset enhances the prediction ability of the model by improving its ability to deal with hard samples. The main contribution of this work is the application of a pre-training strategy, which enables us to select training samples online according to different tasks and to ease data imbalance in the training dataset. Full article
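The sampling strategy the abstract describes — score each slice with a pre-trained model, then raise the proportion of hard slices in the training set — can be sketched as follows. The quantile threshold and duplication factor are illustrative assumptions, not values from the paper:

```python
def rebalance(slices, losses, hard_quantile=0.7, oversample=3):
    """Duplicate hard samples so they occupy a larger share of the dataset.

    `losses` are per-slice losses from a pre-trained segmentation model;
    slices whose loss exceeds the `hard_quantile` threshold are treated
    as hard and repeated `oversample` times.
    """
    ranked = sorted(losses)
    threshold = ranked[int(hard_quantile * (len(ranked) - 1))]
    out = []
    for s, l in zip(slices, losses):
        out.extend([s] * (oversample if l > threshold else 1))
    return out

slices = ["s0", "s1", "s2", "s3", "s4"]
losses = [0.1, 0.9, 0.2, 0.8, 0.15]
print(rebalance(slices, losses))
# → ['s0', 's1', 's1', 's1', 's2', 's3', 's3', 's3', 's4']
```

Because the losses come from a model pre-trained for a specific organ, the same slice can be resampled differently for liver, kidney, or spleen, matching the task-dependence the abstract points out.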

Open Access Article
A Joint Training Model for Face Sketch Synthesis
Appl. Sci. 2019, 9(9), 1731; https://doi.org/10.3390/app9091731 - 26 Apr 2019
Cited by 3
Abstract
The exemplar-based method is most frequently used in face sketch synthesis because of its efficiency in representing the nonlinear mapping between face photos and sketches. However, the sketches synthesized by existing exemplar-based methods suffer from block artifacts and blur effects. In addition, most exemplar-based methods ignore the training sketches in the weight representation process. To improve synthesis performance, a novel joint training model that takes sketches into consideration is proposed in this paper. First, we construct the joint training photo and sketch by concatenating the original photo and its sketch with a high-pass-filtered image of the corresponding sketch. Then, an offline random sampling strategy is adopted for each test photo patch to select the joint training photo and sketch patches in the neighboring region. Finally, a novel locality constraint is designed to calculate the reconstruction weights, allowing the synthesized sketches to retain more detailed information. Extensive experimental results on public datasets show the superiority of the proposed joint training model over existing state-of-the-art sketch synthesis methods, both in subjective perception and in objective FaceNet-based face recognition evaluation. Full article
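The abstract does not spell out the paper's locality constraint, but a generic locality-constrained weighting — neighbor weights that decay with distance to the test patch and are normalized to sum to one, in the spirit of locality-constrained coding — conveys the idea. This is a sketch of that generic scheme, not the paper's exact formulation:

```python
import math

def locality_weights(test_patch, neighbor_patches, sigma=1.0):
    """Reconstruction weights that favour neighbours close to the test patch.

    Weights decay exponentially with the Euclidean distance between the
    test patch and each candidate patch, then are normalised to sum to
    one, so the synthesized sketch patch is a convex combination of the
    neighbours' sketch patches.
    """
    dists = [math.dist(test_patch, p) for p in neighbor_patches]
    raw = [math.exp(-d / sigma) for d in dists]
    total = sum(raw)
    return [w / total for w in raw]

# The nearest candidate patch receives the largest weight.
w = locality_weights([0.0, 0.0], [[0.0, 0.1], [1.0, 1.0], [0.2, 0.0]])
print([round(x, 3) for x in w])
```

The synthesized sketch patch is then the weighted sum of the selected training sketch patches under these weights.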

Open Access Article
Application of a Real-Time Visualization Method of AUVs in Underwater Visual Localization
Appl. Sci. 2019, 9(7), 1428; https://doi.org/10.3390/app9071428 - 04 Apr 2019
Cited by 4
Abstract
Autonomous underwater vehicles (AUVs) are widely used, but it is a tough challenge to guarantee the underwater location accuracy of AUVs. In this paper, a novel method is proposed to improve the accuracy of vision-based localization systems in feature-poor underwater environments. The traditional stereo visual simultaneous localization and mapping (SLAM) algorithm, which relies on the detection of tracking features, is used to estimate the position of the camera and establish a map of the environment. However, it is hard to find enough reliable point features in underwater environments and thus the performance of the algorithm is reduced. A stereo point and line SLAM (PL-SLAM) algorithm for localization, which utilizes point and line information simultaneously, was investigated in this study to resolve the problem. Experiments with an AR-marker (Augmented Reality-marker) were carried out to validate the accuracy and effect of the investigated algorithm. Full article

Open Access Article
A CNN Model for Human Parsing Based on Capacity Optimization
Appl. Sci. 2019, 9(7), 1330; https://doi.org/10.3390/app9071330 - 29 Mar 2019
Cited by 1
Abstract
Although a state-of-the-art performance has been achieved in pixel-specific tasks, such as saliency prediction and depth estimation, convolutional neural networks (CNNs) still perform unsatisfactorily in human parsing where semantic information of detailed regions needs to be perceived under the influences of variations in viewpoints, poses, and occlusions. In this paper, we propose to improve the robustness of human parsing modules by introducing a depth-estimation module. A novel scheme is proposed for the integration of a depth-estimation module and a human-parsing module. The robustness of the overall model is improved with the automatically obtained depth labels. As another major concern, the computational efficiency is also discussed. Our proposed human parsing module with 24 layers can achieve a similar performance as the baseline CNN model with over 100 layers. The number of parameters in the overall model is less than that in the baseline model. Furthermore, we propose to reduce the computational burden by replacing a conventional CNN layer with a stack of simplified sub-layers to further reduce the overall number of trainable parameters. Experimental results show that the integration of two modules contributes to the improvement of human parsing without additional human labeling. The proposed model outperforms the benchmark solutions and the capacity of our model is better matched to the complexity of the task. Full article

Open Access Article
Volumetric Tooth Wear Measurement of Scraper Conveyor Sprocket Using Shape from Focus-Based Method
Appl. Sci. 2019, 9(6), 1084; https://doi.org/10.3390/app9061084 - 14 Mar 2019
Cited by 1
Abstract
Volumetric tooth wear measurement is important for assessing the life of a scraper conveyor sprocket. A shape-from-focus-based method is used to measure scraper conveyor sprocket tooth wear. This method reduces the complexity of the process and improves the accuracy and efficiency of existing methods. A prototype set of sequence images, taken by a camera facing the sprocket teeth, is collected by controlling the fabricated track movement. In this method, normal-distribution-operator image filtering is employed to improve the accuracy of the evaluation function value calculation. A normal operator is used to detect noisy pixels, combined with a median filter so as to retain as much of the original image information as possible. In addition, an adaptive evaluation window selection method is proposed to address the difficulty of identifying an appropriate evaluation window for calculating the focus evaluation value. The shape and size of the evaluation window are determined autonomously using the correlation value of the grey-level co-occurrence matrix generated from the neighbourhood pixels of the measured pixel. A reverse engineering technique is used to quantitatively verify the shape volume recovery accuracy of different evaluation windows. The test results demonstrate that the proposed method can effectively measure sprocket tooth wear volume with an accuracy of up to 97.23%. Full article
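Shape from focus, on which the measurement method above is built, scores each evaluation window with a focus measure and picks, per pixel, the frame where that measure peaks. A standard sum-of-squared-Laplacian focus measure (a common baseline, not the paper's adaptive-window variant) can be sketched as:

```python
def laplacian_focus(window):
    """Sum-of-squared-Laplacian focus measure over one evaluation window.

    A standard shape-from-focus criterion: the sharper (better focused)
    the window, the larger the response. `window` is a 2D list of grey
    values; border pixels are skipped.
    """
    h, w = len(window), len(window[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (window[y - 1][x] + window[y + 1][x]
                   + window[y][x - 1] + window[y][x + 1]
                   - 4 * window[y][x])
            total += lap * lap
    return total

sharp = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]          # high local contrast
blurred = [[100, 110, 100], [110, 120, 110], [100, 110, 100]]
print(laplacian_focus(sharp) > laplacian_focus(blurred))   # → True
```

Repeating this per pixel across the image stack yields the depth map from which the wear volume is computed.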

Open Access Article
Intelligent Evaluation of Strabismus in Videos Based on an Automated Cover Test
Appl. Sci. 2019, 9(4), 731; https://doi.org/10.3390/app9040731 - 20 Feb 2019
Cited by 5
Abstract
Strabismus is a common vision disease that adversely affects vision as well as quality of life. A timely diagnosis is crucial for the proper treatment of strabismus. In contrast to manual evaluation, well-designed automatic evaluation can significantly improve the objectivity, reliability, and efficiency of strabismus diagnosis. In this study, we propose an intelligent system for evaluating strabismus in digital videos, based on the cover test. In particular, the video is recorded using an infrared camera while the subject performs automated cover tests. The video is then fed into the proposed algorithm, which consists of six stages: (1) eye region extraction, (2) iris boundary detection, (3) key frame detection, (4) pupil localization, (5) deviation calculation, and (6) evaluation of strabismus. A database containing cover test data of both strabismic and normal subjects was established for the experiments. Experimental results demonstrate that the deviation of strabismus can be well evaluated by the proposed method: the accuracy was over 91% in the horizontal direction, with an error of 8 diopters, and over 86% in the vertical direction, with an error of 4 diopters. Full article

Open Access Article
Evaluating the Overall Accuracy of Additional Learning and Automatic Classification System for CT Images
Appl. Sci. 2019, 9(4), 682; https://doi.org/10.3390/app9040682 - 17 Feb 2019
Cited by 6
Abstract
A large number of registered images in a training dataset are required for creating classification models, because training images with a convolutional neural network is a form of supervised learning. It takes a significant amount of time and effort to create a registered dataset, because modern computed tomography (CT) and magnetic resonance imaging devices produce hundreds of images per examination. This study aims to evaluate the overall accuracy of additional learning and automatic classification systems for CT images. The study involved 700 patients, who underwent contrast or non-contrast CT examination of the brain, neck, chest, abdomen, or pelvis. The images were divided into 500 images per class. Ten datasets containing 5000–50,000 images were prepared for the 10-class problem. The overall accuracy was calculated using a confusion matrix for evaluating the created models. The highest overall reference accuracy was 0.9033, obtained when the model was trained with a dataset containing 50,000 images. Additional learning with manual training was effective when datasets with a large number of images were used. Additional learning with automatic training requires models with an inherently higher classification accuracy. Full article
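The overall accuracy used in the study above is the trace of the confusion matrix divided by the total number of samples. A minimal sketch, with an illustrative 3-class matrix rather than the paper's 10-class one:

```python
def overall_accuracy(confusion):
    """Overall accuracy = correctly classified / total, i.e. trace / sum."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Rows: true class, columns: predicted class (illustrative 3-class example).
cm = [
    [45, 3, 2],
    [4, 40, 6],
    [1, 5, 44],
]
print(round(overall_accuracy(cm), 4))  # → 0.86 (129 correct out of 150)
```

The same computation on a 10×10 matrix of the 10 anatomical/contrast classes yields figures such as the 0.9033 reported above.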

Open Access Article
Fast 3D Semantic Mapping in Road Scenes
Appl. Sci. 2019, 9(4), 631; https://doi.org/10.3390/app9040631 - 13 Feb 2019
Cited by 5
Abstract
Fast 3D reconstruction with semantic information in road scenes is in great demand for autonomous navigation. It involves issues of geometry and appearance in the field of computer vision. In this work, we propose a fast 3D semantic mapping system based on monocular vision that fuses localization, mapping, and scene parsing. From visual sequences it can estimate the camera pose, calculate depth, predict the semantic segmentation, and finally realize the 3D semantic mapping. Our system consists of three modules: a parallel visual Simultaneous Localization And Mapping (SLAM) and semantic segmentation module, an incremental semantic transfer from the 2D image to the 3D point cloud, and a global optimization based on a Conditional Random Field (CRF). It is a heuristic approach that improves the accuracy of the 3D semantic labeling in light of the spatial consistency at each step of the 3D reconstruction. In our framework, there is no need to make a semantic inference on every frame of the sequence, since the 3D point cloud data with semantic information correspond to sparse reference frames. This saves computational cost and allows our mapping system to run online. We evaluate the system on road scenes, e.g., KITTI, and observe a significant speed-up in the inference stage by labeling on the 3D point cloud. Full article

Open Access Article
A Novel Self-Intersection Penalty Term for Statistical Body Shape Models and Its Applications in 3D Pose Estimation
Appl. Sci. 2019, 9(3), 400; https://doi.org/10.3390/app9030400 - 24 Jan 2019
Cited by 2
Abstract
Statistical body shape models are widely used in 3D pose estimation due to their low-dimensional parameter representation. However, it is difficult to avoid self-intersection between body parts accurately. Motivated by this fact, we propose a novel self-intersection penalty term for statistical body shape models applied in 3D pose estimation. To avoid the trouble of computing self-intersection for complex surfaces such as body meshes, the gradient of the proposed self-intersection penalty term is derived manually from a geometric perspective. First, the self-intersection penalty term is defined as the volume of the self-intersection region. To calculate the partial derivatives with respect to the coordinates of the vertices, we employ detection rays to divide the vertices of the statistical body shape model into different groups depending on whether each vertex lies in the region of self-intersection. Second, the partial derivatives can easily be derived from the normal vectors of the neighboring triangles of the vertices. Finally, this penalty term can be applied in gradient-based optimization algorithms to remove the self-intersection of triangular meshes without using any approximation. Qualitative and quantitative evaluations were conducted to demonstrate the effectiveness and generality of the proposed method compared with previous approaches. The experimental results show that the proposed penalty term can avoid self-intersection, exclude unreasonable predictions, and indirectly improve the accuracy of 3D pose estimation. Furthermore, the proposed method can be employed universally in triangular-mesh-based 3D reconstruction. Full article

Open Access Article
Computer-Aided Design and Manufacturing Technology for Identification of Optimal Nuss Procedure and Fabrication of Patient-Specific Nuss Bar for Minimally Invasive Surgery of Pectus Excavatum
Appl. Sci. 2019, 9(1), 42; https://doi.org/10.3390/app9010042 - 22 Dec 2018
Cited by 5
Abstract
The Nuss procedure is one of the most widely used operative techniques for pectus excavatum (PE) patients. It attains the normal shape of the chest wall by lifting the patient's chest wall with the Nuss bar. However, the Nuss bar is for the most part bent by a hand bender according to the patient's chest wall, and this procedure causes various problems, such as failure of the operation and decreased satisfaction of the surgeon and patient with the operation. To solve this problem, we propose a method for deriving the optimal operative result by designing patient-specific Nuss bars through computer-aided design (CAD) and computer-aided manufacturing (CAM), and by performing automatic bending based on the design. In other words, a three-dimensional chest wall model was generated using the computed tomography (CT) image of a pectus excavatum patient, and an operation scenario was selected considering the Nuss bar insertion point and the post-operative chest wall shape. Then, a design drawing of the Nuss bar that could produce the optimal operative result was derived from the operation scenario. Furthermore, after a computerized numerical control (CNC) bending machine for Nuss bar bending was constructed, the Nuss bar prototype was manufactured based on the derived design drawing. The Nuss bar designed and manufactured with the proposed method was found to improve the Haller index (HI) of the pectus excavatum patient by approximately 37% (from 3.14 before to 1.98 after the operation). Moreover, the machining error in manufacturing was within ±5% of the design drawing. The method proposed and verified in this study is expected to reduce the failure rate of the Nuss procedure and significantly improve the satisfaction of the surgeon and patient with the operation. Full article

Open Access Article
Dark Spot Detection in SAR Images of Oil Spill Using Segnet
Appl. Sci. 2018, 8(12), 2670; https://doi.org/10.3390/app8122670 - 18 Dec 2018
Cited by 8
Abstract
Damping of Bragg scattering from the ocean surface is the basic principle underlying synthetic aperture radar (SAR) oil slick detection: oil slicks damp the Bragg scattering and thus produce dark spots on SAR images. Dark spot detection is the first step in oil spill detection and affects the accuracy of oil spill detection. However, natural phenomena (such as waves, ocean currents, and low-wind belts), as well as human factors, may change the backscatter intensity of the sea surface, resulting in uneven intensity, high noise, and blurred boundaries of oil slicks or lookalikes. In this paper, Segnet is used as a semantic segmentation model to detect dark spots in oil spill areas. The proposed method is applied to a data set of 4200 samples from five original SAR images of an oil spill. The effectiveness of the method is demonstrated through comparison with fully convolutional networks (FCN), an early semantic segmentation model, and some other segmentation methods. We observe that the proposed method can not only accurately identify the dark spots in SAR images, but also shows higher robustness under high noise and fuzzy boundary conditions. Full article

Open Access Article
An Image Segmentation Method Using an Active Contour Model Based on Improved SPF and LIF
Appl. Sci. 2018, 8(12), 2576; https://doi.org/10.3390/app8122576 - 11 Dec 2018
Cited by 14
Abstract
Inhomogeneous images cannot be segmented quickly or accurately using local or global image information. To solve this problem, an image segmentation method using a novel active contour model that is based on an improved signed pressure force (SPF) function and a local image fitting (LIF) model is proposed in this paper, which is based on local and global image information. First, a weight function of the global grayscale means of the inside and outside of a contour curve is presented by combining the internal gray mean value with the external gray mean value, based on which a new SPF function is defined. The SPF function can segment blurred images and weak gradient images. Then, the LIF model is introduced by using local image information to segment intensity-inhomogeneous images. Subsequently, a weight function is established based on the local and global image information, and then the weight function is used to adjust the weights between the local information term and the global information term. Thus, a novel active contour model is presented, and an improved SPF- and LIF-based image segmentation (SPFLIF-IS) algorithm is developed based on that model. Experimental results show that the proposed method not only exhibits high robustness to the initial contour and noise but also effectively segments multiobjective images and images with intensity inhomogeneity and can analyze real images well. Full article
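The classic signed pressure force that the paper above improves on (Zhang et al.) is built from the global grayscale means inside (c1) and outside (c2) the current contour: its sign tells the contour whether to expand or shrink at each pixel. A sketch of that baseline function on a flattened image, not of the paper's improved SPF:

```python
def spf(intensities, inside_mask):
    """Classic signed pressure force: I(x) minus the midpoint of the
    inside mean c1 and outside mean c2, normalised to [-1, 1].

    `intensities` is a flat list of pixel values; `inside_mask` marks
    which pixels currently lie inside the contour.
    """
    inside = [v for v, m in zip(intensities, inside_mask) if m]
    outside = [v for v, m in zip(intensities, inside_mask) if not m]
    c1 = sum(inside) / len(inside)
    c2 = sum(outside) / len(outside)
    centred = [v - (c1 + c2) / 2.0 for v in intensities]
    peak = max(abs(v) for v in centred)
    return [v / peak for v in centred]

img = [10, 12, 200, 210, 90]              # flat list of pixel intensities
mask = [False, False, True, True, False]  # current contour interior
print([round(v, 2) for v in spf(img, mask)])
```

Pixels brighter than the c1/c2 midpoint get a positive force and darker pixels a negative one; the paper's contribution is weighting this global term against a local image fitting term for inhomogeneous images.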

Open Access Article
Automated Classification Analysis of Geological Structures Based on Images Data and Deep Learning Model
Appl. Sci. 2018, 8(12), 2493; https://doi.org/10.3390/app8122493 - 04 Dec 2018
Cited by 6
Abstract
Studying the geological structures exposed on the Earth's surface is meaningful, as it is paramount to engineering design and construction. In this research, we used 2206 images with 12 labels to identify geological structures based on the Inception-v3 model. Grayscale and color images were adopted in the model. A convolutional neural network (CNN) model was also built in this research. Meanwhile, K-nearest neighbors (KNN), an artificial neural network (ANN) and extreme gradient boosting (XGBoost) were applied to geological structure classification based on features extracted with the Open Source Computer Vision Library (OpenCV). Finally, the performances of the five methods were compared: KNN, ANN, and XGBoost performed poorly, with accuracies below 40.0%, and the plain CNN overfitted. The model trained using transfer learning was markedly effective on the small dataset of geological structure images; its top-1 and top-3 accuracy reached 83.3% and 90.0%, respectively. This shows that texture is the key feature in this research. Transfer learning based on a deep learning model can effectively extract features from small geological structure datasets and is robust for geological structure image classification. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
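As an illustration of one of the compared baselines, a minimal k-nearest-neighbors classifier over extracted feature vectors can be sketched as below. `knn_predict` is a hypothetical name, and the OpenCV feature extraction the paper uses is not reproduced here.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    feature vectors (Euclidean distance)."""
    d = np.linalg.norm(train_X - query, axis=1)   # distance to every sample
    nearest = train_y[np.argsort(d)[:k]]          # labels of the k nearest
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]              # majority label
```

With high-dimensional texture features and only ~2200 images, such distance-based voting easily degrades, which is consistent with the sub-40% accuracies the paper reports for these baselines.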

Open Access Article
Image Segmentation Approaches for Weld Pool Monitoring during Robotic Arc Welding
Appl. Sci. 2018, 8(12), 2445; https://doi.org/10.3390/app8122445 - 01 Dec 2018
Cited by 2
Abstract
There is a strong correlation between the geometry of the weld pool surface and the degree of penetration in arc welding. To measure the geometry of the weld pool surface robustly, many structured-light laser-line-based monitoring systems have been proposed in recent years. The geometry of the specular weld pool can be computed from the reflected laser lines based on different principles. The prerequisite for accurate computation of the weld pool surface is to segment the reflected laser lines robustly and efficiently. To find the most effective segmentation solutions for images captured with different welding parameters, different image processing algorithms are combined into eight approaches, which are compared both qualitatively and quantitatively in this paper. In particular, the gradient detection filter, the difference method and the grey level co-occurrence matrix (GLCM) are used to remove the uneven background. The spline fitting enhancement method is used to remove fuzziness. The slope difference distribution-based threshold selection method is used to segment the laser lines from the background. Both qualitative and quantitative experiments are conducted to evaluate the accuracy and efficiency of the proposed approaches extensively. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
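The difference method for removing the uneven background can be sketched as frame differencing against a reference image; the function name and the threshold value are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def remove_background(frame, background, thresh=0.1):
    """Difference method: subtract a reference background frame and keep
    only pixels whose absolute change exceeds a threshold, yielding a
    binary mask of the moving/reflected structure (e.g. a laser line)."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    return (diff > thresh).astype(np.uint8)
```

The returned mask would then feed the subsequent enhancement and threshold-selection stages the abstract describes.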

Open Access Article
Deep Residual Network with Sparse Feedback for Image Restoration
Appl. Sci. 2018, 8(12), 2417; https://doi.org/10.3390/app8122417 - 28 Nov 2018
Cited by 7
Abstract
A deep neural network is difficult to train due to its large number of unknown parameters. To improve trainability, we present a moderate-depth residual network for the restoration of motion-blurred and noisy images. The proposed network, called FbResNet, has only 10 layers, with sparse feedback connections added at the middle and last layers. FbResNet converges quickly and denoises effectively. In addition, it reduces the artificial mosaic traces at the seams of patches, so that visually pleasant outputs can be produced from blurred or noisy images. Experimental results show the effectiveness of the designed model and method. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
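The residual (skip) connection underlying FbResNet can be illustrated with a toy fully-connected block in NumPy. This is a generic residual block, y = x + F(x), not the paper's 10-layer convolutional architecture; names are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = x + F(x): the identity skip path lets information (and, during
    training, gradients) bypass the learned transform F."""
    return x + W2 @ relu(W1 @ x)
```

Setting the output weights to zero makes the block an exact identity, which is why residual networks of moderate depth are easier to train than plain stacks of the same size.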

Open Access Article
A Novel Discriminating and Relative Global Spatial Image Representation with Applications in CBIR
Appl. Sci. 2018, 8(11), 2242; https://doi.org/10.3390/app8112242 - 14 Nov 2018
Cited by 17
Abstract
The requirement for effective image search, which motivates the use of Content-Based Image Retrieval (CBIR) and the search of similar multimedia contents on the basis of a user query, remains an open research problem for computer vision applications. The application domains for Bag of Visual Words (BoVW)-based image representations include object recognition, image classification and content-based image analysis. Features from interest point detectors are quantized in the feature space, so the final histogram, or image signature, does not retain any detail about the co-occurrence of features in the 2D image space. This spatial information is crucial, and its loss adversely affects the performance of image classification models. The most notable contribution in this context is Spatial Pyramid Matching (SPM), which captures the absolute spatial distribution of visual words. However, SPM is sensitive to image transformations such as rotation, flipping and translation; when images are not well-aligned, SPM may lose its discriminative power. This paper introduces a novel approach to encoding the relative spatial information for the histogram-based representation of the BoVW model. This is established by computing the global geometric relationship between pairs of identical visual words with respect to the centroid of an image. The proposed research is evaluated using five different datasets. Comprehensive experiments demonstrate the robustness of the proposed image representation as compared to state-of-the-art methods in terms of precision and recall. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
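One plausible reading of the proposed encoding, histogramming the orientations of keypoints that share a visual word about the image centroid, can be sketched as follows. `relative_spatial_histogram` is a hypothetical simplification of the paper's representation, shown for the keypoints of a single visual word.

```python
import numpy as np

def relative_spatial_histogram(points, bins=8):
    """Normalized histogram of keypoint orientations measured relative to
    the centroid of the point set. Because the centroid moves with the
    points, the encoding is unaffected by translation of the whole set."""
    centroid = points.mean(axis=0)
    vectors = points - centroid
    angles = np.arctan2(vectors[:, 1], vectors[:, 0])   # in (-pi, pi]
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / hist.sum()
```

Concatenating such per-word histograms would augment the plain BoVW signature with relative geometry while avoiding the absolute grid that makes SPM alignment-sensitive.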

Open Access Article
A High-Resolution Texture Mapping Technique for 3D Textured Model
Appl. Sci. 2018, 8(11), 2228; https://doi.org/10.3390/app8112228 - 12 Nov 2018
Cited by 5
Abstract
We propose a texture mapping technique that comprises mesh partitioning, mesh parameterization and packing, texture transferring, and texture correction and optimization for generating a high-quality texture map of a three-dimensional (3D) model for e-commerce presentations. The main problems in texture mapping are that the texture resolution is generally worse than that of the original images and that considerable photo inconsistency exists at the transitions between different image sources. To improve the texture resolution, we employed an oriented bounding box method for placing mesh islands on the parametric (UV) map. We also provide a texture size that keeps the texture resolution of the 3D textured model similar to that of the object images. To alleviate the photo inconsistency problem, we employed a method to detect and recover missing color that might exist on a texture map. We also propose a blending process to minimize the transition error caused by different image sources. Thus, a high-quality 3D textured model can be obtained by applying this series of processes for presentations in e-commerce. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
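The oriented bounding box used for placing mesh islands on the UV map can be computed via PCA of a 2-D point set. This sketch (hypothetical names) returns only the box extents along the principal axes, which is the quantity needed for packing.

```python
import numpy as np

def oriented_bounding_box(points2d):
    """PCA-based oriented bounding box of a 2-D point set: rotate the
    points into their principal axes, then take the axis-aligned box
    there. Returns the box extents (width/height) along those axes."""
    centered = points2d - points2d.mean(axis=0)
    # eigenvectors of the covariance matrix give the box orientation
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    rotated = centered @ vecs
    return rotated.max(axis=0) - rotated.min(axis=0)
```

Because the box follows the island's own axes rather than the image axes, elongated islands pack more tightly and waste fewer texels, which is how this step helps preserve texture resolution.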

Open Access Article
An Efficient Automatic Midsagittal Plane Extraction in Brain MRI
Appl. Sci. 2018, 8(11), 2203; https://doi.org/10.3390/app8112203 - 09 Nov 2018
Cited by 1
Abstract
In this paper, a fully automatic and computationally efficient midsagittal plane (MSP) extraction technique for brain magnetic resonance images (MRIs) is proposed. Automatic detection of the MSP in neuroimages can significantly aid the registration of medical images, asymmetry analysis, and alignment or tilt correction (recentering and reorientation) in brain MRIs. The parameters of the MSP are estimated in two steps. In the first step, symmetric features and a principal component analysis (PCA)-based technique are used to vertically align the bilateral symmetry axis of the brain. In the second step, PCA is used to obtain a set of parallel lines (principal axes) from selected two-dimensional (2-D) elliptical slices of the brain MRI, followed by plane fitting using orthogonal regression. The developed algorithm has been tested on 157 real T1-weighted brain MRI datasets, including 14 cases from patients with brain tumors. The presented algorithm is compared with a state-of-the-art approach based on bilateral symmetry maximization. Experimental results revealed that the proposed algorithm is fast (<1.04 s per MRI volume) and exhibits superior performance in terms of accuracy and precision (a mean z-distance of 0.336 voxels and a mean angle difference of 0.06). Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
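The plane fitting by orthogonal regression in the second step can be sketched with an SVD: the plane normal is the direction of least variance of the centered points. `fit_plane` is an illustrative name.

```python
import numpy as np

def fit_plane(points):
    """Orthogonal-regression (total least squares) plane fit: the normal
    is the right singular vector of the centered points that has the
    smallest singular value; the plane passes through the centroid."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]
```

Unlike ordinary least squares, this minimizes perpendicular distances to the plane, which is the appropriate criterion when no coordinate plays the role of a dependent variable.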

Open Access Article
A Novel One-Camera-Five-Mirror Three-Dimensional Imaging Method for Reconstructing the Cavitation Bubble Cluster in a Water Hydraulic Valve
Appl. Sci. 2018, 8(10), 1783; https://doi.org/10.3390/app8101783 - 01 Oct 2018
Cited by 4
Abstract
In order to study bubble morphology, a novel experimental and numerical approach was implemented in this research, focusing on the analysis of a transparent throttle valve made of polymethylmethacrylate (PMMA). A feature-based algorithm was written in MATLAB, allowing the 2D detection and three-dimensional (3D) reconstruction of bubbles, both collapsing and clustered. The valve core, an important part of the throttle valve, was exposed to cavitation; hence, to distinguish it in the captured frames, the faster region-based convolutional neural network (faster R-CNN) algorithm was used to detect its morphology. Additionally, the main approach grouping the above techniques was implemented using an optimized virtual stereo vision arrangement of one camera and five plane mirrors. The results obtained during this study validated the robustness of the applied algorithms and optimization. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available

Open Access Article
3-D Point Cloud Registration Algorithm Based on Greedy Projection Triangulation
Appl. Sci. 2018, 8(10), 1776; https://doi.org/10.3390/app8101776 - 30 Sep 2018
Cited by 3
Abstract
To address the registration problem in current machine vision, a new three-dimensional (3-D) point cloud registration algorithm that combines fast point feature histograms (FPFH) and greedy projection triangulation is proposed. First, the feature information is comprehensively described using the FPFH feature description, and the local correlation of the feature information is established using greedy projection triangulation. Thereafter, the sample consensus initial alignment method is applied to implement the initial registration. By adjusting the initial attitude between the two point clouds, improved initial registration values can be obtained. Finally, the iterative closest point method is used to obtain a precise transformation; thus, accurate registration is completed. Registration experiments on both simple and complex target objects were performed: registration speed increased by 1.1% and registration accuracy by 27.3% to 50%. The experimental results show that both the accuracy and speed of registration are improved and that the target object is registered efficiently using greedy projection triangulation, which significantly improves the efficiency of matching feature points in machine vision. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
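The iterative closest point refinement in the final stage can be sketched as repeated applications of one matching-plus-Kabsch step. This minimal version (hypothetical names, brute-force matching) omits the FPFH features and greedy projection triangulation that provide the initial alignment.

```python
import numpy as np

def icp_step(source, target):
    """One ICP step: match each source point to its nearest target point,
    then solve the best rigid transform (Kabsch) and apply it."""
    # nearest-neighbour correspondences (brute force, O(N*M))
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[d.argmin(axis=1)]
    # Kabsch: optimal rotation between the two centered point sets
    sc, mc = source.mean(axis=0), matched.mean(axis=0)
    U, _, Vt = np.linalg.svd((source - sc).T @ (matched - mc))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = mc - R @ sc
    return (R @ source.T).T + t
```

Iterating this step converges only from a reasonable starting pose, which is exactly why the paper invests in a good initial registration before running ICP.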

Open Access Article
Registration of Dental Tomographic Volume Data and Scan Surface Data Using Dynamic Segmentation
Appl. Sci. 2018, 8(10), 1762; https://doi.org/10.3390/app8101762 - 29 Sep 2018
Cited by 4
Abstract
Over recent years, computer-aided design (CAD) has become widely used in the dental industry. In dental CAD applications using both volumetric computed tomography (CT) images and 3D optically scanned surface data, the two data sets need to be registered. Previous works have registered volume data and surface data by segmentation: the volume data are converted to surface data by segmentation, and the registration is achieved by the iterative closest point (ICP) method. However, the segmentation needs human input, and the results of registration can be poor depending on the segmented surface. Moreover, if the volume data contain metal artifacts, the segmentation process becomes more complex, since post-processing is required to remove the artifacts, and initial positioning of the registration becomes more challenging. To overcome these limitations, we propose a modified iterative closest point (MICP) process with an automatic segmentation method for volume data and surface data. The proposed method uses a bundle of edge points detected along intensity profiles defined by the points and normals of the surface data. Using this dynamic segmentation, the volume data become surface data to which the ICP method can be applied. Experimentally, MICP demonstrates good results compared to the conventional registration method. In addition, the registration can be completed within 10 s if downsampling is applied. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
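The dynamic-segmentation idea of detecting an edge point along an intensity profile (sampled along a surface normal) can be sketched in one dimension; the function name and the gradient criterion are illustrative assumptions.

```python
import numpy as np

def edge_along_profile(profile):
    """Locate the edge on a 1-D intensity profile as the index of the
    largest absolute intensity step between adjacent samples."""
    grad = np.abs(np.diff(profile.astype(float)))
    return int(grad.argmax())
```

Running this on a profile sampled at each surface point yields one edge point per point/normal pair, and the bundle of such edge points stands in for an explicit segmentation of the volume.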

Open Access Article
No-reference Automatic Quality Assessment for Colorfulness-Adjusted, Contrast-Adjusted, and Sharpness-Adjusted Images Using High-Dynamic-Range-Derived Features
Appl. Sci. 2018, 8(9), 1688; https://doi.org/10.3390/app8091688 - 18 Sep 2018
Cited by 2
Abstract
Image adjustment methods are among the most widely used post-processing techniques for enhancing image quality and improving the visual preference of the human visual system (HVS). However, the assessment of adjusted images has mainly depended on subjective evaluations. Also, most recently developed automatic assessment methods have mainly focused on evaluating distorted images degraded by compression or noise; the effects of colorfulness, contrast, and sharpness adjustments on images have been overlooked. In this study, we propose a fully automatic assessment method that evaluates colorfulness-adjusted, contrast-adjusted, and sharpness-adjusted images while considering HVS preferences. The proposed method does not require a reference image and automatically calculates quantitative scores, visual preference, and quality assessment with respect to the level of colorfulness, contrast, and sharpness adjustment. The proposed method evaluates adjusted images based on features extracted from high-dynamic-range images, which have higher colorfulness, contrast, and sharpness than low-dynamic-range images. Through experimentation, we demonstrate that our proposed method achieves a higher correlation with subjective evaluations than conventional assessment methods. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
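As an example of the kind of colorfulness feature such no-reference assessments rely on, the well-known Hasler-Suesstrunk colorfulness metric can be computed as follows. This is a standard metric, not necessarily the HDR-derived feature used by the authors.

```python
import numpy as np

def colorfulness(rgb):
    """Hasler-Suesstrunk colorfulness of an RGB image of shape (H, W, 3):
    combines the spread and magnitude of two opponent-color components."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    rg = r - g                 # red-green opponent channel
    yb = 0.5 * (r + g) - b     # yellow-blue opponent channel
    std = np.hypot(rg.std(), yb.std())
    mean = np.hypot(rg.mean(), yb.mean())
    return std + 0.3 * mean
```

A perfectly gray image scores zero, and the score grows with both the saturation and the variety of colors, which makes such scalars usable as inputs to a learned preference model.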

Open Access Article
Fine-Grain Segmentation of the Intervertebral Discs from MR Spine Images Using Deep Convolutional Neural Networks: BSU-Net
Appl. Sci. 2018, 8(9), 1656; https://doi.org/10.3390/app8091656 - 14 Sep 2018
Cited by 16
Abstract
We propose a new deep learning network capable of successfully segmenting intervertebral discs and their complex boundaries from magnetic resonance (MR) spine images. The existing U-network (U-net) is known to perform well in various segmentation tasks in medical images; however, its performance with respect to details of segmentation such as boundaries is limited by the structural limitations of a max-pooling layer that plays a key role in feature extraction process in the U-net. We designed a modified convolutional and pooling layer scheme and applied a cascaded learning method to overcome these structural limitations of the max-pooling layer of a conventional U-net. The proposed network achieved 3% higher Dice similarity coefficient (DSC) than conventional U-net for intervertebral disc segmentation (89.44% vs. 86.44%, respectively; p < 0.001). For intervertebral disc boundary segmentation, the proposed network achieved 10.46% higher DSC than conventional U-net (54.62% vs. 44.16%, respectively; p < 0.001). Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
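The Dice similarity coefficient (DSC) used to evaluate the segmentations is twice the overlap of the two masks divided by the sum of their sizes; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Because the denominator counts both masks, thin structures such as disc boundaries are penalized heavily for small misses, which explains the much lower boundary DSC values (54.62% vs. 89.44% for whole discs) reported in the abstract.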

Open Access Article
Double Low-Rank and Sparse Decomposition for Surface Defect Segmentation of Steel Sheet
Appl. Sci. 2018, 8(9), 1628; https://doi.org/10.3390/app8091628 - 12 Sep 2018
Cited by 9
Abstract
Surface defect segmentation supports real-time surface defect detection systems for steel sheet by reducing redundant information and highlighting the critical defect regions for high-level image understanding. Existing defect segmentation methods usually lack adaptiveness to different shapes, sizes and scales of the defect object. Based on the observation that the defective area can be regarded as the salient part of the image, a saliency detection model using double low-rank and sparse decomposition (DLRSD) is proposed for surface defect segmentation. The proposed method adopts a low-rank assumption which characterizes the defective sub-regions and defect-free background sub-regions respectively. In addition, the DLRSD model uses sparse constraints for background sub-regions so as to improve robustness to noise and uneven illumination simultaneously. Then Laplacian regularization among spatially adjacent sub-regions is incorporated into the DLRSD model in order to uniformly highlight the defect object. Our proposed DLRSD-based segmentation method consists of three steps: first, the DLRSD model is used to obtain the defect foreground image; then, the foreground image is enhanced to establish a good foundation for segmentation; finally, Otsu's method is used to choose an optimal threshold automatically for segmentation. Experimental results demonstrate that the proposed method outperforms state-of-the-art approaches in terms of both subjective and objective tests. Meanwhile, the proposed method is applicable to industrial detection with limited computational resources. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
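Otsu's method, used in the final thresholding step, selects the threshold that maximizes the between-class variance of the gray-level histogram; a minimal sketch for 8-bit images (the DLRSD saliency stages are not reproduced here):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: exhaustively pick the threshold t that maximizes the
    between-class variance w0*w1*(mu0 - mu1)^2 of the 8-bit histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Because the enhanced foreground map is nearly bimodal (defect vs. background), this parameter-free criterion is a natural final step.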

Open Access Article
Semi-Automatic Segmentation of Vertebral Bodies in MR Images of Human Lumbar Spines
Appl. Sci. 2018, 8(9), 1586; https://doi.org/10.3390/app8091586 - 07 Sep 2018
Cited by 5
Abstract
We propose a semi-automatic algorithm for the segmentation of vertebral bodies in magnetic resonance (MR) images of the human lumbar spine. Quantitative analysis of spine MR images often necessitates segmentation of the image into specific regions representing the anatomic structures of interest. Existing algorithms for vertebral body segmentation require heavy input from the user, which is a disadvantage; for example, the user needs to define individual regions of interest (ROIs) for each vertebral body and specify parameters for the segmentation algorithm. To overcome these drawbacks, we developed a semi-automatic algorithm that considerably reduces the need for user inputs. First, we simplified the ROI placement procedure by reducing the requirement to only one ROI, which includes a vertebral body; subsequently, a correlation algorithm is used to identify the remaining vertebral bodies and to automatically detect their ROIs. Second, the detected ROIs are adjusted to facilitate the subsequent segmentation process. Third, the segmentation is performed via graph-based and line-based segmentation algorithms. We tested our algorithm on sagittal MR images of the lumbar spine and achieved a 90% Dice similarity coefficient when compared with manual segmentation. Our new semi-automatic method significantly reduces the user's role while achieving good segmentation accuracy. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
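The correlation step that locates the remaining vertebral bodies from a single user ROI can be sketched, in one dimension, as normalized cross-correlation template matching; the names and the 1-D simplification are illustrative assumptions.

```python
import numpy as np

def find_roi_1d(signal, template):
    """Slide the template over the signal and return the offset with the
    highest normalized cross-correlation (both windows are standardized,
    so the score is brightness- and contrast-invariant)."""
    n = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    best, best_score = 0, -np.inf
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        score = float(np.dot(w, t)) / n
        if score > best_score:
            best, best_score = i, score
    return best
```

In 2-D, the same idea matches the appearance of the user-selected vertebral body against the rest of the image to seed the remaining ROIs automatically.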

Open Access Article
A UAV-Based Visual Inspection Method for Rail Surface Defects
Appl. Sci. 2018, 8(7), 1028; https://doi.org/10.3390/app8071028 - 24 Jun 2018
Cited by 21
Abstract
Rail surface defects seriously affect the safety of railway systems. At present, human inspection and rail vehicle inspection are the main approaches for the detection of rail surface defects. However, these approaches have many shortcomings, such as low efficiency and high cost. This paper presents a novel visual inspection approach based on unmanned aerial vehicle (UAV) images, and focuses on two key issues of UAV-based rail images: image enhancement and defect segmentation. For the first, a novel image enhancement algorithm named Local Weber-like Contrast (LWLC) is proposed to enhance rail images. The rail surface defects and backgrounds can be highlighted and homogenized under various sunlight intensities by LWLC, owing to its illumination independence, local nonlinearity and other advantages. For the second, a new threshold segmentation method named gray stretch maximum entropy (GSME) is presented. The proposed GSME method emphasizes gray stretching and de-noising of UAV-based rail images, and selects an optimal segmentation threshold for defect detection. Two visual comparison experiments demonstrate the efficiency of the proposed methods. Finally, a quantitative comparison experiment shows that the LWLC-GSME model achieves a recall of 93.75% for T-I defects and of 94.26% for T-II defects. Therefore, LWLC for image enhancement, in conjunction with GSME for defect segmentation, is efficient and feasible for the detection of rail surface defects from UAV images. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
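A maximum-entropy threshold in the spirit of GSME can be sketched with Kapur's criterion, choosing the threshold that maximizes the summed entropies of the foreground and background distributions; the gray-stretch and de-noising stages of GSME are omitted, and the names are illustrative.

```python
import numpy as np

def max_entropy_threshold(gray):
    """Kapur-style maximum-entropy threshold for 8-bit images: pick t
    maximizing H(background) + H(foreground), where each class histogram
    is renormalized to a probability distribution."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        q0 = p[:t][p[:t] > 0] / w0          # background distribution
        q1 = p[t:][p[t:] > 0] / w1          # foreground distribution
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

Entropy-based selection favors thresholds that keep both classes internally "uninformative", which tends to work well for small defects against a stretched, homogenized background.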

Open Access Article
Feature-Learning-Based Printed Circuit Board Inspection via Speeded-Up Robust Features and Random Forest
Appl. Sci. 2018, 8(6), 932; https://doi.org/10.3390/app8060932 - 05 Jun 2018
Cited by 16
Abstract
With the coming of the fourth industrial revolution, manufacturers produce high-tech products. As production processes are refined, inspection technologies become more important. Specifically, the inspection of a printed circuit board (PCB), which is an indispensable part of electronic products, is an essential step for improving process quality and yield. Image processing techniques are utilized for inspection, but they are limited because the backgrounds of images differ and the kinds of defects increase. To overcome these limitations, methods based on machine learning have recently been used; they can inspect without a normal reference image by learning fault patterns. Therefore, this paper proposes a method that can detect various types of defects using machine learning. The proposed method first extracts features through speeded-up robust features (SURF), then learns the fault pattern and calculates probabilities. After that, we generate a weighted kernel density estimation (WKDE) map weighted by the probabilities to consider the density of the features. Because the probability in the WKDE map can reveal areas where defects are concentrated, it improves the performance of the inspection. To verify the proposed method, we apply it to PCB images and confirm its performance. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
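The weighted kernel density estimation (WKDE) map can be sketched in one dimension with Gaussian kernels weighted by per-feature probabilities; the names, bandwidth, and 1-D simplification are assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_kde(points, weights, grid, bandwidth=1.0):
    """Weighted KDE on a 1-D grid: a Gaussian kernel is centered at each
    feature location and scaled by that feature's (normalized) fault
    probability, so regions dense in likely-faulty features score high."""
    diffs = (grid[:, None] - points[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return kernels @ (weights / weights.sum())
```

In 2-D, thresholding such a map highlights regions where high-probability SURF features cluster, which is the density cue the abstract describes.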

Review

Jump to: Editorial, Research

Open Access Review
Research Progress of Visual Inspection Technology of Steel Products—A Review
Appl. Sci. 2018, 8(11), 2195; https://doi.org/10.3390/app8112195 - 08 Nov 2018
Cited by 24
Abstract
The automation and intellectualization of the manufacturing processes in the iron and steel industry needs the strong support of inspection technologies, which play an important role in the field of quality control. At present, visual inspection technology based on image processing has an absolute advantage because of its intuitive nature, convenience, and efficiency. A major breakthrough in this field can be achieved if sufficient research regarding visual inspection technologies is undertaken. Therefore, the purpose of this article is to study the latest developments in steel inspection relating to the detected object, system hardware, and system software, existing problems of current inspection technologies, and future research directions. The paper mainly focuses on the research status and trends of inspection technology. The network framework based on deep learning provides space for the development of end-to-end mode inspection technology, which would greatly promote the implementation of intelligent manufacturing. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis) Printed Edition available
