Special Issue "Computer Vision and Image Processing Technologies"

A special issue of Technologies (ISSN 2227-7080). This special issue belongs to the section "Information and Communication Technologies".

Deadline for manuscript submissions: closed (30 June 2020).

Special Issue Editor

Dr. Pankaj Kumar
Guest Editor
Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, Gujarat, India
Interests: computer vision; image analysis; signal processing; deep learning; tracking; particle filters; plant phenotyping; activity and behavior analysis in image sequence

Special Issue Information

Dear Colleagues,

The computer vision and image analysis community has developed several technologies that are now widely used by the general public and actively researched around the world. Deep learning, together with computer vision and image analysis, has achieved remarkable results in object detection and classification, along with several other problems such as image segmentation.

Several big companies have released products based on computer vision. To name a few: Kinect by Microsoft, the AIY Vision Kit by Google, BB8 (a self-driving car) by Nvidia, and the Scanalyzer by LemnaTec. In the last decade or so, image analysis has significantly aided life science researchers in carrying out phenotyping activities.

Applying laboratory science and computer technologies to real life requires solving several engineering problems. How can object classification be applied to plant growth and pollination analysis? How should imaging hardware be developed to obtain accurate phenotyping scores? Finding the answers to such questions requires dedicated interdisciplinary research.

To showcase and archive high-quality research outputs in this context, this Special Issue invites original submissions. It will foster increased attention to research on developing real-life applications of computer vision and image analysis, and will allow the interdisciplinary community working with image and video analysis to present new academic research and industrial developments.

Topics include, but are not limited to:

  • Vision for teaching and learning
  • Vision for human–computer interaction
  • Object recognition and classification
  • Multiple object tracking and behavior analysis
  • Scene analysis and context development
  • Deep learning for computer vision applications
  • Human, animal, and plant phenotyping
  • Embedded vision technologies

Dr. Pankaj Kumar
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Technologies is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Tracking
  • Image analysis
  • Image enhancement
  • Machine learning
  • Event/behavior detection and analysis
  • Object detection and classification
  • Segmentation
  • Camera calibration
  • Artificial intelligence
  • Deep learning

Published Papers (3 papers)


Research

Article
An Interactive Real-Time Cutting Technique for 3D Models in Mixed Reality
Technologies 2020, 8(2), 23; https://doi.org/10.3390/technologies8020023 - 12 May 2020
Cited by 3 | Viewed by 1797
Abstract
This work describes a Mixed Reality application for modifying and cutting virtual objects, presented as a digital simulation of surgical operations. Following this approach, surgeons can test all the solutions designed in the preoperative stage in a Mixed Reality environment, and high precision in surgical applications can be achieved with the new methodology. The presented solution is hands-free and does not require a mouse or keyboard: it is based on HoloLens, the Leap Motion device, and Unity. A new cutting algorithm has been developed to handle multiple objects, speed up cutting on complex meshes, and preserve geometry quality. A case study presents the cutting of several bones to simulate surgical operations. A reduction in cutting time compared to the original method is observed, together with high flexibility of the tool and good fidelity of the geometry. Moreover, all the object fragments generated by the algorithm remain available for manipulation and further cuts.
(This article belongs to the Special Issue Computer Vision and Image Processing Technologies)
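The paper's own cutting algorithm is not reproduced here, but the core idea behind cutting a 3D model can be illustrated by classifying mesh triangles against a cutting plane via signed vertex distances. A minimal NumPy sketch (all names and the example mesh are illustrative, not from the paper):

```python
import numpy as np

def split_triangles_by_plane(vertices, triangles, plane_point, plane_normal):
    """Partition triangles by which side of a cutting plane they lie on.

    Triangles straddling the plane are returned separately; a full cutter
    would re-triangulate these along the intersection line.
    """
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    # Signed distance of every vertex to the plane.
    d = (vertices - np.asarray(plane_point, float)) @ n
    side = np.sign(d)[triangles]          # per-triangle vertex signs
    above = np.all(side >= 0, axis=1)     # entirely on the positive side
    below = np.all(side <= 0, axis=1)     # entirely on the negative side
    straddle = ~(above | below)           # crossed by the plane
    return triangles[above], triangles[below], triangles[straddle]

# Two triangles forming a unit square in the z=0 plane, cut by the plane x=0.5.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
tris = np.array([[0, 1, 2], [0, 2, 3]])
up, down, cross = split_triangles_by_plane(verts, tris, [0.5, 0, 0], [1, 0, 0])
print(len(up), len(down), len(cross))  # → 0 0 2 (both triangles are crossed)
```

The straddling set is where the geometric work of a real-time cutter happens: generating new vertices on the plane and keeping both fragments as manipulable meshes, as the paper describes.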

Article
Recognition of Holoscopic 3D Video Hand Gesture Using Convolutional Neural Networks
Technologies 2020, 8(2), 19; https://doi.org/10.3390/technologies8020019 - 15 Apr 2020
Viewed by 2511
Abstract
The convolutional neural network (CNN) algorithm is an efficient technique for recognizing hand gestures. In human–computer interaction, a human gesture is a non-verbal communication mode, as users communicate with a computer via input devices. In this article, 3D micro hand gesture recognition disparity experiments using a CNN are proposed. This study includes twelve 3D micro hand motions recorded for three different subjects. The system is validated by an experiment implemented on twenty subjects of different ages. The results are analysed and evaluated based on execution time, training, testing, sensitivity, specificity, positive and negative predictive value, and likelihood. The CNN training results show an accuracy as high as 100%, which represents superior performance on all factors. The validation results, on the other hand, average about 99% accuracy. The CNN algorithm has proven to be the most accurate classification tool for micro gesture recognition.
(This article belongs to the Special Issue Computer Vision and Image Processing Technologies)
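The evaluation measures listed in this abstract (sensitivity, specificity, positive and negative predictive value, likelihood ratio) are all derived from a confusion matrix. A minimal sketch with hypothetical counts for one gesture class evaluated one-vs-rest (the numbers are illustrative, not the paper's data):

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard confusion-matrix metrics for one class (one-vs-rest)."""
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value (precision)
    npv = tn / (tn + fn)           # negative predictive value
    lr_plus = sensitivity / (1 - specificity)  # positive likelihood ratio
    return sensitivity, specificity, ppv, npv, lr_plus

# Hypothetical counts: 95 correct detections, 5 misses, 5 false alarms.
sens, spec, ppv, npv, lr = binary_metrics(tp=95, fp=5, fn=5, tn=95)
print(round(sens, 2), round(spec, 2), round(ppv, 2), round(npv, 2))  # → 0.95 0.95 0.95 0.95
```

For a multi-class gesture recognizer, these are computed per class by treating that class as positive and all others as negative, then averaged.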

Article
Incremental and Multi-Task Learning Strategies for Coarse-To-Fine Semantic Segmentation
Technologies 2020, 8(1), 1; https://doi.org/10.3390/technologies8010001 - 18 Dec 2019
Cited by 2 | Viewed by 3442
Abstract
The semantic understanding of a scene is a key problem in the computer vision field. In this work, we address the multi-level semantic segmentation task, where a deep neural network is first trained to recognize an initial, coarse set of a few classes. Then, in an incremental-like approach, it is adapted to segment and label new object categories hierarchically derived from subdividing the classes of the initial set. We propose a set of strategies in which the output of the coarse classifiers is fed to the architectures performing the finer classification. Furthermore, we investigate the possibility of predicting the different levels of semantic understanding together, which also helps achieve higher accuracy. Experimental results on the New York University Depth v2 (NYUDv2) dataset show promising insights into multi-level scene understanding.
(This article belongs to the Special Issue Computer Vision and Image Processing Technologies)
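The coarse-to-fine idea of feeding coarse predictions into a finer classifier can be illustrated with a simple probabilistic sketch: each fine class has a parent coarse class, and fine-class scores are weighted by the predicted probability of their parent. The class names and the multiplicative combination rule below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

# Illustrative two-level hierarchy: each fine class has one coarse parent.
coarse_classes = ["furniture", "structure"]
fine_to_coarse = {"chair": 0, "table": 0, "wall": 1, "floor": 1}
fine_classes = list(fine_to_coarse)

def refine(coarse_probs, fine_logits):
    """Weight fine-class scores by the probability of their parent coarse class."""
    fine_probs = np.exp(fine_logits) / np.exp(fine_logits).sum()  # softmax
    parents = np.array([fine_to_coarse[c] for c in fine_classes])
    combined = fine_probs * coarse_probs[parents]
    return combined / combined.sum()

coarse = np.array([0.9, 0.1])            # coarse net is confident: "furniture"
logits = np.array([1.0, 0.5, 1.2, 0.3])  # raw fine scores slightly favour "wall"
probs = refine(coarse, logits)
print(fine_classes[int(np.argmax(probs))])  # → chair
```

The coarse prediction overrules the weak fine-level preference for "wall", which is the intuition behind conditioning finer classifiers on coarser outputs.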
