Recent Trends in Computer Vision with Neural Networks

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Computer Vision and Pattern Recognition".

Deadline for manuscript submissions: 30 January 2025

Special Issue Editor

Dr. Mario Molinara
Guest Editor
Department of Electrical and Information Engineering “Maurizio Scarano”, University of Cassino and Southern Lazio, 03043 Cassino, Italy
Interests: machine learning; pattern recognition; IoT; image understanding; biomedical imaging; sensors

Special Issue Information

Dear Colleagues,

We are delighted to announce the forthcoming Special Issue titled "Recent Trends in Computer Vision with Neural Networks" in the Journal of Imaging. This Special Issue aims to explore the cutting-edge advancements in the field of computer vision, particularly focusing on the innovative applications and developments of neural networks. As computer vision continues to revolutionize various sectors—from healthcare to automotive industries—the role of neural networks in enhancing and evolving this technology is more significant than ever.

We invite contributions that address a range of topics including, but not limited to, machine learning algorithms for image and video analysis, deep learning approaches for pattern recognition, and neural network architectures for real-time image processing. Submissions that demonstrate novel applications of AI in computer vision, or that propose innovative solutions to traditional computer vision challenges using neural networks, are highly encouraged.

This Special Issue seeks to provide a platform for researchers and practitioners from around the world to share their insights, discoveries, and advancements in the field. We welcome original research papers, comprehensive reviews, and case studies that contribute to the body of knowledge in applying neural networks to computer vision.

Your submission will contribute to a broader understanding of how neural networks are shaping the future of computer vision and its applications across diverse fields. We look forward to your valuable contributions to this dynamic and rapidly evolving area of research.

Dr. Mario Molinara
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • computer vision
  • neural networks
  • artificial intelligence
  • pattern recognition

Published Papers (1 paper)


Research

6555 KiB
Article
Video-Based Sign Language Recognition via ResNet and LSTM Network
by Jiayu Huang and Varin Chouvatut
J. Imaging 2024, 10(6), 149; https://doi.org/10.3390/jimaging10060149 - 20 Jun 2024
Abstract
Sign language recognition technology can help people with hearing impairments communicate with hearing people. With the rapid development of deep learning, it now provides substantial technical support for sign language recognition. In sign language recognition tasks, traditional convolutional neural networks used to extract spatio-temporal features from sign language videos suffer from insufficient feature extraction, resulting in low recognition rates. Moreover, large video-based sign language datasets require significant computing resources for training while ensuring the generalization of the network, which poses a further challenge for recognition. In this paper, we present a video-based sign language recognition method based on a Residual Network (ResNet) and a Long Short-Term Memory (LSTM) network. As the number of network layers increases, ResNet's residual connections effectively mitigate the vanishing/exploding-gradient problem, enabling deeper feature extraction. We use the ResNet convolutional network as the backbone model: ResNet extracts the spatial features of each sign language frame, and the learned feature space is then fed into the LSTM network, which uses gates to control cell states and update sequence outputs, to obtain long-range temporal features. This combination effectively extracts the spatio-temporal features in sign language videos and improves the recognition rate of sign language actions. An extensive experimental evaluation demonstrates the effectiveness and superior performance of the proposed method, with an accuracy of 85.26%, an F1-score of 84.98%, and a precision of 87.77% on the Argentine Sign Language dataset (LSA64).
(This article belongs to the Special Issue Recent Trends in Computer Vision with Neural Networks)