Special Issue "New Trends in Computer Vision, Deep Learning and Artificial Intelligence"

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 31 December 2022

Special Issue Editors

Dr. Xiaojiang Peng
Guest Editor
College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China
Interests: computer vision; affective computing; neural rendering; deep learning
Prof. Dr. Linlin Shen
Guest Editor
Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
Interests: biomedical image analysis
Prof. Dr. Yang You
Guest Editor
School of Computing, National University of Singapore, Singapore 119077, Singapore
Interests: machine learning; high performance computing; parallel and distributed systems; AI applications

Special Issue Information

Dear Colleagues,

In the past decade, deep learning algorithms have come to dominate speech processing, computer vision, and natural language processing, and AI applications have become ubiquitous in our daily lives. Given enough labeled data, a well-trained AI system can perform much better than humans on easy, repetitive, or well-defined tasks such as image recognition, face recognition, and translation. Identifying ways to extend AI capabilities to tasks with limited data and to other, more complex tasks will be particularly important for the next decade.

The purpose of this Special Issue is to gather a collection of articles reflecting new trends in computer vision, deep learning, and artificial intelligence. Topics include but are not limited to the following:

  1. Deep learning with limited data;
  2. Unsupervised deep learning technology;
  3. Deep learning for 3D vision;
  4. Neural rendering and its applications;
  5. Deep learning for efficient detection and segmentation;
  6. Deep learning for video understanding;
  7. Deep learning for language-vision tasks;
  8. Deep learning for visual affective computing;
  9. Deep learning for medical image analysis;
  10. Deep learning training acceleration;
  11. Big AI models;
  12. Industrial AI applications.

Dr. Xiaojiang Peng
Prof. Dr. Linlin Shen
Prof. Dr. Yang You
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • computer vision
  • video understanding
  • 3D vision
  • neural rendering
  • visual affective computing

Published Papers (2 papers)


Research

Article
OL-JCMSR: A Joint Coding Monitoring Strategy Recommendation Model Based on Operation Log
Mathematics 2022, 10(13), 2292; https://doi.org/10.3390/math10132292 - 30 Jun 2022
Abstract
A surveillance system with hundreds of cameras and far fewer monitors relies heavily on manual scheduling and inspection by monitoring personnel. This paper proposes a monitoring method that improves surveillance performance by analyzing and learning from a large number of manual operation logs. Compared with fixed rules or existing computer-vision methods, the proposed method learns more effectively from operators' behaviors and incorporates their intentions into the monitoring strategy. To the best of our knowledge, this is the first method to apply a monitoring-strategy recommendation model containing a global encoder and a local encoder in monitoring systems. The local encoder adaptively selects important items in the operating sequence to capture the operator's main purpose, while the global encoder summarizes the behavior of the entire sequence. Two experiments are conducted on two data sets. Compared with att-RNN and att-GRU, the joint coding model in experiment 1 improves the [email protected] by 9.4% and 4.6%, respectively, and improves the [email protected] by 5.49% and 3.86%, respectively. In experiment 2, compared with att-RNN and att-GRU, the joint coding model improves [email protected] by 11.8% and 6.2%, and [email protected] by 7.02% and 5.16%, respectively. The results illustrate the effectiveness of our model in monitoring systems.
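
To make the two-encoder design concrete, the following is a minimal PyTorch sketch of a sequence recommender with a global encoder (a whole-sequence summary) and an attention-based local encoder, in the spirit of the architecture described in the abstract. It is an illustration only, not the authors' implementation; the vocabulary size, hidden size, attention form, and output layer are assumptions.

  # A minimal sketch (not the authors' code) of a joint global/local encoder over an
  # operation-log sequence. OPS_VOCAB, HIDDEN and the attention form are assumptions.
  import torch
  import torch.nn as nn

  OPS_VOCAB = 500   # number of distinct operation/camera items (assumed)
  HIDDEN = 64       # hidden size (assumed)

  class JointCodingRecommender(nn.Module):
      def __init__(self, vocab=OPS_VOCAB, hidden=HIDDEN):
          super().__init__()
          self.emb = nn.Embedding(vocab, hidden, padding_idx=0)
          self.gru = nn.GRU(hidden, hidden, batch_first=True)
          # attention lets the local encoder pick out important items in the log
          self.att_query = nn.Linear(hidden, hidden, bias=False)
          self.att_key = nn.Linear(hidden, hidden, bias=False)
          self.att_score = nn.Linear(hidden, 1, bias=False)
          # score candidate monitoring strategies from the joint code
          self.out = nn.Linear(2 * hidden, vocab)

      def forward(self, ops):                     # ops: (batch, seq_len) int64
          h, _ = self.gru(self.emb(ops))          # (batch, seq, hidden)
          global_code = h[:, -1]                  # global encoder: whole-sequence summary
          # local encoder: attention over all steps, queried by the global code
          scores = self.att_score(torch.tanh(
              self.att_query(global_code).unsqueeze(1) + self.att_key(h)))
          alpha = torch.softmax(scores, dim=1)    # (batch, seq, 1)
          local_code = (alpha * h).sum(dim=1)     # weighted summary of key operations
          joint = torch.cat([global_code, local_code], dim=-1)
          return self.out(joint)                  # logits over next strategies

  # usage: a batch of 4 padded operation sequences of length 20
  logits = JointCodingRecommender()(torch.randint(1, OPS_VOCAB, (4, 20)))
  print(logits.shape)  # torch.Size([4, 500])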

Article
A Joint Learning Model to Extract Entities and Relations for Chinese Literature Based on Self-Attention
Mathematics 2022, 10(13), 2216; https://doi.org/10.3390/math10132216 - 24 Jun 2022
Abstract
Extracting structured information from massive and heterogeneous text is a hot research topic in natural language processing. It involves two key technologies: named entity recognition (NER) and relation extraction (RE). However, previous NER models give little consideration to how mutual attention between words in the text influences the prediction of entity labels, and there is little research on how to more fully extract sentence information for relation classification. In addition, previous research treats NER and RE as a pipeline of two separate tasks, which neglects the connection between them, and focuses mainly on English corpora. In this paper, based on the self-attention mechanism, the bidirectional long short-term memory (BiLSTM) neural network, and the conditional random field (CRF) model, we put forth a Chinese NER method based on BiLSTM-Self-Attention-CRF and an RE method based on BiLSTM-Multilevel-Attention in the field of Chinese literature. In particular, considering the relationship between these two tasks in terms of word-vector and context-feature representation in the neural network model, we put forth a joint learning method for the NER and RE tasks based on the same underlying module, which jointly updates the parameters of the shared module during the training of the two tasks. For performance evaluation, we use the largest Chinese data set containing both tasks. Experimental results show that the proposed independently trained NER and RE models achieve better performance than all previous methods, and our joint NER-RE training model outperforms the independently trained NER and RE models.
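
The joint-learning idea, i.e., a shared embedding and BiLSTM module updated by both the NER and the RE objectives, can be illustrated with a short PyTorch sketch. This is a simplified illustration only: the CRF layer and the multilevel attention described in the paper are reduced to a linear tag head and single-level sentence attention, and all vocabulary sizes and dimensions are assumptions.

  # A simplified sketch (not the authors' code) of joint NER/RE training with a shared
  # embedding + BiLSTM module; CRF and multilevel attention are replaced by a linear
  # tag head and single-level sentence attention, and all sizes are assumptions.
  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  VOCAB, TAGS, RELATIONS, HIDDEN = 3000, 9, 10, 128   # assumed sizes

  class SharedEncoder(nn.Module):
      """Character embedding + BiLSTM shared by both tasks."""
      def __init__(self):
          super().__init__()
          self.emb = nn.Embedding(VOCAB, HIDDEN, padding_idx=0)
          self.bilstm = nn.LSTM(HIDDEN, HIDDEN // 2, batch_first=True,
                                bidirectional=True)

      def forward(self, tokens):                  # tokens: (batch, seq)
          out, _ = self.bilstm(self.emb(tokens))
          return out                              # (batch, seq, HIDDEN)

  class JointNerRe(nn.Module):
      def __init__(self):
          super().__init__()
          self.encoder = SharedEncoder()
          self.ner_head = nn.Linear(HIDDEN, TAGS)       # per-token entity tag scores
          self.att = nn.Linear(HIDDEN, 1)               # sentence-level attention
          self.re_head = nn.Linear(HIDDEN, RELATIONS)   # relation classifier

      def forward(self, tokens):
          h = self.encoder(tokens)                      # shared representation
          ner_logits = self.ner_head(h)                 # (batch, seq, TAGS)
          alpha = torch.softmax(self.att(h), dim=1)     # attention over tokens
          sentence = (alpha * h).sum(dim=1)             # attended sentence vector
          re_logits = self.re_head(sentence)            # (batch, RELATIONS)
          return ner_logits, re_logits

  # joint training step: both losses update the shared encoder's parameters
  model = JointNerRe()
  tokens = torch.randint(1, VOCAB, (2, 30))
  ner_gold = torch.randint(0, TAGS, (2, 30))
  rel_gold = torch.randint(0, RELATIONS, (2,))
  ner_logits, re_logits = model(tokens)
  loss = (F.cross_entropy(ner_logits.reshape(-1, TAGS), ner_gold.reshape(-1))
          + F.cross_entropy(re_logits, rel_gold))
  loss.backward()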
