Artificial Intelligence in Computer Vision: Methods and Applications (2nd Edition)

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 20 December 2025

Special Issue Editors


Dr. Zhaoyang Wang
Guest Editor
Department of Mechanical Engineering, The Catholic University of America, Washington, DC 20064, USA
Interests: optics; mechanics; robotics; computer vision

Dr. Minh P. Vo
Guest Editor
Spree3D, Alameda, CA 94502, USA
Interests: computer vision; computational photography; machine learning

Dr. Hieu Nguyen
Guest Editor
Neuroimaging Research Branch, National Institute on Drug Abuse, National Institutes of Health, Baltimore, MD 21224, USA
Interests: computer vision; machine learning; deep learning; computer hardware; neuroimaging

Dr. John Hyatt
Guest Editor
U.S. Army Research Laboratory, 2201 Aberdeen Boulevard, Aberdeen, MD 21005, USA
Interests: machine learning

Special Issue Information

Dear Colleagues,

In recent years, there has been intense interest in the research and development of artificial intelligence techniques. At the same time, computer vision methods have been enhanced and extended to encompass an impressive range of novel sensors and measurement systems. As artificial intelligence spreads across almost all fields of science and engineering, computer vision remains one of its primary application areas. Notably, incorporating artificial intelligence into computer vision-based sensing and measurement techniques has enabled unprecedented performance in tasks such as high-accuracy object detection, image segmentation, human pose estimation, and real-time 3D sensing, which cannot be matched by conventional methods.

This Special Issue aims to cover the recent advancements in computer vision that involve using artificial intelligence methods, with a particular interest in sensors and sensing. Both original research and review articles are welcome. Typical topics include but are not limited to the following:

  • Physical, chemical, biological, and healthcare sensors and sensing techniques with deep learning approaches;
  • Localization, mapping, and navigation techniques with artificial intelligence;
  • Artificial intelligence-based recognition of objects, scenes, actions, faces, gestures, expressions, and emotions, as well as object relations and interactions;
  • 3D imaging and sensing with deep learning schemes;
  • Accurate learning with simulation datasets or with a small number of training labels for sensors and sensing;
  • Supervised and unsupervised learning for sensors and sensing;
  • Broad computer vision methods and applications that involve using deep learning or artificial intelligence.

Dr. Zhaoyang Wang
Dr. Minh P. Vo
Dr. Hieu Nguyen
Dr. John Hyatt
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • deep learning
  • computer vision
  • smart sensors
  • intelligent sensing
  • 3D imaging and sensing
  • localization and mapping
  • navigation and positioning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Editorial

17 pages, 11202 KiB  
Editorial
AI-Powered Visual Sensors and Sensing: Where We Are and Where We Are Going
by Hieu Nguyen, Minh Vo, John Hyatt and Zhaoyang Wang
Sensors 2025, 25(6), 1758; https://doi.org/10.3390/s25061758 - 12 Mar 2025
Abstract
Deep learning, a machine learning method that mimics the neural network structures of the human brain to process data, recognize patterns, and make decisions, traces its origins back to the 1950s [...]

Research

16 pages, 5435 KiB  
Article
PAPRec: 3D Point Cloud Reconstruction Based on Prior-Guided Adaptive Probabilistic Network
by Caixia Liu, Minhong Zhu, Yali Chen, Xiulan Wei and Haisheng Li
Sensors 2025, 25(5), 1354; https://doi.org/10.3390/s25051354 - 22 Feb 2025
Abstract
Inferring a complete 3D shape from a single-view image is an ill-posed problem. Existing methods often suffer from insufficient feature expression, unstable training, and limited constraints, resulting in low-accuracy and ambiguous reconstructions. To address these problems, we propose a prior-guided adaptive probabilistic network for single-view 3D reconstruction, called PAPRec. In the training stage, PAPRec encodes a single-view image and its corresponding 3D prior into an image feature distribution and a point cloud feature distribution, respectively. PAPRec then utilizes a latent normalizing flow to fit the two distributions and obtains a latent vector with rich cues. PAPRec finally introduces an adaptive probabilistic network, consisting of a shape normalizing flow and a diffusion model, to decode the latent vector into a complete 3D point cloud. Unlike existing methods, PAPRec fully learns the global and local features of objects by integrating 3D prior guidance and the adaptive probabilistic network under a loss function combining prior, flow, and diffusion losses. Experimental results on the public ShapeNet dataset show that PAPRec, on average, improves CD by 2.62%, EMD by 5.99%, and F1 by 4.41% in comparison to several state-of-the-art methods.
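The Chamfer distance (CD) used to benchmark reconstructions like PAPRec's has a simple nearest-neighbor form. The sketch below is a generic illustration of the metric, not the paper's implementation:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3):
    for each point in one cloud, take the squared distance to its nearest
    neighbor in the other cloud, then average over both directions."""
    # Pairwise squared distances via broadcasting, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

A lower value means the reconstructed cloud sits closer to the ground truth; identical clouds score zero.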

19 pages, 8290 KiB  
Article
Multi-Scale Contrastive Learning with Hierarchical Knowledge Synergy for Visible-Infrared Person Re-Identification
by Yongheng Qian and Su-Kit Tang
Sensors 2025, 25(1), 192; https://doi.org/10.3390/s25010192 - 1 Jan 2025
Abstract
Visible-infrared person re-identification (VI-ReID) is a challenging cross-modality retrieval task to match a person across different spectral camera views. Most existing works focus on learning shared feature representations from the final embedding space of advanced networks to alleviate modality differences between visible and infrared images. However, exclusively relying on high-level semantic information from the network’s final layers can restrict shared feature representations and overlook the benefits of low-level details. Different from these methods, we propose a multi-scale contrastive learning network (MCLNet) with hierarchical knowledge synergy for VI-ReID. MCLNet is a novel two-stream contrastive deep supervision framework designed to train low-level details and high-level semantic representations simultaneously. MCLNet utilizes supervised contrastive learning (SCL) at each intermediate layer to strengthen visual representations and enhance cross-modality feature learning. Furthermore, a hierarchical knowledge synergy (HKS) strategy for pairwise knowledge matching promotes explicit information interaction across multi-scale features and improves information consistency. Extensive experiments on three benchmarks demonstrate the effectiveness of MCLNet.
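The supervised contrastive learning (SCL) objective referenced above follows the standard SupCon formulation: for each anchor, all other samples sharing its label are positives. The sketch below is a simplified, generic version of that loss, not the authors' code:

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Simplified supervised contrastive loss. features: (n, d) array,
    labels: (n,) integer array. Embeddings are L2-normalized; each anchor
    is pulled toward samples with the same label and pushed from the rest."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature          # pairwise scaled cosine similarities
    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        positives = (labels == labels[i]) & (np.arange(n) != i)
        if not positives.any():
            continue                     # anchor has no positive pair
        logits = np.delete(sim[i], i)    # drop self-similarity
        pos_mask = np.delete(positives, i)
        log_prob = logits - np.log(np.exp(logits).sum())  # log-softmax
        total += -log_prob[pos_mask].mean()
        count += 1
    return total / count
```

Embeddings where same-label samples cluster together yield a lower loss than embeddings where they are scattered, which is the property the deep supervision exploits at each intermediate layer.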

27 pages, 12241 KiB  
Article
SURABHI: Self-Training Using Rectified Annotations-Based Hard Instances for Eidetic Cattle Recognition
by Manu Ramesh and Amy R. Reibman
Sensors 2024, 24(23), 7680; https://doi.org/10.3390/s24237680 - 30 Nov 2024
Abstract
We propose a self-training scheme, SURABHI, that trains deep-learning keypoint detection models on machine-annotated instances, together with the methodology to generate those instances. SURABHI aims to improve the keypoint detection accuracy not by altering the structure of a deep-learning-based keypoint detector model but by generating highly effective training instances. The machine-annotated instances used in SURABHI are hard instances: instances that require a rectifier to correct the keypoints misplaced by the keypoint detection model. We engineer this scheme for the task of predicting keypoints of cattle from the top, in conjunction with our Eidetic Cattle Recognition System, which depends on accurate keypoint prediction to identify the correct cow ID. We show that the final cow ID prediction accuracy on previously unseen cows improves significantly after applying SURABHI to a high-capacity deep-learning detection model, especially when available training data are minimal. SURABHI helps us achieve a top-6 cow recognition accuracy of 91.89% on a dataset of cow videos. Using SURABHI on this dataset also improves the number of cow instances with correct identification by 22% over the baseline result from fully supervised training.
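The self-training idea described above (machine annotation plus a rectifier for hard instances) can be outlined generically. The `model`, `rectify`, and round structure below are illustrative assumptions, not the paper's implementation:

```python
def self_train(model, labeled, unlabeled, rectify, rounds=3):
    """Generic self-training loop: train on the labeled set, let the model
    annotate unlabeled instances, pass each machine annotation through a
    rectifier that corrects misplaced keypoints, and fold the corrected
    hard instances back into the next round of training."""
    pseudo = []                               # machine-annotated instances
    for _ in range(rounds):
        model.fit(labeled + pseudo)           # retrain on real + rectified labels
        pseudo = []
        for x in unlabeled:
            fixed = rectify(x, model.predict(x))  # correct the raw prediction
            if fixed is not None:             # discard unusable instances
                pseudo.append((x, fixed))
    return model
```

The key design choice mirrored here is that the detector's architecture is untouched; only the training set grows with rectified machine annotations.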

22 pages, 12107 KiB  
Article
Deep Learning-Based Classification of Macrofungi: Comparative Analysis of Advanced Models for Accurate Fungi Identification
by Sifa Ozsari, Eda Kumru, Fatih Ekinci, Ilgaz Akata, Mehmet Serdar Guzel, Koray Acici, Eray Ozcan and Tunc Asuroglu
Sensors 2024, 24(22), 7189; https://doi.org/10.3390/s24227189 - 9 Nov 2024
Abstract
This study focuses on the classification of six different macrofungi species using advanced deep learning techniques. The species Amanita pantherina, Boletus edulis, Cantharellus cibarius, Lactarius deliciosus, Pleurotus ostreatus, and Tricholoma terreum were chosen based on their ecological importance and distinct morphological characteristics. The research employed 5 different machine learning techniques and 12 deep learning models, including DenseNet121, MobileNetV2, ConvNeXt, EfficientNet, and Swin transformers, to evaluate their performance in identifying fungi from images. The DenseNet121 model demonstrated the highest accuracy (92%) and AUC score (95%), making it the most effective in distinguishing between species. The study also revealed that transformer-based models, particularly the Swin transformer, were less effective, suggesting room for improvement in their application to this task. Further advancements in macrofungi classification could be achieved by expanding datasets, incorporating additional data types such as biochemical, electron microscopy, and RNA/DNA sequence data, and using ensemble methods to enhance model performance. The findings contribute valuable insights into both the use of deep learning for biodiversity research and the ecological conservation of macrofungi species.
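The AUC metric reported above has a simple rank-based definition in the binary case (multi-class settings such as this six-species task typically average one-vs-rest scores). A minimal sketch, independent of any particular model:

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive is scored above a randomly chosen negative,
    with ties counting half. labels are 0/1; scores are model outputs."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means every positive outranks every negative; 0.5 corresponds to chance-level ranking.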

23 pages, 1025 KiB  
Article
Adversarial Examples on XAI-Enabled DT for Smart Healthcare Systems
by Niddal H. Imam
Sensors 2024, 24(21), 6891; https://doi.org/10.3390/s24216891 - 27 Oct 2024
Abstract
There have recently been rapid developments in smart healthcare systems, such as precision diagnosis, smart diet management, and drug discovery. These systems require the integration of the Internet of Things (IoT) for data acquisition, Digital Twins (DT) for representing data in a digital replica, and Artificial Intelligence (AI) for decision-making. A DT is a digital copy or replica of a physical entity (e.g., a patient) and is one of the emerging technologies that enable the advancement of smart healthcare systems. AI and Machine Learning (ML) offer great benefits to DT-based smart healthcare systems, but they also pose certain risks, including security risks, and raise issues of fairness, trustworthiness, explainability, and interpretability. One of the challenges that still makes the full adoption of AI/ML in healthcare questionable is the explainability of AI (XAI) and the interpretability of ML (IML). Although the explainability and interpretability of AI/ML are now actively studied, there is a lack of research on the security of XAI-enabled DT for smart healthcare systems; existing studies limit their focus to either the security of XAI or of DT. This paper provides a brief overview of the research on the security of XAI-enabled DT for smart healthcare systems, explores potential adversarial attacks against them, and proposes a framework for designing XAI-enabled DT for smart healthcare systems that are secure and trusted.
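A canonical instance of the adversarial attacks discussed above is the Fast Gradient Sign Method (FGSM). The sketch below is a textbook illustration, assuming the attacker can obtain the loss gradient with respect to the input; it is not taken from the paper:

```python
import numpy as np

def fgsm(x, grad, epsilon=0.05):
    """Fast Gradient Sign Method: perturb each input feature by epsilon
    in the direction that increases the model's loss, then clip back to
    the valid input range [0, 1]. x and grad are same-shaped arrays."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)
```

Even such a one-step perturbation, imperceptible for small epsilon, can flip the predictions and the explanations of an undefended model, which motivates the security framework the paper proposes.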
