Real-Time Computer Vision

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 30 November 2025 | Viewed by 3298

Special Issue Editors


Dr. Hongbo Zhang
Guest Editor
Department of Engineering Technology, Middle Tennessee State University, Murfreesboro, TN 37132, USA
Interests: optical sensing; 3D imaging; robotics; computer vision

Dr. Wen-Jing Zhou
Guest Editor
Department of Precision Mechanical Engineering, Shanghai University, Shanghai 200444, China
Interests: digital holography

Dr. Yuyin Zhou
Guest Editor
Department of Computer Science and Engineering, University of California, Santa Cruz, CA 95064, USA
Interests: computer vision; machine learning; neural networks and artificial intelligence

Special Issue Information

Dear Colleagues,

We invite you to publish your work in this Special Issue on real-time computer vision. We welcome submissions in the following research areas:

  1. State-of-the-art computer vision methodology;
  2. Computer vision methods for real-time applications, such as robotics, medicine, security, manufacturing, and construction;
  3. Generative computer vision models;
  4. Other related emerging areas of computer vision research.

A successful Special Issue will strengthen our collective ability to transform research into meaningful applications that benefit society. We look forward to working with colleagues across the computer vision community to make it a success.

Dr. Hongbo Zhang
Dr. Wen-Jing Zhou
Dr. Yuyin Zhou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • real-time computer vision
  • computer vision algorithms
  • computer vision applications

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)

Research

25 pages, 8420 KiB  
Article
Design and Validation of Pet Care Teaching System Based on Augmented Reality
by Ting-Rui Li and Chi-Yi Tsai
Electronics 2025, 14(7), 1271; https://doi.org/10.3390/electronics14071271 - 24 Mar 2025
Viewed by 333
Abstract
As societal perceptions of pet ownership shift, an increasing number of individuals are choosing to keep pets, leading to various challenges. In Taiwan, the growing population of stray dogs and cats is largely attributed to insufficient education and inadequate management practices among pet owners, posing public health and safety concerns. This issue primarily stems from a lack of understanding regarding proper pet care. In response, awareness of animal protection and life education has been gaining traction, drawing attention to these concerns. To address this, the present study introduces an augmented reality (AR) pet care teaching system aimed at enhancing pet care knowledge through smartphones or tablets. Utilizing interactive AR technology, students are able to meet learning objectives related to pet care and foundational knowledge. This study adopts a quasi-experimental design and incorporates questionnaire surveys involving 61 college students and 8 teachers. The findings indicate that while both AR and traditional teaching methods are effective, the AR group exhibited superior learning outcomes. Furthermore, teacher feedback emphasized that the AR system fosters greater student engagement and significantly improves learning effectiveness.
(This article belongs to the Special Issue Real-Time Computer Vision)

21 pages, 12241 KiB  
Article
A Social Assistance System for Augmented Reality Technology to Redound Face Blindness with 3D Face Recognition
by Wen-Hau Jain, Bing-Gang Jhong and Mei-Yung Chen
Electronics 2025, 14(7), 1244; https://doi.org/10.3390/electronics14071244 - 21 Mar 2025
Viewed by 398
Abstract
The objective of this study is to develop an Augmented Reality (AR) visual aid system to help patients with prosopagnosia recognize faces in social situations and everyday life. The primary contribution of this study is the use of 3D face models as the basis of data augmentation for facial recognition, which has practical applications for the various social situations that patients with prosopagnosia find themselves in. The study comprises the following components: First, the affordances of Active Stereoscopy and stereo cameras were combined, and deep learning was employed to reconstruct a detailed 3D face model in real time based on data from the 3D point cloud and the 2D image. Data were also retrieved from seven angles of the subject's face to improve the accuracy of face recognition from the subject's profile and in a range of dynamic interactions. Second, the data derived from the first step were entered into a convolutional neural network (CNN), which then generated a 128-dimensional characteristic vector. Next, the system deployed Structured Query Language (SQL) to compute and compare Euclidean distances to determine the smallest Euclidean distance and match it to the name that corresponded to the face; tagged face data were projected by the camera onto the AR lenses. The findings of this study show that our AR system has a robustness of more than 99% in terms of face recognition. This method offers a higher practical value than traditional 2D face recognition methods when it comes to large-pose 3D face recognition in day-to-day life.
(This article belongs to the Special Issue Real-Time Computer Vision)
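
The pipeline above ends in a nearest-neighbor lookup: a 128-dimensional CNN embedding is compared against stored identities by smallest Euclidean distance. A minimal Python sketch of that matching step follows; the gallery contents, threshold, and names are hypothetical illustrations, and a plain in-memory dictionary stands in for the authors' SQL store:

```python
import numpy as np

rng = np.random.default_rng(42)

def unit(v: np.ndarray) -> np.ndarray:
    """Scale a vector to unit length."""
    return v / np.linalg.norm(v)

# Hypothetical gallery: name -> 128-dimensional embedding, standing in
# for the characteristic vectors the paper's CNN would produce.
gallery = {
    "alice": unit(rng.normal(size=128)),
    "bob": unit(rng.normal(size=128)),
}

def match_face(query: np.ndarray, threshold: float = 0.9) -> str | None:
    """Return the name whose stored embedding lies at the smallest
    Euclidean distance from the query, or None if none is close enough."""
    best = min(gallery, key=lambda name: np.linalg.norm(query - gallery[name]))
    return best if np.linalg.norm(query - gallery[best]) < threshold else None

# A noisy view of "alice" should match; an unrelated vector should not.
print(match_face(unit(gallery["alice"] + 0.05 * rng.normal(size=128))))  # alice
print(match_face(unit(rng.normal(size=128))))  # None (with high probability)
```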

28 pages, 7966 KiB  
Article
Real-Time Edge Computing vs. GPU-Accelerated Pipelines for Low-Cost Microscopy Applications
by Gloria Bueno, Lucia Sanchez-Vargas, Alberto Diaz-Maroto, Jesus Ruiz-Santaquiteria, Maria Blanco, Jesus Salido and Gabriel Cristobal
Electronics 2025, 14(5), 930; https://doi.org/10.3390/electronics14050930 - 26 Feb 2025
Viewed by 686
Abstract
Environmental microscopy is crucial for analyzing microorganisms, but traditional optical microscopes are often expensive, bulky, and impractical for field use. AI-driven image recognition, powered by deep learning models like YOLO, enhances microscopy analysis but typically requires high computational resources. To address these challenges, we present two cost-effective pipelines integrating AI with low-cost microscopes and edge computing. Both approaches use the OpenFlexure Microscope and Raspberry Pi devices. The first performs real-time inference with a Raspberry Pi 5 and Hailo-8L accelerator, while the second captures images with a Raspberry Pi 4, transferring them to a GPU-equipped desktop for processing. Using YOLOv8, we evaluate their ability to detect phytoplankton species, including cyanobacteria and diatoms. Results show that edge computing enables accurate, efficient, and low-power microscopy analysis, demonstrating its potential for real-time environmental monitoring in resource-limited settings.
(This article belongs to the Special Issue Real-Time Computer Vision)
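
As a rough sketch of the detection step shared by both pipelines, the snippet below runs a YOLOv8 model over captured frames with the ultralytics package. The weights file, image directory, and confidence threshold are assumptions for illustration, not the authors' trained model:

```python
from pathlib import Path

from ultralytics import YOLO  # pip install ultralytics

# Hypothetical weights fine-tuned on phytoplankton classes
# (cyanobacteria, diatoms, ...); the paper's model may differ.
model = YOLO("phytoplankton_yolov8n.pt")

# Frames captured by the low-cost microscope, saved as JPEGs.
for frame in sorted(Path("captures").glob("*.jpg")):
    results = model.predict(source=str(frame), conf=0.25, verbose=False)
    for box in results[0].boxes:
        label = model.names[int(box.cls)]
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"{frame.name}: {label} ({float(box.conf):.2f}) "
              f"at [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")
```

On the Hailo-8L pipeline the model would first be exported and compiled for the accelerator rather than run through PyTorch, but the surrounding capture-and-detect loop has the same shape.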

11 pages, 1849 KiB  
Article
Improved Segmentation of Cellular Nuclei Using UNET Architectures for Enhanced Pathology Imaging
by Simão Castro, Vitor Pereira and Rui Silva
Electronics 2024, 13(16), 3335; https://doi.org/10.3390/electronics13163335 - 22 Aug 2024
Cited by 1 | Viewed by 1286
Abstract
Medical imaging is essential for pathology diagnosis and treatment, enhancing decision making and reducing costs, but despite various computational methodologies proposed to improve imaging modalities, further optimization is needed for broader acceptance. This study explores deep learning (DL) methodologies for classifying and segmenting pathological imaging data, optimizing models to accurately predict and generalize from training to new data. Different CNN and U-Net architectures are implemented for segmentation tasks, with their performance evaluated on histological image datasets using enhanced pre-processing techniques such as resizing, normalization, and data augmentation. These are trained, parameterized, and optimized using metrics such as accuracy, the DICE coefficient, and intersection over union (IoU). The experimental results show that the proposed method improves the efficiency of cell segmentation compared with networks such as U-Net and W-UNet. The proposed pre-processing improved the IoU from 0.9077 to 0.9675 and the DICE coefficient from 0.9215 to 0.9916, improvements of about 7% in each case that surpass the results reported in the literature.
(This article belongs to the Special Issue Real-Time Computer Vision)
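
For reference, the two overlap metrics the abstract reports can be computed from binary masks as in this standard formulation (not the paper's exact code):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union: |A ∩ B| / |A ∪ B| for boolean masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:                 # both masks empty: count as perfect
        return 1.0
    return float(np.logical_and(pred, target).sum() / union)

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """DICE coefficient: 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0
    return float(2 * np.logical_and(pred, target).sum() / total)

p = np.array([[1, 1], [0, 0]])     # toy predicted mask
t = np.array([[1, 0], [0, 0]])     # toy ground-truth mask
print(iou(p, t), dice(p, t))       # 0.5 0.666...
```

DICE weights the intersection twice, so it is always at least as large as IoU for the same pair of masks, which is consistent with the abstract's DICE values exceeding its IoU values.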
