Advances of Intelligent Imaging Technology

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 September 2023) | Viewed by 2358

Special Issue Editors

School of Health Sciences, Swinburne University of Technology, Melbourne, VIC 3122, Australia
Interests: bioelectromagnetics; cell biophysics; computational physics; THz spectroscopy; deep learning and machine learning applications
Department of Information Engineering and Computer Science, University of Trento, 38122 Trento, Italy
Interests: ultrasound imaging; artificial intelligence; image processing; signal processing; system design

Special Issue Information

Dear Colleagues,

AI has revolutionized the domain of intelligent imaging. Every day, the scientific community encounters new AI applications that convert images into insight. Artificial neural networks play a key role in this new era, in which the power of AI has been unleashed alongside rapid progress in image sensing and acquisition. The AI subdomains of deep learning, machine learning and reinforcement learning, together with traditional computer vision, each provide unique architectures for building intelligent imaging applications.

The main objective of this Special Issue is to highlight innovations in AI architectures and applications for the domains of image acquisition, enhancement, analytics and interpretation.

Topics include, but are not limited to:

  • Image-based decision making in autonomous systems and self-driving cars;
  • Obstacle-avoidance systems and robot vision;
  • Image recognition;
  • Image content labelling and meta-tags;
  • Image content search;
  • Remote sensing, satellite imaging, data fusion and hyper-spectral analysis;
  • AI-assisted microscopy imaging and microscopy image analysis.

Dr. Alireza Lajevardipour
Dr. Sajjad Afrakhteh
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • reinforcement learning
  • computer vision
  • image processing
  • image restoration
  • image enhancement
  • image super-resolution
  • image labelling
  • image classification
  • medical imaging
  • microscopy imaging
  • computational imaging
  • multimodal imaging
  • satellite hyper-spectral imaging
  • thermal imaging
  • surveillance imaging
  • camera network
  • visual inspection
  • railway inspection

Published Papers (2 papers)


Research

21 pages, 7877 KiB  
Article
Supervised Video Cloth Simulation: Exploring Softness and Stiffness Variations on Fabric Types Using Deep Learning
by Makara Mao, Hongly Va, Ahyoung Lee and Min Hong
Appl. Sci. 2023, 13(17), 9505; https://doi.org/10.3390/app13179505 - 22 Aug 2023
Cited by 1 | Viewed by 1004
Abstract
Physically based cloth simulation requires a model that represents cloth as a collection of nodes connected by different types of constraints. In this paper, we present a coefficient prediction framework using a Deep Learning (DL) technique to enhance video summarization for such simulations. Our proposed model represents virtual cloth as interconnected nodes that are subject to various constraints. To ensure temporal consistency, we train the video coefficient prediction using Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and Transformer models. Our lightweight video coefficient network combines Convolutional Neural Networks (CNN) and a Transformer to capture both local and global contexts, thus enabling highly efficient prediction of keyframe importance scores for short-length videos. We evaluated our proposed model and found that it achieved an average accuracy of 99.01%. Specifically, the accuracy for the coefficient prediction of GRU was 20%, while LSTM achieved an accuracy of 59%. Our methodology leverages various cloth simulations that utilize a mass-spring model to generate datasets representing cloth movement, thus allowing for the accurate prediction of the coefficients for virtual cloth within physically based simulations. By taking specific material parameters as input, our model successfully outputs a comprehensive set of geometric and physical properties for each cloth instance. This innovative approach seamlessly integrates DL techniques with physically based simulations, and it therefore has a high potential for use in modeling complex systems.
(This article belongs to the Special Issue Advances of Intelligent Imaging Technology)
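The authors' lightweight CNN–Transformer network is not reproduced on this page; as a rough illustration of the general idea the abstract describes (convolutions for local context, self-attention for global context, a per-frame importance score), a minimal PyTorch sketch might look like the following. All layer sizes, the class name, and the feature dimensions are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class KeyframeScorer(nn.Module):
    """Toy CNN + Transformer scorer: one importance score in [0, 1] per frame.

    Input:  (batch, frames, feat_dim) per-frame feature vectors.
    Output: (batch, frames) importance scores.
    """
    def __init__(self, feat_dim=128, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        # 1D convolution along the time axis captures local context.
        self.local = nn.Conv1d(feat_dim, d_model, kernel_size=3, padding=1)
        # Self-attention layers capture global (whole-clip) context.
        enc = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                         batch_first=True)
        self.global_ctx = nn.TransformerEncoder(enc, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):
        h = self.local(x.transpose(1, 2)).transpose(1, 2)  # (B, T, d_model)
        h = self.global_ctx(h)
        return torch.sigmoid(self.head(h)).squeeze(-1)     # (B, T)

scorer = KeyframeScorer()
frames = torch.randn(2, 30, 128)  # 2 clips, 30 frames, 128-dim features
scores = scorer(frames)
print(scores.shape)               # torch.Size([2, 30])
```

In a summarization pipeline of this kind, the highest-scoring frames would be kept as keyframes; here the scores are untrained and meaningless, serving only to show the data flow.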

20 pages, 21653 KiB  
Article
Automatic Puncture Timing Detection for Multi-Camera Injection Motion Analysis
by Zhe Li, Aya Kanazuka, Atsushi Hojo, Takane Suzuki, Kazuyo Yamauchi, Shoichi Ito, Yukihiro Nomura and Toshiya Nakaguchi
Appl. Sci. 2023, 13(12), 7120; https://doi.org/10.3390/app13127120 - 14 Jun 2023
Viewed by 836
Abstract
Precisely detecting puncture times has long posed a challenge in medical education. This challenge is attributable not only to the subjective nature of human evaluation but also to the insufficiency of effective detection techniques, resulting in many medical students lacking full proficiency in injection skills upon entering clinical practice. To address this issue, we propose a novel detection method that enables automatic detection of puncture times during injection without needing wearable devices. In this study, we utilized a hardware system and the YOLOv7 algorithm to detect critical features of injection motion, including puncture time and injection depth parameters. We constructed a sample of 126 medical injection training videos of medical students, and skilled observers were employed to determine accurate puncture times. Our experimental results demonstrated that the mean puncture time of medical students was 2.264 s and the mean identification error was 0.330 s. Moreover, we confirmed that there was no significant difference (p = 0.25 with a significance level of α = 0.05) between the predicted value of the system and the ground truth, which provides a basis for the validity and reliability of the system. These results show our system’s ability to automatically detect puncture times and provide a novel approach for training healthcare professionals. At the same time, it provides a key technology for the future development of injection skill assessment systems.
(This article belongs to the Special Issue Advances of Intelligent Imaging Technology)
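The abstract's validity check (no significant difference between system-predicted and observer-annotated puncture times, p = 0.25 at α = 0.05) is a standard paired comparison. A sketch of how such a check can be run with SciPy follows; the timing arrays are made-up stand-ins, not the study's data, so the numbers printed will not match the paper's.

```python
from scipy import stats

# Hypothetical puncture times in seconds (one pair per training video).
observer  = [2.10, 2.35, 1.98, 2.50, 2.22, 2.41, 2.05, 2.30]
predicted = [2.18, 2.28, 2.10, 2.44, 2.35, 2.33, 2.12, 2.26]

# Mean absolute identification error, analogous to the 0.330 s reported.
mean_error = sum(abs(o - p) for o, p in zip(observer, predicted)) / len(observer)

# Paired t-test: is the mean predicted-vs-observed difference nonzero?
t_stat, p_value = stats.ttest_rel(observer, predicted)

alpha = 0.05
print(f"mean error: {mean_error:.3f} s, p = {p_value:.3f}")
if p_value > alpha:
    print("no significant difference at alpha = 0.05")
```

A paired test is the right choice here because each predicted time is matched to an observer annotation of the same video, so per-video variation cancels out of the comparison.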
