

Optical and Acoustical Methods for Biomedical Imaging and Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (29 February 2024) | Viewed by 6305

Special Issue Editors

Neurosurgery, Stanford University, Stanford, CA 94305, USA
Interests: fluorescence imaging; molecular imaging probes; image-guided surgery; optical imaging; photoacoustic imaging; biomedical imaging; imaging instrumentation; brain tumor; blood-brain barrier; clinical oncology; drug delivery and drug targeting; cancer theranosis; tumor microenvironment

Guest Editor
Electrical and Computer Engineering, University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
Interests: optical resonators for sensing applications; optical imaging systems and photoacoustic imaging for biomedical applications

Special Issue Information

Dear Colleagues,

Optical and acoustic methods have emerged as important tools in the biological sciences and clinical applications. These versatile biomedical detection approaches are attractive because they enable noninvasive tissue imaging at a relatively low cost of implementation. Recently, innovative methods and imaging systems with improved performance have been reported, and new applications have been developed that provide practical value in medicine and industry. In the era of precision and personalized medicine, advances in our understanding of biological and medical phenomena based on imaging and sensing technology have leveraged specific disease biomarkers for diagnosis and treatment.

This Special Issue therefore aims to showcase recent advances, technologies, solutions, applications, and new challenges in the field of biomedical imaging and sensing using optical and acoustic methods. Both reviews and original research articles will be published. Original research papers that focus on the design and experimental verification of new sensors and imaging systems operating in the optical and acoustic spectra, as well as papers that focus on their testing for biomedical and clinical applications, are welcome. Reviews should provide an up-to-date, well-balanced overview of the current state of the art in a particular application and include main results from other groups.

Research areas may include (but are not limited to) the following:

  • Imaging probes and sensors;
  • Imaging and sensing instrumentation;
  • Biomarkers for targeted imaging;
  • Micro/nanomaterials for imaging and sensing;
  • Biomaterials for imaging and sensing;
  • Clinical and medical imaging;
  • Image-guided surgery;
  • Photoacoustic/optoacoustic imaging;
  • Thermoacoustic imaging;
  • Ultrasound imaging;
  • Theranostic imaging;
  • Lifetime imaging;
  • Spectral imaging;
  • Fiber optic sensors;
  • Optical tomography;
  • Optical coherence tomography;
  • Raman sensing and imaging;
  • Image processing algorithms;
  • Computational sensing and imaging;
  • Signal processing, data fusion, and deep learning in imaging sensor systems;
  • Augmented reality and virtual reality in medicine.

I look forward to, and welcome, your participation in this Special Issue.

Dr. Quan Zhou
Dr. Sung-Liang Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biological sensors
  • sensing mechanisms
  • medical sensors
  • optoelectronic and photonic sensors
  • micro and nanosensors
  • smart/intelligent sensors
  • advanced materials for sensing
  • sensor devices
  • MEMS/NEMS
  • wearable sensor devices and electronics

Published Papers (5 papers)


Research


15 pages, 2143 KiB  
Article
COVID-Net L2C-ULTRA: An Explainable Linear-Convex Ultrasound Augmentation Learning Framework to Improve COVID-19 Assessment and Monitoring
by E. Zhixuan Zeng, Ashkan Ebadi, Adrian Florea and Alexander Wong
Sensors 2024, 24(5), 1664; https://doi.org/10.3390/s24051664 - 4 Mar 2024
Viewed by 657
Abstract
While no longer a public health emergency of international concern, COVID-19 remains an established and ongoing global health threat. As the global population continues to face significant negative impacts of the pandemic, there has been an increased usage of point-of-care ultrasound (POCUS) imaging as a low-cost, portable, and effective modality of choice in the COVID-19 clinical workflow. A major barrier to the widespread adoption of POCUS in the COVID-19 clinical workflow is the scarcity of expert clinicians who can interpret POCUS examinations, leading to considerable interest in artificial intelligence-driven clinical decision support systems to tackle this challenge. A major challenge to building deep neural networks for COVID-19 screening using POCUS is the heterogeneity in the types of probes used to capture ultrasound images (e.g., convex vs. linear probes), which can lead to very different visual appearances. In this study, we propose an analytic framework for COVID-19 assessment able to consume ultrasound images captured by linear and convex probes. We analyze the impact of leveraging extended linear-convex ultrasound augmentation learning on producing enhanced deep neural networks for COVID-19 assessment, where we conduct data augmentation on convex probe data alongside linear probe data that have been transformed to better resemble convex probe data. The proposed explainable framework, called COVID-Net L2C-ULTRA, employs an efficient deep columnar anti-aliased convolutional neural network designed via a machine-driven design exploration strategy. Our experimental results confirm that the proposed extended linear–convex ultrasound augmentation learning significantly increases performance, with a gain of 3.9% in test accuracy and 3.2% in AUC, 10.9% in recall, and 4.4% in precision. 
The proposed method also demonstrates a much more effective utilization of linear probe images through a 5.1% performance improvement in recall when such images are added to the training dataset, while all other methods show a decrease in recall when trained on the combined linear–convex dataset. We further verify the validity of the model by assessing, together with our contributing clinician, what the network considers to be the critical regions of an image.
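The core idea behind the augmentation described above, reshaping linear-probe data so that it better resembles convex-probe data, can be illustrated with a simple geometric warp from a rectangle to a fan-shaped sector. The sketch below is a minimal numpy illustration of that remapping, not the authors' implementation; the function name, the sector angle, and the nearest-neighbour sampling are all illustrative choices.

```python
import numpy as np

def linear_to_convex(img, angle_deg=60.0, out_size=128):
    """Warp a linear-probe (rectangular) ultrasound image into a fan-shaped
    sector that mimics convex-probe geometry. Input rows are treated as
    depth and columns as lateral position; each output pixel is mapped to
    (radius, angle) from an apex at the top centre and sampled from the
    input via nearest neighbour."""
    h, w = img.shape
    out = np.zeros((out_size, out_size), dtype=img.dtype)
    half = np.deg2rad(angle_deg) / 2.0
    cx = (out_size - 1) / 2.0                 # apex column
    for y in range(out_size):
        for x in range(out_size):
            dx, dy = x - cx, y                # vector from the apex
            r = np.hypot(dx, dy)
            if dy <= 0 or r >= out_size:
                continue                      # above the apex or too deep
            theta = np.arctan2(dx, dy)        # 0 = straight down
            if abs(theta) > half:
                continue                      # outside the sector
            src_row = int(r / out_size * (h - 1))                  # depth
            src_col = int((theta + half) / (2 * half) * (w - 1))   # lateral
            out[y, x] = img[src_row, src_col]
    return out
```

A practical pipeline would use vectorized coordinate grids and interpolation instead of the explicit loops, but the loop form keeps the geometry readable.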
(This article belongs to the Special Issue Optical and Acoustical Methods for Biomedical Imaging and Sensing)

13 pages, 2233 KiB  
Article
P-CSEM: An Attention Module for Improved Laparoscopic Surgical Tool Detection
by Herag Arabian, Tamer Abdulbaki Alshirbaji, Nour Aldeen Jalal, Sabine Krueger-Ziolek and Knut Moeller
Sensors 2023, 23(16), 7257; https://doi.org/10.3390/s23167257 - 18 Aug 2023
Cited by 1 | Viewed by 840
Abstract
Minimally invasive surgery, and laparoscopic surgery in particular, is an active research topic. The collaboration between surgeons and new technologies aims to improve operative procedures as well as to ensure patient safety. An integral part of operating room modernization is the real-time communication between the surgeon and the data gathered by the numerous devices used during surgery. A fundamental tool that can aid surgeons during laparoscopic surgery is the recognition of the different phases of an operation. Current research has shown a correlation between the surgical tools utilized and the present phase of surgery. To this end, a robust surgical tool classifier is desired for optimal performance. In this paper, a deep learning framework embedded with a custom attention module, the P-CSEM, has been proposed to refine the spatial features for surgical tool classification in laparoscopic surgery videos. This approach utilizes convolutional neural networks (CNNs) integrated with P-CSEM attention modules at different levels of the architecture for improved feature refinement. The model was trained and tested on the popular, publicly available Cholec80 database. Results showed that the attention-integrated model achieved a mean average precision of 93.14%, and visualizations revealed the model's ability to attend more strongly to tool-relevant features. The proposed approach displays the benefits of integrating attention modules into surgical tool classification models for more robust and precise detection.
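The abstract does not spell out the internals of the P-CSEM module, but spatial attention modules of this kind typically build on one gating pattern: pool the feature map across channels, turn the pooled maps into a saliency mask, and rescale every channel by that mask. The numpy sketch below shows that generic CBAM-style pattern under stated assumptions, not the actual P-CSEM design; in a trained network the saliency map would come from a learned convolution rather than a plain sum.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat):
    """Generic spatial attention gate over a C x H x W feature map:
    pool across channels (mean and max), squash the pooled saliency to
    (0, 1) with a sigmoid, and rescale every channel by that 2D map."""
    mean_pool = feat.mean(axis=0)               # H x W
    max_pool = feat.max(axis=0)                 # H x W
    saliency = sigmoid(mean_pool + max_pool)    # stand-in for a learned conv
    return feat * saliency[None, :, :]          # broadcast gate over channels
```

Because the gate is a per-location scalar in (0, 1), it suppresses background regions while leaving the channel structure of salient regions intact, which matches the "feature refinement" role the abstract describes.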

8 pages, 1098 KiB  
Communication
Photoacoustic Imaging of COVID-19 Vaccine Site Inflammation of Autoimmune Disease Patients
by Janggun Jo, David Mills, Aaron Dentinger, David Chamberland, Nada M. Abdulaziz, Xueding Wang, Elena Schiopu and Girish Gandikota
Sensors 2023, 23(5), 2789; https://doi.org/10.3390/s23052789 - 3 Mar 2023
Viewed by 1438
Abstract
Based on observations made in rheumatology clinics, autoimmune disease (AD) patients on immunosuppressive (IS) medications have variable vaccine site inflammation responses, whose study may help predict the long-term efficacy of the vaccine in this at-risk population. However, the quantitative assessment of the inflammation of the vaccine site is technically challenging. In this study of AD patients on IS medications and normal control subjects, we imaged the inflammation of the vaccine site 24 h after mRNA COVID-19 vaccination using both the emerging photoacoustic imaging (PAI) method and the established Doppler ultrasound (US) method. A total of 15 subjects were involved, including 6 AD patients on IS and 9 normal control subjects, and the results from the two groups were compared. Compared to the control subjects, the AD patients on IS medications showed statistically significant reductions in vaccine site inflammation, indicating that immunosuppressed AD patients also experience local inflammation after mRNA vaccination, though in a less clinically apparent manner than non-immunosuppressed non-AD individuals. Both PAI and Doppler US were able to detect mRNA COVID-19 vaccine-induced local inflammation. PAI, based on optical absorption contrast, shows better sensitivity in assessing and quantifying the spatially distributed inflammation in soft tissues at the vaccine site.

19 pages, 12166 KiB  
Article
Laparoscopic Video Analysis Using Temporal, Attention, and Multi-Feature Fusion Based-Approaches
by Nour Aldeen Jalal, Tamer Abdulbaki Alshirbaji, Paul David Docherty, Herag Arabian, Bernhard Laufer, Sabine Krueger-Ziolek, Thomas Neumuth and Knut Moeller
Sensors 2023, 23(4), 1958; https://doi.org/10.3390/s23041958 - 9 Feb 2023
Cited by 3 | Viewed by 1843
Abstract
Adapting intelligent context-aware systems (CAS) to future operating rooms (OR) aims to improve situational awareness and provide surgical decision support systems to medical teams. CAS analyzes data streams from available devices during surgery and communicates real-time knowledge to clinicians. Indeed, recent advances in computer vision and machine learning, particularly deep learning, have paved the way for extensive research to develop CAS. In this work, a deep learning approach for analyzing laparoscopic videos for surgical phase recognition, tool classification, and weakly supervised tool localization was proposed. The ResNet-50 convolutional neural network (CNN) architecture was adapted by adding attention modules and fusing features from multiple stages to generate better-focused, generalized, and well-representative features. Then, a multi-map convolutional layer followed by tool-wise and spatial pooling operations was utilized to perform tool localization and generate tool presence confidences. Finally, a long short-term memory (LSTM) network was employed to model temporal information and perform tool classification and phase recognition. The proposed approach was evaluated on the Cholec80 dataset. The experimental results (i.e., 88.5% mean precision and 89.0% mean recall for phase recognition, 95.6% mean average precision for tool presence detection, and a 70.1% F1-score for tool localization) demonstrated the ability of the model to learn discriminative features for all tasks. The performance revealed the importance of integrating attention modules and multi-stage feature fusion for more robust and precise detection of surgical phases and tools.
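The weakly supervised localization head described in the abstract (a multi-map convolutional layer followed by tool-wise and spatial pooling) can be sketched in a few lines. The numpy code below is an illustrative reconstruction under stated assumptions (K activation maps per tool, mean pooling over the maps of each tool, max pooling over space, sigmoid presence scores), not the authors' exact implementation.

```python
import numpy as np

def tool_presence_and_localization(multi_maps, n_tools):
    """Reduce a stack of per-tool activation maps to localization maps and
    presence confidences. multi_maps has shape (n_tools * K, H, W), where
    each consecutive group of K maps belongs to one tool."""
    k = multi_maps.shape[0] // n_tools
    h, w = multi_maps.shape[1:]
    # Tool-wise pooling: average the K maps of each tool into one map.
    loc_maps = multi_maps.reshape(n_tools, k, h, w).mean(axis=1)
    # Spatial pooling: take the strongest activation as the presence logit.
    presence = loc_maps.reshape(n_tools, -1).max(axis=1)
    confidences = 1.0 / (1.0 + np.exp(-presence))   # sigmoid scores
    return loc_maps, confidences
```

The appeal of this design is that only image-level presence labels are needed for training; the per-tool maps that feed the pooling double as localization heatmaps at inference time.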

Other


18 pages, 1688 KiB  
Perspective
Optical Measurement of Ligament Strain: Opportunities and Limitations for Intraoperative Application
by Christian Marx, Paul Wulff, Christian Fink and Daniel Baumgarten
Sensors 2023, 23(17), 7487; https://doi.org/10.3390/s23177487 - 28 Aug 2023
Viewed by 849
Abstract
A feasible and precise method to measure ligament strain during surgical interventions could significantly enhance the quality of ligament reconstructions. However, all existing scientific approaches to measuring in vivo ligament strain possess at least one significant disadvantage, such as impairment of the anatomical structure. Seeking a more advantageous method, this paper proposes defining medical and technical requirements for a non-destructive, optical measurement technique. Furthermore, we offer a comprehensive review of current optical endoscopic techniques that could potentially be suitable for in vivo ligament strain measurement, along with the most suitable optical measurement techniques. The most promising options are rated based on the defined explicit and implicit requirements. Three methods were identified as promising candidates for a precise optical measurement of the alteration of a ligament's strain: confocal chromatic imaging, shearography, and digital image correlation.
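Of the three candidates, digital image correlation is the easiest to illustrate in code: track the displacement of a textured patch between a reference and a deformed frame via normalized cross-correlation, then convert the change in spacing between tracked points into strain. The numpy example below is a toy sketch of that principle, not a clinically validated method; the function names and the brute-force search are illustrative.

```python
import numpy as np

def track_displacement(ref, cur, template_tl, size):
    """DIC-style tracking: cut a template from the reference frame and find
    its best match in the current frame by normalized cross-correlation.
    Returns the (dy, dx) shift of the best match, approximating local
    displacement. template_tl is the template's top-left (row, col)."""
    r0, c0 = template_tl
    tpl = ref[r0:r0 + size, c0:c0 + size].astype(float)
    tpl -= tpl.mean()
    best, best_pos = -np.inf, (r0, c0)
    h, w = cur.shape
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            win = cur[r:r + size, c:c + size].astype(float)
            win -= win.mean()
            denom = np.sqrt((tpl ** 2).sum() * (win ** 2).sum()) + 1e-12
            score = (tpl * win).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos[0] - r0, best_pos[1] - c0

def engineering_strain(l0, l1):
    """Engineering strain from reference and deformed marker spacing."""
    return (l1 - l0) / l0
```

Real DIC implementations use subpixel interpolation and local search windows rather than an exhaustive scan, but the correlation criterion is the same.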
