
Special Issue "Image and Signal Processing for Biomedical Applications"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: 23 December 2022 | Viewed by 9303

Special Issue Editor

Dr. Christoph Hintermüller
Guest Editor
Institute of Biomedical Mechatronics, Johannes Kepler University, 4040 Linz, Austria
Interests: biosignal processing; cardiac electrophysiology; 3D imaging

Special Issue Information

Recording information from the human body by measuring signals and taking images is important throughout the entire clinical process, covering anamnesis, diagnosis, therapy, and treatment. In addition to the proper recording, preprocessing, and pre-analysis of signals and information from the patient, the fusion of quantitative data and qualitative information also plays an important role. The field of biomedical imaging and signal processing has been, and still is, open to new developments in other disciplines and fields such as physics and chemistry, independent of how remote these may first appear; this is highlighted by the example of the Kinect and other kinds of devices originally developed for gaming rather than imaging.

This Special Issue focuses on recent developments in the fields of biomedical, medical, and clinical image and signal processing. These include new sensing methods, approaches to analyzing the recorded images and signals, data fusion methods, and algorithms that yield new and additional insights, as well as how they help to improve clinical processes and free clinicians and doctors to spend more time in direct contact with their patients rather than interpreting the recorded data and signals.

Dr. Christoph Hintermüller
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensing principles
  • sensors
  • image and signal processing
  • clinical applications
  • data fusion
  • image and signal analysis
  • image and signal classification

Published Papers (10 papers)


Research

Article
EIEN: Endoscopic Image Enhancement Network Based on Retinex Theory
Sensors 2022, 22(14), 5464; https://doi.org/10.3390/s22145464 - 21 Jul 2022
Viewed by 368
Abstract
In recent years, deep convolutional neural network (CNN)-based image enhancement has shown outstanding performance. However, due to the uneven illumination and low contrast of endoscopic images, medical endoscopic image enhancement with CNNs remains an exploratory and challenging task. An endoscopic image enhancement network (EIEN) based on Retinex theory is proposed in this paper to solve these problems. The structure consists of three parts: a decomposition network, an illumination correction network, and a reflection component enhancement algorithm. First, the decomposition network model of the pre-trained Retinex-Net is retrained on the endoscopic image dataset, and the images are then decomposed into illumination and reflection components by this decomposition network. Second, the illumination components are corrected by the proposed self-attention guided multi-scale pyramid structure. The pyramid structure is used to capture the multi-scale information of the image. The self-attention mechanism is based on the imaging characteristics of endoscopic images: the inverse image of the illumination component is fused with the features of the green and blue channels of the image to be enhanced, generating a weight map that reassigns weights along the spatial dimension of the feature map. This avoids the loss of detail during multi-scale feature fusion and image reconstruction by the network. The reflection component enhancement is achieved by sub-channel stretching and weighted fusion, which enhances the vascular information and image contrast. Finally, the enhanced illumination and reflection components are multiplied to obtain the reconstructed image. We compare the results of the proposed method with six other methods on a test set. The experimental results show that EIEN enhances the brightness and contrast of endoscopic images and highlights vascular and tissue information. The proposed method also obtained the best results in terms of visual perception and objective evaluation. Full article
(This article belongs to the Special Issue Image and Signal Processing for Biomedical Applications)
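
As a rough illustration of the Retinex-based recombination described in the abstract, the following Python sketch shows how an image decomposed into reflection and illumination components could be corrected, enhanced, and multiplied back together. The decompose, correct_illumination, and enhance_reflection callables are placeholders standing in for the paper's networks and stretching step, not the authors' code.

    import numpy as np

    def retinex_enhance(image, decompose, correct_illumination, enhance_reflection):
        # Decompose the image into reflection (R) and illumination (L) components.
        reflection, illumination = decompose(image)
        # Correct the illumination component (stand-in for the pyramid network).
        illumination_corrected = correct_illumination(illumination)
        # Enhance the reflection component (stand-in for sub-channel stretching).
        reflection_enhanced = enhance_reflection(reflection)
        # Reconstruct the enhanced image by element-wise multiplication.
        return np.clip(reflection_enhanced * illumination_corrected, 0.0, 1.0)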

Article
Aberrated Multidimensional EEG Characteristics in Patients with Generalized Anxiety Disorder: A Machine-Learning Based Analysis Framework
Sensors 2022, 22(14), 5420; https://doi.org/10.3390/s22145420 - 20 Jul 2022
Viewed by 302
Abstract
Although increasing evidence supports the notion that psychiatric disorders are associated with abnormal communication between brain regions, only scattered studies have investigated brain electrophysiological disconnectivity in patients with generalized anxiety disorder (GAD). To this end, this study aims to develop an analysis framework for automatic GAD detection by incorporating multidimensional EEG feature extraction and machine learning techniques. Specifically, resting-state EEG signals with a duration of 10 min were obtained from 45 patients with GAD and 36 healthy controls (HC). Then, an analysis framework of multidimensional EEG characteristics (including univariate power spectral density (PSD) and fuzzy entropy (FE), and multivariate functional connectivity (FC), which can decode the EEG information from three different dimensions) was introduced for extracting aberrated multidimensional EEG features via statistical inter-group comparisons. These aberrated features were subsequently fused and fed into three previously validated machine learning methods to evaluate classification performance for automatic patient detection. We showed that patients exhibited a significant increase in the beta rhythm and a decrease in the alpha1 rhythm of the PSD, together with reduced long-range FC between the frontal and other brain areas in all frequency bands. Moreover, these aberrated features contributed to a very good classification performance, with an accuracy of 97.83 ± 0.40%, sensitivity of 97.55 ± 0.31%, specificity of 97.78 ± 0.36%, and F1 score of 97.95 ± 0.17%. These findings corroborate the previous hypothesis of disconnectivity in psychiatric disorders and further shed light on the distribution patterns of aberrant spatio-spectral EEG characteristics, which may enable automatic diagnosis of GAD. Full article
(This article belongs to the Special Issue Image and Signal Processing for Biomedical Applications)
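
A minimal sketch of the general recipe behind such a pipeline, band-power features extracted with Welch's method and fed to a standard classifier, is shown below. The sampling rate, frequency bands, channel count, and the SVM classifier are assumptions for illustration; the paper additionally uses fuzzy entropy and functional connectivity features and other classifiers.

    import numpy as np
    from scipy.signal import welch
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    FS = 250                                          # assumed sampling rate in Hz
    BANDS = {"alpha1": (8, 10), "beta": (13, 30)}     # example frequency bands only

    def band_power_features(eeg):
        # eeg: (n_channels, n_samples); returns mean band power per channel and band.
        freqs, psd = welch(eeg, fs=FS, nperseg=2 * FS, axis=-1)
        feats = []
        for lo, hi in BANDS.values():
            mask = (freqs >= lo) & (freqs < hi)
            feats.append(psd[:, mask].mean(axis=-1))
        return np.concatenate(feats)

    # Random toy data standing in for resting-state recordings; shapes only, not real EEG.
    rng = np.random.default_rng(0)
    X = np.vstack([band_power_features(rng.standard_normal((19, 60 * FS))) for _ in range(20)])
    y = np.array([1] * 10 + [0] * 10)                 # 1 = GAD, 0 = healthy control
    print(cross_val_score(SVC(), X, y, cv=5).mean())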

Article
ECG Classification Using Orthogonal Matching Pursuit and Machine Learning
Sensors 2022, 22(13), 4960; https://doi.org/10.3390/s22134960 - 30 Jun 2022
Viewed by 356
Abstract
Health monitoring and related technologies are a rapidly growing area of research. To date, the electrocardiogram (ECG) remains a popular measurement tool in the evaluation and diagnosis of heart disease. The number of solutions involving ECG signal monitoring systems is growing exponentially in the literature. In this article, the often-underestimated Orthogonal Matching Pursuit (OMP) algorithm is used, demonstrating the significant effect of concise representation parameters on improving the performance of the classification process. Cardiovascular disease classification models based on classical Machine Learning classifiers were defined and investigated. The study was undertaken on the recently published PTB-XL database, whose ECG signals were previously subjected to detailed analysis. Classification was performed for 2, 5, and 15 classes of cardiac disease. A new method of detecting R-waves and, based on them, determining the location of QRS complexes was presented. Novel methods for aggregating ECG signal fragments containing QRS segments, required as input for the classical classifiers, were developed. As a result, it was shown that an ECG signal subjected to R-wave detection, QRS complex extraction, and resampling performs very well in classification using Decision Trees. The reason can be found in the structuring of the signal introduced by these steps. The classification achieved its highest accuracy of 90.4% in recognizing 2 classes, compared with less than 78% for 5 classes and 71% for 15 classes. Full article
(This article belongs to the Special Issue Image and Signal Processing for Biomedical Applications)
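
The sparse-coding step can be illustrated with scikit-learn's OrthogonalMatchingPursuit: each extracted QRS segment is approximated by a few atoms of a dictionary, and the sparse coefficients serve as the concise representation passed to a Decision Tree. The random dictionary and toy beats below are placeholders; the paper's dictionary construction and parameters may differ.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit
    from sklearn.tree import DecisionTreeClassifier

    def omp_features(segments, dictionary, n_nonzero=8):
        # segments: (n_beats, n_samples); dictionary: (n_samples, n_atoms).
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
        codes = []
        for seg in segments:
            omp.fit(dictionary, seg)          # solve seg ~ dictionary @ coef with sparse coef
            codes.append(omp.coef_.copy())
        return np.array(codes)                # sparse codes used as classifier input

    rng = np.random.default_rng(0)
    dictionary = rng.standard_normal((200, 64))   # toy dictionary (placeholder)
    beats = rng.standard_normal((50, 200))        # toy resampled QRS segments
    labels = rng.integers(0, 2, size=50)
    clf = DecisionTreeClassifier().fit(omp_features(beats, dictionary), labels)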

Article
A Novel Method for Baroreflex Sensitivity Estimation Using Modulated Gaussian Filter
Sensors 2022, 22(12), 4618; https://doi.org/10.3390/s22124618 - 18 Jun 2022
Viewed by 451
Abstract
The evaluation of baroreflex sensitivity (BRS) has proven to be critical for medical applications. The use of α indices obtained by spectral methods has been the most popular approach to BRS estimation. Recently, an algorithm termed Gaussian average filtering decomposition (GAFD) has been proposed to serve the same purpose. GAFD adopts a three-layer tree structure similar to wavelet decomposition but is constructed only from Gaussian windows with different cutoff frequencies. Its computation is more efficient than that of conventional spectral methods, and there is no need to specify any parameter. This research presents a novel approach, referred to as the modulated Gaussian filter (modGauss), for BRS estimation. It has a simpler structure than GAFD, using only two bandpass filters with dedicated passbands, so that the three-level structure of GAFD is avoided. This strategy makes modGauss more efficient than GAFD in computation, while the advantages of GAFD are preserved. Both GAFD and modGauss operate entirely in the time domain, yet achieve results similar to those of conventional spectral methods. In computational simulations, the EuroBavar dataset was used to assess the performance of the novel algorithm. The BRS values were calculated by four other methods (three spectral approaches and GAFD) for performance comparison. From a comparison using the Wilcoxon rank sum test, it was found that there was no statistically significant dissimilarity; instead, very good agreement was observed using the intraclass correlation coefficient (ICC). The modGauss algorithm was also found to be the fastest in computation time and suitable for the long-term estimation of BRS. The novel algorithm, as described in this report, can be applied in medical equipment for real-time estimation of BRS in clinical settings. Full article
(This article belongs to the Special Issue Image and Signal Processing for Biomedical Applications)
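
The following sketch illustrates the underlying idea of a cosine-modulated Gaussian window acting as a bandpass filter, with an alpha-style BRS index computed as the ratio of band-limited RR-interval and systolic pressure variability. The centre frequency, bandwidth, and ratio-of-standard-deviations estimate are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def modulated_gaussian_kernel(center_hz, sigma_s, fs, length_s=60.0):
        # Gaussian window modulated by a cosine: a Gabor-like bandpass centred at center_hz.
        t = np.arange(-length_s / 2, length_s / 2, 1.0 / fs)
        return np.exp(-0.5 * (t / sigma_s) ** 2) * np.cos(2 * np.pi * center_hz * t)

    def alpha_like_brs(rr_ms, sbp_mmhg, fs=4.0, center_hz=0.1, sigma_s=5.0):
        # Band-limit both evenly resampled series, then take the gain ratio (ms/mmHg).
        k = modulated_gaussian_kernel(center_hz, sigma_s, fs)
        rr_band = np.convolve(rr_ms - rr_ms.mean(), k, mode="same")
        sbp_band = np.convolve(sbp_mmhg - sbp_mmhg.mean(), k, mode="same")
        return np.std(rr_band) / np.std(sbp_band)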

Article
2D Gait Skeleton Data Normalization for Quantitative Assessment of Movement Disorders from Freehand Single Camera Video Recordings
Sensors 2022, 22(11), 4245; https://doi.org/10.3390/s22114245 - 2 Jun 2022
Viewed by 475
Abstract
Overlapping phenotypic features between Early Onset Ataxia (EOA) and Developmental Coordination Disorder (DCD) can complicate the clinical distinction of these disorders. Clinical rating scales are a common way to quantify movement disorders, but in children these scales also rely on the observer’s assessment and interpretation. Despite the introduction of inertial measurement units for objective and more precise evaluation, special hardware is still required, restricting their widespread application. Gait video recordings of movement disorder patients are frequently captured in routine clinical settings, but there is presently no suitable quantitative analysis method for these recordings. Owing to advancements in computer vision technology, deep learning pose estimation techniques may soon be ready for convenient and low-cost clinical usage. This study presents a framework based on 2D video recording in the coronal plane and pose estimation for the quantitative assessment of gait in movement disorders. To allow the calculation of distance-based features, seven different methods were evaluated for normalizing 2D skeleton keypoint data obtained by applying deep neural network pose estimation to freehand video recordings of gait. In our experiments, 15 children (five EOA, five DCD, and five healthy controls) were asked to walk naturally while being videotaped by a single camera at 1280 × 720 resolution and 25 frames per second. The high prediction likelihood of the keypoint locations (mean = 0.889, standard deviation = 0.02) demonstrates the potential for distance-based features derived from routine video recordings to assist in the clinical evaluation of movement in EOA and DCD. Based on a comparison of the mean absolute angle error and the mean variance of distance, the normalization methods using the Euclidean (2D) distance between the left shoulder and right hip, or the average of the left-shoulder-to-right-hip and right-shoulder-to-left-hip distances, were found to perform better for deriving distance-based features and for further quantitative assessment of movement disorders. Full article
(This article belongs to the Special Issue Image and Signal Processing for Biomedical Applications)
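
One of the normalization schemes compared in the paper, scaling keypoints by the left-shoulder-to-right-hip distance, can be sketched as below. The COCO-style keypoint indices and the hip-midpoint origin are assumptions for illustration, not the paper's exact convention.

    import numpy as np

    # Hypothetical COCO-style keypoint indices; the paper's layout may differ.
    L_SHOULDER, R_SHOULDER, L_HIP, R_HIP = 5, 6, 11, 12

    def normalize_frame(keypoints):
        # keypoints: (n_joints, 2) pixel coordinates from pose estimation for one frame.
        origin = keypoints[[L_HIP, R_HIP]].mean(axis=0)               # hip midpoint as origin
        scale = np.linalg.norm(keypoints[L_SHOULDER] - keypoints[R_HIP])
        return (keypoints - origin) / scale                           # makes distance-based features scale-free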

Article
Study of the Few-Shot Learning for ECG Classification Based on the PTB-XL Dataset
Sensors 2022, 22(3), 904; https://doi.org/10.3390/s22030904 - 25 Jan 2022
Cited by 4 | Viewed by 1344
Abstract
The electrocardiogram (ECG) is considered a fundamental tool of cardiology. The ECG consists of P, QRS, and T waves. Information derived from the intervals and amplitudes of these waves is associated with various heart diseases. The first step in isolating the features of an ECG is the accurate detection of the R-peaks in the QRS complex. The study was based on the PTB-XL database, and the signals from all 12 leads were analyzed. This research focuses on determining the applicability of Few-Shot Learning (FSL) to proximity-based classification of ECG signals. The study was conducted by training Deep Convolutional Neural Networks to recognize 2, 5, and 20 different heart disease classes. The results of the FSL network were compared with the evaluation score of a neural network performing softmax-based classification. The neural network proposed for this task interprets a set of QRS complexes extracted from ECG signals. The FSL network proved to have higher accuracy in classifying healthy/sick patients (89.2–93.2%) than the softmax-based classification network (89.2–90.5%). The proposed network also achieved better results in classifying five different disease classes than its softmax-based counterpart, with an accuracy of 77.9–80.2% as opposed to 75.1–77.1%. In addition, a method of R-peak labeling and QRS complex extraction has been implemented. This procedure converts a 12-lead signal into a set of R waves by using detection algorithms and the k-means algorithm. Full article
(This article belongs to the Special Issue Image and Signal Processing for Biomedical Applications)
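
Proximity-based (few-shot) classification typically reduces to comparing embeddings against class prototypes rather than applying a softmax output layer. The sketch below shows that decision rule; the embed callable stands in for the trained embedding CNN and is a placeholder, not the authors' network.

    import numpy as np

    def prototype_classify(embed, support_x, support_y, query_x):
        # Embed support and query QRS complexes with the same network.
        z_support, z_query = embed(support_x), embed(query_x)
        classes = np.unique(support_y)
        # One prototype per class: the mean embedding of its support examples.
        protos = np.stack([z_support[support_y == c].mean(axis=0) for c in classes])
        # Assign each query to the class of the nearest prototype (Euclidean distance).
        dists = np.linalg.norm(z_query[:, None, :] - protos[None, :, :], axis=-1)
        return classes[np.argmin(dists, axis=1)]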

Article
Quantification of the Link between Timed Up-and-Go Test Subtasks and Contractile Muscle Properties
Sensors 2021, 21(19), 6539; https://doi.org/10.3390/s21196539 - 30 Sep 2021
Viewed by 909
Abstract
Frailty and falls are a major public health problem in older adults. Muscle weakness of the lower and upper extremities is a risk factor for any fall, as well as for recurrent falls, including those resulting in injuries and fractures. While the Timed Up-and-Go (TUG) test is often used to identify frail individuals and fallers, tensiomyography (TMG) can be used as a non-invasive tool to assess the function of skeletal muscles. In a clinical study, we evaluated the correlation between the TMG parameters of skeletal muscle contraction of 23 elderly participants (22 f, age 86.74 ± 7.88) and distance-based TUG test subtask times. TUG tests were recorded with an ultrasonic-based device. The sit-up and walking phases were significantly correlated with the contraction and delay times of the vastus medialis muscle (ρ = 0.55–0.80, p < 0.01). In addition, the delay times of the vastus medialis (ρ = 0.45, p = 0.03) and gastrocnemius medialis (ρ = −0.44, p = 0.04) muscles correlated with the sit-down phase. The maximal radial displacements of the biceps femoris showed significant correlations with the walk-forward (ρ = −0.47, p = 0.021) and walk-back (ρ = −0.43, p = 0.04) times. The association of TUG subtasks with muscle contractile parameters could therefore be utilized as a measure to improve the monitoring of elderly people’s physical ability in general and during rehabilitation after a fall in particular. TUG test subtask measurements may be used as a proxy to monitor muscle properties in rehabilitation after long hospital stays and injuries or for fall prevention. Full article
(This article belongs to the Special Issue Image and Signal Processing for Biomedical Applications)
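
The reported ρ values suggest rank correlations between subtask times and TMG parameters; a minimal sketch of such a computation with SciPy is shown below. The per-participant arrays are hypothetical stand-ins, and the exact statistic used in the paper is not stated in the abstract.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    sit_up_time_s = rng.uniform(1.0, 4.0, 23)               # hypothetical TUG sit-up subtask times
    vm_contraction_time_ms = rng.uniform(20.0, 60.0, 23)    # hypothetical TMG contraction times (vastus medialis)

    rho, p = spearmanr(sit_up_time_s, vm_contraction_time_ms)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")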

Article
Interactive Blood Vessel Segmentation from Retinal Fundus Image Based on Canny Edge Detector
Sensors 2021, 21(19), 6380; https://doi.org/10.3390/s21196380 - 24 Sep 2021
Cited by 5 | Viewed by 980
Abstract
Optometrists, ophthalmologists, orthoptists, and other trained medical professionals use fundus photography to monitor the progression of certain eye conditions or diseases. Segmentation of the vessel tree is an essential step in retinal analysis. In this paper, an interactive blood vessel segmentation method for retinal fundus images based on Canny edge detection is proposed. Semi-automated segmentation of specific vessels can be done by simply moving the cursor across a particular vessel. The pre-processing stage includes green color channel extraction, Contrast Limited Adaptive Histogram Equalization (CLAHE), and retinal outline removal. After that, edge detection based on the Canny algorithm is applied. The vessels are selected interactively on the developed graphical user interface (GUI), and the program draws out the vessel edges. Those vessel edges are then segmented to bring their details into focus or to detect abnormal vessels. This proposed approach is useful because different edge detection parameter settings can be applied to the same image to highlight particular vessels for analysis or presentation. Full article
(This article belongs to the Special Issue Image and Signal Processing for Biomedical Applications)
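
The pre-processing and edge-detection pipeline described above (green channel, CLAHE, Canny) can be reproduced in a few lines of OpenCV; the thresholds and CLAHE settings below are placeholders, and the interactive GUI for vessel selection is not reproduced here.

    import cv2

    def vessel_edges(fundus_bgr, low_threshold=30, high_threshold=90):
        green = fundus_bgr[:, :, 1]                                   # green colour channel
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # contrast-limited equalization
        enhanced = clahe.apply(green)
        return cv2.Canny(enhanced, low_threshold, high_threshold)     # binary edge map of vessels

    # edges = vessel_edges(cv2.imread("fundus.png"))  # hypothetical input file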

Article
Automatic Polyp Segmentation in Colonoscopy Images Using a Modified Deep Convolutional Encoder-Decoder Architecture
Sensors 2021, 21(16), 5630; https://doi.org/10.3390/s21165630 - 20 Aug 2021
Viewed by 1126
Abstract
Colorectal cancer has become the third most commonly diagnosed form of cancer, and has the second highest fatality rate of cancers worldwide. Currently, optical colonoscopy is the tool of choice for the diagnosis of polyps and the prevention of colorectal cancer. Colon screening is time-consuming and highly operator dependent. In view of this, a computer-aided diagnosis (CAD) method needs to be developed for the automatic segmentation of polyps in colonoscopy images. This paper proposes a modified SegNet Visual Geometry Group-19 (VGG-19), a form of convolutional neural network, as a CAD method for polyp segmentation. The modifications include skip connections, 5 × 5 convolutional filters, and the concatenation of four dilated convolutions applied in parallel form. The CVC-ClinicDB, CVC-ColonDB, and ETIS-LaribPolypDB databases were used to evaluate the model, and it was found that our proposed polyp segmentation model achieved an accuracy, sensitivity, specificity, precision, mean intersection over union, and Dice coefficient of 96.06%, 94.55%, 97.56%, 97.48%, 92.3%, and 95.99%, respectively. These results indicate that our model performs as well as or better than previous schemes in the literature. We believe that this study will offer benefits in terms of the future development of CAD tools for polyp segmentation for colorectal cancer diagnosis and management. In the future, we intend to embed our proposed network into a medical capsule robot for practical usage and try it in a hospital setting with clinicians. Full article
(This article belongs to the Special Issue Image and Signal Processing for Biomedical Applications)
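
The "four dilated convolutions applied in parallel" modification resembles an atrous spatial pyramid pooling block; a PyTorch sketch of such a block is given below. The dilation rates, channel counts, and 1 × 1 fusion convolution are assumptions for illustration, not the exact architecture of the paper.

    import torch
    import torch.nn as nn

    class ParallelDilatedBlock(nn.Module):
        # Four 3x3 convolutions with different dilation rates, applied in parallel and concatenated.
        def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r) for r in rates
            )
            self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

        def forward(self, x):
            return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))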

Article
Lung Nodule Segmentation with a Region-Based Fast Marching Method
Sensors 2021, 21(5), 1908; https://doi.org/10.3390/s21051908 - 9 Mar 2021
Cited by 7 | Viewed by 1580
Abstract
When dealing with computed tomography volume data, the accurate segmentation of lung nodules is of great importance to lung cancer analysis and diagnosis, being a vital part of computer-aided diagnosis systems. However, due to the variety of lung nodules and the visual similarity between nodules and their surroundings, robust segmentation of nodules becomes a challenging problem. A segmentation algorithm based on the fast marching method is proposed that separates the image into regions with similar features, which are then merged by combining region growing with k-means clustering. An evaluation was performed with two distinct methods (objective and subjective) applied to two different datasets, containing simulation data generated for this study and real patient data, respectively. The objective experimental results show that the proposed technique can accurately segment nodules, especially in solid cases, with mean Dice scores of 0.933 and 0.901 for round and irregular nodules, respectively. For non-solid and cavitary nodules, the performance dropped to mean Dice scores of 0.799 and 0.614, respectively. The proposed method was compared to active contour models and to two modern deep learning networks. It reached better overall accuracy than active contour models, with results comparable to DBResNet but lower accuracy than 3D-UNet. The results show promise for the proposed method in computer-aided diagnosis applications. Full article
(This article belongs to the Special Issue Image and Signal Processing for Biomedical Applications)
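
A rough reconstruction of the idea, computing arrival times from a seed with a fast-marching solver and then grouping pixels by k-means on (arrival time, intensity), is sketched below using scikit-fmm as a stand-in solver. The speed map, feature choice, and clustering step are illustrative assumptions rather than the authors' region-growing and merging procedure.

    import numpy as np
    import skfmm                                  # scikit-fmm: generic fast-marching solver (stand-in)
    from sklearn.cluster import KMeans

    def fast_marching_regions(ct_slice, seed_yx, n_regions=4):
        phi = np.ones_like(ct_slice, dtype=float)
        phi[seed_yx] = -1.0                       # zero level set placed around the seed point
        gy, gx = np.gradient(ct_slice.astype(float))
        speed = 1.0 / (1.0 + np.hypot(gy, gx))    # front moves slowly across strong edges
        arrival = np.asarray(skfmm.travel_time(phi, speed))
        feats = np.stack([arrival.ravel(), ct_slice.ravel().astype(float)], axis=1)
        labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(feats)
        return labels.reshape(ct_slice.shape)     # candidate regions to be merged around the nodule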
