

Advanced Machine Learning Techniques for Biomedical Imaging Sensing and Healthcare Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 January 2022) | Viewed by 63337

Special Issue Editors


Guest Editor
School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
Interests: recommender systems; service computing; intelligent data analytics

Guest Editor
Charotar University of Science and Technology, Anand, India
Interests: Internet of Things; fog computing; big data analysis; computer vision

Guest Editor
Department of Computing Science, Umeå University, SE-901 87 Umeå, Sweden
Interests: machine learning; anomaly detection; trustworthy AI; distributed systems; data analytics

Guest Editor
Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL 33431-0991, USA
Interests: biosignal processing; gait analysis; cardiovascular engineering; speech data analysis

Special Issue Information

Dear Colleagues,

The biomedical and healthcare sciences have become data-intensive fields, with a strong need for sophisticated data mining methods to extract knowledge from the available information. Biomedical and healthcare data pose several challenges for analysis, including high dimensionality, highly distributed data and data sources, class imbalance, and small sample sizes. Although current research in this field has shown promising results, several research issues remain to be explored. There is a need for feature selection methods that select stable sets of genes, improving both predictive performance and interpretability. There is also a need to explore big data in biomedical and healthcare research: human healthcare and biomedical research are characterized by an increasing flood of data, available in different formats (numeric values, textual reports, signals, and images) and from different sources.

Researchers working in medical imaging and healthcare rely on the expertise of clinicians, who play a significant role in interpreting complex medical data for disease diagnosis. Automating diagnostic procedures for various healthcare problems may help improve patient care and overall healthcare. Recently, advanced machine learning methods have shown promising results in biomedical and healthcare applications. There is therefore a need to explore novel learning methods and optimization and inference techniques for processing biomedical and healthcare data, with the goal of approaching the performance of clinical diagnosis. Advances in machine learning can be used to develop sophisticated and novel applications in the biomedical and healthcare domains. This will attract healthcare practitioners who have access to interesting data sources but lack the expertise to use machine learning techniques effectively. Special attention will be devoted to feature selection, class imbalance, model robustness, scalability, distributed and heterogeneous data sources, and data fusion in biomedical and healthcare applications.

Topics:

The main topics of this Special Issue include but are not limited to the following:

  • Information fusion and knowledge transfer in biomedical and healthcare applications;
  • Information retrieval of medical images;
  • Imaging sensing tools, technologies and applications in biomedical research;
  • Body motion and pose detection in biomedical imaging;
  • Computer aided detection and diagnosis, especially for cancers;
  • Transfer learning in medical imaging;
  • Adversarial training in medical imaging;
  • Medical image reconstruction;
  • Knowledge-assisted image processing;
  • Domain adaptation in medical imaging;
  • Content-based information retrieval;
  • Medical image compression;
  • Distributed training, learning, and inference for biomedical and healthcare data;
  • Distributed model optimization for biomedical and healthcare data;
  • Federated learning for biomedical and healthcare data.

Dr. Mukesh Prasad
Prof. Dr. Jian Cao
Dr. Chintan Bhatt
Dr. Monowar H. Bhuyan
Prof. Dr. Behnaz Ghoraani
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (14 papers)


Research


32 pages, 8934 KiB  
Article
Deep Learning-Based Approach for Emotion Recognition Using Electroencephalography (EEG) Signals Using Bi-Directional Long Short-Term Memory (Bi-LSTM)
by Mona Algarni, Faisal Saeed, Tawfik Al-Hadhrami, Fahad Ghabban and Mohammed Al-Sarem
Sensors 2022, 22(8), 2976; https://doi.org/10.3390/s22082976 - 13 Apr 2022
Cited by 51 | Viewed by 6263
Abstract
Emotions are an essential part of daily human communication. The emotional states and dynamics of the brain can be linked to electroencephalography (EEG) signals, which can be used by a Brain–Computer Interface (BCI) to provide better human–machine interaction. Several studies have been conducted in the field of emotion recognition. However, one of the most important issues facing emotion recognition from EEG signals is recognition accuracy. This paper proposes a deep learning-based approach for emotion recognition from EEG signals, comprising data selection, feature extraction, feature selection, and classification phases. This research serves the medical field, as the emotion recognition model helps diagnose psychological and behavioral disorders. The research contributes to improving the performance of the emotion recognition model to obtain more accurate results, which, in turn, aids in making correct medical decisions. The standard pre-processed Database for Emotion Analysis using Physiological signals (DEAP) was used in this work. Statistical features, wavelet features, and the Hurst exponent were extracted from the dataset. Feature selection was implemented with the Binary Gray Wolf Optimizer. At the classification stage, a stacked bi-directional Long Short-Term Memory (Bi-LSTM) model was used to recognize human emotions. In this paper, emotions are classified into three main classes: arousal, valence, and liking. The proposed approach achieved high accuracy compared with the methods used in past studies, with average accuracies of 99.45%, 96.87%, and 99.68% for valence, arousal, and liking, respectively, which is considered high performance for an emotion recognition model.
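The Hurst exponent mentioned among the extracted features can be estimated with a simple rescaled-range (R/S) analysis. The sketch below is a minimal numpy illustration with illustrative window sizes and function names; it is not the authors' implementation.

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Estimate the Hurst exponent of a 1-D signal via rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Candidate window sizes: powers of two between min_window and n // 2
    sizes = [w for w in (2 ** k for k in range(3, 20)) if min_window <= w <= n // 2]
    log_w, log_rs = [], []
    for w in sizes:
        rs_vals = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation from the segment mean
            r = dev.max() - dev.min()           # range of the cumulative deviation
            s = seg.std()
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_w.append(np.log(w))
            log_rs.append(np.log(np.mean(rs_vals)))
    # The slope of log(R/S) versus log(window size) approximates the Hurst exponent
    return np.polyfit(log_w, log_rs, 1)[0]

rng = np.random.default_rng(0)
h_noise = hurst_rs(rng.standard_normal(4096))             # white noise: H near 0.5
h_trend = hurst_rs(np.cumsum(rng.standard_normal(4096)))  # random walk: H near 1
```

White noise yields an exponent near 0.5 while a persistent (trending) signal yields a larger value, which is what makes the exponent useful as a one-number summary of an EEG channel's temporal structure.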

16 pages, 33659 KiB  
Article
Short Single-Lead ECG Signal Delineation-Based Deep Learning: Implementation in Automatic Atrial Fibrillation Identification
by Bambang Tutuko, Muhammad Naufal Rachmatullah, Annisa Darmawahyuni, Siti Nurmaini, Alexander Edo Tondas, Rossi Passarella, Radiyati Umi Partan, Ahmad Rifai, Ade Iriani Sapitri and Firdaus Firdaus
Sensors 2022, 22(6), 2329; https://doi.org/10.3390/s22062329 - 17 Mar 2022
Cited by 9 | Viewed by 3898
Abstract
Physicians manually interpret electrocardiogram (ECG) signal morphology in routine clinical practice. This is a monotonous and abstract task that relies on experience in understanding the ECG waveform, including the P-wave, QRS-complex, and T-wave. Such a manual process depends on signal quality and the number of leads. ECG signal classification based on deep learning (DL) has enabled automatic interpretation; however, such methods are built for specific abnormality conditions, and when the ECG morphology changes to another abnormality, they cannot proceed automatically. To generalize automatic interpretation, we aim to delineate the ECG waveform. However, the delineation process outputs only ECG waveform duration classes for the P-wave, QRS-complex, and T-wave; it must be combined with medical knowledge rules to produce an abnormality interpretation. The proposed model is applied to atrial fibrillation (AF) identification. This study meets the AF criteria, with RR irregularities and the absence of P-waves in essential oscillations, for even more accurate identification. The QT Database from PhysioNet is used to develop the delineation model, which is validated with the Lobachevsky University Database. The results show that our delineation model works properly, with 98.91% sensitivity, 99.01% precision, 99.79% specificity, 99.79% accuracy, and a 98.96% F1 score. In the experiment, we use about 4058 normal sinus rhythm records and 1804 AF records, taken from three datasets, to identify AF conditions. Comprehensive testing produced high negative and positive predictive values. This means that the proposed model can identify AF conditions from ECG signal delineation. With these results, our approach can contribute considerably to AF diagnosis.
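The AF criteria described (RR-interval irregularity plus absent P-waves) lend themselves to a simple rule on top of a delineation model's output. The toy check below is a hypothetical sketch with illustrative thresholds, not the rule used in the paper.

```python
import numpy as np

def af_rule(r_peak_times_s, p_wave_present_ratio, cv_threshold=0.15, p_threshold=0.2):
    """Toy rule: flag AF when RR intervals are irregular AND P-waves are largely absent.

    r_peak_times_s       : R-peak times in seconds (e.g., from a delineation model)
    p_wave_present_ratio : fraction of beats with a detected P-wave
    """
    rr = np.diff(np.asarray(r_peak_times_s, dtype=float))
    cv = rr.std() / rr.mean()          # coefficient of variation of RR intervals
    irregular = cv > cv_threshold
    p_absent = p_wave_present_ratio < p_threshold
    return bool(irregular and p_absent)

# Regular sinus rhythm: evenly spaced beats, P-waves present
sinus = np.arange(0, 10, 0.8)
# Irregular rhythm: jittered beat-to-beat intervals, P-waves mostly absent
rng = np.random.default_rng(1)
irregular_rhythm = np.cumsum(0.8 + 0.3 * rng.standard_normal(12))
```

In the paper's pipeline the two inputs would come from the delineation network's P-wave and QRS-complex segments; here they are supplied directly to keep the sketch self-contained.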

18 pages, 4193 KiB  
Article
Thermal Change Index-Based Diabetic Foot Thermogram Image Classification Using Machine Learning Techniques
by Amith Khandakar, Muhammad E. H. Chowdhury, Mamun Bin Ibne Reaz, Sawal Hamid Md Ali, Tariq O. Abbas, Tanvir Alam, Mohamed Arselene Ayari, Zaid B. Mahbub, Rumana Habib, Tawsifur Rahman, Anas M. Tahir, Ahmad Ashrif A. Bakar and Rayaz A. Malik
Sensors 2022, 22(5), 1793; https://doi.org/10.3390/s22051793 - 24 Feb 2022
Cited by 16 | Viewed by 3871
Abstract
Diabetes mellitus (DM) can lead to plantar ulcers, amputation, and death. Plantar foot thermogram images acquired using an infrared camera have been shown to detect changes in temperature distribution associated with a higher risk of foot ulceration. Machine learning approaches applied to such infrared images may have utility in the early diagnosis of diabetic foot complications. In this work, a publicly available dataset was categorized into different classes, corroborated by domain experts, based on a temperature distribution parameter, the thermal change index (TCI). We then explored different machine learning approaches for classifying thermograms of the TCI-labeled dataset. Classical machine learning algorithms with feature engineering and convolutional neural networks (CNNs) with image enhancement techniques were extensively investigated to identify the best-performing network for classifying thermograms. A multilayer perceptron (MLP) classifier, using features extracted from the thermogram images, achieved an accuracy of 90.1% in multi-class classification, outperforming the performance metrics reported in the literature on this dataset.

15 pages, 5989 KiB  
Article
A High-Performance Deep Neural Network Model for BI-RADS Classification of Screening Mammography
by Kuen-Jang Tsai, Mei-Chun Chou, Hao-Ming Li, Shin-Tso Liu, Jung-Hsiu Hsu, Wei-Cheng Yeh, Chao-Ming Hung, Cheng-Yu Yeh and Shaw-Hwa Hwang
Sensors 2022, 22(3), 1160; https://doi.org/10.3390/s22031160 - 3 Feb 2022
Cited by 25 | Viewed by 4935
Abstract
Globally, breast cancer ranks first in incidence rate. Treatment of early-stage breast cancer is highly cost-effective: the five-year survival rate for stage 0–2 breast cancer exceeds 90%. Screening mammography is acknowledged as the most reliable way to diagnose breast cancer at an early stage. The Taiwanese government has been urging women aged between 45 and 69 without any symptoms to have a screening mammogram every two years, which creates a large workload for radiologists. In light of this, this paper presents a deep neural network (DNN)-based model as an efficient and reliable tool to assist radiologists with mammographic interpretation. For the first time in the literature, mammograms are completely classified into BI-RADS categories 0, 1, 2, 3, 4A, 4B, 4C, and 5. The proposed model was trained using block-based images segmented from our own mammogram dataset. A block-based image is applied to the model as input, and a BI-RADS category is predicted as output. The performance of this work is demonstrated by an overall accuracy of 94.22%, an average sensitivity of 95.31%, an average specificity of 99.15%, and an area under the curve (AUC) of 0.9723. When applied to breast cancer screening for Asian women, who are more likely to have dense breasts, this model is expected to give higher accuracy than others in the literature, since it was trained using mammograms taken from Taiwanese women.

17 pages, 5158 KiB  
Article
Using a Deep Learning Model to Explore the Impact of Clinical Data on COVID-19 Diagnosis Using Chest X-ray
by Irfan Ullah Khan, Nida Aslam, Talha Anwar, Hind S. Alsaif, Sara Mhd. Bachar Chrouf, Norah A. Alzahrani, Fatimah Ahmed Alamoudi, Mariam Moataz Aly Kamaleldin and Khaled Bassam Awary
Sensors 2022, 22(2), 669; https://doi.org/10.3390/s22020669 - 16 Jan 2022
Cited by 23 | Viewed by 3375
Abstract
The coronavirus pandemic (COVID-19) is disrupting the entire world; its rapid global spread threatens to affect millions of people. Accurate and timely diagnosis of COVID-19 is essential to control its spread and alleviate risk. Motivated by the promising results achieved by integrating machine learning (ML), particularly deep learning (DL), into automated disease diagnosis, the current study proposes a deep learning-based model for the automated diagnosis of COVID-19 using chest X-ray (CXR) images and patient clinical data. The aim of this study is to investigate the effect of integrating clinical patient data with CXR images on automated COVID-19 diagnosis. The proposed model used data collected from King Fahad University Hospital, Dammam, KSA, consisting of 270 patient records. The experiments were carried out first with clinical data, second with CXR images, and finally with clinical data and CXR images together. A fusion technique was used to combine the clinical features with the features extracted from the images. The study found that integrating clinical data with CXR images improves diagnostic accuracy: using both, the model achieved an accuracy of 0.970, a recall of 0.986, a precision of 0.978, and an F-score of 0.982. Further validation was performed by comparing the performance of the proposed system with the diagnosis of an expert. The results show that the proposed system can be used as a tool to help doctors in COVID-19 diagnosis.
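The fusion of clinical features with image features can be as simple as concatenating the two per-patient vectors before a classification head. The sketch below uses hypothetical feature dimensions and random stand-in data; the weights of the logistic head would normally be learned, and nothing here reflects the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-ins: 270 patients, 8 clinical features, 64-d image embedding
n_patients = 270
clinical = rng.standard_normal((n_patients, 8))          # e.g., normalized age, labs, vitals
image_embedding = rng.standard_normal((n_patients, 64))  # e.g., CXR features from a CNN backbone

# Early fusion: concatenate the per-patient feature vectors along the feature axis
fused = np.concatenate([clinical, image_embedding], axis=1)

# A single logistic layer over the fused vector (weights random here, not trained)
w = rng.standard_normal(fused.shape[1])
b = 0.0
probs = 1.0 / (1.0 + np.exp(-(fused @ w + b)))  # per-patient probability estimate
```

The key design point is that both modalities end up in one feature space, so any downstream classifier can weigh clinical and imaging evidence jointly.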

19 pages, 4087 KiB  
Article
A Case Study of Quantizing Convolutional Neural Networks for Fast Disease Diagnosis on Portable Medical Devices
by Mukhammed Garifulla, Juncheol Shin, Chanho Kim, Won Hwa Kim, Hye Jung Kim, Jaeil Kim and Seokin Hong
Sensors 2022, 22(1), 219; https://doi.org/10.3390/s22010219 - 29 Dec 2021
Cited by 13 | Viewed by 3358
Abstract
Recently, attention paid to convolutional neural networks (CNNs) in medical image analysis has rapidly increased, since they can analyze and classify images faster and more accurately than humans. As a result, CNNs are becoming more popular and play a role as a supplementary assistant for healthcare professionals. Using CNNs on portable medical devices could enable handy and accurate disease diagnosis. Unfortunately, CNNs require high-performance computing resources, as they involve a significant amount of computation to process big data, which limits their use on portable medical devices with constrained computing resources. This paper discusses network quantization techniques that reduce the size of CNN models and enable fast CNN inference with the energy-efficient CNN accelerators integrated into recent mobile processors. With extensive experiments, we show that the quantization technique reduces inference time by 97% on a mobile system integrating a CNN acceleration engine.
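The core idea of network quantization can be illustrated with symmetric per-tensor post-training quantization of a weight array to int8. This is a minimal numpy sketch of the general technique, not the specific scheme evaluated in the paper.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization of float weights to int8."""
    scale = np.abs(w).max() / 127.0                       # map largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.standard_normal(1000).astype(np.float32)    # stand-in for a CNN layer's weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = np.abs(weights - restored).max()                # bounded by roughly scale / 2
```

Storing int8 instead of float32 shrinks the model about 4x, and integer arithmetic is what mobile CNN accelerators execute efficiently; the quantization error per weight stays within half a quantization step.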

15 pages, 1466 KiB  
Article
The Impact of Load Style Variation on Gait Recognition Based on sEMG Images Using a Convolutional Neural Network
by Xianfu Zhang, Yuping Hu, Ruimin Luo, Chao Li and Zhichuan Tang
Sensors 2021, 21(24), 8365; https://doi.org/10.3390/s21248365 - 15 Dec 2021
Cited by 3 | Viewed by 1578
Abstract
Surface electromyogram (sEMG) signals are widely employed as a neural control source for lower-limb exoskeletons, in which gait recognition based on sEMG is particularly important. Many scholars have taken measures to improve the accuracy of gait recognition, but several real-time limitations affect its applicability, of which variation in load style is the most obvious. The purposes of this study are to (1) investigate the impact of different load styles on gait recognition; (2) study whether good gait recognition performance can be obtained when a convolutional neural network (CNN) is used to process sEMG images from sparse multichannel sEMG (SMC-sEMG); and (3) explore whether a lower-limb exoskeleton control system trained on sEMG from only some of the load styles still works efficiently in a real-time environment where multiple load styles are required. In addition, we discuss an effective method to improve gait recognition across load styles. In our experiment, fifteen able-bodied male graduate students carrying a load (20% of body weight) in three load styles (SBP = backpack, SCS = cross shoulder, SSS = straight shoulder) were asked to walk uniformly on a treadmill. Each subject performed 50 continuous gait cycles at three speeds (V3 = 3 km/h, V5 = 5 km/h, and V7 = 7 km/h). A CNN was employed to process sEMG images derived from the sEMG signals for gait recognition, and back-propagation neural networks (BPNNs) and support vector machines (SVMs) were used for comparison on the same sEMG signals.
The results indicated that (1) load style had a remarkable impact on gait recognition at all three speeds (p < 0.001); (2) the gait recognition performance of the CNN was better than that of the SVM and BPNN at each speed (84.83%, 81.63%, and 83.76% at V3; 93.40%, 88.48%, and 92.36% at V5; and 90.1%, 86.32%, and 85.42% at V7, respectively); and (3) when all the data from the three load styles were pooled as test sets at each speed, the more load styles included in the training set, the better the performance obtained, and statistical analysis suggested that the load styles included in the training set had a significant effect on gait recognition (p = 0.002). It can be concluded that a lower-limb exoskeleton control system trained on sEMG from only some load styles is not sufficient in a real-time environment.

17 pages, 3535 KiB  
Article
Severity Grading and Early Retinopathy Lesion Detection through Hybrid Inception-ResNet Architecture
by Sana Yasin, Nasrullah Iqbal, Tariq Ali, Umar Draz, Ali Alqahtani, Muhammad Irfan, Abdul Rehman, Adam Glowacz, Samar Alqhtani, Klaudia Proniewska, Frantisek Brumercik and Lukasz Wzorek
Sensors 2021, 21(20), 6933; https://doi.org/10.3390/s21206933 - 19 Oct 2021
Cited by 9 | Viewed by 2743
Abstract
Diabetic retinopathy (DR) is a diabetes disorder that disturbs human vision. It starts with damage to the light-sensitive blood vessel tissues of the retina. In the beginning, DR may show no symptoms or only slight vision issues, but in the long run it can lead to permanent vision impairment, that is, blindness, in advanced as well as developing nations. This could be prevented if DR were identified early enough, but that can be challenging, as the disease frequently shows few signs until it is too late to deliver an effective cure. In our work, we propose a framework for severity grading and early DR detection through a hybrid deep learning Inception-ResNet architecture with smart data preprocessing. Our proposed method is composed of three steps. Firstly, the retinal images are preprocessed with augmentation and intensity normalization. Secondly, the preprocessed images are fed to the hybrid Inception-ResNet architecture to extract image feature vectors for the categorization of different stages. Lastly, a classification step is used to identify DR and decide its stage (e.g., mild DR, moderate DR, severe DR, or proliferative DR). Our studies and trials reveal suitable outcomes when compared with previously deployed approaches. However, there are specific constraints in our study, which are also discussed, and we suggest directions to enhance further research in this field.
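The first preprocessing step (augmentation plus intensity normalization) can be sketched in a few lines of numpy. The functions below are generic illustrations of those two operations, with hypothetical names, not the paper's preprocessing pipeline.

```python
import numpy as np

def normalize_intensity(img):
    """Min-max normalize an image to the [0, 1] range."""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def augment(img):
    """Simple geometric augmentation: identity, horizontal/vertical flips, 90-degree rotation."""
    return [img, np.fliplr(img), np.flipud(img), np.rot90(img)]

rng = np.random.default_rng(0)
retina = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)  # stand-in for a fundus image
norm = normalize_intensity(retina)
batch = augment(norm)  # four views of the same normalized image
```

Normalization puts every image on a common intensity scale regardless of camera exposure, while the flipped and rotated copies multiply the effective training set size without collecting new retinal images.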

15 pages, 757 KiB  
Article
Identification of Autism Subtypes Based on Wavelet Coherence of BOLD FMRI Signals Using Convolutional Neural Network
by Mohammed Isam Al-Hiyali, Norashikin Yahya, Ibrahima Faye and Ahmed Faeq Hussein
Sensors 2021, 21(16), 5256; https://doi.org/10.3390/s21165256 - 4 Aug 2021
Cited by 21 | Viewed by 4057
Abstract
The functional connectivity (FC) patterns of resting-state functional magnetic resonance imaging (rs-fMRI) play an essential role in the development of autism spectrum disorder (ASD) classification models. Methods in the literature have used FC patterns as inputs for binary classification models, but the results barely reach an accuracy of 80%, and the generalizability of the models across multiple sites has not been investigated. Due to the lack of an ASD subtype identification model, multi-class classification is proposed in the present study, which aims to develop automated identification of ASD subtypes using convolutional neural networks (CNNs) with dynamic FC as input. The rs-fMRI dataset used in this study consists of 144 individuals from 8 independent sites, labeled according to three ASD subtypes: autistic disorder (ASD), Asperger's disorder (APD), and pervasive developmental disorder not otherwise specified (PDD-NOS). The blood-oxygen-level-dependent (BOLD) signals from 116 brain nodes of the automated anatomical labeling (AAL) atlas are used, and the top-ranked node is determined by one-way analysis of variance (ANOVA) of the power spectral density (PSD) values. Based on statistical analysis of the PSD values of the three ASD levels and normal controls (NC), putamen_R is obtained as the top-ranked node and used for the wavelet coherence computation. With good resolution in both the time and frequency domains, scalograms of wavelet coherence between the top-ranked node and the remaining nodes are used as dynamic FC feature inputs to the CNN. The dynamic FC patterns of the wavelet coherence scalograms represent phase synchronization between pairs of BOLD signals.
Classification algorithms were developed using the CNN and the wavelet coherence scalograms, and binary and multi-class models were trained and tested using cross-validation and leave-one-out techniques. Binary classification (ASD vs. NC) and multi-class classification (ASD vs. APD vs. PDD-NOS vs. NC) yielded 89.8% accuracy and 82.1% macro-average accuracy, respectively. The findings of this study illustrate the good potential of the wavelet coherence technique for representing dynamic FC between brain nodes and open up possibilities for its application in computer-aided diagnosis of other neuropsychiatric disorders, such as depression or schizophrenia.

13 pages, 4178 KiB  
Communication
Pupil Size Prediction Techniques Based on Convolution Neural Network
by Allen Jong-Woei Whang, Yi-Yung Chen, Wei-Chieh Tseng, Chih-Hsien Tsai, Yi-Ping Chao, Chieh-Hung Yen, Chun-Hsiu Liu and Xin Zhang
Sensors 2021, 21(15), 4965; https://doi.org/10.3390/s21154965 - 21 Jul 2021
Cited by 3 | Viewed by 2423
Abstract
The size of one's pupil can indicate one's physical condition and mental state. In the related literature on AI and the pupil, most studies have focused on eye tracking. This paper proposes an algorithm that can calculate pupil size based on a convolutional neural network (CNN). Usually, the shape of the pupil is not round, and for about 50% of pupils an ellipse is the best-fitting shape. This paper uses the major and minor axes of the fitted ellipse to represent pupil size and uses these two parameters as the output of the network. Regarding the input of the network, the dataset is in video format (continuous frames). Taking every frame from the videos to train the CNN model may cause overfitting, since consecutive images are too similar. To avoid this problem, this study used data augmentation and calculated the structural similarity to ensure that the images had a certain degree of difference. To optimize the network structure, this study compared the mean error while varying the depth of the network and the field of view (FOV) of the convolution filter. The results show that both deepening the network and widening the FOV of the convolution filter can reduce the mean error. The mean error of the pupil length is 5.437% and that of the pupil area is 10.57%. The model can operate in low-cost mobile embedded systems at 35 frames per second, demonstrating that low-cost designs can be used for pupil size prediction.

15 pages, 4957 KiB  
Article
Continuous Blood Pressure Estimation Using Exclusively Photopletysmography by LSTM-Based Signal-to-Signal Translation
by Latifa Nabila Harfiya, Ching-Chun Chang and Yung-Hui Li
Sensors 2021, 21(9), 2952; https://doi.org/10.3390/s21092952 - 23 Apr 2021
Cited by 60 | Viewed by 6153
Abstract
Monitoring a continuous blood pressure (BP) signal is important because BP varies over days, minutes, or even seconds in short-term cases. Most photoplethysmography (PPG)-based BP estimation methods are susceptible to noise and provide only systolic blood pressure (SBP) and diastolic blood pressure (DBP) predictions. Here, instead of estimating discrete values, we take a different perspective and estimate the whole BP waveform. We propose a novel deep learning model that learns to perform signal-to-signal translation from PPG to arterial blood pressure (ABP): using only a raw PPG signal as input, the proposed model outputs a continuous ABP signal. From the translated ABP signal, we extract the SBP and DBP values to ease comparative evaluation. Our predictions achieve an average absolute error under 5 mmHg, with 70% confidence for SBP and 95% confidence for DBP, without complex feature engineering. These results fulfill the standards of the Association for the Advancement of Medical Instrumentation (AAMI) and the British Hypertension Society (BHS) with grade A. From these results, we believe that our model is applicable and can potentially boost the accuracy of effective signal-to-signal continuous blood pressure estimation.
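Once a continuous ABP waveform is available, per-beat SBP and DBP are simply the peak and trough pressures of each beat. The sketch below demonstrates this extraction on a synthetic waveform with a fixed beat period; the function name, sampling rate, and fixed-window beat segmentation are illustrative assumptions, not the paper's extraction procedure.

```python
import numpy as np

def sbp_dbp_from_abp(abp, fs, beat_period_s):
    """Extract per-beat systolic (peak) and diastolic (trough) pressures from a
    continuous ABP waveform by slicing it into beat-length windows."""
    win = int(fs * beat_period_s)
    n_beats = len(abp) // win
    beats = np.asarray(abp[:n_beats * win]).reshape(n_beats, win)
    return beats.max(axis=1), beats.min(axis=1)  # SBP, DBP per beat

# Synthetic ABP: oscillation between ~80 (diastolic) and ~120 (systolic) mmHg at 1 Hz
fs = 125                                  # samples per second
t = np.arange(0, 10, 1 / fs)              # 10 seconds of signal
abp = 100 + 20 * np.sin(2 * np.pi * 1.0 * t)

sbp, dbp = sbp_dbp_from_abp(abp, fs=fs, beat_period_s=1.0)
```

Real ABP beats are not evenly spaced, so a practical implementation would segment beats with a peak detector rather than a fixed window, but the peak/trough readout per beat is the same.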

15 pages, 1940 KiB  
Article
Efficiency of Machine Learning Algorithms for the Determination of Macrovesicular Steatosis in Frozen Sections Stained with Sudan to Evaluate the Quality of the Graft in Liver Transplantation
by Fernando Pérez-Sanz, Miriam Riquelme-Pérez, Enrique Martínez-Barba, Jesús de la Peña-Moral, Alejandro Salazar Nicolás, Marina Carpes-Ruiz, Angel Esteban-Gil, María Del Carmen Legaz-García, María Antonia Parreño-González, Pablo Ramírez and Carlos M. Martínez
Sensors 2021, 21(6), 1993; https://doi.org/10.3390/s21061993 - 12 Mar 2021
Cited by 14 | Viewed by 1998
Abstract
Liver transplantation is the only curative treatment option for patients diagnosed with end-stage liver disease. The low availability of organs demands an accurate selection procedure based on histological analysis of the allograft. This assessment, traditionally carried out by a pathologist, is not exempt from subjectivity. In this context, new tools based on machine learning and artificial vision are continuously being developed for the analysis of medical images of different typologies. Accordingly, in this work, we develop a computer-vision-based application for the fast, automatic, and objective quantification of macrovesicular steatosis in histopathological liver section slides stained with Sudan stain. For this purpose, digital microscopy images were used to obtain thousands of feature vectors based on the RGB and CIE L*a*b* pixel values. Under a supervised process, these vectors were labelled as fat vacuole or non-fat vacuole, and a set of classifiers based on different algorithms was trained accordingly. The results showed a high overall accuracy for all classifiers (>0.99), with a sensitivity between 0.844 and 1 and a specificity >0.99. In terms of classification speed, KNN and Naïve Bayes were substantially faster than the other algorithms. Sudan stain is a convenient technique for evaluating macrovesicular steatosis in pre-transplant liver biopsies, providing reliable contrast and enabling fast and accurate quantification with the machine learning algorithms tested.
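The core classification step in this abstract (labelling colour feature vectors as fat vacuole vs. non-fat vacuole, with KNN among the fastest classifiers) can be sketched in plain numpy. This is a hypothetical illustration: the feature vectors here use raw RGB values only (the CIE L*a*b* conversion is omitted), and the example colours are invented, not taken from the paper's data.

```python
import numpy as np

def knn_predict(train_X, train_y, query_X, k=3):
    """Classify each query pixel's colour vector by majority vote of its
    k nearest training vectors (Euclidean distance in feature space)."""
    preds = []
    for q in query_X:
        d = np.linalg.norm(train_X - q, axis=1)   # distance to every training pixel
        nearest = train_y[np.argsort(d)[:k]]      # labels of the k closest pixels
        preds.append(int(np.bincount(nearest).argmax()))
    return np.array(preds)

# Hypothetical training pixels: label 1 = fat vacuole (Sudan-stained, reddish),
# label 0 = non-fat tissue (pale background).
train_X = np.array([[200, 60, 40], [205, 65, 45], [198, 58, 42],
                    [230, 220, 210], [228, 218, 212], [232, 222, 208]], float)
train_y = np.array([1, 1, 1, 0, 0, 0])
```

Classifying whole slides this way amounts to running the vote once per pixel, which is why the paper's comparison of classifier speed matters in practice.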

Review

Jump to: Research, Other

17 pages, 1396 KiB  
Review
Machine Learning Techniques for Differential Diagnosis of Vertigo and Dizziness: A Review
by Varad Kabade, Ritika Hooda, Chahat Raj, Zainab Awan, Allison S. Young, Miriam S. Welgampola and Mukesh Prasad
Sensors 2021, 21(22), 7565; https://doi.org/10.3390/s21227565 - 14 Nov 2021
Cited by 21 | Viewed by 8699
Abstract
Vertigo is a sensation of movement that results from disorders of the inner-ear balance organs and their central connections, with aetiologies that are often benign but sometimes serious. An individual who develops vertigo can be effectively treated only after a correct diagnosis of the underlying vestibular disorder is reached. Recent advances in artificial intelligence promise novel strategies for the diagnosis and treatment of patients with this common symptom. Human analysts may struggle to manually extract patterns from large clinical datasets. Machine learning techniques can be used to visualize, understand, and classify clinical data, enabling computerized, faster, and more accurate evaluation of vertiginous disorders. Practitioners can also use them as a teaching tool to gain knowledge and valuable insights from medical data. This paper reviews the literature from 1999 to 2021 on the use of various feature extraction and machine learning techniques to diagnose vertigo disorders. It aims to provide a better understanding of the work done thus far and to set out future directions for research into the use of machine learning in vertigo diagnosis.

Other

Jump to: Research, Review

20 pages, 2419 KiB  
Systematic Review
A Systematic Review on Healthcare Artificial Intelligent Conversational Agents for Chronic Conditions
by Abdullah Bin Sawad, Bhuva Narayan, Ahlam Alnefaie, Ashwaq Maqbool, Indra Mckie, Jemma Smith, Berkan Yuksel, Deepak Puthal, Mukesh Prasad and A. Baki Kocaballi
Sensors 2022, 22(7), 2625; https://doi.org/10.3390/s22072625 - 29 Mar 2022
Cited by 30 | Viewed by 6274
Abstract
This paper reviews different types of conversational agents used in health care for chronic conditions, examining their underlying communication technology, evaluation measures, and AI methods. A systematic search was performed in February 2021 on PubMed Medline, EMBASE, PsycINFO, CINAHL, Web of Science, and ACM Digital Library. Studies were included if they focused on consumers, caregivers, or healthcare professionals in the prevention, treatment, or rehabilitation of chronic diseases, involved conversational agents, and tested the system with human users. The search retrieved 1087 articles, of which 26 studies met the inclusion criteria. Of the 26 conversational agents (CAs), 16 were chatbots, 7 were embodied conversational agents (ECAs), one was a conversational agent in a robot, one was a relational agent, and one was not specified. Based on this review, the overall acceptance of CAs by users for the self-management of their chronic conditions is promising. Users’ feedback indicates helpfulness, satisfaction, and ease of use in more than half of the included studies. Although many users in the studies appear to feel comfortable with CAs, there is still a lack of reliable and comparable evidence to determine the efficacy of AI-enabled CAs for chronic health conditions, owing to insufficient reporting of technical implementation details.
