Artificial Intelligence and Beyond in Medical and Healthcare Engineering

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (15 April 2022) | Viewed by 59364

Special Issue Editor


Guest Editor
Graduate School of Engineering, University of Hyogo, 2167 Shosha, Himeji 671-2280, Japan
Interests: medical image analysis; artificial intelligence in medicine

Special Issue Information

Dear Colleagues,

Quantitative radiology (QR), once brought to routine clinical practice, will significantly enhance the role of radiology in the medical milieu, potentially spawning numerous new advances in medicine. Ever more medical images (MRI, CT, ultrasound, PET/CT, OCT, etc.) are being collected and analyzed for body-region-wide or body-wide disease quantification in patients with cancer and/or other disease conditions, and for clinical tasks involving medical images, including screening, detection/diagnosis, staging, prognosis assessment, treatment planning, treatment response prediction, treatment response assessment, and restaging/surveillance. New medical image processing algorithms will serve as the engine driving these clinical tasks.

In this Special Issue, we will focus on the vast range of new algorithms for medical image processing, analysis, and quantification. Machine learning, especially deep learning, has recently been widely investigated and has shown its power in medical image segmentation, registration, classification, response prediction, etc. We welcome manuscripts using unsupervised or supervised learning based on statistical and mathematical models for all the above clinical tasks. Other topics include, but are not limited to, new algorithms for medical image segmentation, registration, disease response prediction, classification, image quality enhancement, and image reconstruction, as well as new systems in computer-aided diagnosis, perception, image-guided procedures, biomedical applications, informatics, radiology, and digital pathology.

Prof. Syoji Kobashi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Deep learning
  • Statistical model
  • Medical image processing
  • Prediction
  • Personalized medicine
  • Digital health
  • Patient satisfaction
  • Computer-aided systems
  • Deep medicine

Published Papers (14 papers)


Research

28 pages, 2942 KiB  
Article
Evaluating Explainable Artificial Intelligence for X-ray Image Analysis
by Miquel Miró-Nicolau, Gabriel Moyà-Alcover and Antoni Jaume-i-Capó
Appl. Sci. 2022, 12(9), 4459; https://doi.org/10.3390/app12094459 - 28 Apr 2022
Cited by 4 | Viewed by 2199
Abstract
The lack of justification of the results obtained by artificial intelligence (AI) algorithms has limited their usage in the medical context. To increase the explainability of existing AI methods, explainable artificial intelligence (XAI) has been proposed. We performed a systematic literature review, based on the guidelines proposed by Kitchenham and Charters, of studies that applied XAI methods in X-ray-image-related tasks. We identified 141 studies relevant to the objective of this research from five different databases. For each of these studies, we assessed the quality and then analyzed them according to a specific set of research questions. We determined two primary purposes for X-ray images: the detection of bone diseases and lung diseases. We found that most of the AI methods used were based on a CNN. We identified the different techniques used to increase the explainability of the models and grouped them by the kind of explainability obtained. We found that most of the articles did not evaluate the quality of the explainability obtained, undermining confidence in the explanations. Finally, we identified the current challenges and future directions of this subject and provide guidelines to practitioners and researchers for addressing the limitations and weaknesses that we detected. Full article

22 pages, 5112 KiB  
Article
Smartphone Sensor-Based Human Locomotion Surveillance System Using Multilayer Perceptron
by Usman Azmat, Yazeed Yasin Ghadi, Tamara al Shloul, Suliman A. Alsuhibany, Ahmad Jalal and Jeongmin Park
Appl. Sci. 2022, 12(5), 2550; https://doi.org/10.3390/app12052550 - 28 Feb 2022
Cited by 17 | Viewed by 1825
Abstract
Applied sensing technology has made it possible for human beings to experience a revolutionary aspect of the science and technology world. Along with many other fields in which this technology is working wonders, human locomotion activity recognition, which finds applications in healthcare, smart homes, life-logging, and many other fields, is also proving to be a landmark. The purpose of this study is to develop a novel model that can robustly handle divergent data that are acquired remotely from various sensors and make an accurate classification of human locomotion activities. The biggest support for remotely sensed human locomotion activity recognition (RS-HLAR) is provided by modern smartphones. In this paper, we propose a robust model for RS-HLAR that is trained and tested on remotely extracted data from smartphone-embedded sensors. Initially, the system denoises the input data and then performs windowing and segmentation. This preprocessed data then goes to the feature extraction module, where Parseval’s energy, skewness, kurtosis, Shannon entropy, and statistical features from the time domain and the frequency domain are extracted. Next, using Luca-measure fuzzy entropy (LFE) and Lukasiewicz similarity measure (LS)–based feature selection, the system drops the least-informative features and shrinks the feature set by 25%. The Yeo–Johnson power transform, a maximum-likelihood-based feature optimization algorithm, is then applied. The optimized feature set is forwarded to the multilayer perceptron (MLP) classifier, which performs the classification and uses cross-validation for training and testing to generate reliable results. We designed our system while experimenting on three benchmark datasets, namely MobiAct_v2.0, Real-World HAR, and Real-Life HAR. The proposed model outperforms the existing state-of-the-art models, scoring a mean accuracy of 84.49% on MobiAct_v2.0, 94.16% on Real-World HAR, and 95.89% on Real-Life HAR. Although our system can accurately differentiate among similar activities, excessive noise in the data and complex activities have an adverse effect on its performance. Full article
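The Yeo–Johnson power transform mentioned in the abstract has a simple closed form once λ is fixed (in the described pipeline λ is chosen by maximum likelihood); a minimal sketch of the fixed-λ transform, not the authors' code:

```python
import numpy as np

def yeo_johnson(x, lam):
    # Yeo-Johnson power transform for a fixed lambda; handles both signs
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos, neg = x >= 0, x < 0
    if abs(lam) > 1e-12:
        out[pos] = ((x[pos] + 1.0) ** lam - 1.0) / lam
    else:
        out[pos] = np.log1p(x[pos])
    if abs(lam - 2.0) > 1e-12:
        out[neg] = -(((-x[neg] + 1.0) ** (2.0 - lam)) - 1.0) / (2.0 - lam)
    else:
        out[neg] = -np.log1p(-x[neg])
    return out
```

With λ = 1 the transform reduces to the identity, which is a convenient sanity check.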

22 pages, 2425 KiB  
Article
Predicting Wearing-Off of Parkinson’s Disease Patients Using a Wrist-Worn Fitness Tracker and a Smartphone: A Case Study
by John Noel Victorino, Yuko Shibata, Sozo Inoue and Tomohiro Shibata
Appl. Sci. 2021, 11(16), 7354; https://doi.org/10.3390/app11167354 - 10 Aug 2021
Cited by 8 | Viewed by 3652
Abstract
Parkinson’s disease (PD) patients experience varying symptoms related to their illness. Therefore, each patient needs a tailored treatment program from their doctors. One approach is the use of anti-PD medicines. However, a “wearing-off” phenomenon occurs when these medicines lose their effect. As a result, patients start to experience the symptoms again until their next medicine intake. In the long term, the duration of “wearing-off” begins to shorten. Thus, patients and doctors have to work together to manage PD symptoms effectively. This study aims to develop a prediction model that can determine the “wearing-off” of anti-PD medicine. We used fitness tracker data and self-reported symptoms from a smartphone application in a real-world environment. Two participants wore the fitness tracker for a month while reporting any symptoms using the Wearing-Off Questionnaire (WoQ-9) on a smartphone application. Then, we processed and combined the datasets for each participant’s models. Our analysis produced prediction models for each participant. The average balanced accuracy with the best hyperparameters was at 70.0–71.7% for participant 1 and 76.1–76.9% for participant 2, suggesting that our approach would be helpful to manage the “wearing-off” of anti-PD medicine, motor fluctuations of PD patients, and customized treatment for PD patients. Full article
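Balanced accuracy, the headline metric above, is the mean of per-class recalls; a minimal illustration of the generic metric (not the study's implementation, and the labels below are hypothetical):

```python
def balanced_accuracy(y_true, y_pred):
    # mean of per-class recalls; robust when "wearing-off" reports are rare
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return sum(recalls) / len(recalls)

# e.g. 4 "wearing-off" (1) and 2 "normal" (0) reports, hypothetical labels
score = balanced_accuracy([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 1])
```

Because each class contributes equally, a classifier that always predicts the majority class scores only 0.5 here, unlike plain accuracy.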

15 pages, 4967 KiB  
Article
Human Activity Classification Based on Angle Variance Analysis Utilizing the Poincare Plot
by Solaiman Ahmed, Tanveer Ahmed Bhuiyan, Taiki Kishi, Manabu Nii and Syoji Kobashi
Appl. Sci. 2021, 11(16), 7230; https://doi.org/10.3390/app11167230 - 05 Aug 2021
Viewed by 2276
Abstract
We propose a single-sensor activity classification method in which the Poincare plot is introduced to analyze the variance of the angle between the acceleration vector and gravity, calculated from raw accelerometer data. Two datasets, ‘Human Activity Recognition’ and ‘MHealth’, were used to develop the model for classifying activities from low to vigorous intensity and for posture estimation. Short-term and long-term variability properties of the Poincare plot were used to classify activities according to the vibrational intensity of body movement. A value resembling the ‘count’ activity classification metric of the commercially available Actigraph device was used to assess the feasibility of the proposed classification algorithm. For the HAR dataset, laying, sitting, standing, and walking activities were classified. The Poincare plot parameters SD1, SD2, and SDRR of the angle (for the angle variance analysis) and the mean counts of the X-, Y-, and Z-axes were fitted to a support vector machine (SVM) classifier individually and jointly. Both the variance- and count-based methods achieve 100% accuracy in static–dynamic classification. The proposed method separates laying from the other static conditions with 100% accuracy, whereas the count-based method reaches 98.08% accuracy with 10-fold cross-validation. In sitting–standing classification, the proposed angle-based algorithm shows 88% accuracy, whereas the count-based approach reaches 58% with an SVM classifier and 10-fold cross-validation. In classifying the variants of dynamic activities in the MHealth dataset, both the angle variance-based and count-based methods achieve 100% accuracy with fivefold cross-validation and SVM classifiers. Full article
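The Poincare descriptors SD1 and SD2 used above can be computed directly from successive samples; a sketch assuming the standard ellipse-based definitions (dispersion perpendicular to and along the identity line), not the authors' exact code:

```python
import numpy as np

def poincare_sd(x):
    # Poincare plot of (x_n, x_{n+1}): SD1 is short-term variability,
    # SD2 is long-term variability
    x = np.asarray(x, dtype=float)
    d = x[1:] - x[:-1]                 # differences -> axis perpendicular to identity
    s = x[1:] + x[:-1]                 # sums -> axis along identity line
    sd1 = np.std(d) / np.sqrt(2.0)
    sd2 = np.std(s) / np.sqrt(2.0)
    return sd1, sd2
```

For a strictly alternating series SD2 vanishes while SD1 is large, which matches the intuition that SD1 captures beat-to-beat (here sample-to-sample) variability.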

16 pages, 6089 KiB  
Article
Prediction of COVID-19 from Chest CT Images Using an Ensemble of Deep Learning Models
by Shreya Biswas, Somnath Chatterjee, Arindam Majee, Shibaprasad Sen, Friedhelm Schwenker and Ram Sarkar
Appl. Sci. 2021, 11(15), 7004; https://doi.org/10.3390/app11157004 - 29 Jul 2021
Cited by 37 | Viewed by 3670
Abstract
The novel SARS-CoV-2 virus, responsible for the dangerous pneumonia-type disease COVID-19, has undoubtedly changed the world, killing at least 3,900,000 people as of June 2021 and compromising the health of millions across the globe. Although vaccination has begun, in developing countries such as India the rollout is far from complete. Rapid diagnosis of COVID-19 can therefore restrict its spread and help flatten the epidemic curve, and an automated identification framework is needed as the quickest diagnostic option. Meanwhile, Computed Tomography (CT) imaging reveals that the attributes of these images for COVID-19-infected patients differ from those of healthy patients with or without other respiratory diseases, such as pneumonia. This study aims to establish an effective COVID-19 prediction model from chest CT images using efficient transfer learning (TL) models. Initially, we used three standard deep learning (DL) models, namely VGG-16, ResNet50, and Xception, for the prediction of COVID-19. After that, we proposed a mechanism to combine the above-mentioned pre-trained models to improve the overall prediction capability of the system. The proposed model provides 98.79% classification accuracy and a high F1-score of 0.99 on the publicly available SARS-CoV-2 CT dataset. The model proposed in this study is effective for the accurate screening of COVID-19 CT scans and, hence, can be a promising supplementary diagnostic tool for frontline clinical specialists. Full article
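The paper proposes its own combination mechanism for the three pre-trained models; as a generic illustration of the idea, soft voting simply averages the models' class probabilities before taking the argmax (all probability values below are made up):

```python
import numpy as np

# hypothetical softmax outputs of the three pre-trained models for 3 scans
# and 2 classes (non-COVID, COVID); values are illustrative only
p_vgg16    = np.array([[0.9, 0.1], [0.4, 0.6], [0.7, 0.3]])
p_resnet50 = np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
p_xception = np.array([[0.7, 0.3], [0.5, 0.5], [0.1, 0.9]])

# generic soft-voting fusion: average class probabilities, then argmax
p_ensemble = (p_vgg16 + p_resnet50 + p_xception) / 3.0
labels = p_ensemble.argmax(axis=1)
```

Note how the third scan is decided by the confident Xception vote even though the other two models lean the other way; weighted averaging is a common refinement.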

15 pages, 2036 KiB  
Article
A Study of Predictive Models for Early Outcomes of Post-Prostatectomy Incontinence: Machine Learning Approach vs. Logistic Regression Analysis Approach
by Seongkeun Park and Jieun Byun
Appl. Sci. 2021, 11(13), 6225; https://doi.org/10.3390/app11136225 - 05 Jul 2021
Cited by 6 | Viewed by 2130
Abstract
Background: Post-prostatectomy incontinence (PPI) is a major complication that can significantly decrease quality of life. Approximately 20% of patients experience consistent PPI as long as 1 year after radical prostatectomy (RP). This study develops a preoperative predictive model and compares its diagnostic performance with conventional tools. Methods: A total of 166 prostate cancer patients who underwent magnetic resonance imaging (MRI) and RP were evaluated. According to the date of the RP, patients were divided into a development cohort (n = 109) and a test cohort (n = 57). Patients were classified as PPI early-recovery or consistent on the basis of pad usage for incontinence at 3 months after RP. Uni- and multi-variable logistic regression analyses were performed to identify factors associated with early recovery from PPI. Four well-known machine learning algorithms (k-nearest neighbor, decision tree, support-vector machine (SVM), and random forest) and a logistic regression model were used to build prediction models for recovery from PPI using preoperative clinical and imaging data. The performances of the prediction models were assessed internally and externally using sensitivity, specificity, accuracy, and area-under-the-curve values; estimated probabilities and the actual proportion of cases recovering from PPI within 3 months were compared using a chi-squared test. Results: Clinical and imaging findings revealed that age (70.1 years old for the PPI early-recovery group vs. 72.8 years old for the PPI consistent group), membranous urethral length (MUL; 15.7 mm for the PPI early-recovery group vs. 13.9 mm for the PPI consistent group), and obturator internal muscle (18.2 mm for the PPI early-recovery group vs. 17.5 mm for the PPI consistent group) were significantly different between the PPI early-recovery and consistent groups (all p-values < 0.05).
Multivariate analysis confirmed that age (odds ratio = 1.07, 95% confidence interval = 1.02–1.14, p-value = 0.007) and MUL (odds ratio = 0.87, 95% confidence interval = 0.80–0.95, p-value = 0.002) were significant independent factors for early recovery. The prediction model using machine learning algorithms showed superior diagnostic performance compared with conventional logistic regression (AUC = 0.59 ± 0.07), especially SVM (AUC = 0.65 ± 0.07). Moreover, all models showed good calibration between the estimated probability and actual observed proportion of cases of recovery from PPI within 3 months. Conclusions: Preoperative clinical data and anatomic features on preoperative MRI can be used to predict early recovery from PPI after RP, and machine learning algorithms provide greater diagnostic accuracy compared with conventional statistical approaches. Full article
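The reported odds ratios have a simple multiplicative reading: in logistic regression, OR = exp(β), so the abstract's OR of 1.07 per year of age compounds over multiple years. A short worked example (illustrative arithmetic only):

```python
import math

# OR = exp(beta): an OR of 1.07 per year of age means each extra year
# multiplies the odds of early recovery by 1.07
beta_age = math.log(1.07)

# ten additional years scale the odds by 1.07 ** 10 (roughly doubling them)
ten_year_or = math.exp(10.0 * beta_age)
```

The same reading applies to the MUL odds ratio of 0.87: each extra millimeter multiplies the odds by 0.87, i.e., longer membranous urethral length favors early recovery since the outcome is coded accordingly.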

26 pages, 8801 KiB  
Article
Augmented EHR: Enrichment of EHR with Contents from Semantic Web Sources
by Alejandro Mañas-García, José Alberto Maldonado, Mar Marcos, Diego Boscá and Montserrat Robles
Appl. Sci. 2021, 11(9), 3978; https://doi.org/10.3390/app11093978 - 27 Apr 2021
Viewed by 2352
Abstract
This work presents methods to combine data from the Semantic Web into existing EHRs, leading to an augmented EHR. An existing EHR extract is augmented by combining it with additional information from external sources, typically linked data sources. The starting point is a standardized EHR extract described by an archetype. The method consists of combining specific data from the original EHR with contents from the external information source by building a semantic representation, which is used to query the external source. The results are converted into a standardized EHR extract according to an archetype. This work sets the foundations to transform Semantic Web contents into normalized EHR extracts. Finally, to exemplify the approach, the work includes a practical use case in which the summarized EHR is augmented with drug–drug interactions and disease-related treatment information. Full article

18 pages, 39277 KiB  
Article
Deep ConvLSTM Network with Dataset Resampling for Upper Body Activity Recognition Using Minimal Number of IMU Sensors
by Xiang Yang Lim, Kok Beng Gan and Noor Azah Abd Aziz
Appl. Sci. 2021, 11(8), 3543; https://doi.org/10.3390/app11083543 - 15 Apr 2021
Cited by 9 | Viewed by 2932
Abstract
Human activity recognition (HAR) is the study of identifying specific human movements and actions based on images, accelerometer data, and inertial measurement unit (IMU) sensors. In sensor-based HAR applications, most researchers have used many IMU sensors to obtain accurate HAR classification; this not only limits deployment but also increases the difficulty and discomfort for users. As reported in the literature, the original model used data from 19 sensors consisting of accelerometers and IMU sensors. Imbalanced class distribution is another challenge for recognizing human activity in real life: a classifier trained on an imbalanced dataset may predict the majority classes with very high accuracy while its overall performance degrades. In this paper, two approaches, namely resampling and multiclass focal loss, were used to address the imbalanced dataset. The resampling method was used to reconstruct the imbalanced class distribution of the IMU sensor dataset prior to model development and learning with the cross-entropy loss function. A deep ConvLSTM network with a minimal number of IMU sensors was used to develop the upper-body HAR model. Alternatively, the multiclass focal loss function was used in the HAR model to classify minority classes without resampling the imbalanced dataset. Based on the experimental results, the HAR model developed using the cross-entropy loss function and the reconstructed dataset achieved a good performance of 0.91 in both model accuracy and F1-score. The HAR model with the multiclass focal loss function and the imbalanced dataset has slightly lower model accuracy and F1-score, both within 1% of the resampling method. In conclusion, the upper-body HAR model using a minimal number of IMU sensors, with imbalanced class distribution properly handled by the resampling method, is useful for the assessment of home-based rehabilitation involving activities of daily living. Full article
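The multiclass focal loss mentioned above down-weights easy, well-classified samples via the factor (1 − p_t)^γ; a minimal numpy sketch of the generic formulation (not necessarily the paper's exact variant):

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0):
    # FL(p_t) = -(1 - p_t)^gamma * log(p_t), averaged over the batch;
    # gamma = 0 recovers plain cross-entropy
    p_t = probs[np.arange(len(targets)), targets]
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))
```

With γ = 2, a confidently correct prediction (p_t = 0.9) contributes about 100× less loss than under cross-entropy, so gradient signal concentrates on the hard minority-class samples.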

20 pages, 4579 KiB  
Article
Retinal Image Analysis for Diabetes-Based Eye Disease Detection Using Deep Learning
by Tahira Nazir, Aun Irtaza, Ali Javed, Hafiz Malik, Dildar Hussain and Rizwan Ali Naqvi
Appl. Sci. 2020, 10(18), 6185; https://doi.org/10.3390/app10186185 - 05 Sep 2020
Cited by 64 | Viewed by 11305
Abstract
Diabetic patients are at risk of developing different eye diseases, i.e., diabetic retinopathy (DR), diabetic macular edema (DME), and glaucoma. DR harms the retina, DME develops through the accumulation of fluid in the macula, and glaucoma damages the optic disk and causes vision loss in advanced stages. However, due to slow progression, these diseases show few signs in early stages, making detection a difficult task. Therefore, a fully automated system is required to support detection and screening at early stages. In this paper, an automated disease localization and segmentation approach based on the Fast Region-based Convolutional Neural Network (FRCNN) algorithm with fuzzy k-means (FKM) clustering is presented. FRCNN is an object detection approach that requires bounding-box annotations, which the datasets do not provide; therefore, we generated these annotations from the ground truths. Afterward, FRCNN is trained on the annotated images to localize lesions, which are then segmented out through FKM clustering. The segmented regions are then compared against the ground truths through intersection-over-union operations. For performance evaluation, we used the Diaretdb1, MESSIDOR, ORIGA, DR-HAGIS, and HRF datasets. A rigorous comparison against the latest methods confirms the efficacy of the approach in terms of both disease detection and segmentation. Full article
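The intersection-over-union comparison used for evaluation has a one-line definition for binary masks; a generic sketch:

```python
import numpy as np

def iou(pred, truth):
    # intersection-over-union between two binary segmentation masks:
    # |pred AND truth| / |pred OR truth|
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter) / float(union) if union else 1.0
```

An IoU of 1.0 means perfect overlap; a common convention treats two empty masks as a perfect match, as done here.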

19 pages, 1549 KiB  
Article
Enhancing U-Net with Spatial-Channel Attention Gate for Abnormal Tissue Segmentation in Medical Imaging
by Trinh Le Ba Khanh, Duy-Phuong Dao, Ngoc-Huynh Ho, Hyung-Jeong Yang, Eu-Tteum Baek, Gueesang Lee, Soo-Hyung Kim and Seok Bong Yoo
Appl. Sci. 2020, 10(17), 5729; https://doi.org/10.3390/app10175729 - 19 Aug 2020
Cited by 56 | Viewed by 8761
Abstract
In recent years, deep learning has dominated medical image segmentation. Encoder-decoder architectures, such as U-Net, can be used in state-of-the-art models with powerful designs that are achieved by implementing skip connections that propagate local information from an encoder path to a decoder path to retrieve detailed spatial information lost by pooling operations. Despite their effectiveness for segmentation, these naïve skip connections still have some disadvantages. First, multi-scale skip connections tend to use unnecessary information and computational resources, where likable low-level encoder features are repeatedly used at multiple scales. Second, the contextual information of the low-level encoder feature is insufficient, leading to poor performance for pixel-wise recognition when concatenating with the corresponding high-level decoder feature. In this study, we propose a novel spatial-channel attention gate that addresses the limitations of plain skip connections. This can be easily integrated into an encoder-decoder network to effectively improve the performance of the image segmentation task. Comprehensive results reveal that our spatial-channel attention gate remarkably enhances the segmentation capability of the U-Net architecture with minimal added computational overhead. The experimental results show that our proposed method outperforms the conventional deep networks in terms of Dice score, achieving 71.72%. Full article
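The gating idea can be sketched schematically: the encoder feature map is re-weighted channel-wise and pixel-wise by sigmoid gates before concatenation, instead of being passed through a plain skip connection. The projections below are stand-ins for learned convolutions and do not reproduce the paper's exact architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_channel_gate(enc, proj_channel, proj_spatial):
    # enc: encoder feature map of shape (C, H, W)
    # channel attention: one gate per channel from global average pooling
    ch = sigmoid(proj_channel @ enc.mean(axis=(1, 2)))        # shape (C,)
    # spatial attention: one gate per pixel from a 1x1-style projection
    sp = sigmoid(np.tensordot(proj_spatial, enc, axes=1))     # shape (H, W)
    # gated feature, same shape as enc, ready to concatenate with the decoder
    return enc * ch[:, None, None] * sp[None, :, :]
```

In a real network `proj_channel` and `proj_spatial` would be trained 1×1 convolutions conditioned on the decoder feature; here they are plain matrices to keep the sketch self-contained.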

15 pages, 1154 KiB  
Article
Emotion Recognition Using Convolutional Neural Network with Selected Statistical Photoplethysmogram Features
by MinSeop Lee, Yun Kyu Lee, Myo-Taeg Lim and Tae-Koo Kang
Appl. Sci. 2020, 10(10), 3501; https://doi.org/10.3390/app10103501 - 19 May 2020
Cited by 45 | Viewed by 4146
Abstract
Emotion recognition research has been conducted using various physiological signals. In this paper, we propose an efficient photoplethysmogram-based method that fuses the deep features extracted by two deep convolutional neural networks and the statistical features selected by Pearson’s correlation technique. A photoplethysmogram (PPG) signal can be easily obtained through many devices, and the procedure for recording this signal is simpler than that for other physiological signals. The normal-to-normal (NN) interval values of heart rate variability (HRV) were utilized to extract the time domain features, and the normalized PPG signal was used to acquire the frequency domain features. Then, we selected features that correlated highly with an emotion through Pearson’s correlation. These statistical features were fused with deep-learning features extracted from a convolutional neural network (CNN). The PPG signal and the NN interval were used as the inputs of the CNN to extract the features, and the total concatenated features were utilized to classify the valence and the arousal, which are the basic parameters of emotion. The Database for Emotion Analysis using Physiological signals (DEAP) was chosen for the experiment, and the results demonstrated that the proposed method achieved a noticeable performance with a short recognition interval. Full article
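The statistical-feature selection step described above keeps only features that correlate strongly with the label; a generic sketch using Pearson's r (the threshold below is hypothetical, not from the paper):

```python
import numpy as np

def select_by_pearson(X, y, threshold=0.5):
    # keep indices of feature columns whose |Pearson r| with the label
    # exceeds the threshold
    keep = []
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y)[0, 1]
        if abs(r) > threshold:
            keep.append(j)
    return keep
```

The selected statistical features would then be concatenated with the CNN-derived features, as in the fusion scheme the abstract describes.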

20 pages, 3179 KiB  
Article
Fuzzy Logic Systems for Diagnosis of Renal Cancer
by Nikita Jindal, Jimmy Singla, Balwinder Kaur, Harsh Sadawarti, Deepak Prashar, Sudan Jha, Gyanendra Prasad Joshi and Changho Seo
Appl. Sci. 2020, 10(10), 3464; https://doi.org/10.3390/app10103464 - 17 May 2020
Cited by 8 | Viewed by 3764
Abstract
Renal cancer is a serious and common type of cancer affecting older people. Its growth can be stopped by detecting it before it reaches an advanced or end stage; hence, renal cancer must be identified and diagnosed in the initial stages. In this research paper, an intelligent medical diagnostic system for renal cancer is developed using fuzzy and neuro-fuzzy techniques. The fuzzy inference system uses two layers: the first layer outputs whether the patient has renal cancer or not, and the second layer detects the current stage for affected patients. In the neuro-fuzzy medical diagnostic system, Gaussian membership functions are used for all the input variables considered for diagnosis. The performance of the developed systems is compared using suitable parameters. The results of this comparison show that the intelligent medical system developed using the neuro-fuzzy model gives more precise and accurate results than existing systems. Full article
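The Gaussian membership functions mentioned for the neuro-fuzzy system have the standard form μ(x) = exp(−(x − c)² / 2σ²); a sketch with an illustrative rule firing strength (the centers, widths, and variables below are hypothetical, not from the paper):

```python
import math

def gauss_mf(x, center, sigma):
    # Gaussian membership function: mu(x) = exp(-(x - center)^2 / (2 sigma^2))
    return math.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

# illustrative firing strength of one fuzzy rule: AND of two memberships
# via min (hypothetical input variables and parameters)
mu_a = gauss_mf(4.2, center=5.0, sigma=1.0)
mu_b = gauss_mf(68.0, center=70.0, sigma=8.0)
firing = min(mu_a, mu_b)
```

Membership peaks at 1 exactly at the center and decays smoothly, which is why Gaussian functions are a common choice when the membership parameters are to be tuned by a neural learning stage.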

11 pages, 1898 KiB  
Article
Non-REM Sleep Marker for Wearable Monitoring: Power Concentration of Respiratory Heart Rate Fluctuation
by Junichiro Hayano, Norihiro Ueda, Masaya Kisohara, Yutaka Yoshida, Haruhito Tanaka and Emi Yuda
Appl. Sci. 2020, 10(9), 3336; https://doi.org/10.3390/app10093336 - 11 May 2020
Cited by 9 | Viewed by 3165
Abstract
A variety of heart rate variability (HRV) indices have been reported to estimate sleep stages, but the associations are modest and lack a solid physiological basis. Non-REM (NREM) sleep is associated with increased regularity of respiratory frequency, which results in the concentration of high-frequency (HF) HRV power into a narrow frequency range. Using this physiological feature, we developed a new HRV sleep index named Hsi to quantify the degree of HF power concentration. We analyzed 11,636 consecutive 5-min segments of electrocardiographic (ECG) signal from polysomnographic data in 141 subjects and calculated Hsi and conventional HRV indices for each segment. Hsi was greater during NREM (mean [SD], 75.1 [8.3]%) than during wake (61.0 [10.3]%) and REM (62.0 [8.4]%) stages. Receiver-operating characteristic curve analysis revealed that Hsi discriminated NREM from wake and REM segments with an area under the curve of 0.86, which was greater than those of heart rate (0.642), peak HF power (0.75), low-to-high frequency ratio (0.77), and scaling exponent α (0.77). With a cutoff >70%, Hsi detected NREM segments with 77% sensitivity, 80% specificity, and a Cohen’s kappa coefficient of 0.57. Hsi may provide an accurate NREM sleep marker for ECG and pulse wave signals obtained from wearable sensors. Full article
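The exact definition of Hsi is given in the paper; as a rough illustration of "HF power concentration", one can measure the fraction of HF-band (0.15–0.40 Hz) spectral power lying within a narrow window around the HF peak. All parameters below (resampling rate, window width) are illustrative:

```python
import numpy as np

def hf_concentration(rr, fs=4.0, band=(0.15, 0.40), width=0.05):
    # fraction of HF-band power within +/- width Hz of the HF spectral peak;
    # rr is an evenly resampled R-R (or tachogram) series at fs Hz
    rr = np.asarray(rr, dtype=float) - np.mean(rr)
    psd = np.abs(np.fft.rfft(rr)) ** 2
    freqs = np.fft.rfftfreq(len(rr), d=1.0 / fs)
    hf = (freqs >= band[0]) & (freqs <= band[1])
    peak = freqs[hf][np.argmax(psd[hf])]
    narrow = hf & (np.abs(freqs - peak) <= width)
    return float(psd[narrow].sum() / psd[hf].sum())
```

A perfectly regular respiratory rhythm puts nearly all HF power at a single frequency, driving the index toward 1, which mirrors the NREM behavior the abstract describes.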

13 pages, 2827 KiB  
Article
Impact of Heart Rate Fragmentation on the Assessment of Heart Rate Variability
by Junichiro Hayano, Masaya Kisohara, Norihiro Ueda and Emi Yuda
Appl. Sci. 2020, 10(9), 3314; https://doi.org/10.3390/app10093314 - 10 May 2020
Cited by 16 | Viewed by 5282
Abstract
Heart rate fragmentation (HRF) is a type of sinoatrial instability characterized by the frequent (often every-beat) appearance of inflections in the R-R interval time series, despite the electrocardiogram appearing to show sinus rhythm. Because the assessment of parasympathetic function by heart rate variability (HRV) analysis depends on the assumption that the high-frequency component (HF, 0.15–0.4 Hz) of HRV is mediated solely by the cardiac parasympathetic nerve, HRF that is measured as part of HF power confounds the parasympathetic functional assessment by HRV. In this study, we analyzed HRF in 24-h electrocardiogram big data and investigated the changes in HRF with age and sex and its influence on the assessment of HRV. We observed that HRF is often seen during childhood (0–20 years) and increases after 75 years, and that it has a large impact on individual differences in HF power at ages 60–90. Full article
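Fragmentation is commonly quantified by the percentage of inflection points (PIP) in the R-R series, i.e., how often successive differences reverse sign. A simplified sketch (it ignores zero differences, which fuller definitions also count as inflections):

```python
def pip_percent(rr):
    # percentage of inflection points: sign reversals between successive
    # R-R interval differences, relative to the number of intervals
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    inflections = sum(1 for d1, d2 in zip(diffs, diffs[1:]) if d1 * d2 < 0)
    return 100.0 * inflections / len(rr)
```

A smoothly accelerating or decelerating rhythm scores near 0%, while an every-beat alternation, the hallmark of fragmentation, scores high even though both series can have similar HF power.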
