Biomedical Signal Processing, Data Mining and Artificial Intelligence

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: closed (31 May 2022) | Viewed by 26297

Special Issue Editors


Guest Editor
Department of Computer Engineering, Pukyong National University, Busan, Korea
Interests: biomedical signal processing; algorithms; handwriting recognition; EEG recognition; pattern recognition

Guest Editor
Department of Biomedical Engineering, Chonnam National University, Yeosu, Korea
Interests: EEG; cognitive neuroscience; neuroimaging; neurobiology and brain physiology; functional brain imaging; sleep; memory and learning; cognitive neuropsychology; behavioral neuroscience; emotion recognition

Guest Editor
Department of Radiological Science at Health Sciences Division, Dongseo University, Busan, Korea
Interests: therapeutic exercise; masseter muscle thickness; therapeutic effects; anodal transcranial; kinesiology

Guest Editor
Division of Software Convergence, Cheongju University, Cheongju, Korea
Interests: computer vision; artificial intelligence; medical image processing; factory automation

Special Issue Information

Dear Colleagues,

Thanks to advances in artificial intelligence, analyzing biomedical signals and developing algorithms to understand and recognize the patterns in those signals are both entering a new era. The quality and accuracy of signal processing are improving, and real-world applications utilizing biomedical signals are now available. Medical environments are also about to change. Automatic analysis of EEGs, ECGs, EMGs, and medical images assists medical doctors, and for some diseases the diagnostic quality is approaching the level of experts in the field.

In this Special Issue, we invite submissions exploring the advances in biomedical signal processing that utilize the algorithms and techniques of data mining and artificial intelligence. The term artificial intelligence in this Special Issue includes artificial neural networks, as well as conventional pattern recognition and data mining techniques such as random forests, support vector machines, and conventional statistical analysis.

Application areas include, but are not limited to, EEG/EMG/EOG/PPG analysis of healthy participants or patients, recognition and verification utilizing biomedical signals, medical image analysis, motion analysis and recognition, and human–computer interfaces utilizing biosignals. Applications involving hardware implementations, as well as survey and review papers, are also welcome.
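As a purely illustrative sketch of the kind of conventional pattern-recognition pipeline referred to above (hand-crafted features plus a support vector machine), the following Python snippet classifies synthetic one-dimensional signals. All data, features, and parameters are assumptions made for the example and do not come from any paper in this issue.

```python
# Illustrative only: synthetic "biosignals" classified with a conventional
# pattern-recognition pipeline (hand-crafted features + SVM).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_signal(label, n_samples=256):
    """Synthetic 1-D signal: class 1 contains an extra 10 Hz component."""
    t = np.linspace(0, 1, n_samples)
    sig = np.sin(2 * np.pi * 3 * t) + 0.5 * rng.standard_normal(n_samples)
    if label == 1:
        sig += 0.8 * np.sin(2 * np.pi * 10 * t)
    return sig

def extract_features(sig):
    """Simple time- and frequency-domain features."""
    spectrum = np.abs(np.fft.rfft(sig))
    return [sig.mean(), sig.std(), sig.max() - sig.min(),
            spectrum.argmax(), spectrum.max()]

labels = rng.integers(0, 2, size=200)
X = np.array([extract_features(make_signal(y)) for y in labels])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

A random forest or a conventional statistical test could be substituted for the SVM within the same pipeline.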

Prof. Dr. Won-Du Chang
Dr. Do-Won Kim
Dr. Youngjin Jung
Dr. Hyun Jun Park
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • biomedical signal processing
  • data mining
  • pattern recognition
  • artificial neural network
  • healthcare
  • medical image processing
  • motion analysis

Published Papers (7 papers)


Research

14 pages, 531 KiB  
Article
A Photoplethysmogram Dataset for Emotional Analysis
by Ye-Ji Jin, Erkinov Habibilloh, Ye-Seul Jang, Taejun An, Donghyun Jo, Saron Park and Won-Du Chang
Appl. Sci. 2022, 12(13), 6544; https://doi.org/10.3390/app12136544 - 28 Jun 2022
Cited by 1 | Viewed by 2143
Abstract
In recent years, research on emotion classification based on physiological signals has actively attracted scholars' attention worldwide. Several studies and experiments have been conducted to analyze human emotions based on physiological signals, including the use of electrocardiograms (ECGs), electroencephalograms (EEGs), and photoplethysmograms (PPGs). Although the achievements with ECGs and EEGs are progressive, reaching accuracies over 90%, the number of studies utilizing PPGs is limited, and their accuracies are relatively lower than those of other signals. One of the difficulties in studying PPGs for emotional analysis is the lack of open datasets (to the best of the authors' knowledge, only a single such dataset exists). This study introduces a new PPG dataset for emotional analysis. A total of 72 PPGs were recorded from 18 participants while watching short video clips and analyzed in the time and frequency domains. Moreover, emotion classification accuracies obtained on the presented dataset are reported for various neural network structures. The results show that this dataset can be used for further emotional analysis with PPGs.
(This article belongs to the Special Issue Biomedical Signal Processing, Data Mining and Artificial Intelligence)
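A hedged sketch of how such a PPG dataset might be used for emotion classification is given below. The recording length, sampling rate, label scheme, and features are assumptions made for illustration; this is not the authors' code or the dataset's actual format.

```python
# Hypothetical sketch: emotion classification from windowed PPG signals.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Assumed layout: 72 recordings x 2560 samples (e.g., 20 s at 128 Hz),
# with one binary emotion label (e.g., low/high valence) per recording.
ppg = rng.standard_normal((72, 2560))
labels = rng.integers(0, 2, size=72)

def ppg_features(x):
    """Time- and frequency-domain descriptors of one PPG window."""
    spec = np.abs(np.fft.rfft(x))
    return [x.mean(), x.std(), np.percentile(x, 90) - np.percentile(x, 10),
            spec[:20].sum(), spec.argmax()]

X = np.array([ppg_features(x) for x in ppg])
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
print("5-fold accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```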

27 pages, 3501 KiB  
Article
TTCNN: A Breast Cancer Detection and Classification towards Computer-Aided Diagnosis Using Digital Mammography in Early Stages
by Sarmad Maqsood, Robertas Damaševičius and Rytis Maskeliūnas
Appl. Sci. 2022, 12(7), 3273; https://doi.org/10.3390/app12073273 - 23 Mar 2022
Cited by 63 | Viewed by 6387
Abstract
Breast cancer is a major research area in the medical image analysis field; it is a dangerous disease and a major cause of death among women. Early and accurate diagnosis of breast cancer based on digital mammograms can enhance disease detection accuracy. Medical imagery must be detected, segmented, and classified for computer-aided diagnosis (CAD) systems to help radiologists accurately diagnose breast lesions. Therefore, an accurate breast cancer detection and classification approach is proposed for the screening of mammograms. In this paper, we present a deep learning system that can identify breast cancer in mammogram screening images using an “end-to-end” training strategy that efficiently uses mammography images for computer-aided breast cancer recognition in the early stages. First, the proposed approach implements a modified contrast enhancement method in order to refine the detail of edges in the source mammogram images. Next, the transferable texture convolutional neural network (TTCNN) is presented to enhance classification performance, and an energy layer is integrated in this work to extract texture features from the convolutional layer. The proposed approach consists of only three convolutional layers and one energy layer, rather than a pooling layer. In the third stage, we analyzed the performance of TTCNN based on deep features of convolutional neural network models (InceptionResNet-V2, Inception-V3, VGG-16, VGG-19, GoogLeNet, ResNet-18, ResNet-50, and ResNet-101). The deep features are extracted by determining the best layers that enhance the classification accuracy. In the fourth stage, all the extracted feature vectors are fused using the convolutional sparse image decomposition approach and, finally, the best features are selected using the entropy-controlled firefly method. The proposed approach was evaluated on the DDSM, INbreast, and MIAS datasets and attained an average accuracy of 97.49%. Our proposed transferable texture CNN-based method for classifying screening mammograms has outperformed prior methods. These findings demonstrate that automatic deep learning algorithms can be easily trained to achieve high accuracy on diverse mammography images and can offer great potential to improve clinical tools and minimize false positive and false negative screening mammography results.
(This article belongs to the Special Issue Biomedical Signal Processing, Data Mining and Artificial Intelligence)
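The sketch below illustrates only the generic "deep features from pretrained CNNs, then fuse, then select" stage mentioned in the abstract. The paper's convolutional sparse image decomposition fusion and entropy-controlled firefly selection are replaced here by plain concatenation and a variance filter, so this is an illustration rather than the authors' pipeline.

```python
# Hedged sketch: extract deep features from two CNN backbones, fuse, and select.
# Fusion and selection are simple stand-ins, not the paper's methods.
import torch
import torch.nn as nn
from torchvision import models

def feature_extractor(backbone):
    """Drop the final classification layer, keep the pooled deep features."""
    return nn.Sequential(*list(backbone.children())[:-1])

resnet18 = feature_extractor(models.resnet18())   # load pretrained weights in practice
resnet50 = feature_extractor(models.resnet50())

x = torch.randn(4, 3, 224, 224)         # a batch of preprocessed mammogram patches (dummy)
with torch.no_grad():
    f1 = resnet18(x).flatten(1)          # (4, 512)
    f2 = resnet50(x).flatten(1)          # (4, 2048)

fused = torch.cat([f1, f2], dim=1)       # simple concatenation as a fusion stand-in
keep = fused.var(dim=0).topk(256).indices   # crude feature selection stand-in
selected = fused[:, keep]
print(selected.shape)                    # torch.Size([4, 256])
```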

13 pages, 369 KiB  
Article
Predicting Children with ADHD Using Behavioral Activity: A Machine Learning Analysis
by Md. Maniruzzaman, Jungpil Shin and Md. Al Mehedi Hasan
Appl. Sci. 2022, 12(5), 2737; https://doi.org/10.3390/app12052737 - 07 Mar 2022
Cited by 13 | Viewed by 5378
Abstract
Attention deficit hyperactivity disorder (ADHD) is one of childhood's most frequent neurobehavioral disorders. The purpose of this study is to: (i) extract the most prominent risk factors for children with ADHD; and (ii) propose a machine learning (ML)-based approach to classify children as either having ADHD or healthy. We extracted the data of 45,779 children aged 3–17 years from the 2018–2019 National Survey of Children's Health (NSCH, 2018–2019). About 5218 (11.4%) of the children had ADHD, and the rest were healthy. Since the class labels were highly imbalanced, we adopted a combination of oversampling and undersampling approaches to balance them. We adopted logistic regression (LR) to extract the significant factors for children with ADHD based on p-values (<0.05). Eight ML-based classifiers, namely random forest (RF), Naïve Bayes (NB), decision tree (DT), XGBoost, k-nearest neighbors (KNN), multilayer perceptron (MLP), support vector machine (SVM), and one-dimensional convolutional neural network (1D CNN), were adopted for the prediction of children with ADHD. The average age of the children with ADHD was 12.4 ± 3.4 years. Our findings showed that the RF-based classifier provided the highest classification accuracy of 85.5%, a sensitivity of 84.4%, a specificity of 86.4%, and an AUC of 0.94. This study illustrated that LR combined with an RF-based classifier could provide excellent accuracy for classifying and predicting children with ADHD. This system will be helpful for the early detection and diagnosis of ADHD.
(This article belongs to the Special Issue Biomedical Signal Processing, Data Mining and Artificial Intelligence)
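A minimal sketch of the described workflow (class balancing, logistic-regression p-value screening, then a random forest) is shown below on synthetic data; the resampling scheme and feature layout are assumptions, not the study's actual NSCH processing.

```python
# Hedged sketch: balance classes, keep features with LR p-values < 0.05, train RF.
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.standard_normal(2000) > 1.5).astype(int)  # imbalanced labels

# Simple hybrid resampling: undersample the majority class, oversample the minority.
maj, mino = np.where(y == 0)[0], np.where(y == 1)[0]
target = (len(maj) + len(mino)) // 2
idx = np.concatenate([rng.choice(maj, target, replace=False),
                      rng.choice(mino, target, replace=True)])
Xb, yb = X[idx], y[idx]

# Feature screening with logistic regression p-values.
logit = sm.Logit(yb, sm.add_constant(Xb)).fit(disp=0)
significant = np.where(logit.pvalues[1:] < 0.05)[0]   # skip the intercept

X_tr, X_te, y_tr, y_te = train_test_split(Xb[:, significant], yb,
                                          test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```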

15 pages, 1921 KiB  
Article
MFDGCN: Multi-Stage Spatio-Temporal Fusion Diffusion Graph Convolutional Network for Traffic Prediction
by Zhengyan Cui, Junjun Zhang, Giseop Noh and Hyun Jun Park
Appl. Sci. 2022, 12(5), 2688; https://doi.org/10.3390/app12052688 - 04 Mar 2022
Cited by 9 | Viewed by 2504
Abstract
Traffic prediction is a popular research topic in the field of Intelligent Transportation Systems (ITS), as it can allocate resources more reasonably, relieve traffic congestion, and improve road traffic efficiency. Graph neural networks are widely used in traffic prediction because they are good at dealing with complex nonlinear structures. Existing traffic prediction studies use distance-based graphs to represent spatial relationships, which ignores the deep connections between non-adjacent spatio-temporal information. The use of a simple approach to fuse spatio-temporal information is not conducive to obtaining long-term deep spatio-temporal dependencies. Therefore, we propose a new deep learning model, the Multi-Stage Spatio-Temporal Fusion Diffusion Graph Convolutional Network (MFDGCN). It generates multiple static and dynamic spatio-temporal association graphs to enhance features and adopts a multi-stage hybrid spatio-temporal fusion method. This promotes the effective fusion of spatio-temporal multimodal information, and the diffusion convolution method is used to model the graph structure and the time series in traffic prediction, respectively. The model can better predict both long- and short-term traffic simultaneously. We evaluated MFDGCN on real road network traffic data, and it shows good performance.
(This article belongs to the Special Issue Biomedical Signal Processing, Data Mining and Artificial Intelligence)
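The sketch below shows a single diffusion graph convolution step of the kind such models build on, in its usual random-walk formulation; the layer sizes, one-directional transition matrix, and activation are simplifying assumptions rather than the authors' exact layer.

```python
# Hedged sketch of one diffusion graph convolution step (random-walk form).
import torch
import torch.nn as nn

class DiffusionConv(nn.Module):
    def __init__(self, in_dim, out_dim, k_hops=2):
        super().__init__()
        self.k_hops = k_hops
        # one weight per diffusion step (0..K), shared across nodes
        self.weights = nn.ModuleList(nn.Linear(in_dim, out_dim, bias=False)
                                     for _ in range(k_hops + 1))

    def forward(self, x, adj):
        # x: (num_nodes, in_dim); adj: (num_nodes, num_nodes) weighted adjacency
        p = adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-6)  # random-walk transition matrix
        out, h = self.weights[0](x), x
        for k in range(1, self.k_hops + 1):
            h = p @ h                      # diffuse features one more hop
            out = out + self.weights[k](h)
        return torch.relu(out)

nodes, feats = 5, 8
adj = torch.rand(nodes, nodes)
layer = DiffusionConv(feats, 16)
print(layer(torch.randn(nodes, feats), adj).shape)   # torch.Size([5, 16])
```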

21 pages, 5255 KiB  
Article
Multiclass Skin Lesion Classification Using a Novel Lightweight Deep Learning Framework for Smart Healthcare
by Long Hoang, Suk-Hwan Lee, Eung-Joo Lee and Ki-Ryong Kwon
Appl. Sci. 2022, 12(5), 2677; https://doi.org/10.3390/app12052677 - 04 Mar 2022
Cited by 47 | Viewed by 4829
Abstract
Skin lesion classification has recently attracted significant attention. Physicians often spend much time analyzing skin lesions because of the high similarity between lesion types. An automated classification system using deep learning can assist physicians in detecting the skin lesion type and improve patient care. Skin lesion classification has become a hot research area with the evolution of deep learning architectures. In this study, we propose a novel method using a new segmentation approach and wide-ShuffleNet for skin lesion classification. First, we calculate the entropy-based weighting and first-order cumulative moment (EW-FCM) of the skin image. These values are used to separate the lesion from the background. Then, we feed the segmentation result into a new deep learning structure, wide-ShuffleNet, to determine the skin lesion type. We evaluated the proposed method on two large datasets: HAM10000 and ISIC2019. Based on our numerical results, EW-FCM and wide-ShuffleNet achieve higher accuracy than state-of-the-art approaches. Additionally, the proposed method is highly lightweight and suitable for small systems such as mobile healthcare systems.
(This article belongs to the Special Issue Biomedical Signal Processing, Data Mining and Artificial Intelligence)
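As a small illustration of the building block behind ShuffleNet-style networks such as the wide-ShuffleNet named above, the sketch below implements the generic channel-shuffle operation; it does not reproduce the paper's architecture or the EW-FCM segmentation.

```python
# Hedged sketch of the channel-shuffle operation used by ShuffleNet-style networks
# to mix information between channel groups after grouped convolutions.
import torch

def channel_shuffle(x, groups):
    # x: (batch, channels, height, width); channels must be divisible by groups
    b, c, h, w = x.shape
    x = x.view(b, groups, c // groups, h, w)   # split channels into groups
    x = x.transpose(1, 2).contiguous()         # interleave the groups
    return x.view(b, c, h, w)

x = torch.arange(2 * 8 * 1 * 1, dtype=torch.float32).view(2, 8, 1, 1)
print(channel_shuffle(x, groups=2).flatten(1)[0])   # channels reordered 0,4,1,5,2,6,3,7
```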

10 pages, 2850 KiB  
Article
Efficient Fuzzy Image Stretching for Automatic Ganglion Cyst Extraction Using Fuzzy C-Means Quantization
by Sun Joo Lee, Doo Heon Song, Kwang Baek Kim and Hyun Jun Park
Appl. Sci. 2021, 11(24), 12094; https://doi.org/10.3390/app112412094 - 19 Dec 2021
Cited by 3 | Viewed by 1764
Abstract
Ganglion cysts are commonly observed in association with the joints and tendons of the appendicular skeleton. Ultrasonography is the favored modality used to manage such benign tumors, but it may suffer from operator subjectivity. In the treatment phase, ultrasonography also provides guidance for aspiration and injection, as well as information regarding the accurate location of the ganglion's pedicle. Thus, in this paper, we propose an automatic ganglion cyst extraction method based on fuzzy stretching and fuzzy C-means quantization. The proposed method, with its carefully designed image-enhancement policy, successfully detects ganglion cysts in 86 out of 90 cases (95.6%) without requiring human intervention.
(This article belongs to the Special Issue Biomedical Signal Processing, Data Mining and Artificial Intelligence)
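A minimal sketch of the fuzzy C-means quantization step named above is given below, applied to raw pixel intensities of a synthetic image; the fuzzy-stretching enhancement and the paper's cyst-extraction rules are omitted, and all parameters are assumptions.

```python
# Hedged sketch: quantizing image intensities with fuzzy C-means (FCM).
import numpy as np

def fcm_quantize(pixels, n_clusters=4, m=2.0, n_iter=50, seed=0):
    """Cluster 1-D intensity values with FCM and return per-pixel hard labels."""
    rng = np.random.default_rng(seed)
    x = pixels.reshape(-1, 1).astype(float)
    centers = rng.choice(x.ravel(), n_clusters).reshape(-1, 1)
    for _ in range(n_iter):
        d = np.abs(x - centers.T) + 1e-9                   # (n_pixels, n_clusters)
        u = 1.0 / (d ** (2 / (m - 1)))                     # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
        centers = ((u ** m).T @ x) / (u ** m).sum(axis=0)[:, None]
    return np.argmin(np.abs(x - centers.T), axis=1), centers.ravel()

image = np.random.default_rng(1).integers(0, 256, size=(64, 64))
labels, levels = fcm_quantize(image)
quantized = levels[labels].reshape(image.shape)            # image reduced to 4 gray levels
print("quantization levels:", np.round(levels, 1), "output shape:", quantized.shape)
```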

10 pages, 2650 KiB  
Article
Recognition of Eye-Written Characters Using Deep Neural Network
by Won-Du Chang, Jae-Hyeok Choi and Jungpil Shin
Appl. Sci. 2021, 11(22), 11036; https://doi.org/10.3390/app112211036 - 22 Nov 2021
Cited by 2 | Viewed by 1632
Abstract
Eye writing is a human–computer interaction tool that translates eye movements into characters using automatic recognition by computers. Eye-written characters are similar in form to handwritten ones, but their shapes are often distorted because of the biosignal's instability or user mistakes. Various conventional methods have been used to overcome these limitations and recognize eye-written characters accurately, but difficulties in reducing the error rates have been reported. This paper proposes a method using a deep neural network with inception modules and an ensemble structure. Preprocessing procedures, which are often required in conventional methods, were minimized in the proposed method. The proposed method was validated in a writer-independent manner using an open dataset of characters eye-written by 18 writers. The method achieved 97.78% accuracy, and the error rates were reduced by almost half compared to those of conventional methods, which indicates that the proposed model successfully learned eye-written characters. Remarkably, this accuracy was achieved in a writer-independent manner, which suggests that a deep neural network model trained using the proposed method would be stable even for new writers.
(This article belongs to the Special Issue Biomedical Signal Processing, Data Mining and Artificial Intelligence)
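The sketch below shows a one-dimensional inception-style block of the kind the abstract refers to: parallel convolutions with different kernel sizes whose outputs are concatenated. The branch widths, kernel sizes, and two-channel eye-movement input are illustrative assumptions, not the paper's exact network.

```python
# Hedged sketch of a 1-D inception-style block for eye-movement sequences.
import torch
import torch.nn as nn

class InceptionBlock1D(nn.Module):
    def __init__(self, in_ch, branch_ch=16):
        super().__init__()
        self.branch1 = nn.Conv1d(in_ch, branch_ch, kernel_size=1)
        self.branch3 = nn.Conv1d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv1d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.pool = nn.Sequential(nn.MaxPool1d(kernel_size=3, stride=1, padding=1),
                                  nn.Conv1d(in_ch, branch_ch, kernel_size=1))

    def forward(self, x):
        # x: (batch, channels, time); e.g., 2 channels for horizontal/vertical eye movement
        outs = [self.branch1(x), self.branch3(x), self.branch5(x), self.pool(x)]
        return torch.relu(torch.cat(outs, dim=1))

x = torch.randn(8, 2, 256)            # a batch of eye-writing traces (dummy)
print(InceptionBlock1D(2)(x).shape)   # torch.Size([8, 64, 256])
```

An ensemble, as described in the abstract, could be formed by training several such networks and averaging their class probabilities.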
