Special Issue "Advances in Mathematical Methods for Machine Learning Algorithms for Computer Aided Diagnostic Systems"

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (31 March 2021).

Special Issue Editors

Prof. Dr. Seungmin Rho
Guest Editor
Department of Industrial Security, Chung-Ang University, Seoul 06974, Korea
Interests: databases; big data analysis; music retrieval; multimedia systems; machine learning; knowledge management; computational intelligence
Assoc. Prof. Dr. Damien Sauveron
Guest Editor
Faculty of Sciences and Techniques, XLIM (UMR CNRS 7252 / University of Limoges), 123, avenue Albert Thomas, 87060 Limoges Cedex, France
Interests: sensors network applications and security; smart home applications and security; IoT security; smart card applications and security; security of Java Card technology; RFID/NFC applications and security; mobile networks applications and security; vehicular network

Special Issue Information

Dear Colleagues,

Computer-aided diagnosis (CAD) systems have gained a great deal of attention in recent years. Medical-imaging-based CAD systems, the most common kind, help physicians by extracting regions of interest from any imaging modality and identifying diseases such as brain tumors, Alzheimer’s disease, Parkinson’s disease, lung nodules, cerebral microbleeds, and many more. These systems can detect such diseases at an early stage, enabling better and more effective treatment; this critical role demands reliable and accurate diagnoses. Machine learning algorithms have recently seen wide use in CAD systems, and advances in these algorithms have proved helpful in improving CAD performance. Accordingly, this Special Issue focuses on the use of current advances in machine learning for medical imaging modalities. It provides a platform for researchers from academia and industry to present novel and unpublished work in the domain of medical imaging, helping to foster future research in medical imaging and related fields.

Dr. Seungmin Rho
Dr. Damien Sauveron
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • algorithm
  • computer aided diagnosis
  • machine learning

Published Papers (11 papers)

Research

Article
DeepBlockShield: Blockchain Agent-Based Secured Clinical Data Management Model from the Deep Web Environment
Mathematics 2021, 9(9), 1069; https://doi.org/10.3390/math9091069 - 10 May 2021
Viewed by 474
Abstract
With the growth of artificial intelligence in healthcare and biomedical research, many researchers are interested in the large amounts of data held by hospitals and medical research centers, and the need for remote medicine services and clinical data utilization is expanding. However, since the misuse and abuse of clinical data cause serious problems, the scope of its use is necessarily limited, both physically and logically, which calls for a security-enhanced data distribution system for medical deep web environments. Therefore, in this paper, we propose a blockchain-based clinical data management model named DeepBlockShield to prevent information leakage between the deep web and the surface web. The blockchain supports data integrity and user validation to enable data sharing in closed networks, while the agent performs integrity verification between the blockchain and the deep web and strengthens the security boundary between the surface web and the deep web. DeepBlockShield verifies a user’s validity through the records of the deep web and the blockchain. Furthermore, we wrap the results of a valid request into a web interface and provide the information to the requester asynchronously. In the experiments, the effect of the block generation cycle and block size on delay time was analyzed to verify the stability of the blockchain network. The results show that the proposed approach guarantees the integrity and availability of clinical data in the deep web environment.

Article
GASVeM: A New Machine Learning Methodology for Multi-SNP Analysis of GWAS Data Based on Genetic Algorithms and Support Vector Machines
Mathematics 2021, 9(6), 654; https://doi.org/10.3390/math9060654 - 18 Mar 2021
Viewed by 360
Abstract
Genome-wide association studies (GWAS) are observational studies of a large set of genetic variants in a sample of individuals, conducted to determine whether any of these variants are linked to a particular trait. In the last two decades, GWAS have contributed to several new discoveries in the field of genetics. This research presents a novel methodology to which GWAS can be applied, based mainly on two machine learning techniques: genetic algorithms and support vector machines. The database employed for the study consisted of information about 370,750 single-nucleotide polymorphisms belonging to 1076 cases of colorectal cancer and 973 controls. Ten pathways with different degrees of relationship to the trait under study were tested. The results show that the proposed methodology is able to detect pathways relevant to a given trait: in this case, colorectal cancer.
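As a rough illustration of the wrapper idea behind this kind of methodology (a genetic algorithm searching over feature subsets, with each subset scored by a classifier), the following self-contained sketch evolves bit masks over a toy feature set. The SVM cross-validation score of the paper is replaced here by a hypothetical surrogate fitness, and all names, sizes, and parameters are illustrative, not the authors' actual method:

```python
import random

# Toy stand-in for SNP data: 8 candidate features, of which features
# 2 and 5 are (by construction) the informative ones. In the paper the
# fitness of a subset would come from SVM cross-validation on GWAS
# data; a surrogate score keeps this sketch self-contained.
N_FEATURES = 8
INFORMATIVE = {2, 5}

def fitness(mask):
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in INFORMATIVE)
    return hits - 0.1 * sum(mask)  # reward signal, penalize subset size

def crossover(a, b, rng):
    point = rng.randrange(1, N_FEATURES)  # single-point crossover
    return a[:point] + b[point:]

def mutate(mask, rng, rate=0.1):
    return [bit ^ (rng.random() < rate) for bit in mask]  # flip bits

def ga_select(generations=40, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]  # keep the best half unchanged
        pop = elite + [mutate(crossover(rng.choice(elite),
                                        rng.choice(elite), rng), rng)
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = ga_select()
selected = {i for i, bit in enumerate(best) if bit}
print("selected features:", sorted(selected))
```

With elitism preserving the best masks and mutation injecting new bits, the search reliably recovers the informative features on this toy problem; the real method replaces the surrogate fitness with a cross-validated SVM score over pathways of SNPs.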

Article
Smart Machinery Monitoring System with Reduced Information Transmission and Fault Prediction Methods Using Industrial Internet of Things
Mathematics 2021, 9(1), 3; https://doi.org/10.3390/math9010003 - 22 Dec 2020
Viewed by 518
Abstract
A monitoring system for smart machinery has been considered one of the most important goals of modern enterprises. Such a system faces serious difficulties: smart machines upload ever more data, and the available internet bandwidth limits the transmission speed of the data and the reliability of the equipment monitoring platform. This paper proposes reducing the periodic information uploaded to the monitoring platform by defining upload events based on the characteristics of the production data from the machines. The proposed methods reduce bandwidth and power consumption, and the monitoring information is reconstructed by the proposed methods, so the full history is preserved without increasing storage demands in the cloud server database. To reduce the downtime caused by machine errors, the proposed system uses machine-learning technology to model the operating status of the machinery for fault prediction. In the experimental results, the smart machinery monitoring system using the Industrial Internet of Things reduces the volume of uploaded information by 54.57% and obtains a 98% prediction accuracy.
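The event-based upload idea (transmit a reading only when it deviates sufficiently from the last transmitted value, and let the platform hold the previous value otherwise) can be illustrated with a minimal deadband sketch. The threshold and sensor trace below are invented for illustration and are not the paper's actual scheme:

```python
def deadband_upload(samples, threshold=0.5):
    """Transmit a sample only when it deviates from the last uploaded
    value by more than `threshold`; the cloud side holds the previous
    value otherwise. Returns (uploads, reconstruction)."""
    uploads = []          # (index, value) pairs actually sent
    reconstruction = []   # what the monitoring platform reconstructs
    last = None
    for i, v in enumerate(samples):
        if last is None or abs(v - last) > threshold:
            uploads.append((i, v))
            last = v
        reconstruction.append(last)
    return uploads, reconstruction

# A slowly drifting sensor trace: most samples are suppressed, yet the
# reconstruction error is bounded by the threshold.
trace = [20.0, 20.1, 20.2, 21.0, 21.1, 21.0, 25.0, 25.2, 25.1]
sent, rebuilt = deadband_upload(trace, threshold=0.5)
print(len(sent), "of", len(trace), "samples uploaded")  # 3 of 9
```

Only 3 of the 9 samples are transmitted, and every reconstructed value is within the threshold of the true reading, mirroring the paper's trade-off between bandwidth reduction and monitoring fidelity.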

Article
Improving the Lifetime of an Out-Patient Implanted Medical Device Using a Novel Flower Pollination-Based Optimization Algorithm in WBAN Systems
Mathematics 2020, 8(12), 2189; https://doi.org/10.3390/math8122189 - 9 Dec 2020
Cited by 2 | Viewed by 416
Abstract
New inventions in health care devices have led to a considerable increase in the human lifespan. Miniaturized bio-sensing elements and dedicated wireless communication bands have led to the development of a new arena called the Wireless Body Area Network (WBAN) (IEEE 802.15.6). Implantable Medical Devices (IMDs) are used to monitor a chronic patient’s medical condition as well as for therapeutic and life-saving functions. The aim of this study is to improve the dynamic channel selection algorithm for an increased number of Out-Patient Body Network Controller (OP-BNC) medical devices during hospital visits. A fixed number of licensed spectra is allocated to the In-Patient Body Network Controller (IP-BNC) and the Out-Patient Body Network Controller (OP-BNC). When the number of OP-BNCs increases, idle spectrum is available in the IP-BNC. An existing rank-based algorithm is used to allocate idle spectrum to the additional OP-BNCs, but this ranking method takes more time to process and select an idle channel for a registered user. To avoid this, we propose the Enhanced Flower Pollination-based Optimized Channel selection (EFPOC) algorithm to select from the free idle channels of the IP-BNC licensed spectrum. We also analyze the complexity of the proposed EFPOC algorithm and obtain O(n²), a significant improvement over the complexity of the existing rank-based algorithm. Our experimental results show that the proposed EFPOC algorithm improves the Tier-2 system’s lifetime by 46.47%. To demonstrate that the proposed model is time efficient in channel selection, a simulated experiment was conducted: when selecting a number of channels from a Look-Up Table (LUT), the proposed EFPOC method takes 25% less time than the existing algorithms.

Article
Deep Neural Network for Predicting Diabetic Retinopathy from Risk Factors
Mathematics 2020, 8(9), 1620; https://doi.org/10.3390/math8091620 - 19 Sep 2020
Cited by 3 | Viewed by 1293
Abstract
Extracting information from individual risk factors provides an effective way to identify diabetes risk and associated complications, such as retinopathy, at an early stage. Deep learning and machine learning algorithms are being utilized to extract such information and improve early-stage diagnosis. This study proposes a deep neural network (DNN) combined with recursive feature elimination (RFE) for early prediction of diabetic retinopathy (DR) based on individual risk factors. The proposed model uses RFE to remove irrelevant features and a DNN to classify the disease. A publicly available dataset was used to predict DR during its initial stages, for both the proposed model and several current best-practice models. The proposed model achieved 82.033% prediction accuracy, significantly better than the current models, showing that important risk factors for retinopathy can be successfully extracted using RFE. In addition, to evaluate the proposed model’s robustness and generalization, we compared it with other machine learning models and datasets (nephropathy and hypertension–diabetes). The proposed prediction model will help improve early-stage retinopathy diagnosis based on individual risk factors.
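Recursive feature elimination repeatedly fits a model, ranks the remaining features by their learned importance, and drops the weakest. The dependency-free sketch below illustrates that loop with a simple correlation score standing in for the DNN's importance measure (in practice one would pair scikit-learn's RFE with a real estimator); the toy table and all names are invented for illustration:

```python
def feature_weights(X, y, features):
    # Stand-in for the model's learned importances: absolute Pearson
    # correlation of each remaining feature with the label. The paper
    # pairs RFE with a DNN; any weight-producing model fits this slot.
    n = len(y)
    weights = {}
    for f in features:
        col = [row[f] for row in X]
        mx, my = sum(col) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        denom = (sum((a - mx) ** 2 for a in col)
                 * sum((b - my) ** 2 for b in y)) ** 0.5
        weights[f] = abs(cov) / denom if denom else 0.0
    return weights

def rfe(X, y, n_keep):
    """Recursive feature elimination: rank, drop the weakest, repeat."""
    features = set(range(len(X[0])))
    while len(features) > n_keep:
        w = feature_weights(X, y, features)
        features.remove(min(features, key=w.get))
    return features

# Invented "risk factor" table: feature 0 tracks the label, feature 1
# is constant noise, feature 2 tracks the label weakly.
X = [[1, 5, 0], [2, 5, 1], [3, 5, 0], [4, 5, 1], [5, 5, 0], [6, 5, 1]]
y = [0, 0, 0, 1, 1, 1]
print(rfe(X, y, n_keep=2))  # the constant feature 1 is eliminated
```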

Article
Transfer Learning for Stenosis Detection in X-ray Coronary Angiography
Mathematics 2020, 8(9), 1510; https://doi.org/10.3390/math8091510 - 4 Sep 2020
Cited by 1 | Viewed by 760
Abstract
Coronary artery disease, the most frequent type of heart disease, is caused by an abnormal narrowing of the coronary arteries, also called stenosis or atherosclerosis, and is the leading cause of death globally. Currently, X-ray Coronary Angiography (XCA) remains the gold-standard imaging technique for the medical diagnosis of stenosis and related conditions. This paper presents a new method for the automatic detection of coronary artery stenosis in XCA images, employing pre-trained Convolutional Neural Networks (VGG16, ResNet50, and Inception-v3) via transfer learning. The method is based on a network-cut and fine-tuning approach: the optimal cut and fine-tuned layers were selected from 20 different configurations for each network. The three networks were fine-tuned using three strategies: only real data, only artificial data, and artificial with real data. The synthetic dataset consists of 10,000 images (80% for training, 20% for validation) produced by a generative model. These configurations were analyzed and compared on a dataset of 250 real XCA images (125 for fine-tuning and 125 for testing) against their randomly initialized counterparts and a fourth custom CNN trained with artificial and real data. The results showed that the pre-trained VGG16, ResNet50, and Inception-v3 networks, cut at an early layer and fine-tuned, outperformed the reference CNNs. Specifically, Inception-v3 provided the best stenosis detection, with an accuracy of 0.95, a precision of 0.93, and sensitivity, specificity, and F1 score of 0.98, 0.92, and 0.95, respectively. Moreover, a class activation map is applied to identify the high-attention regions for stenosis detection.

Article
The Effects of Fiscal and Tax Incentives on Regional Innovation Capability: Text Extraction Based on Python
Mathematics 2020, 8(7), 1193; https://doi.org/10.3390/math8071193 - 21 Jul 2020
Cited by 6 | Viewed by 857
Abstract
The regulation of fiscal and tax policies is an imperative prerequisite for improving regional innovation capability. In view of this, 31 provinces and cities in China were selected as the research object for 2009 to 2018, the text of the fiscal and tax policies encouraging innovation in the Chinese provinces and cities was extracted using Python, and their impact on regional innovation capability was analyzed from both a text-data and a numerical-data perspective. It is noteworthy that most provincial fiscal policies simply follow the national fiscal policies; provinces do not formulate fiscal and tax policy according to their own unique characteristics. Fiscal policies and regional innovation capability exhibit significant spatial heterogeneity. The results of the dynamic panel data model show that R&D input and industrial structure are the main sources of improved innovation capability. Fiscal expenditure on science and technology, the fiscal and tax policy text, the macro tax burden, business tax (BT), and value-added tax (VAT) have a significant boosting effect on regional innovation capability, whereas corporate income tax hinders it. Finally, a robustness test on invention patents finds that the fiscal and tax policy text, macro tax burden, and business tax still have a positive effect on invention patents, but the role of value-added tax changes from promotion to obstruction, and corporate income tax becomes a significant obstacle to invention patents. This shows that China should build a tax system that promotes fair competition, reduce the tax burden on enterprises, encourage enterprises to conduct independent R&D, and guide the evolution of enterprises from low-tech to high-tech innovation by improving the tax structure and fiscal expenditure on technology.

Article
An Elderly Health Monitoring System Using Machine Learning and In-Depth Analysis Techniques on the NIH Stroke Scale
Mathematics 2020, 8(7), 1115; https://doi.org/10.3390/math8071115 - 6 Jul 2020
Cited by 3 | Viewed by 913
Abstract
Recently, with the rapid shift to an aging society and increased interest in healthcare, disease prediction and management through various healthcare devices and services is attracting much attention. In particular, stroke, the foremost cerebrovascular disease, is very dangerous: in adults and the elderly it often causes death or severe mental and physical aftereffects, and such sequelae are especially harmful because they make social and economic activities difficult. In this paper, we propose a new system for the prediction and in-depth analysis of stroke severity in the elderly (over 65 years) based on the National Institutes of Health Stroke Scale (NIHSS). We use the C4.5 decision tree, a machine learning methodology for prediction and analysis. C4.5 decision trees are machine learning algorithms that additionally provide in-depth rules of the execution mechanism and a semantic interpretation analysis. We verify that the C4.5 decision tree algorithm can be used to classify and predict stroke severity and to reduce the number of NIHSS features required. During the operation of an actual system, the proposed model uses only 13 of the 18 stroke scale features, including age, so it can provide faster and more accurate service support. Experimental results show that the system reduces the patient NIHSS measurement time and makes operation more efficient, with an overall accuracy, using the C4.5 decision tree algorithm, of 91.11%.
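C4.5 grows its tree by splitting on the feature with the highest information gain ratio. The following minimal, dependency-free illustration of that criterion (not the paper's implementation; the toy severity table is invented) shows how an informative feature scores 1.0 and an uninformative one scores 0.0:

```python
import math

def entropy(labels):
    # Shannon entropy of a label list, in bits.
    total = len(labels)
    counts = {c: labels.count(c) for c in set(labels)}
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def gain_ratio(rows, labels, feature):
    """Information gain ratio of one categorical feature: the splitting
    criterion C4.5 uses to choose the next node of the tree."""
    base = entropy(labels)
    total = len(labels)
    splits = {}
    for row, lab in zip(rows, labels):
        splits.setdefault(row[feature], []).append(lab)
    conditional = sum(len(s) / total * entropy(s) for s in splits.values())
    split_info = entropy([row[feature] for row in rows])
    gain = base - conditional
    return gain / split_info if split_info else 0.0

# Toy severity table: feature 0 ("motor deficit") perfectly separates
# mild from severe cases; feature 1 carries no information.
rows = [("yes", "a"), ("yes", "b"), ("no", "a"), ("no", "b")]
labels = ["severe", "severe", "mild", "mild"]
print(gain_ratio(rows, labels, 0), gain_ratio(rows, labels, 1))  # 1.0 0.0
```

Ranking NIHSS items by such a criterion is one way a tree learner can end up using only a subset of the available features, as the paper reports (13 of 18).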

Article
A New Machine Learning Algorithm Based on Optimization Method for Regression and Classification Problems
Mathematics 2020, 8(6), 1007; https://doi.org/10.3390/math8061007 - 19 Jun 2020
Cited by 1 | Viewed by 802
Abstract
Convex minimization problems in the form of the sum of two proper, lower-semicontinuous convex functions have received much attention from the optimization community due to their broad applications in many disciplines, such as machine learning, regression and classification problems, image and signal processing, compressed sensing, and optimal control. Many methods have been proposed to solve such problems, but most rely on a Lipschitz continuity assumption on the derivative of one of the two functions. In this work, we introduce a new accelerated algorithm for solving this convex minimization problem by combining a linesearch technique with a viscosity inertial forward–backward algorithm (VIFBA). A strong convergence result for the proposed method is obtained under certain control conditions. As applications, we apply the proposed method to regression and classification problems via an extreme learning machine model. Moreover, we show that our algorithm is more efficient and has better convergence behavior than some algorithms in the literature.
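The forward–backward scheme this line of work builds on alternates a gradient ("forward") step on the smooth term with a proximal ("backward") step on the nonsmooth term. A bare-bones one-dimensional sketch (plain forward–backward iteration, without the paper's viscosity, inertial, or linesearch refinements) for f(x) = 0.5(x - 3)^2 plus g(x) = |x| is:

```python
def soft_threshold(x, t):
    # Proximal operator of t*|x| (the "backward" step).
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def forward_backward(grad_f, lam, step, x0, iters=200):
    """Minimize f(x) + lam*|x|: a gradient step on the smooth part f,
    then the proximal step on the nonsmooth part lam*|x|."""
    x = x0
    for _ in range(iters):
        x = soft_threshold(x - step * grad_f(x), step * lam)
    return x

# f(x) = 0.5*(x - 3)^2 has gradient x - 3; with lam = 1 the minimizer
# of f + g solves x - 3 + sign(x) = 0 for x > 0, i.e. x = 2.
xstar = forward_backward(lambda x: x - 3.0, lam=1.0, step=0.5, x0=0.0)
print(round(xstar, 4))  # 2.0
```

The step size 0.5 satisfies the usual step < 2/L condition for the Lipschitz constant L = 1 of the gradient here; the paper's contribution is precisely to relax such Lipschitz-based conditions via a linesearch and to accelerate the iteration.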

Article
Comparison of Supervised Classification Models on Textual Data
Mathematics 2020, 8(5), 851; https://doi.org/10.3390/math8050851 - 24 May 2020
Cited by 4 | Viewed by 909
Abstract
Text classification is an essential aspect of many applications, such as spam detection and sentiment analysis. With the growing number of textual documents and datasets generated through social media and news articles, an increasing number of machine learning methods are needed for accurate textual classification. For this paper, a comprehensive evaluation of the performance of multiple supervised learning models, such as logistic regression (LR), decision trees (DT), support vector machines (SVM), AdaBoost (AB), random forests (RF), multinomial naive Bayes (NB), multilayer perceptrons (MLP), and gradient boosting (GB), was conducted to assess the efficiency and robustness, as well as the limitations, of these models on the classification of textual data. SVM, LR, and MLP performed better in general, with SVM the best, while DT and AB had much lower accuracies than all other tested models. Further exploration of different SVM kernels demonstrated the advantage of linear kernels over polynomial, sigmoid, and radial basis function kernels for text classification. The effects of removing stop words on model performance were also investigated; DT performed better with stop words removed, while all other models were relatively unaffected by the presence or absence of stop words.
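As a rough sketch of the kind of pipeline evaluated here (bag-of-words TF-IDF vectors compared with a linear similarity), the following dependency-free example classifies a toy message by nearest cosine similarity. A real experiment would use a library vectorizer and classifier such as an SVM; the corpus, labels, and the nearest-neighbour decision rule are all invented stand-ins:

```python
import math
from collections import Counter

def tfidf(docs):
    """Term frequency-inverse document frequency vectors, the sparse
    representation typically fed to a linear-kernel text classifier.
    (For simplicity the test document is included when computing idf.)"""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc.split()))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vectors = []
    for doc in docs:
        tf = Counter(doc.split())
        vectors.append({t: tf[t] * idf[t] for t in tf})
    return vectors

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

train = ["free prize claim now", "meeting agenda attached",
         "claim your free offer", "project meeting notes"]
labels = ["spam", "ham", "spam", "ham"]
vecs = tfidf(train + ["free offer now"])
test_vec = vecs[-1]
# Nearest neighbour by cosine similarity stands in for the SVM decision.
best = max(range(len(train)), key=lambda i: cosine(test_vec, vecs[i]))
print(labels[best])  # spam
```

A linear kernel on such high-dimensional sparse vectors is exactly the setting in which the paper finds SVMs to work best for text.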

Review

Review
Objective Diagnosis for Histopathological Images Based on Machine Learning Techniques: Classical Approaches and New Trends
Mathematics 2020, 8(11), 1863; https://doi.org/10.3390/math8111863 - 24 Oct 2020
Cited by 3 | Viewed by 912
Abstract
Histopathology refers to a pathologist’s examination of biopsy samples. Histopathology images are captured by a microscope to locate, examine, and classify many diseases, such as different cancer types, and provide a detailed view of these diseases and their tissue status. Such images are an essential resource for defining biological compositions and analyzing cell and tissue structures, making this imaging modality very important for diagnostic applications. The analysis of histopathology images is a prolific and relevant research area supporting disease diagnosis. In this paper, the challenges of histopathology image analysis are evaluated, and an extensive review of the conventional and deep learning techniques applied in histological image analysis is presented. The review summarizes many current datasets and highlights important challenges and constraints of recent deep learning techniques, alongside possible future research avenues. Despite the progress made so far, this remains a significant open research area because of the variety of imaging techniques and disease-specific characteristics.
