Search Results (16)

Search Parameters:
Keywords = hybrid deep learning (HDL)

19 pages, 6931 KiB  
Article
A Hybrid Deep Learning Framework for OFDM with Index Modulation Under Uncertain Channel Conditions
by Md Abdul Aziz, Md Habibur Rahman, Rana Tabassum, Mohammad Abrar Shakil Sejan, Myung-Sun Baek and Hyoung-Kyu Song
Mathematics 2024, 12(22), 3583; https://doi.org/10.3390/math12223583 - 15 Nov 2024
Cited by 1 | Viewed by 1283
Abstract
Index modulation (IM) is considered a promising approach for fifth-generation wireless systems due to its spectral efficiency and reduced complexity compared to conventional modulation techniques. However, IM faces difficulties in environments with unpredictable channel conditions, particularly in accurately detecting index values and dynamically adjusting index assignments. Deep learning (DL) offers a potential solution by improving detection performance and resilience through the learning of intricate patterns in varying channel conditions. In this paper, we introduce a robust detection method based on a hybrid DL (HDL) model designed specifically for orthogonal frequency-division multiplexing with IM (OFDM-IM) in challenging channel environments. Our proposed HDL detector leverages a one-dimensional convolutional neural network (1D-CNN) for feature extraction, followed by a bidirectional long short-term memory (Bi-LSTM) network to capture temporal dependencies. Before feeding data into the network, the channel matrix and received signals are preprocessed using domain-specific knowledge. We evaluate the bit error rate (BER) performance of the proposed model using different optimizers and equalizers, then compare it with other models. Moreover, we evaluate the throughput and spectral efficiency across varying SNR levels. Simulation results demonstrate that the proposed hybrid detector surpasses traditional and other DL-based detectors in terms of performance, underscoring its effectiveness for OFDM-IM under uncertain channel conditions.
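The 1D-CNN feature extractor followed by a bidirectional recurrence, as described in this abstract, can be sketched at the shape level in plain NumPy. This is a toy illustration, not the paper's model: a simple tanh recurrence stands in for the LSTM cell, and all dimensions are made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1-D convolution with ReLU: x (T, C_in), w (K, C_in, C_out), b (C_out)."""
    K, _, C_out = w.shape
    T = x.shape[0] - K + 1
    out = np.empty((T, C_out))
    for t in range(T):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def rnn_pass(xs, Wx, Wh, reverse=False):
    """Simple tanh recurrence standing in for one LSTM direction."""
    h = np.zeros(Wh.shape[0])
    seq = xs[::-1] if reverse else xs
    hs = []
    for x in seq:
        h = np.tanh(x @ Wx + h @ Wh)
        hs.append(h)
    return np.stack(hs[::-1] if reverse else hs)

def bidirectional(xs, Wx_f, Wh_f, Wx_b, Wh_b):
    """Concatenate forward and backward hidden states per time step."""
    fwd = rnn_pass(xs, Wx_f, Wh_f)
    bwd = rnn_pass(xs, Wx_b, Wh_b, reverse=True)
    return np.concatenate([fwd, bwd], axis=1)

T, C_in, C_out, K, H = 32, 2, 8, 3, 16    # toy sizes, not from the paper
x = rng.normal(size=(T, C_in))            # preprocessed received-signal features
w = rng.normal(size=(K, C_in, C_out)) * 0.1
feat = conv1d(x, w, np.zeros(C_out))      # 1D-CNN feature extraction
Wx_f, Wh_f = rng.normal(size=(C_out, H)) * 0.1, rng.normal(size=(H, H)) * 0.1
Wx_b, Wh_b = rng.normal(size=(C_out, H)) * 0.1, rng.normal(size=(H, H)) * 0.1
out = bidirectional(feat, Wx_f, Wh_f, Wx_b, Wh_b)
print(out.shape)  # (30, 32): T-K+1 steps, forward+backward states concatenated
```

A real detector would replace the tanh recurrence with LSTM gating and add a softmax head over candidate index patterns; the sketch only shows how the convolutional features feed the two recurrence directions.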

19 pages, 2164 KiB  
Article
Enhancing IoT Security Using GA-HDLAD: A Hybrid Deep Learning Approach for Anomaly Detection
by Ibrahim Mutambik
Appl. Sci. 2024, 14(21), 9848; https://doi.org/10.3390/app14219848 - 28 Oct 2024
Cited by 9 | Viewed by 2365
Abstract
The adoption and use of the Internet of Things (IoT) have increased rapidly over recent years, and cyber threats in IoT devices have also become more common. Thus, the development of a system that can effectively identify malicious attacks and reduce security threats in IoT devices has become a topic of great importance. One of the most serious threats comes from botnets, which commonly attack IoT devices by interrupting the networks required for the devices to run. There are a number of methods that can be used to improve security by identifying unknown patterns in IoT networks, including deep learning and machine learning approaches. In this study, an algorithm named the genetic algorithm with hybrid deep learning-based anomaly detection (GA-HDLAD) is developed, with the aim of improving security by identifying botnets within the IoT environment. The GA-HDLAD technique addresses the problem of high dimensionality by using a genetic algorithm during feature selection. Hybrid deep learning is used to detect botnets; the approach is a combination of recurrent neural networks (RNNs), feature extraction techniques (FETs), and attention concepts. Botnet attacks commonly involve complex patterns that the hybrid deep learning (HDL) method can detect. Moreover, the use of FETs in the model ensures that features can be effectively extracted from spatial data, while temporal dependencies are captured by RNNs. Simulated annealing (SA) is utilized to select the hyperparameters necessary for the HDL approach. In this study, the GA-HDLAD system is experimentally assessed using a benchmark botnet dataset, and the findings reveal that the system provides superior results in comparison to existing detection methods.
(This article belongs to the Special Issue Advances in Internet of Things (IoT) Technologies and Cybersecurity)

32 pages, 4635 KiB  
Article
Cardiovascular Disease Risk Stratification Using Hybrid Deep Learning Paradigm: First of Its Kind on Canadian Trial Data
by Mrinalini Bhagawati, Sudip Paul, Laura Mantella, Amer M. Johri, Siddharth Gupta, John R. Laird, Inder M. Singh, Narendra N. Khanna, Mustafa Al-Maini, Esma R. Isenovic, Ekta Tiwari, Rajesh Singh, Andrew Nicolaides, Luca Saba, Vinod Anand and Jasjit S. Suri
Diagnostics 2024, 14(17), 1894; https://doi.org/10.3390/diagnostics14171894 - 28 Aug 2024
Cited by 4 | Viewed by 2713
Abstract
Background: The risk of cardiovascular disease (CVD) has traditionally been predicted via the assessment of carotid plaques. In the proposed study, AtheroEdge™ 3.0HDL (AtheroPoint™, Roseville, CA, USA) was designed to demonstrate how well the features obtained from carotid plaques determine the risk of CVD. We hypothesize that hybrid deep learning (HDL) will outperform unidirectional deep learning, bidirectional deep learning, and machine learning (ML) paradigms. Methodology: 500 people who had undergone targeted carotid B-mode ultrasonography and coronary angiography were included in the proposed study. ML feature selection was carried out using three different methods, namely principal component analysis (PCA) pooling, the chi-square test (CST), and the random forest regression (RFR) test. The unidirectional and bidirectional deep learning models were trained, and then six types of novel HDL-based models were designed for CVD risk stratification. The AtheroEdge™ 3.0HDL was scientifically validated using seen and unseen datasets, while the reliability and statistical tests were conducted using CST along with p-value significance. The performance of AtheroEdge™ 3.0HDL was evaluated by measuring the p-value and area-under-the-curve for both seen and unseen data. Results: The HDL system showed an improvement of 30.20% (0.954 vs. 0.702) over the ML system using the seen datasets. The ML feature selection analysis showed 70% of the features to be common among all three methods. The generalization of AtheroEdge™ 3.0HDL showed less than 1% (p-value < 0.001) difference between seen and unseen data, complying with regulatory standards. Conclusions: The hypothesis for AtheroEdge™ 3.0HDL was scientifically validated, and the model was tested for reliability and stability, making it adaptable for clinical use.
(This article belongs to the Special Issue Artificial Intelligence in Cardiovascular Diseases (2024))

21 pages, 44455 KiB  
Article
Optimal and Multi-View Strategic Hybrid Deep Learning for Old Landslide Detection in the Loess Plateau, Northwest China
by Siyan Gao, Jiangbo Xi, Zhenhong Li, Daqing Ge, Zhaocheng Guo, Junchuan Yu, Qiong Wu, Zhe Zhao and Jiahuan Xu
Remote Sens. 2024, 16(8), 1362; https://doi.org/10.3390/rs16081362 - 12 Apr 2024
Cited by 4 | Viewed by 1580
Abstract
Old landslides in the Loess Plateau, Northwest China usually occurred over a relatively long period, and their sizes are usually smaller compared to old landslides in the alpine valley areas of Sichuan, Yunnan, and Southeast Tibet. These landslide areas may have been changed either partially or greatly, and they are usually covered with vegetation and similar to their surrounding environment. Therefore, it is a great challenge to detect them using high-resolution remote sensing images with only orthophoto view. This paper proposes the optimal-view and multi-view strategic hybrid deep learning (OMV-HDL) method for old loess landslide detection. First, the optimal-view dataset in the Yan’an area (YA-OP) was established to solve the problem of insufficient optical features in orthophoto images. Second, in order to make the process of interpretation more labor-saving, the optimal-view and multi-view (OMV) strategy was proposed. Third, hybrid deep learning with weighted boxes fusion (HDL-WBF) was proposed to detect old loess landslides effectively. The experimental results with the constructed optimal-view dataset and multi-view data show that the proposed method has excellent performance among the compared methods—the F1 score and AP (mean) of the proposed method were improved by about 30% compared with the single detection model using traditional orthophoto-view data—and that it has good detection performance on multi-view data with a recall of 81.4%.

18 pages, 3777 KiB  
Article
Political Optimization Algorithm with a Hybrid Deep Learning Assisted Malicious URL Detection Model
by Mohammed Aljebreen, Fatma S. Alrayes, Sumayh S. Aljameel and Muhammad Kashif Saeed
Sustainability 2023, 15(24), 16811; https://doi.org/10.3390/su152416811 - 13 Dec 2023
Cited by 5 | Viewed by 1591
Abstract
With the enhancement of the Internet of Things (IoT), smart cities have transformed the idea of conventional urbanization. IoT networks permit distributed smart devices to collect and process data in smart city structures utilizing an open channel, the Internet. Accordingly, challenges like security, centralization, privacy (i.e., data poisoning and inference attacks), scalability, transparency, and verifiability restrict the faster adoption of smart cities. Detecting malicious URLs in an IoT environment is crucial to protect devices and the network from potential security threats, and malicious URL detection is an essential element of cybersecurity. Malicious URL attacks pose large risks in smart cities, including financial damage, identity theft, online banking fraud, data loss, and loss of user confidentiality in online businesses such as e-commerce and social media. Therefore, this paper proposes a Political Optimization Algorithm with a Hybrid Deep Learning Assisted Malicious URL Detection and Classification for Cybersecurity (POAHDL-MDC) technique. The presented POAHDL-MDC technique identifies whether a given URL is malicious. To accomplish this, the POAHDL-MDC technique performs pre-processing to transform the data to a compatible format, and a FastText word embedding process is involved. For malicious URL recognition, a Hybrid Deep Learning (HDL) model integrates the features of a stacked autoencoder (SAE) and bi-directional long short-term memory (Bi-LSTM). Finally, POA is exploited for optimum hyperparameter tuning of the HDL technique. The POAHDL-MDC approach was evaluated on a malicious URL database, and the outcomes exhibit an improvement of the POAHDL-MDC technique, with a maximal accuracy of 99.31%.
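The embedding stage this abstract mentions can be illustrated with a toy sketch: hashing character n-grams of a URL into a fixed-size vector is a crude stand-in for FastText subword embeddings, and the URLs, dimensions, and n-gram length below are all hypothetical.

```python
import zlib
import numpy as np

def url_embedding(url, dim=64, n=3):
    """Bag of hashed character n-grams, L2-normalized (toy FastText stand-in)."""
    vec = np.zeros(dim)
    for i in range(len(url) - n + 1):
        # crc32 is a deterministic hash, unlike Python's salted str hash()
        vec[zlib.crc32(url[i:i + n].encode()) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

legit = url_embedding("http://example.com/login")
squat = url_embedding("http://examp1e.com/login")        # hypothetical typosquat
other = url_embedding("http://totally-different.org/a/b")
print(round(float(legit @ squat), 2), round(float(legit @ other), 2))
```

The near-duplicate URL scores a much higher cosine similarity than the unrelated one, which is the property that lets a downstream classifier (an SAE + Bi-LSTM in the paper) generalize across small character-level perturbations.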

16 pages, 5068 KiB  
Article
Automated Arrhythmia Classification Using Farmland Fertility Algorithm with Hybrid Deep Learning Model on Internet of Things Environment
by Ahmed S. Almasoud, Hanan Abdullah Mengash, Majdy M. Eltahir, Nabil Sharaf Almalki, Mrim M. Alnfiai and Ahmed S. Salama
Sensors 2023, 23(19), 8265; https://doi.org/10.3390/s23198265 - 6 Oct 2023
Cited by 4 | Viewed by 1735
Abstract
In recent years, the rapid progress of Internet of Things (IoT) solutions has offered an immense opportunity for the collection and dissemination of health records in a central data platform. Electrocardiogram (ECG), a fast, easy, and non-invasive method, is generally employed in the evaluation of heart conditions that lead to heart ailments and the identification of heart diseases. The deployment of IoT devices for arrhythmia classification offers many benefits such as remote patient care, continuous monitoring, and early recognition of abnormal heart rhythms. However, it is challenging to diagnose and manually classify arrhythmia as the manual diagnosis of ECG signals is a time-consuming process. Therefore, the current article presents the automated arrhythmia classification using the Farmland Fertility Algorithm with Hybrid Deep Learning (AAC-FFAHDL) approach in the IoT platform. The proposed AAC-FFAHDL system exploits the hyperparameter-tuned DL model for ECG signal analysis, thereby diagnosing arrhythmia. In order to accomplish this, the AAC-FFAHDL technique initially performs data pre-processing to scale the input signals into a uniform format. Further, the AAC-FFAHDL technique uses the HDL approach for detection and classification of arrhythmia. In order to improve the classification and detection performance of the HDL approach, the AAC-FFAHDL technique involves an FFA-based hyperparameter tuning process. The proposed AAC-FFAHDL approach was validated through simulation using the benchmark ECG database. The comparative experimental analysis outcomes confirmed that the AAC-FFAHDL system achieves promising performance compared with other models under different evaluation measures.
(This article belongs to the Special Issue Remote Healthcare with Sensors and Internet of Things)

15 pages, 3779 KiB  
Article
Optimal ElGamal Encryption with Hybrid Deep-Learning-Based Classification on Secure Internet of Things Environment
by Chinnappa Annamalai, Chellavelu Vijayakumaran, Vijayakumar Ponnusamy and Hyunsung Kim
Sensors 2023, 23(12), 5596; https://doi.org/10.3390/s23125596 - 15 Jun 2023
Cited by 5 | Viewed by 3018
Abstract
The Internet of Things (IoT) is a kind of advanced information technology that has grabbed the attention of society. Stimulators and sensors are generally known as smart devices in this ecosystem. In parallel, IoT security poses new challenges: Internet connectivity and the ability to communicate with smart gadgets have made such devices part of everyday human life. Thus, security is essential when devising the IoT. The IoT has three notable features: intelligent processing, overall perception, and reliable transmission. Due to the span of the IoT, the security of transmitted data becomes a crucial factor for system security. This study designs a slime mold optimization with ElGamal Encryption and a Hybrid Deep-Learning-Based Classification (SMOEGE-HDL) model in an IoT environment. The proposed SMOEGE-HDL model mainly encompasses two major processes, namely data encryption and data classification. At the initial stage, the SMOEGE technique is applied to encrypt the data in an IoT environment. For optimal key generation in the EGE technique, the SMO algorithm is utilized. Next, in the later stage, the HDL model is utilized to carry out the classification process. In order to boost the classification performance of the HDL model, the Nadam optimizer is utilized in this study. The experimental validation of the SMOEGE-HDL approach is performed, and the outcomes are inspected under distinct aspects. The proposed approach offers the following scores: 98.50% for specificity, 98.75% for precision, 98.30% for recall, 98.50% for accuracy, and 98.25% for F1-score. This comparative study demonstrates the enhanced performance of the SMOEGE-HDL technique compared to existing techniques.
(This article belongs to the Section Internet of Things)

20 pages, 8273 KiB  
Article
CNN-LSTM vs. LSTM-CNN to Predict Power Flow Direction: A Case Study of the High-Voltage Subnet of Northeast Germany
by Fachrizal Aksan, Yang Li, Vishnu Suresh and Przemysław Janik
Sensors 2023, 23(2), 901; https://doi.org/10.3390/s23020901 - 12 Jan 2023
Cited by 58 | Viewed by 20250
Abstract
The massive installation of renewable energy sources together with energy storage in the power grid can lead to fluctuating energy consumption when there is a bi-directional power flow due to the surplus of electricity generation. To ensure the security and reliability of the power grid, high-quality bi-directional power flow prediction is required. However, predicting bi-directional power flow remains a challenge due to the ever-changing characteristics of power flow and the influence of weather on renewable power generation. To overcome these challenges, we present two of the most popular hybrid deep learning (HDL) models, based on a combination of a convolutional neural network (CNN) and long short-term memory (LSTM), to predict the power flow in the investigated network cluster. In our approach, the CNN-LSTM and LSTM-CNN models were trained with two datasets that differ in size and included parameters. The aim was to see whether the size of the dataset and the additional weather data affect the performance of the proposed models in predicting power flow. The results show that both proposed models can achieve a small error under certain conditions, while the size and parameters of the dataset affect the training time and accuracy of the HDL model.
(This article belongs to the Special Issue Sensors for Measurements and Diagnostic in Electrical Power Systems)
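The practical difference between the two orderings is where the sequence gets compressed: CNN-first pools the raw time steps before the recurrence, while LSTM-first runs the recurrence over every step and pools the hidden states afterwards. A shape-level NumPy sketch with toy stand-in blocks (not the paper's architecture or data) makes this concrete:

```python
import numpy as np

def conv_block(x):
    """Toy CNN stand-in: stride-2 average pooling, (T, C) -> (T // 2, C)."""
    return x.reshape(x.shape[0] // 2, 2, x.shape[1]).mean(axis=1)

def rnn_block(x, H=8):
    """Toy LSTM stand-in: simple tanh recurrence, (T, C) -> (T, H)."""
    Wx = np.full((x.shape[1], H), 0.1)
    Wh = np.full((H, H), 0.05)
    h, hs = np.zeros(H), []
    for step in x:
        h = np.tanh(step @ Wx + h @ Wh)
        hs.append(h)
    return np.stack(hs)

x = np.ones((24, 4))                   # 24 time steps, 4 weather/power features
cnn_lstm = rnn_block(conv_block(x))    # CNN first: recurrence over 12 pooled steps
lstm_cnn = conv_block(rnn_block(x))    # LSTM first: pooling over 24 hidden states
print(cnn_lstm.shape, lstm_cnn.shape)  # (12, 8) (12, 8)
```

Both pipelines end with the same output shape, so the comparison in the paper is about inductive bias and training cost, not dimensionality: the CNN-first order gives the recurrence a shorter, smoothed sequence, while the LSTM-first order lets the recurrence see every raw step.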

17 pages, 626 KiB  
Article
HDL-ODPRs: A Hybrid Deep Learning Technique Based Optimal Duplication Detection for Pull-Requests in Open-Source Repositories
by Saud S. Alotaibi
Appl. Sci. 2022, 12(24), 12594; https://doi.org/10.3390/app122412594 - 8 Dec 2022
Viewed by 1750
Abstract
Recently, open-source repositories have grown rapidly due to volunteer contributions worldwide. Collaboration software platforms have gained popularity as thousands of external contributors have contributed to open-source repositories. Although data de-duplication decreases the size of backup workloads, it causes poor data locality (fragmentation) and redundant review time and effort. Deep learning and machine learning techniques have recently been applied to identify complex bugs and duplicate issue reports, but duplicate detection remains difficult, and duplicate pull requests submitted by developers result in additional review and maintenance costs. In this work, we propose a hybrid deep learning technique for optimal duplication detection of pull requests (HDL-ODPRs) in open-source repositories. Textual data are extracted from pull requests using the hybrid leader-based optimization (HLBO) algorithm, which increases the accuracy of duplicate detection. Following that, we compute the similarities between pull requests by utilizing the multiobjective alpine skiing optimization (MASO) algorithm, which provides textual, file-change, and code-change similarities. For pull request duplicate detection, a hybrid deep learning technique (named GAN-GS) is introduced, in which the global search (GS) algorithm is used to optimize the design metrics of the generative adversarial network (GAN). The proposed HDL-ODPR model is validated against public standard benchmark datasets, such as DupPR-basic and DupPR-complementary data. According to the simulation results, the proposed HDL-ODPR model achieves promising results in comparison with existing state-of-the-art models.

34 pages, 51506 KiB  
Article
COVLIAS 1.0Lesion vs. MedSeg: An Artificial Intelligence Framework for Automated Lesion Segmentation in COVID-19 Lung Computed Tomography Scans
by Jasjit S. Suri, Sushant Agarwal, Gian Luca Chabert, Alessandro Carriero, Alessio Paschè, Pietro S. C. Danna, Luca Saba, Armin Mehmedović, Gavino Faa, Inder M. Singh, Monika Turk, Paramjit S. Chadha, Amer M. Johri, Narendra N. Khanna, Sophie Mavrogeni, John R. Laird, Gyan Pareek, Martin Miner, David W. Sobel, Antonella Balestrieri, Petros P. Sfikakis, George Tsoulfas, Athanasios D. Protogerou, Durga Prasanna Misra, Vikas Agarwal, George D. Kitas, Jagjit S. Teji, Mustafa Al-Maini, Surinder K. Dhanjil, Andrew Nicolaides, Aditya Sharma, Vijay Rathore, Mostafa Fatemi, Azra Alizad, Pudukode R. Krishnan, Ferenc Nagy, Zoltan Ruzsa, Mostafa M. Fouda, Subbaram Naidu, Klaudija Viskovic and Manudeep K. Kalra
Diagnostics 2022, 12(5), 1283; https://doi.org/10.3390/diagnostics12051283 - 21 May 2022
Cited by 21 | Viewed by 4704
Abstract
Background: COVID-19 is a disease with multiple variants, and is quickly spreading throughout the world. It is crucial to identify patients who are suspected of having COVID-19 early, because the vaccine is not readily available in certain parts of the world. Methodology: Lung computed tomography (CT) imaging can be used to diagnose COVID-19 as an alternative to the RT-PCR test in some cases. The occurrence of ground-glass opacities in the lung region is a characteristic of COVID-19 in chest CT scans, and these are daunting to locate and segment manually. The proposed study consists of a combination of solo deep learning (DL) and hybrid DL (HDL) models to tackle lesion location and segmentation more quickly. One DL and four HDL models—namely, PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet—were trained using annotations from an expert radiologist. The training scheme adopted a fivefold cross-validation strategy on a cohort of 3000 images selected from a set of 40 COVID-19-positive individuals. Results: The proposed variability study uses tracings from two trained radiologists as part of the validation. Five artificial intelligence (AI) models were benchmarked against MedSeg. The best AI model, ResNet-UNet, was superior to MedSeg by 9% and 15% for Dice and Jaccard, respectively, when compared against MD 1, and by 4% and 8%, respectively, when compared against MD 2. Statistical tests—namely, the Mann–Whitney test, paired t-test, and Wilcoxon test—demonstrated its stability and reliability, with p < 0.0001. The online processing time for each slice was <1 s. Conclusions: The AI models reliably located and segmented COVID-19 lesions in CT scans. The COVLIAS 1.0Lesion lesion locator passed the intervariability test.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

20 pages, 3543 KiB  
Article
HDL-IDS: A Hybrid Deep Learning Architecture for Intrusion Detection in the Internet of Vehicles
by Safi Ullah, Muazzam A. Khan, Jawad Ahmad, Sajjad Shaukat Jamal, Zil e Huma, Muhammad Tahir Hassan, Nikolaos Pitropakis, Arshad and William J. Buchanan
Sensors 2022, 22(4), 1340; https://doi.org/10.3390/s22041340 - 10 Feb 2022
Cited by 90 | Viewed by 8983
Abstract
Internet of Vehicles (IoV) is an application of the Internet of Things (IoT) network that connects smart vehicles to the internet and to each other. With the emergence of IoV technology, customers have placed great attention on smart vehicles. However, the rapid growth of IoV has also caused many security and privacy challenges that can lead to fatal accidents. To reduce smart vehicle accidents and detect malicious attacks in vehicular networks, several researchers have presented machine learning (ML)-based models for intrusion detection in IoT networks. However, a proficient and faster real-time algorithm is needed to detect malicious attacks in IoV. This article proposes a hybrid deep learning (DL) model for cyber attack detection in IoV. The proposed model is based on long short-term memory (LSTM) and gated recurrent unit (GRU) networks. The performance of the proposed model is analyzed using two datasets—a combined DDoS dataset that contains CIC DoS, CIC-IDS 2017, and CSE-CIC-IDS 2018, and a car-hacking dataset. The experimental results demonstrate that the proposed algorithm achieves attack detection accuracies of 99.5% and 99.9% for the DDoS and car-hacking datasets, respectively. The other performance metrics (precision, recall, and F1-score) also verify the superior performance of the proposed framework.
(This article belongs to the Collection Cyber Situational Awareness in Computer Networks)
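Combining an LSTM layer with a GRU layer, as the abstract above names, can be sketched in plain NumPy. This is a minimal toy forward pass under assumed sizes, not the paper's architecture or hyperparameters; the sigmoid read-out stands in for a dense classification head.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_cell(x, h, c, W, U, b):
    """Standard LSTM step; W, U pack input, forget, output, and candidate gates."""
    H = h.shape[0]
    z = x @ W + h @ U + b
    i, f, o = (sigmoid(z[k * H:(k + 1) * H]) for k in range(3))
    g = np.tanh(z[3 * H:])
    c = f * c + i * g
    return np.tanh(c) * o, c

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """Standard GRU step: update gate z blends old state with candidate."""
    z = sigmoid(x @ Wz + h @ Uz)
    r = sigmoid(x @ Wr + h @ Ur)
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_tilde

D, H = 6, 8                      # toy sizes: 6 traffic features, 8 hidden units
seq = rng.normal(size=(20, D))   # one window of 20 flow records (synthetic)
W = rng.normal(size=(D, 4 * H)) * 0.1
U = rng.normal(size=(H, 4 * H)) * 0.1
b = np.zeros(4 * H)
gru_w = [rng.normal(size=(H, H)) * 0.1 for _ in range(6)]

h1, c1, h2 = np.zeros(H), np.zeros(H), np.zeros(H)
for x in seq:                    # LSTM layer feeds the GRU layer step by step
    h1, c1 = lstm_cell(x, h1, c1, W, U, b)
    h2 = gru_cell(h1, h2, *gru_w)

score = sigmoid(h2.sum())        # toy read-out standing in for a dense sigmoid head
print(score)
```

The point of the stacking is that the LSTM's gated memory summarizes each raw record while the lighter GRU layer refines the running state; a trained model would threshold `score` to flag a window as an attack.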

27 pages, 13817 KiB  
Article
COVLIAS 1.0 vs. MedSeg: Artificial Intelligence-Based Comparative Study for Automated COVID-19 Computed Tomography Lung Segmentation in Italian and Croatian Cohorts
by Jasjit S. Suri, Sushant Agarwal, Alessandro Carriero, Alessio Paschè, Pietro S. C. Danna, Marta Columbu, Luca Saba, Klaudija Viskovic, Armin Mehmedović, Samriddhi Agarwal, Lakshya Gupta, Gavino Faa, Inder M. Singh, Monika Turk, Paramjit S. Chadha, Amer M. Johri, Narendra N. Khanna, Sophie Mavrogeni, John R. Laird, Gyan Pareek, Martin Miner, David W. Sobel, Antonella Balestrieri, Petros P. Sfikakis, George Tsoulfas, Athanasios Protogerou, Durga Prasanna Misra, Vikas Agarwal, George D. Kitas, Jagjit S. Teji, Mustafa Al-Maini, Surinder K. Dhanjil, Andrew Nicolaides, Aditya Sharma, Vijay Rathore, Mostafa Fatemi, Azra Alizad, Pudukode R. Krishnan, Ferenc Nagy, Zoltan Ruzsa, Archna Gupta, Subbaram Naidu, Kosmas I. Paraskevas and Mannudeep K. Kalra
Diagnostics 2021, 11(12), 2367; https://doi.org/10.3390/diagnostics11122367 - 15 Dec 2021
Cited by 25 | Viewed by 4578
Abstract
(1) Background: COVID-19 computed tomography (CT) lung segmentation is critical for COVID lung severity diagnosis. Earlier approaches proposed during 2020–2021 were semiautomated or automated but not accurate, user-friendly, or benchmarked to industry standards. The proposed study compared the COVID Lung Image Analysis System, COVLIAS 1.0 (GBTI, Inc., and AtheroPointTM, Roseville, CA, USA, referred to as COVLIAS), against MedSeg, a web-based Artificial Intelligence (AI) segmentation tool, where COVLIAS uses hybrid deep learning (HDL) models for CT lung segmentation. (2) Materials and Methods: The proposed study used 5000 Italian COVID-19-positive CT lung images collected from 72 patients (experimental data) with confirmed reverse transcription-polymerase chain reaction (RT-PCR) tests. Two hybrid AI models from the COVLIAS system, namely, VGG-SegNet (HDL 1) and ResNet-SegNet (HDL 2), were used to segment the CT lungs. As part of the results, we compared both COVLIAS and MedSeg against two manual delineations (MD 1 and MD 2) using (i) Bland–Altman plots, (ii) correlation coefficient (CC) plots, (iii) receiver operating characteristic curves, (iv) figure of merit, and (v) visual overlays. A cohort of 500 Croatian COVID-19-positive CT lung images (validation data) was used. A previously trained COVLIAS model was directly applied to the validation data (as part of Unseen-AI) to segment the CT lungs and compare them against MedSeg. (3) Results: For the experimental data, the four CCs between COVLIAS (HDL 1) vs. MD 1, COVLIAS (HDL 1) vs. MD 2, COVLIAS (HDL 2) vs. MD 1, and COVLIAS (HDL 2) vs. MD 2 were 0.96, 0.96, 0.96, and 0.96, respectively. The mean value of the COVLIAS system for the above four readings was 0.96. The CC between MedSeg vs. MD 1 and MedSeg vs. MD 2 was 0.98 and 0.98, respectively; both had a mean value of 0.98. On the validation data, the CC between COVLIAS (HDL 1) vs. MedSeg and COVLIAS (HDL 2) vs. MedSeg was 0.98 and 0.99, respectively. For the experimental data, the difference between the mean values for COVLIAS and MedSeg was <2.5%, meeting the standard of equivalence. The average running times for COVLIAS and MedSeg on a single lung CT slice were ~4 s and ~10 s, respectively. (4) Conclusions: The performances of COVLIAS and MedSeg were similar. However, COVLIAS showed improved computing time over MedSeg.

20 pages, 1973 KiB  
Article
Locality Preserving and Label-Aware Constraint-Based Hybrid Dictionary Learning for Image Classification
by Jianqiang Song, Lin Wang, Zuozhi Liu, Muhua Liu, Mingchuan Zhang and Qingtao Wu
Appl. Sci. 2021, 11(16), 7701; https://doi.org/10.3390/app11167701 - 21 Aug 2021
Cited by 1 | Viewed by 1975
Abstract
Dictionary learning has played an important role in the success of data representation. As a complete view of data representation, hybrid dictionary learning (HDL) is still in its infancy, and previous HDL approaches have not well addressed how to learn an effective hybrid dictionary for image classification. In this paper, we propose a locality-preserving and label-aware constraint-based hybrid dictionary learning (LPLC-HDL) method and apply it effectively to image classification. More specifically, the locality information of the data is preserved by a graph Laplacian matrix based on the shared dictionary for learning the commonality representation, and a label-aware constraint with group regularization is imposed on the coding coefficients corresponding to the class-specific dictionary for learning the particularity representation. Moreover, all the constraints introduced in the LPLC-HDL method are based on l2-norm regularization and can be solved efficiently via an alternating optimization strategy. Extensive experiments on benchmark image datasets demonstrate that our method improves over previous competing methods on both hand-crafted and deep features. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
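The locality-preserving term mentioned in the abstract is typically built from a graph Laplacian over the training samples, so that samples that are neighbours in the input space get similar codes. A minimal sketch, assuming a simple unweighted k-nearest-neighbour affinity graph (the paper's exact affinity weighting may differ):

```python
import numpy as np

def knn_laplacian(X, k=3):
    """Unnormalized graph Laplacian L = D - W from a symmetric
    k-nearest-neighbour affinity graph over the columns of X.
    X: (d, n) data matrix, one sample per column."""
    n = X.shape[1]
    # pairwise squared Euclidean distances between samples
    sq = (X ** 2).sum(axis=0)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X.T @ X
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D2[i])[1:k + 1]   # nearest neighbours, skipping self
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)                 # symmetrize the graph
    return np.diag(W.sum(axis=1)) - W

def locality_penalty(A, L):
    """tr(A L A^T): small when codes of neighbouring samples agree."""
    return float(np.trace(A @ L @ A.T))

# five 2-D samples along a line; constant codes incur zero penalty
X = np.array([[0., 1., 2., 3., 4.],
              [0., 0., 0., 0., 0.]])
L = knn_laplacian(X, k=2)
```

Minimizing this penalty jointly with the reconstruction and label-aware terms is what preserves locality in the learned commonality representation.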

36 pages, 21101 KiB  
Article
COVLIAS 1.0: Lung Segmentation in COVID-19 Computed Tomography Scans Using Hybrid Deep Learning Artificial Intelligence Models
by Jasjit S. Suri, Sushant Agarwal, Rajesh Pathak, Vedmanvitha Ketireddy, Marta Columbu, Luca Saba, Suneet K. Gupta, Gavino Faa, Inder M. Singh, Monika Turk, Paramjit S. Chadha, Amer M. Johri, Narendra N. Khanna, Klaudija Viskovic, Sophie Mavrogeni, John R. Laird, Gyan Pareek, Martin Miner, David W. Sobel, Antonella Balestrieri, Petros P. Sfikakis, George Tsoulfas, Athanasios Protogerou, Durga Prasanna Misra, Vikas Agarwal, George D. Kitas, Jagjit S. Teji, Mustafa Al-Maini, Surinder K. Dhanjil, Andrew Nicolaides, Aditya Sharma, Vijay Rathore, Mostafa Fatemi, Azra Alizad, Pudukode R. Krishnan, Nagy Frence, Zoltan Ruzsa, Archna Gupta, Subbaram Naidu and Mannudeep Kalra
Diagnostics 2021, 11(8), 1405; https://doi.org/10.3390/diagnostics11081405 - 4 Aug 2021
Cited by 48 | Viewed by 5589
Abstract
Background: COVID-19 lung segmentation using Computed Tomography (CT) scans is important for diagnosing lung severity. Automated lung segmentation is challenging due to (a) CT radiation dosage and (b) ground-glass opacities caused by COVID-19. The lung segmentation methodologies proposed in 2020 were semi-automated or automated but not reliable, accurate, and user-friendly. The proposed study presents a COVID Lung Image Analysis System (COVLIAS 1.0, AtheroPoint™, Roseville, CA, USA) consisting of hybrid deep learning (HDL) models for lung segmentation. Methodology: COVLIAS 1.0 consists of three methods based on solo deep learning (SDL) or hybrid deep learning (HDL): SegNet is proposed in the SDL category, while VGG-SegNet and ResNet-SegNet are designed under the HDL paradigm. The three proposed AI approaches were benchmarked against the conventional National Institutes of Health (NIH) segmentation model based on fuzzy connectedness. A cross-validation protocol with a 40:60 ratio between training and testing was designed, with 10% validation data. The ground truth (GT) was manually traced by radiologist-trained personnel. For performance evaluation, nine criteria were selected to evaluate the SDL or HDL lung segmentation regions and the lung long axis against GT. Results: Using the database of 5000 chest CT images (from 72 patients), COVLIAS 1.0 yielded AUCs of ~0.96, ~0.97, ~0.98, and ~0.96 (p-value < 0.001) for SegNet, VGG-SegNet, ResNet-SegNet, and NIH, respectively, within a 5% range of the GT area. The mean Figure of Merit across the four models (left and right lung) was above 94%. Benchmarked against the NIH segmentation method, the proposed models demonstrated improvements of 58% and 44% for ResNet-SegNet and 52% and 36% for VGG-SegNet, for lung area and lung long axis, respectively. The PE statistics performance was in the following order: ResNet-SegNet > VGG-SegNet > NIH > SegNet. The HDL runs in <1 s per test image. Conclusions: The COVLIAS 1.0 system can be applied in real-time in radiology-based clinical settings. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
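The area-based evaluation against ground truth described above can be illustrated with a Dice similarity score and a within-5%-of-GT-area check on binary lung masks. This is a hedged sketch: the function names and toy masks are ours, and the paper's full protocol uses nine criteria, not just these two:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def within_area_tolerance(pred, gt, tol=0.05):
    """True if the predicted lung area is within `tol` (e.g. 5%)
    of the ground-truth area."""
    return abs(int(pred.sum()) - int(gt.sum())) <= tol * gt.sum()

# toy 8x8 masks: GT square and a slightly over-segmented prediction
gt = np.zeros((8, 8)); gt[2:6, 2:6] = 1
pred = np.zeros((8, 8)); pred[2:6, 2:7] = 1
```

A model whose predicted area stays within the 5% band slice after slice is what the AUC-within-5%-of-GT-area figures in the abstract summarize.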

37 pages, 3914 KiB  
Review
Hybrid Deep Learning (hDL)-Based Brain-Computer Interface (BCI) Systems: A Systematic Review
by Nibras Abo Alzahab, Luca Apollonio, Angelo Di Iorio, Muaaz Alshalak, Sabrina Iarlori, Francesco Ferracuti, Andrea Monteriù and Camillo Porcaro
Brain Sci. 2021, 11(1), 75; https://doi.org/10.3390/brainsci11010075 - 8 Jan 2021
Cited by 78 | Viewed by 13401
Abstract
Background: Brain-Computer Interfaces (BCIs) are becoming more reliable thanks to the advantages of Artificial Intelligence (AI). In particular, hybrid Deep Learning (hDL), which combines different DL algorithms, has gained momentum over the past five years. In this work, we present a review of hDL-based BCI starting from the seminal studies in 2015. Objectives: We reviewed 47 papers applying hDL to BCI systems, published between 2015 and 2020, extracting trends and highlighting aspects relevant to the topic. Methods: We queried four scientific search engines (Google Scholar, PubMed, IEEE Xplore and Elsevier Science Direct) and extracted several data items from each paper: the database used, kind of application, online/offline training, tasks used for the BCI, pre-processing methodology adopted, type of normalization used, which kinds of features were extracted, type of DL architecture used, number of layers implemented, and which optimization approach was used. All these items were then investigated one by one to uncover trends. Results: Our investigation reveals that Electroencephalography (EEG) has been the most used technique. Interestingly, despite the low Signal-to-Noise Ratio (SNR) of EEG data, which normally makes pre-processing mandatory, pre-processing was used in only 21.28% of the cases, suggesting that hDL is able to overcome this intrinsic drawback of EEG data. Temporal features seem to be the most effective, with 93.94% accuracy, while spatial-temporal features are the most used, in 33.33% of the cases investigated. The most used architecture has been the Convolutional Neural Network-Recurrent Neural Network (CNN-RNN), in 47% of the cases. Moreover, half of the studies used a low number of layers to achieve a good compromise between network complexity and computational efficiency. Significance: To give useful information to the scientific community, we make our summary table of hDL-based BCI papers available and invite the community to contribute to it directly. We indicate a list of open challenges, emphasizing the need to use neuroimaging techniques other than EEG, such as functional Near-Infrared Spectroscopy (fNIRS), and to investigate more deeply the advantages and disadvantages of pre-processing and their relationship with the accuracy obtained. To implement new combinations of architectures, such as RNN-based and Deep Belief Network (DBN)-based, it is necessary to better explore the frequency and temporal-frequency features of the data at hand. Full article
(This article belongs to the Special Issue A Decade of Brain Sciences)
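The CNN-RNN hybrid that the review identifies as the most used architecture can be sketched as a single forward pass in plain NumPy: a 1-D convolution extracts features from a multichannel EEG window, and a recurrent layer models their temporal evolution before a classification readout. This is an illustrative toy, not any reviewed model; all shapes, weights, and names are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels, stride=2):
    """Valid 1-D convolution with ReLU.
    x: (channels, time); kernels: (out_channels, channels, width)."""
    out_ch, _, width = kernels.shape
    steps = (x.shape[1] - width) // stride + 1
    y = np.zeros((out_ch, steps))
    for o in range(out_ch):
        for t in range(steps):
            y[o, t] = np.sum(kernels[o] * x[:, t * stride:t * stride + width])
    return np.maximum(y, 0.0)

def rnn_last_state(seq, Wx, Wh):
    """Plain tanh RNN over seq (time, features); returns final hidden state."""
    h = np.zeros(Wh.shape[0])
    for x_t in seq:
        h = np.tanh(Wx @ x_t + Wh @ h)
    return h

# toy EEG window: 8 channels, 128 time samples
x = rng.standard_normal((8, 128))
kernels = rng.standard_normal((16, 8, 5)) * 0.1   # CNN stage: local features
feat = conv1d_relu(x, kernels)                    # (16 features, 62 steps)
Wx = rng.standard_normal((32, 16)) * 0.1          # RNN input weights
Wh = rng.standard_normal((32, 32)) * 0.1          # RNN recurrent weights
h = rnn_last_state(feat.T, Wx, Wh)                # RNN stage: temporal modelling
Wo = rng.standard_normal((2, 32)) * 0.1           # 2-class BCI readout
logits = Wo @ h
```

The division of labour shown here, convolution for spatial-temporal feature extraction and recurrence for sequence modelling, is the rationale behind the CNN-RNN pairing's popularity in the surveyed papers.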
