Special Issue "Intelligent Data Sensing, Processing, Mining, and Communication"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 July 2023 | Viewed by 31102

Special Issue Editor

Special Issue Information

Dear Colleagues,

Electronics invites manuscript submissions in the area of intelligent data sensing, processing, mining, and communication. Developing smart methods to sense, collect, mine, and communicate data is important: it reduces the workload of those who use the data and allows sensing, collection, and dissemination to scale up. Mobile devices in particular are equipped with various sensing components; application examples include smartphones, wireless body sensors, smart sensing devices in manufacturing, and smart meters. Since these mobile devices sense and collect large volumes of data, they need adequate computing power to process, mine, and communicate them. These trends pose fundamental challenges for researchers and developers. Many open issues remain unresolved, such as how to achieve the expected performance in intelligent data sensing, collection, and processing; how to ensure data quality, reliability, security, and confidentiality during data collection, processing, and mining; and how to communicate data intelligently.

Other relevant aspects should also be investigated, such as computing effort, platforms, tools, service discovery, data management, and analytics for intelligent data collection and communication. These unresolved issues have become research hotspots, as they are critical to building robust and efficient applications for intelligent data sensing, collection, processing, mining, and dissemination in fixed and mobile computing. To tackle these issues and challenges, this Special Issue of Electronics will present innovative solutions and recent advances in the field of intelligent data sensing, collection, mining, and communication.

The scope of this Special Issue covers various real-world problems and their appropriate innovative solutions and recent advancements in the field of intelligent data detection, collection, mining, and communication, including (but not limited to):

  • Intelligent data sensing and collection;
  • Intelligent data processing and mining;
  • Scalable data and resource management;
  • Mobile-computing-based intelligent sensing and data collection, processing, mining, and communication;
  • Complexity of mobile computing for intelligent sensing as well as data collection and communication;
  • Mobile-computing-based intelligent sensing as well as data collection and communication in the cloud;
  • Large-scale data analysis in mobile-computing-based intelligent sensing as well as data collection and communication;
  • Knowledge and service discovery in mobile-computing-based sensing as well as data collection and communication with intelligence;
  • Business and societal applications of intelligent sensing as well as data collection and communication in mobile computing;
  • Big-data-based technologies and algorithms for data acquisition, processing, and mining;
  • Security and privacy issues;
  • IoT-based data sensing, processing, mining, and communication;
  • Intelligent integration and exploration of biomedical and industrial data;
  • Artificial-intelligence-driven data analytics;
  • Data acquisition techniques (RFID, sensors, etc.);
  • Communication, networking, optimization, and performance measurement of trustworthy systems.

Prof. Dr. Habib Hamam
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Electronics
  • Data Processing
  • Data Mining
  • Communication
  • Internet of Things
  • Cloud Computing
  • Fog Computing
  • Blockchains
  • Security
  • Privacy
  • Artificial Intelligence
  • RFID

Published Papers (22 papers)


Research


Article
A Deep-Ensemble-Learning-Based Approach for Skin Cancer Diagnosis
Electronics 2023, 12(6), 1342; https://doi.org/10.3390/electronics12061342 - 12 Mar 2023
Viewed by 397
Abstract
Skin cancer is one of the most widespread of the existing cancer types, and the detection of lesions at an early diagnostic stage has attracted substantial research attention. Artificial intelligence (AI)-based techniques have supported the early diagnosis of skin cancer through deep-learning-based convolutional neural networks (CNNs). However, current methods still struggle to detect melanoma in dermoscopic images. In this paper, we therefore propose an ensemble model that combines the vision of the EfficientNetV2S and Swin-Transformer models to detect the early focal zone of skin cancer: the former architecture yields greater accuracy, while the latter has the advantage of recognizing dark parts of an image. We modified the fifth block of the EfficientNetV2S model and incorporated the Swin-Transformer model. Our experiments demonstrate that the constructed ensemble attains higher accuracy than the individual models and also reduces losses compared with traditional strategies. The proposed model achieved an accuracy of 99.10%, a sensitivity of 99.27%, and a specificity of 99.80%.
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)
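The ensemble idea described in the abstract above, fusing the class probabilities of two classifiers, can be pictured as simple soft voting. The sketch below is a hypothetical simplification, not the authors' modified EfficientNetV2S/Swin-Transformer architecture; the probability values and the equal weighting are invented for illustration.

```python
def ensemble_predict(probs_a, probs_b, w=0.5):
    """Soft voting: average class probabilities from two models, then argmax."""
    fused = [w * a + (1 - w) * b for a, b in zip(probs_a, probs_b)]
    return max(range(len(fused)), key=fused.__getitem__)

# Hypothetical per-class outputs for [benign, malignant]
cnn_probs = [0.62, 0.38]          # e.g. a CNN-style head
transformer_probs = [0.35, 0.65]  # e.g. a transformer-style head
print(ensemble_predict(cnn_probs, transformer_probs))
```

In practice the weight `w` would itself be validated; equal weighting is only the simplest choice.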

Article
SHO-CNN: A Metaheuristic Optimization of a Convolutional Neural Network for Multi-Label News Classification
Electronics 2023, 12(1), 113; https://doi.org/10.3390/electronics12010113 - 27 Dec 2022
Cited by 2 | Viewed by 1376
Abstract
News media aim to inform the public at large, and the significance of understanding the semantics of news coverage is hard to overstate. Traditionally, a news text is assigned to a single category; however, a piece of news may contain information from more than one domain. This paper proposes a multi-label text classification model for news: an automated expert system designed to optimize a CNN's classification of multi-label news items. The performance of a CNN is highly dependent on its hyperparameters, and manually tweaking their values is a cumbersome and inefficient task. The spotted hyena optimizer (SHO), a high-level metaheuristic optimization algorithm, offers advanced exploration and exploitation capabilities: it generates a collection of solutions, each a set of hyperparameters to be optimized, and the process is repeated until the desired optimum is reached. SHO is integrated to automate the tuning of the CNN's hyperparameters, including the learning rate, momentum, number of epochs, batch size, dropout, number of nodes, and activation function. Four publicly available news datasets are used to evaluate the proposed model. The tuned hyperparameters and higher convergence rate of the proposed model yield higher performance for multi-label news classification than a baseline CNN and other CNN optimizations, with accuracies of 93.6%, 90.8%, 68.7%, and 95.4% on RCV1-v2, Reuters-21578, Slashdot, and NELA-GT-2019, respectively.
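The hyperparameter-tuning loop described above can be pictured, in greatly simplified form, as a search over a configuration space. The sketch below substitutes plain random search and a made-up surrogate objective for SHO and real CNN training; the search space and scoring function are illustrative assumptions only.

```python
import random

random.seed(0)

# Hypothetical search space (a real CNN would have more dimensions).
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [16, 32, 64],
    "dropout": [0.2, 0.3, 0.5],
}

def surrogate_score(cfg):
    # Stand-in for the validation accuracy of a trained CNN (invented).
    return 1.0 - abs(cfg["learning_rate"] - 1e-3) - abs(cfg["dropout"] - 0.3)

def random_search(n_iter=50):
    """Keep the best-scoring configuration seen over n_iter random draws."""
    best_cfg, best = None, float("-inf")
    for _ in range(n_iter):
        cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        s = surrogate_score(cfg)
        if s > best:
            best_cfg, best = cfg, s
    return best_cfg

print(random_search())
```

A metaheuristic such as SHO replaces the blind `random.choice` draws with population-based exploration and exploitation, but the evaluate-and-keep-the-best loop is the same.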

Article
Handling Missing Values Based on Similarity Classifiers and Fuzzy Entropy Measures
Electronics 2022, 11(23), 3929; https://doi.org/10.3390/electronics11233929 - 28 Nov 2022
Cited by 1 | Viewed by 547
Abstract
Handling missing values (MVs) and feature selection (FS) are vital preprocessing tasks for many pattern recognition, data mining, and machine learning (ML) applications, involving classification and regression problems. The existence of MVs in data adversely affects decision making, so MVs have to be treated as a critical problem during preprocessing. To this end, the authors propose a new algorithm for handling MVs using FS. Bayesian ridge regression (BRR) is the most beneficial type of Bayesian regression; BRR estimates a probabilistic model of the regression problem. The proposed algorithm is dubbed cumulative Bayesian ridge with similarity and Luca's fuzzy entropy measure (CBRSL). CBRSL shows how the fuzzy entropy FS used for selecting the candidate feature holding MVs aids the prediction of the MVs within the selected feature using the Bayesian ridge technique. CBRSL can be utilized to handle MVs within other features in cumulative order: the filled features are incorporated into the BRR equation in order to predict the MVs for the next selected incomplete feature. An experimental analysis was conducted on four datasets holding MVs generated from three missingness mechanisms to compare CBRSL with state-of-the-art practical imputation methods. Performance was measured in terms of the R2 score (coefficient of determination), RMSE (root mean square error), and MAE (mean absolute error). The experimental results indicate that accuracy and execution times differ depending on the amount of MVs, the dataset's size, and the type of missingness mechanism. In addition, the results show that CBRSL can handle MVs generated from any missingness mechanism with accuracy competitive against the compared methods.
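The core idea of predicting a missing feature from already-complete ones can be reduced to a toy single-predictor ridge imputation. This is a pure-Python sketch under strong simplifying assumptions (one predictor, no intercept, no fuzzy-entropy feature selection or similarity classifier), not the CBRSL algorithm itself; the field names and numbers are invented.

```python
def ridge_slope(xs, ys, lam=1.0):
    """Ridge estimate (no intercept) of y ~ w*x: w = sum(xy) / (sum(x^2) + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

def impute_column(rows, target, predictor):
    """Fill missing values (None) in `target` from `predictor` via ridge."""
    known = [(r[predictor], r[target]) for r in rows if r[target] is not None]
    w = ridge_slope([x for x, _ in known], [y for _, y in known])
    for r in rows:
        if r[target] is None:
            r[target] = w * r[predictor]
    return rows

data = [
    {"age": 30, "bp": 120.0},
    {"age": 40, "bp": 160.0},
    {"age": 35, "bp": None},  # to be imputed
]
impute_column(data, "bp", "age")
print(round(data[2]["bp"], 1))
```

In a cumulative scheme like the one the abstract describes, the freshly filled column would then join the predictors for the next incomplete feature.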

Article
Brain Tumor Classification and Detection Using Hybrid Deep Tumor Network
Electronics 2022, 11(21), 3457; https://doi.org/10.3390/electronics11213457 - 25 Oct 2022
Cited by 3 | Viewed by 1591
Abstract
Brain tumors (BTs) are among the deadliest, most destructive, and most aggressive diseases, shortening the average life span of patients. Patients with misdiagnosed or insufficiently treated BTs have a lower chance of survival. For tumor analysis, magnetic resonance imaging (MRI) is often utilized. However, due to the vast data produced by MRI, manual segmentation in a reasonable period of time is difficult, which limits the application of standard criteria in clinical practice, so efficient and automated segmentation techniques are required. The accurate early detection and segmentation of BTs is a difficult and challenging task in biomedical imaging, and automated segmentation is complicated by the considerable temporal and anatomical variability of brain tumors; early detection and treatment are therefore essential. To detect brain tumors, various classical machine learning (ML) algorithms have been utilized, but their main difficulty is the reliance on manually extracted features. This research provides a hybrid deep learning model (DeepTumorNetwork) for binary BT classification that overcomes the above-mentioned problems. The proposed method hybridizes the GoogLeNet architecture with a CNN model by eliminating 5 layers of GoogLeNet and adding 14 CNN layers that extract features automatically. On the same Kaggle (Br35H) dataset, the proposed model's key performance indicators were compared to transfer learning (TL) models (ResNet, VGG-16, SqueezeNet, AlexNet, MobileNet V2) and other ML/DL approaches. The proposed approach outperformed them on the key performance indicators of BT classification (accuracy, recall, precision, and F1-score), achieving an accuracy of 99.51%, precision of 99%, recall of 98.90%, and F1-score of 98.50%, and demonstrating its superiority over recent sibling methods for BT classification using MRI images.

Article
A Large Scale Evolutionary Algorithm Based on Determinantal Point Processes for Large Scale Multi-Objective Optimization Problems
Electronics 2022, 11(20), 3317; https://doi.org/10.3390/electronics11203317 - 14 Oct 2022
Cited by 2 | Viewed by 604
Abstract
Global optimization challenges are frequent in scientific and engineering areas, where many evolutionary computation methods, e.g., differential evolution (DE) and particle swarm optimization (PSO), are employed to handle these problems. However, the performance of these algorithms declines as the problem dimension grows: in large-scale optimization, evolutionary algorithms are hindered from converging rapidly to the Pareto front. This work proposes a large-scale multi-objective evolutionary optimization scheme aided by determinantal point processes (LSMOEA-DPPs) to handle this problem. The proposed DPP model introduces a mechanism consisting of a kernel matrix and a probability model to balance convergence and population variety in high dimensions and keep the population diverse. We also employ elitist non-dominated sorting for environmental selection. The proposed algorithm is compared against four cutting-edge algorithms on problems with two and three objectives, respectively, and up to 2500 decision variables. The experimental results show that LSMOEA-DPPs outperforms the four cutting-edge multi-objective evolutionary algorithms by a large margin.
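The diversity-promoting role of a DPP kernel can be illustrated with greedy MAP selection: repeatedly pick the item that most increases the determinant of the selected kernel submatrix, so near-duplicates are avoided. This is a generic textbook-style sketch, not the LSMOEA-DPPs environmental-selection procedure; the kernel values below are invented.

```python
def det(m):
    """Determinant via Gaussian elimination with partial pivoting (pure Python)."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def greedy_dpp(K, k):
    """Greedily pick k items maximizing det of the selected kernel submatrix."""
    chosen = []
    for _ in range(k):
        best, best_d = None, -1.0
        for j in range(len(K)):
            if j in chosen:
                continue
            idx = chosen + [j]
            d = det([[K[a][b] for b in idx] for a in idx])
            if d > best_d:
                best, best_d = j, d
        chosen.append(best)
    return chosen

# Similarity kernel: items 0 and 1 are near-duplicates, item 2 differs.
K = [[1.0, 0.9, 0.1],
     [0.9, 1.0, 0.1],
     [0.1, 0.1, 1.0]]
print(greedy_dpp(K, 2))
```

With this kernel the greedy pass keeps items 0 and 2 rather than the redundant pair 0 and 1, which is exactly the behavior that keeps an evolutionary population diverse.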

Article
A Deep Learning-Based Approach for the Diagnosis of Acute Lymphoblastic Leukemia
Electronics 2022, 11(19), 3168; https://doi.org/10.3390/electronics11193168 - 02 Oct 2022
Viewed by 949
Abstract
Leukemia is a deadly disease caused by the overproduction of immature white blood cells (WBCs) in the bone marrow. If leukemia is detected at an early stage, the chances of recovery are better. Typically, morphological analysis for the identification of acute lymphoblastic leukemia (ALL) is performed manually on blood cells by skilled medical personnel, which has several disadvantages, including a shortage of medical personnel, sluggish analysis, and predictions that depend on the personnel's expertise. Therefore, we propose Multi-Attention EfficientNetV2S and EfficientNetB3, state-of-the-art deep learning architectures both pretrained on the large-scale ImageNet database, with a transfer-learning-based fine-tuning approach to distinguish normal from blast cells in microscopic blood smear images. We modified the last block of both models and added additional layers. Including this multi-attention mechanism not only reduces the models' complexity but also generalizes the networks well; with the proposed technique, accuracy improves and overall loss is minimized. Our Multi-Attention EfficientNetV2S and EfficientNetB3 models achieved 99.73% and 99.25% accuracy, respectively. We further compared the proposed models' performance to other individual and ensemble models; upon comparison, the proposed models outclassed the existing literature and other benchmark models, proving their efficiency.

Article
Blockchain-Enabled Decentralized Secure Big Data of Remote Sensing
Electronics 2022, 11(19), 3164; https://doi.org/10.3390/electronics11193164 - 01 Oct 2022
Cited by 4 | Viewed by 960
Abstract
Blockchain technology has emerged as a promising candidate for space exploration and sustainable energy systems. This transformative technology offers secure and decentralized strategies to process and manipulate space resources. Remote sensing provides viable potential with the coexistence of open data from various sources, such as short-range sensors on unmanned aerial vehicles (UAVs) or Internet-of-Things (IoT) tags and far-range sensors incorporated on satellites. Open data resources have most recently emerged as attractive connecting parties where owners have consented to share data. However, most data owners are anonymous and untrustworthy, which makes shared data potentially insecure and unreliable. At present, several tools distribute open data, serving as intermediate parties linking users with data owners. However, these platforms are operated by central authorities who develop guidelines for data ownership, integrity, and access, consequently restricting both users and data owners. Therefore, there is a need for a feasible decentralized system for sharing and retrieving data without involving these limiting intermediaries. This study proposes a blockchain-based system without any central authority to share and retrieve data. Our proposed system features (i) data sharing, (ii) maintenance of historical data, and (iii) retrieval and evaluation of data, along with enhanced security. We also discuss the use of blockchain algorithms based on smart contracts to track space transactions and communications in a secure, verifiable, and transparent manner. We tested the suggested framework in a Windows environment by writing a smart-contract prototype on an Ethereum TESTNET blockchain. The results of the study show that the suggested strategy is efficient, practicable, and free of common security attacks and vulnerabilities.
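The tamper-evidence property at the heart of such a design comes from hash chaining: each record commits to the hash of its predecessor, so any retroactive edit breaks verification. The minimal Python sketch below illustrates that property only; it is not the authors' Ethereum smart contracts, and the sensor names and readings are invented.

```python
import hashlib
import json

def make_block(payload, prev_hash):
    """Append-only record: each block commits to its predecessor's hash."""
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    return {"payload": payload, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(chain):
    """Recompute every hash and check each link to the previous block."""
    for i, blk in enumerate(chain):
        body = json.dumps({"payload": blk["payload"], "prev": blk["prev"]},
                          sort_keys=True)
        if blk["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block({"sensor": "uav-01", "reading": 42}, "0" * 64)
chain = [genesis,
         make_block({"sensor": "sat-07", "reading": 17}, genesis["hash"])]
print(verify_chain(chain))
```

A real blockchain adds consensus, signatures, and replication on top; the hash chain is only the integrity layer.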

Article
Efficient Elliptic Curve Operators for Jacobian Coordinates
Electronics 2022, 11(19), 3123; https://doi.org/10.3390/electronics11193123 - 29 Sep 2022
Viewed by 564
Abstract
A speed-up of group operations on elliptic curves is proposed using a new type of projective coordinate representation. These operations are the most common computations in key exchange and encryption for both current and post-quantum technology. The boost this improvement brings to computational efficiency impacts not only encryption efforts but also attacks; to maintain security, the community should take note of this development, as it may require changes in the key sizes of various algorithms. Our proposed projective representation can be viewed as a warp on the Jacobian projective coordinates, or as a new operation replacing the addition in a Jacobian projective representation, basically yielding a new group with the same algebra elements that is homomorphic to the original. Efficient algorithms are introduced for computing the expression kP + Q, where P and Q are points on the curve and k is an integer; they exploit optimized versions for particular values of k. Measurements of the numbers of basic computer instructions needed for operations based on the new representation show clear improvements. The experiments are based on benchmarks selected using standard NIST elliptic curves.
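For background, computing kP + Q is typically done by double-and-add scalar multiplication followed by one extra point addition. The sketch below uses plain affine coordinates over a toy curve, not the paper's new projective (Jacobian-style) representation; the curve parameters and base point are illustrative assumptions, and the curve is far too small to be cryptographically meaningful.

```python
# Toy curve y^2 = x^3 + 2x + 3 over F_97 (illustration only).
P_MOD, A = 97, 2
O = None  # point at infinity

def inv(x):
    # Modular inverse via Fermat's little theorem (P_MOD is prime).
    return pow(x, P_MOD - 2, P_MOD)

def add(p, q):
    """Affine point addition (handles doubling and the identity)."""
    if p is O:
        return q
    if q is O:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O
    if p == q:
        s = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        s = (y2 - y1) * inv(x2 - x1) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def k_p_plus_q(k, p, q):
    """Compute kP + Q: double-and-add, then one extra addition."""
    acc = O
    while k:
        if k & 1:
            acc = add(acc, p)
        p = add(p, p)
        k >>= 1
    return add(acc, q)

P = (3, 6)  # 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97), so P is on the curve
print(k_p_plus_q(5, P, P))
```

Projective (e.g., Jacobian) coordinates avoid the per-addition modular inversion in `inv`, which is exactly where representations like the paper's gain their speed.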

Article
A DDoS Vulnerability Analysis System against Distributed SDN Controllers in a Cloud Computing Environment
Electronics 2022, 11(19), 3120; https://doi.org/10.3390/electronics11193120 - 29 Sep 2022
Viewed by 732
Abstract
Software-Defined Networking (SDN) is now a well-established approach in 5G, the Internet of Things (IoT), and cloud computing. The primary idea behind its immense popularity is the separation of its underlying intelligence from the data-carrying components such as routers and switches. The intelligence of an SDN-based network lies in a central point, popularly known as the SDN controller: the central control hub of the network, with full privileges and a global view over the entire network. Providing security to SDN controllers is therefore an important task, and whoever implements SDN in their data center or network is required to secure the SDN controllers. Several attacks are hindering the exponential growth of SDN, and one such attack is the Distributed Denial of Service (DDoS) attack. Over the past couple of years, several new SDN controllers have become available. Among many, Open Networking Operating System (ONOS) and OpenDayLight (ODL) are two popular SDN controllers laying the foundation for many others; they are now used by numerous businesses, including Cisco, Juniper, IBM, and Google. In this paper, a vulnerability analysis is carried out against DDoS attacks on the latest released versions of both the ODL and ONOS SDN controllers in real-time cloud data centers. For this, we considered distributed SDN controllers (located at different sites) on two different clouds (AWS and Azure), connected through the Internet and working on different networks. DDoS attacks were launched against the distributed SDN controllers, and their vulnerability was analyzed. Experiments showed that, under five different scenarios in which the amount of malicious traffic generated was gradually increased, the ODL three-node cluster controller performed better than ONOS. In terms of disk utilization, memory utilization, and CPU utilization, the ODL three-node cluster was also well ahead of the ONOS controller.

Article
BTH: Behavior-Based Structured Threat Hunting Framework to Analyze and Detect Advanced Adversaries
Electronics 2022, 11(19), 2992; https://doi.org/10.3390/electronics11192992 - 21 Sep 2022
Viewed by 967
Abstract
Organizations of every size and industry are facing a new normal. Adversaries have become more sophisticated and persistent than ever before, and every network faces never-ending onslaughts. Yet many organizations continue to rely on signature-based reactive threat detection and mitigation solutions as the primary line of defense against new-age, cutting-edge attacks, which even conventional attacks can bypass. This means legacy protection solutions leave the organization's data vulnerable to damage, destruction, and theft. Adversarial attacks are like ocean waves: very persistent, arriving in campaigns. Sometimes the waves, in our case attacks, look the same, and indicators of compromise (IoCs) detect them effectively; sometimes the waves change and continue to look different, especially over time. If defenders can recognize what generates those attacks, and the conditions behind them, then threat and attack detection can have a longer-lasting effect. This study focuses on the behaviors and habits of attackers, which provide better, longer-lasting results when matching adversarial profiles than IoCs alone. The paper presents a unique framework for behavior-based structured threat hunting to deliver rapid, consistent remediation against emerging threats and malware on systems and networks.
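The profile-matching idea, matching an attacker's ordered sequence of behaviors rather than isolated IoCs, can be sketched as an in-order subsequence check over an event log. The profile names and event labels below are invented stand-ins, not part of the paper's BTH framework.

```python
# Hypothetical behavior profiles (MITRE ATT&CK-style step names as stand-ins).
PROFILES = {
    "apt-alpha": ["phishing", "powershell", "lateral-movement", "exfiltration"],
    "commodity": ["drive-by", "cryptomining"],
}

def is_subsequence(pattern, events):
    """True if `pattern` occurs in order (not necessarily adjacent) in `events`."""
    it = iter(events)
    return all(step in it for step in pattern)  # `in` consumes the iterator

def match_profiles(events):
    """Return every profile whose behavior sequence appears in the event log."""
    return [name for name, seq in PROFILES.items() if is_subsequence(seq, events)]

observed = ["phishing", "recon", "powershell", "persistence",
            "lateral-movement", "exfiltration"]
print(match_profiles(observed))
```

Unlike an IoC match on a single hash or IP, the ordered-behavior match still fires when individual artifacts change but the campaign's habits stay the same, which is the point the abstract makes.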

Article
A Novel Anomaly Detection System on the Internet of Railways Using Extended Neural Networks
Electronics 2022, 11(18), 2813; https://doi.org/10.3390/electronics11182813 - 06 Sep 2022
Cited by 1 | Viewed by 860
Abstract
The Internet of Railways (IoR) network is made up of a variety of sensors, actuators, network layers, and communication systems that work together to build a railway system. The IoR's success depends on effective communication: a railway network uses a variety of protocols to share and transmit information. Because of the widespread usage of wireless technology on trains, the entire system is susceptible to hacks, which could lead to harmful behavior on the Internet of Railways if sensitive data spreads to an infected network or a fake user. For the past few years, spotting IoR attacks has been incredibly challenging. To detect malicious intrusions, models based on machine learning and deep learning must still contend with the problem of feature selection; for this reason, k-means clustering has been used for feature scoring and ranking. To categorize attacks in two datasets, the Internet of Railways dataset and the University of New South Wales dataset, we employed a new neural network model, the extended neural network (ENN), whose strengths include accuracy and precision. The feature-scoring technique performed well with our proposed ENN model. The most accurate models on dataset 1 (UNSW-NB15) were based on deep neural networks (DNNs) (92.2%), long short-term memory (LSTM) (90.9%), and ENN (99.7%). On the second dataset (IoR dataset), ENN yielded the highest accuracy (99.3%), followed by CNN (87%), LSTM (89%), and DNN (82.3%).
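The k-means-based feature scoring mentioned above can be pictured with a tiny one-dimensional two-means: features whose values split into two well-separated clusters (say, benign vs. attack traffic) score higher than features that do not. This is a hypothetical sketch with invented feature values, not the paper's exact scoring procedure.

```python
def kmeans_1d(xs, iters=20):
    """Tiny 1-D 2-means; returns (centroids, labels)."""
    c = [min(xs), max(xs)]  # simple deterministic initialization
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [0 if abs(x - c[0]) <= abs(x - c[1]) else 1 for x in xs]
        for k in (0, 1):
            grp = [x for x, lab in zip(xs, labels) if lab == k]
            if grp:
                c[k] = sum(grp) / len(grp)
    return c, labels

def separation_score(xs):
    """Feature score: distance between the two cluster centers."""
    c, _ = kmeans_1d(xs)
    return abs(c[1] - c[0])

# Hypothetical traffic features: packet rate separates well, TTL does not.
features = {
    "pkt_rate": [10, 12, 11, 95, 99, 97],
    "ttl":      [64, 64, 63, 64, 63, 64],
}
ranked = sorted(features, key=lambda f: separation_score(features[f]),
                reverse=True)
print(ranked)
```

Features at the top of such a ranking would then be fed to the downstream classifier, with the rest discarded.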

Article
Intelligent Hybrid Deep Learning Model for Breast Cancer Detection
Electronics 2022, 11(17), 2767; https://doi.org/10.3390/electronics11172767 - 02 Sep 2022
Cited by 12 | Viewed by 1752 | Correction
Abstract
Breast cancer (BC) is a type of tumor that develops in the breast cells and is one of the most common cancers in women, the second most life-threatening disease after lung cancer. Early diagnosis and classification of BC are therefore very important; furthermore, manual detection is time-consuming, laborious work with the possibility of pathologist error and incorrect classification. To address these issues, this paper presents a hybrid deep learning (CNN-GRU) model for the automatic detection of BC-IDC (+,−) using whole slide images (WSIs) from the well-known PCam Kaggle dataset. The proposed model uses different layers of CNN and GRU architectures to detect breast IDC (+,−) cancer. Validation tests for quantitative results were carried out using accuracy (Acc), precision (Prec), sensitivity (Sens), specificity (Spec), AUC, and F1-score. The proposed model shows strong performance measures (accuracy 86.21%, precision 85.50%, sensitivity 85.60%, specificity 84.71%, F1-score 88%, and AUC 0.89), mitigating pathologist error and the misclassification problem. Additionally, the efficiency of the proposed hybrid model was tested and compared with CNN-BiLSTM, CNN-LSTM, and current machine learning and deep learning (ML/DL) models, indicating that the proposed hybrid model is more robust than recent ML/DL approaches.
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)
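The measures reported in this abstract all derive from the binary confusion matrix. As a minimal sketch (the counts below are illustrative, not the paper's data):

```python
def metrics(tp, fp, tn, fn):
    """Compute the performance measures named in the abstract
    from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)      # recall / true-positive rate
    specificity = tn / (tn + fp)      # true-negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, specificity, f1

# illustrative counts only, not taken from the study
acc, prec, sens, spec, f1 = metrics(tp=86, fp=15, tn=84, fn=15)
```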

Article
IoMT-Based Platform for E-Health Monitoring Based on the Blockchain
Electronics 2022, 11(15), 2314; https://doi.org/10.3390/electronics11152314 - 25 Jul 2022
Cited by 15 | Viewed by 1341
Abstract
With the evolution of information technology, use of the Internet of Things (IoT) has increased, affecting several areas such as the medical field, smart cities, and information systems. In this work, we apply this technological development to health, particularly e-health, and present an IoMT-based platform for monitoring patients' health. To meet the constraints of medical secrecy and confidentiality of information, we use the Blockchain as a secure system. Our system uses data collected by several smart sensors, such as blood pressure, SpO2 concentration, and EEG signals. These encrypted data are collected by an embedded Raspberry Pi 4 platform (working as a smart data relay) before being processed (on a backend server) and then saved in an embedded Blockchain node. The preliminary results show the effectiveness of the proposed platform as a candidate low-cost example of a secured Electronic Health Record (EHR).
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)
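The tamper-evidence property that motivates storing records in a blockchain node comes from hash chaining: each block commits to the previous block's hash. A stdlib-only sketch of that idea, not the paper's implementation (the record fields are hypothetical):

```python
import hashlib
import json

def add_block(chain, record):
    """Append a sensor record to a hash-linked chain. Each block
    commits to the previous block's hash, so altering any stored
    reading breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; True only if no block was altered."""
    prev = "0" * 64
    for block in chain:
        body = {"record": block["record"], "prev": block["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != digest:
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, {"sensor": "SpO2", "value": 97})
add_block(chain, {"sensor": "BP", "value": "120/80"})
```

A production EHR ledger would add signatures, consensus, and access control on top of this chaining primitive.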

Article
Big Data Analysis Framework for Water Quality Indicators with Assimilation of IoT and ML
Electronics 2022, 11(13), 1927; https://doi.org/10.3390/electronics11131927 - 21 Jun 2022
Cited by 7 | Viewed by 1365
Abstract
According to the United Nations, Sustainable Development Goal 6 seeks to ensure the availability and sustainable management of water for all. Digital technologies such as big data, the Internet of Things (IoT), and machine learning (ML) have a significant role and capability in meeting this goal. Water quality analysis in any region is critical to identifying and understanding the standard of water quality, and the quality of water is analyzed based on water quality parameters (WQP). Currently, water pollution and the scarcity of water are two major concerns in the region of Uttarakhand, and the analysis of water before it is supplied for human consumption has gained attention. In this study, a big data analytics framework is proposed to analyze the water quality parameters of 13 districts of Uttarakhand and find the correlation among the parameters with the assimilation of IoT and ML. During the analysis, statistical and fractal methods are implemented to understand the anomalies between the water quality parameters across the 13 districts. The variation in WQP is analyzed using a random forest (RF) model; the dataset is segmented location-wise, and the mean, mode, standard deviation, median, kurtosis, and skewness of the time series datasets are examined. The mean of each parameter is adjusted with the coefficient of variation based on the standard values of that parameter. The turbidity in almost all the experimental sites has a normal distribution, with the lowest mean value (0.352 mg/L) and highest (11.9 mg/L) in the Pauri Garhwal and Almora districts, respectively. The pH of the water samples is observed to be in the standard range in all the experimental sites, with average and median values being nearly identical, at 7.189 and 7.20, respectively, although the pH mode is 0.25. The Cl concentration varies, with mean values from the lowest (0.46 mg/L) to the highest (35.2 mg/L) over the experimental sites, i.e., the Bageshwar and Rudraprayag districts, respectively. Based on the analysis, it was concluded that the water samples were safe to drink and in healthy condition in almost all the districts of the state of Uttarakhand, except for the Haridwar district, where some increase in contaminants was observed.
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)
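The descriptive statistics named in the abstract (mean, median, mode, standard deviation, coefficient of variation, skewness, kurtosis) can be sketched with the standard library alone. The readings below are illustrative, not the study's data:

```python
import statistics as st

def describe(samples):
    """Summary statistics of one water-quality parameter,
    mirroring those examined in the study."""
    mean = st.mean(samples)
    sd = st.stdev(samples)
    n = len(samples)
    skew = sum((x - mean) ** 3 for x in samples) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in samples) / (n * sd ** 4) - 3
    return {
        "mean": mean,
        "median": st.median(samples),
        "mode": st.mode(samples),
        "stdev": sd,
        "cv": sd / mean,          # coefficient of variation
        "skewness": skew,
        "kurtosis": kurt,         # excess kurtosis
    }

# illustrative turbidity readings (mg/L)
stats = describe([0.4, 0.5, 0.4, 0.6, 0.5, 0.4, 11.9])
```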

Article
Multi-Objective Quantum-Inspired Seagull Optimization Algorithm
Electronics 2022, 11(12), 1834; https://doi.org/10.3390/electronics11121834 - 09 Jun 2022
Cited by 4 | Viewed by 1006
Abstract
Objective solutions of multi-objective optimization problems (MOPs) are required to balance convergence and distribution to the Pareto front. This paper proposes a multi-objective quantum-inspired seagull optimization algorithm (MOQSOA) to optimize the convergence and distribution of solutions in multi-objective optimization problems. The proposed algorithm adopts opposite-based learning, the migration and attacking behavior of seagulls, grid ranking, and the superposition principles of quantum computing. To obtain a better initialized population in the absence of a priori knowledge, an opposite-based learning mechanism is used for initialization. The proposed algorithm uses nonlinear migration and attacking operation, simulating the behavior of seagulls for exploration and exploitation. Moreover, the real-coded quantum representation of the current optimal solution and quantum rotation gate are adopted to update the seagull population. In addition, a grid mechanism including global grid ranking and grid density ranking provides a criterion for leader selection and archive control. The experimental results of the IGD and Spacing metrics performed on ZDT, DTLZ, and UF test suites demonstrate the superiority of MOQSOA over NSGA-II, MOEA/D, MOPSO, IMMOEA, RVEA, and LMEA for enhancing the distribution and convergence performance of MOPs.
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)
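The opposite-based learning initialization mentioned in the abstract pairs each random candidate x in [lb, ub] with its opposite lb + ub − x and keeps the fitter of the two. A schematic sketch under simplifying assumptions: a single-objective sphere function stands in for fitness, whereas the paper works with multi-objective test suites and grid ranking:

```python
import random

def obl_init(pop_size, dim, lb, ub, fitness):
    """Opposition-based initialization: draw a random candidate,
    form its opposite (lb + ub - x per dimension), keep the fitter."""
    population = []
    for _ in range(pop_size):
        x = [random.uniform(lb, ub) for _ in range(dim)]
        x_opp = [lb + ub - xi for xi in x]
        population.append(min(x, x_opp, key=fitness))
    return population

# placeholder single-objective fitness (sphere), minimization
sphere = lambda v: sum(c * c for c in v)
pop = obl_init(pop_size=10, dim=3, lb=0.0, ub=10.0, fitness=sphere)
```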

Article
A Hybrid Deep Learning-Based Approach for Brain Tumor Classification
Electronics 2022, 11(7), 1146; https://doi.org/10.3390/electronics11071146 - 05 Apr 2022
Cited by 33 | Viewed by 3036
Abstract
Brain tumors (BTs) are spreading very rapidly across the world, and every year thousands of people die from them. Therefore, accurate detection and classification are essential in the treatment of brain tumors. Numerous research techniques have been introduced for BT detection and classification based on traditional machine learning (ML) and deep learning (DL). Traditional ML classifiers require hand-crafted features, which is very time-consuming. By contrast, DL is very robust in feature extraction and has recently been widely used for classification and detection purposes. Therefore, in this work, we propose a hybrid deep learning model called DeepTumorNet for the classification of three types of brain tumors (glioma, meningioma, and pituitary tumor) by adapting a basic convolutional neural network (CNN) architecture. The GoogLeNet architecture of the CNN model was used as a base; while developing the hybrid DeepTumorNet approach, the last 5 layers of GoogLeNet were removed and 15 new layers were added in their place. Furthermore, we utilized a leaky ReLU activation function in the feature map to increase the expressiveness of the model. The proposed model was tested on a publicly available research dataset and obtained 99.67% accuracy, 99.6% precision, 100% recall, and a 99.66% F1-score. The proposed methodology obtained the highest accuracy compared with the state-of-the-art classification results obtained with AlexNet, ResNet50, DarkNet53, ShuffleNet, GoogLeNet, SqueezeNet, ResNet101, Xception, and MobileNetV2, showing its superiority over the existing models for BT classification from MRI images.
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)
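The leaky ReLU activation referred to above replaces the zero slope of plain ReLU on negative inputs with a small slope, so negative activations keep a nonzero gradient. A minimal sketch (the slope 0.01 is a common default; the paper's exact value is not stated here):

```python
def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: identity for x >= 0, small slope alpha for x < 0."""
    return x if x >= 0 else alpha * x

# element-wise over a few sample activations
outputs = [leaky_relu(v) for v in (-2.0, 0.0, 3.0)]  # [-0.02, 0.0, 3.0]
```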

Article
Forensic Analysis on Internet of Things (IoT) Device Using Machine-to-Machine (M2M) Framework
Electronics 2022, 11(7), 1126; https://doi.org/10.3390/electronics11071126 - 02 Apr 2022
Cited by 19 | Viewed by 2917
Abstract
The versatility of IoT devices increases the probability of continuous attacks on them, and their low processing power and low memory have made it difficult for security analysts to keep records of the various attacks performed on these devices during forensic analysis. Forensic analysis estimates how much damage has been done to the devices by various attacks. In this paper, we propose an intelligent forensic analysis mechanism that automatically detects attacks performed on IoT devices using a machine-to-machine (M2M) framework. The M2M framework has been developed using different forensic analysis tools and machine learning to detect the type of attack. Additionally, the problem of evidence acquisition (attacks on IoT devices) has been resolved by introducing a third-party logging server. Forensic analysis is also performed on the logs using a forensic server (Security Onion) to determine the effect and nature of the attacks. The proposed framework incorporates different machine learning (ML) algorithms for the automatic detection of attacks, and the performance of these models is measured in terms of accuracy, precision, recall, and F1-score. The results indicate that the decision tree algorithm shows the best performance compared with the other algorithms, and the comprehensive performance analysis presented validates the proposed model.
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)
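The automatic detection step reduces to supervised classification over features extracted from the collected logs. A hand-rolled one-level decision tree (stump) illustrates the decision-tree idea in miniature; the packets-per-second feature and its values are hypothetical, not the paper's features:

```python
def train_stump(samples, labels):
    """Fit a one-level decision tree on a single numeric feature:
    choose the threshold that minimizes training errors. A toy
    stand-in for a full decision-tree classifier."""
    best = None
    for t in sorted(set(samples)):
        preds = [1 if s >= t else 0 for s in samples]
        errs = sum(p != y for p, y in zip(preds, labels))
        if best is None or errs < best[1]:
            best = (t, errs)
    threshold = best[0]
    return lambda s: 1 if s >= threshold else 0

# hypothetical feature: packets/sec observed at the logging server
rates  = [10, 12, 15, 900, 1200, 950]
labels = [0, 0, 0, 1, 1, 1]          # 1 = attack traffic
clf = train_stump(rates, labels)
```

A real pipeline would grow a full tree over many log-derived features; the stump shows only the threshold-splitting principle.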

Article
Automated Detection of Alzheimer’s via Hybrid Classical Quantum Neural Networks
Electronics 2022, 11(5), 721; https://doi.org/10.3390/electronics11050721 - 26 Feb 2022
Cited by 8 | Viewed by 2573
Abstract
Deep neural networks have offered numerous innovative solutions to brain-related diseases, including Alzheimer's. However, there are still a few aspects of diagnosis and planning that can be transformed via quantum machine learning (QML). In this study, we present a hybrid classical–quantum machine learning model for the detection of Alzheimer's using 6400 labeled MRI scans with two classes. Hybrid classical–quantum transfer learning is used, which makes it possible to optimally pre-process complex and high-dimensional data: classical neural networks extract high-dimensional features and embed informative feature vectors into a quantum processor. We use ResNet34 to extract features from the image and feed a 512-feature vector to our quantum variational circuit (QVC) to generate a four-feature vector for precise decision boundaries. The Adam optimizer is used to exploit the adaptive learning rate corresponding to each parameter based on first- and second-order gradients. Furthermore, to validate the model, different quantum simulators (PennyLane, qiskit.aer, and qiskit.basicaer) are used for the detection of the demented and non-demented images. The learning rate is set to 10^−4, with an optimized quantum depth of six layers, resulting in a training accuracy of 99.1% and a classification accuracy of 97.2% for 20 epochs. The hybrid classical–quantum network significantly outperformed the classical network, as the classification accuracy achieved by the classical transfer learning model was 92%. Thus, a hybrid transfer-learning model is used for binary detection, in which a quantum circuit improves the performance of a pre-trained ResNet34 architecture. This work therefore offers a method for selecting an optimal approach for detecting Alzheimer's disease; the proposed model not only allows for automated detection but would also speed up the process significantly in clinical settings.
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)
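The classical-to-quantum hand-off described above compresses the 512-dimensional ResNet34 feature vector into the 4 values fed to the QVC. A schematic of that dimensionality-reduction step with a plain linear map; the random weights are purely illustrative (no trained model or quantum simulator is assumed):

```python
import random

def linear_reduce(features, out_dim, seed=0):
    """Project a feature vector down to out_dim values, mimicking
    the classical bottleneck that feeds the quantum variational
    circuit. Weights are random and illustrative only."""
    rng = random.Random(seed)
    in_dim = len(features)
    weights = [[rng.gauss(0, in_dim ** -0.5) for _ in range(in_dim)]
               for _ in range(out_dim)]
    return [sum(w * f for w, f in zip(row, features)) for row in weights]

rng = random.Random(1)
features_512 = [rng.random() for _ in range(512)]   # stand-in ResNet34 output
qvc_inputs = linear_reduce(features_512, out_dim=4)
```

In the paper's pipeline these four values would parameterize qubit rotations; here they are simply the reduced vector.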

Article
Deep Learning Models for Predicting Epileptic Seizures Using iEEG Signals
Electronics 2022, 11(4), 605; https://doi.org/10.3390/electronics11040605 - 16 Feb 2022
Cited by 5 | Viewed by 2012
Abstract
Epilepsy is a chronic neurological disease characterized, as defined by the World Health Organization, by excessive and uncontrolled electrical discharges in the brain. It is an anomaly that affects people of all ages. An electroencephalogram (EEG) of brain activity is a widely known reference method dedicated to studying epileptic seizures and recording the changes in the brain's electrical activity. The prediction and early detection of epilepsy are therefore necessary to provide timely preventive interventions that relieve patients from the harmful consequences of epileptic seizures. Despite decades of research, accurately predicting these seizures remains an unresolved problem. In this article, we propose five deep learning models on intracranial electroencephalogram (iEEG) datasets with the aim of automatically predicting epileptic seizures. The proposed models are based on a Convolutional Neural Network (CNN), the fusion of two CNNs (2-CNN), three CNNs (3-CNN), and four CNNs (4-CNN), and transfer learning with ResNet50. The experimental results show that our methods based on 3-CNN and 4-CNN give the best values, both achieving an accuracy of 95%. Finally, our proposed methods are compared with previous studies, confirming that seizure prediction performance was significantly improved.
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)
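Fusing multiple CNNs typically means combining their class-probability outputs. A minimal sketch of probability-level (late) fusion by averaging; the fusion strategy and the class labels are assumptions for illustration, and the paper's exact scheme may differ:

```python
def fuse_predictions(model_outputs):
    """Average per-class probabilities across models (late fusion),
    then pick the argmax class. model_outputs holds one probability
    vector per model, e.g. [p(interictal), p(preictal)]."""
    n_models = len(model_outputs)
    n_classes = len(model_outputs[0])
    fused = [sum(out[c] for out in model_outputs) / n_models
             for c in range(n_classes)]
    return fused, max(range(n_classes), key=fused.__getitem__)

# three hypothetical CNNs scoring one iEEG window
fused, label = fuse_predictions([[0.2, 0.8], [0.4, 0.6], [0.1, 0.9]])
```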

Review


Review
A Comparative Study of Software Defined Networking Controllers Using Mininet
Electronics 2022, 11(17), 2715; https://doi.org/10.3390/electronics11172715 - 29 Aug 2022
Cited by 5 | Viewed by 1767
Abstract
Software Defined Networking (SDN) is a relatively new networking architecture that has become the most widely discussed networking technology in recent years and the latest development in the field of digital networks. It aims to decouple the traditionally intertwined control plane and infrastructure (data) plane; the goal of this separation is to make resources more manageable, secure, and controllable. As a result, many controllers, such as Beacon, Floodlight, Ryu, OpenDayLight (ODL), Open Network Operating System (ONOS), NOX, and POX, have been developed. Selecting the best-fit controller has evolved into an application-specific task due to the large range of SDN applications and controllers. This paper discusses SDN, a new paradigm of networking in which the architecture transitions from a completely distributed form to a more centralized form, and evaluates and contrasts the effects of various SDN controllers on SDN. It examines several SDN controllers, the network's "brains", shows how they differ from one another, and compares them to see which is best overall. The performance of SDN controllers such as Ryu, ODL, and others is compared using the Mininet simulation environment. We present a variety of controllers before introducing the tool used in the paper, Mininet, and then run an experiment showing how to use ODL to establish a custom network topology in Mininet. The experimental results show that the ODL controller, with its larger bandwidth and reduced latency, outperforms the other controllers in all topologies (both the default topology and a custom topology with ODL).
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)

Review
Intelligent Load-Balancing Framework for Fog-Enabled Communication in Healthcare
Electronics 2022, 11(4), 566; https://doi.org/10.3390/electronics11040566 - 13 Feb 2022
Cited by 5 | Viewed by 1574
Abstract
The present technological era makes significant use of Internet-of-Things (IoT) devices for offering and implementing healthcare services. Post COVID-19, the future of the healthcare system is highly reliant upon the inculcation of Artificial Intelligence (AI) mechanisms in its day-to-day procedures, realized by using sensor-enabled smart and intelligent IoT devices to provide extensive care to patients relative to the symmetric concept. The offerings of such AI-enabled services include handling the huge amount of data processed and sensed by smart medical sensors without compromising performance parameters such as response time, latency, availability, cost, and processing time. This has resulted in a need to balance the load of the smart operational devices to avoid any failure of responsiveness. Thus, in this paper, a fog-based framework is proposed that can balance the load among fog nodes to handle the challenging communication and processing requirements of intelligent real-time applications.
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)
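In its simplest form, the load balancing described above routes each incoming request to the currently least-loaded fog node. A greedy sketch of that policy; the node names and per-request costs are hypothetical, and the paper's framework presumably weighs richer metrics such as latency and availability:

```python
import heapq

def assign_requests(node_names, request_costs):
    """Greedy least-loaded assignment: each incoming request goes
    to the fog node with the smallest current load (min-heap)."""
    heap = [(0.0, name) for name in node_names]   # (load, node)
    heapq.heapify(heap)
    placement = []
    for cost in request_costs:
        load, name = heapq.heappop(heap)          # least-loaded node
        placement.append(name)
        heapq.heappush(heap, (load + cost, name))
    return placement, {n: l for l, n in heap}

# hypothetical fog nodes and request processing costs (ms)
order, loads = assign_requests(["fog-a", "fog-b"], [5, 5, 2, 8])
```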

Other


Correction
Correction: Xiaomei et al. Intelligent Hybrid Deep Learning Model for Breast Cancer Detection. Electronics 2022, 11, 2767
Electronics 2022, 11(24), 4084; https://doi.org/10.3390/electronics11244084 - 08 Dec 2022
Viewed by 269
Abstract
In the published publication [...]
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)