Search Results (22)

Search Parameters:
Keywords = Google colaboratory

24 pages, 1347 KB  
Article
SecFedDNN: A Secure Federated Deep Learning Framework for Edge–Cloud Environments
by Roba H. Alamir, Ayman Noor, Hanan Almukhalfi, Reham Almukhlifi and Talal H. Noor
Systems 2025, 13(6), 463; https://doi.org/10.3390/systems13060463 - 12 Jun 2025
Cited by 4 | Viewed by 2720
Abstract
Cyber threats that target Internet of Things (IoT) and edge computing environments are growing in scale and complexity, which necessitates the development of security solutions that are both robust and scalable while also protecting privacy. Edge scenarios require new intrusion detection solutions because traditional centralized intrusion detection systems (IDSs) fall short in protecting data privacy, create excessive communication overhead, and show limited contextual adaptation capabilities. This paper introduces the SecFedDNN framework, which applies federated deep learning (FDL) to protect edge–cloud environments from cyberattacks such as Distributed Denial of Service (DDoS), Denial of Service (DoS), and injection attacks. SecFedDNN performs edge-level pre-aggregation filtering through Layer-Adaptive Sparsified Model Aggregation (LASA) for anomaly detection while supporting balanced multi-class evaluation across federated clients. A Deep Neural Network (DNN) serves as the main model and is trained concurrently across multiple clients through the Federated Averaging (FedAvg) protocol while keeping raw data local. We utilized Google Cloud Platform (GCP) along with Google Colaboratory (Colab) to create five federated clients for simulating attacks on the TON_IoT dataset, which we balanced across selected attack types. Initial tests showed that the DNN outperformed Long Short-Term Memory (LSTM) and SimpleNN in centralized environments by providing higher accuracy at lower computational cost. Following federated training, the SecFedDNN framework achieved average accuracy and precision above 84% and recall and F1-score above 82% across all clients, with response times suitable for real-time deployment. The study demonstrates that FDL can strengthen intrusion detection across distributed edge networks without compromising data privacy guarantees. Full article
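The FedAvg protocol referenced above can be summarized in a few lines: each client trains the shared DNN on its local data, and only the resulting weights are averaged on the server, so raw records never leave the edge. The sketch below is illustrative only and is not the authors' SecFedDNN code; the layer sizes, optimizer, and variable names are assumptions.

# Minimal FedAvg sketch (illustrative, not the SecFedDNN implementation).
import numpy as np
import tensorflow as tf

def build_model(n_features, n_classes):
    # Simple DNN; the architecture used in the paper may differ.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

def fedavg_round(global_weights, client_datasets, n_features, n_classes):
    """One communication round: local training on each client, then weight averaging."""
    client_weights, client_sizes = [], []
    for X, y in client_datasets:                      # raw data stays on the client
        model = build_model(n_features, n_classes)
        model.set_weights(global_weights)
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        model.fit(X, y, epochs=1, batch_size=64, verbose=0)
        client_weights.append(model.get_weights())
        client_sizes.append(len(X))
    total = sum(client_sizes)
    # Weighted average of every layer's parameters (the FedAvg update).
    return [np.sum([w[i] * (n / total) for w, n in zip(client_weights, client_sizes)], axis=0)
            for i in range(len(global_weights))]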

10 pages, 1059 KB  
Proceeding Paper
An Internet of Medical Things-Based Smart Electromyogram Device for Monitoring of Musculoskeletal Disorders
by Vijayalakshmi Sankaran, Paramasivam Alagumariappan, Malathy Sathyamoorthy, Rajesh Kumar Dhanaraj, Kamalanand Krishnamurthy and Emmanuel Cyril
Eng. Proc. 2024, 82(1), 108; https://doi.org/10.3390/ecsa-11-20351 - 25 Nov 2024
Cited by 3 | Viewed by 816
Abstract
Electromyography (EMG) is a technique that measures the electrical activity of the muscles, and it has been used extensively in the field of physiotherapy to assess muscle function and activity. Grading muscle power is an important aspect of assessing muscle function, as it provides information about the strength and endurance of muscles. Presently, physiotherapists use manual muscle testing (MMT) to grade muscle power; however, it requires a therapist with considerable expertise. In this work, a Smart EMG device based on the Internet of Medical Things (IoMT) is designed and developed for monitoring patients suffering from abnormal musculoskeletal health conditions. EMG signals are acquired from normal, healthy individuals as well as from patients with abnormal health conditions. Muscle power grading is used to grade the EMG signals, and a convolutional neural network (CNN)-based deep learning algorithm is utilized to visualize the progress of the course of treatment provided to patients with musculoskeletal problems, such as stroke and spinal cord injuries, among other issues. The entire analysis is performed using the Google Colaboratory IoT cloud platform, and the algorithms are coded in the Python programming language. The results demonstrate that the proposed IoMT-based smart device can predict different muscle power grades with an average accuracy of 97.5%, which proves the effectiveness of the device. This work appears to be of high clinical relevance, since the proposed device is capable of providing valuable information about muscle function and enables physiotherapists to design personalized treatment plans for patients with musculoskeletal disorders. Full article
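A CNN classifier of the kind described above can be prototyped compactly in Keras on Colab. The sketch below is a minimal illustration, not the authors' network; the window length, number of power grades, and layer sizes are assumptions.

# Minimal 1D-CNN sketch for classifying EMG windows into muscle power grades.
# Illustrative only; window length, number of grades, and layer sizes are assumptions.
import tensorflow as tf

WINDOW = 1000      # samples per EMG window (assumed)
N_GRADES = 5       # number of muscle power grades (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, kernel_size=7, activation="relu", input_shape=(WINDOW, 1)),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(N_GRADES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.2, epochs=30)  # X_train shape: (n, WINDOW, 1)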

20 pages, 5094 KB  
Article
Effect of Hyperparameter Tuning on the Performance of YOLOv8 for Multi Crop Classification on UAV Images
by Oluibukun Gbenga Ajayi, Pius Onoja Ibrahim and Oluwadamilare Samuel Adegboyega
Appl. Sci. 2024, 14(13), 5708; https://doi.org/10.3390/app14135708 - 29 Jun 2024
Cited by 17 | Viewed by 5388
Abstract
This study investigates the performance of YOLOv8, a Convolutional Neural Network (CNN) architecture, for multi-crop classification in a mixed farm with Unmanned Aerial Vehicle (UAV) imagery. Emphasizing hyperparameter optimization, specifically batch size, the study’s primary objective is to refine the model’s batch size for improved accuracy and efficiency in crop detection and classification. Using the Google Colaboratory platform, the YOLOv8 model was trained over various batch sizes (10, 20, 30, 40, 50, 60, 70, 80, and 90) to automatically identify the five classes (sugarcane, banana trees, spinach, pepper, and weeds) present in the UAV images. The performance of the model was assessed using classification accuracy, precision, and recall with the aim of identifying the optimal batch size. The results indicate a substantial improvement in classifier performance from batch sizes of 10 up to 60, while significant dips and peaks were recorded at batch sizes 70 to 90. Based on the analysis of the results, batch size 60 emerged with the best overall performance for automatic crop detection and classification. Although its F1 score was moderate, the combination of high accuracy, precision, and recall makes it the most balanced option. However, batch size 80 also shows very high precision (98%) and balanced recall (84%), which makes it suitable if the primary focus is on achieving high precision. The findings demonstrate the robustness of YOLOv8 for automatic crop identification and classification in a mixed-crop farm while highlighting the significant impact of tuning the batch size on the model’s overall performance. Full article
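A batch-size sweep of this kind can be scripted with the Ultralytics Python API on Colab. The snippet below is a sketch under assumed file names (the dataset config and pretrained weight file are hypothetical), not the authors' exact training script.

# Sketch of a YOLOv8 batch-size sweep with the Ultralytics API (file names are assumptions).
from ultralytics import YOLO

results = {}
for batch in (10, 20, 30, 40, 50, 60, 70, 80, 90):
    model = YOLO("yolov8n.pt")                    # pretrained weights; model variant is an assumption
    metrics = model.train(data="crops.yaml",      # hypothetical dataset config with the five classes
                          epochs=100, imgsz=640, batch=batch)
    results[batch] = metrics                      # compare precision/recall/mAP across batch sizes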

23 pages, 5126 KB  
Article
A Chlorophyll-a Concentration Inversion Model Based on Backpropagation Neural Network Optimized by an Improved Metaheuristic Algorithm
by Xichen Wang, Jianyong Cui and Mingming Xu
Remote Sens. 2024, 16(9), 1503; https://doi.org/10.3390/rs16091503 - 24 Apr 2024
Cited by 9 | Viewed by 2317
Abstract
Chlorophyll-a (Chl-a) concentration monitoring is very important for managing water resources and ensuring the stability of marine ecosystems. Due to their high operating efficiency and high prediction accuracy, backpropagation (BP) neural networks are widely used in Chl-a concentration inversion. However, BP neural networks tend to become stuck in local optima, and their prediction accuracy fluctuates significantly, which restricts their accuracy and stability in the inversion process. Studies have found that metaheuristic optimization algorithms can significantly mitigate these shortcomings by optimizing the initial parameters (weights and biases) of BP neural networks. In this paper, an adaptive nonlinear weight coefficient, the “Levy flight” path search strategy, and a dynamic crossover mechanism are introduced to optimize the three main steps of the Artificial Ecosystem Optimization (AEO) algorithm to overcome the algorithm’s limitations in solving complex problems, improve its global search capability, and thereby improve its performance in optimizing BP neural networks. Relying on Google Earth Engine and Google Colaboratory (Colab), a model for the inversion of Chl-a concentration in the coastal waters of Hong Kong was built to verify the performance of the improved AEO algorithm in optimizing BP neural networks, and the improved AEO algorithm proposed herein was compared with 17 different metaheuristic optimization algorithms. The results show that the Chl-a concentration inversion model based on a BP neural network optimized using the improved AEO algorithm is significantly superior to the other models in terms of prediction accuracy and stability, and the Chl-a concentrations retrieved by the model for the coastal waters of Hong Kong during heavy precipitation events and red tides are highly consistent with the measured values in both the time and space domains. This approach can provide a new method for Chl-a concentration monitoring and water quality management in coastal waters. Full article
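The "Levy flight" search step mentioned above is commonly drawn with Mantegna's algorithm and used to perturb a candidate solution, here a flattened vector of BP initial weights and biases. The sketch below is a generic illustration of that step under assumed parameters, not the authors' improved AEO code.

# Generic Lévy-flight perturbation (Mantegna's algorithm); illustrative only.
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    # Standard deviation of the numerator distribution in Mantegna's algorithm.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

# Example: perturb a candidate vector of initial BP weights/biases (dimensionality is assumed).
candidate = np.random.uniform(-1, 1, 100)
new_candidate = candidate + 0.01 * levy_step(candidate.size)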

28 pages, 5284 KB  
Article
IoT-Based Intrusion Detection System Using New Hybrid Deep Learning Algorithm
by Sami Yaras and Murat Dener
Electronics 2024, 13(6), 1053; https://doi.org/10.3390/electronics13061053 - 12 Mar 2024
Cited by 96 | Viewed by 13837
Abstract
Cyber attacks are the most significant threat that IoT networks may encounter. The most commonly encountered attacks among these threats are DDoS attacks. After an attack, the communication traffic of the network can be disrupted, and the energy of sensor nodes can quickly deplete. Therefore, detecting attacks as they occur is of great importance. Given the large number of sensor nodes in such a network, analyzing the network traffic data with traditional methods can become impossible, so the traffic must be analyzed in a big data environment. This study aims to analyze the obtained network traffic dataset in a big data environment and detect attacks in the network using a deep learning algorithm. The study is conducted using PySpark with Apache Spark in the Google Colaboratory (Colab) environment, with the Keras and Scikit-Learn libraries. The ‘CICIoT2023’ and ‘TON_IoT’ datasets are used for training and testing the model. The features in the datasets are reduced using the correlation method, ensuring that only significant features are included in the tests. A hybrid deep learning algorithm is designed using a one-dimensional CNN and LSTM. The developed method was compared with ten machine learning and deep learning algorithms, and the model’s performance was evaluated using the accuracy, precision, recall, and F1 metrics. An accuracy of 99.995% for binary classification and 99.96% for multiclass classification is achieved on the ‘CICIoT2023’ dataset. On the ‘TON_IoT’ dataset, a binary classification accuracy of 98.75% is reached. Full article
(This article belongs to the Section Artificial Intelligence)
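The hybrid architecture described above, a one-dimensional CNN feeding an LSTM, can be expressed compactly in Keras. The layer widths and the number of correlation-selected features below are assumptions, not the authors' configuration.

# Sketch of a hybrid 1D-CNN + LSTM intrusion detector in Keras (sizes are assumptions).
import tensorflow as tf

N_FEATURES = 40          # number of correlation-selected features (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu", input_shape=(N_FEATURES, 1)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary case; use softmax for multiclass
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])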

23 pages, 6256 KB  
Article
An Enhanced Python-Based Open-Source Particle Image Velocimetry Software for Use with Central Processing Units
by Ali Shirinzad, Khodr Jaber, Kecheng Xu and Pierre E. Sullivan
Fluids 2023, 8(11), 285; https://doi.org/10.3390/fluids8110285 - 27 Oct 2023
Cited by 4 | Viewed by 5126
Abstract
Particle Image Velocimetry (PIV) is a widely used experimental technique for measuring flow. In recent years, open-source PIV software has become more popular as it offers researchers and practitioners enhanced computational capabilities. Software development for graphical processing unit (GPU) architectures requires careful algorithm design and data structure selection for optimal performance. PIV software optimized for central processing units (CPUs) offers an alternative to specialized GPU software. In the present work, an improved algorithm for the OpenPIV–Python software (Version 0.25.1, OpenPIV, Tel Aviv-Yafo, Israel) is presented and implemented under a traditional CPU framework. The Python language was selected due to its versatility and widespread adoption. The algorithm was also tested on a supercomputing cluster, a workstation, and Google Colaboratory during the development phase. Using a known velocity field, the algorithm precisely captured the time-averaged flow, instantaneous velocity fields, and vortices. Full article
(This article belongs to the Special Issue Flow Visualization: Experiments and Techniques)
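At its core, window-based PIV estimates displacement by cross-correlating an interrogation window from frame A with the corresponding window from frame B and locating the correlation peak. The sketch below shows that core step with NumPy/SciPy; it is a conceptual illustration only, not the OpenPIV–Python algorithm itself, and the function name is hypothetical.

# Conceptual single-window PIV step: cross-correlate two interrogation windows
# and read the displacement from the correlation peak. Illustrative only.
import numpy as np
from scipy.signal import correlate2d

def window_displacement(win_a, win_b):
    """Return (dx, dy) displacement of win_b relative to win_a in pixels."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = correlate2d(b, a, mode="full")
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    dy = peak_y - (win_a.shape[0] - 1)   # zero lag sits at index (H-1, W-1) of the full output
    dx = peak_x - (win_a.shape[1] - 1)
    return dx, dy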

28 pages, 6697 KB  
Article
Prediction of Battery Remaining Useful Life Using Machine Learning Algorithms
by J. N. Chandra Sekhar, Bullarao Domathoti and Ernesto D. R. Santibanez Gonzalez
Sustainability 2023, 15(21), 15283; https://doi.org/10.3390/su152115283 - 25 Oct 2023
Cited by 42 | Viewed by 12357
Abstract
Electrified transportation systems are emerging quickly worldwide, helping to diminish carbon gas emissions and paving the way for reducing the effects of global warming. Battery remaining useful life (RUL) prediction is gaining attention in real-world applications as a way to cut maintenance expenses and improve system reliability and efficiency. RUL is a prominent component of fault forecasting and health management when the equipment’s operational life cycle is considered. Accurate RUL prediction is vital for ensuring the effectiveness of electric batteries and reducing the chance of battery failure. In assessing battery performance, the existing prediction approaches are unsatisfactory even though the battery operational parameters are well tabulated. In addition, battery management contributes to several sustainable development goals, such as Clean and Affordable Energy (SDG 7) and Climate Action (SDG 13). The current work attempts to increase prediction accuracy and robustness with selected machine learning algorithms. A real battery life-cycle dataset from the Hawaii National Energy Institute (HNEI) is used to evaluate estimation accuracy with the selected machine learning algorithms, validated in Google Colaboratory using Python. Error metrics such as Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), R-Squared, and execution time are computed for the different ML methods, and relevant inferences are presented which highlight the potential for battery RUL prediction close to the most accurate values. Full article
(This article belongs to the Special Issue Sustainable Development Goals: A Pragmatic Approach)
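The error metrics listed above are straightforward to compute with scikit-learn once a regressor has been fit. The snippet below is a generic sketch; the Random Forest model, placeholder data, and column layout are assumptions rather than the paper's exact setup.

# Generic sketch: fit a regressor for RUL and compute MSE, RMSE, MAE, R-squared, and runtime.
import time
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# X: battery operational features, y: remaining useful life (placeholder data for illustration)
X, y = np.random.rand(500, 6), np.random.rand(500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

start = time.time()
model = RandomForestRegressor(n_estimators=100, random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)

mse = mean_squared_error(y_te, pred)
print("MSE :", mse)
print("RMSE:", np.sqrt(mse))
print("MAE :", mean_absolute_error(y_te, pred))
print("R2  :", r2_score(y_te, pred))
print("time:", time.time() - start, "s")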

12 pages, 2278 KB  
Article
Accelerating Bayesian Estimation of Solar Cell Equivalent Circuit Parameters Using JAX-Based Sampling
by Kazuya Tada
Electronics 2023, 12(17), 3631; https://doi.org/10.3390/electronics12173631 - 28 Aug 2023
Cited by 2 | Viewed by 1828
Abstract
Equivalent circuit models that reproduce the current–voltage characteristics of solar cells are useful not only for gaining physical insight into the power loss mechanisms that take place in solar cells but also for designing systems that use renewable solar energy as a power source. As discussed in a previous paper, Bayesian estimation of equivalent circuit parameters avoids the drawbacks of nonlinear least-squares methods and offers advantages such as the possibility of evaluating estimation errors. However, it requires a long computation time because the estimated values are obtained by sampling using a Markov chain Monte Carlo method. In this paper, an attempt to accelerate the calculation by upgrading the Bayesian statistical package PyMC is presented. PyMC ver. 4, the successor to the PyMC3 used in the previous paper, supports the latest sampling libraries built on the machine learning framework JAX, in addition to PyMC-specific methods. The acceleration effect of JAX is remarkable, achieving a calculation time of less than 1/20 of that without JAX. Recommended calculation conditions are given based on the results of a number of trials, and a demonstration with testable Python code on Google Colaboratory using the recommended conditions is published on GitHub. Full article
(This article belongs to the Section Power Electronics)
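The JAX-backed sampler in PyMC ver. 4 is invoked through a dedicated sampling function rather than pm.sample. The toy sketch below shows the general pattern; the import path has changed across PyMC releases (pymc.sampling_jax in ver. 4.x, pymc.sampling.jax later), so treat the import and the toy model as assumptions and refer to the paper's GitHub code for the authors' conditions.

# Toy example of JAX/NumPyro NUTS sampling in PyMC ver. 4 (import path may vary by release).
import numpy as np
import pymc as pm
from pymc.sampling_jax import sample_numpyro_nuts   # assumed location in PyMC 4.x

data = np.random.normal(1.0, 0.5, size=200)          # placeholder observations

with pm.Model():
    mu = pm.Normal("mu", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=data)
    # JAX-accelerated NUTS instead of the default PyMC sampler:
    idata = sample_numpyro_nuts(draws=2000, tune=1000, chains=4)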

12 pages, 832 KB  
Data Descriptor
VPAgs-Dataset4ML: A Dataset to Predict Viral Protective Antigens for Machine Learning-Based Reverse Vaccinology
by Zakia Salod and Ozayr Mahomed
Data 2023, 8(2), 41; https://doi.org/10.3390/data8020041 - 17 Feb 2023
Cited by 2 | Viewed by 4031
Abstract
Reverse vaccinology (RV) is a computer-aided approach for vaccine development that identifies a subset of pathogen proteins as protective antigens (PAgs) or potential vaccine candidates. Machine learning (ML)-based RV is promising, but requires a dataset of PAgs (positives) and non-protective protein sequences (negatives). This study aimed to create an ML dataset, VPAgs-Dataset4ML, to predict viral PAgs based on PAgs obtained from Protegen. We performed seven steps to identify PAgs from the Protegen website and non-protective protein sequences from Universal Protein Resource (UniProt). The seven steps included downloading viral PAgs from Protegen, performing quality checks on PAgs using the standard BLASTp identity check ≤30% via MMseqs2, and computational steps running on Google Colaboratory and the Ubuntu terminal to retrieve and perform quality checks (similar to the PAgs) on non-protective protein sequences as negatives from UniProt. VPAgs-Dataset4ML contains 2145 viral protein sequences, with 210 PAgs in positive.fasta and 1935 non-protective protein sequences in negative.fasta. This dataset can be used to train ML models to predict antigens for various viral pathogens with the aim of developing effective vaccines. Full article
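The ≤30% identity filtering step mentioned above can be reproduced with MMseqs2's easy-cluster workflow plus a little Python glue. The call below is a generic sketch with placeholder file names, not the exact pipeline used to build VPAgs-Dataset4ML, and the flag choice is an assumption.

# Sketch: cluster protein sequences at 30% identity with MMseqs2 to remove redundancy.
# File names are placeholders; the exact flags used by the authors may differ.
import subprocess

subprocess.run(
    ["mmseqs", "easy-cluster",
     "positive_raw.fasta",      # input sequences (e.g., viral PAgs from Protegen)
     "pags_clustered",          # output prefix; representatives land in pags_clustered_rep_seq.fasta
     "tmp",                     # temporary working directory
     "--min-seq-id", "0.3"],    # 30% sequence identity threshold
    check=True,
)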

15 pages, 2418 KB  
Article
Investigating the Impact of Two Major Programming Environments on the Accuracy of Deep Learning-Based Glioma Detection from MRI Images
by Vadi Su Yilmaz, Metehan Akdag, Yaser Dalveren, Resat Ozgur Doruk, Ali Kara and Ahmet Soylu
Diagnostics 2023, 13(4), 651; https://doi.org/10.3390/diagnostics13040651 - 9 Feb 2023
Cited by 6 | Viewed by 2994
Abstract
Brain tumors have been the subject of research for many years. Brain tumors are typically classified into two main groups: benign and malignant tumors. The most common tumor type among malignant brain tumors is glioma. Different imaging technologies can be used in the diagnosis of glioma. Among these techniques, MRI is the most preferred imaging technology due to its high-resolution image data. However, detecting gliomas in a huge set of MRI data can be challenging for practitioners. To address this concern, many Deep Learning (DL) models based on Convolutional Neural Networks (CNNs) have been proposed for detecting glioma. However, how efficiently a given CNN architecture works under various conditions, including the development environment and programming aspects, as well as its performance, has not been studied so far. The purpose of this work, therefore, is to investigate the impact of two major programming environments (namely, MATLAB and Python) on the accuracy of CNN-based glioma detection from Magnetic Resonance Imaging (MRI) images. To this end, experiments on the Brain Tumor Segmentation (BraTS) dataset (2016 and 2017), consisting of multiparametric MRI images, are performed by implementing two popular CNN architectures, the three-dimensional (3D) U-Net and the V-Net, in both programming environments. From the results, it is concluded that the use of Python with Google Colaboratory (Colab) might be highly useful in the implementation of CNN-based models for glioma detection. Moreover, the 3D U-Net model is found to perform better, attaining high accuracy on the dataset. The authors believe that the results achieved in this study will provide useful information to the research community for the appropriate implementation of DL approaches for brain tumor detection. Full article

18 pages, 3493 KB  
Article
L-RAPiT: A Cloud-Based Computing Pipeline for the Analysis of Long-Read RNA Sequencing Data
by Theodore M. Nelson, Sankar Ghosh and Thomas S. Postler
Int. J. Mol. Sci. 2022, 23(24), 15851; https://doi.org/10.3390/ijms232415851 - 13 Dec 2022
Cited by 2 | Viewed by 4507
Abstract
Long-read sequencing (LRS) has been adopted to meet a wide variety of research needs, ranging from the construction of novel transcriptome annotations to the rapid identification of emerging virus variants. Amongst other advantages, LRS preserves more information about RNA at the transcript level than conventional high-throughput sequencing, including far more accurate and quantitative records of splicing patterns. New studies with LRS datasets are being published at an exponential rate, generating a vast reservoir of information that can be leveraged to address a host of different research questions. However, mining such publicly available data in a tailored fashion is currently not easy, as the available software tools typically require familiarity with the command-line interface, which constitutes a significant obstacle to many researchers. Additionally, different research groups utilize different software packages to perform LRS analysis, which often prevents a direct comparison of published results across different studies. To address these challenges, we have developed the Long-Read Analysis Pipeline for Transcriptomics (L-RAPiT), a user-friendly, free pipeline requiring no dedicated computational resources or bioinformatics expertise. L-RAPiT can be implemented directly through Google Colaboratory, a system based on the open-source Jupyter notebook environment, and allows for the direct analysis of transcriptomic reads from Oxford Nanopore and PacBio LRS machines. This new pipeline enables the rapid, convenient, and standardized analysis of publicly available or newly generated LRS datasets. Full article
(This article belongs to the Collection Feature Papers in “Molecular Biology”)

30 pages, 13794 KB  
Article
Visual Saliency and Image Reconstruction from EEG Signals via an Effective Geometric Deep Network-Based Generative Adversarial Network
by Nastaran Khaleghi, Tohid Yousefi Rezaii, Soosan Beheshti, Saeed Meshgini, Sobhan Sheykhivand and Sebelan Danishvar
Electronics 2022, 11(21), 3637; https://doi.org/10.3390/electronics11213637 - 7 Nov 2022
Cited by 21 | Viewed by 7076
Abstract
Understanding how the brain perceives input data from the outside world is one of the great targets of neuroscience. Neural decoding helps us to model the connection between brain activity and visual stimulation. The reconstruction of images from brain activity can be achieved through this modelling. Recent studies have shown that brain activity is influenced by visual saliency, the important parts of an image stimulus. In this paper, a deep model is proposed to reconstruct the image stimuli from electroencephalogram (EEG) recordings via visual saliency. To this end, the proposed geometric deep network-based generative adversarial network (GDN-GAN) is trained to map the EEG signals to the visual saliency maps corresponding to each image. The first part of the proposed GDN-GAN consists of Chebyshev graph convolutional layers. The input of the GDN part of the proposed network is the functional connectivity-based graph representation of the EEG channels. The output of the GDN is fed to the GAN part of the proposed network to reconstruct the image saliency. The proposed GDN-GAN is trained using the Google Colaboratory Pro platform. The saliency metrics validate the viability and efficiency of the proposed saliency reconstruction network. The weights of the trained network are used as initial weights to reconstruct the grayscale image stimuli. The proposed network thus realizes image reconstruction from EEG signals. Full article
(This article belongs to the Section Bioelectronics)
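The Chebyshev graph-convolutional front end described above can be prototyped with PyTorch Geometric's ChebConv layer, where the graph encodes functional connectivity over the EEG channels. The sketch below is a generic illustration of that idea, not the authors' GDN-GAN; the channel feature size, layer widths, and class name are assumptions.

# Sketch of a Chebyshev graph-convolutional encoder over EEG channels (PyTorch Geometric).
# Feature size, hidden widths, and polynomial order are assumptions.
import torch
from torch_geometric.nn import ChebConv

class EEGGraphEncoder(torch.nn.Module):
    def __init__(self, in_feats=128, hidden=64, out_feats=32, K=3):
        super().__init__()
        self.conv1 = ChebConv(in_feats, hidden, K=K)   # K-th order Chebyshev polynomial filter
        self.conv2 = ChebConv(hidden, out_feats, K=K)

    def forward(self, x, edge_index, edge_weight=None):
        # x: (num_channels, in_feats) node features; edge_index/edge_weight: connectivity graph
        h = torch.relu(self.conv1(x, edge_index, edge_weight))
        return self.conv2(h, edge_index, edge_weight)

# The encoder output would then feed a GAN generator that produces the saliency map.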

18 pages, 8070 KB  
Article
Dynamics of the Burlan and Pomacochas Lakes Using SAR Data in GEE, Machine Learning Classifiers, and Regression Methods
by Darwin Gómez Fernández, Rolando Salas López, Nilton B. Rojas Briceño, Jhonsy O. Silva López and Manuel Oliva
ISPRS Int. J. Geo-Inf. 2022, 11(11), 534; https://doi.org/10.3390/ijgi11110534 - 24 Oct 2022
Cited by 13 | Viewed by 3542
Abstract
Amazonas is a mountain region in Peru with high cloud cover, so using optical data in the analysis of surface changes of water bodies (such as the Burlan and Pomacochas lakes in Peru) is difficult; SAR images, on the other hand, are suitable for the extraction of water bodies and the delineation of contours. Therefore, in this research, to determine the surface changes of the Burlan and Pomacochas lakes, we used Sentinel-1 A/B products to analyse their dynamics from 2014 to 2020; in addition, to evaluate the procedure, we performed a photogrammetric flight and compared the shapes and geometric attributes of each lake. To this end, in Google Earth Engine (GEE), we processed 517 SAR images for each lake using the following algorithms: classification and regression tree (CART), Random Forest (RF), and support vector machine (SVM). For 2021-02-10, the results were validated by comparing the area and perimeter values obtained from a photogrammetric flight with those from the classification of a SAR image of the same date. During the first months of the year, there were slight increases in the area and perimeter of each lake, influenced by the increase in rainfall in the area. CART and Random Forest obtained better results for image classification, and for the regression analysis, Support Vector Regression (SVR) and Random Forest Regression (RFR) were a better fit to the data (higher R2) for Burlan and Pomacochas lakes, respectively. The shape of the lakes obtained by classification was similar to that of the photogrammetric flight. For 2021-02-10, for Burlan Lake, the three classifiers gave area values between 42.48 and 43.53 hectares, RFR 44.47 hectares, and RPAS 45.63 hectares. For Pomacochas Lake, the three classifiers gave area values between 414.23 and 434.89 hectares, SVR 411.89 hectares, and RPAS 429.09 hectares. Ultimately, we seek to provide a rapid methodology for classifying SAR images into two categories and thus obtaining the shape of water bodies and analyzing their changes over short periods. A methodological scheme is also provided to perform a regression analysis in GC using five methods that can be replicated in different thematic areas. Full article
(This article belongs to the Special Issue Geo-Information for Watershed Processes)
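In the GEE Python API, the three classifiers mentioned above correspond to ee.Classifier.smileCart(), ee.Classifier.smileRandomForest(), and ee.Classifier.libsvm(). The snippet below sketches how a Sentinel-1 image could be classified into water/non-water with one of them; the asset names, coordinates, and sample collection are placeholders, not the authors' script.

# Sketch: classify a Sentinel-1 SAR image into water / non-water in the GEE Python API.
# Asset IDs, coordinates, bands, and tree count are placeholders.
import ee
ee.Initialize()

image = (ee.ImageCollection("COPERNICUS/S1_GRD")
         .filterDate("2021-02-10", "2021-02-11")
         .filterBounds(ee.Geometry.Point([-78.5, -5.7]))   # hypothetical lake location
         .first()
         .select(["VV", "VH"]))

samples = ee.FeatureCollection("users/example/training_points")  # points with a 'class' property (0/1)
training = image.sampleRegions(collection=samples, properties=["class"], scale=10)

classifier = ee.Classifier.smileRandomForest(50).train(
    features=training, classProperty="class", inputProperties=["VV", "VH"])
# Alternatives tested in the paper: ee.Classifier.smileCart(), ee.Classifier.libsvm()
classified = image.classify(classifier)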

14 pages, 4164 KB  
Technical Note
A Land Cover Background-Adaptive Framework for Large-Scale Road Extraction
by Yu Li, Hao Liang, Guangmin Sun, Zifeng Yuan, Yuanzhi Zhang and Hongsheng Zhang
Remote Sens. 2022, 14(20), 5114; https://doi.org/10.3390/rs14205114 - 13 Oct 2022
Cited by 7 | Viewed by 2894
Abstract
Background: Road network data are crucial in various applications, such as emergency response, urban planning, and transportation management. The recent application of deep neural networks has significantly boosted the efficiency and accuracy of road network extraction based on remote sensing data. However, most existing methods for road extraction were designed at local or regional scales. Automatic extraction of large-scale road datasets from satellite images remains challenging due to the complex background around the roads, especially the complicated land cover types. To tackle this issue, this paper proposes a land cover background-adaptive framework for large-scale road extraction. Method: A large number of sample image blocks (6820) are selected from six different countries across a wide region as the dataset. OpenStreetMap (OSM) is automatically converted to ground-truth road networks, and the Esri 2020 Land Cover Dataset is taken as the background land cover information. A fuzzy C-means clustering algorithm is first applied to cluster the sample images according to the proportion of certain land use types that clearly degrade road extraction performance. Then, a specific model is trained on the images clustered as abundant in that land use type, while a general model is trained on the rest of the images. Finally, the road extraction results obtained by the general and specific models are combined. Results: The dataset selection and algorithm implementation were conducted on the cloud-based geoinformation platform Google Earth Engine (GEE) and Google Colaboratory. Experimental results showed that the proposed framework achieved stronger adaptivity in large-scale road extraction in both visual and statistical analyses. The fuzzy C-means clustering algorithm applied in this study outperformed hard clustering algorithms. Significance: The promising potential of the proposed background-adaptive network was demonstrated in the automatic extraction of large-scale road networks from satellite images as well as in other object detection tasks. This research demonstrates a new paradigm for the study of large-scale remote sensing applications based on deep neural networks. Full article
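The fuzzy C-means step that groups sample blocks by land cover composition can be prototyped with the scikit-fuzzy package. The sketch below clusters blocks on their land cover proportion vectors and is a generic illustration; the cluster count, placeholder data, and feature layout are assumptions.

# Sketch: fuzzy C-means clustering of image blocks by land cover proportions (scikit-fuzzy).
# Number of clusters and the proportion features are assumptions.
import numpy as np
import skfuzzy as fuzz

# proportions: one row per sample block, one column per land cover class (placeholder data)
proportions = np.random.dirichlet(np.ones(5), size=6820)

# skfuzzy expects features on rows and samples on columns, hence the transpose.
cntr, u, u0, d, jm, p, fpc = fuzz.cluster.cmeans(
    proportions.T, c=2, m=2.0, error=1e-5, maxiter=1000, seed=42)

labels = np.argmax(u, axis=0)   # hard assignment per block (e.g., 0: general model, 1: specific model)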

21 pages, 3152 KB  
Article
Malware Detection Using Memory Analysis Data in Big Data Environment
by Murat Dener, Gökçe Ok and Abdullah Orman
Appl. Sci. 2022, 12(17), 8604; https://doi.org/10.3390/app12178604 - 27 Aug 2022
Cited by 89 | Viewed by 13480
Abstract
Malware is a significant threat that has grown with the spread of technology, which makes detecting malware a critical issue. Static and dynamic methods are widely used in malware detection, but traditional static and dynamic detection methods may fall short against advanced malware. Data obtained through memory analysis can provide important insights into the behavior and patterns of malware, because malware leaves various traces in memory. For this reason, memory analysis is one of the approaches that should be studied for malware detection. In this study, the use of memory data for malware detection is proposed. Malware detection was carried out using various deep learning and machine learning approaches in a big data environment with memory data. The study was carried out with PySpark on the Apache Spark big data platform in Google Colaboratory. Experiments were performed on the balanced CIC-MalMem-2022 dataset. Binary classification was performed using the Random Forest, Decision Tree, Gradient Boosted Tree, Logistic Regression, Naive Bayes, Linear Support Vector Machine, Multilayer Perceptron, Deep Feed Forward Neural Network, and Long Short-Term Memory algorithms, and the performances of these algorithms were compared. The results were evaluated using the Accuracy, F1-score, Precision, Recall, and AUC performance metrics. The most successful malware detection was obtained with the Logistic Regression algorithm, with an accuracy of 99.97%, followed by the Gradient Boosted Tree at 99.94%. The Naive Bayes algorithm showed the lowest performance, with an accuracy of 98.41%, while many of the other algorithms also achieved very successful results. According to the results obtained, data from memory analysis are very useful for detecting malware, and deep learning and machine learning approaches trained on memory datasets achieve very successful results in malware detection. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
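The top-performing binary classifier above can be reproduced in outline with Spark's ML pipeline API. The snippet below is a generic PySpark sketch assuming the CIC-MalMem-2022 features are numeric columns and the label column is 0/1; the column names and file path are assumptions, not the authors' exact code.

# Generic PySpark sketch: logistic regression for binary malware detection from memory features.
# Column names and file path are assumptions.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("malmem").getOrCreate()
df = spark.read.csv("CIC-MalMem-2022.csv", header=True, inferSchema=True)  # hypothetical path

feature_cols = [c for c in df.columns if c != "label"]
data = VectorAssembler(inputCols=feature_cols, outputCol="features").transform(df)
train, test = data.randomSplit([0.8, 0.2], seed=42)

model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)
preds = model.transform(test)
auc = BinaryClassificationEvaluator(labelCol="label").evaluate(preds)
print("AUC:", auc)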
