Search Results (10)

Search Parameters:
Authors = Imran Zualkernan; ORCID = 0000-0002-1048-5633

27 pages, 3698 KiB  
Review
A Historical Survey of Advances in Transformer Architectures
by Ali Reza Sajun, Imran Zualkernan and Donthi Sankalpa
Appl. Sci. 2024, 14(10), 4316; https://doi.org/10.3390/app14104316 - 20 May 2024
Cited by 10 | Viewed by 11098
Abstract
In recent times, transformer-based deep learning models have risen to prominence in the field of machine learning for a variety of tasks such as computer vision and text generation. Given this increased interest, a historical look at the development and rapid progression of transformer-based models is needed to understand the rise of this key architecture. This paper presents a survey of key works related to the early development and implementation of transformer models in various domains such as generative deep learning and as backbones of large language models. Previous works are classified based on their historical approaches, followed by key works in the domains of text-based applications, image-based applications, and miscellaneous applications. A quantitative and qualitative analysis of the various approaches is presented. Additionally, recent directions of transformer-related research, such as those in the biomedical and time-series domains, are discussed. Finally, future research opportunities, especially regarding the multi-modality and optimization of the transformer training process, are identified.
(This article belongs to the Special Issue Advances in Neural Networks and Deep Learning)

25 pages, 6941 KiB  
Article
Bat2Web: A Framework for Real-Time Classification of Bat Species Echolocation Signals Using Audio Sensor Data
by Taslim Mahbub, Azadan Bhagwagar, Priyanka Chand, Imran Zualkernan, Jacky Judas and Dana Dghaym
Sensors 2024, 24(9), 2899; https://doi.org/10.3390/s24092899 - 1 May 2024
Cited by 5 | Viewed by 2968
Abstract
Bats play a pivotal role in maintaining ecological balance, and studying their behaviors offers vital insights into environmental health and aids in conservation efforts. Determining the presence of various bat species in an environment is essential for many bat studies. Specialized audio sensors can be used to record bat echolocation calls that can then be used to identify bat species. However, the complexity of bat calls presents a significant challenge, necessitating expert analysis and extensive time for accurate interpretation. Recent advances in neural networks can help identify bat species automatically from their echolocation calls. Such neural networks can be integrated into a complete end-to-end system that leverages recent Internet of Things (IoT) technologies with long-range, low-power communication protocols to implement automated acoustic monitoring. This paper presents the design and implementation of such a system that uses a tiny neural network for interpreting sensor data derived from bat echolocation signals. A highly compact convolutional neural network (CNN) model was developed that demonstrated excellent performance in bat species identification, achieving an F1-score of 0.9578 and an accuracy rate of 97.5%. The neural network was deployed, and its performance was evaluated, on various alternative edge devices, including the NVIDIA Jetson Nano and Google Coral.
(This article belongs to the Section Environmental Sensing)
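Editor's illustration: the abstract does not reproduce the model itself, so here is a minimal Keras sketch of what a "highly compact" spectrogram CNN of this kind can look like. The input shape (64 mel bands × 128 frames) and NUM_SPECIES are assumptions for illustration, not values from the paper.

```python
import tensorflow as tf

NUM_SPECIES = 10  # hypothetical species count, not the paper's value

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 128, 1)),         # mel-spectrogram patch (assumed size)
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),           # keeps the parameter count tiny
    tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Global average pooling in place of large dense layers is a common way to keep such models small enough for microcontroller-class or single-board edge deployment.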

15 pages, 8467 KiB  
Article
A Light-Weight Cropland Mapping Model Using Satellite Imagery
by Maya Haj Hussain, Diaa Addeen Abuhani, Jowaria Khan, Mohamed ElMohandes, Imran Zualkernan and Tarig Ali
Sensors 2023, 23(15), 6729; https://doi.org/10.3390/s23156729 - 27 Jul 2023
Cited by 2 | Viewed by 1862
Abstract
Many applications in agriculture, as well as in related fields including natural resources, environment, health, and sustainability, depend on recent and reliable cropland maps. Cropland extent and intensity are critical input variables for the study of crop production and food security around the world. However, generating such variables manually is difficult, expensive, and time-consuming. In this work, we discuss a cost-effective, fast, and simple machine-learning-based approach to building a reliable cropland mapping model using satellite imagery. The study includes four test regions, namely Iran, Mozambique, Sri Lanka, and Sudan, for which Sentinel-2 satellite imagery was obtained with assigned NDVI scores. The solution presented in this paper covers a complete pipeline including data collection, time-series reconstruction, and cropland extent and crop intensity mapping using machine learning models. The proposed approach achieved high accuracy, ranging between 0.92 and 0.98 across the four test regions.
(This article belongs to the Section Remote Sensors)
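Editor's illustration: a minimal sketch of the kind of pipeline the abstract describes, computing NDVI from Sentinel-2 bands (B8 = NIR, B4 = red) and fitting a lightweight classifier over per-pixel NDVI time series. The arrays, labels, and the random-forest choice are stand-ins, not the paper's actual data or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - Red) / (NIR + Red), e.g. Sentinel-2 bands B8 and B4."""
    return (nir - red) / (nir + red + eps)

# Illustrative stand-ins: per-pixel NDVI time series after gap-filling,
# with binary cropland labels (1 = cropland, 0 = other).
rng = np.random.default_rng(0)
ndvi_series = rng.uniform(-0.1, 0.9, size=(1000, 24))   # 1000 pixels x 24 dates
labels = (ndvi_series.mean(axis=1) > 0.4).astype(int)   # dummy labels

clf = RandomForestClassifier(n_estimators=100).fit(ndvi_series, labels)
```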

36 pages, 1311 KiB  
Review
Machine Learning for Precision Agriculture Using Imagery from Unmanned Aerial Vehicles (UAVs): A Survey
by Imran Zualkernan, Diaa Addeen Abuhani, Maya Haj Hussain, Jowaria Khan and Mohamed ElMohandes
Drones 2023, 7(6), 382; https://doi.org/10.3390/drones7060382 - 6 Jun 2023
Cited by 44 | Viewed by 8834
Abstract
Unmanned aerial vehicles (UAVs) are increasingly being integrated into the domain of precision agriculture, revolutionizing the agricultural landscape. Specifically, UAVs are being used in conjunction with machine learning techniques to solve a variety of complex agricultural problems. This paper provides a careful survey of more than 70 studies that have applied machine learning techniques utilizing UAV imagery to solve agricultural problems. The survey examines the models employed, their applications, and their performance, spanning a wide range of agricultural tasks, including crop classification, crop and weed detection, cropland mapping, and field segmentation. Comparisons are made among supervised, semi-supervised, and unsupervised machine learning approaches, including traditional machine learning classifiers, convolutional neural networks (CNNs), single-stage detectors, two-stage detectors, and transformers. Lastly, future advancements and prospects for UAV utilization in precision agriculture are highlighted and discussed. The general findings of the paper are that, for simple classification problems, traditional machine learning techniques, CNNs, and transformers can all be used, with CNNs being the optimal choice. For segmentation tasks, U-Nets are by far the preferred approach. For detection tasks, two-stage detectors delivered the best performance, while for dataset augmentation and enhancement, generative adversarial networks (GANs) were the most popular choice.

14 pages, 1268 KiB  
Article
Classification of Arabic Poetry Emotions Using Deep Learning
by Sakib Shahriar, Noora Al Roken and Imran Zualkernan
Computers 2023, 12(5), 89; https://doi.org/10.3390/computers12050089 - 22 Apr 2023
Cited by 12 | Viewed by 4301
Abstract
The automatic classification of poems into various categories, such as by author or era, is an interesting problem. However, most current work categorizing Arabic poems into eras or emotions has utilized traditional feature engineering and machine learning approaches. This paper explores deep learning methods to classify Arabic poems into emotional categories. A new labeled poem emotion dataset was developed, containing 9452 poems with emotional labels of joy, sadness, and love. Various deep learning models were trained on this dataset. The results show that traditional deep learning models, such as one-dimensional convolutional neural networks (1D-CNNs), gated recurrent units (GRUs), and long short-term memory (LSTM) networks, achieved F1-scores of 0.62, 0.62, and 0.53, respectively. However, the AraBERT model, an Arabic version of the Bidirectional Encoder Representations from Transformers (BERT) model, performed best, obtaining an accuracy of 76.5% and an F1-score of 0.77, outperforming the previous state-of-the-art in this domain.
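Editor's illustration: a minimal Hugging Face sketch of fine-tuning-style inference with an AraBERT checkpoint for 3-way emotion classification. The checkpoint name is a public AraBERT release; whether it matches the paper's exact variant, and the label-index mapping, are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Public AraBERT checkpoint; the paper's exact variant is an assumption.
name = "aubmindlab/bert-base-arabertv2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)

batch = tokenizer(["يا ليل الصب متى غده"], padding=True,
                  truncation=True, return_tensors="pt")
logits = model(**batch).logits      # shape (1, 3), one logit per emotion
pred = logits.argmax(dim=-1)        # 0=joy, 1=sadness, 2=love (assumed mapping)
```

In practice the classification head would be fine-tuned on the labeled poem dataset before such predictions are meaningful.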

28 pages, 843 KiB  
Review
Survey on Recent Trends in Medical Image Classification Using Semi-Supervised Learning
by Zahra Solatidehkordi and Imran Zualkernan
Appl. Sci. 2022, 12(23), 12094; https://doi.org/10.3390/app122312094 - 25 Nov 2022
Cited by 19 | Viewed by 5653
Abstract
Training machine learning and deep learning models for medical image classification is a challenging task due to a lack of large, high-quality labeled datasets. As the labeling of medical images requires considerable time and effort from medical experts, models need to be specifically designed to train on small amounts of labeled data. Semi-supervised learning (SSL) methods provide one potential solution. SSL methods combine a small labeled dataset with a much larger unlabeled dataset, leveraging the information gained through unsupervised learning to improve the supervised model. This paper provides a comprehensive survey of the latest SSL methods proposed for medical image classification tasks.
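Editor's illustration: the simplest SSL scheme the survey's framing describes is pseudo-labeling, sketched below with scikit-learn. The model choice and confidence threshold are illustrative, not from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label(X_lab, y_lab, X_unlab, threshold=0.95):
    """Train on labeled data, pseudo-label confident unlabeled points, retrain."""
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    proba = model.predict_proba(X_unlab)
    keep = proba.max(axis=1) >= threshold                 # confidence gate
    y_new = model.classes_[proba[keep].argmax(axis=1)]    # pseudo-labels
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, y_new])
    return LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```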

26 pages, 4306 KiB  
Article
Investigating the Performance of FixMatch for COVID-19 Detection in Chest X-rays
by Ali Reza Sajun, Imran Zualkernan and Donthi Sankalpa
Appl. Sci. 2022, 12(9), 4694; https://doi.org/10.3390/app12094694 - 6 May 2022
Cited by 8 | Viewed by 3104
Abstract
The advent of the COVID-19 pandemic has resulted in medical resources being stretched to their limits. Chest X-rays are one method of diagnosing COVID-19; they are used due to their high efficacy. However, detecting COVID-19 manually from these images is time-consuming and expensive. While neural networks can be trained to detect COVID-19, doing so requires large amounts of labeled data, which are expensive to collect and code. One approach is to use semi-supervised neural networks to detect COVID-19 based on a very small number of labeled images. This paper explores how well such an approach could work. The FixMatch algorithm, a state-of-the-art semi-supervised classification algorithm, was trained on chest X-rays to detect COVID-19, viral pneumonia, bacterial pneumonia, and lung opacity. The model was trained with decreasing amounts of labeled data and compared with the best supervised CNN models, using transfer learning. FixMatch was able to achieve a COVID-19 F1-score of 0.94 with only 80 labeled samples per class and an overall macro-average F1-score of 0.68 with only 20 labeled samples per class. Furthermore, an exploratory analysis was conducted to determine the performance of FixMatch when trained with imbalanced data. The results show a predictable drop in performance compared to training with uniform data; however, a statistical analysis suggests that FixMatch may be somewhat robust to data imbalance, as in many cases the same types of mistakes are made when the amount of labeled data is decreased.
(This article belongs to the Topic Artificial Intelligence in Healthcare)
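Editor's illustration: the core of FixMatch is a confidence-gated consistency loss between a weakly and a strongly augmented view of each unlabeled image. A PyTorch sketch follows; `model`, `weak_aug`, and `strong_aug` are assumed to be supplied by the caller, and the threshold value is illustrative.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, weak_aug, strong_aug, x_unlab, tau=0.95):
    """Consistency loss on unlabeled images, gated by confidence tau."""
    with torch.no_grad():
        probs = torch.softmax(model(weak_aug(x_unlab)), dim=-1)
        conf, pseudo = probs.max(dim=-1)     # pseudo-labels from the weak view
        mask = (conf >= tau).float()         # keep only confident samples
    logits_s = model(strong_aug(x_unlab))    # strongly augmented view
    loss = F.cross_entropy(logits_s, pseudo, reduction="none")
    return (loss * mask).mean()
```

The total training loss adds this term to an ordinary supervised cross-entropy on the few labeled X-rays.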

21 pages, 2631 KiB  
Review
Survey on Implementations of Generative Adversarial Networks for Semi-Supervised Learning
by Ali Reza Sajun and Imran Zualkernan
Appl. Sci. 2022, 12(3), 1718; https://doi.org/10.3390/app12031718 - 7 Feb 2022
Cited by 29 | Viewed by 5296
Abstract
Given recent advances in deep learning, semi-supervised techniques have seen a rise in interest. Generative adversarial networks (GANs) represent one recent approach to semi-supervised learning (SSL). This paper presents a survey of methods that use GANs for SSL. Previous work applying GANs to SSL is classified into pseudo-labeling/classification, encoder-based, TripleGAN-based, two-GAN, manifold regularization, and stacked discriminator approaches. A quantitative and qualitative analysis of the various approaches is presented. The R3-CGAN architecture is identified as the GAN architecture with state-of-the-art results. Given the recent success of non-GAN-based approaches for SSL, future research opportunities involving the adaptation of elements of SSL into GAN-based implementations are also identified.
(This article belongs to the Special Issue Generative Models in Artificial Intelligence and Their Applications)
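Editor's illustration: one classic instance of the pseudo-labeling/classification family the survey names is the "K+1-class discriminator" SSL-GAN, where the discriminator classifies labeled images into K real classes and treats generated samples as an extra fake class K. A hedged PyTorch sketch of that discriminator loss; `disc` (outputting K+1 logits) and `gen` are assumed models, and K is illustrative.

```python
import torch
import torch.nn.functional as F

K = 10  # number of real classes (illustrative)

def discriminator_loss(disc, gen, x_lab, y_lab, x_unlab, z):
    # Labeled data: standard cross-entropy over the K real classes.
    sup = F.cross_entropy(disc(x_lab), y_lab)
    # Unlabeled data: should land in *some* real class, i.e. P(fake) low.
    log_p = F.log_softmax(disc(x_unlab), dim=-1)
    unsup_real = -torch.log(1 - log_p[:, K].exp() + 1e-8).mean()
    # Generated samples: should be assigned to the fake class K.
    log_p_fake = F.log_softmax(disc(gen(z)), dim=-1)
    unsup_fake = -log_p_fake[:, K].mean()
    return sup + unsup_real + unsup_fake
```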

24 pages, 37226 KiB  
Article
An IoT System Using Deep Learning to Classify Camera Trap Images on the Edge
by Imran Zualkernan, Salam Dhou, Jacky Judas, Ali Reza Sajun, Brylle Ryan Gomez and Lana Alhaj Hussain
Computers 2022, 11(1), 13; https://doi.org/10.3390/computers11010013 - 13 Jan 2022
Cited by 45 | Viewed by 13497
Abstract
Camera traps deployed in remote locations provide an effective method for ecologists to monitor and study wildlife in a non-invasive way. However, current camera traps suffer from two problems. First, the images are manually classified and counted, which is expensive. Second, due to manual coding, the results are often stale by the time they get to the ecologists. Using the Internet of Things (IoT) combined with deep learning represents a good solution to both of these problems, as the images can be classified automatically and the results made immediately available to ecologists. This paper proposes an IoT architecture that uses deep learning on edge devices to convey animal classification results to a mobile app using the LoRaWAN low-power, wide-area network. The primary goal of the proposed approach is to reduce the cost of the wildlife monitoring process for ecologists and to provide real-time animal sightings data from the camera traps in the field. Camera trap image data consisting of 66,400 images were used to train the InceptionV3, MobileNetV2, ResNet18, EfficientNetB1, DenseNet121, and Xception neural network models. While the performance of the trained models was statistically different (Kruskal–Wallis: accuracy H(5) = 22.34, p < 0.05; F1-score H(5) = 13.82, p = 0.0168), there was only a 3% difference in the F1-score between the worst (MobileNetV2) and the best model (Xception). Moreover, the models made similar errors (Adjusted Rand Index (ARI) > 0.88 and Adjusted Mutual Information (AMI) > 0.82). Subsequently, the best model, Xception (accuracy = 96.1%; F1-score = 0.87; F1-score = 0.97 with oversampling), was optimized and deployed on the Raspberry Pi, Google Coral, and NVIDIA Jetson edge devices using both the TensorFlow Lite and TensorRT frameworks. Optimizing the models to run on edge devices reduced the average macro F1-score to 0.7 and adversely affected the minority classes, reducing their F1-scores to as low as 0.18. Upon stress testing by processing 1000 images consecutively, the Jetson Nano, running a TensorRT model, outperformed the others with a latency of 0.276 s/image (s.d. = 0.002) while consuming an average current of 1665.21 mA. The Raspberry Pi consumed the least average current (838.99 mA) but with a ten-times-worse latency of 2.83 s/image (s.d. = 0.036). The Nano was the only reasonable option as an edge device because it could capture most animals whose maximum speeds were below 80 km/h, including goats, lions, and ostriches. While the proposed architecture is viable, unbalanced data remain a challenge, and the results can potentially be improved by using object detection to reduce imbalances and by exploring semi-supervised learning.
(This article belongs to the Special Issue Survey in Deep Learning for IoT Applications)
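Editor's illustration: a hedged sketch of the TensorFlow Lite leg of the edge-deployment step described above, converting a Keras model and running one inference. The model here is an untrained Xception with an assumed class count, and the file path is illustrative; the TensorRT path used on the Jetson is not shown.

```python
import numpy as np
import tensorflow as tf

# Placeholder model: untrained Xception with an assumed class count.
model = tf.keras.applications.Xception(weights=None, classes=25)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantization
with open("camera_trap.tflite", "wb") as f:
    f.write(converter.convert())

interpreter = tf.lite.Interpreter(model_path="camera_trap.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"],
                       np.zeros(inp["shape"], dtype=np.float32))  # dummy image
interpreter.invoke()
probs = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```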

19 pages, 5353 KiB  
Article
An IoT-Based Services Infrastructure for Utility-Scale Distributed Solar Farms
by Salsabeel Shapsough and Imran Zualkernan
Energies 2022, 15(2), 440; https://doi.org/10.3390/en15020440 - 9 Jan 2022
Cited by 2 | Viewed by 3144
Abstract
The Internet of Things (IoT) provides large-scale solutions for efficient resource monitoring and management. As such, the technology has been heavily integrated into domains such as manufacturing, healthcare, agriculture, and utilities, leading to the emergence of sustainable smart cities. The success of smart cities depends on the availability of data, as well as on the quality of the data management infrastructure. IoT has introduced numerous new software, hardware, and networking technologies designed for efficient and low-cost data transport, storage, and processing. However, proper selection and integration of the correct technologies is crucial to ensuring a positive return on investment for such systems. This paper presents a novel end-to-end infrastructure for solar energy analysis and prediction via edge-based analytics.
(This article belongs to the Section B: Energy and Environment)
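Editor's illustration: the abstract does not name the transport protocols used, so the sketch below is only one plausible shape for such an edge node, aggregating panel readings and publishing a summary over MQTT. The broker address, topic, payload fields, and data are all assumptions.

```python
import json
import statistics
import paho.mqtt.client as mqtt  # paho-mqtt 1.x style client API

readings = [412.5, 409.8, 415.2]                 # panel power samples, W (dummy)
summary = {"panel_id": "A-01",                   # hypothetical identifiers
           "mean_w": statistics.mean(readings),
           "max_w": max(readings)}

client = mqtt.Client()
client.connect("broker.example.com", 1883)       # hypothetical broker
client.publish("solarfarm/site1/summary", json.dumps(summary))
client.disconnect()
```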