Search Results (3)

Search Parameters:
Authors = Chris Yakopcic

20 pages, 3504 KiB  
Article
Memristor-Based Neuromorphic System for Unsupervised Online Learning and Network Anomaly Detection on Edge Devices
by Md Shahanur Alam, Chris Yakopcic, Raqibul Hasan and Tarek M. Taha
Information 2025, 16(3), 222; https://doi.org/10.3390/info16030222 - 13 Mar 2025
Viewed by 1015
Abstract
An ultralow-power, high-performance online-learning and anomaly-detection system has been developed for edge security applications. Designed to support personalized learning without relying on cloud data processing, the system employs sample-wise learning, eliminating the need for storing entire datasets for training. Built using memristor-based analog neuromorphic and in-memory computing techniques, the system integrates two unsupervised autoencoder neural networks: one utilizing optimized crossbar weights and the other performing real-time learning to detect novel intrusions. Threshold optimization and anomaly detection are achieved through a fully analog Euclidean Distance (ED) computation circuit, eliminating the need for floating-point processing units. The system demonstrates 87% anomaly-detection accuracy; achieves a performance of 16.1 GOPS (774× faster than the ASUS Tinker Board edge processor); and delivers an energy efficiency of 783 GOPS/W, consuming only 20.5 mW during anomaly detection.
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
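The detection scheme this abstract describes (an autoencoder reconstructs each input, and the Euclidean distance between the input and its reconstruction is compared against a tuned threshold) can be sketched in ordinary floating-point Python. This is only an illustrative software analogue of the idea, not the paper's analog memristor-crossbar implementation; the network size, learning rate, synthetic data, and 99th-percentile threshold rule are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyAutoencoder:
    """Linear autoencoder trained one sample at a time (sample-wise SGD),
    mirroring the idea of learning without storing a dataset."""

    def __init__(self, n_in, n_hidden, lr=0.01):
        self.W = rng.normal(0.0, 0.1, (n_hidden, n_in))  # encoder weights
        self.V = rng.normal(0.0, 0.1, (n_in, n_hidden))  # decoder weights
        self.lr = lr

    def reconstruct(self, x):
        return self.V @ (self.W @ x)

    def train_step(self, x):
        h = self.W @ x
        e = self.V @ h - x                  # reconstruction error vector
        gV = np.outer(e, h)                 # d(0.5*||e||^2)/dV
        gW = np.outer(self.V.T @ e, x)      # d(0.5*||e||^2)/dW
        self.V -= self.lr * gV
        self.W -= self.lr * gW
        return np.linalg.norm(e)            # Euclidean-distance score

ae = TinyAutoencoder(n_in=16, n_hidden=4)

# Online training on "normal" samples, one at a time.
normal = rng.normal(0.0, 1.0, (2000, 16))
for x in normal:
    ae.train_step(x)

# Threshold set from the score distribution on normal data.
scores = [np.linalg.norm(ae.reconstruct(x) - x) for x in normal[:500]]
threshold = np.percentile(scores, 99)

# An out-of-distribution sample scores well above the threshold.
anomaly = rng.normal(5.0, 1.0, 16)
score = np.linalg.norm(ae.reconstruct(anomaly) - anomaly)
print(score > threshold)
```

In the paper both the distance computation and the thresholding are performed by a fully analog circuit; here they are plain NumPy arithmetic standing in for that hardware.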

44 pages, 8494 KiB  
Review
Survey of Deep Learning Accelerators for Edge and Emerging Computing
by Shahanur Alam, Chris Yakopcic, Qing Wu, Mark Barnell, Simon Khan and Tarek M. Taha
Electronics 2024, 13(15), 2988; https://doi.org/10.3390/electronics13152988 - 29 Jul 2024
Cited by 11 | Viewed by 11914
Abstract
The unprecedented progress in artificial intelligence (AI), particularly in deep learning algorithms running on ubiquitous internet-connected smart devices, has created a high demand for AI computing on edge devices. This review examines commercially available edge processors as well as processors still in industrial research stages. We categorized state-of-the-art edge processors by their underlying architecture: dataflow, neuromorphic, and processing-in-memory (PIM). The processors are analyzed in terms of performance, chip area, energy efficiency, and application domains. The supported programming frameworks, model compression, data precision, and CMOS fabrication process technology are also discussed. Currently, most commercial edge processors utilize dataflow architectures; however, emerging non-von Neumann computing architectures have attracted industry attention in recent years. Neuromorphic processors are highly efficient, performing computation with fewer synaptic operations, and several offer online training for secure and personalized AI applications. This review found that PIM processors show significant energy efficiency and consume less power than dataflow and neuromorphic processors. A future direction for the industry could be to implement state-of-the-art deep learning algorithms in emerging non-von Neumann computing paradigms for low-power computing on edge devices.
(This article belongs to the Special Issue AI for Edge Computing)
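The survey's headline comparison metric is energy efficiency in GOPS per watt. As a sanity check on how the figures in these listings relate, the first result's numbers (16.1 GOPS at 20.5 mW) give 16.1 / 0.0205 ≈ 785 GOPS/W, consistent with the reported 783 GOPS/W. A minimal sketch of the metric with hypothetical processor entries follows; the names and numbers below are illustrative, not values taken from the survey.

```python
def gops_per_watt(gops: float, power_w: float) -> float:
    """Energy-efficiency metric used to compare edge processors."""
    return gops / power_w

# Hypothetical entries: (throughput in GOPS, power in watts).
processors = {
    "dataflow_chip":     (4000.0, 10.0),
    "neuromorphic_chip": (30.0, 0.1),
    "pim_chip":          (25.0, 0.02),
}

ranked = sorted(processors,
                key=lambda name: gops_per_watt(*processors[name]),
                reverse=True)
print(ranked[0])  # the PIM entry leads on efficiency in this toy data
```

Note how the ranking by GOPS/W differs from a ranking by raw throughput: the dataflow entry is fastest in absolute terms, but the low-power PIM entry wins on efficiency, which is the survey's observed trend.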

66 pages, 13905 KiB  
Review
A State-of-the-Art Survey on Deep Learning Theory and Architectures
by Md Zahangir Alom, Tarek M. Taha, Chris Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamima Nasrin, Mahmudul Hasan, Brian C. Van Essen, Abdul A. S. Awwal and Vijayan K. Asari
Electronics 2019, 8(3), 292; https://doi.org/10.3390/electronics8030292 - 5 Mar 2019
Cited by 1326 | Viewed by 98878
Abstract
In recent years, deep learning has garnered tremendous success in a variety of application domains. This new field of machine learning has been growing rapidly and has been applied to most traditional application domains, as well as some new areas that present more opportunities. Different methods have been proposed based on different categories of learning, including supervised, semi-supervised, and unsupervised learning. Experimental results show state-of-the-art performance using deep learning compared to traditional machine learning approaches in image processing, computer vision, speech recognition, machine translation, art, medical imaging, medical information processing, robotics and control, bioinformatics, natural language processing, cybersecurity, and many other fields. This paper presents a brief survey of the advances that have occurred in Deep Learning (DL), starting with the Deep Neural Network (DNN). The survey goes on to cover the Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN) including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), the Auto-Encoder (AE), the Deep Belief Network (DBN), the Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL). Additionally, we discuss recent developments, such as advanced variants of these DL techniques. This work considers most of the papers published after 2012, when the recent history of deep learning began. Furthermore, DL approaches that have been explored and evaluated in different application domains are also included in this survey, along with recently developed frameworks, SDKs, and benchmark datasets used for implementing and evaluating deep learning approaches. Surveys have previously been published on DL using neural networks and on Reinforcement Learning (RL); however, those papers did not discuss individual advanced techniques for training large-scale deep learning models or recently developed generative models.
(This article belongs to the Section Computer Science & Engineering)
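Among the architectures this survey enumerates, the convolution at the heart of CNNs is compact enough to sketch directly. Below is a minimal valid-mode 2-D convolution in NumPy (implemented as cross-correlation, the convention deep learning frameworks use), applied to a toy image with an illustrative vertical-edge kernel; the image and kernel are assumptions for the sketch.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: slide the kernel over the image
    and take the elementwise-product sum at each position."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Vertical-edge kernel: responds where brightness changes left to right.
edge = np.array([[1.0, 0.0, -1.0]] * 3)

img = np.zeros((5, 5))
img[:, :2] = 1.0            # bright left half, dark right half
out = conv2d(img, edge)
print(out.shape)            # (3, 3): valid-mode output size
```

The output is largest where the kernel straddles the bright-to-dark boundary, which is exactly the feature-detection behavior a trained CNN layer generalizes.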
