10th Anniversary of Electronics—Hot Topic in Artificial Intelligence Circuits and Systems

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence Circuits and Systems (AICAS)".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 12936

Special Issue Editors


Guest Editor: Prof. Dr. Lucas Lamata
Departamento de Física Atómica, Molecular y Nuclear, Universidad de Sevilla, 41080 Sevilla, Spain
Interests: quantum optics; quantum information; theoretical physics; quantum simulations; trapped ion physics; superconducting circuits; entanglement classification; entanglement generation; quantum biomimetics; artificial intelligence; machine learning; embedding quantum simulators; penning traps; quantum photonics

Guest Editor: Prof. Dr. Dah-Jye Lee
Department of Electrical and Computer Engineering, Brigham Young University, 450 Engineering Building, Provo, UT 84602-4099, USA
Interests: artificial intelligence; high-performance visual computing; robotic vision; real-time visual inspection automation

Special Issue Information

Dear Colleagues,

The first paper in Electronics was published in 2011, and the journal is now celebrating its 10th anniversary. On this occasion, a Special Issue is being prepared, inviting members of the Editorial Board and renowned authors. The aim is to celebrate this important anniversary of the journal with a publication fully dedicated to the area of “Artificial Intelligence Circuits and Systems”. Past editors and authors will be invited to submit high-quality papers to the Special Issue.

Topics of interest include, but are not limited to:

  • Circuits and systems for AI
  • Deep learning/machine learning/AI algorithms
  • Tools/platforms for AI
  • Architecture for AI computing
  • Edge and cloud AI computing platforms
  • Hardware accelerators
  • Neuromorphic processors
  • Hardware/software co-design and design automation for AI systems
  • Advanced neural network design
  • Emerging applications of AI: deep learning for the Internet of Things
  • Emerging applications of AI: medical AI
  • Emerging applications of AI: autonomous vehicles
  • Emerging applications of AI: smart factories and environments

Prof. D.Sc. Valeri Mladenov
Prof. Dr. Lucas Lamata
Prof. Dr. Dah-Jye Lee
Prof. Dr. Nikolay Hinov
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

22 pages, 2589 KiB  
Article
AI Models for Predicting Readmission of Pneumonia Patients within 30 Days after Discharge
by Jiin-Chyr Hsu, Fu-Hsing Wu, Hsuan-Hung Lin, Dah-Jye Lee, Yung-Fu Chen and Chih-Sheng Lin
Electronics 2022, 11(5), 673; https://doi.org/10.3390/electronics11050673 - 22 Feb 2022
Cited by 3 | Viewed by 2575
Abstract
A model capable of precisely predicting readmission is a target being pursued worldwide. The objective of this study is to design predictive models using artificial intelligence methods and data retrieved from the National Health Insurance Research Database of Taiwan for identifying high-risk pneumonia patients with 30-day all-cause readmissions. An integrated genetic algorithm (GA) and support vector machine (SVM) scheme, termed IGS, was used to design predictive models optimized with three objective functions. In IGS, the GA was used for selecting salient features and optimal SVM parameters, while the SVM was used for constructing the models. For comparison, logistic regression (LR) and a deep neural network (DNN) were also applied for model construction. The IGS model with AUC used as the objective function achieved an accuracy, sensitivity, specificity, and area under the ROC curve (AUC) of 70.11%, 73.46%, 69.26%, and 0.7758, respectively, outperforming the models designed with LR (65.77%, 78.44%, 62.54%, and 0.7689, respectively) and DNN (61.50%, 79.34%, 56.95%, and 0.7547, respectively), as well as previously reported models constructed using electronic health record data with an AUC of 0.71–0.74. It can be used for automatically detecting pneumonia patients at risk of all-cause readmission within 30 days after discharge so that suitable interventions can be administered to reduce readmissions and healthcare costs.
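As a rough illustration of the GA-plus-SVM approach the abstract describes, the sketch below uses a genetic algorithm to jointly select a feature subset and an SVM regularization parameter, scoring each candidate by cross-validated AUC. This is not the authors' IGS implementation; the synthetic dataset, the GA operators, and the `C_CHOICES` grid are placeholder assumptions for illustration only.

```python
# Minimal GA + SVM sketch (illustrative, not the paper's IGS implementation).
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = random.Random(0)
X, y = make_classification(n_samples=200, n_features=12, random_state=0)

N_FEATURES = X.shape[1]
C_CHOICES = [0.1, 1.0, 10.0]  # hypothetical search grid for the SVM's C

def random_individual():
    # Chromosome: one bit per feature + an index into the C grid.
    return ([rng.randint(0, 1) for _ in range(N_FEATURES)],
            rng.randrange(len(C_CHOICES)))

def fitness(ind):
    mask, c_idx = ind
    cols = [i for i, bit in enumerate(mask) if bit]
    if not cols:
        return 0.0
    svm = SVC(C=C_CHOICES[c_idx], kernel="rbf")
    # Objective function: mean cross-validated AUC, as in the abstract.
    return cross_val_score(svm, X[:, cols], y, cv=3, scoring="roc_auc").mean()

def evolve(pop, n_gen=5):
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]           # truncation selection
        children = []
        for _ in range(len(pop) - len(parents)):
            (m1, c1), (m2, c2) = rng.sample(parents, 2)
            cut = rng.randrange(1, N_FEATURES)   # one-point crossover
            mask = m1[:cut] + m2[cut:]
            if rng.random() < 0.2:               # bit-flip mutation
                i = rng.randrange(N_FEATURES)
                mask[i] ^= 1
            children.append((mask, rng.choice([c1, c2])))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve([random_individual() for _ in range(10)])
print("best cross-validated AUC:", round(fitness(best), 3))
```

Using AUC rather than accuracy as the fitness function, as the abstract reports, makes the search robust to the class imbalance typical of readmission data.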

12 pages, 1681 KiB  
Article
Automatic RTL Generation Tool of FPGAs for DNNs
by Seojin Jang, Wei Liu, Sangun Park and Yongbeom Cho
Electronics 2022, 11(3), 402; https://doi.org/10.3390/electronics11030402 - 28 Jan 2022
Cited by 2 | Viewed by 3383
Abstract
With the increasing use of multi-purpose artificial intelligence of things (AIoT) devices, embedded field-programmable gate arrays (FPGAs) represent excellent platforms for deep neural network (DNN) acceleration on edge devices. FPGAs possess the advantages of low latency and high energy efficiency, but the scarcity of FPGA development resources challenges the deployment of DNN-based edge devices. Register-transfer level programming, hardware verification, and precise resource allocation are needed to build a high-performance FPGA accelerator for DNNs. These tasks are challenging and time consuming even for experienced hardware developers. Therefore, we propose an automated, collaborative design process employing an automatic design space exploration tool; an automatic DNN engine enables the tool to reshape and parse a DNN model from software to hardware. We also introduce a long short-term memory (LSTM)-based model to predict performance and automatically generate a DNN model that suits the developer's requirements. We demonstrate our design scheme with three FPGAs: a Xilinx ZCU104, a Xilinx ZCU102, and an Intel Cyclone V SoC (system on chip). The results show that our hardware-based edge accelerator exhibits superior throughput compared with the most advanced edge graphics processing unit.

14 pages, 2100 KiB  
Article
Deep Q-Learning-Based Neural Network with Privacy Preservation Method for Secure Data Transmission in Internet of Things (IoT) Healthcare Application
by Nirmala Devi Kathamuthu, Annadurai Chinnamuthu, Nelson Iruthayanathan, Manikandan Ramachandran and Amir H. Gandomi
Electronics 2022, 11(1), 157; https://doi.org/10.3390/electronics11010157 - 4 Jan 2022
Cited by 27 | Viewed by 3423
Abstract
The healthcare industry is being transformed by the Internet of Things (IoT), as it provides wide connectivity among physicians, medical devices, clinical and nursing staff, and patients to simplify the task of real-time monitoring. As the network is vast and heterogeneous, opportunities and challenges are presented in gathering and sharing information. Because they handle patient information such as health status, the medical devices used by such patients must be protected to ensure safety and privacy. Healthcare information is shared confidentially among experts for analysis and for providing timely treatment to patients. Cryptographic and biometric systems are widely used, including deep-learning (DL) techniques, to authenticate users, detect anomalies, and provide security for medical systems. As sensors in the network are energy-restricted devices, security and efficiency must be balanced, which is the most important consideration when deploying a security system based on deep-learning approaches. Hence, in this work, an innovative framework, the deep Q-learning-based neural network with privacy preservation method (DQ-NNPP), was designed to protect data transmission from external threats with less encryption and decryption time. This method is used to process patient data, which reduces network traffic as well as the cost and error of communication. Comparatively, the proposed model outperformed some standard approaches, such as the secure and anonymous biometric-based user authentication scheme (SAB-UAS), MSCryptoNet, and privacy-preserving disease prediction (PPDP). Specifically, the proposed method achieved an accuracy of 93.74%, a sensitivity of 92%, a specificity of 92.1%, a communication overhead of 67.08%, an encryption time of 58.72 ms, and a decryption time of 62.72 ms.

16 pages, 2030 KiB  
Article
Efficient Binarized Convolutional Layers for Visual Inspection Applications on Resource-Limited FPGAs and ASICs
by Taylor Simons and Dah-Jye Lee
Electronics 2021, 10(13), 1511; https://doi.org/10.3390/electronics10131511 - 23 Jun 2021
Cited by 3 | Viewed by 1657
Abstract
There has been a recent surge in publications related to binarized neural networks (BNNs), which use binary values to represent both the weights and activations in deep neural networks (DNNs). Due to the bitwise nature of BNNs, there have been many efforts to implement BNNs on ASICs and FPGAs. While BNNs are excellent candidates for these kinds of resource-limited systems, most implementations still require very large FPGAs or CPU-FPGA co-processing systems. Our work focuses on reducing the computational cost of BNNs even further, making them more efficient to implement on FPGAs. We target embedded visual inspection tasks, such as quality-inspection sorting of manufactured parts and agricultural produce. We propose a new binarized convolutional layer, called the neural jet features layer, that learns well-known classic computer vision kernels that are efficient to calculate as a group. We show that on visual inspection tasks, neural jet features perform comparably to standard BNN convolutional layers while using fewer computational resources. We also show that neural jet features tend to be more stable than BNN convolutional layers when training small models.
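The efficiency the abstract relies on comes from the defining trick of binarized networks: with weights and activations restricted to {-1, +1}, a convolution's multiply-accumulate collapses to XNOR plus popcount. The sketch below illustrates that generic standard BNN convolution (the baseline the neural jet features layer is compared against), not the authors' proposed layer; the example image and kernel are arbitrary.

```python
# Minimal sketch of a standard binarized 2-D convolution via XNOR + popcount.
# Generic illustration of the BNN baseline, not the paper's neural jet features.
import numpy as np

def binarize(x):
    # Map real values to {-1, +1} with the sign function.
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_conv2d(image, kernel):
    """'Valid' 2-D convolution where, because inputs and weights are both
    in {-1, +1}, each multiply-accumulate reduces to XNOR + popcount."""
    img, ker = binarize(image), binarize(kernel)
    kh, kw = ker.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow), dtype=np.int32)
    for i in range(oh):
        for j in range(ow):
            patch = img[i:i + kh, j:j + kw]
            # For {-1, +1} values, the elementwise product is +1 exactly
            # where the sign bits match (an XNOR); the dot product is then
            # recovered from a popcount of the matching positions:
            matches = np.count_nonzero(patch == ker)   # popcount
            out[i, j] = 2 * matches - kh * kw          # sum of products
    return out

img = np.array([[ 1., -2.,  3.],
                [-4.,  5., -6.],
                [ 7., -8.,  9.]])
ker = np.array([[ 1., -1.],
                [-1.,  1.]])
print(binary_conv2d(img, ker))
```

On hardware, the inner loop becomes a single wide XNOR gate array feeding a popcount tree, which is why BNN layers map so cheaply to FPGAs and ASICs.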
