Special Issue "Recent Advances on Circuits and Systems for Artificial Intelligence"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence Circuits and Systems (AICAS)".

Deadline for manuscript submissions: closed (31 December 2021).

Special Issue Editors

Prof. Dr. Esteban Tlelo-Cuautle
Guest Editor
Department of Electronics, INAOE, Tonantzintla, Puebla 72840, Mexico
Interests: integrated circuits; optimization by metaheuristics; fractional-order chaotic systems; security in the Internet of Things; analog/RF and mixed-signal design automation tools
Dr. Walter Leon-Salas
Guest Editor
School of Engineering Technology, Purdue University, West Lafayette, IN, USA
Interests: mixed-signal integrated circuits; embedded systems; wireless sensors; energy harvesting; optical communications and image sensors

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) faces challenges in the design and application of circuits and systems that introduce and enhance multidisciplinary applications in the real world. From the introduction of new electronic devices designed with integrated-circuit technology to the development of systems for applications in the Internet of Things (IoT) and AI, researchers focus their efforts on improving data analysis to make decisions without human intervention. Accordingly, this Special Issue collects recent advances in devices, circuits, and systems that improve applications in the areas of AI and the AI of Things (AIoT). This new technological revolution, affecting not only circuits and systems but also industrial applications of AIoT, will establish directions for identifying emerging lines of future research.

This Special Issue is devoted to summarizing the recent developments and evolution of AI and AIoT while emphasizing the design and application of circuits and systems technologies.

Potential topics include but are not limited to the following:

  • Devices, circuits, and systems in the new era of AI;
  • Analog/digital devices, circuits, and systems for AI;
  • Modeling, simulation, optimization, and design automation tools for AI;
  • Embedded/hybrid hardware and computing for AI;
  • Speech/video signal processing circuits and systems for AI;
  • AI circuits and systems for security and cryptography applications;
  • AI circuits and systems for biomedical, autonomous, and human–machine systems;
  • Emerging applications of AI.

Prof. Dr. Esteban Tlelo-Cuautle
Dr. Walter Leon-Salas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

Article
A Multi-Valued Simplified Halpern–Shoham–Moszkowski Logic for Gradable Verifiability in Reasoning about Digital Circuits
Electronics 2021, 10(15), 1817; https://doi.org/10.3390/electronics10151817 - 29 Jul 2021
Abstract
In 1983, B. Moszkowski introduced the first interval-interpreted temporal logic system, the so-called Interval Temporal Logic (ITL), as a system suitable for expressing mutual relations inside intervals when reasoning about digital circuits. In 1991, Halpern and Shoham proposed a new temporal system (HS) to describe external relations between intervals. This paper proposes a basis-type combination of HS and a simplified ITL and extends it towards a multi-valued system, one also capable of rendering a gradable justification of agents in similar contexts of reasoning about digital circuits. This newly introduced system is semantically interpreted in the so-called fibred semantics.
(This article belongs to the Special Issue Recent Advances on Circuits and Systems for Artificial Intelligence)
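To give a flavour of the distinction the abstract draws, the sketch below contrasts HS-style external interval relations with a gradable (multi-valued) variant. It is an illustrative Python toy under our own naming, not the paper's formal system or its fibred semantics.

```python
# Intervals are (start, end) pairs. HS-style relations describe how two
# intervals relate externally; a multi-valued variant returns a degree
# in [0, 1] instead of a Boolean. All names here are illustrative.

def before(a, b):
    """Interval a ends strictly before interval b begins."""
    return a[1] < b[0]

def meets(a, b):
    """Interval a ends exactly where interval b begins."""
    return a[1] == b[0]

def during(a, b):
    """Interval a lies strictly inside interval b."""
    return b[0] < a[0] and a[1] < b[1]

def overlap_degree(a, b):
    """Gradable variant: fraction of a that overlaps b, in [0, 1]."""
    width = a[1] - a[0]
    if width == 0:
        return 0.0
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    return inter / width
```

A crisp relation such as `before` answers yes/no, whereas `overlap_degree` returns a graded truth value, which is the kind of move a multi-valued logic makes.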

Article
Approaching Optimal Nonlinear Dimensionality Reduction by a Spiking Neural Network
Electronics 2021, 10(14), 1679; https://doi.org/10.3390/electronics10141679 - 14 Jul 2021
Cited by 1
Abstract
This work presents a spiking neural network as a means of efficiently solving nonlinear dimensionality reduction of data. The underlying neural model, which can be integrated as neuromorphic hardware, is suitable for intelligent processing in edge computing within Internet of Things systems. To achieve meaningful performance with a low-complexity, one-layer spiking neural network, the training phase uses the metaheuristic Artificial Bee Colony algorithm with an objective function drawn from machine learning, namely the modified Stochastic Neighbor Embedding algorithm. To demonstrate this, complex benchmark data were used and the results were compared with those generated by a reference network with continuous-sigmoid neurons. The goal of this work is to demonstrate, via numerical experiments, another method for training spiking neural networks in which the optimizer comes from metaheuristics. The key issue is therefore defining the objective function, which must optimally relate the information at both sides of the spiking neural network. Machine learning techniques have advanced in defining efficient loss functions that can become suitable objective-function candidates in the metaheuristic training phase. The practicality of these ideas is shown in this article. We use MSE values, as well as co-ranking matrices, to evaluate the relative quality of the results.
(This article belongs to the Special Issue Recent Advances on Circuits and Systems for Artificial Intelligence)
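The training loop the abstract describes can be sketched as an Artificial Bee Colony (ABC) search over network weights. The snippet below shows only the ABC skeleton (employed-bee and scout phases; the onlooker phase is omitted for brevity) against a stand-in MSE objective; the spiking network, the data, and the paper's modified SNE objective are replaced by placeholders.

```python
import random

random.seed(1)
DIM, FOODS, LIMIT, ITERS = 4, 10, 8, 150
TARGET = [0.1, -0.3, 0.7, 0.2]           # stand-in for "ideal" weights

def objective(w):
    # Placeholder loss (MSE to TARGET); the paper uses an SNE-style objective.
    return sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET)) / DIM

def neighbour(w, other):
    # Perturb one coordinate relative to a randomly chosen peer solution.
    k = random.randrange(DIM)
    v = list(w)
    v[k] += random.uniform(-1.0, 1.0) * (w[k] - other[k])
    return v

foods = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(FOODS)]
trials = [0] * FOODS
best = min(foods, key=objective)
initial_best_cost = objective(best)

for _ in range(ITERS):
    for i in range(FOODS):               # employed-bee phase: greedy local moves
        cand = neighbour(foods[i], random.choice(foods))
        if objective(cand) < objective(foods[i]):
            foods[i], trials[i] = cand, 0
            if objective(cand) < objective(best):
                best = cand
        else:
            trials[i] += 1
    for i in range(FOODS):               # scout phase: abandon stale sources
        if trials[i] > LIMIT:
            foods[i] = [random.uniform(-1, 1) for _ in range(DIM)]
            trials[i] = 0
```

Because the search is gradient-free, the only coupling to the network is through `objective`, which is exactly why the abstract stresses that choosing the objective function is the key issue.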

Article
Energy and Performance Trade-Off Optimization in Heterogeneous Computing via Reinforcement Learning
Electronics 2020, 9(11), 1812; https://doi.org/10.3390/electronics9111812 - 02 Nov 2020
Cited by 9
Abstract
This paper suggests an optimisation approach for heterogeneous computing systems that balances energy consumption and performance. The work proposes a power measurement utility for a reinforcement learning (PMU-RL) algorithm to dynamically adjust the resource utilisation of heterogeneous platforms in order to minimise power consumption. A reinforcement learning (RL) technique is applied to analyse and optimise the control-state capabilities of a field-programmable gate array (FPGA), built for a simulation environment with a Xilinx ZYNQ multi-processor system-on-chip (MPSoC) board. In this study, a balanced operation mode for improving power consumption and performance is established to dynamically change the work state of the programmable logic (PL). It is based on an RL algorithm that can quickly discover the optimisation effect of the PL on different workloads to improve energy efficiency. The results demonstrate a substantial reduction of 18% in energy consumption without affecting the application's performance. Thus, the proposed PMU-RL technique has the potential to be applied to other heterogeneous computing platforms.
(This article belongs to the Special Issue Recent Advances on Circuits and Systems for Artificial Intelligence)
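The core PMU-RL idea, learning when to change the PL power state per workload, can be illustrated with a minimal tabular value update. The workload states, actions, and reward model below are invented stand-ins for illustration, not the paper's environment or measured power figures.

```python
import random

random.seed(0)
STATES = ["light", "heavy"]        # observed workload class
ACTIONS = ["pl_off", "pl_on"]      # programmable-logic power state

def reward(state, action):
    # Toy model: PL on costs more energy; turning it off under a heavy
    # workload incurs a large performance penalty.
    energy = 2.0 if action == "pl_on" else 1.0
    perf_penalty = 5.0 if (state == "heavy" and action == "pl_off") else 0.0
    return -(energy + perf_penalty)

Q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}
alpha, epsilon = 0.5, 0.3

for _ in range(2000):
    s = random.choice(STATES)
    # Epsilon-greedy action selection, then a one-step (bandit-style) update.
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(Q[s], key=Q[s].get)
    Q[s][a] += alpha * (reward(s, a) - Q[s][a])

policy = {s: max(Q[s], key=Q[s].get) for s in STATES}
```

The learned policy keeps the PL powered only when the workload warrants it, which is the balance-mode behaviour the abstract describes, here reduced to its simplest form.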

Article
An Artificial Neural Network Approach and a Data Augmentation Algorithm to Systematize the Diagnosis of Deep-Vein Thrombosis by Using Wells’ Criteria
Electronics 2020, 9(11), 1810; https://doi.org/10.3390/electronics9111810 - 02 Nov 2020
Cited by 2
Abstract
The use of a back-propagation artificial neural network (ANN) to systematize the reliability of Deep Vein Thrombosis (DVT) diagnosis by using Wells' criteria is introduced herein. In this paper, a new ANN model is proposed to improve Accuracy when dealing with a highly unbalanced dataset. To create the training dataset, a new data augmentation algorithm is proposed, based on statistical data on the prevalence of DVT among real cases reported in the literature and from a public hospital. This is used to generate a dataset of 10,000 synthetic cases. Each synthetic case has nine risk factors according to Wells' criteria; the use of two additional factors, gender and age, is also proposed. A training scheme was established according to interviews with medical specialists. In addition, a new algorithm is presented to improve Accuracy and Sensitivity/Recall. According to the proposed algorithm, two decision thresholds were found: 0.484, to improve Accuracy, and 0.138, to improve Sensitivity/Recall. The Accuracy achieved is 90.99%, which is greater than that obtained with other related machine learning methods. The proposed ANN model was validated by performing k-fold cross-validation on the dataset of 10,000 synthetic cases. The test was performed using 59 real cases obtained from a regional hospital, achieving an Accuracy of 98.30%.
(This article belongs to the Special Issue Recent Advances on Circuits and Systems for Artificial Intelligence)
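The two-threshold scheme the abstract reports can be illustrated on toy data: a higher threshold on the network's output probability favours Accuracy, while a lower one favours Sensitivity/Recall. The scores, labels, and thresholds below are fabricated for illustration (the paper's reported thresholds are 0.484 and 0.138; ours are 0.5 and 0.1).

```python
# Toy classifier scores (output probabilities) and ground-truth labels,
# deliberately unbalanced like the dataset the abstract describes.
scores = [0.05, 0.2, 0.25, 0.3, 0.45, 0.6, 0.8, 0.9]
labels = [0,    0,   0,    1,   0,    1,   1,   1]

def confusion(scores, labels, thr):
    tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= thr and not y)
    tn = sum(1 for s, y in zip(scores, labels) if s < thr and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < thr and y)
    return tp, fp, tn, fn

def accuracy(scores, labels, thr):
    tp, fp, tn, fn = confusion(scores, labels, thr)
    return (tp + tn) / len(labels)

def recall(scores, labels, thr):
    tp, fp, tn, fn = confusion(scores, labels, thr)
    return tp / (tp + fn) if (tp + fn) else 1.0
```

On this toy set, threshold 0.5 gives the higher Accuracy (0.875) at Recall 0.75, while lowering the threshold to 0.1 drives Recall to 1.0 at the cost of Accuracy (0.625), mirroring the trade-off the paper resolves by reporting both thresholds.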
