Applied AI-Based Platform Technology and Application

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (30 June 2022) | Viewed by 21268

Special Issue Editor

Department of Software and Computer Engineering, Ajou University, Suwon 16499, Republic of Korea
Interests: in-vehicle network security; industrial control system security; digital forensics; anomaly detection algorithm

Special Issue Information

Dear Colleagues,

In the midst of the fourth industrial revolution, our society is undergoing rapid and diverse changes. The contactless environment brought about by COVID-19, which began in 2020, has accelerated the adoption of information and communication technologies across all of society, a trend taking the form of a digital transformation: the integration of digital technologies into every area of daily life and business. In this era, diverse IoT devices are appearing and evolving, most of them built on open platforms that are in turn based on Linux, and the huge volumes of data these devices generate require big data technologies grounded in artificial intelligence. Our main research subject has thus expanded from the Internet of Things to the Internet of Everything, and a human-centered perspective has been added, creating still more diverse devices and services based on the Internet of Behavior.

Technologies applying artificial intelligence are also solving a wide range of problems in society, such as speech recognition, natural language processing, pattern recognition, regression, estimation, and prediction. Open platforms will continue to evolve on Linux and other open operating systems, and the services and content built on them will require AI-based big data processing. Cybersecurity and privacy issues must not be overlooked, of course: as devices and services multiply, the demand for personal information protection will grow, and so will attacks by hackers and malware. When legal disputes arise, incidents must also be investigated and analyzed from a digital forensics point of view.

This Special Issue aims to encourage the most recent and advanced research applications, experiments, results, and developments in platform technology and application, such as big data, smart grid, multimedia, mobility, digital forensics, information security and privacy, the Internet of Things, and artificial intelligence, with a special focus on their practical applications to science, engineering, industry, medicine, robotics, manufacturing, entertainment, optimization, business, and other fields. We kindly invite researchers and practitioners to contribute their high-quality original research or review articles on these topics to this Special Issue.

Topics appropriate for this Special Issue include but are not necessarily limited to:

  • Machine-learning-based platform technology and application;
  • Clustering and classification of industrial data processing;
  • Industrial AI-based applications for platform technology and application;
  • AI security, trust, assurance, and resilience;
  • Digital forensics for platform technology and application;
  • Multimedia and distributed systems for platform technology;
  • Smart home and M2M (machine to machine) network for platform technology;
  • Scientific and big data visualization platform;
  • Business management and intelligence;
  • Security and privacy issues in emerging platforms.

Prof. Dr. Taeshik Shon
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (8 papers)


Research

12 pages, 3520 KiB  
Article
Genetic Algorithm for the Optimization of a Building Power Consumption Prediction Model
by Seungmin Oh, Junchul Yoon, Yoona Choi, Young-Ae Jung and Jinsul Kim
Electronics 2022, 11(21), 3591; https://doi.org/10.3390/electronics11213591 - 03 Nov 2022
Cited by 6 | Viewed by 1964
Abstract
Accurately predicting power consumption is essential to ensure a safe power supply. Various technologies have been studied for this purpose, and deep learning models in particular have predicted power consumption quite successfully. However, using a deep learning model requires finding an appropriate set of hyper-parameters, which introduces problems of complexity and wide search spaces. Power consumption must be predicted accurately across many distributed areas, which calls for customized prediction models whose hyper-parameters are optimized for each environment; typical users of deep learning models, however, lack the knowledge needed to find optimal parameter values. To solve this problem, we propose a method that finds optimal hyper-parameter values for training, optimizing the layer parameters of deep learning models with a genetic algorithm. The proposed hyper-parameter optimization method avoids the time and cost problems of approaches that depend on existing methods or expert experience. As a result, the RNN model achieved 30% and 21% better mean squared error and mean absolute error, respectively, than an arbitrarily configured deep learning model, and the LSTM model achieved 9% and 5% higher performance.
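
Since the abstract describes the approach only at a high level, the following is a minimal Python sketch of genetic-algorithm hyper-parameter search of the kind described. The search space, placeholder fitness function, and operator choices (truncation selection, uniform crossover, per-gene mutation) are illustrative assumptions, not the authors' implementation; in practice the fitness function would train the RNN/LSTM model and return its negated validation error.

```python
import random

# Hypothetical search space; the paper optimizes layer parameters of
# RNN/LSTM models, but these names and ranges are illustrative only.
SPACE = {
    "hidden_units": [32, 64, 128, 256],
    "num_layers": [1, 2, 3],
    "learning_rate": [1e-4, 5e-4, 1e-3, 5e-3],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(ind):
    # Placeholder: in practice, train the model with these
    # hyper-parameters and return, e.g., -validation_MSE.
    return -(abs(ind["hidden_units"] - 128) / 128
             + abs(ind["learning_rate"] - 1e-3) * 100)

def crossover(a, b):
    # Uniform crossover: each gene is taken from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rate=0.1):
    # Each gene is re-sampled from the space with probability `rate`.
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def evolve(pop_size=20, generations=10):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        offspring = [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(pop_size - len(parents))]
        pop = parents + offspring
    return max(pop, key=fitness)

print(evolve())  # best hyper-parameter set found under the toy fitness
```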

42 pages, 17362 KiB  
Article
Reservation-Based 3D Intersection Traffic Control System for Autonomous Unmanned Aerial Vehicles
by Areeya Rubenecia, Myungwhan Choi and Hyo-Hyun Choi
Electronics 2022, 11(3), 309; https://doi.org/10.3390/electronics11030309 - 19 Jan 2022
Viewed by 1218
Abstract
We present a three-dimensional (3D) intersection traffic management platform for small autonomous Unmanned Aerial Vehicles (UAVs), particularly quadcopters, in urban airspace. Assuming that many autonomous UAVs with varying sources and destinations approach a shared airspace, we propose a system model for a 3D intersection that provides safe and systematic management of UAVs. We also devised a scheduling scheme to ensure that the intersection is used efficiently and that no collisions occur among the UAVs within it. The scheme applies a reservation-based approach whose outcome is sensitive to the order in which UAVs are scheduled, so a genetic algorithm is used to determine the best UAV sequence. Simulations were performed to evaluate the efficiency of the system; they show that, for a highly crowded intersection, our scheduling scheme reduces the UAVs' average time in the system by 27% compared with first-come, first-served scheduling.
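
As a rough illustration of why the reservation-based approach is sequence-sensitive, the sketch below greedily reserves shared airspace cells in a given UAV order and uses a small genetic algorithm over permutations to shorten total time in the system. The cell model, dwell times, and GA operators are assumptions for illustration, not the paper's system model.

```python
import random

# Illustrative reservation model: each UAV requests a set of shared
# airspace cells for a dwell time. IDs, cells, and times are assumed.
UAVS = [  # (uav_id, arrival_time, cells_needed, dwell_time)
    ("u1", 0.0, {"c1", "c2"}, 2.0),
    ("u2", 0.5, {"c2", "c3"}, 1.5),
    ("u3", 1.0, {"c1"}, 1.0),
]

def schedule(sequence):
    """Greedy reservation in the given order; returns total time in system."""
    cell_free = {}  # cell -> time at which it becomes free
    total = 0.0
    for _, arrival, cells, dwell in sequence:
        start = max([arrival] + [cell_free.get(c, 0.0) for c in cells])
        for c in cells:
            cell_free[c] = start + dwell
        total += (start + dwell) - arrival
    return total

def ga_best_sequence(uavs, pop_size=30, generations=40):
    def mutate(seq):
        s = seq[:]
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]  # swap mutation on the permutation
        return s
    population = [random.sample(uavs, len(uavs)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=schedule)  # lower total time is better
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=schedule)

fcfs = sorted(UAVS, key=lambda u: u[1])  # first-come, first-served baseline
best = ga_best_sequence(UAVS)
print("FCFS:", schedule(fcfs), "GA:", schedule(best))
```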

13 pages, 1283 KiB  
Communication
Mobility-Aware Hybrid Flow Rule Cache Scheme in Software-Defined Access Networks
by Youngjun Kim, Jinwoo Park and Yeunwoong Kyung
Electronics 2022, 11(1), 160; https://doi.org/10.3390/electronics11010160 - 05 Jan 2022
Cited by 6 | Viewed by 1742
Abstract
Owing to dynamic user mobility, proactive flow rule caching has become a promising solution in software-defined networking (SDN)-based access networks for reducing the number of flow rule installation procedures between the forwarding nodes and the SDN controller. However, since each forwarding node has a flow rule cache limit, an efficient flow rule cache strategy is required. To address this challenge, this paper proposes a mobility-aware hybrid flow rule cache scheme. Based on a comparison between the delay requirement of the incoming flow and the response delay of the controller, the proposed scheme installs the flow rule either proactively or reactively on the target candidate forwarding nodes. To find the optimal number of proactive flow rules under the flow rule cache limits, an integer linear programming (ILP) problem is formulated and solved with a heuristic method. Extensive simulation results demonstrate that the proposed scheme outperforms existing schemes in terms of flow table utilization ratio, flow rule installation delay, and flow rule hit ratio under various settings.
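
A minimal sketch of the hybrid decision rule described in the abstract follows: a rule is installed proactively only when the flow's delay requirement cannot tolerate a reactive round trip to the controller, subject to the cache limit. The greedy ordering by handover probability and all parameter values are illustrative assumptions; the paper formulates this as an ILP solved heuristically.

```python
# flows: (flow_id, delay_requirement_ms, handover_probability); the
# greedy ordering by handover probability is an assumption standing in
# for the paper's ILP formulation and heuristic solver.
def plan_rules(flows, controller_delay_ms, cache_limit):
    proactive, reactive = [], []
    for flow_id, delay_req, p_handover in sorted(
            flows, key=lambda f: f[2], reverse=True):
        # Pre-install only if the flow cannot wait for a reactive
        # packet-in round trip, and the cache still has room.
        if delay_req < controller_delay_ms and len(proactive) < cache_limit:
            proactive.append(flow_id)
        else:
            reactive.append(flow_id)  # rule installed on first packet-in
    return proactive, reactive

pro, rea = plan_rules(
    flows=[("f1", 5.0, 0.9), ("f2", 50.0, 0.2), ("f3", 8.0, 0.7)],
    controller_delay_ms=20.0,  # assumed controller response delay
    cache_limit=2)
print("proactive:", pro, "reactive:", rea)
```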

9 pages, 250 KiB  
Article
Sentiment-Target Word Pair Extraction Model Using Statistical Analysis of Sentence Structures
by Jaechoon Jo, Gyeongmin Kim and Kinam Park
Electronics 2021, 10(24), 3187; https://doi.org/10.3390/electronics10243187 - 20 Dec 2021
Cited by 1 | Viewed by 2465
Abstract
Product information is propagated online via forums and social media. Many products are recommended by expert system methods and considered for purchase on the basis of online comments or product reviews. Studying people's opinions by extracting information from documents, in order to predict their views on products, is referred to as sentiment analysis. Finding sentiment-target word pairs is an important issue in sentiment mining research. In Korean, the predicate appears at the very end of a sentence, so it is difficult to find the exact word pairs without first identifying the sentence's syntactic structure. In this study, we propose a model that parses sentence structures and extracts the sentiment-target word pairs appearing in a sentence from the parse tree, using parsing and statistical methods together with a sentiment word extractor and a target word extractor. Tested on data from 4000 movie reviews, the proposed model showed high performance compared with other models in both accuracy, 93.25 (+14.45), and F1-score, 82.29 (+3.31). However, the recall rate (−0.35) needs improvement and computational costs must be reduced.
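
To make the pairing step concrete, here is a toy sketch that pairs sentiment words with their targets from pre-parsed dependency triples. It uses an English example and a tiny lexicon purely for illustration; the paper's model parses Korean sentence structures and uses statistical sentiment and target word extractors.

```python
# Tiny sentiment lexicon and pre-parsed (head, relation, dependent)
# triples; both are stand-ins for the paper's statistical extractors
# and Korean parser.
SENTIMENT_LEXICON = {"great": "positive", "boring": "negative"}

def extract_pairs(dependencies):
    pairs = []
    for head, rel, dep in dependencies:
        # A sentiment predicate or modifier whose subject/noun is `dep`
        # yields a (sentiment, target) candidate.
        if rel in ("nsubj", "amod") and head in SENTIMENT_LEXICON:
            pairs.append({"sentiment": head, "target": dep,
                          "polarity": SENTIMENT_LEXICON[head]})
    return pairs

# "The acting was great, but the plot was boring."
deps = [("great", "nsubj", "acting"), ("boring", "nsubj", "plot")]
print(extract_pairs(deps))
```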

21 pages, 1340 KiB  
Article
Stable, Low Power and Bit-Interleaving Aware SRAM Memory for Multi-Core Processing Elements
by Nandakishor Yadav, Youngbae Kim, Shuai Li and Kyuwon Ken Choi
Electronics 2021, 10(21), 2724; https://doi.org/10.3390/electronics10212724 - 08 Nov 2021
Cited by 4 | Viewed by 2909
Abstract
Machine learning and convolutional neural network (CNN)-based artificial intelligence accelerators require significant parallel data processing from cache memory. A separate read port is commonly used to design built-in computational memory (CRAM) to reduce the data processing bottleneck, but such memory relies on multi-port read and write operations, which reduces stability and reliability. In this paper, we propose a self-adaptive 12T SRAM cell that increases read stability for multi-port operation. The self-adaptive technique improves stability and reliability; we increase read stability by refreshing the storage node during the read operation. The proposed technique also prevents the bit-interleaving problem. Furthermore, we present a butterfly-inspired SRAM bank that increases performance and reduces power dissipation. The proposed SRAM saves 12% more total power than a state-of-the-art 12T SRAM cell-based SRAM, and write performance improves by 28.15% compared with the state-of-the-art 12T SRAM design. The total area of the proposed architecture is only 1.9 times that of a conventional 6T SRAM cell-based design.
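
For background on the bit-interleaving issue the cell addresses, the sketch below shows the conventional column-interleaved layout in which adjacent physical columns hold bits of different logical words. The interleaving degree and word width are illustrative assumptions, not figures from the paper.

```python
# Conventional column interleaving: adjacent physical columns hold the
# same bit position of different logical words, so a disturbance that
# spans neighbouring columns corrupts at most one bit per word.
INTERLEAVE = 4  # logical words sharing one physical row (assumed)

def physical_column(word_index, bit_index, interleave=INTERLEAVE):
    """Map (word, bit) within a row to its physical column."""
    return bit_index * interleave + word_index

def words_hit(columns, interleave=INTERLEAVE):
    """Logical words touched by a set of disturbed physical columns."""
    return {c % interleave for c in columns}

# Columns 8-11 are bit 2 of words 0-3: four words, one bit each.
print(sorted(words_hit({8, 9, 10, 11})))  # [0, 1, 2, 3]
```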

12 pages, 3313 KiB  
Article
Ext4 and XFS File System Forensic Framework Based on TSK
by Hyungchan Kim, Sungbum Kim, Yeonghun Shin, Wooyeon Jo, Seokjun Lee and Taeshik Shon
Electronics 2021, 10(18), 2310; https://doi.org/10.3390/electronics10182310 - 20 Sep 2021
Cited by 4 | Viewed by 4602
Abstract
Recently, the number of Internet of Things (IoT) devices using a Linux-based file system, such as artificial intelligence (AI) speakers and smartwatches, has increased. These devices are connected to the Internet and generate vast amounts of data. To manage the generated data efficiently and improve processing speed, functionality is enhanced by updating the file system version or adopting new file systems such as the Extended File System (XFS), the B-tree file system (Btrfs), or the Flash-Friendly File System (F2FS). However, when an existing file system is updated, its metadata structure may change, and analysis of newly released file systems may be insufficient, making it impossible for existing commercial tools to extract and restore deleted files. In an actual forensic investigation, unrecoverable deleted files can mean missed clues, making it difficult to identify the culprit. Accordingly, we propose a framework for extracting and recovering files based on The Sleuth Kit (TSK) by deriving the metadata changes in Ext4 journal checksum v3 and XFS v5. By comparing the accuracy and recovery rate of the proposed framework with those of existing commercial tools on an experimental dataset, we conclude that sustained file system research is needed from a forensics perspective.
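
As context for the stock TSK workflow the framework extends, here is a minimal sketch using pytsk3 (the Python bindings for The Sleuth Kit) to enumerate deleted entries in a raw disk image. The image path and offset are placeholders; the paper's contribution, handling the changed metadata of Ext4 journal checksum v3 and XFS v5, is not reproduced here.

```python
import pytsk3  # Python bindings for The Sleuth Kit (TSK)

img = pytsk3.Img_Info("evidence.dd")  # raw (dd) image; path is a placeholder
fs = pytsk3.FS_Info(img, offset=0)    # file system assumed at offset 0

# Walk the root directory and report entries whose metadata is
# unallocated, i.e., deleted files whose inodes still survive.
for entry in fs.open_dir(path="/"):
    meta = entry.info.meta
    if meta is None:
        continue
    if meta.flags & pytsk3.TSK_FS_META_FLAG_UNALLOC:
        name = entry.info.name.name.decode("utf-8", "replace")
        print(f"deleted: {name} (inode {meta.addr}, {meta.size} bytes)")
```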

15 pages, 2595 KiB  
Article
A Novel Ultra-Low Power 8T SRAM-Based Compute-in-Memory Design for Binary Neural Networks
by Youngbae Kim, Shuai Li, Nandakishor Yadav and Kyuwon Ken Choi
Electronics 2021, 10(17), 2181; https://doi.org/10.3390/electronics10172181 - 06 Sep 2021
Cited by 3 | Viewed by 2957
Abstract
We propose a novel ultra-low-power, voltage-based compute-in-memory (CIM) design with a new single-ended 8T SRAM bit cell structure. Since the proposed SRAM bit cell uses a single bitline for CIM calculation with decoupled read and write operations, it achieves much higher energy efficiency. In addition, the stacked structure of the read unit, which separates the read and write operations, minimizes leakage power consumption. The proposed bit cell structure also provides better read and write stability owing to the isolated read path, isolated write path, and greater pull-up ratio. Compared with the state-of-the-art SRAM-CIM, our proposed SRAM-CIM requires no extra transistors for CIM vector-matrix multiplication. We implemented a 16 k (128 × 128) bit cell array for the computation of 128 neurons, using 64 binary inputs (0 or 1) and 64 × 128 binary weights (−1 or +1) for binary neural networks (BNNs). Each row of the bit cell array, corresponding to a single neuron, consists of 128 cells in total: 64 cells for the dot-product and 64 replica cells for the ADC reference, the latter comprising 32 cells for the ADC reference and 32 cells for offset calibration. We used a row-by-row ADC for the quantized outputs of each neuron, supporting 1–7 output bits per neuron. The ADC uses a sweeping method based on the 32 duplicate bit cells, with the sweep cycle set to 2^(N−1) + 1, where N is the number of output bits. The simulation was performed at room temperature (27 °C) in 45 nm technology using Synopsys HSPICE, with all bit cell transistors at minimum size considering area, power, and speed. The proposed SRAM-CIM reduces power consumption for vector-matrix multiplication by 99.96% compared with the existing state-of-the-art SRAM-CIM. Furthermore, because the reading unit is decoupled from the internal node of the latch, there is no feedback from the reading unit, yielding read-static-noise-margin-free results.
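
A behavioral (not circuit-level) sketch of the computation the array performs may help: 64 binary inputs against 64 × 128 binary weights, one dot-product per neuron, with a uniform quantizer standing in for the row-by-row sweeping ADC. The array sizes follow the abstract; the quantizer itself is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=64)          # binary inputs, 0 or 1
W = rng.choice([-1, 1], size=(64, 128))  # binary weights, one column per neuron

acc = x @ W  # per-neuron accumulation; with these encodings it lies in [-64, 64]

def adc(values, bits):
    """Uniform quantizer standing in for the row-by-row sweeping ADC."""
    levels = 2 ** bits
    lo, hi = -64, 64                     # full dot-product range
    codes = (values - lo) * levels // (hi - lo + 1)
    return np.clip(codes, 0, levels - 1).astype(int)

print(adc(acc, bits=3)[:8])              # 3-bit outputs for the first 8 neurons
```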

16 pages, 11759 KiB  
Article
Video Object Detection Using Event-Aware Convolutional LSTM and Object Relation Networks
by Chen Zhang, Zhengyu Xia and Joohee Kim
Electronics 2021, 10(16), 1918; https://doi.org/10.3390/electronics10161918 - 10 Aug 2021
Viewed by 2083
Abstract
Common video-based object detectors exploit temporal contextual information to improve object detection performance. However, detecting objects under challenging conditions has not yet been thoroughly studied. In this paper, we focus on improving detection performance for challenging events such as aspect ratio change, occlusion, and large motion. To this end, we propose a video object detection network using an event-aware ConvLSTM and object relation networks. The proposed event-aware ConvLSTM can highlight the areas where such challenging events take place; compared with a traditional ConvLSTM, it makes it easier to exploit temporal contextual information to support video-based object detectors under challenging events. To further improve detection performance, an object relation module with supporting-frame selection is applied to enhance the pooled features for the target ROI; it effectively selects the features of the same object from one of the reference frames rather than from all of them. Experimental results on the ImageNet VID dataset show that the proposed method achieves an mAP of 81.0% without any post-processing and can handle challenging events efficiently in video object detection.
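
For context on the backbone the paper extends, below is a minimal ConvLSTM cell in PyTorch; the event-aware gating that distinguishes the proposed method is not reproduced. Shapes and hyper-parameters are illustrative.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Plain ConvLSTM cell: LSTM gates computed by a 2D convolution."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution produces all four gate pre-activations.
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = gates.chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

# Illustrative shapes: one frame of 8-channel features at 32x32.
x = torch.randn(1, 8, 32, 32)
cell = ConvLSTMCell(8, 16)
h0 = c0 = torch.zeros(1, 16, 32, 32)
h, _ = cell(x, (h0, c0))
print(h.shape)  # torch.Size([1, 16, 32, 32])
```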
