Applications and Methodologies of Artificial Intelligence in Big Data Analysis

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 22,386

Special Issue Editors


Guest Editor
Department of Computer Science and Information Engineering, Chang Gung University, Guishan 33302, Taiwan
Interests: artificial intelligence; deep learning; big data analysis; medical image data analysis; prediction model design; IoT; fog and edge computing

Guest Editor
Laboratoire ERIC, Institut de Communication, Université de Lyon, Lyon 2, 69676 Bron CEDEX, France
Interests: business intelligence & big data management; business analytics & data discovery; distributed data warehousing; intensive queries in big data warehouses; mapreduce and spark technologies; text warehousing & OLAP text cubes; semantic OLAP & social OLAP; analysis of social graphs cubes; community detection and evaluation

Guest Editor
Department of Electrical Engineering, Charles W. Davidson College of Engineering, San Jose State University, One Washington Square, San Jose, CA 95192, USA
Interests: computer and communication networks; TCP/IP Internet; client-server; Web; traffic load balancing; VoIP; video and streaming over IP; multimedia networks; design of networking equipment; modems; switches and routers; wireless and mobile networks and wireless sensor networks

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) and big data analytics are two of the most promising technologies in the eyes of data scientists and large corporations. The emergence of robotics has introduced a degree of autonomy in which decisions are implemented without human intervention. Deep learning is considered an advanced branch of AI through which machines can send and receive data and learn new concepts by analyzing those data. Big data helps organizations analyze their existing data and draw meaningful insights from them. The rapid growth of technologies such as AI and big data analytics can be harnessed for anomaly detection, pattern recognition, and industrial fault detection, and can sharpen market-analysis insights. There is thus a pressing need to design efficient AI algorithms, models, and methodologies to analyze the big data generated by sources such as industry, healthcare, medicine, and the financial markets.

The topics of interest in this SI include, but are not limited to:

  • Applications of machine learning
  • Deep learning algorithms
  • IoT and smart city big data analysis
  • Imaging big data analysis
  • Natural language processing and speech recognition
  • Medical imaging data analysis
  • Applications of deep learning in image data analysis
  • Pattern recognition and applications of AI
Prof. Prasan Kumar Sahoo
Prof. Omar BOUSSAID
Prof. Nader F. Mir
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial Intelligence
  • Machine Learning
  • Deep Learning
  • Big Data
  • Internet of Things
  • Language Processing
  • Image Analysis

Published Papers (6 papers)


Research

31 pages, 6703 KiB  
Article
A Hybrid Prognostics Deep Learning Model for Remaining Useful Life Prediction
by Zhiyuan Xie, Shichang Du, Jun Lv, Yafei Deng and Shiyao Jia
Electronics 2021, 10(1), 39; https://doi.org/10.3390/electronics10010039 - 29 Dec 2020
Cited by 17 | Viewed by 4004
Abstract
Remaining Useful Life (RUL) prediction is significant in indicating the health status of sophisticated equipment, and it requires historical data because of its complexity. The number and complexity of environmental parameters such as vibration and temperature can cause non-linear states of data, making prediction tremendously difficult. Conventional machine learning models such as the support vector machine (SVM), random forest, and back-propagation neural network (BPNN), however, have limited capacity to predict accurately. In this paper, a two-phase deep learning model, the attention-convolutional forget-gate recurrent network (AM-ConvFGRNET), is proposed for RUL prediction. The first phase, the forget-gate convolutional recurrent network (ConvFGRNET), is based on a one-dimensional analog of the long short-term memory (LSTM) network that removes all gates except the forget gate and uses chrono-initialized biases. The second phase is an attention mechanism, which enables the model to extract more specific features for generating an output, compensating for ConvFGRNET's black-box nature and improving interpretability. The performance and effectiveness of AM-ConvFGRNET for RUL prediction are validated by comparing it with other machine learning and deep learning methods on the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset and a ball-screw experiment dataset. Full article
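The single-gate recurrence underlying ConvFGRNET can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification: the convolutional input mapping and the attention phase are omitted, the weight names are hypothetical, and the update rule follows the common forget-gate-only formulation with chrono-initialized biases, which the abstract suggests but does not fully specify:

```python
import numpy as np

def chrono_bias(hidden_size, max_dependency, rng):
    # chrono initialization: forget-gate biases drawn as log(U(1, T - 1)),
    # biasing the gate toward remembering over long horizons
    return np.log(rng.uniform(1.0, max_dependency - 1.0, size=hidden_size))

def fgrnet_step(x, h_prev, Wf, Uf, bf, Wc, Uc, bc):
    # forget gate: the only gate kept from the LSTM
    f = 1.0 / (1.0 + np.exp(-(Wf @ x + Uf @ h_prev + bf)))
    # candidate state computed from the current input and previous state
    c = np.tanh(Wc @ x + Uc @ h_prev + bc)
    # convex combination: f keeps old memory, (1 - f) admits the new candidate
    return f * h_prev + (1.0 - f) * c
```

In the full model, the input mapping would be a 1-D convolution over sensor windows, and the attention phase would reweight the resulting features before regressing to the RUL value.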

28 pages, 12053 KiB  
Article
A Study on Acer Mono Sap Integration Management System Based on Energy Harvesting Electric Device and Sap Big Data Analysis Model
by Se-Hoon Jung, Jun-Yeong Kim, Jun Park, Jun-Ho Huh and Chun-Bo Sim
Electronics 2020, 9(11), 1979; https://doi.org/10.3390/electronics9111979 - 23 Nov 2020
Cited by 2 | Viewed by 2705
Abstract
This study set out to develop an Information and Communication Technologies (ICT)-based smart electric device for collecting Acer mono sap, making efficient use of the labor force by reducing the inefficient manual work of recording sap exudation and state information. On the assumption that environmental information is closely connected with Acer mono sap exudation, and to reinforce the competitiveness of forest-product production, the study analyzed correlations between Acer mono sap exudation and environmental information and predicted exudation. The smart electric collection device gathers hourly data on Acer mono sap exudation together with outdoor temperature, humidity, conductivity, and wind direction and velocity, and was installed in four areas of the Republic of Korea: Sancheong, Gwangyang, Geoje, and Inje. The collected data were used to analyze correlations between environmental information and Acer mono sap exudation using four algorithms: linear regression, Support Vector Machine (SVM), Artificial Neural Network (ANN), and random forest. Remarkable outcomes were obtained across all the algorithms except linear regression, demonstrating close connections between environmental information and Acer mono sap exudation. The random forest model, which showed the best performance, was used to build a mobile app that provides predicted Acer mono sap exudation and the collected environmental information. Full article
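A prediction pipeline of this shape can be sketched with scikit-learn's random forest regressor. The feature columns and the synthetic target below are invented for illustration; the paper's real data come from the field-installed devices:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# hypothetical feature columns: temperature (C), humidity (%), conductivity, wind speed (m/s)
X = rng.uniform([-5.0, 20.0, 0.1, 0.0], [15.0, 90.0, 2.0, 10.0], size=(200, 4))
# synthetic exudation target peaking at mild temperatures (illustrative only)
y = 5.0 - 0.05 * (X[:, 0] - 6.0) ** 2 + 0.02 * X[:, 1] + rng.normal(0.0, 0.2, 200)

# fit the ensemble and predict exudation for one new environmental reading
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
predicted_exudation = model.predict([[6.0, 70.0, 1.0, 2.0]])
```

A model trained this way can sit behind a mobile app endpoint, returning a predicted exudation for the latest sensor reading.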

19 pages, 3612 KiB  
Article
Reinforcement Learning Based Passengers Assistance System for Crowded Public Transportation in Fog Enabled Smart City
by Gone Neelakantam, Djeane Debora Onthoni and Prasan Kumar Sahoo
Electronics 2020, 9(9), 1501; https://doi.org/10.3390/electronics9091501 - 13 Sep 2020
Cited by 5 | Viewed by 4036
Abstract
Crowding in city public transportation systems is a primary issue that delays the mobility of passengers. Moreover, scheduled and unscheduled events in a city lead to excess crowding at metro or bus stations. Internet of Things (IoT) devices can be used to collect data related to crowding situations in a smart city. Fog computing data centers located in different zones of a smart city can process and analyze the collected data to help passengers commute smoothly with minimum waiting time in crowded situations. In this paper, a Q-learning-based passenger assistance system is designed to help commuters find less crowded bus and metro stations and avoid long waiting queues. The traffic congestion and crowding data are processed in the fog computing data centers. Our experimental results show that the proposed method achieves higher reward values, which can be used to minimize passengers' waiting time with minimum computational delay compared to a cloud computing platform. Full article
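The core Q-learning idea can be shown in a toy sketch. This is not the paper's system: the single decision point, the reward shaping, and all parameters below are invented for illustration, with reward growing as crowding shrinks so that the learned policy steers commuters to the least crowded station:

```python
import numpy as np

def train_station_policy(crowd_levels, episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over a single decision point: one action per station,
    with an illustrative reward that grows as crowding shrinks."""
    rng = np.random.default_rng(seed)
    q = np.zeros(len(crowd_levels))
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = rng.integers(len(q)) if rng.random() < eps else int(np.argmax(q))
        reward = 1.0 - crowd_levels[a]  # less crowded stations yield more reward
        # standard Q-learning update
        q[a] += alpha * (reward + gamma * np.max(q) - q[a])
    return q

# crowding estimates (fractions) for three candidate stations
q_values = train_station_policy([0.9, 0.2, 0.6])
```

In a fog deployment, each zone's data center would update such Q-values from its local crowding stream and recommend the station with the highest value.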

20 pages, 939 KiB  
Article
Approaching the Optimal Solution of the Maximal α-quasi-clique Local Community Problem
by Patricia Conde-Cespedes
Electronics 2020, 9(9), 1438; https://doi.org/10.3390/electronics9091438 - 3 Sep 2020
Viewed by 1957
Abstract
Complex network analysis (CNA) has attracted much attention in the last few years. An interesting task in CNA is community detection. In this paper, we focus on local community detection, the problem of detecting the community of a given node of interest in the whole network. Moreover, we study the problem of finding local communities of high density, known in graph theory as α-quasi-cliques (for high values of α in the interval ]0,1[). Unfortunately, the higher α is, the smaller the communities become. This leads to the maximal α-quasi-clique community problem for a given node, that is, the problem of finding local communities that are α-quasi-cliques of maximal size. Since this problem is NP-hard, heuristics exist to approach the optimal solution. When α is high (>0.5), the diameter of a maximal α-quasi-clique is at most 2. Based on this property, we propose an algorithm that calculates an upper bound to approach the optimal solution. We evaluate our method on real networks and conclude that, in most cases, the bound is very accurate. Furthermore, for a small real network, the optimal value is achieved exactly in more than 80% of cases. Full article
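The defining density condition is easy to state in code. The sketch below checks the standard α-quasi-clique criterion (at least α·|S|(|S|−1)/2 internal edges) and brute-forces feasible subsets of a fixed size; it is usable only on tiny graphs and is not the paper's upper-bound algorithm:

```python
from itertools import combinations

def is_alpha_quasi_clique(nodes, edges, alpha):
    """A node set S induces an alpha-quasi-clique when the induced subgraph
    contains at least alpha * |S| * (|S| - 1) / 2 edges."""
    s = set(nodes)
    n = len(s)
    if n < 2:
        return True
    internal = sum(1 for u, v in edges if u in s and v in s)
    return internal >= alpha * n * (n - 1) / 2

def feasible_subsets(nodes, edges, alpha, size):
    # brute-force enumeration of alpha-quasi-cliques of a given size;
    # exponential in |nodes|, reflecting the NP-hardness of the general problem
    return [set(c) for c in combinations(nodes, size)
            if is_alpha_quasi_clique(c, edges, alpha)]
```

The diameter-at-most-2 property for α > 0.5 is what lets the paper restrict the search to the 2-hop neighborhood of the node of interest instead of enumerating subsets of the whole network.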

13 pages, 1625 KiB  
Article
A Machine Learning and Integration Based Architecture for Cognitive Disorder Detection Used for Early Autism Screening
by Jesús Peral, David Gil, Sayna Rotbei, Sandra Amador, Marga Guerrero and Hadi Moradi
Electronics 2020, 9(3), 516; https://doi.org/10.3390/electronics9030516 - 21 Mar 2020
Cited by 17 | Viewed by 4698
Abstract
About 15% of the world’s population suffers from some form of disability. In developed countries, about 1.5% of children are diagnosed with autism. Autism is a developmental disorder distinguished mainly by impairments in social interaction and communication and by restricted and repetitive behavior. Since the cause of autism is still unknown, there have been many studies focused on screening for autism based on behavioral features. Thus, the main purpose of this paper is to present an architecture focused on data integration and analytics, allowing the distributed processing of input data. Furthermore, the proposed architecture allows the identification of relevant features as well as of hidden correlations among parameters. To this end, we propose a methodology able to integrate diverse data sources, even data that are collected separately. This methodology increases the data variety which can lead to the identification of more correlations between diverse parameters. We conclude the paper with a case study that used autism data in order to validate our proposed architecture, which showed very promising results. Full article
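The integration step, merging separately collected sources on a shared key and then looking for cross-source correlations, can be sketched with pandas. All column names and values below are invented for illustration and do not come from the study's autism data:

```python
import pandas as pd

# hypothetical, separately collected sources sharing a subject identifier
behavioral = pd.DataFrame({
    "subject_id": [1, 2, 3, 4],
    "social_score": [12, 30, 8, 25],      # invented behavioral measure
})
clinical = pd.DataFrame({
    "subject_id": [1, 2, 3, 4],
    "screening_score": [18, 4, 22, 6],    # invented screening measure
})

# integrate the two sources on the shared key, then test for a
# cross-source correlation that neither source reveals alone
merged = behavioral.merge(clinical, on="subject_id")
correlation = merged["social_score"].corr(merged["screening_score"])
```

Increasing data variety in this way is what lets the architecture surface hidden correlations between parameters collected by different instruments.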

14 pages, 1597 KiB  
Article
An Efficient and Unique TF/IDF Algorithmic Model-Based Data Analysis for Handling Applications with Big Data Streaming
by Celestine Iwendi, Suresh Ponnan, Revathi Munirathinam, Kathiravan Srinivasan and Chuan-Yu Chang
Electronics 2019, 8(11), 1331; https://doi.org/10.3390/electronics8111331 - 11 Nov 2019
Cited by 46 | Viewed by 4290
Abstract
As the field of data science grows, document analytics has become a more challenging task for rough classification, response analysis, and text summarization. These tasks are used to analyze text data from various intelligent sensing systems. The conventional approach to data analytics and text processing is not suitable for the big data coming from intelligent systems. This work proposes a novel TF/IDF algorithm with a temporal Louvain approach to solve this problem. The approach helps categorize documents into hierarchical structures showing the relationships between variables, a boon to analysts making essential decisions. Public corpora, such as Reuters-21578 and 20 Newsgroups, were used for massive-data analytic experimentation. The results show the efficacy of the proposed algorithm in terms of accuracy and execution time across six datasets, outperforming the state-of-the-art approach and validating its value for big text data analysis. Big data handling with MapReduce has led to tremendous growth and support for tasks such as categorization and sentiment analysis, with higher-quality accuracy from the input data. Full article
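The baseline TF/IDF weighting that the paper extends can be written in a few lines. This is the plain formulation (term frequency times log inverse document frequency), not the paper's temporal Louvain variant:

```python
import math
from collections import Counter

def tfidf(docs):
    """Plain TF-IDF over whitespace-tokenized documents: term frequency
    weighted by the log inverse document frequency across the corpus."""
    n = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter(term for doc in docs for term in set(doc.split()))
    scores = []
    for doc in docs:
        tf = Counter(doc.split())
        total = sum(tf.values())
        scores.append({term: (count / total) * math.log(n / df[term])
                       for term, count in tf.items()})
    return scores
```

Terms concentrated in few documents score higher than corpus-wide ones, which is what makes the weights useful as features for downstream clustering or categorization.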
