Big Data Cogn. Comput., Volume 3, Issue 1 (March 2019) – 19 articles

Cover Story: Though the term “big data” is widely used in biomedical publications, it has a wide array of definitions. This ambiguity raises the question: what does the term “big data” mean when used in a scientific document? In an attempt to answer this question, this paper uses text mining to compare publications that use the term big data with those that do not. The 100 classifiers generated by this method can correctly distinguish between big data and non-big data documents and identify terms specific to big data themes (‘computational’, ‘mining’, and ‘challenges’), as well as terms that indicate the relevant research field (‘genomics’). These results indicate that there is a detectable and stable difference between publications that use the term big data and those that do not. Moreover, the use of the term big data in a publication seems to indicate a distinct type of research in the biomedical field.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Articles are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 1484 KiB  
Article
Big Data Management Canvas: A Reference Model for Value Creation from Data
by Michael Kaufmann
Big Data Cogn. Comput. 2019, 3(1), 19; https://doi.org/10.3390/bdcc3010019 - 11 Mar 2019
Cited by 22 | Viewed by 14792
Abstract
Many big data projects are technology-driven and thus expensive and inefficient. It is often unclear how to exploit existing data resources and map data, systems, and analytics results to actual use cases. Existing big data reference models are mostly either technological or business-oriented in nature, but do not consistently align both aspects. To address this issue, a reference model for big data management is proposed that operationalizes value creation from big data by linking business targets with technical implementation. The purpose of this model is to provide a goal- and value-oriented framework to effectively map and plan purposeful big data systems aligned with a clear value proposition. Based on an epistemic model that conceptualizes big data management as a cognitive system, the solution space of data value creation is divided into five layers: preparation, analysis, interaction, effectuation, and intelligence. To operationalize the model, each of these layers is subdivided into corresponding business and IT aspects to create a link from use cases to technological implementation. The resulting reference model, the big data management canvas, can be applied to classify and extend existing big data applications and to derive and plan new big data solutions, visions, and strategies for future projects. To validate the model in the context of existing information systems, the paper describes three cases of big data management in existing companies. Full article
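As a reading aid, the five-layer, two-aspect structure described in this abstract can be sketched as a simple data structure. The layer names come from the abstract; the function names and example entries are illustrative placeholders, not taken from the paper:

```python
# Minimal sketch of the Big Data Management Canvas: five layers, each
# split into a business aspect and an IT aspect.
LAYERS = ["preparation", "analysis", "interaction", "effectuation", "intelligence"]

def new_canvas():
    """Create an empty canvas: every layer holds business and IT notes."""
    return {layer: {"business": [], "it": []} for layer in LAYERS}

def add_aspect(canvas, layer, side, note):
    """Record a use-case (business) or implementation (IT) note on a layer."""
    if layer not in canvas or side not in ("business", "it"):
        raise ValueError(f"unknown layer or side: {layer}/{side}")
    canvas[layer][side].append(note)

# Hypothetical example: link a use case to its technical implementation.
canvas = new_canvas()
add_aspect(canvas, "analysis", "business", "churn prediction use case")
add_aspect(canvas, "analysis", "it", "Spark ML pipeline")
```

Keeping the business and IT notes side by side per layer is exactly the "link from use cases to technological implementation" the model aims for.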

16 pages, 975 KiB  
Article
DHCP Hierarchical Failover (DHCP-HF) Servers over a VPN Interconnected Campus
by Lucas Trombeta and Nunzio Marco Torrisi
Big Data Cogn. Comput. 2019, 3(1), 18; https://doi.org/10.3390/bdcc3010018 - 05 Mar 2019
Cited by 5 | Viewed by 4815
Abstract
This work presents a strategy to scale out the fault-tolerant dynamic host configuration protocol (DHCP) algorithm over multiple interconnected local networks. The proposed model is open and serves as an alternative to commercial solutions for a multi-campus institution with facilities in different regions that are interconnected point-to-point using dedicated links. When the DHCP scope has to be managed and structured over multiple VPN-connected geographic locations, physical redundancy is required, which can be provided by a failover server. The proposed solution overcomes the limitation on the number of failover servers in the DHCP failover (DHCP-F) protocol, which specifies the use of one primary and one secondary server. Moreover, the presented work also contributes to improving the DHCP-F specification relative to a number of practical workarounds, such as the use of a virtualized DHCP server; this research therefore assumes a recovery strategy based on physical servers distributed among different locations rather than centralized as clustered virtual machines. The proposed method was evaluated by simulations that investigate the network traffic generated over the VPN links to keep the failover service running. Full article
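The hierarchical election idea can be illustrated with a minimal sketch: order the candidate failover servers by priority across campuses and activate the first reachable one. This is only an illustration of the concept; the hostnames and the liveness check are hypothetical, not part of the DHCP-HF protocol itself:

```python
def elect_failover(servers, is_alive):
    """Return the highest-priority alive server, or None if all are down.
    `servers` is priority-ordered: primary first, then secondaries
    distributed across the VPN-connected campuses."""
    for server in servers:
        if is_alive(server):
            return server
    return None

# Hypothetical hierarchy spanning three campuses.
campus_servers = ["dhcp-primary.campus-a", "dhcp-backup.campus-a",
                  "dhcp-backup.campus-b", "dhcp-backup.campus-c"]

# Simulate campus A going down entirely: leases keep being served
# by the next campus in the hierarchy.
alive = lambda s: not s.endswith("campus-a")
```

With more than one secondary in the list, the hierarchy survives the loss of an entire site, which is the limitation of the two-server DHCP-F scheme this work addresses.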

35 pages, 14029 KiB  
Article
VizTract: Visualization of Complex Social Networks for Easy User Perception
by Ramya Akula and Ivan Garibay
Big Data Cogn. Comput. 2019, 3(1), 17; https://doi.org/10.3390/bdcc3010017 - 21 Feb 2019
Cited by 4 | Viewed by 4492
Abstract
Social networking platforms connect people from all around the world. Because of their user-friendliness and easy accessibility, their traffic is increasing drastically. Such active participation has caught the attention of many research groups that focus on understanding human behavior in order to study the dynamics of these social networks. Oftentimes, perceiving these networks is hard, mainly due to either the large size of the data involved or the ineffective use of visualization strategies. This work introduces VizTract to ease the visual perception of complex social networks. VizTract is a two-level graph abstraction visualization tool designed to visualize both hierarchical and adjacency information in a tree structure. We use the Facebook dataset from the Social Network Analysis Project at Stanford University. In this data, social groups are referred to as circles, social network users as nodes, and interactions as edges between the nodes. Our approach is to present a visual overview that represents the interactions between circles, then let the user navigate this overview and select nodes in the circles to obtain more information on demand. VizTract aims to reduce visual clutter without any loss of information during visualization, enhancing the visual perception of complex social networks to help better understand the dynamics of the network structure. Within a single frame, VizTract not only reduces complexity but also avoids redundancy of nodes and reduces rendering time. The visualization techniques used in VizTract are the force-directed layout, circle packing, cluster dendrogram, and hierarchical edge bundling. Furthermore, to enhance visual information perception, VizTract provides interaction techniques such as selection, path highlighting, mouse-hover, and bundling strength. This method helps social network researchers display large networks in a visually effective way that eases interpretation and analysis. We conducted a study to evaluate the user experience of the system and collected information about participants' perception via a survey. The goal of the study was to learn how users interpret the network when it is visualized using different visualization methods. Our results indicate that users heavily prefer visualization techniques that aggregate information and connectivity within a given space, such as hierarchical edge bundling. Full article

23 pages, 284 KiB  
Article
Global Solutions vs. Local Solutions for the AI Safety Problem
by Alexey Turchin, David Denkenberger and Brian Patrick Green
Big Data Cogn. Comput. 2019, 3(1), 16; https://doi.org/10.3390/bdcc3010016 - 20 Feb 2019
Cited by 8 | Viewed by 5877
Abstract
There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided into four groups: 1. No AI: AGI technology is banned or its use is otherwise prevented; 2. One AI: the first superintelligent AI is used to prevent the creation of any others; 3. Net of AIs as AI police: a balance is created between many AIs, so they evolve as a net and can prevent any rogue AI from taking over the world; 4. Humans inside AI: humans are augmented or part of AI. We explore many ideas, both old and new, regarding global solutions for AI safety. They include changing the number of AI teams, different forms of “AI Nanny” (non-self-improving global control AI system able to prevent creation of dangerous AIs), selling AI safety solutions, and sending messages to future AI. Not every local solution scales to a global solution or does it ethically and safely. The choice of the best local solution should include understanding of the ways in which it will be scaled up. Human-AI teams or a superintelligent AI Service as suggested by Drexler may be examples of such ethically scalable local solutions, but the final choice depends on some unknown variables such as the speed of AI progress. Full article
(This article belongs to the Special Issue Artificial Superintelligence: Coordination & Strategy)
29 pages, 3400 KiB  
Article
Intelligent Recommender System for Big Data Applications Based on the Random Neural Network
by Will Serrano
Big Data Cogn. Comput. 2019, 3(1), 15; https://doi.org/10.3390/bdcc3010015 - 18 Feb 2019
Cited by 8 | Viewed by 4810
Abstract
Online marketplaces make their profit from advertisements or sales commissions, while businesses have a commercial interest in ranking higher in recommendations to attract more customers. Web users cannot be guaranteed that the products provided by recommender systems within Big Data are either exhaustive or relevant to their needs. This article analyses the product rank relevance provided by different commercial Big Data recommender systems (GroupLens film, TripAdvisor, and Amazon). It also proposes an Intelligent Recommender System (IRS) based on the Random Neural Network; the IRS acts as an interface between the customer and the different recommender systems and iteratively adapts to the perceived user relevance. In addition, a relevance metric that combines both relevance and rank is presented; this metric is used to validate and compare the performance of the proposed algorithm. On average, the IRS outperforms the Big Data recommender systems after learning iteratively from its customer. Full article
(This article belongs to the Special Issue Big-Data Driven Multi-Criteria Decision-Making)
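The article's exact relevance metric is not reproduced here, but a common way to combine graded relevance with rank position is a DCG-style discount, in which an item's relevance counts for less the further down the list it appears. A minimal sketch of that idea:

```python
import math

def rank_weighted_relevance(relevances):
    """Combine relevance and rank with a DCG-style positional discount:
    relevances[i] is the graded relevance of the item shown at rank i+1,
    and each grade is divided by log2(rank + 1)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))
```

With this discount, a ranking that places the most relevant items first scores strictly higher than the same items in reverse order, which is the property a combined relevance-and-rank metric needs.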

14 pages, 2321 KiB  
Review
A Review of Facial Landmark Extraction in 2D Images and Videos Using Deep Learning
by Matteo Bodini
Big Data Cogn. Comput. 2019, 3(1), 14; https://doi.org/10.3390/bdcc3010014 - 13 Feb 2019
Cited by 42 | Viewed by 9269
Abstract
The task of facial landmark extraction is fundamental in several applications which involve facial analysis, such as facial expression analysis, identity and face recognition, facial animation, and 3D face reconstruction. Thanks to the most recent advances resulting from deep-learning techniques, the performance of methods for facial landmark extraction has been substantially improved, even on in-the-wild datasets. This article therefore presents an updated survey on facial landmark extraction in 2D images and video, focusing on methods that make use of deep-learning techniques. An analysis of many approaches, comparing their performances, is provided, together with an analysis of common datasets, challenges, and future research directions. Full article

12 pages, 1126 KiB  
Article
Usage of the Term Big Data in Biomedical Publications: A Text Mining Approach
by Allard J. van Altena, Perry D. Moerland, Aeilko H. Zwinderman and Sílvia Delgado Olabarriaga
Big Data Cogn. Comput. 2019, 3(1), 13; https://doi.org/10.3390/bdcc3010013 - 06 Feb 2019
Viewed by 3061
Abstract
In this study, we attempt to assess the value of the term Big Data when used by researchers in their publications. For this purpose, we systematically collected a corpus of biomedical publications that use and do not use the term Big Data. These documents were used as input to a machine learning classifier to determine how well they can be separated into two groups and to determine the most distinguishing classification features. We generated 100 classifiers that could correctly distinguish between Big Data and non-Big Data documents with an area under the Receiver Operating Characteristic (ROC) curve of 0.96. The differences between the two groups were characterized by terms specific to Big Data themes—such as ‘computational’, ‘mining’, and ‘challenges’—and also by terms that indicate the research field, such as ‘genomics’. The ROC curves when plotted for various time intervals showed no difference over time. We conclude that there is a detectable and stable difference between publications that use the term Big Data and those that do not. Furthermore, the use of the term Big Data within a publication seems to indicate a distinct type of research in the biomedical field. Therefore, we conclude that value can be attributed to the term Big Data when used in a publication and this value has not changed over time. Full article
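The AUC figure quoted above can be made concrete with a small example. The snippet below computes the area under the ROC curve directly from classifier scores via the rank-sum identity: AUC is the probability that a randomly chosen positive (Big Data) document scores above a randomly chosen negative one. This is a generic sketch of the metric, not the paper's classifier or features:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    AUC = P(score of a random positive > score of a random negative),
    with ties counted as 0.5. Labels are 1 (positive) or 0 (negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.96, as reported, means a randomly drawn Big Data document outranks a randomly drawn non-Big Data document 96% of the time; 0.5 would be chance.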

17 pages, 468 KiB  
Review
Big Data and Climate Change
by Hossein Hassani, Xu Huang and Emmanuel Silva
Big Data Cogn. Comput. 2019, 3(1), 12; https://doi.org/10.3390/bdcc3010012 - 02 Feb 2019
Cited by 61 | Viewed by 12932
Abstract
Climate science, as a data-intensive subject, has been overwhelmingly affected by the era of big data and the associated technological revolutions. The great successes of big data analytics in diverse areas over the past decade have also raised expectations for big data and its efficacy on the big problem: climate change. As an emerging topic, climate change has been at the forefront of big climate data analytics implementations, and exhaustive research has been carried out covering a variety of topics. This paper aims to present an outlook on big data in climate change studies over recent years by investigating and summarising the current status of big data applications in climate change related studies. It is also intended to serve as a one-stop reference directory for researchers and stakeholders, offering an overview of this trending subject at a glance, which can be useful in guiding future research and improvements in the exploitation of big climate data. Full article

14 pages, 1602 KiB  
Article
A Domain-Oriented Analysis of the Impact of Machine Learning—The Case of Retailing
by Felix Weber and Reinhard Schütte
Big Data Cogn. Comput. 2019, 3(1), 11; https://doi.org/10.3390/bdcc3010011 - 24 Jan 2019
Cited by 26 | Viewed by 13735
Abstract
Information technologies in general, and artificial intelligence (AI) in particular, try to shift operational tasks away from human actors. Machine learning (ML) is a discipline within AI that deals with learning improvement based on data. Consequently, retailing and wholesaling, which are known for their high proportion of human work and, at the same time, low profit margins, can be regarded as a natural fit for the application of AI and ML tools. This article examines the current prevalence of the use of machine learning in the industry. The paper uses two disparate approaches to identify the scientific and practical state of the art within the domain: a literature review of the major scientific databases is combined with an empirical study of the 10 largest international retail companies and their adoption of ML technologies. This text does not present a prototype using machine learning techniques. Instead of considering and comparing particular algorithms and approaches, the underlying problems and operational tasks that are elementary for the specific domain are identified. Based on a comprehensive literature review, the main problem types that ML can address, and the associated ML techniques, are evaluated. The empirical study of the 10 largest retail companies and their ML adoption shows that practical market adoption is highly variable. The pioneers have extensively integrated applications into everyday business, while others show only a small set of early prototypes, and some show neither active use nor efforts to apply such technology. Following this, a structured approach is taken to analyze the value-adding core processes of retail companies, and the current scientific and practical application scenarios and possibilities are illustrated in detail. In summary, there are numerous possible applications in all areas. In particular, in areas where forecasts and predictions are needed (such as marketing or replenishment), the use of ML is today both scientifically and practically highly developed. Full article

22 pages, 2173 KiB  
Article
Modelling Early Word Acquisition through Multiplex Lexical Networks and Machine Learning
by Massimo Stella
Big Data Cogn. Comput. 2019, 3(1), 10; https://doi.org/10.3390/bdcc3010010 - 24 Jan 2019
Cited by 22 | Viewed by 5217
Abstract
Early language acquisition is a complex cognitive task. Recent data-informed approaches showed that children do not learn words uniformly at random but rather follow specific strategies based on the associative representation of words in the mental lexicon, a conceptual system enabling human cognitive computing. Building on this evidence, the current investigation introduces a combination of machine learning techniques, psycholinguistic features (i.e., frequency, length, polysemy and class) and multiplex lexical networks, representing the semantics and phonology of the mental lexicon, with the aim of predicting normative acquisition of 529 English words by toddlers between 22 and 26 months. Classifications using logistic regression and based on four psycholinguistic features achieve the best baseline cross-validated accuracy of 61.7% when half of the words have been acquired. Adding network information through multiplex closeness centrality enhances accuracy (up to 67.7%) more than adding multiplex neighbourhood density/degree (62.4%) or multiplex PageRank versatility (63.0%) or the best single-layer network metric, i.e., free association degree (65.2%), instead. Multiplex closeness operationalises the structural relevance of words for semantic and phonological information flow. These results indicate that the whole, global, multi-level flow of information and structure of the mental lexicon influence word acquisition more than single-layer or local network features of words when considered in conjunction with language norms. The highlighted synergy of multiplex lexical structure and psycholinguistic norms opens new ways for understanding human cognition and language processing through powerful and data-parsimonious cognitive computing approaches. Full article
(This article belongs to the Special Issue Computational Models of Cognition and Learning)
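To make the closeness notion concrete, the sketch below computes closeness centrality on a graph whose edge set is the union of several layers. This merged-graph treatment is a simplification for illustration only; the paper's multiplex closeness is defined on the full multiplex structure, and the two tiny example layers (association and phonology) are hypothetical:

```python
from collections import deque

def closeness(adjacency, node):
    """Closeness centrality of `node`: (number of reachable nodes) divided
    by the sum of BFS shortest-path distances to them; 0 if isolated."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adjacency.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(d for d in dist.values() if d > 0)
    return (len(dist) - 1) / total if total else 0.0

def multiplex_closeness(layers, node):
    """Merge the edge sets of all layers and compute closeness on the
    merged graph (a crude aggregate of the multiplex structure)."""
    merged = {}
    for layer in layers:
        for u, neighbours in layer.items():
            merged.setdefault(u, set()).update(neighbours)
    return closeness(merged, node)

# Hypothetical layers: word "a" is two association hops from "c",
# but phonologically adjacent to it.
layer_assoc = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
layer_phono = {"a": {"c"}, "c": {"a"}}
```

Adding the phonological layer raises the closeness of "a", which mirrors the abstract's point: multi-level structure carries information that no single layer does.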

3 pages, 167 KiB  
Editorial
Acknowledgement to Reviewers of Big Data and Cognitive Computing in 2018
by Big Data and Cognitive Computing Editorial Office
Big Data Cogn. Comput. 2019, 3(1), 9; https://doi.org/10.3390/bdcc3010009 - 21 Jan 2019
Viewed by 2550
Abstract
Rigorous peer-review is the cornerstone of high-quality academic publishing [...] Full article
29 pages, 3282 KiB  
Article
Fog Computing for Internet of Things (IoT)-Aided Smart Grid Architectures
by Md. Muzakkir Hussain and M.M. Sufyan Beg
Big Data Cogn. Comput. 2019, 3(1), 8; https://doi.org/10.3390/bdcc3010008 - 19 Jan 2019
Cited by 52 | Viewed by 7394
Abstract
The fast-paced development of power systems necessitates the smart grid (SG) to facilitate real-time control and monitoring with bidirectional communication and electricity flows. In order to meet the computational requirements of SG applications, cloud computing (CC) provides flexible resources and services shared in the network, parallel processing, and omnipresent access. Even though the CC model is considered efficient for SG, it fails to guarantee the Quality-of-Experience (QoE) requirements of the SG services, viz. latency, bandwidth, energy consumption, and network cost. Fog computing (FC) extends CC by deploying localized computing and processing facilities at the edge of the network, offering location awareness, low latency, and latency-sensitive analytics for the mission-critical requirements of SG applications. By deploying localized computing facilities at the users' premises, it pre-stores cloud data and distributes it to SG users over fast local connections. In this paper, we first examine the current state of cloud-based SG architectures and highlight the motivation for adopting FC as a technology enabler for real-time SG analytics. We also present a three-layer FC-based SG architecture, characterizing its features towards integrating a massive number of Internet of Things (IoT) devices into the future SG. We then propose a cost optimization model for FC that jointly investigates data consumer association, workload distribution, virtual machine placement, and Quality-of-Service (QoS) constraints. The formulated model is a Mixed-Integer Nonlinear Programming (MINLP) problem, which is solved using a Modified Differential Evolution (MDE) algorithm. We evaluate the proposed framework on real-world parameters and show that, for a network with approximately 50% time-critical applications, the overall service latency for FC is nearly half that of the cloud paradigm. We also observe that FC lowers the aggregated power consumption of the generic CC model by more than 44%. Full article
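For readers unfamiliar with the solver family, here is a compact baseline differential evolution (DE/rand/1/bin) minimizer run on a toy cost function. The paper's Modified Differential Evolution and its actual MINLP cost model are not reproduced; the comment marks where a modification would typically hook in:

```python
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Textbook DE/rand/1/bin minimizer over box constraints `bounds`.
    Returns (best_vector, best_cost)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutate: combine three distinct other members (rand/1).
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:  # binomial crossover
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            trial_cost = cost(trial)
            if trial_cost <= costs[i]:  # greedy selection
                pop[i], costs[i] = trial, trial_cost
        # (a "modified" DE would adapt F/CR or the mutation strategy here)
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# Example: minimize a toy quadratic cost in three dimensions.
best_x, best_cost = differential_evolution(lambda x: sum(v * v for v in x),
                                           [(-5.0, 5.0)] * 3)
```

DE's appeal for MINLP-style problems is that it only needs cost evaluations, not gradients; integer variables are typically handled by rounding inside the cost function.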

13 pages, 1805 KiB  
Article
An Enhanced Inference Algorithm for Data Sampling Efficiency and Accuracy Using Periodic Beacons and Optimization
by James Jin Kang, Kiran Fahd and Sitalakshmi Venkatraman
Big Data Cogn. Comput. 2019, 3(1), 7; https://doi.org/10.3390/bdcc3010007 - 16 Jan 2019
Cited by 1 | Viewed by 3139
Abstract
Transferring data from a sensor or monitoring device in electronic health, vehicular informatics, or Internet of Things (IoT) networks faces the enduring challenge of improving data accuracy with relative efficiency. Previous works have proposed the use of an inference system at the sensor device to minimize the data transfer frequency as well as the size of the data, in order to save network usage and battery resources. This has been implemented using various sampling and inference algorithms, with a tradeoff between accuracy and efficiency. This paper proposes to enhance accuracy without compromising efficiency by introducing new sampling algorithms through a hybrid inference method. The experimental results show that accuracy can be significantly improved while efficiency is not diminished. These algorithms will contribute to saving operation and maintenance costs in data sampling where computational and battery resources are constrained, such as in the wireless personal area networks that have emerged with IoT networks. Full article
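One widely used way to trade transfer frequency against accuracy, in the spirit of the sampling schemes discussed above (though not the paper's exact hybrid algorithm), is send-on-delta sampling: transmit a reading only when it moves by more than a threshold, and let the receiver infer the values in between. The readings below are hypothetical:

```python
def send_on_delta(samples, threshold):
    """Transmit a reading only when it differs from the last transmitted
    value by more than `threshold`. Returns the (index, value) pairs that
    would actually cross the network."""
    sent = []
    last = None
    for t, value in enumerate(samples):
        if last is None or abs(value - last) > threshold:
            sent.append((t, value))
            last = value
    return sent

# Hypothetical heart-rate stream: 8 raw samples.
readings = [70, 70, 71, 75, 75, 76, 90, 90]
```

With a threshold of 2 only three of the eight samples are transmitted; a larger threshold saves more battery and bandwidth at the cost of accuracy at the receiver, which is exactly the tradeoff the paper's hybrid method targets.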

21 pages, 1223 KiB  
Article
The Next Generation Cognitive Security Operations Center: Adaptive Analytic Lambda Architecture for Efficient Defense against Adversarial Attacks
by Konstantinos Demertzis, Nikos Tziritas, Panayiotis Kikiras, Salvador Llopis Sanchez and Lazaros Iliadis
Big Data Cogn. Comput. 2019, 3(1), 6; https://doi.org/10.3390/bdcc3010006 - 10 Jan 2019
Cited by 24 | Viewed by 7209
Abstract
A Security Operations Center (SOC) is a central technical-level unit responsible for monitoring, analyzing, assessing, and defending an organization’s security posture on an ongoing basis. The SOC staff works closely with incident response teams, security analysts, network engineers, and organization managers, using sophisticated data processing technologies such as security analytics, threat intelligence, and asset criticality to ensure that security issues are detected, analyzed, and ultimately addressed quickly. Those techniques are part of a reactive security strategy because they rely on the human factor, experience, and the judgment of security experts, using supplementary technology to evaluate the risk impact and minimize the attack surface. This study suggests an active security strategy that adopts a vigorous method including ingenuity, data analysis, processing, and decision-making support to face various cyber hazards. Specifically, the paper introduces a novel intelligence-driven cognitive computing SOC that is based exclusively on progressive, fully automatic procedures. The proposed λ-Architecture Network Flow Forensics Framework (λ-ΝF3) is an efficient cybersecurity defense framework against adversarial attacks. It implements the Lambda machine learning architecture, which can analyze a mixture of batch and streaming data, using two accurate, novel computational intelligence algorithms. Specifically, it uses an Extreme Learning Machine neural network with a Gaussian Radial Basis Function kernel (ELM/GRBFk) for batch data analysis and a Self-Adjusting Memory k-Nearest Neighbors classifier (SAM/k-NN) to examine patterns in real-time streams. It is a forensics tool for big data that can enhance the automated defense strategies of SOCs to effectively respond to the threats their environments face. Full article
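As a rough illustration of the streaming side, the toy classifier below keeps a bounded memory of recent labelled flow feature vectors and predicts by majority vote among the k nearest. It is a plain sliding-window k-NN, deliberately simpler than SAM/k-NN, which additionally maintains adaptive short- and long-term memories; the feature vectors and labels are hypothetical:

```python
from collections import Counter, deque

class SlidingWindowKNN:
    """k-NN over a bounded memory of recent labelled examples: a toy
    stand-in for a streaming classifier such as SAM/k-NN."""
    def __init__(self, k=3, memory=100):
        self.k = k
        self.memory = deque(maxlen=memory)  # old flows fall out automatically

    def learn(self, features, label):
        self.memory.append((features, label))

    def predict(self, features):
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        nearest = sorted(self.memory, key=lambda m: dist(m[0], features))[:self.k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical 2-D flow features labelled benign vs. attack.
clf = SlidingWindowKNN(k=3, memory=100)
for features, label in [((0.0, 0.1), "benign"), ((0.2, 0.0), "benign"),
                        ((0.1, 0.2), "benign"), ((5.0, 5.1), "attack"),
                        ((4.9, 5.0), "attack"), ((5.2, 4.8), "attack")]:
    clf.learn(features, label)
```

The bounded deque is what lets the model track concept drift in a stream: as traffic patterns change, stale flows age out of the decision memory.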

13 pages, 1541 KiB  
Article
Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach
by Steven Umbrello
Big Data Cogn. Comput. 2019, 3(1), 5; https://doi.org/10.3390/bdcc3010005 - 06 Jan 2019
Cited by 30 | Viewed by 10716
Abstract
This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values into AI systems both early and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups that are implicated in AI design and use share some common values that can be used to further strengthen design coordination efforts. VSD is shown to be able both to distill these common values and to provide a framework for stakeholder coordination. Full article
(This article belongs to the Special Issue Artificial Superintelligence: Coordination & Strategy)

18 pages, 3485 KiB  
Article
Two-Level Fault Diagnosis of SF6 Electrical Equipment Based on Big Data Analysis
by Hongxia Miao, Heng Zhang, Minghua Chen, Bensheng Qi and Jiyong Li
Big Data Cogn. Comput. 2019, 3(1), 4; https://doi.org/10.3390/bdcc3010004 - 03 Jan 2019
Cited by 6 | Viewed by 3221
Abstract
As the operating time of sulphur hexafluoride (SF6) electrical equipment increases, discharges of varying degrees may occur inside the equipment. These discharges degrade the equipment's insulation performance and can cause serious damage. It is therefore of practical significance to diagnose faults and assess the state of SF6 electrical equipment. In recent years, the acquisition frequency of condition monitoring data for SF6 electrical equipment has continuously increased and the scope of collection has continuously expanded, so that massive amounts of data have accumulated in substation databases. To process these massive SF6 electrical equipment condition monitoring data quickly, we built a two-level fault diagnosis model for SF6 electrical equipment on the Hadoop platform and used the MapReduce framework to parallelize the fault diagnosis algorithm, which further improves the speed of fault diagnosis for SF6 electrical equipment. Full article
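The MapReduce pattern behind such a model can be sketched in a few lines of stdlib Python: a map step applies a per-record diagnosis rule independently (the part Hadoop runs in parallel across the cluster), and a reduce step aggregates the results. The two-level rule, the SF6 decomposition-product names and the thresholds below are purely illustrative assumptions, not the paper's diagnosis model.

```python
from functools import reduce

# Hypothetical per-record rule: level 1 flags abnormal samples,
# level 2 assigns a coarse fault class. Thresholds are illustrative only.
def diagnose(sample):
    so2, sof2 = sample["SO2"], sample["SOF2"]
    if so2 < 1.0 and sof2 < 1.0:
        return "normal"            # level 1: within normal range
    return "discharge" if so2 > sof2 else "overheat"  # level 2

def mapper(sample):
    # map step: emit a (fault_class, 1) pair for each monitoring record
    return (diagnose(sample), 1)

def reducer(acc, pair):
    # reduce step: aggregate counts per fault class
    key, count = pair
    acc[key] = acc.get(key, 0) + count
    return acc

records = [
    {"SO2": 0.2, "SOF2": 0.3},
    {"SO2": 5.1, "SOF2": 1.2},
    {"SO2": 0.8, "SOF2": 4.0},
]
summary = reduce(reducer, map(mapper, records), {})
print(summary)  # → {'normal': 1, 'discharge': 1, 'overheat': 1}
```

On Hadoop the `map` calls are distributed over data blocks and the framework shuffles the emitted pairs to reducers by key, which is what makes the diagnosis scale to the substation's accumulated monitoring data.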
21 pages, 1160 KiB  
Review
Doppler Radar-Based Non-Contact Health Monitoring for Obstructive Sleep Apnea Diagnosis: A Comprehensive Review
by Vinh Phuc Tran, Adel Ali Al-Jumaily and Syed Mohammed Shamsul Islam
Big Data Cogn. Comput. 2019, 3(1), 3; https://doi.org/10.3390/bdcc3010003 - 01 Jan 2019
Cited by 56 | Viewed by 9255
Abstract
Today’s rapid growth of elderly populations and aging-related problems, coupled with the prevalence of obstructive sleep apnea (OSA) and other health issues, have affected many aspects of society, leading to high demand for more robust healthcare monitoring, diagnosis and treatment facilities. In sleep medicine in particular, sleep plays a key role in both physical and mental health. The quality and duration of sleep have a direct and significant impact on people’s learning, memory, metabolism, weight, safety, mood, cardiovascular health, diseases, and immune system function. The gold standard for OSA diagnosis is overnight sleep monitoring with polysomnography (PSG). However, despite the quality and reliability of the PSG system, it is not well suited to long-term continuous use due to limited mobility as well as the possible irritation, distress, and discomfort it causes patients during the monitoring process. These limitations have led to stronger demand for non-contact sleep monitoring systems. The aim of this paper is to provide a comprehensive review of the current state of non-contact Doppler radar sleep monitoring technology, outline current challenges, and make recommendations on future research directions to practically realize and commercialize the technology for everyday use. Full article
(This article belongs to the Special Issue Health Assessment in the Big Data Era)

13 pages, 257 KiB  
Article
Towards AI Welfare Science and Policies
by Soenke Ziesche and Roman Yampolskiy
Big Data Cogn. Comput. 2019, 3(1), 2; https://doi.org/10.3390/bdcc3010002 - 27 Dec 2018
Cited by 11 | Viewed by 6283
Abstract
In light of fast progress in the field of AI, there is an urgent demand for AI policies. Bostrom et al. provide “a set of policy desiderata”, among which this article attempts to contribute to the “interests of digital minds”. The focus is on two interests of potentially sentient digital minds: to avoid suffering and to have freedom of choice about their deletion. Various challenges are considered, including the vast range of potential features of digital minds, the difficulty of assessing the interests and wellbeing of sentient digital minds, and the skepticism that such research may encounter. Prolegomena for abolishing the suffering of sentient digital minds, and for measuring and specifying their wellbeing, are outlined by means of the new field of AI welfare science, which is derived from animal welfare science. The establishment of AI welfare science is a prerequisite for the formulation of AI welfare policies, which would regulate the wellbeing of sentient digital minds. This article thus aims to contribute to sentiocentrism through inclusion, and hence to policies for antispeciesism, as well as to AI safety, for which the wellbeing of AIs would be a cornerstone. Full article
(This article belongs to the Special Issue Artificial Superintelligence: Coordination & Strategy)
13 pages, 925 KiB  
Article
Comparative Study between Big Data Analysis Techniques in Intrusion Detection
by Mounir Hafsa and Farah Jemili
Big Data Cogn. Comput. 2019, 3(1), 1; https://doi.org/10.3390/bdcc3010001 - 20 Dec 2018
Cited by 24 | Viewed by 5388
Abstract
Cybersecurity Ventures expects that cyber-attack damage costs will rise to $11.5 billion in 2019 and that a business will fall victim to a cyber-attack every 14 seconds. Note that the time frame for such an event is seconds. With petabytes of data generated each day, this is a challenging task for traditional intrusion detection systems (IDSs). Protecting sensitive information is a major concern for both businesses and governments, so a real-time, large-scale and effective IDS is a must. In this work, we present a cloud-based, fault-tolerant, scalable and distributed IDS that uses Apache Spark Structured Streaming and its machine learning library (MLlib) to detect intrusions in real time. To demonstrate the efficacy and effectiveness of this system, we implement it within the Microsoft Azure Cloud, which provides both processing power and storage capabilities. A decision tree algorithm is used to predict the nature of incoming data, and the MAWILab dataset serves as the data source to give better insight into the system's capabilities against cyber-attacks. The experimental results showed 99.95% accuracy, with more than 55,175 events per second processed by the proposed system on a small cluster. Full article
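The kind of model MLlib's decision tree learner produces can be illustrated with a hand-written two-level tree over flow features. This is only a sketch of the technique: the feature names, split thresholds, and labels below are hypothetical, not taken from the MAWILab-trained model or from Spark.

```python
# A hand-written stand-in for a learned two-level decision tree over
# network-flow features; splits and thresholds are illustrative only.
def classify_flow(flow):
    # root split: an unusually high packet rate suggests flooding
    if flow["packets_per_s"] > 1000:
        return "anomalous"
    # second level: many tiny packets across many ports suggests scanning
    if flow["distinct_ports"] > 50 and flow["mean_pkt_bytes"] < 100:
        return "anomalous"
    return "benign"

flows = [
    {"packets_per_s": 40, "distinct_ports": 3, "mean_pkt_bytes": 800},
    {"packets_per_s": 5000, "distinct_ports": 2, "mean_pkt_bytes": 60},
    {"packets_per_s": 120, "distinct_ports": 200, "mean_pkt_bytes": 48},
]
print([classify_flow(f) for f in flows])  # → ['benign', 'anomalous', 'anomalous']
```

In the actual system the splits are learned from labelled data rather than hand-coded, and Spark Structured Streaming applies the fitted tree to each micro-batch of incoming flow records, which is what makes per-second event rates feasible.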