
Table of Contents

Information, Volume 11, Issue 4 (April 2020) – 60 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Cover Story: Increasingly complex car automation creates challenges for drivers to understand how and when to [...]
Open Access Article
Cryptocurrencies Perception Using Wikipedia and Google Trends
Information 2020, 11(4), 234; https://doi.org/10.3390/info11040234 - 24 Apr 2020
Viewed by 2110
Abstract
In this research, we present different approaches to investigating the possible relationships between the largest crowd-based knowledge source and the market potential of particular cryptocurrencies. Identification of such relations is crucial because their existence may be used to create a broad spectrum of analyses and reports about cryptocurrency projects and to obtain a comprehensive outlook on the blockchain domain. Activities on the blockchain reach different levels of anonymity, which makes them difficult objects of study. In particular, the standard tools used to characterize social trends and the variables that describe cryptocurrencies' situations are unsuitable in an environment that extensively employs cryptographic techniques to hide real users. The use of Wikipedia to trace the value of crypto assets needs examination because the portal allows different opinions to be gathered: the content of its articles is edited collaboratively by a group of people. Consequently, the information can be more attractive and useful for readers than in the case of non-collaborative sources of information. Wikipedia articles often appear in top positions in search engines such as Google, Bing and Yahoo. One may expect different demand for information about a particular cryptocurrency depending on different events (e.g., sharp price fluctuations). Wikipedia offers information only about cryptocurrencies that are important from the point of view of the language community of its users. This "filter" helps to better identify those cryptocurrencies that have a significant influence on regional markets. The models encompass linkages between different variables and properties. In one model, cryptocurrency projects are ranked by means of article sentiment and quality. In another model, Wikipedia visits are linked to cryptocurrencies' popularity.
Additionally, the interactions between information demand in different Wikipedia language versions are elaborated. They are used to assess the geographical esteem of certain crypto coins. Information offered by Wikipedia about the legal status of cryptocurrency technologies in different states is used in another proposed model, which allows assessment of the adoption of cryptocurrencies in a given jurisdiction. Finally, a model is developed that joins Wikipedia article edits and deletions with the social sentiment towards particular cryptocurrency projects. The mentioned analytical purposes, which permit assessment of the popularity of blockchain technologies in different local communities, are not the only results of the paper. The models can show which country has the greatest demand for particular cryptocurrencies, such as Bitcoin, Ethereum, Ripple, Bitcoin Cash, Monero, Litecoin, Dogecoin and others. Full article
(This article belongs to the Special Issue Blockchain and Smart Contract Technologies)
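The pageview-based popularity model described in the abstract can be illustrated with a toy calculation: the Pearson correlation between a cryptocurrency article's daily Wikipedia views and the coin's price. The figures below are invented for illustration and are not taken from the paper.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical daily figures: Wikipedia article views and a coin price in USD.
views = [1200, 1350, 1100, 2400, 3100, 2900, 2600]
price = [7100, 7150, 7000, 7900, 8400, 8300, 8100]

r = pearson(views, price)
print(f"correlation between page views and price: {r:.2f}")
```

A strong positive correlation on such data is what would motivate linking information demand to market potential; the paper's actual models go further (sentiment, quality, language versions).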
Open Access Discussion
Checklist for Expert Evaluation of HMIs of Automated Vehicles—Discussions on Its Value and Adaptions of the Method within an Expert Workshop
Information 2020, 11(4), 233; https://doi.org/10.3390/info11040233 - 24 Apr 2020
Cited by 1 | Viewed by 644
Abstract
Within a workshop on evaluation methods for automated vehicles (AVs) at the Driving Assessment 2019 symposium in Santa Fe, New Mexico, a heuristic evaluation methodology that aims at supporting the development of human–machine interfaces (HMIs) for AVs was presented. The goal of the workshop was to bring together members of the human factors community to discuss the method and to further promote the development of HMI guidelines and assessment methods for the design of HMIs of automated driving systems (ADSs). The workshop included hands-on experience with rented series-production partially automated vehicles, the application of the heuristic assessment method using a checklist, and intensive discussions about possible revisions of the checklist and the method itself. The aim of this paper is to summarize the results of the workshop, which will be used to further improve the checklist method and make the process available to the scientific community. The participants all had previous experience in HMI design of driver assistance systems, as well as in development and evaluation methods. They brought valuable ideas into the discussion with regard to the overall value of the tool against the background of the intended application, concrete improvements of the checklist (e.g., categorization of items; checklist items that are currently perceived as missing or redundant), when in the design process the tool should be applied, and improvements to the usability of the checklist. Full article
Open Access Article
Verification Method for Accumulative Event Relation of Message Passing Behavior with Process Tree for IoT Systems
Information 2020, 11(4), 232; https://doi.org/10.3390/info11040232 - 23 Apr 2020
Viewed by 557
Abstract
In this paper, we propose a verification method for the message passing behavior of IoT systems that checks the accumulative event relation of process models. In an IoT system, it is hard to verify the behavior of message passing by only looking at the sequence of packet transmissions recorded in the system log. We propose a method to extract event relations from the log and check for any minor deviations that exist in the system. Using process mining, we extract the variation of a normal process model from the log and check for deviations that are hard to detect unless the model is accumulated and stacked over time. Message passing behavior can be verified by comparing the similarity of the process tree models, which represent the execution relation between message passing events. As a result, we can detect minor deviations, such as missing events and perturbed event order, with occurrence probability as low as 3%. Full article
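The idea of flagging rare deviations in logged message-passing behavior can be sketched with a much simpler stand-in for the paper's process-tree comparison: a reference event order accumulated from normal traces, plus checks for missing or out-of-order events. The event names and traces below are hypothetical.

```python
from collections import Counter

REFERENCE = ["connect", "publish", "ack", "disconnect"]  # expected event order

def check_trace(trace, reference=REFERENCE):
    """Return a list of deviations (missing or out-of-order events)."""
    deviations = []
    missing = Counter(reference) - Counter(trace)  # multiset difference
    for event, count in missing.items():
        deviations.append(f"missing event: {event} (x{count})")
    # Events present in both should appear in the reference order.
    expected = [e for e in reference if e in trace]
    observed = [e for e in trace if e in reference]
    if observed != expected:
        deviations.append(f"perturbed order: {observed}")
    return deviations

def deviation_rate(traces):
    """Fraction of traces that deviate from the reference model."""
    bad = sum(1 for t in traces if check_trace(t))
    return bad / len(traces)

traces = [["connect", "publish", "ack", "disconnect"]] * 97 \
       + [["connect", "ack", "publish", "disconnect"],   # perturbed order
          ["connect", "publish", "disconnect"],          # missing ack
          ["connect", "publish", "ack"]]                 # missing disconnect
print(f"deviation rate: {deviation_rate(traces):.0%}")
```

With 3 anomalous traces in 100, the sketch reports the kind of low-probability (3%) deviation the abstract mentions; the paper's actual method compares accumulated process trees rather than flat sequences.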
Open Access Article
Smart Waste Monitoring System as an Initiative to Develop a Digital Territory in Riobamba City
Information 2020, 11(4), 231; https://doi.org/10.3390/info11040231 - 22 Apr 2020
Viewed by 587
Abstract
Digital territories focus on community transformation through sustainable development, saving resources in local governments, bridging the digital gap, and using technology to build smart infrastructure. This article presents the design and implementation of a smart system, called the Waste Treatment System (WTS), for controlling parameters of waste decomposition in the trash bins installed in Riobamba city (Ecuador). The prototype allows monitoring in real time both the amount of waste and the level of rottenness of the garbage by measuring different parameters that indicate the characteristics of the leachates generated inside. The motivation of this work was to yield an efficient solution to urban waste treatment that optimizes resources in the collection process by providing real-time information to improve the collection frequency of vehicles and also to reduce emissions from the decomposition of organic waste. The tests allowed assessing technical aspects such as the maximum coverage of wireless communication, the transmission channel capacity for each prototype, the data-processing requirements, and other more particular parameters such as the production of leachates due to the frequency of collection and the environmental conditions, which will be useful in future work on environmental impact. Full article
(This article belongs to the Section Information and Communications Technology)
Open Access Article
Antonyms: A Computer Game to Improve Inhibitory Control of Impulsivity in Children with Attention Deficit/Hyperactivity Disorder (ADHD)
Information 2020, 11(4), 230; https://doi.org/10.3390/info11040230 - 22 Apr 2020
Viewed by 577
Abstract
The design of a computer-supported serious game concerning inhibition skills in children with Attention Deficit/Hyperactivity Disorder (ADHD) is reported. The game consists of a series of activities, each eliciting the tendency to respond in an immediate, inadequate way. The game is based on the Dual Pathway Model of ADHD proposed by Sonuga-Barke. In the game, children must block impulsive tendencies, reflect upon the situation, inhibit irrelevant thoughts, and find the non-intuitive solution. In the game, the player personifies a superhero, who is asked to save a realm on the opposite side of the Earth (Antonyms) where things happen according to the opposite of the usual rules. The hero faces a series of challenges, in the form of mini-games, to free the planet from enemies crossing different scenarios. To succeed in the game, the player should change his/her attitude by thinking before performing any action rather than acting on impulse. The player is induced to be reflective and thoughtful as well. Results from the evaluation of a preliminary version of the serious game are reported. They support the notion that Antonyms is an adequate tool to lead children to inhibit their tendency to behave impulsively. Full article
(This article belongs to the Special Issue Advances in Mobile Gaming and Games-based Leaning)
Open Access Article
FVSOOMM: A Fuzzy Vectorial Space Model and Method of Personality, Cognitive Dissonance and Emotion in Decision Making
Information 2020, 11(4), 229; https://doi.org/10.3390/info11040229 - 21 Apr 2020
Viewed by 547
Abstract
The purpose of this extension of the ESM’2019 conference paper is to propose some means to implement an artificial thinking model that simulates human psychological behavior. The first necessary model is the time fuzzy vector space model (TFVS). Traditional fuzzy logic uses fuzzification/defuzzification, fuzzy rules and implication to assess and combine several significant attributes to make deductions. The originality of TFVS is not to be another fuzzy logic model but rather a fuzzy object-oriented model that implements a dynamic object structural and behavioral analogy and encapsulates time fuzzy vectors in the object components and their attributes. The second model is a fuzzy vector space object-oriented model and method (FVSOOMM) that describes how to realize, step by step, the appropriate TFVS from the ontology class diagram designed with the Unified Modeling Language (UML). The third contribution concerns the cognitive model EPICE (Emotion, Personality, Interactions, Connaissance (Knowledge) and Experience), whose layers are necessary to design the features of the artificial thinking model (ATM). The findings are that the TFVS model provides the appropriate time modelling tools to design and implement the layers of the EPICE model and thus the cognitive pyramids of the ATM. In practice, the emotion of cognitive dissonance during buying decisions is modelled, and a game addiction application depicts the gamer decision process implemented with TFVS and finite state automata. Future work proposes a platform to automate the implementation of TFVS according to the steps of the FVSOOMM method. One application is a case-based reasoning temporal approach, based on TFVS and on dynamically computed distances between time resultant vectors, to assess and compare the evolution of similar objects. The originality of this work is to provide models, tools and a method to design and implement some features of an artificial thinking model. Full article
(This article belongs to the Special Issue Selected Papers from ESM 2019)
Open Access Article
Digital Objects, Digital Subjects and Digital Societies: Deontology in the Age of Digitalization
Information 2020, 11(4), 228; https://doi.org/10.3390/info11040228 - 20 Apr 2020
Viewed by 632
Abstract
Digitalization affects the relation between human agents and technological objects. This paper looks at digital behavior change technologies (BCT) from a deontological perspective. It identifies three moral requirements that are relevant for ethical approaches in the tradition of Kantian deontology: epistemic rationalism, motivational rationalism and deliberational rationalism. It argues that traditional Kantian ethics assumes human ‘subjects’ to be autonomous agents, whereas ‘objects’ are mere passive tools. Digitalization, however, challenges this Cartesian subject-object dualism: digital technologies become more and more autonomous and take on agency. Similarly, human subjects can outsource agency and will-power to technologies. In addition, our intersubjective relations are being more and more shaped by digital technologies. The paper therefore re-examines the three categories ‘subject’, ‘object’ and ‘intersubjectivity’ in light of digital BCTs and suggests deontological guidelines for digital objects, digital subjects and a digitally mediated intersubjectivity, based on a re-examination of the requirements of epistemic, motivational and deliberational rationalism. Full article
(This article belongs to the Special Issue The Future of Human Digitization)
Open Access Article
The Transition from Natural/Traditional Goods to Organic Products in an Emerging Market
Information 2020, 11(4), 227; https://doi.org/10.3390/info11040227 - 19 Apr 2020
Viewed by 627
Abstract
The consumption of natural, green, organic products represents an increasingly important subject for contemporary society, organizations, consumers and researchers. Demographic and cultural factors, traditions and consumption habits, along with the individual desire to adopt a healthy lifestyle in accordance with principles of sustainability and environmental protection are relevant vectors in the search, choice and consumption of green products. Producers and retailers have identified the interest of modern consumers, introducing a varied range of green grocery and non-food products to match expectations and needs. Using the case study method, this paper highlights the transition of the organic market in an emerging European country: Romania. During the era of state economy, organic and natural products were interchangeable, but after liberalization of the market, the rise of the organic sector began with the establishment of inspection and certification bodies, establishment of procedures, and the appearance of specialized agricultural farms, processors and sellers. Consumers understood soon enough the advantages and benefits of organic products and a healthy lifestyle, and the market for organic products has been developing steadily. We show the current state of development and discuss its evolution, outlining the different market statistics, and making recommendations regarding future development possibilities. Full article
(This article belongs to the Special Issue Green Marketing)
Open Access Article
A Hierarchical Decision-Making Method with a Fuzzy Ant Colony Algorithm for Mission Planning of Multiple UAVs
Information 2020, 11(4), 226; https://doi.org/10.3390/info11040226 - 19 Apr 2020
Viewed by 614
Abstract
Unmanned aerial vehicles (UAVs) have received an unprecedented surge of interest worldwide in recent years. This paper investigates the specific problem of cooperative mission planning for multiple UAVs on the battlefield from a hierarchical decision-making perspective. From the perspective of the actual mission planning issue, the two key problems to be solved in UAV collaborative mission planning are mission allocation and route planning. In this paper, both of these problems are taken into account via a hierarchical decision-making model. Firstly, we use a target clustering algorithm to divide the original targets into target subgroups, where each target subgroup contains multiple targets. Secondly, a fuzzy ant colony algorithm is used to calculate the global path between target subgroups for a single target group. Thirdly, a fuzzy ant colony algorithm is also used to calculate the local path between multiple targets within a single target subgroup. After three levels of decision-making, the complete path for multiple UAVs can be obtained. In order to improve the efficiency of collaborative tasks between different types of UAVs, a cooperative communication strategy is developed, which can reduce the number of UAVs performing tasks. Finally, experimental results demonstrate the effectiveness of the proposed cooperative mission planning and cooperative communication strategy for multiple UAVs. Full article
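The hierarchical scheme (cluster targets into subgroups, then plan a route over the subgroups) can be sketched as follows. A greedy nearest-neighbor tour stands in for the fuzzy ant colony stage, and the target coordinates are invented; this only illustrates the decomposition, not the paper's algorithm.

```python
from math import dist

def cluster_targets(targets, radius):
    """Greedily group targets whose mutual distance is within `radius`."""
    clusters = []
    for t in targets:
        for c in clusters:
            if all(dist(t, u) <= radius for u in c):
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters

def centroid(cluster):
    xs, ys = zip(*cluster)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def nearest_neighbor_route(points, start=(0.0, 0.0)):
    """Visit all points greedily from `start` (stand-in for the ACO stage)."""
    route, current, left = [], start, list(points)
    while left:
        nxt = min(left, key=lambda p: dist(current, p))
        left.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

# Hypothetical target positions on a 2D battlefield map.
targets = [(1, 1), (1.5, 1.2), (8, 8), (8.4, 7.9), (4, 0.5)]
clusters = cluster_targets(targets, radius=1.0)
route = nearest_neighbor_route([centroid(c) for c in clusters])
print(f"{len(clusters)} target subgroups, global route: {route}")
```

The same routing step would then be repeated inside each subgroup, mirroring the paper's third decision level.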
Open Access Article
Digital Media: Empowerment and Equality
Information 2020, 11(4), 225; https://doi.org/10.3390/info11040225 - 18 Apr 2020
Viewed by 595
Abstract
This study investigated the use of digital media, specifically social media technologies, in the workplace in Taiwan. The data for this study were collected through an online survey. Participants responded to questions asking whether social technologies could be a source of empowerment, leading to equality. Respondents included female and male employees. The findings reveal that both genders use social technology platforms for business support, experience benefits, and believe that these technologies could provide empowerment for success. Detailed results are reported in this paper, including a comparative analysis. The differences between women and men using Facebook and YouTube were significant. Women in Taiwan have a higher awareness of the benefits of social technologies, specifically Facebook, when used for business support and empowerment. This paper reveals a comparison between the attitudes of women and men when using social technologies and investigates the realization of the economic empowerment component. Full article
Open Access Article
Adversarial Hard Attention Adaptation
Information 2020, 11(4), 224; https://doi.org/10.3390/info11040224 - 18 Apr 2020
Viewed by 555
Abstract
Domain adaptation is critical for transferring invaluable source domain knowledge to the target domain. In this paper, for a particular visual attention model, namely hard attention, we consider adapting the learned hard attention to the unlabeled target domain. To tackle this kind of hard attention adaptation, a novel adversarial reward strategy is proposed to train the policy of the target domain agent. In this adversarial training framework, the target domain agent competes with a discriminator that takes the attention features generated by both domain agents as input and tries its best to distinguish them; thus, the target domain policy learns to align the local attention features with their source domain counterparts. We evaluated our model on benchmark cross-domain tasks, such as the centered digits datasets and the enlarged non-centered digits datasets. The experimental results show that our model outperforms ADDA and other existing methods. Full article
(This article belongs to the Special Issue Machine Learning on Scientific Data and Information)
Open Access Article
Image Aesthetic Assessment Based on Latent Semantic Features
Information 2020, 11(4), 223; https://doi.org/10.3390/info11040223 - 17 Apr 2020
Viewed by 589
Abstract
Image aesthetic evaluation refers to the subjective aesthetic evaluation of images. Computational aesthetics has attracted wide attention due to the limitations of subjective evaluation. To address the problem that existing methods for evaluating image aesthetic quality extract only low-level image features, which correlate poorly with human subjective perception, this paper proposes an aesthetic evaluation model based on latent semantic features. The aesthetic features of images are extracted by superpixel segmentation based on weighted-density POIs (Points of Interest), and include semantic features, texture features, and color features. These features are mapped to feature words by LLC (Locality-constrained Linear Coding), and latent semantic features are then extracted using LDA (Latent Dirichlet Allocation). Finally, an SVM classifier is used to establish the classification prediction model of image aesthetics. Experimental results on the AVA dataset show that the latent-semantic feature coding proposed in this paper improves the adaptability of the image aesthetic prediction model, and its correlation with human subjective perception reaches 83.75%. Full article
(This article belongs to the Section Information Processes)
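The pipeline's structure (visual-word histograms, then latent features, then a classifier) can be sketched on synthetic data. Here a truncated SVD plays the role of the paper's LDA step and a nearest-centroid rule stands in for the SVM, so this is only a structural illustration, not the paper's method; the histograms are randomly generated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bag-of-visual-words histograms (rows: images, cols: feature words),
# standing in for the LLC-coded superpixel features described in the abstract.
high_aesthetic = rng.poisson(lam=[8, 7, 1, 1], size=(20, 4))
low_aesthetic = rng.poisson(lam=[1, 1, 8, 7], size=(20, 4))
X = np.vstack([high_aesthetic, low_aesthetic]).astype(float)
y = np.array([1] * 20 + [0] * 20)

# Latent features via truncated SVD (an LSA-style stand-in for the LDA step).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Z = U[:, :2] * s[:2]  # 2-dimensional latent representation per image

# Nearest-centroid classifier (a stand-in for the paper's SVM).
centroids = {c: Z[y == c].mean(axis=0) for c in (0, 1)}
pred = np.array([min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))
                 for z in Z])
print(f"training accuracy: {(pred == y).mean():.0%}")
```

Even this crude latent representation separates the two synthetic classes well, which is the intuition behind moving from raw feature words to latent semantics.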
Open Access Feature Paper Article
One Archaeology: A Manifesto for the Systematic and Effective Use of Mapped Data from Archaeological Fieldwork and Research
Information 2020, 11(4), 222; https://doi.org/10.3390/info11040222 - 17 Apr 2020
Viewed by 761
Abstract
The Infrastructure for Spatial Information in Europe (INSPIRE) Directive (2007) requires public organisations across Europe to share environmentally-related spatial datasets to support decision making and management of the environment. Despite the environmental focus of INSPIRE, it offers limited guidance for archaeological datasets. Most primary data is created outside, but ultimately curated within, the public sector. As spatial evidence from fieldwork activities is not considered by the Directive, it overlooks a range of barriers to sharing data, such as project-based fieldwork, a lack of data standards, and formatting and licencing variations. This paper submits that these challenges are best addressed through the formalised management of primary research data through an archaeological Spatial Data Infrastructure (SDI). SDIs deliver more efficient data management and release economic value by saving time and money. Better stewardship of archaeological data will also lead to more informed research and stewardship of the historic environment. ARIADNE already provides a digital infrastructure for research data, but the landscape and spatial component has been largely overlooked. However, rather than developing a separate solution, the full potential of spatial data from archaeological research can and should be realised through ARIADNE. Full article
(This article belongs to the Special Issue Digital Humanities)
Open Access Article
Deep Homography for License Plate Detection
Information 2020, 11(4), 221; https://doi.org/10.3390/info11040221 - 17 Apr 2020
Viewed by 545
Abstract
The orientation of plate images is one of the factors that influence the accuracy of license plate recognition. In particular, characters are harder to detect and recognize in tilted plate images than in aligned ones. To this end, rectifying plates in a preprocessing step is essential for improving performance. We propose deep models to estimate the four corner coordinates of tilted plates. Since the predicted corners can then be used to rectify plate images, they can help improve character recognition in plate recognition systems. The main contributions of this work are a set of open-structured hybrid networks to predict corner positions and a novel loss function that combines pixel-wise differences with position-wise errors, producing performance improvements. In experiments using proprietary plate images, one of the proposed models produces a 3.1% improvement over the established warping method. Full article
(This article belongs to the Section Artificial Intelligence)
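Once four corners are predicted, rectification reduces to estimating a homography that maps the tilted quadrilateral onto an axis-aligned rectangle. A minimal direct-linear-transform sketch with NumPy is shown below; the corner coordinates are hypothetical, and production systems would typically use a library routine such as OpenCV's perspective transform.

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: 3x3 H with dst ~ H @ src for 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of A (last right-singular vector) holds H up to scale.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def warp_point(H, p):
    """Apply H to a 2D point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Hypothetical predicted corners of a tilted plate (clockwise from top-left)
# and the axis-aligned target rectangle used for rectification.
tilted = [(12.0, 40.0), (210.0, 18.0), (218.0, 80.0), (20.0, 105.0)]
target = [(0.0, 0.0), (200.0, 0.0), (200.0, 60.0), (0.0, 60.0)]

H = homography(tilted, target)
print(np.round(warp_point(H, tilted[2]), 3))  # maps near the target corner (200, 60)
```

Warping every pixel through `H` (or its inverse) yields the rectified plate image fed to the character recognizer.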
Open Access Article
The Usage of Smartphone and Mobile Applications from the Point of View of Customers in Poland
Information 2020, 11(4), 220; https://doi.org/10.3390/info11040220 - 17 Apr 2020
Viewed by 561
Abstract
The main objective of this article was to identify the conditions for the use of smartphones and mobile applications in Poland in the second half of 2018. The scope of the present analysis was limited to a selected sample of more than 470 respondents, and it examined the group of the most active users of smartphones and mobile applications. The author adopted the CAWI (computer-assisted web interview) method, which was previously verified on a randomly selected pilot sample. The obtained results were compared with the findings of other studies. They indicated that users of smartphones and mobile applications in Poland do not differ in their assessments from users in Europe and around the world. In this context, the key implication for researchers is the identified level of development of the use of smartphones and mobile applications in Poland at the end of 2018. The main limitation of the research was the selection of the research sample, which consisted only of members of the academic community. This article aims to fill a gap in terms of the quantitative and qualitative methods that are applied to examine the use of mobile devices and mobile software. At the same time, this study creates the foundations for further research on intercultural differences. It is important to note that the present research sample needs to be extended beyond the academic community for the research results to be fully generalized. Full article
Open Access Article
Task-Oriented Muscle Synergy Extraction Using An Autoencoder-Based Neural Model
Information 2020, 11(4), 219; https://doi.org/10.3390/info11040219 - 17 Apr 2020
Viewed by 843
Abstract
The growing interest in wearable robots opens the challenge of developing intuitive and natural control strategies. Among several human–machine interaction approaches, myoelectric control consists of decoding the motor intention from muscular activity (or EMG signals) with the aim of driving prosthetic or assistive robotic devices accordingly, thus establishing an intimate human–machine connection. In this scenario, bio-inspired approaches, e.g., synergy-based controllers, have been revealed to be the most robust. However, the synergy-based myo-controllers already proposed in the literature consider muscle patterns that are computed considering only the total variance reconstruction rate of the EMG signals, without taking into account the performance of the controller in the task (or application) space. In this work, extending a previous study, the authors present an autoencoder-based neural model able to extract muscle synergies for motion intention detection while optimizing the task performance in terms of force/moment reconstruction. The proposed neural topology has been validated with EMG signals acquired from the main upper limb muscles during planar isometric reaching tasks performed in a virtual environment while wearing an exoskeleton. The presented model has been compared with the non-negative matrix factorization algorithm (i.e., the most used approach in the literature) in terms of muscle synergy extraction quality, and with three techniques already presented in the literature in terms of the goodness of the predicted shoulder and elbow moments. The results of the experimental comparisons have shown that the proposed model outperforms the state-of-the-art synergy-based joint moment estimators at the expense of the quality of the EMG signal reconstruction.
These findings demonstrate that a trade-off between the capability of the extracted muscle synergies to better describe the EMG signal variability and the task performance in terms of force reconstruction can be achieved. The results of this study might open new horizons on synergy extraction methodologies and optimized synergy-based myo-controllers and, perhaps, reveal useful hints about their origin. Full article
(This article belongs to the Special Issue Computational Sport Science and Sport Analytics)

Open AccessArticle
TEEDA: An Interactive Platform for Matching Data Providers and Users in the Data Marketplace
Information 2020, 11(4), 218; https://doi.org/10.3390/info11040218 - 16 Apr 2020
Viewed by 613
Abstract
Improvements in Web platforms for data exchange and trading are creating more opportunities for users to obtain data from providers in different domains. However, current data exchange platforms are limited to unilateral information provision, from data providers to users; there are insufficient means for data providers to learn what kinds of data users desire and for what purposes. In this paper, we propose and discuss description items for sharing users’ calls for data as data requests in the data marketplace. We also discuss structural differences between data requests and providable data in terms of their variables, as well as the possibilities of data matching. In this study, we developed an interactive platform, named “treasuring every encounter of data affairs” (TEEDA), to facilitate matching and interactions between data providers and users; its basic features are described in this paper. From experiments, we found the same distributions of variable frequency but different distributions of the number of variables in each piece of data, both of which are important factors to consider when discussing data matching in the data marketplace. Full article
(This article belongs to the Special Issue CDEC: Cross-disciplinary Data Exchange and Collaboration)
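The matching discussed above hinges on comparing the variables named in a data request with those present in providable data. A minimal sketch of such variable-based matching (the function names and the Jaccard scoring are our own illustrative choices, not the actual TEEDA implementation):

```python
def variable_overlap(request_vars, offer_vars):
    """Jaccard similarity between the variables named in a data request
    and those present in a providable dataset."""
    req, off = set(request_vars), set(offer_vars)
    if not req and not off:
        return 0.0
    return len(req & off) / len(req | off)

def rank_offers(request_vars, offers):
    """Rank providable datasets by variable overlap with a request.
    `offers` maps an offer id to its list of variables."""
    scored = [(oid, variable_overlap(request_vars, vs)) for oid, vs in offers.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

For example, a request for `["age", "sex"]` would rank an offer containing both variables above one sharing none of them.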

Open AccessArticle
Security and Privacy of QR Code Applications: A Comprehensive Study, General Guidelines and Solutions
Information 2020, 11(4), 217; https://doi.org/10.3390/info11040217 - 16 Apr 2020
Viewed by 580
Abstract
The widespread use of smartphones is boosting the market take-up of dedicated applications, among them barcode scanning applications. Several barcode scanners are available but show security and privacy weaknesses. In this paper, we provide a comprehensive security and privacy analysis of 100 barcode scanner applications and categorize them based on the security features they actually provide and on their popularity. According to our analysis, some apps provide security services, including URL checking and cryptographic solutions, and other apps safeguard user privacy by supporting least-privilege permission lists. However, there are also apps that deceive users by providing security and privacy protections that are weaker than claimed. From the analysis, we extracted a set of recommendations that developers should follow in order to build usable, secure and privacy-friendly barcode scanning applications. Based on these recommendations, we implemented BarSec Droid, a proof-of-concept Android application for barcode scanning. We then conducted a user experience test on our app and compared it with DroidLa, the most popular/secure QR code reader app. The results show that our app is easy to use, effective and efficient, and conveys security trust. Full article
(This article belongs to the Special Issue Cyberspace Security, Privacy & Forensics)
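The recommendations mentioned above include checking URLs decoded from barcodes before opening them. A minimal illustrative sketch of such a check (the heuristics and the shortener list are our own assumptions, not the rules implemented in BarSec Droid; real scanners typically also query online reputation services):

```python
from urllib.parse import urlparse

# Illustrative list only; a real app would use a maintained database.
SUSPICIOUS_SHORTENERS = {"bit.ly", "tinyurl.com", "goo.gl"}

def check_scanned_url(url):
    """Return a list of warnings for a URL decoded from a barcode."""
    warnings = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        warnings.append("connection is not encrypted (no HTTPS)")
    if parsed.hostname in SUSPICIOUS_SHORTENERS:
        warnings.append("shortened URL hides the final destination")
    if parsed.username is not None:
        warnings.append("URL embeds credentials, a common phishing trick")
    return warnings
```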

Open AccessReview
In-Band Full Duplex Wireless LANs: Medium Access Control Protocols, Design Issues and Their Challenges
Information 2020, 11(4), 216; https://doi.org/10.3390/info11040216 - 16 Apr 2020
Viewed by 555
Abstract
An in-band full duplex wireless medium access control (MAC) protocol is essential to enable the higher layers of the protocol stack to exploit the maximum benefit from physical layer full duplex technology, which has the potential to double the capacity of a wireless network. Unlike in a half duplex wireless local area network, a full duplex MAC protocol has to deal with several unique issues and challenges that arise because of the dynamic nature of the wireless environment. In this paper, we discuss several existing full duplex MAC protocols and present qualitative comparisons among them. Inter-client interference (ICI) is a hindrance to achieving double spectral efficiency on the in-band full duplex wireless medium; we classify existing solutions to the ICI problem and compare them with respect to their approaches, advantages and disadvantages. We also identify and discuss several issues and challenges in designing a full duplex MAC protocol. The results of the qualitative comparisons of the various full duplex MAC protocols may be applied to the design of new protocols, and researchers may find the identified issues and challenges helpful in solving various problems of full duplex MAC protocols. Full article
(This article belongs to the Section Information and Communications Technology)

Open AccessArticle
Adoption of Sustainable Technology in the Malaysian SMEs Sector: Does the Role of Government Matter?
Information 2020, 11(4), 215; https://doi.org/10.3390/info11040215 - 16 Apr 2020
Viewed by 547
Abstract
This paper looks at the role of government as a novel dimension in the adoption of sustainable technology by small and medium enterprises (SMEs) in Malaysia. This determinant stems from the fact that, in many transitional economies, private sector organizations encounter resource constraints as a barrier to innovation adoption. This is especially the case with sustainable technology incorporated into business operations. Therefore, third-party intervention in the adoption process becomes inevitable, and it is considered to make the adoption process more effective. A government has both the power and the resources to play a pivotal role in the adoption of sustainable technology. Given this state of affairs, this study examines the government’s role as a critical factor in achieving smooth and efficient adoption. The theory of reasoned action (TRA) serves as the theoretical underpinning of this study. The data were collected from a sample of 263 SMEs in Malaysia, and partial least squares structural equation modeling (PLS-SEM) was used to analyze them. It was found that government policies and subsidies are critical in encouraging the adoption of sustainable technology in Malaysia. This paper discusses the implications for government-driven adoption of sustainable technology, identifies the limitations of the analysis, and outlines avenues of future research in this very relevant and expanding field. Full article
(This article belongs to the Section Information and Communications Technology)

Open AccessArticle
Iterative Truncated Unscented Particle Filter
Information 2020, 11(4), 214; https://doi.org/10.3390/info11040214 - 16 Apr 2020
Viewed by 517
Abstract
The particle filter method is a basic tool for inference on nonlinear, partially observed Markov process models. Recently, it has been applied to constrained nonlinear filtering problems, since incorporating constraints can improve state estimation performance compared with unconstrained estimation. This paper introduces an iterative truncated unscented particle filter, a state estimation method that handles inequality constraints. In this method, the proposal distribution is generated by an iterative unscented Kalman filter supplemented with a designed truncation method to satisfy the constraints. The iterative unscented Kalman filter and the truncation method are described in detail and incorporated into the particle filter framework. Experimental results show that the proposed algorithm is superior to other similar algorithms. Full article
(This article belongs to the Section Artificial Intelligence)
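The truncation idea described above can be illustrated on a scalar state with interval constraints. This toy sketch (our own simplification, not the paper's full iterative UKF proposal) clips constraint-violating particles back into the feasible interval and renormalises the weights, so the resulting estimate is guaranteed to satisfy the constraint:

```python
def truncate_particles(particles, weights, lower, upper):
    """Enforce an interval constraint lower <= x <= upper on a scalar
    particle set by clipping violating particles and renormalising weights."""
    clipped = [min(max(x, lower), upper) for x in particles]
    total = sum(weights)
    normalised = [w / total for w in weights]
    return clipped, normalised

def weighted_mean(particles, weights):
    """Constrained state estimate as the weighted particle mean."""
    return sum(x * w for x, w in zip(particles, weights))
```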

Open AccessArticle
Anti-Shake HDR Imaging Using RAW Image Data
Information 2020, 11(4), 213; https://doi.org/10.3390/info11040213 - 16 Apr 2020
Viewed by 572
Abstract
Camera shake and object movement can cause output images to suffer from blurring, noise and other artifacts, leading to poor image quality and low dynamic range. Compared with JPEG images, raw images contain minimally processed data from the image sensor. In this paper, an anti-shake high-dynamic-range imaging method is presented that is more robust to camera motion than previous techniques. An algorithm based on information entropy is employed to choose a reference image from the raw image sequence. To further improve the robustness of the proposed method, the Oriented FAST and Rotated BRIEF (ORB) algorithm is adopted to register the inputs, and a simple Laplacian pyramid fusion method is implemented to generate the high-dynamic-range image. Additionally, a large dataset of 435 variously exposed image sequences, including the corresponding JPEG image sequences, was collected to test the effectiveness of the proposed method. The experimental results illustrate that the proposed method achieves better anti-shake performance and preserves more detail in real scene images than traditional algorithms. Furthermore, the proposed method is suitable for extreme-exposure image pairs, can be applied in binocular vision systems to acquire high-quality real scene images, and has lower algorithmic complexity than deep learning-based fusion methods. Full article
(This article belongs to the Special Issue Computational Intelligence for Audio Signal Processing)
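The reference-selection step described above scores candidate frames by information entropy. A toy sketch for flat lists of grayscale intensities (our own simplification; the paper applies the idea to raw sensor data):

```python
from math import log2

def image_entropy(pixels, levels=256):
    """Shannon entropy (in bits) of an intensity histogram; a frame with
    higher entropy carries more information and is a natural reference."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * log2(c / n) for c in hist if c)

def pick_reference(frames):
    """Index of the frame with maximal entropy in a burst of frames."""
    return max(range(len(frames)), key=lambda i: image_entropy(frames[i]))
```

A uniformly gray frame has zero entropy, while a frame with varied intensities scores higher and is picked as the reference.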

Open AccessArticle
A Systematic Exploration of Deep Neural Networks for EDA-Based Emotion Recognition
Information 2020, 11(4), 212; https://doi.org/10.3390/info11040212 - 15 Apr 2020
Viewed by 537
Abstract
Subject-independent emotion recognition based on physiological signals has become a research hotspot. Previous research has shown that electrodermal activity (EDA) signals are an effective data resource for emotion recognition. Benefiting from their great representation ability, an increasing number of deep neural networks have been applied to emotion recognition; they can be classified as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), or combinations of these (CNN+RNN). However, there has been no systematic research on the predictive power and configurations of different deep neural networks for this task. In this work, we systematically explore the configurations and performance of three adapted deep neural networks: ResNet, LSTM, and a hybrid ResNet-LSTM. Our experiments use the subject-independent method to evaluate three-class classification on the MAHNOB dataset. The results show that the CNN model (ResNet) reaches better accuracy and F1 score than the RNN model (LSTM) and the CNN+RNN model (hybrid ResNet-LSTM). Extensive comparisons also reveal that all three deep neural networks with EDA data outperform previous models with handcrafted features on emotion recognition, which demonstrates the great potential of the end-to-end DNN approach. Full article
(This article belongs to the Section Information Processes)

Open AccessArticle
Investigation of Spoken-Language Detection and Classification in Broadcasted Audio Content
Information 2020, 11(4), 211; https://doi.org/10.3390/info11040211 - 15 Apr 2020
Viewed by 508
Abstract
The current paper focuses on the investigation of spoken-language classification in audio broadcasting content. The approach reflects a real-world scenario encountered in modern media/monitoring organizations, where semi-automated indexing/documentation is deployed and could be facilitated by the proposed language detection preprocessing. Multilingual audio recordings of specific radio streams are formed into a small dataset, which is used for the adaptive classification experiments, without seeking, at this step, a generic language recognition model. Specifically, hierarchical discrimination schemes are followed to separate voice signals before classifying the spoken languages. Supervised and unsupervised machine learning are utilized at various windowing configurations to test the validity of our hypothesis. Besides the analysis of the achieved recognition scores (partial and overall), late integration models are proposed for the semi-automatic annotation of new audio recordings. Hence, data augmentation mechanisms are offered, aiming at gradually formulating a Generic Audio Language Classification Repository. This database constitutes a program-adaptive collection that, besides the self-indexing metadata mechanisms, could facilitate generic language classification models in the future through state-of-the-art techniques such as deep learning. This approach matches the investigatory inception of the project, which seeks indicators that could be applied in a second step with a larger dataset and/or an already pre-trained model, with the purpose of delivering overall results. Full article

Open AccessArticle
Recognizing Indonesian Acronym and Expansion Pairs with Supervised Learning and MapReduce
Information 2020, 11(4), 210; https://doi.org/10.3390/info11040210 - 15 Apr 2020
Viewed by 586
Abstract
During the previous decades, the intelligent identification of acronym and expansion pairs from a large corpus has garnered considerable research attention, particularly in the fields of text mining, entity extraction, and information retrieval. Herein, we present an improved approach to accurately recognize acronym and expansion pairs in a large Indonesian corpus. Generally, an acronym can be either a combination of uppercase letters or a sequence of speech sounds (syllables). Our proposed approach can be computationally divided into four steps: (1) acronym candidate identification; (2) acronym and expansion pair collection; (3) feature generation; and (4) acronym and expansion pair recognition using supervised learning techniques. Further, we introduce eight numerical features and evaluate their effectiveness in representing the acronym and expansion pairs based on precision, recall, and F-measure. Furthermore, we compare the k-nearest neighbors (K-NN), support vector machine (SVM), and bidirectional encoder representations from transformers (BERT) algorithms in terms of accurate acronym and expansion pair classification. The experimental results indicate that the SVM polynomial model that considers eight features exhibits the highest accuracy (97.93%), surpassing those of the SVM polynomial model that considers five features (90.45%), the K-NN algorithm with k = 3 that considers eight features (96.82%), the K-NN algorithm with k = 3 that considers five features (95.66%), the BERT-Base model (81.64%), and the BERT-Base Multilingual Cased model (88.10%). Moreover, we analyze the performance of the Hadoop technology using various numbers of data nodes to identify the acronym and expansion pairs and obtain their feature vectors. The results reveal that a Hadoop cluster containing a large number of data nodes is faster than one with fewer data nodes when processing from ten million to one hundred million pairs of acronyms and expansions. Full article
(This article belongs to the Section Information and Communications Technology)
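Step (1) above, acronym candidate identification, can be illustrated for the uppercase-letter case with a simple initials rule (our own toy heuristic; the paper's approach also covers syllable-based Indonesian acronyms and relies on eight learned features rather than a single rule):

```python
def initials_match(acronym, expansion):
    """True when the letters of `acronym` equal the initial letters of the
    words in `expansion`, a simple uppercase-acronym candidate check."""
    letters = [c for c in acronym if c.isalpha()]
    words = expansion.split()
    if len(letters) != len(words):
        return False
    return all(l.lower() == w[0].lower() for l, w in zip(letters, words))
```

For instance, the Indonesian pair ("KPK", "Komisi Pemberantasan Korupsi") passes this check, while a mismatched expansion does not.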

Open AccessArticle
The Effect of Augmented Reality on Students’ Learning Performance in STEM Education
Information 2020, 11(4), 209; https://doi.org/10.3390/info11040209 - 15 Apr 2020
Viewed by 546
Abstract
The effect of augmented reality (AR), one of the most popular 3D visualization and modelling technologies with haptic and touch feedback possibilities, is analysed herein, including a specific solution incorporating augmented reality. A case study of delivering STEM (science, technology, engineering, and mathematics) content using this tool at a secondary school in Sofia is presented, and the experience gained over one school year of using these facilities in a STEM enrichment program is examined. Full article
(This article belongs to the Special Issue Selected Papers from ESM 2019)

Open AccessArticle
Forecasting Appliances Failures: A Machine-Learning Approach to Predictive Maintenance
Information 2020, 11(4), 208; https://doi.org/10.3390/info11040208 - 14 Apr 2020
Viewed by 610
Abstract
Heating appliances consume approximately 48% of the energy spent on household appliances every year, and a malfunctioning device can increase this cost even further. Thus, there is a need for methods that can identify an appliance’s malfunctions and eventual failures before they occur. This is only possible with a combination of data acquisition, analysis and prediction/forecasting. This paper presents an infrastructure that supports these capabilities and was deployed for failure detection in boilers, making it possible to forecast faults and errors. We also present our initial predictive maintenance models based on the collected data. Full article
(This article belongs to the Special Issue Machine Learning for Big Data--Big Data Service 2019)

Open AccessArticle
Ensemble Deep Learning Models for Heart Disease Classification: A Case Study from Mexico
Information 2020, 11(4), 207; https://doi.org/10.3390/info11040207 - 14 Apr 2020
Viewed by 786
Abstract
Heart diseases rank highly among the leading causes of mortality in the world. They have various types, including vascular, ischemic, and hypertensive heart disease. A large number of medical features are reported for patients in Electronic Health Records (EHR) that allow physicians to diagnose and monitor heart disease. We collected a dataset from Medica Norte Hospital in Mexico that includes 800 records and 141 indicators, such as age, weight, glucose, blood pressure rate, and clinical symptoms. The distribution of the collected records across the different types of heart disease is very unbalanced: 17% of the records have hypertensive heart disease, 16% ischemic heart disease, 7% mixed heart disease, and 8% valvular heart disease. Herein, we propose an ensemble-learning framework of different neural network models together with a method of aggregating random under-sampling. To improve the performance of the classification algorithms, we implement a data preprocessing step with feature selection. Experiments were conducted with unidirectional and bidirectional neural network models, and the results showed that an ensemble classifier combining a BiLSTM or BiGRU model with a CNN model had the best classification performance, with accuracy and F1-scores between 91% and 96% for the different types of heart disease. These results are competitive and promising for this heart disease dataset. We showed that an ensemble-learning framework based on deep models can overcome the problem of classifying an unbalanced heart disease dataset, and our proposed framework can lead to highly accurate models that are adapted for real clinical data and diagnostic use. Full article
(This article belongs to the Special Issue Artificial Intelligence and Decision Support Systems)
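Two building blocks of the framework above, random under-sampling of an unbalanced dataset and the combination of per-model predictions, can be sketched as follows (a minimal illustration with invented names, not the paper's exact pipeline):

```python
import random
from collections import Counter

def random_undersample(records, labels, seed=0):
    """Balance a class-skewed dataset by sampling every class down to the
    size of the rarest class; an ensemble trains one model per such sample."""
    rng = random.Random(seed)
    by_class = {}
    for r, y in zip(records, labels):
        by_class.setdefault(y, []).append(r)
    n = min(len(v) for v in by_class.values())
    sampled_records, sampled_labels = [], []
    for y, rs in by_class.items():
        for r in rng.sample(rs, n):
            sampled_records.append(r)
            sampled_labels.append(y)
    return sampled_records, sampled_labels

def majority_vote(predictions):
    """Combine the per-model predictions for one patient by majority vote."""
    return Counter(predictions).most_common(1)[0][0]
```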

Open AccessArticle
A New Evaluation Methodology for Quality Goals Extended by D Number Theory and FAHP
Information 2020, 11(4), 206; https://doi.org/10.3390/info11040206 - 13 Apr 2020
Viewed by 594
Abstract
The evaluation of quality goals is an important issue in process management and is essentially a multi-attribute decision-making (MADM) problem whose assessment process inevitably involves uncertain information. The two crucial points in an MADM problem are obtaining the weights of the attributes and handling the uncertain information. D number theory is a new mathematical tool for dealing with uncertain information and is an extension of evidence theory. The fuzzy analytic hierarchy process (FAHP) provides a hierarchical way to model MADM problems, in which pairwise comparisons among attributes are used to obtain the attribute weights. FAHP uses a triangular fuzzy number rather than a crisp number to represent the evaluation information, which fully accounts for the hesitation involved in giving an evaluation. Inspired by the features of D number theory and FAHP, a D-FAHP method is proposed in this paper to evaluate quality goals. Within the proposed method, FAHP is used to obtain the weight of each attribute, and the integration property of D number theory is used to fuse information. A numerical example is presented to demonstrate the effectiveness of the proposed method, and some necessary discussions are provided to illustrate its advantages. Full article
(This article belongs to the Special Issue Artificial Intelligence and Decision Support Systems)
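A small sketch of the FAHP ingredient described above: turning triangular fuzzy weights into crisp, normalised attribute weights via the centroid (one common defuzzification choice made here for illustration; the paper's D-number fusion step is not shown):

```python
def defuzzify(tfn):
    """Centroid of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

def crisp_weights(fuzzy_weights):
    """Defuzzify triangular fuzzy attribute weights and normalise to sum to 1."""
    crisp = [defuzzify(w) for w in fuzzy_weights]
    total = sum(crisp)
    return [c / total for c in crisp]
```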

Open AccessArticle
Web Radio Automation for Audio Stream Management in the Era of Big Data
Information 2020, 11(4), 205; https://doi.org/10.3390/info11040205 - 11 Apr 2020
Viewed by 659
Abstract
Radio is evolving in a changing digital media ecosystem, and audio-on-demand has shaped a landscape of big unstructured audio data available online. In this paper, a framework for knowledge extraction is introduced to improve the discoverability and enrichment of the provided content. A web application for live radio production and streaming is developed. The application offers typical live mixing and broadcasting functionality while performing real-time annotation as a background process by logging user operation events. For the needs of a typical radio station, a supervised speaker classification model based on a convolutional neural network (CNN) architecture is trained for the recognition of 24 known speakers. Since not all speakers in radio shows are known, a CNN-based speaker diarization method is also proposed, in which the trained model is used to extract fixed-size identity d-vectors, and several clustering algorithms are evaluated with the d-vectors as input. The supervised speaker recognition model for 24 speakers scores an accuracy of 88.34%, while unsupervised speaker diarization scores a maximum accuracy of 87.22%, as tested on an audio file with speech segments from three unknown speakers. The results are considered encouraging regarding the applicability of the proposed methodology. Full article
(This article belongs to the Section Information Processes)
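The diarization step described above groups fixed-size d-vectors by speaker identity. A toy sketch of one assignment step using cosine similarity (our own minimal illustration; the paper evaluates several full clustering algorithms on the d-vectors):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def assign_speakers(dvectors, centroids):
    """Label each speech segment's d-vector with the index of the most
    similar speaker centroid (one clustering-style assignment step)."""
    return [max(range(len(centroids)), key=lambda k: cosine(v, centroids[k]))
            for v in dvectors]
```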
