Information, Volume 13, Issue 1 (January 2022) – 42 articles

Cover Story: Two-dimensional space embeddings are a popular means to gain insight into high-dimensional data. However, these embeddings suffer from distortions that occur both at the global inter-cluster and the local intra-cluster levels. The former leads to misinterpretation of the distances between the various N–D cluster populations, while the latter hampers the appreciation of their individual shapes and composition, which we call cluster appearance. In this paper, we propose techniques to overcome these limitations by conveying the N–D cluster appearance through N–D-based Scagnostics metrics and a framework inspired by illustrative design. We validated and refined our design choices via a series of user studies.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Context-Aware Collaborative Filtering Using Context Similarity: An Empirical Comparison
Information 2022, 13(1), 42; https://doi.org/10.3390/info13010042 - 17 Jan 2022
Cited by 3 | Viewed by 1766
Abstract
Recommender systems can assist with decision-making by delivering a list of item recommendations tailored to user preferences. Context-aware recommender systems additionally consider context information and adapt the recommendations to different situations. A process of context matching, therefore, enables the system to utilize rating profiles in the matched contexts to produce context-aware recommendations. However, it suffers from the sparsity problem since users may not rate items in various context situations. One of the major solutions to alleviate the sparsity issue is measuring the similarity of contexts and utilizing rating profiles with similar contexts to build the recommendation model. In this paper, we summarize the context-aware collaborative filtering methods using context similarity, and deliver an empirical comparison based on multiple context-aware data sets. Full article
(This article belongs to the Special Issue Information Retrieval, Recommender Systems and Adaptive Systems)
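The context-matching idea summarized above can be illustrated with a toy sketch (hypothetical contexts, ratings, and threshold, not the paper's data sets or exact formulation): ratings given under contexts similar to the target context are weighted by that similarity.

```python
# Illustrative sketch of similarity-weighted context-aware prediction.
# Contexts are hypothetical categorical tuples; similarity is simply the
# fraction of matching context dimensions.

def context_similarity(c1, c2):
    """Fraction of context dimensions (e.g., time, location) that match."""
    return sum(a == b for a, b in zip(c1, c2)) / len(c1)

def predict_rating(target_ctx, rated, min_sim=0.5):
    """Weighted average of ratings whose context is similar to the target."""
    pairs = [(context_similarity(target_ctx, ctx), r)
             for ctx, r in rated
             if context_similarity(target_ctx, ctx) >= min_sim]
    if not pairs:
        return None
    return sum(s * r for s, r in pairs) / sum(s for s, _ in pairs)

# Ratings one user gave to one item under different (time, location) contexts.
rated = [(("weekend", "home"), 5.0),
         (("weekend", "cinema"), 4.0),
         (("weekday", "work"), 2.0)]
print(predict_rating(("weekend", "home"), rated))
```

The similarity threshold `min_sim` plays the role of context matching: with a strict threshold the model falls back on very few ratings (the sparsity problem the abstract mentions), while a looser one borrows ratings from merely similar contexts.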
Article
A Literature Survey of Recent Advances in Chatbots
Information 2022, 13(1), 41; https://doi.org/10.3390/info13010041 - 15 Jan 2022
Cited by 16 | Viewed by 7354
Abstract
Chatbots are intelligent conversational computer systems designed to mimic human conversation and enable automated online guidance and support. The increasing benefits of chatbots have led to their wide adoption by many industries seeking to provide virtual assistance to customers. Chatbots utilise methods and algorithms from two Artificial Intelligence domains: Natural Language Processing and Machine Learning. However, there are many challenges and limitations in their application. In this survey we review recent advances in chatbots where Artificial Intelligence and Natural Language Processing are used, highlight the main challenges and limitations of current work, and make recommendations for future research. Full article
(This article belongs to the Special Issue Natural Language Interface for Smart Systems)
Article
Exploiting an Ontological Model to Study COVID-19 Contagion Chains in Sustainable Smart Cities
Information 2022, 13(1), 40; https://doi.org/10.3390/info13010040 - 14 Jan 2022
Viewed by 1537
Abstract
The COVID-19 pandemic has caused the deaths of millions of people around the world. The scientific community faces a tough struggle to reduce the effects of this pandemic. Several investigations dealing with different perspectives have been carried out. However, it is not easy to find studies focused on COVID-19 contagion chains. A deep analysis of contagion chains may contribute new findings that can be used to reduce the effects of COVID-19. For example, some interesting chains with specific behaviors could be identified and more in-depth analyses could be performed to investigate the reasons for such behaviors. To represent, validate and analyze the information of contagion chains, we adopted an ontological approach. Ontologies are artificial intelligence techniques that have become widely accepted solutions for the representation of knowledge and corresponding analyses. The semantic representation of information by means of ontologies enables the consistency of the information to be checked, as well as automatic reasoning to infer new knowledge. The ontology was implemented in Ontology Web Language (OWL), which is a formal language based on description logics. This approach could have a special impact on smart cities, which are characterized as using information to enhance the quality of basic services for citizens. In particular, health services could take advantage of this approach to reduce the effects of COVID-19. Full article
(This article belongs to the Special Issue Evolution of Smart Cities and Societies Using Emerging Technologies)
Article
Reversing Jensen’s Inequality for Information-Theoretic Analyses
Information 2022, 13(1), 39; https://doi.org/10.3390/info13010039 - 13 Jan 2022
Cited by 2 | Viewed by 1441
Abstract
In this work, we propose both an improvement and extensions of a reverse Jensen inequality due to Wunder et al. (2021). The new proposed inequalities are fairly tight and reasonably easy to use in a wide variety of situations, as demonstrated in several application examples that are relevant to information theory. Moreover, the main ideas behind the derivations turn out to be applicable to generate bounds to expectations of multivariate convex/concave functions, as well as functions that are not necessarily convex or concave. Full article
(This article belongs to the Section Information Theory and Methodology)
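For orientation, the inequality being reversed is the classical Jensen inequality; the sketch below is standard textbook material, not the paper's specific bound (the correction term $\Delta$ here is only a placeholder for the kind of term such results provide).

```latex
% Jensen's inequality: for a convex function f and a random variable X,
f\bigl(\mathbb{E}[X]\bigr) \;\le\; \mathbb{E}\bigl[f(X)\bigr]
\qquad \text{($f$ convex).}
% A *reverse* Jensen inequality complements this with an upper bound
\mathbb{E}\bigl[f(X)\bigr] \;\le\; f\bigl(\mathbb{E}[X]\bigr) + \Delta,
% where the correction term \Delta depends on f and on the spread of X;
% the paper improves and extends bounds of this form.
```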
Article
3D Reconstruction with Coronary Artery Based on Curve Descriptor and Projection Geometry-Constrained Vasculature Matching
Information 2022, 13(1), 38; https://doi.org/10.3390/info13010038 - 13 Jan 2022
Viewed by 1304
Abstract
This paper presents a novel method for vessel matching based on a curve descriptor and projection-geometry constraints. First, an LM (Levenberg–Marquardt) algorithm is proposed to optimize the geometric transformation matrix. Combined with parameter adjustment and the trust-region method, the error between the 3D reconstructed vessel projection and the actual vessel can be minimized. Then, CBOCD (curvature and brightness order curve descriptor) is proposed to indicate the degree of self-occlusion of blood vessels during angiography. Next, the error matrix constructed from the epipolar matching error is used for point-pair matching of the vasculature through dynamic programming. Finally, the recorded vessel radii are used to construct elliptical cross-sections, which are sampled to obtain a point set around the centerline; this point set is then converted to a mesh to reconstruct the vessel surface. The validity and applicability of the proposed methods have been verified through experiments, which show a significant improvement in 3D reconstruction accuracy in terms of average back-projection error. Simultaneously, due to precise point-pair matching, the smoothness of the reconstructed 3D coronary artery is guaranteed. Full article
(This article belongs to the Special Issue Biosignal and Medical Image Processing)
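The dynamic-programming point-pair matching step can be illustrated with a minimal sketch (toy cost matrix, not the authors' implementation): given pairwise matching errors between two ordered centerline point sequences, a monotone alignment of minimal cumulative cost is found, in the spirit of the epipolar-error matrix described above.

```python
# Illustrative sketch: DTW-style dynamic programming over a matching-error
# matrix between two ordered point sequences (one per projection view).

def dp_match(cost):
    """cost[i][j] = matching error between point i of view A and point j of
    view B. Returns the minimal cumulative cost of a monotone alignment."""
    n, m = len(cost), len(cost[0])
    INF = float("inf")
    acc = [[INF] * m for _ in range(n)]
    acc[0][0] = cost[0][0]
    for i in range(n):
        for j in range(m):
            if i == j == 0:
                continue
            best = min(acc[i - 1][j] if i else INF,          # skip a point in A
                       acc[i][j - 1] if j else INF,          # skip a point in B
                       acc[i - 1][j - 1] if i and j else INF)  # match the pair
            acc[i][j] = cost[i][j] + best
    return acc[-1][-1]

# Toy 3x3 error matrix; the cheapest alignment is the diagonal one.
cost = [[0.1, 0.9, 0.8],
        [0.7, 0.2, 0.9],
        [0.8, 0.6, 0.1]]
print(dp_match(cost))  # cumulative cost 0.1 + 0.2 + 0.1
```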
Article
Evaluation of Continuous Power-Down Schemes
Information 2022, 13(1), 37; https://doi.org/10.3390/info13010037 - 13 Jan 2022
Viewed by 1208
Abstract
We consider a power-down system with two states—“on” and “off”—and a continuous set of power states. The system has to respond to requests for service in the “on” state and, after service, the system can power off or switch to any of the intermediate power-saving states. The choice of states determines the cost to power on for subsequent requests. The protocol for requests is “online”, which means that the decision as to which intermediate state (or the off-state) the system will switch has to be made without knowledge of future requests. We model a linear and a non-linear system, and we consider different online strategies, namely piece-wise linear, logarithmic and exponential. We provide results under online competitive analysis, which have relevance for the integration of renewable energy sources into the smart grid. Our analysis shows that while piece-wise linear strategies are not specific to any type of system, logarithmic strategies work well for slack systems, whereas exponential strategies are better suited for busy systems. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2020 & 2021))
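The competitive-analysis flavor of such results is easiest to see in the classical two-state special case (the standard ski-rental argument, not the paper's continuous-state model): the online strategy that powers off after an idle period equal to the power-up cost is 2-competitive against the optimal offline schedule.

```python
# Illustrative sketch: two-state power-down. Staying on costs 1 per time
# unit; powering back on after an "off" period costs beta.

def online_cost(idle, beta):
    """Online threshold strategy: stay on for up to `beta` idle units, then
    power off and pay `beta` again to wake for the next request."""
    return idle if idle < beta else beta + beta

def offline_cost(idle, beta):
    """Optimal offline schedule: knows `idle` in advance, so it powers off
    immediately iff idle >= beta."""
    return min(idle, beta)

beta = 5.0
idle_times = [1.0, 4.0, 5.0, 20.0]
ratios = [online_cost(t, beta) / offline_cost(t, beta) for t in idle_times]
print(max(ratios))  # the worst-case ratio of this strategy is 2
```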
Article
Dual-Hybrid Modeling for Option Pricing of CSI 300ETF
Information 2022, 13(1), 36; https://doi.org/10.3390/info13010036 - 13 Jan 2022
Cited by 1 | Viewed by 1132
Abstract
The reasonable pricing of options can effectively help investors avoid risks and obtain benefits, which plays a very important role in the stability of the financial market. The traditional single option pricing model often fails to meet expectations because of its restrictive assumptions. Combining an economic model with a deep learning model into a hybrid model provides a new way to improve the prediction accuracy of the pricing model. The experimental analysis uses real historical data on about 10,000 sets of CSI 300 ETF options from January to December 2020. Aiming at the prediction of CSI 300 ETF option prices, and based on random forest feature importance, the Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) deep learning model is combined with the classical stochastic volatility Heston model and the stochastic interest rate CIR model. A dual-hybrid pricing model for CSI 300 ETF call and put options is thereby established. The dual-hybrid model and the reference models are further integrated with ridge regression to improve the forecasting effect. The results show that the proposed dual-hybrid pricing model has high accuracy: its prediction accuracy is tens to hundreds of times higher than that of the reference models, and the MSE can be as low as 0.0003. The article provides an alternative method for the pricing of financial derivatives. Full article
(This article belongs to the Special Issue Predictive Analytics and Data Science)
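The ridge-regression integration step can be sketched as follows (hypothetical predictions and prices; the paper's models are far richer). The closed form w = (XᵀX + λI)⁻¹Xᵀy is written out for the two-feature case so no linear-algebra library is needed.

```python
# Illustrative sketch: blending the predictions of two pricing models with
# ridge regression (no intercept, two features, 2x2 inverse by hand).

def ridge_blend(p1, p2, y, lam=0.1):
    """Return weights (w1, w2) minimising ||w1*p1 + w2*p2 - y||^2 + lam*||w||^2."""
    a = sum(x * x for x in p1) + lam                  # (X^T X + lam*I) entries
    b = sum(x * z for x, z in zip(p1, p2))
    d = sum(z * z for z in p2) + lam
    c1 = sum(x * t for x, t in zip(p1, y))            # X^T y entries
    c2 = sum(z * t for z, t in zip(p2, y))
    det = a * d - b * b
    return (d * c1 - b * c2) / det, (a * c2 - b * c1) / det

# Hypothetical option-price predictions from two models and observed prices.
p_model1 = [1.0, 2.0, 3.0]
p_model2 = [1.2, 1.9, 3.1]
observed = [1.1, 2.0, 3.0]
w1, w2 = ridge_blend(p_model1, p_model2, observed)
blended = [w1 * x + w2 * z for x, z in zip(p_model1, p_model2)]
```

The blend typically tracks the observations more closely than either input alone; the λ penalty keeps the weights stable when the two prediction series are strongly correlated, as competing pricing models usually are.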
Article
Mean Received Resources Meet Machine Learning Algorithms to Improve Link Prediction Methods
Information 2022, 13(1), 35; https://doi.org/10.3390/info13010035 - 13 Jan 2022
Cited by 2 | Viewed by 862
Abstract
The analysis of social networks has attracted a lot of attention during the last two decades. These networks are dynamic: new links appear and disappear. Link prediction is the problem of inferring links that will appear in the future from the actual state of the network. We use information from nodes and edges and calculate the similarity between users. The more users are similar, the higher the probability of their connection in the future will be. The similarity metrics play an important role in the link prediction field. Due to their simplicity and flexibility, many authors have proposed several metrics such as Jaccard, AA, and Katz and evaluated them using the area under the curve (AUC). In this paper, we propose a new parameterized method to enhance the AUC value of the link prediction metrics by combining them with the mean received resources (MRRs). Experiments show that the proposed method improves the performance of the state-of-the-art metrics. Moreover, we used machine learning algorithms to classify links and confirm the efficiency of the proposed combination. Full article
(This article belongs to the Collection Knowledge Graphs for Search and Recommendation)
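One of the baseline metrics named above, Jaccard similarity, is easy to sketch on a toy graph (the MRR combination itself is the paper's contribution and is not reproduced here): the more neighbours two nodes share, the higher their predicted chance of linking.

```python
# Illustrative sketch: Jaccard similarity as a link-prediction score on an
# undirected graph represented as adjacency sets.

def jaccard(graph, u, v):
    """|N(u) & N(v)| / |N(u) | N(v)| over the two nodes' neighbour sets."""
    nu, nv = graph[u], graph[v]
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
}
print(jaccard(graph, "a", "d"))  # identical neighbourhoods {b, c} -> 1.0
print(jaccard(graph, "a", "b"))  # shared {c} out of {a, b, c, d} -> 0.25
```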
Article
The Role of Trustworthiness Facets for Developing Social Media Applications: A Structured Literature Review
Information 2022, 13(1), 34; https://doi.org/10.3390/info13010034 - 13 Jan 2022
Cited by 2 | Viewed by 1422
Abstract
This work reviews existing research about the attributes that individuals assess to evaluate the trustworthiness of (i) software applications, (ii) organizations (e.g., service providers), and (iii) other individuals. As these parties are part of social media services, previous research has identified the need for users to assess their trustworthiness. Based on this trustworthiness assessment, users decide whether they want to interact with these parties and whether such interactions appear safe. The literature review encompasses 264 works, from which trustworthiness facets could be identified in 100 papers. In addition to an overview of trustworthiness facets, this work introduces a guideline for software engineers on how to select appropriate trustworthiness facets while analyzing the problem space for the development of specific social media applications. It is exemplified by the problem of “catfishing” in online dating. Full article
Article
Cognitive Digital Twins for Resilience in Production: A Conceptual Framework
Information 2022, 13(1), 33; https://doi.org/10.3390/info13010033 - 12 Jan 2022
Cited by 5 | Viewed by 1807
Abstract
Digital Twins (DTs) are a core enabler of Industry 4.0 in manufacturing. Cognitive Digital Twins (CDTs), as an evolution, utilize services and tools towards enabling human-like cognitive capabilities in DTs. This paper proposes a conceptual framework for implementing CDTs to support resilience in production, i.e., to enable manufacturing systems to identify and handle anomalies and disruptive events in production processes and to support decisions to alleviate their consequences. Through analyzing five real-life production cases in different industries, similarities and differences in their corresponding needs are identified. Moreover, a connection between resilience and cognition is established. Further, a conceptual architecture is proposed that maps the tools materializing cognition within the DT core together with a cognitive process that enables resilience in production by utilizing CDTs. Full article
Article
Adaptive Feature Pyramid Network to Predict Crisp Boundaries via NMS Layer and ODS F-Measure Loss Function
Information 2022, 13(1), 32; https://doi.org/10.3390/info13010032 - 12 Jan 2022
Cited by 1 | Viewed by 785
Abstract
Edge detection is one of the fundamental computer vision tasks. Recent methods for edge detection based on a convolutional neural network (CNN) typically employ the weighted cross-entropy loss. Their predicted results are thick and need post-processing before the optimal dataset scale (ODS) F-measure can be calculated for evaluation. To achieve end-to-end training, we propose a non-maximum suppression (NMS) layer that obtains sharp boundaries without the need for post-processing. The ODS F-measure can be calculated directly on these sharp boundaries, so an ODS F-measure loss function is proposed to train the network. In addition, we propose an adaptive multi-level feature pyramid network (AFPN) to better fuse different levels of features. Furthermore, to enrich the multi-scale features learned by AFPN, we introduce a pyramid context module (PCM) that uses dilated convolution to extract multi-scale features. Experimental results indicate that the proposed AFPN achieves state-of-the-art performance on the BSDS500 dataset (ODS F-score of 0.837) and the NYUDv2 dataset (ODS F-score of 0.780). Full article
(This article belongs to the Special Issue Signal Processing Based on Convolutional Neural Network)
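The effect of non-maximum suppression is easiest to see in one dimension (a simplified analogue with toy values, not the proposed differentiable NMS layer): a soft edge response is thinned by keeping only local maxima, yielding the crisp boundaries on which an F-measure could then be computed directly.

```python
# Illustrative sketch: 1-D non-maximum suppression of a soft edge response.

def nms_1d(resp):
    """Zero out every value that is not a strict local maximum."""
    out = [0.0] * len(resp)
    for i, v in enumerate(resp):
        left = resp[i - 1] if i > 0 else float("-inf")
        right = resp[i + 1] if i < len(resp) - 1 else float("-inf")
        if v > left and v > right:
            out[i] = v
    return out

soft_edges = [0.1, 0.4, 0.9, 0.5, 0.2, 0.3, 0.8, 0.3]
print(nms_1d(soft_edges))  # only the two peaks (0.9 and 0.8) survive
```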
Article
Exploring Business Strategy Modelling with ArchiMate: A Case Study Approach
Information 2022, 13(1), 31; https://doi.org/10.3390/info13010031 - 12 Jan 2022
Cited by 1 | Viewed by 1207
Abstract
Enterprise architecture (EA) is a high-level abstraction of a business’ levels that aids in organizing planning and making better decisions. Researchers have concluded that the scope of EA is not focused only on technology planning and that the lack of business strategy and processes is the most important challenge of EA frameworks. The purpose of this article is to visualize the business strategy of a company using ArchiMate. By better understanding how the concepts of strategic planning are used in businesses, we hope to improve their modelling with ArchiMate. This article adds to the existing literature by evaluating existing EA modelling languages and their suitability for modelling strategy. It further contributes to the identification of challenges in modelling and an investigation of the language's ease of use in the field of strategic planning. Finally, this article provides an approach for practitioners and EA architects who are attempting to develop efficient EA modelling projects and solve business complexity problems. Full article
(This article belongs to the Special Issue Business Process Management)
Article
A Comparative Study of Users versus Non-Users’ Behavioral Intention towards M-Banking Apps’ Adoption
Information 2022, 13(1), 30; https://doi.org/10.3390/info13010030 - 11 Jan 2022
Cited by 5 | Viewed by 1818
Abstract
The banking sector has been considered one of the primary adopters of Information and Communications Technologies. Especially in recent years, banks have invested heavily in the digital transformation of their business processes. Concerning their retail customers, banks realized very early the great potential of providing value-added self-service functions via mobile devices, mainly smartphones, and have therefore invested heavily in m-banking apps' functionality. Furthermore, the COVID-19 pandemic has brought out different ways of conducting financial transactions, and even more mobile users have taken advantage of m-banking app services. Thus, the purpose of this empirical paper is to investigate the determinants that influence whether individuals adopt m-banking apps. Specifically, it examines two groups of individuals, users (adopters) and non-users (non-adopters) of m-banking apps, and aims to reveal the differences and similarities between the factors that influence their adoption of this type of m-banking service. To our knowledge, this is only the second scientific attempt to compare these two groups on this topic. The paper proposes a comprehensive conceptual model by extending Venkatesh et al.'s (2003) Unified Theory of Acceptance and Use of Technology (UTAUT) with ICT facilitators (i.e., reward and security) and ICT inhibitors (i.e., risk and anxiety), as well as a recommendation factor. The study intends to fill a research gap by investigating and proving for the first time the impact of the social influence, reward and anxiety factors on behavioral intention, the relationship between risk and anxiety, and the impact of behavioral intention on recommendation, via the application of Confirmatory Factor Analysis and Structural Equation Modeling (SEM) statistical techniques.
The results reveal a number of differences regarding the factors that influence these two groups towards m-banking app adoption, providing new insights into m-banking app adoption in a scarcely examined scientific field. The study thus intends to assist the banking sector in better understanding its customers, with the aim of formulating and applying customized m-business strategies and increasing not only the adoption of m-banking apps but also the level of their further use. Full article
Article
An Education Process Mining Framework: Unveiling Meaningful Information for Understanding Students’ Learning Behavior and Improving Teaching Quality
Information 2022, 13(1), 29; https://doi.org/10.3390/info13010029 - 10 Jan 2022
Cited by 5 | Viewed by 1900
Abstract
This paper focuses on the study of automated process discovery using the Inductive visual Miner (IvM) and Directly Follows visual Miner (DFvM) algorithms to produce a valid process model for educational process mining in order to understand and predict the learning behavior of students. These models were evaluated on the publicly available xAPI (Experience API or Experience Application Programming Interface) dataset, which is an education dataset intended for tracking students’ classroom activities, participation in online communities, and performance. Experimental results with several performance measures show the effectiveness of the developed process models in helping experts to better understand students’ learning behavioral patterns. Full article
(This article belongs to the Special Issue Information Technologies in Education, Research and Innovation)
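The directly-follows relation that discovery algorithms such as DFvM build on can be sketched from a toy event log (hypothetical activity names, not the xAPI dataset): for every trace, count how often one activity is immediately followed by another.

```python
# Illustrative sketch: building a directly-follows graph (DFG) from an
# event log, the starting point of directly-follows process discovery.

from collections import Counter

def directly_follows(log):
    """Count how often activity a is immediately followed by activity b,
    aggregated over all traces (one trace per student/case)."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# Hypothetical classroom-activity traces.
log = [
    ["login", "view_material", "quiz", "logout"],
    ["login", "quiz", "view_material", "quiz", "logout"],
]
dfg = directly_follows(log)
print(dfg[("quiz", "logout")])  # both traces end the quiz with a logout
```

The resulting edge counts are what a miner then filters and renders as a process model; loops such as quiz → view_material → quiz show up directly as cycles in the DFG.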
Editorial
Large Scale Multimedia Management: Recent Challenges
Information 2022, 13(1), 28; https://doi.org/10.3390/info13010028 - 10 Jan 2022
Cited by 1 | Viewed by 687
Abstract
In recent years, we have witnessed an incredible and rapid growth of multimedia content in its different forms (2D and 3D images, text, sound, video, etc.) [...] Full article
Article
Automatic Curation of Court Documents: Anonymizing Personal Data
Information 2022, 13(1), 27; https://doi.org/10.3390/info13010027 - 10 Jan 2022
Cited by 3 | Viewed by 899
Abstract
In order to provide open access to data of public interest, it is often necessary to perform several data curation processes. In some cases, such as biological databases, curation involves quality control to ensure reliable experimental support for biological sequence data. In others, such as medical records or judicial files, publication must not interfere with the right to privacy of the persons involved. There are also interventions in the published data aimed at generating metadata that enable a better querying and navigation experience. In all cases, the curation process constitutes a bottleneck that slows down general access to the data, so automatic or semi-automatic curation processes are of great interest. In this paper, we present a solution aimed at the automatic curation of our National Jurisprudence Database, with special focus on the anonymization of personal information. The anonymization process aims to hide the names of the participants involved in a lawsuit without losing the meaning of the narrative of facts. To achieve this goal, we need not only to recognize person names but also to resolve co-references, in order to assign the same label to all mentions of the same person. Our corpus has significant differences in the spelling of person names, so it was clear from the beginning that pre-existing tools would not achieve good performance. The challenge was to find a good way of injecting specialized knowledge about person-name syntax while taking advantage of the capabilities of pre-trained tools. We fine-tuned an NER analyzer and built a clustering algorithm to resolve co-references between named entities. Our first results are promising for both tasks: we obtained an F1-micro score of 90.21% in the NER task—up from 39.99% before retraining the analyzer on our corpus—and a 95.95% ARI score in clustering for co-reference resolution. Full article
(This article belongs to the Special Issue Information Technology and Emerging Legal Informatics (Legomatics))
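A heavily simplified sketch of the anonymization idea follows (a crude shared-surname heuristic and toy sentence standing in for the paper's fine-tuned NER and dedicated clustering algorithm): mentions judged to co-refer receive one shared pseudonym label.

```python
# Illustrative sketch: pseudonymizing person names with per-cluster labels.

def cluster_mentions(mentions):
    """Group name mentions that share their last token — a crude
    co-reference heuristic used only for this sketch."""
    clusters = {}
    for m in mentions:
        clusters.setdefault(m.split()[-1].lower(), []).append(m)
    return clusters

def anonymize(text, mentions):
    """Replace every mention with a per-cluster label such as PERSON_1."""
    for i, (_, forms) in enumerate(sorted(cluster_mentions(mentions).items()), 1):
        for form in sorted(forms, key=len, reverse=True):  # longest form first
            text = text.replace(form, f"PERSON_{i}")
    return text

text = "Juan Perez sued Ana Gomez. Later, Perez appealed."
print(anonymize(text, ["Juan Perez", "Ana Gomez", "Perez"]))
# -> PERSON_2 sued PERSON_1. Later, PERSON_2 appealed.
```

Note how "Juan Perez" and the later "Perez" collapse to the same label, which is exactly what co-reference resolution must guarantee so the narrative of facts stays readable after anonymization.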
Article
Extraction and Analysis of Social Networks Data to Detect Traffic Accidents
Information 2022, 13(1), 26; https://doi.org/10.3390/info13010026 - 10 Jan 2022
Cited by 3 | Viewed by 1364
Abstract
Traffic accident detection is an important strategy governments can use to implement policies intended to reduce accidents. Detection usually relies on techniques such as image processing and RFID devices, among others. Social network mining has emerged as a low-cost alternative, although social networks come with several challenges, such as informal language and misspellings. This paper proposes a method to extract traffic accident data from Twitter in Spanish. The method consists of four phases. The first phase establishes the data collection mechanisms. The second consists of vectorially representing the messages and classifying them as accidents or non-accidents. The third phase uses named entity recognition techniques to detect the location. In the fourth phase, locations pass through a geocoder that returns their geographic coordinates. The method was applied to the city of Bogotá, and the Twitter data were compared with the official traffic information source; the comparisons showed some influence of Twitter in the commercial and industrial areas of the city. The results reveal how useful the accident information reported on Twitter can be; it should therefore be considered a source of information that may complement existing detection methods. Full article
(This article belongs to the Special Issue Decentralization and New Technologies for Social Media)
Article
Dual Co-Attention-Based Multi-Feature Fusion Method for Rumor Detection
Information 2022, 13(1), 25; https://doi.org/10.3390/info13010025 - 09 Jan 2022
Cited by 3 | Viewed by 1315
Abstract
Social media has become more popular these days due to widely used instant messaging. Nevertheless, rumor propagation on social media has become an increasingly important issue. The purpose of this study is to investigate the impact of various social media features on rumor detection, to propose a dual co-attention-based multi-feature fusion method for rumor detection, and to explore the detection capability of the proposed method in early rumor detection tasks. The proposed BERT-based Dual Co-attention Neural Network (BDCoNN) method uses BERT for word embedding and simultaneously integrates features from three sources: publishing user profiles, source tweets, and comments. In the BDCoNN method, user discrete features and identity descriptors in user profiles are extracted using a one-dimensional convolutional neural network (CNN) and TextCNN, respectively. A bidirectional gated recurrent unit network (BiGRU) with a hierarchical attention mechanism is used to learn the hidden-layer representations of the tweet and comment sequences. A dual collaborative attention mechanism is used to explore the correlation among publishing user profiles, tweet content, and comments. The feature vector is then fed into a classifier to identify the implicit differences between rumor spreaders and non-rumor spreaders. We conducted several experiments on the Weibo and CED datasets collected from microblogs. The results show that the proposed method achieves state-of-the-art performance compared with baseline methods: its performance is 5.2% and 5% higher than that of dEFEND, and its F1 value is increased by 4.4% and 4%, respectively. In addition, this paper investigates early rumor detection tasks, verifying that the proposed method detects rumors more quickly and accurately than its competitors. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence)
Article
A New Class of Autopoietic and Cognitive Machines
Information 2022, 13(1), 24; https://doi.org/10.3390/info13010024 - 08 Jan 2022
Cited by 3 | Viewed by 1511
Abstract
Making computing machines mimic living organisms has captured the imagination of many since the dawn of digital computers. However, today’s artificial intelligence technologies fall short of replicating even the basic autopoietic and cognitive behaviors found in primitive biological systems. According to Charles Darwin, the difference in mind between humans and higher animals, great as it is, is certainly one of degree and not of kind. Autopoiesis refers to the behavior of a system that replicates itself and maintains identity and stability while facing fluctuations caused by external influences. Cognitive behaviors model the system’s state; sense internal and external changes; and analyze, predict, and take action to mitigate any risk to its functional fulfillment. How did intelligence evolve? What is the relationship between the mind and the body? Answers to these questions should guide us in infusing autopoietic and cognitive behaviors into digital machines. In this paper, we show how to use the structural machine to build a cognitive reasoning system that integrates knowledge from various digital symbolic and sub-symbolic computations. This approach is analogous to how the neocortex repurposed the reptilian brain, and it paves the path for digital machines to mimic living organisms using an integrated knowledge representation from different sources. Full article
(This article belongs to the Special Issue Fundamental Problems of Information Studies)
Correction
Correction: Liu et al. Research on Building DSM Fusion Method Based on Adaptive Spline and Target Characteristic Guidance. Information 2021, 12, 467
Information 2022, 13(1), 23; https://doi.org/10.3390/info13010023 - 07 Jan 2022
Viewed by 552
Abstract
Missing Citation [...] Full article
Review
Cyber Security in the Maritime Industry: A Systematic Survey of Recent Advances and Future Trends
Information 2022, 13(1), 22; https://doi.org/10.3390/info13010022 - 06 Jan 2022
Cited by 7 | Viewed by 5018
Abstract
The paper presents a classification of cyber attacks within the context of the state of the art in the maritime industry. A systematic categorization of vessel components has been conducted, complemented by an analysis of key services delivered within ports. The vulnerabilities of the Global Navigation Satellite System (GNSS) have been given particular consideration since it is a critical subcategory of many maritime infrastructures and, consequently, a target for cyber attacks. Recent research confirms that the dramatic proliferation of cyber crimes is fueled by increased levels of integration of new enabling technologies, such as IoT and Big Data. The trend to greater systems integration is, however, compelling, yielding significant business value by facilitating the operation of autonomous vessels, greater exploitation of smart ports, a reduction in the level of manpower and a marked improvement in fuel consumption and efficiency of services. Finally, practical challenges and future research trends have been highlighted. Full article
(This article belongs to the Special Issue Cyber-Security for the Maritime Industry)
Article
A Rating Prediction Recommendation Model Combined with the Optimizing Allocation for Information Granularity of Attributes
Information 2022, 13(1), 21; https://doi.org/10.3390/info13010021 - 05 Jan 2022
Cited by 1 | Viewed by 758
Abstract
In recent years, graph neural networks (GNNs) have been demonstrated to be a powerful way to learn graph data. Existing recommender systems based on implicit factor models mainly use the interaction information between users and items for training and learning. A user–item graph, a user–attribute graph, and an item–attribute graph are constructed according to the interactions between users and items, and the latent factors of users and items can be learned from these graph-structured data. There are many methods for learning the latent factors of users and items, but they do not fully consider the influence of node attribute information on the representation of those latent factors. We propose a rating prediction recommendation model, LNNSR for short, that utilizes the level of information granularity allocated to each attribute by developing a granular neural network. The granularity distribution proportion weights of each attribute can be learned in the granular neural network, and the learned weights are integrated into the latent factor representations of users and items. Thus, we can capture user- and item-embedding representations more accurately, which also provides a reasonable explanation for the recommendation results. Finally, we concatenate the user and item latent factor embeddings and feed them into a multi-layer perceptron for rating prediction. Extensive experiments on two real-world datasets demonstrate the effectiveness of the proposed framework. Full article
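The final step of the pipeline described above, concatenating user and item latent factors and feeding them through a multi-layer perceptron for a rating, can be sketched as below. This is a minimal illustrative sketch, not the LNNSR model itself; weight names and the single hidden layer are assumptions.

```python
import numpy as np

def mlp_rating(user_emb, item_emb, W1, b1, W2, b2):
    """Predict a scalar rating from concatenated user/item latent factors."""
    x = np.concatenate([user_emb, item_emb])  # joint (2d,) representation
    h = np.maximum(0.0, W1 @ x + b1)          # ReLU hidden layer
    return float(W2 @ h + b2)                 # scalar rating prediction

d, hidden = 8, 16
rng = np.random.default_rng(1)
r = mlp_rating(rng.normal(size=d), rng.normal(size=d),
               rng.normal(size=(hidden, 2 * d)), np.zeros(hidden),
               rng.normal(size=hidden), 0.0)
print(type(r).__name__)  # float
```

In the paper's setting, the embeddings fed in here would already carry the learned per-attribute granularity weights.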
(This article belongs to the Special Issue Recommendation Algorithms and Web Mining)
Article
Linguistic Mathematical Relationships Saved or Lost in Translating Texts: Extension of the Statistical Theory of Translation and Its Application to the New Testament
Information 2022, 13(1), 20; https://doi.org/10.3390/info13010020 - 04 Jan 2022
Cited by 1 | Viewed by 889
Abstract
The purpose of the paper is to extend the general theory of translation to texts written in the same language and to show some possible applications. The main result shows whether the mutual mathematical relationships of texts in a language have been saved or lost in translating them into another language and, consequently, whether the texts have been mathematically distorted. To make objective comparisons, we have defined a “likeness index”, based on probability and the communication theory of noisy binary digital channels, and have shown that it can reveal similarities and differences between texts. We have applied the extended theory to New Testament translations and have assessed how much the mutual mathematical relationships present in the original Greek texts have been saved or lost in 36 languages. To avoid the inaccuracy due to the small sample size from which the input data (regression lines) are calculated, we have adopted a “renormalization” based on Monte Carlo simulations, whose results we consider as “experimental”. In general, we have found that in many languages/translations the original linguistic relationships have been lost and the texts mathematically distorted. The theory can also be applied to texts translated by machines. Because the theory deals with linear regression lines, the concepts of signal-to-noise ratio and likeness index can be applied any time a scientific/technical problem involves two or more linear regression lines; it is therefore not limited to linguistic variables but is universal. Full article
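Since the theory operates on per-text linear regression lines, the basic input can be sketched as below. This is only an illustrative sketch of fitting and comparing two texts' regression lines; the actual likeness index is probabilistic and channel-based, and the slope-difference proxy here is an assumption made for brevity.

```python
import numpy as np

def regression_line(x, y):
    """Least-squares slope and intercept for one text's linguistic variables
    (e.g. words per sentence vs. characters per sentence)."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

def slope_distance(text_a, text_b):
    """Toy proxy for how far apart two texts' linear relationships are
    (NOT the paper's likeness index, which is probability-based)."""
    sa, _ = regression_line(*text_a)
    sb, _ = regression_line(*text_b)
    return abs(sa - sb)

x = np.arange(1, 11)
text_a = (x, 2.0 * x + 1.0)   # synthetic "original" relationship
text_b = (x, 2.5 * x - 0.5)   # synthetic "translation" relationship
print(round(slope_distance(text_a, text_b), 3))  # 0.5
```

A translation that preserves the original's mathematical relationships would yield a near-zero distance between the corresponding regression lines.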
(This article belongs to the Special Issue Techniques and Data Analysis in Cultural Heritage)
Article
Evaluating the Impact of Integrating Similar Translations into Neural Machine Translation
Information 2022, 13(1), 19; https://doi.org/10.3390/info13010019 - 04 Jan 2022
Cited by 1 | Viewed by 1262
Abstract
Previous research has shown that simple methods of augmenting machine translation training data and input sentences with translations of similar sentences (or fuzzy matches), retrieved from a translation memory or bilingual corpus, lead to considerable improvements in translation quality, as assessed by a limited set of automatic evaluation metrics. In this study, we extend this evaluation by calculating a wider range of automated quality metrics that tap into different aspects of translation quality and by performing manual MT error analysis. Moreover, we investigate in more detail how fuzzy matches influence translations and where potential quality improvements could still be made by carrying out a series of quantitative analyses that focus on different characteristics of the retrieved fuzzy matches. The automated evaluation shows that the quality of NFR translations is higher than that of the NMT baseline in terms of all metrics. However, the manual error analysis did not reveal a difference between the two systems in the total number of translation errors; yet different profiles emerged when considering the types of errors made. Finally, in our analysis of how fuzzy matches influence NFR translations, we identified a number of features that could be used to improve the selection of fuzzy matches for NFR data augmentation. Full article
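The fuzzy-match retrieval step that underlies this kind of data augmentation can be sketched as below. This is a minimal stdlib sketch, assuming a character-level similarity ratio; it is one of many possible fuzzy-match metrics and not necessarily the one used in the study, and the translation-memory contents are invented for illustration.

```python
from difflib import SequenceMatcher

def best_fuzzy_match(source, memory, threshold=0.6):
    """Retrieve the most similar entry from a translation memory.

    `memory` maps stored source sentences to their translations.
    Returns the best (source, translation) pair above `threshold`,
    or None if nothing is similar enough.
    """
    best, best_score = None, threshold
    for src, tgt in memory.items():
        score = SequenceMatcher(None, source, src).ratio()
        if score > best_score:
            best, best_score = (src, tgt), score
    return best

memory = {"the cat sat on the mat": "le chat était assis sur le tapis"}
match = best_fuzzy_match("the cat sat on a mat", memory)
print(match is not None)  # True
```

The retrieved translation would then be appended to the input sentence (or to a training example) before being fed to the NMT system.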
(This article belongs to the Special Issue Machine Translation for Conquering Language Barriers)
Article
Three-Dimensional LiDAR Decoder Design for Autonomous Vehicles in Smart Cities
Information 2022, 13(1), 18; https://doi.org/10.3390/info13010018 - 04 Jan 2022
Viewed by 1234
Abstract
With the advancement of artificial intelligence, deep learning technology has been applied in many fields, and the autonomous car system is one of the most important application areas. LiDAR (Light Detection and Ranging) is one of the most critical components of self-driving cars: it can quickly scan the environment to obtain a large amount of high-precision three-dimensional depth information, which self-driving cars use to reconstruct the three-dimensional environment. Through the information provided by LiDAR, the autonomous car system can identify various situations in the vicinity and choose a safer route. This paper decodes the data packets of the Velodyne HDL-64 LiDAR. The decoder we designed converts the information in the original data packets into X, Y, and Z point-cloud data so that the autonomous vehicle can use the decoded information to reconstruct the three-dimensional environment and perform object detection and classification. To demonstrate the performance of the proposed LiDAR decoder, we use standard original packets for the comparison of experimental data, all taken from the Map GMU (George Mason University). The average decoding time for a frame is 7.678 milliseconds. Compared to other methods, the proposed LiDAR decoder has higher decoding speed and efficiency. Full article
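The core of such a decoder, converting each return's range and angles into X, Y, Z coordinates, can be sketched as below. This is the standard spherical-to-Cartesian conversion only, with an assumed axis convention; a real HDL-64 decoder must additionally parse the packet layout and apply per-laser calibration offsets, which are omitted here.

```python
import math

def lidar_point(distance, azimuth_deg, elevation_deg):
    """Convert one LiDAR return (range, azimuth, laser elevation) to (X, Y, Z).

    Axis convention (illustrative): azimuth measured from +Y toward +X,
    elevation measured up from the horizontal plane.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    xy = distance * math.cos(el)       # projection onto the horizontal plane
    return (xy * math.sin(az),         # X
            xy * math.cos(az),         # Y
            distance * math.sin(el))   # Z

x, y, z = lidar_point(10.0, 90.0, 0.0)
print(round(x, 3), round(y, 3), round(z, 3))  # 10.0 0.0 0.0
```

Applying this to every return in a packet, laser by laser, yields the per-frame point cloud used for environment reconstruction.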
Article
Towards a Bibliometric Mapping of Network Public Opinion Studies
Information 2022, 13(1), 17; https://doi.org/10.3390/info13010017 - 03 Jan 2022
Cited by 5 | Viewed by 1340
Abstract
To grasp the current status of network public opinion (NPO) research and explore its knowledge base and hot trends from a quantitative perspective, we retrieved 1385 related papers and conducted a bibliometric mapping analysis on them. Co-occurrence analysis, cluster analysis, co-citation analysis, and keyword burst analysis were performed using the VOSviewer and CiteSpace software. The results show that NPO research is mainly distributed across the disciplinary fields associated with journalism and communication and with public management. There are four main hotspots: analysis of public opinion, analysis of communication channels, technical means, and the challenges faced. The knowledge base of NPO research includes social media and user influence, the latter related to opinion dynamics modeling and sentiment analysis. With the advent of the era of big data, big data technology has been widely used in various fields and can, to some extent, be said to be the research frontier of the field. Transforming big data public opinion into early warning, realizing in-depth analysis and accurate prediction of public opinion, and improving public opinion decision-making ability are the future research directions of NPO. Full article
(This article belongs to the Special Issue Information Spreading on Networks)
Article
Empirical Assessment of the Long-Term Impact of an Embedded Systems Programming Requalification Programme
Information 2022, 13(1), 16; https://doi.org/10.3390/info13010016 - 30 Dec 2021
Viewed by 810
Abstract
Digital transformation has increased the demand for skilled Information Technology (IT) professionals to an extent that universities cannot satisfy with newly graduated students. Furthermore, the economic downturn has created difficulties and a scarcity of opportunities in other areas of activity. This combination of factors led to the need for requalification programmes that enable individuals with diverse specialisations and backgrounds to realign their careers to the IT area, and it has driven the creation of many coding bootcamps providing intensive full-time courses aimed at people who are unemployed, unhappy with their jobs, or seeking a career change. A multidisciplinary group of higher education teachers, in collaboration with several industry stakeholders, designed and promoted an embedded systems programming course using an intensive project-based learning approach comprising six months of day-long classes and a nine-month internship. After two editions of the programme, a questionnaire was presented to the students who finished successfully, in order to evaluate the long-term benefits to graduates and companies. This paper presents a brief discussion of the programme organisation and pedagogical methodologies, as well as the results of the questionnaire, conducted following a Goal–Question–Metric (GQM) approach. The results demonstrate very positive outcomes for both graduates and companies. Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)
Article
Beyond Importance Scores: Interpreting Tabular ML by Visualizing Feature Semantics
Information 2022, 13(1), 15; https://doi.org/10.3390/info13010015 - 30 Dec 2021
Cited by 1 | Viewed by 1552
Abstract
Interpretability is becoming an active research topic as machine learning (ML) models are more widely used to make critical decisions. Tabular data are one of the most commonly used modes of data in diverse applications such as healthcare and finance. Much of the existing interpretability methods used for tabular data only report feature-importance scores—either locally (per example) or globally (per model)—but they do not provide interpretation or visualization of how the features interact. We address this limitation by introducing Feature Vectors, a new global interpretability method designed for tabular datasets. In addition to providing feature-importance, Feature Vectors discovers the inherent semantic relationship among features via an intuitive feature visualization technique. Our systematic experiments demonstrate the empirical utility of this new method by applying it to several real-world datasets. We further provide an easy-to-use Python package for Feature Vectors. Full article
(This article belongs to the Special Issue Foundations and Challenges of Interpretable ML)
Review
Power to the Teachers: An Exploratory Review on Artificial Intelligence in Education
Information 2022, 13(1), 14; https://doi.org/10.3390/info13010014 - 29 Dec 2021
Cited by 9 | Viewed by 2621
Abstract
This exploratory review gathers evidence from the literature, shedding light on the emerging phenomenon of conceptualising the impact of artificial intelligence in education. The review utilised the PRISMA framework to guide the analysis and synthesis process, encompassing the search, screening, coding, and data analysis strategy for the 141 items included in the corpus. Key findings extracted from the review incorporate a taxonomy of artificial intelligence applications with associated teaching and learning practice, and a framework for helping teachers to develop and self-reflect on the skills and capabilities envisioned for employing artificial intelligence in education. Implications for ethical use and a set of propositions for enacting teaching and learning using artificial intelligence are demarcated. The findings of this review contribute to a better understanding of how artificial intelligence may enhance teachers’ roles as catalysts in designing, visualising, and orchestrating AI-enabled teaching and learning; this will, in turn, help to proliferate AI systems that render computational representations based on meaningful data-driven inferences of the pedagogy, domain, and learner models. Full article
(This article belongs to the Special Issue Artificial Intelligence and Games Science in Education)
Article
Impact on Inference Model Performance for ML Tasks Using Real-Life Training Data and Synthetic Training Data from GANs
Information 2022, 13(1), 9; https://doi.org/10.3390/info13010009 - 28 Dec 2021
Viewed by 1274
Abstract
Collecting and labeling well-balanced training data is usually very difficult and challenging under real conditions. In addition to classic modeling methods, Generative Adversarial Networks (GANs) offer a powerful means of generating synthetic training data. In this paper, we evaluate the hybrid use of real-life and generated synthetic training data in different fractions and its effect on model performance. We found that using up to 75% synthetic training data can compensate for time-consuming and costly manual annotation, while the model performance in our Deep Learning (DL) use case stays in the same range as with a 100% share of hand-annotated real images. Using synthetic training data specifically tailored to induce a balanced dataset, special care can be taken concerning events that happen only on rare occasions, and ML models can be deployed in industry without much delay, making them feasible and economically attractive for a wide scope of applications in the process and manufacturing industries. Hence, the main outcome of this paper is that our methodology can help leverage the implementation of many different industrial Machine Learning and Computer Vision applications by making them economically maintainable. We conclude that a multitude of industrial ML use cases requiring large, balanced training data containing all information relevant to the target model can be solved in the future following the findings presented in this study. Full article
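The real/synthetic mixing experiment described above can be sketched as follows. This is an illustrative stdlib sketch, not the authors' pipeline; the function name and the placeholder samples are invented, and only the fraction logic mirrors the study's setup (e.g. a 75% synthetic share).

```python
import random

def mixed_training_set(real, synthetic, synthetic_fraction, size, seed=0):
    """Compose a training set with a given share of synthetic samples."""
    rng = random.Random(seed)                      # reproducible sampling
    n_syn = round(size * synthetic_fraction)       # synthetic sample count
    batch = (rng.sample(synthetic, n_syn)
             + rng.sample(real, size - n_syn))     # fill the rest with real data
    rng.shuffle(batch)                             # avoid ordering bias
    return batch

real = [("real", i) for i in range(100)]
synthetic = [("syn", i) for i in range(300)]
batch = mixed_training_set(real, synthetic, 0.75, 40)
print(sum(1 for tag, _ in batch if tag == "syn"))  # 30
```

Sweeping `synthetic_fraction` over a grid and retraining at each point is one straightforward way to reproduce the kind of fraction-vs-performance comparison the study reports.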