Information, Volume 12, Issue 10 (October 2021) – 48 articles

Cover Story: In information retrieval (IR), the semantic gap represents the mismatch between users’ queries and how retrieval models answer these queries. In this paper, we explore how to use external knowledge resources to enhance bag-of-words representations and reduce the effect of the semantic gap between queries and documents. In this regard, we propose several simple but effective knowledge-based query expansion and reduction techniques, and evaluate them in relation to the medical domain. The experimental analyses on different test collections for precision medicine IR show the effectiveness of the developed techniques. In particular, a specific subset of query reformulations allows retrieval models to achieve top-performing results in all the considered test collections.
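The knowledge-based query expansion described in the cover story can be sketched as augmenting a bag-of-words query from an external resource (a minimal illustration, not the paper's method; the `knowledge` dict below is a stand-in for a resource such as a medical thesaurus):

```python
def expand_query(terms, knowledge):
    """Append knowledge-base synonyms to a bag-of-words query.

    `knowledge` maps a term to related terms from an external
    resource; duplicates are skipped so the expanded query stays
    a simple list of distinct terms."""
    expanded = list(terms)
    for term in terms:
        for synonym in knowledge.get(term, []):
            if synonym not in expanded:
                expanded.append(synonym)
    return expanded

# Toy resource: a single synonym pair standing in for a thesaurus.
print(expand_query(["melanoma"], {"melanoma": ["skin cancer"]}))
```

Query reduction would work in the opposite direction, dropping terms that the resource marks as uninformative.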
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Communication
Algebraic Fault Analysis of SHA-256 Compression Function and Its Application
Information 2021, 12(10), 433; https://doi.org/10.3390/info12100433 - 19 Oct 2021
Viewed by 1429
Abstract
Cryptographic hash functions play an essential role in various aspects of cryptography, such as message authentication codes, pseudorandom number generation, digital signatures, and so on. Thus, the security of their hardware implementations is an important research topic. Hao et al. proposed an algebraic fault analysis (AFA) of the SHA-256 compression function in 2014. They showed that one could recover the whole of an unknown input of the SHA-256 compression function by injecting 65 faults and analyzing the outputs under normal and fault-injection conditions. They also presented an almost universal forgery attack on HMAC-SHA-256 using this result. In our work, we conducted computer experiments for various fault-injection conditions in the AFA of the SHA-256 compression function. As a result, we found that one can recover the whole of an unknown input of the SHA-256 compression function by injecting an average of only 18 faults. We also conducted an AFA of the SHACAL-2 block cipher and an AFA of the SHA-256 compression function enabling almost universal forgery of the chopMD-MAC function. Full article
(This article belongs to the Special Issue Side Channel Attacks and Defenses on Cryptography)

Article
An Ontological Approach to Enhancing Information Sharing in Disaster Response
Information 2021, 12(10), 432; https://doi.org/10.3390/info12100432 - 19 Oct 2021
Cited by 1 | Viewed by 1351
Abstract
Managing complex disaster situations is a challenging task because of the large number of actors involved and the critical nature of the events themselves. In particular, the different terminologies and technical vocabularies exchanged among Emergency Responders (ERs) may lead to misunderstandings, and maintaining shared semantics for exchanged data is a major challenge. To help overcome these issues, we develop a modular suite of ontologies called POLARISCO that formalizes the complex knowledge of ERs. Such a shared vocabulary resolves inconsistent terminologies and promotes semantic interoperability among ERs. In this work, we discuss the development of POLARISCO as an extension of Basic Formal Ontology (BFO) and the Common Core Ontologies (CCO). We conclude by presenting a real use case to assess the efficiency and applicability of the proposed ontology. Full article

Article
Towards Edge Computing Using Early-Exit Convolutional Neural Networks
Information 2021, 12(10), 431; https://doi.org/10.3390/info12100431 - 19 Oct 2021
Cited by 1 | Viewed by 1090
Abstract
In computer vision applications, mobile devices can transfer the inference of Convolutional Neural Networks (CNNs) to the cloud due to their computational restrictions. Nevertheless, besides placing more load on the network toward the cloud, this approach can make applications that require low latency unfeasible. A possible solution is to use CNNs with early exits at the network edge. These CNNs can pre-classify part of the samples in the intermediate layers based on a confidence criterion. Hence, the device sends to the cloud only samples that have not been satisfactorily classified. This work evaluates the performance of these CNNs at the computational edge, considering an object detection application. For this, we employ a MobileNetV2 with early exits. The experiments show that early classification can reduce the data load and the inference time without degrading application performance. Full article
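The confidence-based routing decision the abstract describes can be sketched as follows (a minimal illustration of the early-exit idea, not the paper's implementation; the 0.8 threshold is an arbitrary assumption):

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def route_sample(branch_logits, threshold=0.8):
    """Decide at an early-exit branch whether to classify on the
    edge device or forward the sample to the cloud.

    Returns ("edge", class_index) when the branch is confident
    enough, otherwise ("cloud", None)."""
    probs = softmax(branch_logits)
    confidence = max(probs)
    if confidence >= threshold:
        return "edge", probs.index(confidence)
    return "cloud", None

print(route_sample([4.0, 0.5, 0.2]))  # high-margin logits: exits at the edge
print(route_sample([1.0, 0.9, 0.8]))  # low-margin logits: offloaded to the cloud
```

Only the ambiguous samples pay the network round trip, which is what reduces data load and average inference time.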

Article
File System Support for Privacy-Preserving Analysis and Forensics in Low-Bandwidth Edge Environments
Information 2021, 12(10), 430; https://doi.org/10.3390/info12100430 - 18 Oct 2021
Cited by 1 | Viewed by 1354
Abstract
In this paper, we present initial results from our distributed edge systems research in the domain of sustainable harvesting of common good resources in the Arctic Ocean. Specifically, we are developing a digital platform for real-time privacy-preserving sustainability management in the domain of commercial fishery surveillance operations. This is in response to potentially privacy-infringing mandates from some governments to combat overfishing and other sustainability challenges. Our approach is to deploy sensory devices and distributed artificial intelligence algorithms on mobile, offshore fishing vessels and at mainland central control centers. To facilitate this, we need a novel data plane supporting efficient, available, secure, tamper-proof, and compliant data management in this weakly connected offshore environment. We have built our first prototype of Dorvu, a novel distributed file system in this context. Our devised architecture, the design trade-offs among conflicting properties, and our initial experiences are further detailed in this paper. Full article
(This article belongs to the Special Issue Artificial Intelligence on the Edge)

Article
Study on Customized Shuttle Transit Mode Responding to Spatiotemporal Inhomogeneous Demand in Super-Peak
Information 2021, 12(10), 429; https://doi.org/10.3390/info12100429 - 18 Oct 2021
Viewed by 790
Abstract
Instantaneous mega-traffic flow has long been one of the major challenges in the management of mega-cities. It is difficult for the public transportation system to cope directly with transient mega-capacity flows, and the uneven spatiotemporal distribution of demand is the main cause. To this end, this paper proposes a customized shuttle bus transportation model based on the “boarding-transfer-alighting” framework, with the goal of minimizing operational costs and maximizing service quality to address mega-transit demand with uneven spatiotemporal distribution. The fleet operation is formulated as a pickup and delivery problem with time window and transfer (PDPTWT) model, and a heuristic algorithm based on Tabu Search and ALNS is proposed to solve the large-scale computational problem. Numerical tests show that the proposed algorithm matches the accuracy of commercial solution software at much higher speed; when the demand size is 10, it is roughly 24,000 times faster. In addition, six reality-based cases are presented, and the results demonstrate that the designed option can reduce fleet cost by 9.93%, vehicle waiting time by 45.27%, and passenger waiting time by 33.05% relative to other existing customized bus modes when encountering instantaneous passenger flows with spatiotemporal imbalance. Full article

Article
Robust and Precise Matching Algorithm Combining Absent Color Indexing and Correlation Filter
Information 2021, 12(10), 428; https://doi.org/10.3390/info12100428 - 18 Oct 2021
Viewed by 703
Abstract
This paper presents a novel method, named ABC-CF, that absorbs the strong discriminative ability of absent color indexing (ABC) to enhance sensitivity and combines it with a correlation filter (CF) to obtain higher precision. First, by separating the original color histogram, apparent and absent colors are introduced. Subsequently, an automatic threshold acquisition is proposed using the mean of the color histogram. Next, histogram intersection is selected to calculate similarity. Finally, CF is applied to resolve the drift caused by ABC during the matching process. The proposed approach achieves robustness to distortion of target images and higher margins in fundamental matching problems, and thereby more precise matching positions. The effectiveness of the proposed approach is evaluated in comparative experiments against other representative methods on open data. Full article
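The two building blocks named in the abstract, histogram intersection and a mean-threshold split into apparent and absent colors, can be sketched as follows (illustrative only; the paper's exact definitions may differ):

```python
def histogram_intersection(h1, h2):
    """Similarity of two normalized color histograms: the sum of
    bin-wise minima, 1.0 for identical histograms, 0.0 for
    completely disjoint ones."""
    if len(h1) != len(h2):
        raise ValueError("histograms must have the same number of bins")
    return sum(min(a, b) for a, b in zip(h1, h2))

def split_by_mean(hist):
    """Split bins into 'apparent' (above the mean bin value) and
    'absent' (at or below it) -- a sketch of the automatic
    mean-threshold idea."""
    mean = sum(hist) / len(hist)
    apparent = [v if v > mean else 0.0 for v in hist]
    absent = [v if v <= mean else 0.0 for v in hist]
    return apparent, absent
```

The absent part carries the discriminative signal ABC exploits: colors a target conspicuously lacks are often as distinctive as the colors it has.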

Article
On Information Orders on Metric Spaces
Information 2021, 12(10), 427; https://doi.org/10.3390/info12100427 - 18 Oct 2021
Viewed by 812
Abstract
Information orders play a central role in the mathematical foundations of Computer Science. Concretely, they are a suitable tool to describe processes in which the information increases successively in each step of the computation. In order to provide numerical quantifications of the amount of information in the aforementioned processes, S.G. Matthews introduced the notions of partial metric and Scott-like topology. The success of partial metrics is given mainly by two facts. On the one hand, they can induce the so-called specialization partial order, which is able to encode the existing order structure in many examples of spaces that arise in a natural way in Computer Science. On the other hand, their associated topology is Scott-like when the partial metric space is complete and, thus, it is able to describe the aforementioned increasing information processes in such a way that the supremum of the sequence always exists and captures the amount of information, measured by the partial metric; it also contains no information other than that which may be derived from the members of the sequence. R. Heckmann showed that the method to induce the partial order associated with a partial metric could be retrieved as a particular case of a celebrated method for generating partial orders through metrics and non-negative real-valued functions. Motivated by this fact, we explore this general method from an information orders theory viewpoint. Specifically, we show that such a method captures the essence of information orders in such a way that the function under consideration is able to quantify the amount of information and, in addition, its measurement can be used to distinguish maximal elements. Moreover, we show that this method for endowing a metric space with a partial order can also be applied to partial metric spaces in order to generate new partial orders different from the specialization one. 
Furthermore, we show that given a complete metric space and an inf-continuous function, the partially ordered set induced by this general method enjoys rich properties. Concretely, we will show not only its order-completeness but the directed-completeness and, in addition, that the topology induced by the metric is Scott-like. Therefore, such a mathematical structure could be used for developing metric-based tools for modeling increasing information processes in Computer Science. As a particular case of our new results, we retrieve, for a complete partial metric space, the above-explained celebrated fact about the Scott-like character of the associated topology and, in addition, that the induced partial ordered set is directed-complete and not only order-complete. Full article
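The metric-plus-function construction attributed to Heckmann is commonly stated as below (a sketch of one common formulation, not necessarily the paper's exact notation):

```latex
% Given a metric space (X, d) and f : X \to [0, \infty),
% define a partial order on X by
x \sqsubseteq_{d,f} y \iff d(x, y) \le f(x) - f(y).
% Since d(x,y) \ge 0, the relation forces f(y) \le f(x): f decreases
% along the order, so it quantifies the remaining "missing
% information", and maximal elements are those minimizing f.
```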
Article
Critical Factors for Predicting Users’ Acceptance of Digital Museums for Experience-Influenced Environments
Information 2021, 12(10), 426; https://doi.org/10.3390/info12100426 - 17 Oct 2021
Cited by 8 | Viewed by 1921
Abstract
Digital museums that use modern technology are gradually replacing traditional museums to stimulate personal growth and promote cultural exchange and social enrichment. With the development and popularization of the mobile Internet, user experience has become a concern in this field. From the perspective of the dynamic stage of user experience, in this study, we expand ECM and TAM by combining the characteristics of users and systems, thereby constructing a theoretical model and 12 hypotheses about the factors influencing users’ continuance intentions toward digital museums. A total of 262 valid questionnaires were collected, and the model was tested with structural equation modeling. This study identifies variables that play a role and influence online behavior in a specific experiential environment: (1) Perceived playfulness, perceived usefulness, and satisfaction are the critical variables that affect users’ continuance intentions. (2) Expectation confirmation has a significant influence on perceived playfulness, perceived ease of use, and satisfaction. (3) Media richness is an essential driver of confirmation, perceived ease of use, and perceived usefulness. The conclusions can serve as a reference for managers to promote the construction and innovation of digital museums and provide a better experience to meet users’ needs. Full article

Article
Missing Data Imputation in Internet of Things Gateways
Information 2021, 12(10), 425; https://doi.org/10.3390/info12100425 - 17 Oct 2021
Cited by 3 | Viewed by 1280
Abstract
In an Internet of Things (IoT) environment, sensors collect and send data to application servers through IoT gateways. However, these data may be missing values due to networking problems or sensor malfunction, which reduces applications’ reliability. This work proposes a mechanism to predict and impute missing data in IoT gateways to achieve greater autonomy at the network edge. These gateways typically have limited computing resources. Therefore, the missing data imputation methods must be simple and provide good results. Thus, this work presents two regression models based on neural networks to impute missing data in IoT gateways. In addition to the prediction quality, we analyzed both the execution time and the amount of memory used. We validated our models using six years of weather data from Rio de Janeiro, varying the missing data percentages. The results show that the neural network regression models perform better than the other imputation methods analyzed, based on the averages and repetition of previous values, for all missing data percentages. In addition, the neural network models present a short execution time and need less than 140 KiB of memory, which allows them to run on IoT gateways. Full article
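The two baseline imputation methods the abstract compares against, repetition of previous values and averages, can be sketched as follows (a minimal illustration; the paper's neural regression models are beyond a short sketch):

```python
def impute_previous(series):
    """Fill gaps (None) by repeating the last observed value --
    the 'repetition of previous values' baseline."""
    out, last = [], None
    for v in series:
        if v is None:
            out.append(last)  # stays None if nothing observed yet
        else:
            out.append(v)
            last = v
    return out

def impute_mean(series):
    """Fill gaps (None) with the mean of all observed values --
    the 'averages' baseline."""
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in series]
```

Both baselines are cheap enough for a constrained gateway, which is why the paper's contribution is showing that small neural regressors beat them while still fitting in under 140 KiB of memory.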

Article
Improving Undergraduate Novice Programmer Comprehension through Case-Based Teaching with Roles of Variables to Provide Scaffolding
Information 2021, 12(10), 424; https://doi.org/10.3390/info12100424 - 16 Oct 2021
Cited by 2 | Viewed by 1061
Abstract
A role-based teaching approach was proposed in order to decrease the cognitive load that the case-based teaching method places on undergraduate novice programmers during program comprehension. The results are evaluated using the SOLO (Structure of Observed Learning Outcomes) taxonomy. Data analysis suggested that novice programmers taught with the role-based approach tended to perform better on the SOLO level of program comprehension, program debugging scores, and program explaining scores (though not on programming language knowledge scores) compared with the classical case-based teaching method. Considering the SOLO category of program comprehension and these performances, we discuss evidence that the roles of variables can provide scaffolding for understanding case programs by connecting program structure with the related problem domain, and we propose SOLO categories for relational reasoning. Meanwhile, the roles of variables can assist novices in learning programming language knowledge. These results indicate that combining case-based teaching with the roles of variables is an effective way to improve novice program comprehension. Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)
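The roles-of-variables scaffolding can be illustrated by annotating a small case program with role comments (a hypothetical example in the spirit of the approach, not taken from the paper):

```python
def max_and_total(scores):
    """Each variable is labeled with its role, the kind of
    scaffolding role-based teaching attaches to case programs."""
    count = len(scores)     # fixed value: set once, never changed
    total = 0               # gatherer: accumulates the running sum
    best = scores[0]        # most-wanted holder: best value seen so far
    for i in range(count):  # i is a stepper: runs through 0..count-1
        total += scores[i]
        if scores[i] > best:
            best = scores[i]
    return best, total
```

Naming the role gives novices a vocabulary for the variable's behavior over time, not just its type, which is the link between program structure and problem domain the abstract refers to.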

Review
Method to Address Complexity in Organizations Based on a Comprehensive Overview
Information 2021, 12(10), 423; https://doi.org/10.3390/info12100423 - 16 Oct 2021
Cited by 2 | Viewed by 1179
Abstract
Digitalization increasingly forces organizations to accommodate change and build resilience. Emerging technologies, changing organizational structures, and dynamic work environments bring opportunities and pose new challenges to organizations. Such developments, together with the growing volume and variety of exchanged data, mainly yield complexity. This complexity often represents a solid barrier to efficiency and impedes understanding, controlling, and improving processes in organizations. Hence, organizations are seeking to identify and avoid unnecessary complexity, which arises from an odd mixture of different factors. Similarly, in research, much effort has been put into measuring, reviewing, and studying complexity. However, these efforts are highly fragmented and lack a joint perspective, which negatively affects the acceptance of complexity research by practitioners. In this study, we extend the body of knowledge on complexity research and practice by addressing this high fragmentation. In particular, a comprehensive literature analysis of complexity research is conducted to capture different types of complexity in organizations. The results are comparatively analyzed, and a morphological box containing three aspects and ten features is developed. In addition, an established multi-dimensional complexity framework is employed to synthesize the results. Using the findings from these analyses and adopting the Goal Question Metric, we propose a method for complexity management. This method provides key insights and decision support in the form of extensive guidelines for addressing complexity. Thus, our findings can assist organizations in their complexity management initiatives. Full article
(This article belongs to the Section Review)

Article
Relativistic Effects on Satellite–Ground Two–Way Precise Time Synchronization
Information 2021, 12(10), 422; https://doi.org/10.3390/info12100422 - 15 Oct 2021
Viewed by 778
Abstract
An ultrahigh-precision clock (space optical clock) will be installed onboard a low-orbit spacecraft (a common term for a low-orbit satellite operating at an altitude of less than 1000 km) in the future, which is expected to achieve better time-frequency performance in a microgravity environment and make ultrahigh-precision long-range time synchronization possible. The advancement of the microwave two-way time synchronization method can offer an effective path for developing time-frequency transfer technology. In this study, we focus on a method of precise satellite-ground two-way time synchronization and present its key aspects. To reduce the relativistic effects on two-way precise time synchronization, we propose a high-precision correction method. We show the results of tests using simulated data with fully realistic effects such as atmospheric delays, orbit errors, and Earth gravity, and demonstrate the satisfactory performance of the methods. The accuracy of the relativistic error correction method is investigated in terms of the spacecraft attitude error, phase center calibration error (the residual error after calibrating the phase center offset), and precise orbit determination (POD) error. The results show that the phase center calibration error and POD error contribute most to the residual of the relativistic correction, at approximately 0.1~0.3 ps, and that time synchronization accuracy better than 0.6 ps can be achieved with the proposed methods. In conclusion, the relativistic error correction method is effective, and the satellite-ground two-way precise time synchronization method yields more accurate results: the Beidou two-way time synchronization system achieves only sub-ns accuracy, whereas the final accuracy obtained by the methods in this paper improves to the ps level. Full article
(This article belongs to the Section Information Processes)
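The underlying two-way comparison can be sketched generically (standard two-way time transfer, not the paper's full model): station A transmits at \(T_1\), received by B at \(T_2\); B transmits at \(T_3\), received by A at \(T_4\). With symmetric path delays the clock offset is

```latex
\Delta t = \frac{(T_2 - T_1) - (T_4 - T_3)}{2},
```

and relativistic effects (gravitational frequency shift, second-order Doppler, Sagnac-like terms) break this symmetry between the up- and down-link delays, which is why they must be corrected to reach the ps level reported above.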

Article
The Digital Dimension of Mobilities: Mapping Spatial Relationships between Corporeal and Digital Displacements in Barcelona
Information 2021, 12(10), 421; https://doi.org/10.3390/info12100421 - 15 Oct 2021
Cited by 1 | Viewed by 1392
Abstract
This paper explores the ways in which technologies reshape everyday activities, adopting a mobility perspective of the digital environment, which is reframed in terms of the constitutive/substitutive element of corporeal mobility. We propose the construction of a Digital Mobility Index, quantified by measuring the usage typology in which technology is employed to enable mobility. Through a digital perspective on mobilities, it is possible to investigate how embodied practices and experiences of different modes of physical or virtual displacement are facilitated and emerge through technologies. The role of technologies in facilitating the anchoring of mobilities, transporting the tangible and intangible flow of goods, and mediating social relations through space and time is emphasized through analysis of how digital usage can reproduce models typical of the neoliberal city, whose effects in terms of spatial (in)justice have been widely discussed in the literature. The polarization inherent to the digital divide has been characterized by a separation between what has been called the “space of flows” (well connected, mobile, and offering more opportunities) and the “space of places” (poorly connected, fixed, and isolated). This digital divide indeed takes many forms, including divisions between classes, urban locations, and national spaces. By mapping “hyper- and hypo-mobilized” territories in Barcelona, this paper examines two main dimensions of digital inequality: on the one hand, identifying the usage of the technological and digital in terms of the capacity to reach services and places, and on the other, measuring the territorial demographic and economic propensity to access ICT as a predictive insight into the geographies of the social gap that emerge at the municipal level. 
This approach complements conventional data sources such as municipal statistics and the digital divide enquiry conducted in Barcelona into the underlying digital capacities of the city and the digital skills of the population. Full article
(This article belongs to the Special Issue Beyond Digital Transformation: Digital Divides and Digital Dividends)

Article
Industrial Networks Driven by SDN Technology for Dynamic Fast Resilience
Information 2021, 12(10), 420; https://doi.org/10.3390/info12100420 - 15 Oct 2021
Cited by 1 | Viewed by 1048
Abstract
Software-Defined Networking (SDN) provides the prospect of logically centralized management in industrial networks and simplified programming among devices. It also facilitates the reconfiguration of connectivity when a network element fails. This paper presents a new Industrial SDN (ISDN) resilience approach that addresses the gap between two types of resilience: restoration and protection. A restoration approach increases the recovery time in proportion to the number of affected flows, whereas a protection approach attains fast recovery. Nevertheless, the protection approach installs more flow rules (flow entries) in the switch, which in turn increases the lookup time needed to find an appropriate flow entry in the flow table. This can negatively affect the end-to-end delay before a failure occurs (in the normal situation). To balance both approaches, we propose a Mixed Fast Resilience (MFR) approach that ensures fast recovery of the primary path without any impact on the end-to-end delay in the normal situation. In MFR, the SDN controller establishes a new path after failure detection based on flow rules stored in its memory in a dynamic hash table that serves as its internal flow table. It then transmits the flow rules simultaneously to all switches along the appropriate secondary path, from the failure point to the destination switch. Moreover, the flow rules corresponding to secondary paths are cached in the hash table according to the current minimum path weight. This strategy reduces the load on the SDN controller and the time needed to compute a new working path. The MFR approach applies dual primary paths by considering several metrics, such as packet-loss probability, delay, and bandwidth, which are the Quality of Service (QoS) requirements of many industrial applications. 
Thus, we built a simulation network and conducted an experimental testbed. The results show that our resilience approach reduces the failure recovery time compared with restoration approaches and is more scalable than a protection approach. In the normal situation, the MFR approach achieves lower lookup time and end-to-end delay than a protection approach. Furthermore, the proposed approach improves performance by minimizing packet loss even under link failures. Full article
(This article belongs to the Section Artificial Intelligence)
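The controller-side caching idea in MFR can be sketched as a hash table of precomputed secondary paths consulted on failure (names and structure here are illustrative, not from the paper):

```python
class MfrController:
    """Sketch of the MFR idea: backup paths are precomputed and kept
    in a hash table in controller memory, so on a failure the
    controller can push rules to every switch of the stored secondary
    path at once instead of recomputing a route first."""

    def __init__(self):
        # (src, dst) -> minimum-weight secondary path
        self.backup = {}

    def precompute(self, src, dst, path):
        """Cache the current minimum-weight secondary path."""
        self.backup[(src, dst)] = list(path)

    def on_link_failure(self, src, dst):
        """Return the switches to program for the affected flow, or
        None if no backup exists (fall back to restoration and
        compute a path on demand)."""
        path = self.backup.get((src, dst))
        return list(path) if path is not None else None

ctrl = MfrController()
ctrl.precompute("s1", "s4", ["s1", "s3", "s4"])
print(ctrl.on_link_failure("s1", "s4"))
```

Because switches hold no extra protection entries in the normal situation, lookup time stays low; because the backup is already in controller memory, recovery avoids the path computation that slows pure restoration.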

Article
Financial Volatility Forecasting: A Sparse Multi-Head Attention Neural Network
Information 2021, 12(10), 419; https://doi.org/10.3390/info12100419 - 14 Oct 2021
Cited by 1 | Viewed by 1167
Abstract
Accurately predicting the volatility of financial asset prices and exploring its laws of movement have profound theoretical and practical significance for financial market risk early warning, asset pricing, and investment portfolio design. Traditional methods are plagued by substandard prediction performance or gradient optimization problems. This paper proposes a novel volatility prediction method based on sparse multi-head attention (SP-M-Attention). The model discards the two-dimensional time-and-space modeling strategy of classic deep learning models and instead embeds a sparse multi-head attention module in the network. The main advantages are that (i) it uses the inherent advantages of the multi-head attention mechanism to achieve parallel computing, (ii) it reduces computational complexity through sparse measurements and feature compression of volatility, and (iii) it avoids the gradient problems caused by long-range propagation and is therefore better suited than traditional methods to the analysis of long time series. Finally, the article conducts an empirical study of the effectiveness of the proposed method on real datasets from major financial markets. Experimental results show that the prediction performance of the proposed model surpasses all benchmark models on all real datasets. This finding can aid financial risk management and the optimization of investment strategies. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence Using Real Data)
Article
Could a Conversational AI Identify Offensive Language?
Information 2021, 12(10), 418; https://doi.org/10.3390/info12100418 - 12 Oct 2021
Cited by 3 | Viewed by 1729
Abstract
In recent years, we have seen wide use of Artificial Intelligence (AI) applications on the Internet and elsewhere. Natural Language Processing and Machine Learning are important sub-fields of AI that have made Chatbots and Conversational AI applications possible. Those algorithms are built from historical data in order to create language models; however, historical data can be intrinsically discriminatory. This article investigates whether a Conversational AI could identify offensive language, and shows how large language models often produce quite a bit of unethical behavior because of bias in the historical data. Our low-level proof of concept presents the challenges of detecting offensive language in social media and discusses some steps to promote strong results in the detection of offensive language and unethical behavior using a Conversational AI. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2020 & 2021))
Article
Cybersecurity Awareness Framework for Academia
Information 2021, 12(10), 417; https://doi.org/10.3390/info12100417 - 12 Oct 2021
Cited by 3 | Viewed by 3859
Abstract
Cybersecurity is a multifaceted global phenomenon representing complex socio-technical challenges for governments and private sectors. With technology constantly evolving, the types and numbers of cyberattacks affect different users in different ways. The majority of recorded cyberattacks can be traced to human errors. Despite being both knowledge- and environment-dependent, studies show that increasing users’ cybersecurity awareness is found to be one of the most effective protective approaches. However, the intangible nature, socio-technical dependencies, constant technological evolutions, and ambiguous impact make it challenging to offer comprehensive strategies for better communicating and combatting cyberattacks. Research in the industrial sector focused on creating institutional proprietary risk-aware cultures. In contrast, in academia, where cybersecurity awareness should be at the core of an academic institution’s mission to ensure all graduates are equipped with the skills to combat cyberattacks, most of the research focused on understanding students’ attitudes and behaviors after infusing cybersecurity awareness topics into some courses in a program. This work proposes a conceptual Cybersecurity Awareness Framework to guide the implementation of systems to improve the cybersecurity awareness of graduates in any academic institution. This framework comprises constituents designed to continuously improve the development, integration, delivery, and assessment of cybersecurity knowledge into the curriculum of a university across different disciplines and majors; this framework would thus lead to a better awareness among all university graduates, the future workforce. This framework may be adjusted to serve as a blueprint that, once adjusted by academic institutions to accommodate their missions, guides institutions in developing or amending their policies and procedures for the design and assessment of cybersecurity awareness. Full article
(This article belongs to the Section Information and Communications Technology)
Article
An Approach to Ranking the Sources of Information Dissemination in Social Networks
Information 2021, 12(10), 416; https://doi.org/10.3390/info12100416 - 11 Oct 2021
Cited by 1 | Viewed by 998
Abstract
The problem of countering the spread of destructive content in social networks is currently relevant for most countries of the world. Typically, automatic monitoring systems are used to detect the sources of the spread of malicious information, while automated systems, operators, and counteraction scenarios are used to counteract it. This paper suggests an approach to ranking the sources that distribute messages with destructive content. When ranking objects by priority, the number of messages created by the source and an integral indicator of the involvement of its audience are considered. The approach identifies the most popular and active sources of dissemination of destructive content. It does not require the analysis of relationship graphs and increases the efficiency of the operator. The proposed solution is applicable both to brand-reputation monitoring systems and to countering cyberbullying and the dissemination of destructive information in social networks. Full article
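A toy sketch of the kind of priority ranking this abstract describes might look as follows. The field names and engagement weights here are hypothetical, invented for illustration; the paper's actual integral involvement indicator is not reproduced.

```python
def rank_sources(sources):
    """Rank distribution sources by message count times an integral
    audience-engagement indicator (all fields/weights are hypothetical)."""
    def priority(src):
        # Integral engagement: a simple weighted sum of audience reactions.
        engagement = src["likes"] + 2 * src["comments"] + 3 * src["reposts"]
        return src["messages"] * engagement
    return sorted(sources, key=priority, reverse=True)

# Illustrative sources: note no relationship graph is needed, only
# per-source counters that a monitoring system already collects.
feeds = [
    {"name": "A", "messages": 10, "likes": 5,  "comments": 1,  "reposts": 0},
    {"name": "B", "messages": 3,  "likes": 40, "comments": 10, "reposts": 5},
    {"name": "C", "messages": 20, "likes": 2,  "comments": 0,  "reposts": 0},
]
ranked = rank_sources(feeds)
```

Source B outranks the more prolific A and C here because its audience involvement dominates the product, which is the intuition behind combining activity with engagement rather than using message volume alone.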
(This article belongs to the Special Issue Information Spreading on Networks)
Article
Short Word-Length Entering Compressive Sensing Domain: Improved Energy Efficiency in Wireless Sensor Networks
Information 2021, 12(10), 415; https://doi.org/10.3390/info12100415 - 11 Oct 2021
Viewed by 939
Abstract
This work combines compressive sensing and short word-length techniques to achieve localization and target tracking in wireless sensor networks with energy-efficient communication between the network anchors and the fusion center. Gradient descent localization is performed using time-of-arrival (TOA) data which are indicative of the distance between anchors and the target thereby achieving range-based localization. The short word-length techniques considered are delta modulation and sigma-delta modulation. The energy efficiency is due to the reduction of the data volume transmitted from anchors to the fusion center by employing any of the two delta modulation variants with compressive sensing techniques. Delta modulation allows the transmission of one bit per TOA sample. The communication energy efficiency is increased by a factor of RM, R ≥ 1, where R is the sample reduction ratio of compressive sensing, and M is the number of bits originally present in a TOA-sample word. It is found that the localization system involving sigma-delta modulation has a superior performance to that using delta modulation or pure compressive sampling alone, in terms of both energy efficiency and localization error in the presence of TOA measurement noise and transmission noise, owing to the noise shaping property of sigma-delta modulation. Full article
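The claimed efficiency gain comes from transmitting one bit per TOA sample instead of a full M-bit word. A minimal sketch of plain delta modulation (not the paper's sigma-delta variant; the signal and step size are illustrative stand-ins):

```python
import math

def delta_modulate(samples, step):
    """One bit per sample: transmit only whether the signal is above or
    below the running staircase approximation."""
    bits, approx, recon = [], 0.0, []
    for s in samples:
        bit = 1 if s >= approx else 0          # the single transmitted bit
        approx += step if bit else -step       # receiver mirrors this update
        bits.append(bit)
        recon.append(approx)
    return bits, recon

# Stand-in for a slowly varying TOA trace (illustrative, not real data).
toa = [0.5 * math.sin(2 * math.pi * n / 64) for n in range(64)]
bits, recon = delta_modulate(toa, step=0.06)
# 64 one-bit transmissions replace 64 M-bit words; combined with a
# compressive sensing reduction ratio R, the volume drops by about R * M.
```

The step size must exceed the signal's per-sample slope to avoid slope overload; here the staircase tracks the sine closely, at the cost of granular noise of roughly one step.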
(This article belongs to the Special Issue Smart Systems for Information Processing in Sensor Networks)
Article
Text Mining and Sentiment Analysis of Newspaper Headlines
Information 2021, 12(10), 414; https://doi.org/10.3390/info12100414 - 09 Oct 2021
Cited by 4 | Viewed by 4938
Abstract
Text analytics are well-known in the modern era for extracting information and patterns from text. However, no study has attempted to illustrate the pattern and priorities of newspaper headlines in Bangladesh using a combination of text analytics techniques. The purpose of this paper is to examine the pattern of words that appeared on the front page of a well-known daily English newspaper in Bangladesh, The Daily Star, in 2018 and 2019. The elucidation of that era’s possible social and political context was also attempted using word patterns. The study employs three widely used and contemporary text mining techniques: word clouds, sentiment analysis, and cluster analysis. The word cloud reveals that election, kill, cricket, and Rohingya-related terms appeared more than 60 times in 2018, whereas BNP, poll, kill, AL, and Khaleda appeared more than 80 times in 2019. These indicated the country’s passion for cricket, political turmoil, and Rohingya-related issues. Furthermore, sentiment analysis reveals that words of fear and negative emotions appeared more than 600 times, whereas anger, anticipation, sadness, trust, and positive-type emotions came up more than 400 times in both years. Finally, the clustering method demonstrates that election, politics, deaths, digital security act, Rohingya, and cricket-related words exhibit similarity and belong to a similar group in 2019, whereas rape, deaths, road, and fire-related words clustered in 2018 alongside a similar-appearing group. In general, this analysis demonstrates how vividly the text mining approach depicts Bangladesh’s social, political, and law-and-order situation, particularly during election season and the country’s cricket craze, and also validates the significance of the text mining approach to understanding the overall view of a country during a particular time in an efficient manner. Full article
(This article belongs to the Special Issue Text Mining: Classification, Clustering and Extraction Techniques)
Article
New Approach of Measuring Human Personality Traits Using Ontology-Based Model from Social Media Data
Information 2021, 12(10), 413; https://doi.org/10.3390/info12100413 - 08 Oct 2021
Cited by 6 | Viewed by 1838
Abstract
Human online activities leave digital traces that provide a perfect opportunity to understand their behavior better. Social media is an excellent place to spark conversations or state opinions. Thus, it generates large-scale textual data. In this paper, we harness those data to support the effort of personality measurement. Our first contribution is to develop the Big Five personality trait-based model to detect human personalities from their textual data in the Indonesian language. The model uses an ontology approach instead of the more famous machine learning approach; the former better captures the meaning and intention of phrases and words in the domain of human personality. The legacy, more thorough ways to assess personality are interviews and questionnaires. Still, there are many real-life applications where an alternative method, cheaper and faster than the legacy methodology, is needed to select individuals based on their personality. The second contribution is to support the model implementation by building a personality measurement platform. We use two distinct features for the model: an n-gram sorting algorithm to parse the textual data and a crowdsourcing mechanism that facilitates public involvement in adding to and filtering the ontology corpus. Full article
Article
GPR Investigation at the Archaeological Site of Le Cesine, Lecce, Italy
Information 2021, 12(10), 412; https://doi.org/10.3390/info12100412 - 08 Oct 2021
Viewed by 1077
Abstract
In this contribution, we present some results achieved at the archaeological site of Le Cesine, close to Lecce, in southern Italy. The investigations were performed at a site close to the Adriatic Sea, only slightly explored up to now, where the presence of an ancient Roman harbour is alleged on the basis of remains visible mostly below the current sea level. This measurement campaign was carried out as a short-term scientific mission (STSM) within the European COST Action 17131 (acronym SAGA), and was aimed at identifying possible points where future localized excavations might, and hopefully will, be performed in the next few years. Both a traditional elaboration and an innovative data processing approach based on a linear inverse scattering model have been applied to the data. Full article
(This article belongs to the Special Issue Techniques and Data Analysis in Cultural Heritage)
Article
Big-Data Management: A Driver for Digital Transformation?
Information 2021, 12(10), 411; https://doi.org/10.3390/info12100411 - 07 Oct 2021
Cited by 6 | Viewed by 3204
Abstract
The rapid evolution of technology has led to a global increase in data. Due to the large volume of data, a new characterization emerged in order to better describe the new situation, namely, big data. Living in the Era of Information, businesses are flooded with information through data processing. The digital age has pushed businesses towards finding a strategy to transform themselves in order to keep up with market changes, successfully compete, and gain a competitive advantage. The aim of the current paper is to extensively analyze the existing online literature to find the main (most valuable) components of big-data management according to researchers and the business community. Moreover, an analysis was conducted to help readers understand how these components can be used by existing businesses during the process of digital transformation. Full article
Article
How Many Participants Are Required for Validation of Automated Vehicle Interfaces in User Studies?
Information 2021, 12(10), 410; https://doi.org/10.3390/info12100410 - 06 Oct 2021
Viewed by 875
Abstract
Empirical validation and verification procedures require the sophisticated development of research methodology. Therefore, researchers and practitioners in human–machine interaction and the automotive domain have developed standardized test protocols for user studies. These protocols are used to evaluate human–machine interfaces (HMI) for driver distraction or automated driving. A system or HMI is validated in regard to certain criteria that it can either pass or fail. One important aspect is the number of participants to include in the study and the respective number of potential failures concerning the pass/fail criteria of the test protocol. By applying binomial tests, the present work provides recommendations on how many participants should be included in a user study. It sheds light on the degree to which inferences about a population are permitted from a sample with a specific pass/fail ratio. The calculations take into account different sample sizes and different numbers of observations within a sample that fail the criterion of interest. The analyses show that the required sample sizes increase to high numbers as the degree of controllability assumed for the population rises. The required sample sizes for a specific controllability verification (e.g., 85%) also increase if there are observed cases of failure with regard to the safety criteria. In conclusion, the present work outlines potential sample sizes and valid inferences about populations given the number of observed failures in a user study. Full article
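The binomial argument can be sketched as follows: assume the true pass rate sits exactly at the controllability level to be verified, then search for the smallest sample size at which the observed number of failures would still be convincing at the chosen confidence level. This is a standard one-sided exact binomial bound, not necessarily the paper's exact protocol.

```python
import math

def required_sample_size(controllability, confidence, failures=0):
    """Smallest n such that observing at most `failures` failed trials
    is significant evidence (at `confidence`) that the true pass rate
    is at least `controllability` (exact one-sided binomial bound)."""
    alpha = 1.0 - confidence
    p_fail = 1.0 - controllability
    n = failures + 1
    while True:
        # P(at most `failures` failures in n trials | boundary pass rate)
        prob = sum(math.comb(n, k) * p_fail ** k * controllability ** (n - k)
                   for k in range(failures + 1))
        if prob <= alpha:
            return n
        n += 1

# E.g., verifying 85% controllability at 95% confidence needs 19
# participants with zero observed failures; one failure raises this to 30.
```

With zero failures the bound reduces to the familiar c^n ≤ α, i.e. n ≥ ln(α)/ln(c), which is why required sample sizes climb steeply as the assumed controllability approaches 100%.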
Article
Combating Fake News with Transformers: A Comparative Analysis of Stance Detection and Subjectivity Analysis
Information 2021, 12(10), 409; https://doi.org/10.3390/info12100409 - 03 Oct 2021
Cited by 3 | Viewed by 1456
Abstract
The widespread use of social networks has brought to the foreground a very important issue, the veracity of the information circulating within them. Many natural language processing methods have been proposed in the past to assess a post’s content with respect to its reliability; however, end-to-end approaches do not yet match human ability. To overcome this, in this paper, we propose the use of a more modular approach that produces indicators about a post’s subjectivity and the stance provided by the replies it has received to date, letting the user decide whether (s)he trusts or does not trust the provided information. To this end, we fine-tuned state-of-the-art transformer-based language models and compared their performance with previous related work on stance detection and subjectivity analysis. Finally, we discuss the obtained results. Full article
(This article belongs to the Special Issue Information Spreading on Networks)
Article
VERCASM-CPS: Vulnerability Analysis and Cyber Risk Assessment for Cyber-Physical Systems
Information 2021, 12(10), 408; https://doi.org/10.3390/info12100408 - 30 Sep 2021
Cited by 8 | Viewed by 1883
Abstract
Since Cyber-Physical Systems (CPS) are widely used in critical infrastructures, it is essential to protect their assets from cyber attacks to increase the level of security, safety and trustworthiness, prevent failure developments, and minimize losses. It is necessary to analyze the CPS configuration in an automatic mode to detect the most vulnerable CPS components and reconfigure or replace them promptly. In this paper, we present a methodology to determine the most secure CPS configuration by using a public database of cyber vulnerabilities to identify the most secure CPS components. We also integrate the CPS cyber risk analysis with a Controlled Moving Target Defense, which either replaces the vulnerable CPS components or re-configures the CPS to harden it, while the vulnerable components are being replaced. Our solution helps to design a more secure CPS by updating the configuration of existing CPS to make them more resilient against cyber attacks. In this paper, we will compare cyber risk scores for different CPS configurations and show that the Windows® 10 build 20H2 operating system is more secure than Linux Ubuntu® 20.04, while Red Hat® Enterprise® Linux is the most secure in some system configurations. Full article
(This article belongs to the Special Issue Secure and Trustworthy Cyber–Physical Systems)
Article
Application of Multi-Criteria Decision-Making Models for the Evaluation Cultural Websites: A Framework for Comparative Analysis
Information 2021, 12(10), 407; https://doi.org/10.3390/info12100407 - 30 Sep 2021
Cited by 3 | Viewed by 1432
Abstract
Websites in the post COVID-19 era play a very important role as the Internet gains more visitors. A website may significantly contribute to the electronic presence of a cultural organization, such as a museum, but its success should be confirmed by an evaluation experiment. Taking into account the importance of such an experiment, we present in this paper DEWESA, a generalized framework that uses and compares multi-criteria decision-making models for the evaluation of cultural websites. DEWESA presents in detail the steps that have to be followed for applying and comparing multi-criteria decision-making models for cultural websites’ evaluation. The framework is implemented in the current paper for the evaluation of museum websites. In the particular case study, five different models are implemented (SAW, WPM, TOPSIS, VIKOR, and PROMETHEE II) and compared. The comparative analysis is completed by a sensitivity analysis, in which the five multi-criteria decision-making models are compared concerning their robustness. Full article
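Of the five compared models, SAW (simple additive weighting) is the easiest to sketch: normalize each criterion column, then rank alternatives by the weighted sum of normalized values. The websites, criteria, weights, and scores below are invented for illustration and do not come from the paper.

```python
def saw_rank(matrix, weights, benefit):
    """Simple Additive Weighting: normalize each criterion column, then
    score each alternative by the weighted sum of its normalized values."""
    cols = list(zip(*matrix))
    norm = []
    for j, col in enumerate(cols):
        if benefit[j]:                      # larger is better
            hi = max(col)
            norm.append([v / hi for v in col])
        else:                               # smaller is better (cost criterion)
            lo = min(col)
            norm.append([lo / v for v in col])
    scores = [sum(w * norm[j][i] for j, w in enumerate(weights))
              for i in range(len(matrix))]
    ranking = sorted(range(len(matrix)), key=scores.__getitem__, reverse=True)
    return ranking, scores

# Three hypothetical museum websites scored on usability, content quality
# (both benefit criteria) and page-load seconds (a cost criterion).
sites = [[8, 7, 2.0], [6, 9, 1.5], [9, 5, 3.0]]
ranking, scores = saw_rank(sites, [0.5, 0.3, 0.2], [True, True, False])
```

Sensitivity analysis of the kind the abstract mentions amounts to perturbing the weight vector and checking whether `ranking` changes, which measures the robustness of each model's verdict.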
(This article belongs to the Special Issue Evaluating Methods and Decision Making)
Article
PFMNet: Few-Shot Segmentation with Query Feature Enhancement and Multi-Scale Feature Matching
Information 2021, 12(10), 406; https://doi.org/10.3390/info12100406 - 30 Sep 2021
Viewed by 1010
Abstract
The datasets used by the latest semantic segmentation models often need to be manually labeled pixel by pixel, which is time-consuming and requires much effort. General models are unable to make good predictions for new, previously unseen categories; few-shot segmentation has emerged to address this. However, few-shot segmentation still faces two challenges. One is the inadequate exploration of the semantic information conveyed in high-level features, and the other is the inconsistency of segmenting objects at different scales. To solve these two problems, we propose a prior feature matching network (PFMNet). It includes two novel modules: (1) the query feature enhancement module (QFEM), which makes full use of the high-level semantic information in the support set to enhance the query feature, and (2) the multi-scale feature matching module (MSFMM), which increases the matching probability of objects at multiple scales. Our method achieves an intersection-over-union average score of 61.3% for one-shot segmentation and 63.4% for five-shot segmentation, surpassing the state-of-the-art results by 0.5% and 1.5%, respectively. Full article
Article
UGRansome1819: A Novel Dataset for Anomaly Detection and Zero-Day Threats
Information 2021, 12(10), 405; https://doi.org/10.3390/info12100405 - 30 Sep 2021
Cited by 4 | Viewed by 2066
Abstract
This research attempts to introduce the production methodology of an anomaly detection dataset using ten desirable requirements. Subsequently, the article presents the produced dataset named UGRansome, created with up-to-date and modern network traffic (netflow), which represents cyclostationary patterns of normal and abnormal classes of threatening behaviours. It was discovered that the timestamp of various network attacks is under one minute, and this feature pattern was used to record the time taken by the threat to infiltrate a network node. The main asset of the proposed dataset is its implication in the detection of zero-day attacks and anomalies that have not been explored before and cannot be recognised by known threat signatures. For instance, the UDP Scan attack has been found to utilise the lowest netflow in the corpus, while Razy utilises the highest one. In turn, the EDA2 and Globe malware are the most abnormal zero-day threats in the proposed dataset. These feature patterns are included in the corpus, but derived from two well-known datasets, namely, UGR’16 and ransomware that include real-life instances. The former incorporates cyclostationary patterns while the latter includes ransomware features. The UGRansome dataset was tested with cross-validation and compared to the KDD99 and NSL-KDD datasets to assess the performance of Ensemble Learning algorithms. False alarms have been minimized with a null empirical error during the experiment, which demonstrates that implementing the Random Forest algorithm applied to UGRansome can facilitate accurate results to enhance zero-day threat detection. Additionally, most zero-day threats such as Razy, Globe, EDA2, and TowerWeb are recognised as advanced persistent threats that are cyclostationary in nature, and it is predicted that they will use spamming and phishing for intrusion. Lastly, balancing UGRansome was found to be NP-hard, since the real-life threat classes do not have a uniform distribution in terms of the number of instances. Full article
Article
Biological Tissue Damage Monitoring Method Based on IMWPE and PNN during HIFU Treatment
Information 2021, 12(10), 404; https://doi.org/10.3390/info12100404 - 30 Sep 2021
Cited by 2 | Viewed by 875
Abstract
Biological tissue damage monitoring is an indispensable part of high-intensity focused ultrasound (HIFU) treatment. As a nonlinear method, multi-scale permutation entropy (MPE) is widely used in the monitoring of biological tissue. However, the traditional MPE method neglects the amplitude information when calculating the time series complexity, and the stability of MPE is poor due to defects in the coarse-grained process. In order to solve these problems, the method of improved coarse-grained multi-scale weighted permutation entropy (IMWPE) is proposed in this paper. Compared with MPE, the IMWPE method not only includes the signal amplitude when calculating the signal complexity, but also improves the stability of the entropy value. The IMWPE method is applied to the HIFU echo signals during HIFU treatment, and a probabilistic neural network (PNN) is used for monitoring the biological tissue damage. The results show that compared with multi-scale sample entropy (MSE)-PNN and MPE-PNN methods, the proposed IMWPE-PNN method can correctly identify all the normal tissues, and can more effectively identify damaged tissues and denatured tissues. The recognition rate for the three kinds of biological tissue is high, up to 96.7%. This means that the IMWPE-PNN method can better monitor the status of biological tissue damage during HIFU treatment. Full article
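The ordinal-pattern idea underlying MPE can be sketched as follows. This is the plain single-scale permutation entropy; the paper's improved coarse-graining and amplitude weighting (IMWPE) are not reproduced here.

```python
import math
from collections import Counter

def permutation_entropy(series, order=3):
    """Shannon entropy of ordinal patterns, normalized to [0, 1].
    Plain single-scale variant: amplitudes are ignored, unlike the
    weighted IMWPE described in the abstract."""
    patterns = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # The ordinal pattern: the argsort of the window's values.
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))
```

A monotone series yields entropy 0 (only one ordinal pattern ever occurs), while irregular series approach 1; MPE repeats this computation on coarse-grained copies of the signal at several scales.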
(This article belongs to the Special Issue Biosignal and Medical Image Processing)