Information, Volume 15, Issue 6 (June 2024) – 70 articles

Cover Story: Structured science summaries using properties beyond traditional keywords enhance science findability. Current methods, such as those used by the Open Research Knowledge Graph (ORKG), involve manual curation, which is labor-intensive and inconsistent. We propose using Large Language Models (LLMs) to automatically suggest these properties. Our study compares ORKG’s manually curated properties with those generated by LLMs, evaluating performance from the following four perspectives: semantic alignment, property mapping accuracy, cosine similarity, and expert surveys. LLMs show potential as recommendation systems for structuring science, but further fine-tuning is recommended to improve their alignment with scientific tasks and mimicry of human expertise.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
19 pages, 25362 KiB  
Article
An Anomaly Detection Approach to Determine Optimal Cutting Time in Cheese Formation
by Andrea Loddo, Davide Ghiani, Alessandra Perniciano, Luca Zedda, Barbara Pes and Cecilia Di Ruberto
Information 2024, 15(6), 360; https://doi.org/10.3390/info15060360 (registering DOI) - 18 Jun 2024
Abstract
The production of cheese, a beloved culinary delight worldwide, faces challenges in maintaining consistent product quality and operational efficiency. One crucial stage in this process is determining the precise cutting time during curd formation, which significantly impacts the quality of the cheese. Misjudging this timing can lead to the production of inferior products, harming a company’s reputation and revenue. Conventional methods often fall short of accurately assessing variations in coagulation conditions due to the inherent potential for human error. To address this issue, we propose an anomaly-detection-based approach. In this approach, we treat the class representing curd formation as the anomaly to be identified. Our proposed solution involves utilizing a one-class, fully convolutional data description network, which we compared against several state-of-the-art methods to detect deviations from the standard coagulation patterns. Encouragingly, our results show F1 scores of up to 0.92, indicating the effectiveness of our approach. Full article
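The anomaly-detection framing above can be illustrated with a much simpler stand-in for the paper's one-class fully convolutional data description network: fit a profile of "normal" coagulation frames, score new frames by their distance to it, and evaluate with the same F1 metric. The features, threshold, and data below are toy values, not the authors' pipeline.

```python
# Minimal one-class anomaly detection sketch: learn a "normal" profile from
# standard coagulation frames, then flag frames whose distance to that
# profile exceeds a threshold. Illustrative only.

def fit_centroid(normal_samples):
    """Mean feature vector of the normal (non-curd) training frames."""
    dim = len(normal_samples[0])
    return [sum(s[i] for s in normal_samples) / len(normal_samples)
            for i in range(dim)]

def anomaly_score(sample, centroid):
    """Euclidean distance to the normal centroid."""
    return sum((a - b) ** 2 for a, b in zip(sample, centroid)) ** 0.5

def f1_score(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy 2-D features: normal frames cluster near (1, 1); curd frames drift away.
normal = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1.0, 1.0]]
test = [[1.0, 1.0], [0.95, 1.05], [3.0, 3.2], [2.8, 3.1]]
labels = [0, 0, 1, 1]  # 1 = curd formation (the anomaly class)

c = fit_centroid(normal)
preds = [1 if anomaly_score(s, c) > 0.5 else 0 for s in test]
print(f1_score(labels, preds))  # 1.0 on this toy data
```

A real detector replaces the centroid distance with the learned network's anomaly score, but the thresholding and F1 evaluation work the same way.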
32 pages, 8136 KiB  
Article
Social Media Influencers: Customer Attitudes and Impact on Purchase Behaviour
by Galina Ilieva, Tania Yankova, Margarita Ruseva, Yulia Dzhabarova, Stanislava Klisarova-Belcheva and Marin Bratkov
Information 2024, 15(6), 359; https://doi.org/10.3390/info15060359 (registering DOI) - 18 Jun 2024
Abstract
Social media marketing has become a crucial component of contemporary business strategies, significantly influencing brand visibility, customer engagement, and sales growth. The aim of this study is to investigate and determine the key factors guiding customer attitudes towards social media influencers, and, on that basis, to explore their effects on purchase intentions regarding advertised products or services. A total of 376 filled-in questionnaires from an online survey were analysed. The main characteristics of digital influencers’ behaviour that affect consumer perceptions have been systematized and categorized through a combination of both traditional and advanced data analysis methods. Structural equation modelling (SEM), machine learning and multi-criteria decision-making (MCDM) methods were selected to uncover the hidden dependencies between variables from the perspective of social media users. The developed models elucidate the underlying relationships that shape the acceptance mechanism of influencers’ messages. The obtained results provide specific recommendations for stakeholders across the social media marketing value chain. Marketers can make informed decisions and optimize influencer marketing strategies to enhance user experience and increase conversion rates. Working collaboratively, marketers and influencers can create impactful and successful marketing campaigns that resonate with the target audience and drive meaningful results. Customers benefit from more tailored and engaging influencer content that aligns with their interests and preferences, fostering a stronger connection with brands and potentially affecting their purchase decisions. As the perception of customer satisfaction is an individual and evolving process, stakeholders should organize regular evaluations of influencer marketing data and explore the possibilities to ensure the continuous improvement of this e-marketing channel. Full article
18 pages, 1784 KiB  
Article
Multivariate Hydrological Modeling Based on Long Short-Term Memory Networks for Water Level Forecasting
by Jackson B. Renteria-Mena, Douglas Plaza and Eduardo Giraldo
Information 2024, 15(6), 358; https://doi.org/10.3390/info15060358 - 15 Jun 2024
Abstract
In the Department of Chocó, flooding poses a recurrent and significant challenge due to heavy rainfall and the dense network of rivers characterizing the region. However, the lack of adequate infrastructure to prevent and predict floods exacerbates this situation. The absence of early warning systems, the scarcity of meteorological and hydrological monitoring stations, and deficiencies in urban planning contribute to the vulnerability of communities to these phenomena. It is imperative to invest in flood prediction and prevention infrastructure, including advanced monitoring systems, the development of hydrological prediction models, and the construction of hydraulic infrastructure, to reduce risk and protect vulnerable communities in Chocó. Additionally, raising public awareness of the associated risks and encouraging the adoption of mitigation and preparedness measures throughout the population are essential. This study introduces a novel approach for the multivariate prediction of hydrological variables, specifically focusing on water level forecasts for two hydrological stations along the Atrato River in Colombia. The model, utilizing a specialized type of recurrent neural network (RNN) called the long short-term memory (LSTM) network, integrates data from hydrological variables, such as the flow, precipitation, and level. With a model architecture featuring four inputs and two outputs, where flow and precipitation serve as inputs and the level serves as the output for each station, the LSTM model is adept at capturing the complex dynamics and cross-correlations among these variables. Validation involves comparing the LSTM model’s performance with linear and nonlinear Autoregressive with Exogenous Input (NARX) models, considering factors such as the estimation error and computational time. Furthermore, this study explores different scenarios for water level prediction, aiming to utilize the proposed approach as an effective flood early warning system. 
Full article
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
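For readers unfamiliar with the LSTM building block the forecasting model relies on, the sketch below walks a short scalar sequence through a single LSTM cell in plain Python. The toy weights and scalar state are illustrative assumptions; the authors' network uses trained multi-dimensional gates over flow, precipitation, and level inputs.

```python
import math

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step with scalar input and state (toy sizes).
    W maps each gate ('i'nput, 'f'orget, 'o'utput, candidate 'g')
    to its (input weight, recurrent weight, bias)."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    i = sigmoid(W['i'][0] * x + W['i'][1] * h_prev + W['i'][2])
    f = sigmoid(W['f'][0] * x + W['f'][1] * h_prev + W['f'][2])
    o = sigmoid(W['o'][0] * x + W['o'][1] * h_prev + W['o'][2])
    g = math.tanh(W['g'][0] * x + W['g'][1] * h_prev + W['g'][2])
    c = f * c_prev + i * g   # cell state: keep part of the old, write new
    h = o * math.tanh(c)     # hidden state, i.e., the cell's output
    return h, c

# Feed a short toy sequence (e.g., successive normalized flow readings).
W = {k: (0.5, 0.5, 0.0) for k in 'ifog'}
h, c = 0.0, 0.0
for x in [0.2, 0.5, 0.9]:
    h, c = lstm_step(x, h, c, W)
print(round(h, 4), round(c, 4))
```

The gating is what lets the cell carry information across many time steps, which is why LSTMs suit hydrological series with long lags between rainfall and river response.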
11 pages, 1100 KiB  
Brief Report
Autonomous Vehicle Safety through the SIFT Method: A Conceptual Analysis
by Muhammad Anshari, Mohammad Nabil Almunawar, Masairol Masri, Norma Latif Fitriyani and Muhammad Syafrudin
Information 2024, 15(6), 357; https://doi.org/10.3390/info15060357 - 15 Jun 2024
Abstract
This study aims to provide a conceptual analysis of the dynamic transformations occurring in autonomous vehicles (AVs), placing a specific emphasis on the safety implications for pedestrians and passengers. AVs, also known as self-driving automobiles, are positioned as potential disruptors in the contemporary transportation landscape, offering heightened safety and improved traffic efficiency. Despite these promises, the intricate nature of road scenarios and the looming specter of misinformation pose challenges that can compromise the efficacy of AV decision-making. A crucial aspect of the proposed verification process is the incorporation of the stop, investigate the source, find better coverage, trace claims, quotes, and media to the original context (SIFT) method. The SIFT method, originally designed to combat misinformation, emerges as a valuable mechanism for enhancing AV safety by ensuring the accuracy and reliability of information influencing autonomous decision-making processes. Full article
(This article belongs to the Special Issue Automotive System Security: Recent Advances and Challenges)
21 pages, 971 KiB  
Article
Prominent User Segments in Online Consumer Recommendation Communities: Capturing Behavioral and Linguistic Qualities with User Comment Embeddings
by Apostolos Skotis and Christos Livas
Information 2024, 15(6), 356; https://doi.org/10.3390/info15060356 - 15 Jun 2024
Abstract
Online conversation communities have become an influential source of consumer recommendations in recent years. We propose a set of meaningful user segments which emerge from user embedding representations, based exclusively on comments’ text input. Data were collected from three popular recommendation communities on Reddit, covering the domains of book and movie suggestions. We utilized two neural language model methods to produce user embeddings, namely Doc2Vec and Sentence-BERT. Embedding interpretation issues were addressed by examining latent factors’ associations with behavioral, sentiment, and linguistic variables, acquired using the VADER, LIWC, and LFTK libraries in Python. User clusters were identified, having different levels of engagement and linguistic characteristics. The latent features of both approaches were strongly correlated with several user behavioral and linguistic indicators. Both approaches managed to capture significant variability in writing styles and quality, such as length, readability, use of function words, and complexity. However, the Doc2Vec features better described users by varying levels of contribution, while S-BERT-based features were more closely adapted to users’ varying emotional engagement. Prominent segments revealed prolific users with formal, intuitive, emotionally distant, and highly analytical styles, as well as users who were less elaborate, less consistent, but more emotionally connected. The observed patterns were largely similar across communities. Full article
(This article belongs to the Section Information Processes)
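The segmentation idea rests on comparing users in embedding space. A minimal sketch, assuming toy 3-D vectors in place of real Doc2Vec or Sentence-BERT outputs, assigns each user to the segment whose centroid is most cosine-similar; the segment names are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def nearest_segment(user_vec, centroids):
    """Assign a user to the segment whose centroid is most similar."""
    return max(centroids, key=lambda name: cosine(user_vec, centroids[name]))

# Toy 3-D "user embeddings"; real ones come from Doc2Vec or Sentence-BERT
# and have hundreds of dimensions.
centroids = {
    'analytical': [0.9, 0.1, 0.0],
    'emotional':  [0.1, 0.9, 0.2],
}
user = [0.8, 0.2, 0.1]
print(nearest_segment(user, centroids))  # 'analytical' for this toy vector
```

Clustering algorithms such as k-means iterate exactly this assignment step while re-estimating the centroids from the assigned users.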
22 pages, 1014 KiB  
Article
The Application of Machine Learning in Diagnosing the Financial Health and Performance of Companies in the Construction Industry
by Jarmila Horváthová, Martina Mokrišová and Alexander Schneider
Information 2024, 15(6), 355; https://doi.org/10.3390/info15060355 - 14 Jun 2024
Abstract
Diagnosing the financial health of companies and their performance is currently one of the basic questions that attracts the attention of researchers and experts in the field of finance and management. In this study, we focused on proposing models for measuring the financial health and performance of businesses. These models were built for companies doing business within the Slovak construction industry. Construction companies are distinguished by higher liquidity and a different capital structure compared to other industries. Therefore, simple classifiers cannot effectively predict their financial health. In this paper, we investigated whether boosting ensembles are a suitable alternative for performance analysis. The result of the research is the finding that deep learning is a suitable approach for measuring the financial health and performance of the analyzed sample of companies. The developed models achieved perfect classification accuracy when using the AdaBoost and gradient-boosting algorithms. The application of a decision tree as a base learner also proved to be very appropriate. The result is a decision tree with adequate depth and very good interpretability. Full article
(This article belongs to the Special Issue AI Applications in Construction and Infrastructure)
14 pages, 1414 KiB  
Review
The Use of AI in Software Engineering: A Synthetic Knowledge Synthesis of the Recent Research Literature
by Peter Kokol
Information 2024, 15(6), 354; https://doi.org/10.3390/info15060354 - 14 Jun 2024
Abstract
Artificial intelligence (AI) has witnessed an exponential increase in use in various applications. Recently, the academic community started to research and inject new AI-based approaches to provide solutions to traditional software-engineering problems. However, a comprehensive and holistic understanding of the current status is still lacking. To close this gap, synthetic knowledge synthesis was used to map the research landscape of the contemporary research literature on the use of AI in software engineering. The synthesis resulted in 15 research categories and 5 themes—namely, natural language processing in software engineering, use of artificial intelligence in the management of the software development life cycle, use of machine learning in fault/defect prediction and effort estimation, employment of deep learning in intelligent software engineering and code management, and mining software repositories to improve software quality. The most productive country was China (n = 2042), followed by the United States (n = 1193), India (n = 934), Germany (n = 445), and Canada (n = 381). A high percentage of papers (47.4%) were funded, showing the strong interest in this research topic. The convergence of AI and software engineering can significantly reduce the required resources, improve the quality, enhance the user experience, and improve the well-being of software developers. Full article
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)
16 pages, 1604 KiB  
Review
Strategic Approaches in Network Communication and Information Security Risk Assessment
by Nadher Alsafwani, Yousef Fazea and Fuad Alnajjar
Information 2024, 15(6), 353; https://doi.org/10.3390/info15060353 - 14 Jun 2024
Abstract
Risk assessment is a critical sub-process in information security risk management (ISRM) that is used to identify an organization’s vulnerabilities and threats as well as evaluate current and planned security controls. Therefore, adequate resources and return on investments should be considered when reviewing assets. However, many existing frameworks lack granular guidelines and mostly operate on qualitative human input and feedback, which increases subjective and unreliable judgment within organizations. Consequently, current risk assessment methods require additional time and cost to test all information security controls thoroughly. The principal aim of this study is to critically review the Information Security Control Prioritization (ISCP) models that improve the Information Security Risk Assessment (ISRA) process, by using literature analysis to investigate ISRA’s main problems and challenges. We recommend that designing a streamlined and standardized Information Security Control Prioritization model would greatly reduce the uncertainty, cost, and time associated with the assessment of information security controls, thereby helping organizations prioritize critical controls reliably and more efficiently based on clear and practical guidelines. Full article
(This article belongs to the Section Information Security and Privacy)
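The reviewed ISCP models are not specified in the abstract, but the core idea of replacing ad hoc qualitative judgment with a standardized ranking can be sketched with the classic likelihood-times-impact risk score; the control names and numbers below are invented for illustration.

```python
def risk_score(likelihood, impact):
    """Classic quantitative risk estimate: likelihood (0-1) times impact (1-10)."""
    return likelihood * impact

def prioritize_controls(controls):
    """Rank security controls by the risk they mitigate, highest first."""
    return sorted(controls,
                  key=lambda c: risk_score(c['likelihood'], c['impact']),
                  reverse=True)

# Hypothetical controls with assumed likelihood/impact estimates.
controls = [
    {'name': 'patch management',   'likelihood': 0.8, 'impact': 7},
    {'name': 'badge access audit', 'likelihood': 0.2, 'impact': 4},
    {'name': 'MFA rollout',        'likelihood': 0.6, 'impact': 9},
]
ranked = prioritize_controls(controls)
print([c['name'] for c in ranked])
# ['patch management', 'MFA rollout', 'badge access audit']
```

A standardized scoring scheme like this is what lets organizations test the few highest-ranked controls first instead of assessing all controls exhaustively.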
24 pages, 1627 KiB  
Article
A Novel Radio Network Information Service (RNIS) to MEC Framework in B5G Networks
by Kaíque M. R. Cunha, Sand Correa, Fabrizzio Soares, Maria Ribeiro, Waldir Moreira, Raphael Gomes, Leandro A. Freitas and Antonio Oliveira-Jr
Information 2024, 15(6), 352; https://doi.org/10.3390/info15060352 - 13 Jun 2024
Abstract
Multi-Access Edge Computing (MEC) reduces latency, provides high-bandwidth applications with real-time performance and reliability, supporting new applications and services for the present and future Beyond the Fifth Generation (B5G). Radio Network Information Service (RNIS) plays a crucial role in obtaining information from the Radio Access Network (RAN). With the advent of 5G, RNIS requires improvements to handle information from the new generations of RAN. In this scenario, improving the RNIS is essential to boost new applications according to the strict requirements imposed. Hence, this work proposes a new RNIS as a service to the MEC framework in B5G networks to improve MEC applications. The service is validated and evaluated, and demonstrates the ability to adequately serve a large number of MEC apps (two, four, six and eight) and from 100 to 2000 types of User Equipment (UE). Full article
(This article belongs to the Special Issue Advances in Communication Systems and Networks)
22 pages, 3983 KiB  
Article
Leveraging Machine Learning to Analyze Semantic User Interactions in Visual Analytics
by Dong Hyun Jeong, Bong Keun Jeong and Soo Yeon Ji
Information 2024, 15(6), 351; https://doi.org/10.3390/info15060351 - 13 Jun 2024
Abstract
In the field of visualization, understanding users’ analytical reasoning is important for evaluating the effectiveness of visualization applications. Several studies have been conducted to capture and analyze user interactions to comprehend this reasoning process. However, few have successfully linked these interactions to users’ reasoning processes. This paper introduces an approach that addresses this limitation by correlating semantic user interactions with analysis decisions using an interactive wire transaction analysis system and a visual state transition matrix, both designed as visual analytics applications. The system enables interactive analysis for evaluating financial fraud in wire transactions. It also allows mapping captured user interactions and analytical decisions back onto the visualization to reveal their decision differences. The visual state transition matrix further aids in understanding users’ analytical flows, revealing their decision-making processes. Classification machine learning algorithms are applied to evaluate the effectiveness of our approach in understanding users’ analytical reasoning process by connecting the captured semantic user interactions to their decisions (i.e., suspicious, not suspicious, and inconclusive) on wire transactions. With these algorithms, an average accuracy of 72% is achieved in classifying the semantic user interactions. For classifying individual decisions, the average accuracy is 70%. Notably, the accuracy for classifying ‘inconclusive’ decisions is 83%. Overall, the proposed approach improves the understanding of users’ analytical decisions and provides a robust method for evaluating user interactions in visualization tools. Full article
(This article belongs to the Special Issue Information Visualization Theory and Applications)
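A state transition matrix of the kind the paper visualizes can be built from logged interaction sequences by counting consecutive interaction pairs and normalizing each row; the interaction categories and logs below are hypothetical, not the study's coding scheme.

```python
from collections import defaultdict

def transition_matrix(sequences):
    """Count transitions between consecutive semantic interactions,
    then normalize each row into probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

# Hypothetical interaction logs from two analysts reviewing wire transactions.
logs = [
    ['filter', 'sort', 'inspect', 'flag'],
    ['filter', 'inspect', 'inspect', 'dismiss'],
]
m = transition_matrix(logs)
print(m['filter'])  # {'sort': 0.5, 'inspect': 0.5}
```

Each row is a probability distribution over the next interaction, so differences between analysts' rows expose differences in their analytical flows.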
17 pages, 5348 KiB  
Article
Machine Learning-Based Channel Estimation Techniques for ATSC 3.0
by Yu-Sun Liu, Shingchern D. You and Yu-Chun Lai
Information 2024, 15(6), 350; https://doi.org/10.3390/info15060350 - 13 Jun 2024
Abstract
Channel estimation accuracy significantly affects the performance of orthogonal frequency-division multiplexing (OFDM) systems. In the literature, there are quite a few channel estimation methods. However, the performances of these methods deteriorate considerably when the wireless channels suffer from nonlinear distortions and interferences. Machine learning (ML) shows great potential for solving nonparametric problems. This paper proposes ML-based channel estimation methods for systems with comb-type pilot patterns and random pilot symbols, such as ATSC 3.0. We compare their performances with conventional channel estimations in ATSC 3.0 systems for linear and nonlinear channel models. We also evaluate the robustness of the ML-based methods against channel model mismatch and signal-to-noise ratio (SNR) mismatch. The results show that the ML-based channel estimations achieve good mean squared error (MSE) performance for linear and nonlinear channels if the channel statistics used for the training stage match those of the deployment stage. Otherwise, the ML estimation models may overfit the training channel, leading to poor deployment performance. Furthermore, the deep neural network (DNN)-based method does not outperform the linear channel estimation methods in nonlinear channels. Full article
(This article belongs to the Special Issue Recent Advances in Communications Technology)
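As a point of reference for the conventional baselines the ML methods are compared against, the sketch below performs a least-squares channel estimate at comb-type pilot subcarriers and linearly interpolates between them; the pilot spacing and channel values are toy numbers, not ATSC 3.0 parameters.

```python
def ls_at_pilots(received, sent):
    """Least-squares channel estimate H = Y / X at each pilot subcarrier."""
    return [y / x for y, x in zip(received, sent)]

def interpolate_channel(pilot_idx, pilot_est, n_subcarriers):
    """Linearly interpolate the complex channel between comb-type pilots."""
    H = [0j] * n_subcarriers
    pairs = list(zip(pilot_idx, pilot_est))
    for (i0, h0), (i1, h1) in zip(pairs, pairs[1:]):
        for k in range(i0, i1 + 1):
            t = (k - i0) / (i1 - i0)
            H[k] = h0 + t * (h1 - h0)
    return H

# Toy OFDM symbol: 9 subcarriers with pilots on every 4th one.
pilots = [0, 4, 8]
tx = [1 + 0j, 1 + 0j, 1 + 0j]              # known pilot symbols
rx = [0.9 + 0.1j, 1.0 + 0.2j, 1.1 + 0.3j]  # received pilot symbols
H = interpolate_channel(pilots, ls_at_pilots(rx, tx), 9)
print(H[2])  # midpoint between the first two pilot estimates
```

This kind of estimator is accurate for linear, slowly varying channels; the paper's point is that ML-based estimators can do better under nonlinear distortion, at the cost of sensitivity to mismatch between training and deployment channel statistics.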
25 pages, 1272 KiB  
Article
Driving across Markets: An Analysis of a Human–Machine Interface in Different International Contexts
by Denise Sogemeier, Yannick Forster, Frederik Naujoks, Josef F. Krems and Andreas Keinath
Information 2024, 15(6), 349; https://doi.org/10.3390/info15060349 - 12 Jun 2024
Abstract
The design of automotive human–machine interfaces (HMIs) for global consumers needs to cater to a broad spectrum of drivers. This paper comprises benchmark studies and explores how users from international markets—Germany, China, and the United States—engage with the same automotive HMI. In real driving scenarios, N = 301 participants (premium vehicle owners) completed several tasks using different interaction modalities. The multi-method approach included both self-report measures to assess preference and satisfaction through well-established questionnaires and observational measures, namely experimenter ratings, to capture interaction performance. We observed a trend towards lower preference ratings in the Chinese sample. Further, interaction performance differed across the user groups, with self-reported preference not consistently aligning with observed performance. This dissociation accentuates the importance of integrating both measures in user studies. By employing benchmark data, we provide insights into varied market-based perspectives on automotive HMIs. The findings highlight the necessity for a nuanced approach to HMI design that considers diverse user preferences and interaction patterns. Full article
20 pages, 1007 KiB  
Article
HitSim: An Efficient Algorithm for Single-Source and Top-k SimRank Computation
by Jing Bai, Junfeng Zhou, Shuotong Chen, Ming Du, Ziyang Chen and Mengtao Min
Information 2024, 15(6), 348; https://doi.org/10.3390/info15060348 - 12 Jun 2024
Abstract
SimRank is a widely used metric for evaluating vertex similarity based on graph topology, with diverse applications such as large-scale graph mining and natural language processing. The objective of the single-source and top-k SimRank query problem is to retrieve the k vertices with the largest SimRank to the source vertex. However, existing algorithms suffer from inefficiency as they require computing SimRank for all vertices to retrieve the top-k results. To address this issue, we propose an algorithm named HitSim that utilizes a branch-and-bound strategy for the single-source and top-k query. HitSim initially partitions vertices into distinct sets based on their shortest-meeting lengths to the source vertex. Subsequently, it computes an upper bound of SimRank for each set. If the upper bound of a set is no larger than the minimum value of the current top-k results, HitSim efficiently batch-prunes the unpromising vertices within the set. However, in scenarios where the graph becomes dense, certain sets with large upper bounds may contain numerous vertices with small SimRank, leading to redundant overhead when processing these vertices. To address this issue, we propose an optimized algorithm named HitSim-OPT that computes the upper bound of SimRank for each vertex instead of each set, resulting in a fine-grained and efficient pruning process. The experimental results conducted on six real-world datasets demonstrate the performance of our algorithms in efficiently addressing the single-source and top-k query problem. Full article
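The batch-pruning idea behind HitSim can be sketched generically: group candidates into sets with known upper bounds, process sets in descending bound order, and stop once a set's bound cannot beat the current k-th best score. The scoring function and groups below are toy stand-ins, not SimRank bounds.

```python
import heapq

def topk_branch_and_bound(groups, score, k):
    """Top-k search with batch pruning.
    `groups` is a list of (upper_bound, candidates) sorted by bound,
    descending; a group whose bound cannot beat the current k-th best
    score is skipped, along with every group after it."""
    heap = []          # min-heap holding the best k (score, candidate) pairs
    examined = 0
    for bound, candidates in groups:
        if len(heap) == k and bound <= heap[0][0]:
            break      # this group and all later ones are pruned wholesale
        for c in candidates:
            examined += 1
            s = score(c)
            if len(heap) < k:
                heapq.heappush(heap, (s, c))
            elif s > heap[0][0]:
                heapq.heapreplace(heap, (s, c))
    return sorted(heap, reverse=True), examined

# Toy setup: the score is just the value itself, and each group's bound
# is a valid upper bound on its members' scores.
groups = [(1.0, [0.9, 0.95]), (0.5, [0.4, 0.45]), (0.1, [0.05])]
top, examined = topk_branch_and_bound(groups, lambda v: v, k=2)
print(top, examined)  # only the first group's 2 candidates are examined
```

The pruning is sound only if every group's bound truly dominates its members' scores, which is exactly what HitSim's set-level (and HitSim-OPT's vertex-level) SimRank upper bounds guarantee.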
9 pages, 6444 KiB  
Correction
Correction: Yi et al. SFS-AGGL: Semi-Supervised Feature Selection Integrating Adaptive Graph with Global and Local Information. Information 2024, 15, 57
by Yugen Yi, Haoming Zhang, Ningyi Zhang, Wei Zhou, Xiaomei Huang, Gengsheng Xie and Caixia Zheng
Information 2024, 15(6), 347; https://doi.org/10.3390/info15060347 - 12 Jun 2024
Abstract
In the original publication [...] Full article
18 pages, 462 KiB  
Article
Factors for Customers’ AI Use Readiness in Physical Retail Stores: The Interplay of Consumer Attitudes and Gender Differences
by Nina Kolar, Borut Milfelner and Aleksandra Pisnik
Information 2024, 15(6), 346; https://doi.org/10.3390/info15060346 - 12 Jun 2024
Abstract
In addressing the nuanced interplay between consumer attitudes and Artificial Intelligence (AI) use readiness in physical retail stores, the main objective of this study is to test the impacts of prior experience, as well as perceived risks with AI technologies, self-assessment of consumers’ ability to manage AI technologies, and the moderator role of gender in this relationship. Using a quantitative cross-sectional survey, data from 243 consumers familiar with AI technologies were analyzed using structural equation modeling (SEM) methods to explore these dynamics in the context of physical retail stores. Additionally, the moderating impacts were tested after the invariance analysis across both gender groups. Key findings indicate that positive prior experience with AI technologies positively influences AI use readiness in physical retail stores, while perceived risks with AI technologies serve as a deterrent. Gender differences significantly moderate these effects, with perceived risks with AI technologies more negatively impacting women’s AI use readiness and self-assessment of the ability to manage AI technologies showing a stronger positive impact on men’s AI use readiness. The study concludes that retailers must consider these gender-specific perceptions and attitudes toward AI to develop more effective strategies for technology integration. Our research also highlights the need to address gender-specific barriers and biases when adopting AI technology. Full article
(This article belongs to the Section Information Applications)
19 pages, 2810 KiB  
Article
Large Language Models (LLMs) in Engineering Education: A Systematic Review and Suggestions for Practical Adoption
by Stefano Filippi and Barbara Motyl
Information 2024, 15(6), 345; https://doi.org/10.3390/info15060345 - 12 Jun 2024
Abstract
The use of large language models (LLMs) is now spreading in several areas of research and development. This work systematically reviews LLMs’ involvement in engineering education. Starting from a general research question, two queries were used to select 370 papers from the literature. Filtering them through several inclusion/exclusion criteria led to the selection of 20 papers. These were investigated based on eight dimensions to identify the areas of engineering disciplines that involve LLMs, where they are most present, how this involvement takes place, and which LLM-based tools are used, if any. Addressing these key issues allowed three more specific research questions to be answered, offering a clear overview of the current involvement of LLMs in engineering education. The research outcomes provide insights into the potential and challenges of LLMs in transforming engineering education, contributing to its responsible and effective future implementation. This review’s outcomes could help identify the best ways to involve LLMs in engineering education activities and measure their effectiveness over time. For this reason, this study offers suggestions on how to improve activities in engineering education. The systematic review on which this research is based conforms to current literature standards regarding inclusion/exclusion criteria and quality assessments, in order to make the results as objective as possible and easily replicable. Full article
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)
21 pages, 5173 KiB  
Article
Knowledge-Driven and Diffusion Model-Based Methods for Generating Historical Building Facades: A Case Study of Traditional Minnan Residences in China
by Sirui Xu, Jiaxin Zhang and Li Yunqin
Information 2024, 15(6), 344; https://doi.org/10.3390/info15060344 - 11 Jun 2024
Viewed by 194
Abstract
The preservation of historical traditional architectural ensembles faces multifaceted challenges, and the need for facade renovation and updates has become increasingly prominent. In conventional architectural updating and renovation processes, assessing design schemes and redesigning components are often time-consuming and labor-intensive. The knowledge-driven method utilizes a wide range of knowledge resources, such as historical documents, architectural drawings, and photographs, commonly used to guide and optimize the conservation, restoration, and management of architectural heritage. Recently, the emergence of artificial intelligence-generated content (AIGC) technologies has provided new solutions for creating architectural facades, introducing a new research paradigm to the renovation plans for historic districts with their variety of options and high efficiency. In this study, we propose a workflow combining Grasshopper with Stable Diffusion: starting with Grasshopper to generate concise line drawings, then using the ControlNet and low-rank adaptation (LoRA) models to produce images of traditional Minnan architectural facades, allowing designers to quickly preview and modify the facade designs during the renovation of traditional architectural clusters. Our research results demonstrate Stable Diffusion’s precise understanding and execution ability concerning architectural facade elements, capable of generating regional traditional architectural facades that meet architects’ requirements for style, size, and form based on existing images and prompt descriptions, revealing the immense potential for application in the renovation of traditional architectural groups and historic districts. It should be noted that the correlation between specific architectural images and proprietary term prompts still requires further expansion due to the limitations of the database. Although the model generally performs well when trained on traditional Chinese ancient buildings, the accuracy and clarity of more complex decorative parts still need enhancement, necessitating further exploration of solutions for handling facade details in the future. Full article
(This article belongs to the Special Issue AI Applications in Construction and Infrastructure)
33 pages, 2156 KiB  
Article
Identification of Optimal Data Augmentation Techniques for Multimodal Time-Series Sensory Data: A Framework
by Nazish Ashfaq, Muhammad Hassan Khan and Muhammad Adeel Nisar
Information 2024, 15(6), 343; https://doi.org/10.3390/info15060343 - 11 Jun 2024
Viewed by 350
Abstract
Recently, the research community has shown significant interest in the continuous temporal data obtained from motion sensors in wearable devices. These data are useful for classifying and analysing different human activities in many application areas such as healthcare, sports and surveillance. The literature has presented a multitude of deep learning models that aim to derive a suitable feature representation from temporal sensory input. However, the presence of a substantial quantity of annotated training data is crucial to adequately train the deep networks. Nevertheless, the data originating from wearable devices are vast but largely unlabeled, which hinders our ability to train the models with optimal efficiency and leads to overfitting. The contribution of the proposed research is twofold: firstly, it involves a systematic evaluation of fifteen different augmentation strategies to solve the inadequacy problem of labeled data, which plays a critical role in classification tasks. Secondly, it introduces an automatic feature-learning technique, proposing a Multi-Branch Hybrid Conv-LSTM network to classify human activities of daily living using multimodal data from different wearable smart devices. The objective of this study is to introduce an ensemble deep model that effectively captures intricate patterns and interdependencies within temporal data. The term “ensemble model” refers to the fusion of distinct deep models, with the objective of leveraging their individual strengths to develop a solution that is more robust and efficient. A comprehensive assessment of the ensemble models is conducted using data-augmentation techniques on two prominent benchmark datasets: CogAge and UniMiB-SHAR. The proposed network employs a range of data-augmentation methods to improve the accuracy of atomic and composite activities, resulting in a 5% increase in accuracy for composite activities and a 30% increase for atomic activities. Full article
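To illustrate the kind of augmentation strategies the abstract refers to, here is a minimal Python sketch of three common time-series augmentations (jittering, scaling, window slicing). The function names and parameters are ours for illustration; they are not the paper's fifteen evaluated strategies.

```python
import random

def jitter(signal, sigma=0.05, rng=None):
    """Add small Gaussian noise to each sample (simulates sensor noise)."""
    rng = rng or random.Random(0)
    return [x + rng.gauss(0.0, sigma) for x in signal]

def scale(signal, sigma=0.1, rng=None):
    """Multiply the whole window by one random factor drawn around 1.0."""
    rng = rng or random.Random(0)
    factor = rng.gauss(1.0, sigma)
    return [x * factor for x in signal]

def window_slice(signal, ratio=0.8):
    """Crop a centred sub-window, then stretch it back to the original
    length by nearest-neighbour resampling."""
    n = len(signal)
    m = max(1, int(n * ratio))
    start = (n - m) // 2
    cropped = signal[start:start + m]
    return [cropped[min(m - 1, int(i * m / n))] for i in range(n)]
```

Each transform preserves the window length, so augmented copies can feed the same network input as the originals.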
(This article belongs to the Special Issue Human Activity Recognition and Biomedical Signal Processing)
24 pages, 4230 KiB  
Article
Understanding Local Government Cybersecurity Policy: A Concept Map and Framework
by Sk Tahsin Hossain, Tan Yigitcanlar, Kien Nguyen and Yue Xu
Information 2024, 15(6), 342; https://doi.org/10.3390/info15060342 - 10 Jun 2024
Viewed by 444
Abstract
Cybersecurity is a crucial concern for local governments as they serve as the primary interface between public and government services, managing sensitive data and critical infrastructure. While technical safeguards are integral to cybersecurity, the role of a well-structured policy is equally important as it provides structured guidance to translate technical requirements into actionable protocols. This study reviews local governments’ cybersecurity policies to provide a comprehensive assessment of how these policies align with the National Institute of Standards and Technology’s Cybersecurity Framework 2.0, which is a widely adopted and commonly used cybersecurity assessment framework. This review offers local governments a mirror to reflect on their cybersecurity stance, identifying potential vulnerabilities and areas needing urgent attention. This study further extends the development of a cybersecurity policy framework, which local governments can use as a strategic tool. It provides valuable information on crucial cybersecurity elements that local governments must incorporate into their policies to protect confidential data and critical infrastructure. Full article
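For readers unfamiliar with the framework, NIST CSF 2.0 organizes controls under six functions (Govern, Identify, Protect, Detect, Respond, Recover). Below is a toy sketch of the kind of policy-to-framework alignment check the study performs; the policy mapping is entirely hypothetical, not taken from the paper.

```python
# The six top-level functions of NIST CSF 2.0.
CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]

def csf_coverage(policy_clauses):
    """Return (coverage fraction, uncovered functions) for a policy whose
    clauses are mapped to the CSF functions they address."""
    covered = {f for funcs in policy_clauses.values() for f in funcs}
    gaps = [f for f in CSF_FUNCTIONS if f not in covered]
    return 1 - len(gaps) / len(CSF_FUNCTIONS), gaps

# Entirely hypothetical policy of a small municipality.
policy = {
    "Access control":     ["Protect"],
    "Incident reporting": ["Detect", "Respond"],
    "Backup and restore": ["Recover"],
}
score, gaps = csf_coverage(policy)  # "Govern" and "Identify" are unaddressed
```

The uncovered functions are exactly the "areas needing urgent attention" such a review surfaces.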
(This article belongs to the Special Issue Cybersecurity, Cybercrimes, and Smart Emerging Technologies)
15 pages, 21532 KiB  
Article
Social-STGMLP: A Social Spatio-Temporal Graph Multi-Layer Perceptron for Pedestrian Trajectory Prediction
by Dexu Meng, Guangzhe Zhao and Feihu Yan
Information 2024, 15(6), 341; https://doi.org/10.3390/info15060341 - 10 Jun 2024
Viewed by 257
Abstract
As autonomous driving technology advances, the imperative of ensuring pedestrian traffic safety becomes increasingly prominent within the design framework of autonomous driving systems. Pedestrian trajectory prediction stands out as a pivotal technology aiming to address this challenge by striving to precisely forecast pedestrians’ future trajectories, thereby enabling autonomous driving systems to execute timely and accurate decisions. However, the prevailing state-of-the-art models often rely on intricate structures and a substantial number of parameters, posing challenges in meeting the imperative demand for lightweight models within autonomous driving systems. To address these challenges, we introduce Social Spatio-Temporal Graph Multi-Layer Perceptron (Social-STGMLP), a novel approach that utilizes solely fully connected layers and layer normalization. Social-STGMLP operates by abstracting pedestrian trajectories into a spatio-temporal graph, facilitating the modeling of both the spatial social interaction among pedestrians and the temporal motion tendency inherent to pedestrians themselves. Our evaluation of Social-STGMLP reveals its superiority over the reference method, as evidenced by experimental results indicating reductions of 5% in average displacement error (ADE) and 17% in final displacement error (FDE). Full article
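The two reported error metrics, ADE and FDE, are standard in trajectory prediction and easy to state in code. This sketch assumes 2D trajectories given as lists of (x, y) points; the function names are ours.

```python
def ade(pred, truth):
    """Average displacement error: mean Euclidean distance over all timesteps."""
    dists = [((px - tx) ** 2 + (py - ty) ** 2) ** 0.5
             for (px, py), (tx, ty) in zip(pred, truth)]
    return sum(dists) / len(dists)

def fde(pred, truth):
    """Final displacement error: Euclidean distance at the last timestep only."""
    (px, py), (tx, ty) = pred[-1], truth[-1]
    return ((px - tx) ** 2 + (py - ty) ** 2) ** 0.5
```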
(This article belongs to the Section Artificial Intelligence)
30 pages, 1001 KiB  
Article
Genre Classification of Books in Russian with Stylometric Features: A Case Study
by Natalia Vanetik, Margarita Tiamanova, Genady Kogan and Marina Litvak
Information 2024, 15(6), 340; https://doi.org/10.3390/info15060340 - 7 Jun 2024
Viewed by 366
Abstract
Within the literary domain, genres function as fundamental organizing concepts that provide readers, publishers, and academics with a unified framework. Genres are discrete categories that are distinguished by common stylistic, thematic, and structural components. They facilitate the categorization process and improve our understanding of a wide range of literary expressions. In this paper, we introduce a new dataset for genre classification of Russian books, covering 11 literary genres. We also perform dataset evaluation for the tasks of binary and multi-class genre identification. Through extensive experimentation and analysis, we explore the effectiveness of different text representations, including stylometric features, in genre classification. Our findings clarify the challenges present in classifying Russian literature by genre, revealing insights into the performance of different models across various genres. Furthermore, we address several research questions regarding the difficulty of multi-class classification compared to binary classification, and the impact of stylometric features on classification accuracy. Full article
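As a minimal sketch of the kind of stylometric features such classifiers consume (the paper's actual feature set is richer; these three are ours for illustration):

```python
def stylometric_features(text):
    """Three classic stylometric features; real genre classifiers use many more."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    n = len(words)
    return {
        "avg_word_len": sum(len(w) for w in words) / n,   # lexical
        "avg_sent_len": n / len(sentences),               # syntactic proxy
        "type_token_ratio": len(set(words)) / n,          # vocabulary richness
    }
```

Feature vectors like this one can then be fed to any standard classifier for binary or multi-class genre identification.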
(This article belongs to the Special Issue Text Mining: Challenges, Algorithms, Tools and Applications)
16 pages, 1739 KiB  
Article
Light-Field Image Compression Based on a Two-Dimensional Prediction Coding Structure
by Jianrui Shao, Enjian Bai, Xueqin Jiang and Yun Wu
Information 2024, 15(6), 339; https://doi.org/10.3390/info15060339 - 7 Jun 2024
Viewed by 316
Abstract
Light-field images (LFIs) are gaining increased attention within the field of 3D imaging, virtual reality, and digital refocusing, owing to their wealth of spatial and angular information. The escalating volume of LFI data poses challenges in terms of storage and transmission. To address this problem, this paper introduces an MSHPE (most-similar hierarchical prediction encoding) structure based on light-field multi-view images. By systematically exploring the similarities among sub-views, our structure obtains residual views through the subtraction of the encoded view from its corresponding reference view. Regarding the encoding process, this paper implements a new encoding scheme to process all residual views, achieving lossless compression. High-efficiency video coding (HEVC) is applied to encode select residual views, thereby achieving lossy compression. Furthermore, the introduced structure is conceptualized as a layered coding scheme, enabling progressive transmission and showing good random access performance. Experimental results demonstrate the superior compression performance attained by encoding residual views according to the proposed structure, outperforming alternative structures. Notably, when HEVC is employed for encoding residual views, significant bit savings are observed compared to the direct encoding of original views. The final restored view presents better detail quality, reinforcing the effectiveness of this approach. Full article
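The core of such a prediction structure is residual coding: each view is stored as its difference from a reference view, which the decoder inverts exactly whenever the residual is coded losslessly. A toy sketch with views as 2D lists of pixel values (helper names are ours; the paper's HEVC layering is omitted):

```python
def residual_view(view, reference):
    """Encoder side: keep only the pixel-wise difference from the reference."""
    return [[v - r for v, r in zip(vr, rr)] for vr, rr in zip(view, reference)]

def reconstruct(residual, reference):
    """Decoder side: add the residual back onto the decoded reference view."""
    return [[d + r for d, r in zip(dr, rr)] for dr, rr in zip(residual, reference)]
```

Because similar sub-views yield near-zero residuals, the residuals compress far better than the original views.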
14 pages, 1202 KiB  
Article
The Impact of Operant Resources on the Task Performance of Learners via Knowledge Management Process
by Quoc Trung Pham, Canh Khiem Le, Dinh Thai Linh Huynh and Sanjay Misra
Information 2024, 15(6), 338; https://doi.org/10.3390/info15060338 - 7 Jun 2024
Viewed by 719
Abstract
In human resource management, training is considered one of the most effective ways to improve employees’ task performance. However, the effectiveness of training depends mostly on the resources and effort of learners, especially the operant resources. This study investigates the influence of operant resources on individual task performance within the framework of knowledge management. Building on existing research, a quantitative model was developed and tested using data from 296 Vietnamese managers and senior employees. Data analysis employed SPSS 21 and AMOS 24 software. The findings provide strong support for all nine proposed hypotheses, demonstrating a positive impact of operant resources on both learner behavior and subsequent task performance. The research highlights the significant role of individual operant resources in enhancing learning outcomes and employee effectiveness. Managerial implications are derived from these results, offering practical guidance for businesses to improve training activities and ultimately boost employee task performance. Full article
(This article belongs to the Special Issue Editorial Board Members’ Collection Series: "Information Processes")
23 pages, 1687 KiB  
Article
Production Scheduling Based on a Multi-Agent System and Digital Twin: A Bicycle Industry Case
by Vasilis Siatras, Emmanouil Bakopoulos, Panagiotis Mavrothalassitis, Nikolaos Nikolakis and Kosmas Alexopoulos
Information 2024, 15(6), 337; https://doi.org/10.3390/info15060337 - 6 Jun 2024
Viewed by 329
Abstract
The emerging digitalization in today’s industrial environments allows manufacturers to store online knowledge about production and use it to make better informed management decisions. This paper proposes a multi-agent framework enhanced with digital twin (DT) for production scheduling and optimization. Decentralized scheduling agents interact to efficiently manage the work allocation in different segments of production. A DT is used to evaluate the performance of different scheduling decisions and to avoid potential risks and bottlenecks. Production managers can supervise the system’s decision-making processes and manually regulate them online. The multi-agent system (MAS) uses asset administration shells (AASs) for data modelling and communication, enabling interoperability and scalability. The framework was deployed and tested in an industrial pilot coming from the bicycle production industry, optimizing and controlling the short-term production schedule of the different departments. The evaluation resulted in a higher production rate, thus achieving higher production volume in a shorter time span. Managers were also able to coordinate schedules from different departments in a dynamic way and achieve early bottleneck detection. Full article
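A toy sketch of the decision loop described above: agents propose candidate schedules and a digital-twin-style evaluator scores each by simulated makespan. The data model and function names here are illustrative only, not the paper's AAS-based implementation.

```python
def makespan(schedule, durations):
    """Score one candidate schedule: finishing time of the busiest machine.
    `schedule` maps machine -> ordered job list; `durations` maps job -> time."""
    return max(sum(durations[j] for j in jobs) for jobs in schedule.values())

def pick_best(candidates, durations):
    """Keep the agent proposal the evaluator scores best (lowest makespan)."""
    return min(candidates, key=lambda s: makespan(s, durations))
```

A real digital twin would simulate queues, setup times, and breakdowns rather than a simple sum, but the select-by-simulated-score loop is the same.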
(This article belongs to the Special Issue Intelligent Agent and Multi-Agent System)
20 pages, 5717 KiB  
Article
Uncertainty-Driven Data Aggregation for Imitation Learning in Autonomous Vehicles
by Changquan Wang and Yun Wang
Information 2024, 15(6), 336; https://doi.org/10.3390/info15060336 - 6 Jun 2024
Viewed by 229
Abstract
Imitation learning has shown promise for autonomous driving, but suffers from covariate shift, where the policy performs poorly in unseen environments. DAgger is a popular approach that addresses this by leveraging expert demonstrations. However, DAgger’s frequent visits to sub-optimal states can lead to several challenges. This paper proposes a novel DAgger framework that integrates Bayesian uncertainty estimation via mean field variational inference (MFVI) to address this issue. MFVI provides better-calibrated uncertainty estimates compared to prior methods. During training, the framework identifies both uncertain and critical states, querying the expert only for these states. This targeted data collection reduces the burden on the expert and improves data efficiency. Evaluations on the CARLA simulator demonstrate that our approach outperforms existing methods, highlighting the effectiveness of Bayesian uncertainty estimation and targeted data aggregation for imitation learning in autonomous driving. Full article
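The key idea, querying the expert only on uncertain states, can be sketched as follows. Here the variance of an ensemble's predictions stands in for the paper's MFVI posterior uncertainty, and all names are illustrative.

```python
import statistics

def predictive_uncertainty(ensemble, state):
    """Variance of the ensemble's action predictions for one state (a cheap
    stand-in for the calibrated MFVI uncertainty used in the paper)."""
    return statistics.pvariance([policy(state) for policy in ensemble])

def collect_labels(states, ensemble, expert, threshold):
    """Query the expert only on states whose uncertainty exceeds the threshold."""
    return {s: expert(s) for s in states
            if predictive_uncertainty(ensemble, s) > threshold}
```

States where the policies agree are skipped, which is what reduces the labeling burden on the expert.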
28 pages, 1806 KiB  
Article
Dynamic Workload Management System in the Public Sector
by Konstantinos C. Giotopoulos, Dimitrios Michalopoulos, Gerasimos Vonitsanos, Dimitris Papadopoulos, Ioanna Giannoukou and Spyros Sioutas
Information 2024, 15(6), 335; https://doi.org/10.3390/info15060335 - 6 Jun 2024
Viewed by 333
Abstract
Workload management is a cornerstone of contemporary human resource management, with widespread applications in both the private and public sectors. The challenges are particularly pronounced within the public sector, especially in task allocation. The absence of a standardized workload distribution method presents a significant challenge and results in unnecessary costs in terms of man-hours and financial resources expended on surplus human resource utilization. In the current research, we analyze how to address this “race condition” and propose a dynamic workload management model based on the response time required to implement each task. Our model is trained and tested using comprehensive employee data comprising 450 records for training, 100 records for testing, and 88 records for validation. Approximately 11% of the initial data are deemed either inaccurate or invalid. The deployment of the ANFIS algorithm provides a quantified capability for each employee to handle tasks in the public sector. The proposed idea is deployed in a virtualized platform where each employee is implemented as an independent node with specific capabilities. An upper limit of work acceptance is proposed based on a documented study and laws that suggest work time frames in each public body, ensuring that no employee reaches the saturation level of exhaustion. In addition, a variant of the “slow start” model is incorporated as a hybrid congestion control mechanism with exceptional outcomes, offering a gradual execution window for each node under test and providing a smooth and controlled start-up phase for new connections. The ultimate goal is to identify and outline the entire structure of the Greek public sector along with the capabilities of its employees, thereby determining the organization’s executive capacity. Full article
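A "slow start" variant of this kind can be sketched as a window-growth rule borrowed from TCP congestion control; the thresholds and caps below are illustrative, not the paper's calibrated values.

```python
def next_window(window, threshold, cap):
    """One round of window growth: double below the threshold (slow start),
    then grow linearly (congestion avoidance), never past the cap."""
    window = window * 2 if window < threshold else window + 1
    return min(window, cap)

def ramp(start, threshold, cap, steps):
    """Trace the task window of one employee node over several rounds."""
    w, trace = start, []
    for _ in range(steps):
        w = next_window(w, threshold, cap)
        trace.append(w)
    return trace
```

The cap plays the role of the legally grounded upper limit of work acceptance: the window ramps up quickly for a fresh node but can never push an employee past it.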
19 pages, 5424 KiB  
Systematic Review
Network Structure of Online Customer Reviews and Online Hotel Reviews: A Systematic Literature Review
by Maria Helena Pestana, Manuel Gageiro, José António C. Santos and Margarida Custódio Santos
Information 2024, 15(6), 334; https://doi.org/10.3390/info15060334 - 6 Jun 2024
Viewed by 350
Abstract
This study conducts a bibliometric analysis of online customer and hotel review research, aiming to provide insights into where each field comes from, stands now and ought to go in the future. In particular, this study examines how the existing research on online customer reviews can benefit future hotel review research. Data collected from Web-of-Science and Scopus created an expanded network of 797 core articles and 19,374 citations to identify intellectual structures, developing trends, and future research gaps. This study offers a visual overview of journals, institutions, countries, research themes and authors to assess the overall directions hotels can take. It underscores the necessity for rigorous and relevant research amid the proliferation of online reviews and emphasises the imperative for academia to bridge the gap between theoretical insights and practical applications within the dynamic tourism industry. This study provides researchers and industry professionals with useful tools to understand and deal with the complexities of online reviews. It also highlights the important role these reviews play in shaping the future of tourism strategies. Full article
17 pages, 2944 KiB  
Article
Measuring Potential People’s Acceptance of Mobility as a Service: Evidence from Pilot Surveys
by Corrado Rindone and Antonino Vitetta
Information 2024, 15(6), 333; https://doi.org/10.3390/info15060333 - 6 Jun 2024
Viewed by 271
Abstract
Sustainable mobility is one of the main challenges on a global level. In this context, the emerging Mobility as a Service (MaaS) plays an important role in the mobility of people. This paper investigates the main enabling factors for implementing the MaaS paradigm, with a specific focus on the level of acceptance of this new technology. To achieve this objective, the proposed methodology for measuring the potential MaaS acceptance is based on a set of pilot surveys. The methodology integrates motivational surveys with Stated and Revealed Preference (SP, RP) and Technology Acceptance Models (TAM). The collected data are processed to obtain indicators that measure the potential level of MaaS acceptance. The main results of the two pilot experiments are illustrated by referring to urban and extra-urban mobility with or without physical barriers. The results obtained show that the level of MaaS acceptance grows with the increase in generalized transport costs perceived by the users. Full article
53 pages, 6188 KiB  
Review
A Survey of Text-Matching Techniques
by Peng Jiang and Xiaodong Cai
Information 2024, 15(6), 332; https://doi.org/10.3390/info15060332 - 5 Jun 2024
Viewed by 345
Abstract
Text matching, as a core technology of natural language processing, plays a key role in tasks such as question-and-answer systems and information retrieval. In recent years, the development of neural networks, attention mechanisms, and large-scale language models has significantly contributed to the advancement of text-matching technology. However, the rapid development of the field also poses challenges in fully understanding the overall impact of these technological improvements. This paper aims to provide a concise, yet in-depth, overview of the field of text matching, sorting out the main ideas, problems, and solutions for text-matching methods based on statistical methods and neural networks, as well as delving into matching methods based on large-scale language models, and discussing the related configurations, API applications, datasets, and evaluation methods. In addition, this paper outlines the applications and classifications of text matching in specific domains and discusses the current open problems that are being faced and future research directions, to provide useful references for further developments in the field. Full article
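As a baseline example of the statistical matching methods such surveys cover, here is a term-frequency cosine-similarity matcher. This is a deliberately minimal sketch; production systems add weighting such as TF-IDF or BM25, and the neural and LLM-based methods the survey discusses replace these vectors with learned embeddings.

```python
from collections import Counter
import math

def tf_vector(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(query, corpus):
    """Return the corpus text most similar to the query."""
    q = tf_vector(query)
    return max(corpus, key=lambda d: cosine(q, tf_vector(d)))
```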
24 pages, 543 KiB  
Article
A Comparison of Mixed and Partial Membership Diagnostic Classification Models with Multidimensional Item Response Models
by Alexander Robitzsch 
Information 2024, 15(6), 331; https://doi.org/10.3390/info15060331 - 5 Jun 2024
Viewed by 309
Abstract
Diagnostic classification models (DCMs) are latent structure models with discrete multivariate latent variables. Recently, extensions of DCMs to mixed membership have been proposed. In this article, ordinary DCMs, mixed and partial membership models, and multidimensional item response theory (IRT) models are compared through analytical derivations, three example datasets, and a simulation study. It is concluded that partial membership DCMs are similar, if not structurally equivalent, to sufficiently complex multidimensional IRT models. Full article
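To make the contrast concrete, here is a sketch of an ordinary DCM item response (the DINA model, one common member of the family) next to a partial-membership variant in which the binary mastery indicator becomes a continuous degree, much like a continuous IRT trait. Parameter names follow common usage, not the article's notation.

```python
def dina_prob(alpha, q_row, guess, slip):
    """Ordinary DCM (DINA): a respondent masters the item (eta = 1) iff they
    hold every skill the Q-matrix row requires; otherwise they can only guess."""
    eta = all(a >= q for a, q in zip(alpha, q_row))
    return (1.0 - slip) if eta else guess

def partial_membership_prob(eta, guess, slip):
    """Partial membership: eta in [0, 1] interpolates between the guessing
    and non-slipping response probabilities."""
    return guess + (1.0 - slip - guess) * eta
```

Letting eta vary continuously is what moves the model toward the multidimensional IRT territory the article's equivalence result concerns.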
(This article belongs to the Special Issue Second Edition of Predictive Analytics and Data Science)