Information, Volume 15, Issue 4 (April 2024) – 70 articles

Cover Story: The costs of deploying and expanding a 5G network are not negligible. Neutral hosts play a central role in providing cost-effective 5G services. This paper presents a new infrastructure-sharing 5G neutral host solution based on roaming and network slicing. Its flexible architecture supports both mobile network operators (MNOs) and non-MNO tenants. The proposed solution selects a set of 5G network functions (NFs) to provide all the necessary services to the tenants, improving the scalability and resource management, while ensuring traffic isolation and policy control over the end users. Another contribution of this work is the implementation of the proposed architecture in a simulation environment using open-source tools, as well as its functional requirement and performance analyses.
24 pages, 7051 KiB  
Article
Sports Analytics: Data Mining to Uncover NBA Player Position, Age, and Injury Impact on Performance and Economics
by Vangelis Sarlis and Christos Tjortjis
Information 2024, 15(4), 242; https://doi.org/10.3390/info15040242 - 21 Apr 2024
Abstract
In the intersecting fields of data mining (DM) and sports analytics, the impact of socioeconomic, demographic, and injury-related factors on sports performance and economics has been extensively explored. A novel methodology is proposed and evaluated in this study, aiming to identify essential attributes and metrics that influence the salaries and performance of NBA players. Feature selection techniques are utilized for estimating the financial impacts of injuries, while clustering algorithms are applied to analyse the relationship between player age, position, and advanced performance metrics. Through the application of PCA-driven pattern recognition and exploratory-based categorization, a detailed examination of the effects on earnings and performance is conducted. Findings indicate that peak performance is typically achieved between the ages of 27 and 29, whereas the highest salaries are received between the ages of 29 and 34. Additionally, musculoskeletal injuries are identified as the source of half of the financial costs related to health problems in the NBA. The association between demographics and financial analytics, particularly focusing on the position and age of NBA players, is also investigated, offering new insights into the economic implications of player attributes and health. Full article
(This article belongs to the Special Issue New Information Communication Technologies in the Digital Era)
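As an illustration of the PCA-driven clustering workflow this abstract describes, the sketch below groups players by standardized performance and salary metrics; the column names and input file are hypothetical placeholders, not the paper's data.

```python
# Sketch of PCA-driven clustering of NBA player metrics, as the abstract
# describes. Column names and the CSV file are hypothetical placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

df = pd.read_csv("nba_players.csv")                     # hypothetical input
features = df[["age", "per", "ws", "vorp", "salary"]]   # hypothetical columns

X = StandardScaler().fit_transform(features)
X_pca = PCA(n_components=2).fit_transform(X)            # pattern recognition via PCA

df["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_pca)
print(df.groupby("cluster")[["age", "salary"]].mean())  # age/salary per cluster
```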

16 pages, 359 KiB  
Article
Automated Trace Clustering Pipeline Synthesis in Process Mining
by Iuliana Malina Grigore, Gabriel Marques Tavares, Matheus Camilo da Silva, Paolo Ceravolo and Sylvio Barbon Junior
Information 2024, 15(4), 241; https://doi.org/10.3390/info15040241 - 20 Apr 2024
Abstract
Business processes have undergone a significant transformation with the advent of the process-oriented view in organizations. The increasing complexity of business processes and the abundance of event data have driven the development and widespread adoption of process mining techniques. However, the size and noise of event logs pose challenges that require careful analysis. The inclusion of different sets of behaviors within the same business process further complicates data representation, highlighting the continued need for innovative solutions in the evolving field of process mining. Trace clustering is emerging as a solution to improve the interpretation of underlying business processes. Trace clustering offers benefits such as mitigating the impact of outliers, providing valuable insights, reducing data dimensionality, and serving as a preprocessing step in robust pipelines. However, designing an appropriate clustering pipeline can be challenging for non-experts due to the complexity of the process and the number of steps involved. For experts, it can be time-consuming and costly, requiring careful consideration of trade-offs. To address the challenge of pipeline creation, the paper proposes a genetic programming solution for trace clustering pipeline synthesis that optimizes a multi-objective function matching clustering and process quality metrics. The solution is applied to real event logs, and the results demonstrate improved performance in downstream tasks through the identification of sub-logs. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Information Systems)
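The pipeline-synthesis idea can be sketched in a few lines; the toy search below enumerates candidate clustering pipelines and scores them with a silhouette proxy, standing in for the paper's genetic-programming search and its multi-objective fitness.

```python
# Simplified stand-in for the paper's genetic-programming search: sample
# trace-clustering pipelines (vectorizer + k) and keep the configuration
# with the best clustering-quality score. Traces are toy activity sequences.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

traces = ["register review approve archive",
          "register review reject archive",
          "register escalate audit close",
          "register fasttrack approve close"] * 10      # toy event log

def evaluate(vectorizer, k):
    X = vectorizer.fit_transform(traces)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    return silhouette_score(X, labels)   # proxy for the multi-objective score

best = max(
    ((v, k) for v in (CountVectorizer(), TfidfVectorizer()) for k in (2, 3, 4)),
    key=lambda cfg: evaluate(*cfg),
)
print("best pipeline:", best)
```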

12 pages, 746 KiB  
Article
Intuitionistic Fuzzy Sets for Spatial and Temporal Data Intervals
by Frederick Petry
Information 2024, 15(4), 240; https://doi.org/10.3390/info15040240 - 20 Apr 2024
Abstract
Spatial and temporal uncertainties are found in data for many critical applications. This paper describes the use of interval-based representations of some spatial and temporal information. Uncertainties in the information can arise from multiple sources in which degrees of support and non-support occur in evaluations. This motivates the use of intuitionistic fuzzy sets to permit the use of the positive and negative memberships to capture these uncertainties. The interval representations will include both simple and complex or nested intervals. The relationships between intervals such as overlapping, containing, etc. are then developed for both the simple and complex intervals. Such relationships are required to support the aggregation approaches of the interval information. Both averaging and merging approaches to interval aggregation are then developed. Furthermore, potential techniques for the associated aggregation of the interval intuitionistic fuzzy memberships are provided. A motivating example of maritime depth data required for safe navigation is used to illustrate the approach. Finally, some potential future developments are discussed. Full article
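A minimal sketch of the paper's central notions, assuming a simple interval type with paired support/non-support memberships; the depth values and memberships are invented for illustration.

```python
# Minimal sketch of interval relations with intuitionistic fuzzy memberships,
# following the abstract's idea of paired support (mu) and non-support (nu).
from dataclasses import dataclass

@dataclass
class IFSInterval:
    low: float
    high: float
    mu: float   # degree of support
    nu: float   # degree of non-support (mu + nu <= 1)

def overlaps(a: IFSInterval, b: IFSInterval) -> bool:
    return a.low <= b.high and b.low <= a.high

def merge(a: IFSInterval, b: IFSInterval) -> IFSInterval:
    """Merge two overlapping intervals; average the IFS memberships."""
    return IFSInterval(min(a.low, b.low), max(a.high, b.high),
                       (a.mu + b.mu) / 2, (a.nu + b.nu) / 2)

d1 = IFSInterval(10.0, 12.5, mu=0.8, nu=0.1)   # e.g., a maritime depth reading
d2 = IFSInterval(11.0, 14.0, mu=0.6, nu=0.3)
if overlaps(d1, d2):
    print(merge(d1, d2))
```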

22 pages, 6807 KiB  
Article
Deep Learning-Based Road Pavement Inspection by Integrating Visual Information and IMU
by Chen-Chiung Hsieh, Han-Wen Jia, Wei-Hsin Huang and Mei-Hua Hsih
Information 2024, 15(4), 239; https://doi.org/10.3390/info15040239 - 20 Apr 2024
Abstract
This study proposes a deep learning method for pavement defect detection, focusing on identifying potholes and cracks. A dataset comprising 10,828 images is collected, with 8662 allocated for training, 1083 for validation, and 1083 for testing. Vehicle attitude data are categorized based on three-axis acceleration and attitude change, with 6656 (64%) for training, 1664 (16%) for validation, and 2080 (20%) for testing. The Nvidia Jetson Nano serves as the vehicle-embedded system, transmitting IMU-acquired vehicle data and GoPro-captured images over a 5G network to the server. The server recognizes two damage categories, low-risk and high-risk, storing results in MongoDB. Severe damage triggers immediate alerts to maintenance personnel, while less severe issues are recorded for scheduled maintenance. The method selects YOLOv7 among various object detection models for pavement defect detection, achieving a mAP of 93.3%, a recall rate of 87.8%, a precision of 93.2%, and a processing speed of 30–40 FPS. Bi-LSTM is then chosen for vehicle vibration data processing, yielding 77% mAP, 94.9% recall rate, and 89.8% precision. Integration of the visual and vibration results, along with vehicle speed and travel distance, results in a final recall rate of 90.2% and precision of 83.7% after field testing. Full article
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)

16 pages, 2490 KiB  
Article
Constructing Semantic Summaries Using Embeddings
by Georgia Eirini Trouli, Nikos Papadakis and Haridimos Kondylakis
Information 2024, 15(4), 238; https://doi.org/10.3390/info15040238 - 20 Apr 2024
Abstract
The increase in the size and complexity of large knowledge graphs now available online has resulted in the emergence of many approaches focusing on enabling the quick exploration of the content of those data sources. Structural non-quotient semantic summaries have been proposed in this direction that involve first selecting the most important nodes and then linking them, trying to extract the most useful subgraph out of the original graph. However, current state-of-the-art systems use costly centrality measures for identifying the most important nodes, whereas even costlier procedures have been devised for linking the selected nodes. In this paper, we address both those deficiencies by first exploiting embeddings for node selection, and then by meticulously selecting approximate algorithms for node linking. Experiments performed over two real-world big KGs demonstrate that the summaries constructed using our method enjoy better quality. Specifically, across summary sizes of 20%, 25%, and 30%, the coverage scores obtained were 0.8, 0.81, and 0.81 for DBpedia v3.9, and 0.94 for the 2018 Wikidata dump. Additionally, our method computes summaries orders of magnitude faster than the state of the art. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
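A hedged sketch of the two steps named in the abstract: embedding-based node selection followed by path-based linking. The toy graph and random embeddings are placeholders for a real KG and a trained embedding model.

```python
# Sketch of embedding-based node selection plus approximate linking for a
# structural KG summary. The graph and scoring are illustrative; the paper's
# actual embedding model and linking algorithms may differ.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()                      # stand-in for a large KG
vectors = np.random.default_rng(0).normal(size=(len(G), 16))
emb = {n: vec for n, vec in zip(G, vectors)}    # placeholder embeddings

centroid = np.mean(vectors, axis=0)
score = {n: float(emb[n] @ centroid) for n in G}  # embedding-based importance
top = sorted(score, key=score.get, reverse=True)[:5]

summary = nx.Graph()
summary.add_nodes_from(top)
for i, u in enumerate(top):                     # link the selected nodes via
    for v in top[i + 1:]:                       # shortest paths in the KG
        nx.add_path(summary, nx.shortest_path(G, u, v))
print(summary)
```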

18 pages, 1266 KiB  
Article
A Scalable and Automated Framework for Tracking the Likely Adoption of Emerging Technologies
by Lowri Williams, Eirini Anthi and Pete Burnap
Information 2024, 15(4), 237; https://doi.org/10.3390/info15040237 - 19 Apr 2024
Abstract
While new technologies are expected to revolutionise and become game-changers in improving the efficiency and practices of our daily lives, it is also critical to investigate and understand the barriers and opportunities faced by their adopters. Such findings can serve as an additional feature in the decision-making process when analysing the risks, costs, and benefits of adopting an emerging technology in a particular setting. Although several studies have attempted to perform such investigations, these approaches adopt a qualitative data collection methodology, which is limited in terms of the size of the targeted participant group and is associated with a significant manual overhead when transcribing and inferring results. This paper presents a scalable and automated framework for tracking the likely adoption and/or rejection of new technologies from a large landscape of adopters. In particular, a large corpus of social media texts containing references to emerging technologies was compiled. Text mining techniques were applied to extract the sentiments expressed towards technology aspects. In the context of the problem definition herein, we hypothesise that the expression of positive sentiment implies an increase in the likelihood of impacting a technology user’s acceptance to adopt, integrate, and/or use the technology, and negative sentiment implies an increase in the likelihood of impacting the rejection of emerging technologies by adopters. To quantitatively test our hypothesis, a ground truth analysis was performed to validate that the sentiments captured by the text mining approach were comparable to the results provided by human annotators when asked to label whether such texts positively or negatively impact their outlook towards adopting an emerging technology. The collected annotations demonstrated comparable results to those of the text mining approach, illustrating that the automatically extracted sentiments expressed towards technologies are useful features in understanding the landscape faced by technology adopters, as well as serving as an important decision-making component when, for example, recognising shifts in user behaviours, new demands, and emerging uncertainties. Full article
(This article belongs to the Section Information Processes)
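The sentiment-scoring step might look like the sketch below, which uses NLTK's VADER analyzer on invented posts; the paper's own text-mining pipeline is not public, so this is only an assumed stand-in.

```python
# Sketch of the sentiment step: score social-media posts that mention an
# emerging technology and treat the polarity as an adoption signal. The
# posts are invented; the paper's own pipeline may differ.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

posts = [
    "The new AR headset makes field maintenance so much faster",
    "Setting up the AR headset is a nightmare and support is useless",
]
for post in posts:
    polarity = sia.polarity_scores(post)["compound"]  # -1 (reject) .. +1 (adopt)
    print(f"{polarity:+.2f}  {post}")
```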

19 pages, 1382 KiB  
Article
A Fuzzy Synthetic Evaluation Approach to Assess Usefulness of Tourism Reviews by Considering Bias Identified in Sentiments and Articulacy
by Dimitrios K. Kardaras, Christos Troussas, Stavroula G. Barbounaki, Panagiota Tselenti and Konstantinos Armyras
Information 2024, 15(4), 236; https://doi.org/10.3390/info15040236 - 19 Apr 2024
Abstract
Assessing the usefulness of reviews has been the aim of several research studies. However, results regarding the significance of usefulness determinants are often contradictory, thus decreasing the accuracy of reviews’ helpfulness estimation. Also, bias in user reviews attributed to differences, e.g., in gender, nationality, etc., may result in misleading judgments, thus diminishing reviews’ usefulness. Research is needed for sentiment analysis algorithms that incorporate bias embedded in reviews, thus improving their usefulness, readability, credibility, etc. This study utilizes fuzzy relations and fuzzy synthetic evaluation (FSE) in order to calculate reviews’ usefulness by incorporating users’ biases as expressed in terms of reviews’ articulacy and sentiment polarity. It selected and analyzed 95,678 hotel user reviews from Tripadvisor, written by users from five specific nationalities. The findings indicate that there are differences among nationalities in terms of the articulacy and sentiment of their reviews. The British are most consistent in their judgments expressed in titles and the main body of reviews. For the British and the Greeks, review titles suffice to convey any negative sentiments. The Dutch use fewer words in their reviews than the other nationalities. This study suggests that fuzzy logic captures subjectivity which is often found in reviews, and it can be used to quantify users’ behavioral differences, calculate reviews’ usefulness, and provide the means for developing more accurate voting systems. Full article
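Fuzzy synthetic evaluation reduces to composing a weight vector with a membership matrix; the sketch below shows that composition with invented weights, criteria, and memberships rather than the paper's calibrated values.

```python
# Minimal fuzzy synthetic evaluation (FSE) sketch: combine a criteria weight
# vector with a fuzzy membership matrix to grade review usefulness. Weights,
# criteria, and memberships are illustrative, not the paper's calibration.
import numpy as np

# criteria: articulacy, title/body sentiment consistency, sentiment polarity
weights = np.array([0.5, 0.3, 0.2])

# membership of one review in usefulness grades (low, medium, high) per criterion
R = np.array([
    [0.1, 0.3, 0.6],   # articulacy
    [0.2, 0.5, 0.3],   # consistency
    [0.4, 0.4, 0.2],   # polarity
])

evaluation = weights @ R          # weighted-average composition operator
evaluation /= evaluation.sum()    # normalize
grades = ["low", "medium", "high"]
print(dict(zip(grades, evaluation.round(3))))
print("usefulness:", grades[int(evaluation.argmax())])
```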

36 pages, 1803 KiB  
Article
An Overview on the Advancements of Support Vector Machine Models in Healthcare Applications: A Review
by Rosita Guido, Stefania Ferrisi, Danilo Lofaro and Domenico Conforti
Information 2024, 15(4), 235; https://doi.org/10.3390/info15040235 - 19 Apr 2024
Abstract
Support vector machines (SVMs) are well-known machine learning algorithms for classification and regression applications. In the healthcare domain, they have been used for a variety of tasks including diagnosis, prognosis, and prediction of disease outcomes. This review is an extensive survey on the current state-of-the-art of SVMs developed and applied in the medical field over the years. Many variants of SVM-based approaches have been developed to enhance their generalisation capabilities. We illustrate the most interesting SVM-based models that have been developed and applied in healthcare to improve performance metrics on benchmark datasets, including hybrid classification methods that combine, for instance, optimization algorithms with SVMs. We also report interesting results found in medical applications related to real-world data. Several issues around SVMs, such as the selection of hyperparameters and learning from data of questionable quality, are discussed as well. The several variants developed and introduced over the years could be useful in designing new methods to improve performance in critical fields such as healthcare, where accuracy, specificity, and other metrics are crucial. Finally, current research trends and future directions are underlined. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)
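The hyperparameter-selection issue the review raises can be made concrete with a standard grid search over an RBF-kernel SVM, here on scikit-learn's bundled breast-cancer benchmark.

```python
# Sketch of SVM hyperparameter selection: grid search over C and gamma for
# an RBF-kernel SVM on a benchmark medical dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

param_grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.001]}
search = GridSearchCV(make_pipeline(StandardScaler(), SVC()), param_grid, cv=5)
search.fit(X_tr, y_tr)
print(search.best_params_, "CV accuracy:", round(search.best_score_, 3))
print("held-out accuracy:", round(search.score(X_te, y_te), 3))
```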

30 pages, 2732 KiB  
Article
Exploiting Properties of Student Networks to Enhance Learning in Distance Education
by Rozita Tsoni, Evgenia Paxinou, Aris Gkoulalas-Divanis, Dimitrios Karapiperis, Dimitrios Kalles and Vassilios S. Verykios
Information 2024, 15(4), 234; https://doi.org/10.3390/info15040234 - 19 Apr 2024
Abstract
Distance Learning has become the “new normal”, especially during the pandemic and due to the technological advances that are incorporated into the teaching procedure. At the same time, the augmented use of the internet has blurred the borders between distance and conventional learning. Students interact mainly through LMSs, leaving their digital traces that can be leveraged to improve the educational process. New knowledge derived from the analysis of digital data could assist educational stakeholders in instructional design and decision making regarding the level and type of intervention that would benefit learners. This work aims to propose an analysis model that can capture the students’ behaviors in a distance learning course delivered fully online, based on the clickstream data associated with the discussion forum, and additionally to suggest interpretable patterns that will support education administrators and tutors in the decision-making process. To achieve our goal, we use Social Network Analysis as networks represent complex interactions in a meaningful and easily interpretable way. Moreover, simple or complex network metrics are becoming available to provide valuable insights into the students’ social interaction. This study concludes that by leveraging the imprint of these actions in an LMS and using metrics of Social Network Analysis, differences can be spotted in the communicational patterns that go beyond simple participation recording. Although the HITS and PageRank algorithms were created for completely different purposes, it is shown that they can also reveal methodological features in students’ communicational approach. Full article
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
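Both metrics named in the abstract are available off the shelf; the sketch below computes PageRank and HITS hub/authority scores on an invented who-replies-to-whom forum graph.

```python
# Sketch of the network metrics used in the study: build a directed
# "who-replies-to-whom" forum graph and compute PageRank and HITS scores.
# The interactions are invented for illustration.
import networkx as nx

replies = [("anna", "bob"), ("carol", "bob"), ("bob", "anna"),
           ("dave", "anna"), ("carol", "anna"), ("anna", "carol")]
G = nx.DiGraph(replies)

pagerank = nx.pagerank(G, alpha=0.85)
hubs, authorities = nx.hits(G)

for student in G:
    print(f"{student:6s} PR={pagerank[student]:.3f} "
          f"hub={hubs[student]:.3f} auth={authorities[student]:.3f}")
```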

24 pages, 1873 KiB  
Article
Enhancing Child Safety in Online Gaming: The Development and Application of Protectbot, an AI-Powered Chatbot Framework
by Anum Faraz, Fardin Ahsan, Jinane Mounsef, Ioannis Karamitsos and Andreas Kanavos
Information 2024, 15(4), 233; https://doi.org/10.3390/info15040233 - 19 Apr 2024
Abstract
This study introduces Protectbot, an innovative chatbot framework designed to improve safety in children’s online gaming environments. At its core, Protectbot incorporates DialoGPT, a conversational Artificial Intelligence (AI) model rooted in Generative Pre-trained Transformer 2 (GPT-2) technology, engineered to simulate human-like interactions within gaming chat rooms. The framework is distinguished by a robust text classification strategy, rigorously trained on the Publicly Available Natural 2012 (PAN12) dataset, aimed at identifying and mitigating potential sexual predatory behaviors through chat conversation analysis. By utilizing fastText for word embeddings to vectorize sentences, we have refined a support vector machine (SVM) classifier, achieving remarkable performance metrics, with recall, accuracy, and F-scores approaching 0.99. These metrics not only demonstrate the classifier’s effectiveness, but also signify a significant advancement beyond existing methodologies in this field. The efficacy of our framework is additionally validated on a custom dataset, composed of 71 predatory chat logs from the Perverted Justice website, further establishing the reliability and robustness of our classifier. Protectbot represents a crucial innovation in enhancing child safety within online gaming communities, providing a proactive, AI-enhanced solution to detect and address predatory threats promptly. Our findings highlight the immense potential of AI-driven interventions to create safer digital spaces for young users. Full article
(This article belongs to the Special Issue Do (AI) Chatbots Pose any Special Challenges for Trust and Privacy?)
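A minimal sketch of the described classifier pipeline, fastText sentence vectors feeding a scikit-learn SVM; the chat lines and labels are invented, whereas the paper trains on PAN12 with tuned hyperparameters.

```python
# Hedged sketch of the pipeline: fastText sentence vectors feed an SVM.
# The chat lines and labels are toy examples, not PAN12 data.
import fasttext
import numpy as np
from sklearn.svm import SVC

chats = ["hi how was school today", "do your parents check your phone",
         "want to trade game skins", "you can trust me keep this secret"]
labels = [0, 1, 0, 1]  # 1 = predatory-style line (toy labels)

with open("chats.txt", "w") as f:      # fastText trains from a text file
    f.write("\n".join(chats))
ft = fasttext.train_unsupervised("chats.txt", model="skipgram", minCount=1, dim=50)

X = np.array([ft.get_sentence_vector(c) for c in chats])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X))
```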

17 pages, 456 KiB  
Article
Cloud Broker: Customizing Services for Cloud Market Requirements
by Evangelia Filiopoulou, Georgios Chatzithanasis, Christos Michalakelis and Mara Nikolaidou
Information 2024, 15(4), 232; https://doi.org/10.3390/info15040232 - 19 Apr 2024
Abstract
Cloud providers offer various purchasing options to enable users to tailor their costs according to their specific requirements, including on-demand, reserved instances, and spot instances. On-demand and spot instances satisfy short-term workloads, whereas reserved instances fulfill long-term workloads. However, there are workloads that fall outside of either long-term or short-term categories. Consequently, there is a notable absence of services specifically tailored for medium-term workloads. On-demand services, while offering flexibility, often come with high costs. Spot instances, though cost-effective, carry the risk of termination. Reserved instances, while stable and less expensive, may have a remaining period that extends beyond the duration of users’ tasks. This gap underscores the need for solutions that address the unique requirements and challenges associated with medium-term workloads in the cloud computing landscape. This paper introduces a new cloud broker that offers IaaS services for medium-term workloads. On one hand, this broker strategically reserves resources from providers, and on the other hand, it interacts with users. Its interaction with users is twofold. It collects users’ preferences regarding the commitment term for medium-term workloads and then transforms the leased resources based on commitment term, aligning with the requirements of most users. To ensure profitability, the broker sells these services utilizing an auction algorithm. Hence, in this paper, an auction algorithm is introduced and developed, which treats cloud services as virtual assets and integrates their depreciation over time. The findings affirm the lack of services that fulfill medium-term workloads while ensuring the financial viability and profitability of the broker, given that the estimated return on investment (ROI) is acceptable. Full article
(This article belongs to the Special Issue Technoeconomics of the Internet of Things)
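One plausible reading of the auction step is sketched below: a sealed-bid auction whose reserve price follows straight-line depreciation of the reserved instance. All numbers and the depreciation scheme are illustrative assumptions, not the paper's algorithm.

```python
# Toy sealed-bid auction sketch for the broker's medium-term instances: the
# reserve price follows straight-line depreciation of the reserved resource,
# and the highest bid above the reserve wins. All numbers are illustrative.
def reserve_price(base_cost: float, age_months: int, term_months: int = 36) -> float:
    """Depreciate the leased instance's value over its reservation term."""
    remaining = max(term_months - age_months, 0) / term_months
    return base_cost * remaining

def run_auction(bids: dict[str, float], reserve: float):
    valid = {u: b for u, b in bids.items() if b >= reserve}
    if not valid:
        return None, reserve
    winner = max(valid, key=valid.get)
    return winner, valid[winner]

reserve = reserve_price(base_cost=100.0, age_months=12)   # -> 66.67
winner, price = run_auction({"u1": 55.0, "u2": 70.0, "u3": 81.0}, reserve)
print(f"reserve={reserve:.2f} winner={winner} price={price:.2f}")
```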

17 pages, 900 KiB  
Article
Two-Stage Convolutional Neural Network for Classification of Movement Patterns in Tremor Patients
by Patricia Weede, Piotr Dariusz Smietana, Gregor Kuhlenbäumer, Günther Deuschl and Gerhard Schmidt
Information 2024, 15(4), 231; https://doi.org/10.3390/info15040231 - 18 Apr 2024
Abstract
Accurate tremor classification is crucial for effective patient management and treatment. However, clinical diagnoses are often hindered by misdiagnoses, necessitating the development of robust technical methods. Here, we present a two-stage convolutional neural network (CNN)-based system for classifying physiological tremor, essential tremor (ET), and Parkinson’s disease (PD) tremor. Employing acceleration signals from the hands of 408 patients, our system utilizes both medically motivated signal features and (nearly) raw data (by means of spectrograms) as system inputs. Our model employs a hybrid approach of data-based and feature-based methods to leverage the strengths of both while mitigating their weaknesses. By incorporating various data augmentation techniques for model training, we achieved an overall accuracy of 88.12%. This promising approach demonstrates improved accuracy in discriminating between the three tremor types, paving the way for more precise tremor diagnosis and enhanced patient care. Full article
(This article belongs to the Special Issue Signal Processing and Machine Learning II)

21 pages, 6769 KiB  
Article
A Novel Dynamic Contextual Feature Fusion Model for Small Object Detection in Satellite Remote-Sensing Images
by Hongbo Yang and Shi Qiu
Information 2024, 15(4), 230; https://doi.org/10.3390/info15040230 - 18 Apr 2024
Abstract
Ground objects in satellite images pose unique challenges due to their low resolution, small pixel size, lack of texture features, and dense distribution. Detecting small objects in satellite remote-sensing images is a difficult task. We propose a new detector focusing on contextual information and multi-scale feature fusion. Inspired by the notion that surrounding context information can aid in identifying small objects, we propose a lightweight context convolution block based on dilated convolutions and integrate it into the convolutional neural network (CNN). We integrate dynamic convolution blocks during the feature fusion step to enhance the high-level feature upsampling. An attention mechanism is employed to focus on the salient features of objects. We have conducted a series of experiments to validate the effectiveness of our proposed model. Notably, the proposed model achieved a 3.5% mean average precision (mAP) improvement on the satellite object detection dataset. Another feature of our approach is its lightweight design. We employ group convolution to reduce the computational cost in the proposed contextual convolution module. Compared to the baseline model, our method reduces the number of parameters by 30% and the computational cost by 34%, while maintaining an FPS rate close to that of the baseline model. We also validate the detection results through a series of visualizations. Full article
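A hedged PyTorch sketch of such a lightweight context block: parallel dilated group convolutions enlarge the receptive field at low parameter cost and are fused back residually. The exact layout in the paper may differ.

```python
# Sketch of a lightweight context block in the spirit of the paper:
# parallel dilated group convolutions widen the receptive field around
# small objects cheaply. The exact layout is illustrative.
import torch
import torch.nn as nn

class ContextBlock(nn.Module):
    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d, groups=groups)
            for d in (1, 2, 4)                    # growing receptive fields
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)   # 1x1 fusion
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        ctx = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(x + self.fuse(ctx))       # residual context injection

feat = torch.randn(1, 64, 80, 80)                 # a backbone feature map
print(ContextBlock(64)(feat).shape)               # -> torch.Size([1, 64, 80, 80])
```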

33 pages, 418 KiB  
Article
A User Study on Modeling IoT-Aware Processes with BPMN 2.0
by Yusuf Kirikkayis, Michael Winter and Manfred Reichert
Information 2024, 15(4), 229; https://doi.org/10.3390/info15040229 - 18 Apr 2024
Abstract
Integrating the Internet of Things (IoT) into business process management (BPM) aims to increase the automation level, efficiency, transparency, and comprehensibility of the business processes taking place in the physical world. The IoT enables the seamless networking of physical devices, allowing for the enrichment of processes with real-time data about the physical world and, thus, for optimized process automation and monitoring. To realize these benefits, the modeling of IoT-aware processes needs to be appropriately supported. Despite the great attention paid to this topic, more clarity is needed about the current state of the art of corresponding modeling solutions. Capturing IoT characteristics in business process models visually or based on labels is essential to ensure effective design and communication of IoT-aware business processes. A clear discernibility of IoT characteristics can enable the precise modeling and analysis of IoT-aware processes and facilitate collaboration among different stakeholders. With an increasing number of process model elements, it becomes crucial that process model readers can understand the IoT aspects of business processes in order to make informed decisions and to optimize the processes with respect to IoT integration. This paper presents the results of a large user study (N = 249) that explored the perception of IoT aspects in BPMN 2.0 process models to gain insights into the IoT’s involvement in business processes that drive the successful implementation and communication of IoT-aware processes. Full article
(This article belongs to the Special Issue Recent Advances in IoT and Cyber/Physical System)

14 pages, 1081 KiB  
Article
Ensemble Modeling with a Bayesian Maximal Information Coefficient-Based Model of Bayesian Predictions on Uncertainty Data
by Tisinee Surapunt and Shuliang Wang
Information 2024, 15(4), 228; https://doi.org/10.3390/info15040228 - 18 Apr 2024
Abstract
Uncertainty presents unfamiliar circumstances or incomplete information that may be difficult to handle with a single model of a traditional machine learning algorithm. Such models are possibly limited by inadequate data, an ambiguous model, and learning performance when making a prediction. Therefore, ensemble modeling is proposed as a powerful model for enhancing predictive capabilities and robustness. This study aims to apply Bayesian prediction to ensemble modeling because it can encode conditional dependencies between variables and present the reasoning through the Bayesian maximal information coefficient (BMIC) model. The BMIC clarifies the knowledge in the model, which is then ready for learning. It was therefore selected as the base model to be integrated with well-known algorithms such as logistic regression, K-nearest neighbors, decision trees, random forests, support vector machines (SVMs), neural networks, naive Bayes, and XGBoost classifiers. Also, the Bayesian neural network (BNN) and the probabilistic Bayesian neural network (PBN) were considered to compare their performance as single models. The findings of this study indicate that the ensemble model of the BMIC with some traditional algorithms, namely SVM, random forest, neural networks, and XGBoost classifiers, returns 96.3% model accuracy in prediction. It provides a more reliable model and a versatile approach to support decision-making. Full article
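The ensemble step can be sketched with scikit-learn's soft-voting combinator; since the BMIC base model is not publicly available, a naive Bayes classifier stands in for it here, next to the SVM and random forest members named in the abstract.

```python
# Sketch of the ensemble idea: combine several base classifiers by soft
# voting. A naive Bayes model stands in for the unavailable BMIC base
# model; the data are a toy synthetic set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=12, random_state=0)
ensemble = VotingClassifier(
    estimators=[("bmic_stand_in", GaussianNB()),
                ("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",
)
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean().round(3))
```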

25 pages, 1661 KiB  
Article
An Optimized Deep Learning Approach for Detecting Fraudulent Transactions
by Said El Kafhali, Mohammed Tayebi and Hamza Sulimani
Information 2024, 15(4), 227; https://doi.org/10.3390/info15040227 - 18 Apr 2024
Abstract
The proliferation of new technologies and advancements in existing ones are altering our perspective of the world, so continuous improvements are needed. A connected world filled with a vast amount of data was created as a result of the integration of these advanced technologies in the financial sector. The advantages of this connection came at the cost of more sophisticated and advanced attacks, such as fraudulent transactions. To address these illegal transactions, researchers and engineers have created and implemented various systems and models to detect fraudulent transactions; many of them produce better results than others. On the other hand, criminals change their strategies and technologies to imitate legitimate transactions. In this article, the objective is to propose an intelligent system for detecting fraudulent transactions using various deep learning architectures, including artificial neural networks (ANNs), recurrent neural networks (RNNs), and long short-term memory (LSTM). Furthermore, the Bayesian optimization algorithm is used for hyperparameter optimization. For the evaluation, a credit card fraudulent transaction dataset was used. Based on the many experiments conducted, the RNN architecture demonstrated better efficiency and yielded better results in a shorter computational time than the ANN and LSTM architectures. Full article

28 pages, 2191 KiB  
Article
Chatbot Design and Implementation: Towards an Operational Model for Chatbots
by Alexander Skuridin and Martin Wynn
Information 2024, 15(4), 226; https://doi.org/10.3390/info15040226 - 17 Apr 2024
Abstract
The recent past has witnessed a growing interest in technologies for creating chatbots. Advances in Large Language Models for natural language processing are underpinning rapid progress in chatbot development, and experts predict revolutionary changes in the labour market as many manual tasks are replaced by virtual assistants in a range of business functions. As the new technology becomes more accessible and advanced, more companies are exploring the possibilities of implementing virtual assistants to automate routine tasks and improve service. This article reports on qualitative inductive research undertaken within a chatbot development team operating in a major international enterprise. The findings identify critical success factors for chatbot projects, and a model is developed and validated to support the planning and implementation of chatbot projects. The presented model can serve as an exemplary guide for researchers and practitioners working in this field. It is flexible and applicable in a wide range of business contexts, linking strategic business goals with execution steps. It is particularly applicable for teams with no experience in chatbot implementation, reducing uncertainty and managing decisions and risks throughout the project lifecycle, thereby increasing the likelihood of project success. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) for Economics and Business Management)

21 pages, 3674 KiB  
Article
Economic Scheduling Model of an Active Distribution Network Based on Chaotic Particle Swarm Optimization
by Yaxuan Xu, Jianuo Liu, Zhongqi Cui, Ziying Liu, Chenxu Dai, Xiangzhen Zang and Zhanlin Ji
Information 2024, 15(4), 225; https://doi.org/10.3390/info15040225 - 17 Apr 2024
Abstract
With the continuous increase in global energy demand and growing environmental awareness, the utilization of renewable energy has become a worldwide consensus. In order to address the challenges posed by the intermittent and unpredictable nature of renewable energy in distributed power distribution networks, as well as to improve the economic and operational stability of distribution systems, this paper proposes the establishment of an active distribution network capable of accommodating renewable energy. The objective is to enhance the efficiency of new energy utilization. This study investigates optimal scheduling models for energy storage technologies and economic-operation dispatching techniques in distributed power distribution networks. Additionally, it develops a comprehensive demand response model, with real-time pricing and incentive policies aiming to minimize load peak–valley differentials. The control mechanism incorporates time-of-use pricing and integrates a chaos particle swarm algorithm for a holistic approach to solution finding. By coordinating and optimizing the control of distributed power sources, energy storage systems, and flexible loads, the active distribution network achieves minimal operational costs while meeting demand-side power requirements, striving to smooth out load curves as much as possible. Case studies demonstrate significant enhancements: an approximately 60% increase in load power during off-peak periods, an overall elevation of load factors during regular periods, and a reduction in grid loads during evening peak hours, with a maximum decrease of nearly 65 kW. This approach mitigates grid operational pressures and user expenses, effectively enhancing the stability and economic efficiency of distribution network operations. Full article
(This article belongs to the Special Issue Optimization Algorithms for Engineering Applications)
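The chaos ingredient is easy to isolate: the sketch below runs a particle swarm whose random coefficients come from a logistic map, with a toy quadratic standing in for the dispatch-cost objective.

```python
# Minimal chaotic particle swarm sketch: a logistic map replaces the uniform
# random draws in the velocity update, as in chaos-enhanced PSO variants.
# The objective is a toy stand-in for the paper's dispatch-cost model.
import numpy as np

def objective(x):                                # toy scheduling-cost proxy
    return np.sum(x ** 2, axis=1)

rng = np.random.default_rng(0)
n, dim = 30, 5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
chaos = rng.uniform(0.1, 0.9, (n, dim))          # logistic-map state
pbest, pcost = pos.copy(), objective(pos)
gbest = pbest[pcost.argmin()]

for _ in range(200):
    chaos = 4.0 * chaos * (1.0 - chaos)          # logistic map, r = 4
    r1, r2 = chaos, 4.0 * chaos * (1.0 - chaos)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    cost = objective(pos)
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    gbest = pbest[pcost.argmin()]

print("best cost:", pcost.min())
```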

17 pages, 6368 KiB  
Article
Violin Music Emotion Recognition with Fusion of CNN–BiGRU and Attention Mechanism
by Sihan Ma and Ruohua Zhou
Information 2024, 15(4), 224; https://doi.org/10.3390/info15040224 - 16 Apr 2024
Abstract
Music emotion recognition has garnered significant interest in recent years, as the emotions expressed through music can profoundly enhance our understanding of its deeper meanings. The violin, with its distinctive emotional expressiveness, has become a focal point in this field of research. To address the scarcity of specialized data, we developed a dataset specifically for violin music emotion recognition named VioMusic. This dataset offers a precise and comprehensive platform for the analysis of emotional expressions in violin music, featuring specialized samples and evaluations. Moreover, we implemented the CNN–BiGRU–Attention (CBA) model to establish a baseline system for music emotion recognition. Our experimental findings show that the CBA model effectively captures the emotional nuances in violin music, achieving mean absolute errors (MAE) of 0.124 and 0.129. The VioMusic dataset proves to be highly practical for advancing the study of emotion recognition in violin music, providing valuable insights and a robust framework for future research. Full article

20 pages, 541 KiB  
Article
An Extensive Performance Comparison between Feature Reduction and Feature Selection Preprocessing Algorithms on Imbalanced Wide Data
by Ismael Ramos-Pérez, José Antonio Barbero-Aparicio, Antonio Canepa-Oneto, Álvar Arnaiz-González and Jesús Maudes-Raedo
Information 2024, 15(4), 223; https://doi.org/10.3390/info15040223 - 16 Apr 2024
Abstract
The most common preprocessing techniques used to deal with datasets having high dimensionality and a low number of instances—or wide data—are feature reduction (FR), feature selection (FS), and resampling. This study explores the use of FR and resampling techniques, expanding the limited comparisons between FR and filter FS methods in the existing literature, especially in the context of wide data. We compare the optimal outcomes from a previous comprehensive study of FS against new experiments conducted using FR methods. Two specific challenges associated with the use of FR are outlined in detail: finding FR methods that are compatible with wide data and the need for a reduction estimator of nonlinear approaches to process out-of-sample data. The experimental study compares 17 techniques, including supervised, unsupervised, linear, and nonlinear approaches, using 7 resampling strategies and 5 classifiers. The results demonstrate which configurations are optimal, according to their performance and computation time. Moreover, the best configuration—namely, k Nearest Neighbor (KNN) + the Maximal Margin Criterion (MMC) feature reducer with no resampling—is shown to outperform state-of-the-art algorithms. Full article

17 pages, 1665 KiB  
Article
Time Series Forecasting with Missing Data Using Generative Adversarial Networks and Bayesian Inference
by Xiaoou Li
Information 2024, 15(4), 222; https://doi.org/10.3390/info15040222 - 15 Apr 2024
Abstract
This paper tackles the challenge of time series forecasting in the presence of missing data. Traditional methods often struggle with such data, which leads to inaccurate predictions. We propose a novel framework that combines the strengths of Generative Adversarial Networks (GANs) and Bayesian inference. The framework utilizes a Conditional GAN (C-GAN) to realistically impute missing values in the time series data. Subsequently, Bayesian inference is employed to quantify the uncertainty associated with the forecasts due to the missing data. This combined approach improves the robustness and reliability of forecasting compared to traditional methods. The effectiveness of our proposed method is evaluated on a real-world dataset of air pollution data from Mexico City. The results demonstrate the framework’s capability to handle missing data and achieve improved forecasting accuracy. Full article
(This article belongs to the Special Issue New Deep Learning Approach for Time Series Forecasting)

19 pages, 633 KiB  
Article
Phase Noise Effects on OFDM Chirp Communication Systems: Characteristics and Compensation
by Mengjiao Li and Wenqin Wang
Information 2024, 15(4), 221; https://doi.org/10.3390/info15040221 - 14 Apr 2024
Abstract
Orthogonal frequency-division multiplexing (OFDM) chirp waveforms are an attractive candidate to be a dual-function signal scheme for joint radar and communication systems. OFDM chirp signals can not only be employed to transmit communication data through classic phase modulation, but can also perform radar detection by applying linear frequency modulation to the subcarriers. However, the performance of the OFDM chirp communication system in a phase noise environment remains uninvestigated. This paper discusses the influence of phase noise on OFDM chirp communication systems and proposes effective phase noise estimation and compensation methods. We find that the phase noise effect on OFDM chirp communication systems consists of a common phase error (CPE) and inter-carrier interference (ICI). If not compensated, the performance of the dual-function systems can be seriously degraded. In particular, an exact expression for the signal-plus-interference to noise ratio (SINR) for the OFDM chirp communication system is derived and some critical parameters are analyzed to exhibit the phase noise effects on system performance. Moreover, two low-complexity estimation approaches, maximum likelihood (ML) and linear minimum mean square error (LMMSE), as well as two compensation approaches, the de-correlation and cancellation algorithms, are respectively utilized to eliminate the phase noise impairment. Finally, the phase noise effects and the effectiveness of the compensation approaches are verified by extensive numerical results. Full article
(This article belongs to the Section Information Systems)
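The CPE component lends itself to a compact numerical illustration: on pilot subcarriers with a known channel, an ML-style estimate is the angle of the pilot correlation sum, which is then de-rotated from all subcarriers. ICI is lumped into the noise term in this sketch.

```python
# NumPy sketch of common-phase-error (CPE) estimation from pilot subcarriers:
# the ML-style estimate is the angle of the pilot correlation sum, which is
# then de-rotated from all subcarriers. ICI is left as residual noise here.
import numpy as np

rng = np.random.default_rng(1)
N, pilots = 64, np.arange(0, 64, 8)               # 8 pilot subcarriers
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
X = qpsk[rng.integers(0, 4, N)]                   # transmitted QPSK symbols
H = rng.normal(size=N) + 1j * rng.normal(size=N)  # known (estimated) channel

cpe = 0.3                                         # common phase error (rad)
noise = 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))
Y = H * X * np.exp(1j * cpe) + noise              # ICI lumped into the noise

cpe_hat = np.angle(np.sum(Y[pilots] * np.conj(H[pilots] * X[pilots])))
Y_corrected = Y * np.exp(-1j * cpe_hat)           # de-rotate all subcarriers
print(f"true CPE={cpe:.3f} rad, estimate={cpe_hat:.3f} rad")
```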

12 pages, 4072 KiB  
Article
Early Parkinson’s Disease Diagnosis through Hand-Drawn Spiral and Wave Analysis Using Deep Learning Techniques
by Yingcong Huang, Kunal Chaturvedi, Al-Akhir Nayan, Mohammad Hesam Hesamian, Ali Braytee and Mukesh Prasad
Information 2024, 15(4), 220; https://doi.org/10.3390/info15040220 - 13 Apr 2024
Abstract
Parkinson’s disease (PD) is a chronic brain disorder affecting millions worldwide. It occurs when brain cells that produce dopamine, a chemical controlling movement, die or become damaged. This leads to PD, which causes problems with movement, balance, and posture. Early detection is crucial to slow its progression and improve the quality of life for PD patients. This paper proposes a handwriting-based prediction approach combining a cosine annealing scheduler with deep transfer learning. It utilizes the NIATS dataset, which contains handwriting samples from individuals with and without PD, to evaluate six different models: VGG16, VGG19, ResNet18, ResNet50, ResNet101, and ViT. This paper compares the performance of these models based on three metrics: accuracy, precision, and F1 score. The results showed that the VGG19 model, combined with the proposed method, achieved the highest average accuracy of 96.67%. Full article
(This article belongs to the Special Issue Editorial Board Members’ Collection Series: "Information Processes")
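The recipe named in the abstract, a pre-trained VGG19 trained under a cosine annealing scheduler, can be set up as below; data loading and the NIATS images are omitted, with a random batch as a stand-in.

```python
# Hedged PyTorch sketch of the training recipe named in the abstract:
# a pre-trained VGG19 with a new two-class head, optimized with a cosine
# annealing learning-rate scheduler. Real data loading is omitted.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, 2)          # PD vs. healthy drawings

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)

x = torch.randn(4, 3, 224, 224)                   # stand-in spiral/wave batch
y = torch.tensor([0, 1, 0, 1])
for epoch in range(3):                            # shortened training loop
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                              # cosine-annealed LR
    print(epoch, loss.item(), scheduler.get_last_lr())
```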

13 pages, 449 KiB  
Article
A Proactive Decision-Making Model for Evaluating the Reliability of Infrastructure Assets of a Railway System
by Daniel O. Aikhuele and Shahryar Sorooshian
Information 2024, 15(4), 219; https://doi.org/10.3390/info15040219 - 13 Apr 2024
Abstract
Railway infrastructure is generally classified as either fixed or movable infrastructure assets. Failure in any of the assets could lead to the complete shutdown and disruption of the entire system, economic loss, inconvenience to passengers and the train operating company(s), and can sometimes result in death or injury in the event of the derailment of the rolling stock. Considering the importance of the railway infrastructure assets, it is only necessary to continuously explore their behavior, reliability, and safety. In this paper, a proactive multi-criteria decision-making model that is based on an interval-valued intuitionistic fuzzy set and some reliability quantitative parameters has been proposed for the evaluation of the reliability of the infrastructure assets. Results from the evaluation show that the failure mode ‘Broken and defective rails’ has the most risk and reliability concerns. Hence, priority should be given to the failure mode to avoid a total system collapse. Full article
(This article belongs to the Special Issue Second Edition of Predictive Analytics and Data Science)

23 pages, 706 KiB  
Article
Using ML to Predict User Satisfaction with ICT Technology for Educational Institution Administration
by Hamad Almaghrabi, Ben Soh and Alice Li
Information 2024, 15(4), 218; https://doi.org/10.3390/info15040218 - 12 Apr 2024
Abstract
Effective and efficient use of information and communication technology (ICT) systems in the administration of educational organisations is crucial to optimise their performance. Earlier research on the identification and analysis of ICT users’ satisfaction with administration tasks in education is limited and inconclusive, as it focuses on using ICT for nonadministrative tasks. To address this gap, this study employs Artificial Intelligence (AI) and machine learning (ML) in conjunction with a survey technique to predict the satisfaction of ICT users. In doing so, it provides an insight into the key factors that impact users’ satisfaction with the ICT administrative systems. The results reveal that AI and ML models predict ICT user satisfaction with an accuracy of 94%, and identify the specific ICT features, such as usability, privacy, security, and Information Technology (IT) support, as key determinants of satisfaction. The ability to predict user satisfaction is important as it allows organisations to make data-driven decisions to improve their ICT systems to better meet the needs and expectations of users, maximising labour effort while minimising resources, and identifying potential issues earlier. The findings of this study have important implications for the use of ML in improving the administration of educational institutions and providing valuable insights for decision-makers and developers. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) for Economics and Business Management)

21 pages, 2251 KiB  
Article
Predicting Individual Well-Being in Teamwork Contexts Based on Speech Features
by Tobias Zeulner, Gerhard Johann Hagerer, Moritz Müller, Ignacio Vazquez and Peter A. Gloor
Information 2024, 15(4), 217; https://doi.org/10.3390/info15040217 - 12 Apr 2024
Abstract
Current methods for assessing individual well-being in team collaboration at the workplace often rely on manually collected surveys. This limits continuous real-world data collection and proactive measures to improve team member workplace satisfaction. We propose a method to automatically derive social signals related to individual well-being in team collaboration from raw audio and video data collected in teamwork contexts. The goal was to develop computational methods and measurements to facilitate the mirroring of individuals’ well-being to themselves. We focus on how speech behavior is perceived by team members to improve their well-being. Our main contribution is the assembly of an integrated toolchain to perform multi-modal extraction of robust speech features in noisy field settings and to explore which features are predictors of self-reported satisfaction scores. We applied the toolchain to a case study, where we collected videos of 20 teams with 56 participants collaborating over a four-day period in a team project in an educational environment. Our audiovisual speaker diarization extracted individual speech features from a noisy environment. As the dependent variable, team members filled out a daily PERMA (positive emotion, engagement, relationships, meaning, and accomplishment) survey. These well-being scores were predicted using speech features extracted from the videos using machine learning. The results suggest that the proposed toolchain was able to automatically predict individual well-being in teams, leading to better teamwork and happier team members. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence, 2nd Edition)

15 pages, 2781 KiB  
Article
There Are Infinite Ways to Formulate Code: How to Mitigate the Resulting Problems for Better Software Vulnerability Detection
by Jinghua Groppe, Sven Groppe, Daniel Senf and Ralf Möller
Information 2024, 15(4), 216; https://doi.org/10.3390/info15040216 - 11 Apr 2024
Abstract
Given a set of software programs, each labeled either as vulnerable or benign, deep learning technology can be used to automatically build a software vulnerability detector. A challenge in this context is that there are countless equivalent ways to implement a particular functionality in a program. For instance, the naming of variables is often a matter of a programmer’s personal style, which makes the detection of vulnerability patterns in programs difficult. Current deep learning approaches to software vulnerability detection rely on the raw text of a program and exploit general natural language processing capabilities to cope with the different naming schemes found in instances of vulnerability patterns. However, relying on natural language processing to learn variable reference structures from raw text often places too high a burden on the model. Thus, approaches based on deep learning still struggle to produce detectors with decent generalization properties due to the naming problem or, more generally, the vocabulary explosion problem. In this work, we propose techniques to mitigate this problem by making the referential structure of variable references explicit in the input representations for deep learning approaches. Evaluation results show that deep learning models based on the techniques presented in this article outperform raw text approaches for vulnerability detection. In addition, the new techniques require only a very small main-memory footprint: as our experiments indicate, memory usage can be reduced by up to four orders of magnitude compared to existing methods. Full article
(This article belongs to the Section Artificial Intelligence)
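One simple way to make referential structure explicit, in the spirit of the approach described, is to alpha-rename identifiers to positional tokens so that two programs differing only in naming collapse to the same representation. The sketch below illustrates the idea on Python source; it is a simplification, not the paper’s exact encoding, and real tooling would need language-aware parsing to treat keywords, function names, and attributes separately.

```python
# Illustrative simplification of the general idea: rename every identifier to a
# positional token (VAR0, VAR1, ...) so naming style no longer matters, while
# repeated references to the same variable keep the same token.
import io
import keyword
import tokenize

def canonicalize(source: str) -> str:
    mapping: dict[str, str] = {}
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            # Same original name -> same canonical token, preserving references.
            mapping.setdefault(tok.string, f"VAR{len(mapping)}")
            out.append(mapping[tok.string])
        else:
            out.append(tok.string)
    return " ".join(out)

# Two programs that differ only in naming collapse to one token sequence.
a = "total = price * count\nprint(total)\n"
b = "s = x * n\nprint(s)\n"
assert canonicalize(a) == canonicalize(b)
print(canonicalize(a))
```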
16 pages, 4061 KiB  
Article
LighterFace Model for Community Face Detection and Recognition
by Yuntao Shi, Hongfei Zhang, Wei Guo, Meng Zhou, Shuqin Li, Jie Li and Yu Ding
Information 2024, 15(4), 215; https://doi.org/10.3390/info15040215 - 11 Apr 2024
Viewed by 511
Abstract
This research proposes a face detection algorithm named LighterFace, which aims to increase detection speed to meet the demands of real-time community applications. Two pre-trained convolutional neural networks are combined, namely the Cross Stage Partial Network (CSPNet) and ShuffleNetv2. A Global Attention Mechanism (GAMAttention) is then attached to the optimized network to compensate for the accuracy loss caused by slimming the network structure. Additionally, the learning rate of the detection model is dynamically updated using the cosine annealing method, which speeds up convergence during training. This paper analyzes the training of the LighterFace model on the WiderFace dataset and a custom community dataset, with the aim of detecting faces in real-life community settings. Compared to the mainstream YOLOv5 model, LighterFace reduces computational demands by 85.4% while increasing detection speed by 66.3% and attaining 90.6% accuracy in face detection. It is worth noting that LighterFace generates high-quality cropped face images, providing valuable inputs for subsequent face recognition models such as DeepID. The LighterFace model is also specifically designed to run on edge devices with limited computational capabilities, and its real-time performance on a Raspberry Pi 3B+ validates this design. Full article
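The cosine annealing schedule mentioned above is available off the shelf in PyTorch; a minimal sketch follows, in which the placeholder model, learning rates, and epoch count are illustrative rather than LighterFace’s actual training settings.

```python
# Minimal sketch of cosine annealing with PyTorch's built-in scheduler.
# The model and hyperparameters are placeholders, not LighterFace's settings.
import torch

model = torch.nn.Conv2d(3, 16, kernel_size=3)  # stand-in for the detector
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=50, eta_min=1e-5
)

for epoch in range(50):
    # ... forward pass, loss, and backward pass would go here ...
    optimizer.step()   # placeholder for a real gradient step
    scheduler.step()   # moves lr along a cosine curve from 0.01 down to 1e-5
    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}: lr = {scheduler.get_last_lr()[0]:.6f}")
```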
19 pages, 336 KiB  
Article
Automated Mapping of Common Vulnerabilities and Exposures to MITRE ATT&CK Tactics
by Ioana Branescu, Octavian Grigorescu and Mihai Dascalu
Information 2024, 15(4), 214; https://doi.org/10.3390/info15040214 - 10 Apr 2024
Viewed by 544
Abstract
Effectively understanding and categorizing vulnerabilities is vital in the ever-evolving cybersecurity landscape, since a single exposure can have a devastating effect on an entire system. Given the increasingly massive number of threats and the size of modern infrastructures, the need for structured, uniform cybersecurity knowledge systems has arisen. To tackle this challenge, the MITRE Corporation set up two powerful sources of cyber threat and vulnerability information: the Common Vulnerabilities and Exposures (CVEs) list, focused on identifying and fixing software vulnerabilities, and the MITRE ATT&CK Enterprise Matrix, a framework for defining and categorizing adversary actions and ways to defend against them. At the moment, the two are not directly linked, even though such a link would have a significant positive impact on the cybersecurity community. This study aims to automatically map CVEs to the corresponding 14 MITRE ATT&CK tactics using state-of-the-art transformer-based models. Various architectures, from encoders to generative large-scale models, are employed to tackle this multilabel classification problem. Our results are promising, with the fine-tuned models achieving closely clustered F1 scores: SecBERT (78.77%), CyBERT (78.54%), TARS (78.01%), and SecRoBERTa (77.81%). GPT-4, in contrast, showed weak performance in zero-shot settings (22.04%). In addition, we perform an in-depth error analysis to better understand the models’ performance and limitations. We release the code used for all experiments as open source. Full article
(This article belongs to the Special Issue Advances in Cybersecurity and Reliability)
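A multilabel setup of the kind described typically attaches a 14-output sigmoid head to a pre-trained encoder, one output per ATT&CK tactic. The hedged sketch below uses Hugging Face Transformers with a generic BERT checkpoint as a stand-in; the paper’s actual models, fine-tuning procedure, and decision thresholds are not reproduced here.

```python
# Hedged sketch of the multilabel architecture: a generic BERT encoder with a
# 14-way sigmoid head. The checkpoint and 0.5 threshold are assumptions; without
# fine-tuning on labeled CVEs, the predictions below are arbitrary.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "bert-base-uncased"  # stand-in; the paper evaluates SecBERT, CyBERT, etc.
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=14, problem_type="multi_label_classification"
)

cve_text = "Buffer overflow in the HTTP parser allows remote code execution."
inputs = tokenizer(cve_text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 14): one logit per tactic

# Sigmoid plus a threshold yields independent per-tactic decisions.
probs = torch.sigmoid(logits)[0]
predicted = (probs > 0.5).nonzero(as_tuple=True)[0].tolist()
print("predicted tactic indices:", predicted)
```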
15 pages, 1706 KiB  
Article
A Flexible Infrastructure-Sharing 5G Network Architecture Based on Network Slicing and Roaming
by João P. Ferreira, Vinicius C. Ferreira, Sérgio L. Nogueira, João M. Faria and José A. Afonso
Information 2024, 15(4), 213; https://doi.org/10.3390/info15040213 - 10 Apr 2024
Viewed by 595
Abstract
The sharing of mobile network infrastructure has become a key topic with the introduction of 5G due to the high costs of deploying such infrastructures. Neutral host models, coupled with features such as network function virtualization (NFV) and network slicing, have emerged as viable solutions to the challenges in this area. With this in mind, this work presents the design, implementation, and testing of a flexible infrastructure-sharing 5G network architecture capable of providing services to any type of client, whether an operator or not. The proposed architecture leverages 5G network slicing for traffic isolation and compliance with the policies of different clients, while roaming is employed to authenticate the users of operator clients. The architecture was implemented and tested in a simulation environment using the UERANSIM and Open5GS open-source tools. Qualitative tests successfully validated the authentication and traffic isolation provided by the slices for the two types of clients. The results also demonstrate that the proposed architecture has a positive impact on the performance of the neutral host network infrastructure, achieving 61.8% higher throughput and a 96.8% lower packet loss ratio (PLR) in a scenario where the infrastructure is shared among four clients and eight users, compared to a single client holding all the network resources. Full article
(This article belongs to the Special Issue Wireless IoT Network Protocols II)
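As a loose illustration only (not the authors’ implementation), the sketch below models the bookkeeping at the heart of such a neutral host: each tenant owns a slice identifier (S-NSSAI, combining a Slice/Service Type and a Slice Differentiator), and a user session is admitted only onto its own tenant’s slice, which is what yields traffic isolation. All tenant names and identifier values are hypothetical.

```python
# Loose illustration of neutral-host slice bookkeeping; all names and values
# are hypothetical, and this is not the paper's implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class SNssai:
    sst: int  # Slice/Service Type (e.g., 1 = eMBB)
    sd: str   # Slice Differentiator, separating tenants that share an SST

# Hypothetical tenants: two MNO clients (authenticated via roaming) and one
# non-MNO client, each assigned a dedicated slice.
slices = {
    "mno_a": SNssai(sst=1, sd="000001"),
    "mno_b": SNssai(sst=1, sd="000002"),
    "factory": SNssai(sst=1, sd="000003"),
}

def admit(user_tenant: str, requested: SNssai) -> bool:
    """Admit a session only when a user requests its own tenant's slice."""
    return slices.get(user_tenant) == requested

print(admit("mno_a", slices["mno_a"]))  # True: own slice
print(admit("mno_a", slices["mno_b"]))  # False: another tenant's slice is isolated
```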