Harnessing Artificial Intelligence for Social and Semantic Understanding

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (31 March 2025) | Viewed by 34866

Special Issue Editors


Guest Editor
Department of Informatics and Computer Engineering, University of West Attica, Egaleo, Greece
Interests: IoT; data and social mining; environmental monitoring; semantics; knowledge representation; spatio-temporal

Guest Editor
Department of Informatics and Computer Engineering, University of West Attica, 12243 Egaleo, Greece
Interests: personalization; human–computer interaction; artificial intelligence

Guest Editor
Department of Informatics and Computer Engineering, University of West Attica, 12243 Egaleo, Greece
Interests: semantic analysis; multimedia applications; artificial intelligence

Guest Editor
Department of Informatics and Computer Engineering, University of West Attica, 12243 Egaleo, Greece
Interests: software engineering; educational technology; artificial intelligence

Special Issue Information

Dear Colleagues,

In today's data-rich landscape, the demand for efficient artificial intelligence (AI) models, tools, and applications for data mining and machine learning is more pronounced than ever. The proliferation of social and semantic networks has generated a wealth of valuable information, presenting both opportunities and challenges. Researchers are developing novel approaches to address emerging research questions, particularly in the realms of social information processing and semantic applications. The proposed Special Issue and the SMAP workshop series revolve around two central themes: first, the extraction, processing, and analysis of data, information, and knowledge using AI techniques; and second, the application of these results to construct effective models and tools. Ultimately, this interdisciplinary endeavor aims to foster diverse behaviors associated with AI activities, bridging the gap between social and semantic impact. Computer scientists are encouraged to contribute innovative AI solutions to tackle the inherent challenges posed by dynamic and semantically heterogeneous computational data. The call for relevant research manuscripts is open to all interested parties.

In more detail, the Special Issue and the SMAP workshop series are organized around two main themes: the first focuses on efficiently extracting, processing, manipulating and analysing data, information and knowledge with the aid of artificial intelligence, while the second focuses on using these results to build effective models, tools and applications. The ultimate goal is to promote a variety of machine and human behaviours associated with these artificial intelligence activities.

In addition to the Open Call, selected papers presented at SMAP 2024 will be invited for submission as extended versions to this Special Issue. In this case, the workshop paper should be cited and noted on the first page of the submission; authors are asked to disclose in their Cover Letter that the manuscript extends a workshop paper and to include a statement of what has changed compared with the original. Each submission to this Special Issue should contain at least 50% new material, e.g., technical extensions, more in-depth evaluations, or additional use cases.

All submitted papers will undergo standard peer-review procedures. Accepted papers will be published in open access format in Computers and made available on the Special Issue’s website.

Dr. Yorghos Voutos
Dr. Akrivi Krouska
Dr. Christos Troussas
Dr. Phivos Mylonas
Prof. Dr. Cleo Sgouropoulou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence (AI)
  • social network
  • semantic network

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (15 papers)


Research

31 pages, 1751 KiB  
Article
Enhancing User Experiences in Digital Marketing Through Machine Learning: Cases, Trends, and Challenges
by Alexios Kaponis, Manolis Maragoudakis and Konstantinos Chrysanthos Sofianos
Computers 2025, 14(6), 211; https://doi.org/10.3390/computers14060211 - 29 May 2025
Viewed by 701
Abstract
Online marketing environments are rapidly being transformed by Artificial Intelligence (AI). This transformation rests on the implementation of Machine Learning (ML), which has significant potential in content personalization, enhanced usability, and hyper-targeted marketing, and which will reconfigure how businesses reach and serve customers. This study systematically examines machine learning in the Digital Marketing (DM) industry, focusing on its effect on human–computer interaction (HCI). The research methodically elucidates how machine learning can be applied to automate user-engagement strategies that improve user experience (UX) and customer retention, and to optimize recommendations derived from consumer behavior. The objective of the present study is to critically analyze the functional and ethical considerations of ML integration in DM and to evaluate its implications for data-driven personalization. Through selected case studies, the investigation also provides empirical evidence of the implications of ML applications for UX and customer loyalty as well as associated ethical aspects. These include algorithmic bias, concerns about data privacy, and the need for greater transparency of ML-based decision-making processes. This research also contributes to the field by delivering actionable, data-driven strategies for marketing professionals and offering them frameworks to deal with the evolving responsibilities and tasks that accompany the introduction of ML technologies into DM. Full article

26 pages, 610 KiB  
Article
A Black-Box Analysis of the Capacity of ChatGPT to Generate Datasets of Human-like Comments
by Alejandro Rosete, Guillermo Sosa-Gómez and Omar Rojas
Computers 2025, 14(5), 162; https://doi.org/10.3390/computers14050162 - 27 Apr 2025
Viewed by 821
Abstract
This paper examines the ability of ChatGPT to generate synthetic comment datasets that mimic those produced by humans. To this end, a collection of datasets containing human comments, freely available in the Kaggle repository, was compared to comments generated via ChatGPT. The latter were based on prompts designed to provide the necessary context for approximating human results. It was hypothesized that the responses obtained from ChatGPT would demonstrate a high degree of similarity with the human-generated datasets with regard to vocabulary usage. Two categories of prompts were analyzed, depending on whether they specified the desired length of the generated comments. The evaluation of the results primarily focused on the vocabulary used in each comment dataset, employing several analytical measures. This analysis yielded noteworthy observations, which reflect the current capabilities of ChatGPT in this particular task domain. It was observed that ChatGPT typically employs a reduced number of words compared to human respondents and tends to provide repetitive answers. Furthermore, the responses of ChatGPT have been observed to vary considerably when the length is specified. It is noteworthy that ChatGPT employs a smaller vocabulary, which does not always align with human language. Furthermore, the proportion of non-stop words in ChatGPT’s output is higher than that found in human communication. Finally, ChatGPT’s vocabulary is more closely aligned with human language than the vocabularies of the two ChatGPT configurations are with each other. This alignment is particularly evident in the use of stop words. While it does not fully achieve the intended purpose, the generated vocabulary serves as a reasonable approximation, enabling specific applications such as the creation of word clouds. Full article
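As a point of reference for the kind of vocabulary measures described in this abstract, the sketch below computes vocabulary size and the share of non-stop-word tokens for two small comment sets. It is an illustrative reconstruction rather than the authors' code; the example comments and the use of scikit-learn's English stop-word list are assumptions.

```python
# Illustrative sketch only (not the paper's code): compare vocabulary size and the
# share of non-stop-word tokens between human and ChatGPT-generated comment sets.
import re
from collections import Counter

from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS


def vocab_stats(comments):
    """Return token count, vocabulary size, and non-stop-word share."""
    tokens = [t for c in comments for t in re.findall(r"[a-z']+", c.lower())]
    vocab = Counter(tokens)
    non_stop = sum(n for w, n in vocab.items() if w not in ENGLISH_STOP_WORDS)
    return {
        "tokens": len(tokens),
        "vocab_size": len(vocab),
        "non_stop_share": round(non_stop / max(len(tokens), 1), 3),
    }


human_comments = ["Great product, arrived on time.", "Not at all what I expected."]
chatgpt_comments = ["The product is good.", "The product is good and useful."]

print("human:  ", vocab_stats(human_comments))
print("chatgpt:", vocab_stats(chatgpt_comments))
```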

15 pages, 3742 KiB  
Article
An Innovative Approach to Topic Clustering for Social Media and Web Data Using AI
by Ioannis Kapantaidakis, Emmanouil Perakakis, George Mastorakis and Ioannis Kopanakis
Computers 2025, 14(4), 142; https://doi.org/10.3390/computers14040142 - 10 Apr 2025
Viewed by 919
Abstract
The vast amount of social media and web data offers valuable insights for purposes such as brand reputation management, topic research, competitive analysis, product development, and public opinion surveys. However, analysing these data to identify patterns and extract valuable insights is challenging due to the vast number of posts, which can number in the thousands within a single day. One practical approach is topic clustering, which creates clusters of mentions that refer to a specific topic. Following this process will create several manageable clusters, each containing hundreds or thousands of posts. These clusters offer a more meaningful overview of the discussed topics, eliminating the need to categorise each post manually. Several topic detection algorithms can achieve clustering of posts, such as LDA, NMF, BERTopic, etc. The existing algorithms, however, have several important drawbacks, including language constraints and slow or resource-intensive data processing. Moreover, the labels for the clusters typically consist of a few keywords that may not make sense unless one explores the mentions within the cluster. Recently, with the introduction of AI large language models, such as GPT-4, new techniques can be realised for topic clustering to address the aforementioned issues. Our novel approach (AI Mention Clustering) employs LLMs at its core to produce an algorithm for efficient and accurate topic clustering of web and social data. Our solution was tested on social and web data and compared to the popular existing algorithm of BERTopic, demonstrating superior resource efficiency and absolute accuracy of clustered documents. Furthermore, it produces summaries of the clusters that are easily understood by humans instead of just representative keywords. This approach enhances the productivity of social and web data researchers by providing more meaningful and interpretable results. Full article
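For readers who want a feel for the baseline style of topic clustering that the paper improves upon, here is a minimal embedding-plus-clustering sketch. It is not the proposed AI Mention Clustering method, nor the BERTopic configuration used in the paper; the embedding model name and the example posts are assumptions.

```python
# Generic embedding-plus-clustering sketch for grouping social mentions by topic;
# NOT the paper's LLM-based AI Mention Clustering pipeline.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

posts = [
    "Battery life on the new phone is excellent.",
    "Customer support never answered my ticket.",
    "The camera got noticeably worse after the latest update.",
    "Support chat kept me waiting for over an hour.",
]

embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(posts)  # assumed model
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

for cluster in sorted(set(labels)):
    members = [p for p, l in zip(posts, labels) if l == cluster]
    print(f"Cluster {cluster}: {members}")
```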

22 pages, 2490 KiB  
Article
Developing a Crowdsourcing Digital Repository for Natural and Cultural Heritage Preservation and Promotion: A Report on the Experience in Zakynthos Island (Greece)
by Stergios Palamas, Yorghos Voutos, Katerina Kabassi and Phivos Mylonas
Computers 2025, 14(3), 108; https://doi.org/10.3390/computers14030108 - 17 Mar 2025
Viewed by 616
Abstract
The present study discusses the design and development of a digital repository for the preservation and dissemination of the cultural and natural heritage of Zakynthos Island (Greece). Following a crowdsourcing approach, the platform allows users to actively contribute to its content while aiming to integrate scattered information from other relative initiatives. The platform is based on a popular Content Management System (CMS) to provide the core functionality, extended with the use of the CMS’s API to provide additional, personalized functionality for end-users, such as organizing content into thematic routes. The system also features a web application, mainly targeting users visiting the island of Zakynthos, and is developed exclusively with open web technologies and JavaScript frameworks. The web application is an alternative, map-centered, mobile-optimized front-end for the platform’s content featured in the CMS. A RESTful API is also provided, allowing integration with third-party systems and web applications, thereby expanding the repository’s reach and capabilities. Content delivery is personalized, based on users’ profiles, location, and preferences, enhancing engagement and usability. By integrating these features, the repository effectively preserves and makes accessible the unique cultural and natural heritage of Zakynthos to both local and global audiences. Full article

24 pages, 836 KiB  
Article
Fuzzy Memory Networks and Contextual Schemas: Enhancing ChatGPT Responses in a Personalized Educational System
by Christos Troussas, Akrivi Krouska, Phivos Mylonas, Cleo Sgouropoulou and Ioannis Voyiatzis
Computers 2025, 14(3), 89; https://doi.org/10.3390/computers14030089 - 4 Mar 2025
Viewed by 1030
Abstract
Educational AI systems often lack sufficiently sophisticated techniques to enhance learner interactions, organize contextual knowledge or deliver personalized feedback. To address this gap, this paper seeks to reform the way ChatGPT supports learners by employing fuzzy memory retention and thematic clustering. To achieve this, three modules have been developed: (a) the Fuzzy Memory Module, which models human memory retention using time-decay fuzzy weights to assign relevance to user interactions; (b) the Schema Manager, which then organizes these prioritized interactions into thematic clusters for structured contextual representation; and (c) the Response Generator, which uses the output of the other two modules to synthesize personalized responses through ChatGPT. The synergy of these three modules is a novel approach to intelligent AI tutoring that enhances the output of ChatGPT to learners for a more personalized learning experience. The system was evaluated by 120 undergraduate students in a Java programming course, and the results are very promising in terms of memory retrieval accuracy, schema relevance and personalized response quality. The results also show the system outperforms traditional methods in delivering adaptive and contextually enriched educational feedback. Full article
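To make the idea of time-decay relevance concrete, the sketch below ranks past learner interactions by an exponential retention weight. The half-life value and the example queries are assumptions; the paper's actual fuzzy membership functions are not reproduced here.

```python
# Hedged sketch of time-decay relevance weighting for past learner interactions,
# in the spirit of the Fuzzy Memory Module; decay constant and data are illustrative.
import math

DECAY_HALF_LIFE_H = 48.0  # assumed half-life in hours


def retention_weight(age_hours: float) -> float:
    """Exponential decay in (0, 1]; older interactions receive lower relevance."""
    return math.exp(-math.log(2) * age_hours / DECAY_HALF_LIFE_H)


interactions = [  # (hours since the interaction, learner query)
    (2.0, "How do I declare an ArrayList in Java?"),
    (30.0, "What does a NullPointerException mean?"),
    (200.0, "What is the difference between == and equals()?"),
]

ranked = sorted(interactions, key=lambda it: retention_weight(it[0]), reverse=True)
for age, query in ranked:
    print(f"weight={retention_weight(age):.2f}  {query}")
```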

30 pages, 1844 KiB  
Article
Exploring Machine Learning Methods for Aflatoxin M1 Prediction in Jordanian Breast Milk Samples
by Abdullah Aref, Eman Omar, Eman Alseidi, Nour Elhuda A. Alqudah and Sharaf Omar
Computers 2024, 13(11), 288; https://doi.org/10.3390/computers13110288 - 7 Nov 2024
Cited by 1 | Viewed by 1204
Abstract
The presence of aflatoxin M1 in breast milk poses a serious risk to the health of infants because of its potential to cause cancer and have negative effects on development. Detecting AFM1 in milk samples using conventional methods is often time-consuming and may not provide real-time monitoring capabilities. The use of machine learning techniques to forecast aflatoxin M1 levels in breast milk samples is examined in this study. To develop predictive models of aflatoxin M1 in breast milk, we employed well-known supervised machine learning algorithms such as Random Forest and Gradient Boosting. The findings show that machine learning can be used for the identification of aflatoxin M1 in breast milk. By actively monitoring breast milk quality, this research highlights the significance of machine learning in protecting babies’ health and advances predictive capabilities in food safety. Full article
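The sketch below fits the two model families named in the abstract to synthetic tabular data, purely to illustrate the workflow; the features, target, and printed scores have no relation to the Jordanian samples.

```python
# Illustrative workflow on synthetic data (not the study's dataset): fit Random
# Forest and Gradient Boosting regressors to predict an AFM1-like concentration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))  # e.g., diet, storage, and demographic features
y = 0.05 + 0.02 * X[:, 0] - 0.01 * X[:, 3] + rng.normal(scale=0.005, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (RandomForestRegressor(random_state=0),
              GradientBoostingRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__, "MAE:", round(mean_absolute_error(y_te, pred), 4))
```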

16 pages, 2121 KiB  
Article
Enhancement of Named Entity Recognition in Low-Resource Languages with Data Augmentation and BERT Models: A Case Study on Urdu
by Fida Ullah, Alexander Gelbukh, Muhammad Tayyab Zamir, Edgardo Manuel Felipe Riverόn and Grigori Sidorov
Computers 2024, 13(10), 258; https://doi.org/10.3390/computers13100258 - 10 Oct 2024
Cited by 2 | Viewed by 2324
Abstract
Identifying and categorizing proper nouns in text, known as named entity recognition (NER), is crucial for various natural language processing tasks. However, developing effective NER techniques for low-resource languages like Urdu poses challenges due to limited training data, particularly in the Nastaliq script. To address this, our study introduces a novel data augmentation method, “contextual word embeddings augmentation” (CWEA), for Urdu, aiming to enrich existing datasets. The extended dataset, comprising 160,132 tokens and 114,912 labeled entities, significantly enhances the coverage of named entities compared to previous datasets. We evaluated several transformer models on this augmented dataset, including BERT-multilingual, RoBERTa-Urdu-small, BERT-base-cased, and BERT-large-cased. Notably, the BERT-multilingual model outperformed others, achieving the highest macro F1 score of 0.982. This surpassed the macro F1 scores of the RoBERTa-Urdu-small (0.884), BERT-large-cased (0.916), and BERT-base-cased (0.908) models. Additionally, our neural network model achieved a micro F1 score of 96%, while the RNN model achieved 97% and the BiLSTM model achieved a macro F1 score of 96% on augmented data. Our findings underscore the efficacy of data augmentation techniques in enhancing NER performance for low-resource languages like Urdu. Full article
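Because the abstract reports both macro and micro F1, a small worked example of the difference may help; the tag sequences below are invented for illustration and are unrelated to the Urdu dataset.

```python
# Worked example of macro vs. micro F1 over NER labels (illustrative tags only).
from sklearn.metrics import f1_score

gold = ["PER", "PER", "LOC", "ORG", "O", "O", "LOC", "ORG"]
pred = ["PER", "O", "LOC", "ORG", "O", "LOC", "LOC", "ORG"]

entity_labels = ["PER", "LOC", "ORG"]  # exclude the "O" (outside) tag
print("macro F1:", round(f1_score(gold, pred, labels=entity_labels, average="macro"), 3))
print("micro F1:", round(f1_score(gold, pred, labels=entity_labels, average="micro"), 3))
```

Macro F1 averages the per-class scores equally, while micro F1 pools all decisions, which is why the two can diverge on imbalanced entity classes.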

19 pages, 1495 KiB  
Article
Deep Learning for Predicting Attrition Rate in Open and Distance Learning (ODL) Institutions
by Juliana Ngozi Ndunagu, David Opeoluwa Oyewola, Farida Shehu Garki, Jude Chukwuma Onyeakazi, Christiana Uchenna Ezeanya and Elochukwu Ukwandu
Computers 2024, 13(9), 229; https://doi.org/10.3390/computers13090229 - 11 Sep 2024
Cited by 2 | Viewed by 1465
Abstract
Student enrollment is a vital aspect of educational institutions, encompassing active, registered and graduate students. However, some students fail to engage with their studies after admission and drop out along the way; this is known as attrition. The student attrition rate is acknowledged as the most complicated and significant problem facing educational systems and is caused by institutional and non-institutional challenges. In this study, the researchers utilized a dataset obtained from the National Open University of Nigeria (NOUN) from 2012 to 2022, which included comprehensive information about students enrolled in various programs at the university who were inactive and had dropped out. The researchers used deep learning techniques, such as the Long Short-Term Memory (LSTM) model, and compared its performance with the One-Dimensional Convolutional Neural Network (1DCNN) model. The results of this study revealed that the LSTM model achieved an overall accuracy of 57.29% on the training data, while the 1DCNN model exhibited a lower accuracy of 49.91%. The LSTM indicated a superior correct classification rate compared to the 1DCNN model. Full article
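As a rough illustration of the sequence-model setup described above, the sketch below trains a small Keras LSTM on synthetic per-semester activity sequences; the shapes, features, and labels are assumptions and do not reflect the NOUN data.

```python
# Hedged sketch of an LSTM attrition classifier on synthetic activity sequences;
# layer sizes and features are illustrative, not those of the NOUN study.
import numpy as np
import tensorflow as tf

n_students, n_semesters, n_features = 500, 8, 4
X = np.random.rand(n_students, n_semesters, n_features).astype("float32")
y = (X[:, :, 0].mean(axis=1) < 0.4).astype("float32")  # toy "dropped out" label

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_semesters, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print("training accuracy:", round(model.evaluate(X, y, verbose=0)[1], 3))
```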

21 pages, 4836 KiB  
Article
Chef Dalle: Transforming Cooking with Multi-Model Multimodal AI
by Brendan Hannon, Yulia Kumar, J. Jenny Li and Patricia Morreale
Computers 2024, 13(7), 156; https://doi.org/10.3390/computers13070156 - 21 Jun 2024
Cited by 5 | Viewed by 5466
Abstract
In an era where dietary habits significantly impact health, technological interventions can offer personalized and accessible food choices. This paper introduces Chef Dalle, a recipe recommendation system that leverages multi-model and multimodal human-computer interaction (HCI) techniques to provide personalized cooking guidance. The application integrates voice-to-text conversion via Whisper and ingredient image recognition through GPT-Vision. It employs an advanced recipe filtering system that utilizes user-provided ingredients to fetch recipes, which are then evaluated through multi-model AI through integrations of OpenAI, Google Gemini, Claude, and/or Anthropic APIs to deliver highly personalized recommendations. These methods enable users to interact with the system using voice, text, or images, accommodating various dietary restrictions and preferences. Furthermore, the utilization of DALL-E 3 for generating recipe images enhances user engagement. User feedback mechanisms allow for the refinement of future recommendations, demonstrating the system’s adaptability. Chef Dalle showcases potential applications ranging from home kitchens to grocery stores and restaurant menu customization, addressing accessibility and promoting healthier eating habits. This paper underscores the significance of multimodal HCI in enhancing culinary experiences, setting a precedent for future developments in the field. Full article
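To show the flavour of the recipe-recommendation step without reproducing Chef Dalle itself, here is a minimal text-only sketch against the OpenAI chat API; the model name and prompt are assumptions, and the voice, image, and DALL-E components described in the abstract are omitted.

```python
# Minimal text-only sketch of an ingredient-to-recipe prompt via the OpenAI API;
# NOT the Chef Dalle pipeline (no Whisper voice input, image recognition, or DALL-E).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
ingredients = ["chickpeas", "spinach", "garlic", "lemon"]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[{
        "role": "user",
        "content": "Suggest one vegetarian recipe using only: " + ", ".join(ingredients),
    }],
)
print(response.choices[0].message.content)
```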

9 pages, 275 KiB  
Article
Mitigating Large Language Model Bias: Automated Dataset Augmentation and Prejudice Quantification
by Devam Mondal and Carlo Lipizzi
Computers 2024, 13(6), 141; https://doi.org/10.3390/computers13060141 - 4 Jun 2024
Cited by 2 | Viewed by 2751
Abstract
Despite the growing capabilities of large language models, concerns exist about the biases they develop. In this paper, we propose a novel, automated mechanism for debiasing through specified dataset augmentation in the lens of bias producers that can be useful in a variety of industries, especially ones that are “restricted” and have limited data. We consider that bias can occur due to intrinsic model architecture and dataset quality. The two aspects are evaluated using two different metrics we created. We show that our dataset augmentation algorithm reduces bias as measured by our metrics. Our code can be found on an online GitHub repository. Full article

27 pages, 4827 KiB  
Article
Machine Learning-Based Crop Yield Prediction in South India: Performance Analysis of Various Models
by Uppugunduri Vijay Nikhil, Athiya M. Pandiyan, S. P. Raja and Zoran Stamenkovic
Computers 2024, 13(6), 137; https://doi.org/10.3390/computers13060137 - 29 May 2024
Cited by 15 | Viewed by 6243
Abstract
Agriculture is one of the most important activities, producing crops and food that are crucial for human sustenance. In the present day, agricultural products and crops are not only used for local demand, but globalization has allowed us to export produce to other countries and import from other countries. India is an agricultural nation and depends a lot on its agricultural activities. Prediction of crop production and yield is a necessary activity that allows farmers to estimate storage, optimize resources, increase efficiency and decrease costs. However, farmers usually predict crops based on the region, soil, weather conditions and the crop itself, relying on experience and estimates which may not be very accurate, especially with the constantly changing and unpredictable climatic conditions of the present day. To solve this problem, we aim to predict the production and yield of various crops such as rice, sorghum, cotton, sugarcane and rabi using Machine Learning (ML) models. We train these models with the weather, soil and crop data to predict future crop production and yields of these crops. We have compiled a dataset of attributes that impact crop production and yield from specific states in India and performed a comprehensive study of the performance of various ML Regression Models in predicting crop production and yield. The results indicated that the Extra Trees Regressor achieved the highest performance among the models examined. It attained an R-Squared score of 0.9615 and showed the lowest Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE), at 21.06 and 33.99. Following closely behind are the Random Forest Regressor and LGBM Regressor, achieving R-Squared scores of 0.9437 and 0.9398 respectively. Moreover, additional analysis revealed that tree-based models, showing an R-Squared score of 0.9353, demonstrate better performance compared to linear and neighbors-based models, which achieved R-Squared scores of 0.8568 and 0.9002 respectively. Full article
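For orientation, the sketch below reruns the core comparison pattern (Extra Trees vs. Random Forest, scored with R-squared, MAE, and RMSE) on synthetic regression data; it does not use the paper's Indian crop dataset, and the numbers it prints only illustrate the workflow.

```python
# Illustrative comparison of tree-based regressors on synthetic data (not the
# paper's crop dataset), scored with the metrics reported in the abstract.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=800, n_features=10, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (ExtraTreesRegressor(random_state=0), RandomForestRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = float(np.sqrt(mean_squared_error(y_te, pred)))
    print(f"{type(model).__name__}: R2={r2_score(y_te, pred):.3f}  "
          f"MAE={mean_absolute_error(y_te, pred):.2f}  RMSE={rmse:.2f}")
```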

27 pages, 2040 KiB  
Article
Machine Learning Decision System on the Empirical Analysis of the Actual Usage of Interactive Entertainment: A Perspective of Sustainable Innovative Technology
by Rex Revian A. Guste and Ardvin Kester S. Ong
Computers 2024, 13(6), 128; https://doi.org/10.3390/computers13060128 - 23 May 2024
Cited by 2 | Viewed by 2469
Abstract
This study focused on the impact of Netflix’s interactive entertainment on Filipino consumers, seamlessly combining vantage points from consumer behavior and employing data analytics. This underlines the revolutionary aspect of interactive entertainment in the quickly expanding digital media ecosystem, particularly as Netflix pioneers fresh content distribution techniques. The main objective of this study was to find the factors impacting the real usage of Netflix’s interactive entertainment among Filipino viewers, filling a critical gap in the existing literature. The major goal of using advanced data analytics techniques in this study was to understand the subtle dynamics affecting customer behavior in this setting. Specifically, a random forest classifier with hard and soft voting was assessed. The random forest was also compared with LightGBM, alongside different artificial neural network algorithms. Purposive sampling was used to obtain responses from 258 people who had experienced Netflix’s interactive entertainment, resulting in a comprehensive dataset. The findings emphasized the importance of hedonic motivation, underlining the requirement for highly engaging and rewarding interactive material. Customer service and device compatibility, for example, have a significant impact on user uptake. Furthermore, behavioral intention and habit emerged as key drivers, revealing interactive entertainment’s long-term influence on user engagement. Practically, the research recommends strategic platform suggestions that emphasize continuous innovation, user-friendly interfaces, and user-centric methods. This study was able to fill in the gap in the literature on interactive entertainment, which contributes to a better understanding of consumer consumption and lays the groundwork for future research in the dynamic field of digital media. Moreover, this study offers essential insights into the intricate interaction of consumer preferences, technology breakthroughs, and societal influences in the ever-expanding environment of digital entertainment. Lastly, the comparative approach to the use of machine learning algorithms provides insights for future works to adopt and employ among human factors and consumer behavior-related studies. Full article
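The hard-versus-soft comparison mentioned in the abstract is easiest to see with scikit-learn's voting ensemble; the sketch below uses synthetic data of the same sample size as the survey (258 responses) and illustrative base learners, not the study's actual features or models.

```python
# Hedged sketch of hard vs. soft voting ensembles on synthetic data; the base
# learners and features are illustrative, not the study's questionnaire variables.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=258, n_features=12, random_state=0)
base_learners = [("rf", RandomForestClassifier(random_state=0)),
                 ("lr", LogisticRegression(max_iter=1000))]

for voting in ("hard", "soft"):
    clf = VotingClassifier(estimators=base_learners, voting=voting)
    print(f"{voting} voting, 5-fold accuracy: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```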

14 pages, 6899 KiB  
Article
A Hybrid Deep Learning Architecture for Apple Foliar Disease Detection
by Adnane Ait Nasser and Moulay A. Akhloufi
Computers 2024, 13(5), 116; https://doi.org/10.3390/computers13050116 - 7 May 2024
Cited by 4 | Viewed by 2333
Abstract
Incorrectly diagnosing plant diseases can lead to various undesirable outcomes. This includes the potential for the misuse of unsuitable herbicides, resulting in harm to both plants and the environment. Examining plant diseases visually is a complex and challenging procedure that demands considerable time and resources. Moreover, it necessitates keen observational skills from agronomists and plant pathologists. Precise identification of plant diseases is crucial to enhance crop yields, ultimately guaranteeing the quality and quantity of production. The latest progress in deep learning (DL) models has demonstrated encouraging outcomes in the identification and classification of plant diseases. In the context of this study, we introduce a novel hybrid deep learning architecture named “CTPlantNet”. This architecture employs convolutional neural network (CNN) models and a vision transformer model to efficiently classify plant foliar diseases, contributing to the advancement of disease classification methods in the field of plant pathology research. This study utilizes two open-access datasets. The first one is the Plant Pathology 2020-FGVC-7 dataset, comprising a total of 3526 images depicting apple leaves and divided into four distinct classes: healthy, scab, rust, and multiple. The second dataset is Plant Pathology 2021-FGVC-8, containing 18,632 images classified into six categories: healthy, scab, rust, powdery mildew, frog eye spot, and complex. The proposed architecture demonstrated remarkable performance across both datasets, outperforming state-of-the-art models with an accuracy (ACC) of 98.28% for Plant Pathology 2020-FGVC-7 and 95.96% for Plant Pathology 2021-FGVC-8. Full article
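A rough sketch of the CNN-plus-vision-transformer fusion idea is given below; the backbones (ResNet-18 and ViT-B/16 from torchvision), feature sizes, and class count are assumptions and do not describe the actual CTPlantNet architecture.

```python
# Rough sketch of CNN + vision-transformer feature fusion for leaf-disease
# classification; backbone choices and dimensions are assumed, not CTPlantNet's.
import torch
import torch.nn as nn
from torchvision import models


class HybridLeafClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.cnn = models.resnet18(weights=None)
        self.cnn.fc = nn.Identity()      # 512-d CNN features
        self.vit = models.vit_b_16(weights=None)
        self.vit.heads = nn.Identity()   # 768-d transformer features
        self.classifier = nn.Linear(512 + 768, num_classes)

    def forward(self, x):
        return self.classifier(torch.cat([self.cnn(x), self.vit(x)], dim=1))


model = HybridLeafClassifier(num_classes=4)   # e.g., healthy, scab, rust, multiple
logits = model(torch.randn(2, 3, 224, 224))   # ViT-B/16 expects 224x224 inputs
print(logits.shape)                           # torch.Size([2, 4])
```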

20 pages, 601 KiB  
Article
Harnessing Machine Learning to Unveil Emotional Responses to Hateful Content on Social Media
by Ali Louati, Hassen Louati, Abdullah Albanyan, Rahma Lahyani, Elham Kariri and Abdulrahman Alabduljabbar
Computers 2024, 13(5), 114; https://doi.org/10.3390/computers13050114 - 29 Apr 2024
Cited by 2 | Viewed by 2409
Abstract
Within the dynamic realm of social media, the proliferation of harmful content can significantly influence user engagement and emotional health. This study presents an in-depth analysis that bridges diverse domains, from examining the aftereffects of personal online attacks to the intricacies of online trolling. By leveraging an AI-driven framework, we systematically implemented high-precision attack detection, psycholinguistic feature extraction, and sentiment analysis algorithms, each tailored to the unique linguistic contexts found within user-generated content on platforms like Reddit. Our dataset, which spans a comprehensive spectrum of social media interactions, underwent rigorous analysis employing classical statistical methods, Bayesian estimation, and model-theoretic analysis. This multi-pronged methodological approach allowed us to chart the complex emotional responses of users subjected to online negativity, covering a spectrum from harassment and cyberbullying to subtle forms of trolling. Empirical results from our study reveal a clear dose–response effect; personal attacks are quantifiably linked to declines in user activity, with our data indicating a 5% reduction after 1–2 attacks, 15% after 3–5 attacks, and 25% after 6–10 attacks, demonstrating the significant deterring effect of such negative encounters. Moreover, sentiment analysis unveiled the intricate emotional reactions users have to these interactions, further emphasizing the potential for AI-driven methodologies to promote more inclusive and supportive digital communities. This research underscores the critical need for interdisciplinary approaches in understanding social media’s complex dynamics and sheds light on significant insights relevant to the development of regulation policies, the formation of community guidelines, and the creation of AI tools tailored to detect and counteract harmful content. The goal is to mitigate the impact of such content on user emotions and ensure the healthy engagement of users in online spaces. Full article
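As a stand-in for the sentiment component of the analysis described above, the sketch below scores a few example replies with an off-the-shelf transformers pipeline; it is not the paper's tailored attack-detection or psycholinguistic tooling, and the replies are invented.

```python
# Off-the-shelf sentiment scoring of example replies; a generic stand-in, not the
# paper's tailored attack-detection and psycholinguistic feature pipeline.
from transformers import pipeline

replies_after_attack = [
    "I don't feel like posting here anymore.",
    "Thanks everyone for the support, staying positive.",
    "Why do people have to be so cruel online?",
]

sentiment = pipeline("sentiment-analysis")  # downloads a default English model
for reply, result in zip(replies_after_attack, sentiment(replies_after_attack)):
    print(f"{result['label']:<9} {result['score']:.2f}  {reply}")
```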

14 pages, 1352 KiB  
Article
MTL-AraBERT: An Enhanced Multi-Task Learning Model for Arabic Aspect-Based Sentiment Analysis
by Arwa Fadel, Mostafa Saleh, Reda Salama and Osama Abulnaja
Computers 2024, 13(4), 98; https://doi.org/10.3390/computers13040098 - 15 Apr 2024
Cited by 4 | Viewed by 2520
Abstract
Aspect-based sentiment analysis (ABSA) is a fine-grained type of sentiment analysis; it works on an aspect level. It mainly focuses on extracting aspect terms from text or reviews, categorizing the aspect terms, and classifying the sentiment polarities toward each aspect term and aspect category. Aspect term extraction (ATE) and aspect category detection (ACD) are interdependent and closely associated tasks. However, the majority of the current literature on Arabic aspect-based sentiment analysis (ABSA) deals with these tasks individually, assumes that aspect terms are already identified, or employs a pipeline model. Pipeline solutions employ single models for each task, where the output of the ATE model is utilized as the input for the ACD model. This sequential process can lead to the propagation of errors across different stages, as the performance of the ACD model is influenced by any errors produced by the ATE model. Therefore, the primary objective of this study was to investigate a multi-task learning approach based on transfer learning and transformers. We propose a multi-task learning model (MTL) that utilizes the pre-trained language model (AraBERT), namely, the MTL-AraBERT model, for extracting Arabic aspect terms and aspect categories simultaneously. Specifically, we focused on training a single model that simultaneously and jointly addressed both subtasks. Moreover, this paper also proposes a model integrating AraBERT, single pair classification, and BiLSTM/BiGRU that can be applied to aspect term polarity classification (APC) and aspect category polarity classification (ACPC). All proposed models were evaluated using the SemEval-2016 annotated Arabic hotel dataset. The experiment results of the MTL model demonstrate that the proposed models achieved comparable or better performance than state-of-the-art works (F1-scores of 80.32% for the ATE and 68.21% for the ACD). The proposed SPC-BERT model demonstrated high accuracy, reaching 89.02% and 89.36% for APC and ACPC, respectively. These improvements hold significant potential for future research in Arabic ABSA. Full article
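To illustrate the shared-encoder multi-task idea in code, the sketch below attaches a token-level aspect-term head and a sentence-level aspect-category head to one transformer encoder; the checkpoint id, label counts, and example sentence are assumptions rather than the authors' exact MTL-AraBERT configuration.

```python
# Hedged sketch of a shared-encoder multi-task head for ATE (token tags) and ACD
# (sentence-level categories); checkpoint id and label sizes are assumptions.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "aubmindlab/bert-base-arabertv02"  # assumed AraBERT checkpoint id


class MultiTaskABSA(nn.Module):
    def __init__(self, n_bio_tags: int = 3, n_categories: int = 12):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(MODEL_NAME)
        hidden = self.encoder.config.hidden_size
        self.ate_head = nn.Linear(hidden, n_bio_tags)    # B/I/O aspect-term tags
        self.acd_head = nn.Linear(hidden, n_categories)  # aspect-category logits

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        token_logits = self.ate_head(out.last_hidden_state)           # per-token ATE
        sentence_logits = self.acd_head(out.last_hidden_state[:, 0])  # [CLS] for ACD
        return token_logits, sentence_logits


tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
batch = tokenizer(["الفندق نظيف والخدمة ممتازة"], return_tensors="pt")
ate_logits, acd_logits = MultiTaskABSA()(batch["input_ids"], batch["attention_mask"])
print(ate_logits.shape, acd_logits.shape)  # (1, seq_len, 3) and (1, 12)
```

During training, the two heads would typically be optimized jointly with a summed cross-entropy loss, which is the essence of the multi-task setup.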
