Information, Volume 16, Issue 5 (May 2025) – 35 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
19 pages, 842 KiB  
Article
Exploring Large Language Models’ Ability to Describe Entity-Relationship Schema-Based Conceptual Data Models
by Andrea Avignone, Alessia Tierno, Alessandro Fiori and Silvia Chiusano
Information 2025, 16(5), 368; https://doi.org/10.3390/info16050368 - 29 Apr 2025
Abstract
In the field of databases, Large Language Models (LLMs) have recently been studied for generating SQL queries from textual descriptions, while their use for conceptual or logical data modeling remains less explored. The conceptual design of relational databases commonly relies on the entity-relationship (ER) data model, where translation rules enable mapping an ER schema into corresponding relational tables with their constraints. Our study investigates the capability of LLMs to describe a database conceptual data model based on the ER schema in natural language. Whether for documentation, onboarding, or communication with non-technical stakeholders, LLMs can significantly improve the process of explaining the ER schema by generating accurate descriptions of how its components interact and of the information they represent. To guide the LLM through challenging constructs, specific hints are defined to provide an enriched ER schema. Different LLMs were explored (ChatGPT 3.5 and 4, Llama2, Gemini, Mistral 7B), and different metrics (F1 score, ROUGE, perplexity) were used to assess the quality of the generated descriptions and compare the models. Full article
26 pages, 3283 KiB  
Article
Artificial Intelligence-Based Prediction Model for Maritime Vessel Type Identification
by Hrvoje Karna, Maja Braović, Anita Gudelj and Kristian Buličić
Information 2025, 16(5), 367; https://doi.org/10.3390/info16050367 - 29 Apr 2025
Abstract
This paper presents an artificial intelligence-based model for the classification of maritime vessel images obtained by cameras operating in the visible part of the electromagnetic spectrum. It incorporates both deep learning techniques for initial image representation and traditional image processing and machine learning methods for subsequent image classification. The presented model is therefore a hybrid approach that uses the Inception v3 deep learning model for image vectorization and a combination of SVM, kNN, logistic regression, Naïve Bayes, neural network, and decision tree algorithms for final image classification. The model is trained and tested on a custom dataset consisting of a total of 2915 images of maritime vessels. These images were split into three subsets: training (2444 images), validation (271 images), and testing (200 images). The images themselves encompassed 11 distinct classes: cargo, container, cruise, fishing, military, passenger, pleasure, sailing, special, tanker, and non-class (objects that can be encountered at sea but do not represent maritime vessels). The presented model accurately classified 86.5% of the images used for training purposes and therefore demonstrated how a relatively straightforward model can still achieve high accuracy and potentially be useful in real-world operational environments aimed at sea surveillance and automatic situational awareness at sea. Full article
(This article belongs to the Section Artificial Intelligence)
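The hybrid pipeline described above (a pretrained CNN for image vectorization, then a classical classifier) can be sketched as follows. This is a minimal illustration, not the authors' code: the Inception v3 embedding step is replaced by synthetic 2048-dimensional vectors drawn from class-specific Gaussians, and only one of the paper's classifiers (kNN) is implemented, directly in NumPy:

```python
import numpy as np

# Synthetic stand-in for Inception v3 embeddings: three classes, each a
# Gaussian cloud around its own center in 2048-dimensional feature space.
rng = np.random.default_rng(0)
n_classes, dim = 3, 2048
centers = rng.normal(size=(n_classes, dim))

def make_split(n_per_class):
    X = np.vstack([c + 0.5 * rng.normal(size=(n_per_class, dim)) for c in centers])
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y

X_train, y_train = make_split(50)
X_test, y_test = make_split(10)

def knn_predict(X_train, y_train, X, k=5):
    # Euclidean distance from each query embedding to all training embeddings.
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = y_train[nearest]
    # Majority vote among the k nearest neighbours.
    return np.array([np.bincount(v, minlength=n_classes).argmax() for v in votes])

pred = knn_predict(X_train, y_train, X_test)
accuracy = (pred == y_test).mean()
```

In the real pipeline the embeddings would come from a forward pass through Inception v3 with its classification head removed, and several classical classifiers would be compared on the same vectors.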
26 pages, 1345 KiB  
Article
Benchmarking 21 Open-Source Large Language Models for Phishing Link Detection with Prompt Engineering
by Arbi Haza Nasution, Winda Monika, Aytug Onan and Yohei Murakami
Information 2025, 16(5), 366; https://doi.org/10.3390/info16050366 - 29 Apr 2025
Abstract
Phishing URL detection is critical due to the severe cybersecurity threats posed by phishing attacks. While traditional methods rely heavily on handcrafted features and supervised machine learning, recent advances in large language models (LLMs) provide promising alternatives. This paper presents a comprehensive benchmarking study of 21 state-of-the-art open-source LLMs—including Llama3, Gemma, Qwen, Phi, DeepSeek, and Mistral—for phishing URL detection. We evaluate four key prompt engineering techniques—zero-shot, role-playing, chain-of-thought, and few-shot prompting—using a balanced, publicly available phishing URL dataset, with no fine-tuning or additional training of the models conducted, reinforcing the zero-shot, prompt-based nature as a distinctive aspect of our study. The results demonstrate that large open-source LLMs (≥27B parameters) achieve performance exceeding 90% F1-score without fine-tuning, closely matching proprietary models. Among the prompt strategies, few-shot prompting consistently delivers the highest accuracy (91.24% F1 with Llama3.3_70b), whereas chain-of-thought significantly lowers accuracy and increases inference time. Additionally, our analysis highlights smaller models (7B–27B parameters) offering strong performance with substantially reduced computational costs. This study underscores the practical potential of open-source LLMs for phishing detection and provides insights for effective prompt engineering in cybersecurity applications. Full article
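Two of the four prompt strategies evaluated (zero-shot and few-shot) can be illustrated with minimal templates. The wording and example URLs below are hypothetical and not the prompts used in the study:

```python
# Hypothetical few-shot examples; in the study these would be drawn from
# the labeled phishing URL dataset.
FEW_SHOT_EXAMPLES = [
    ("http://paypa1-secure-login.example.com/verify", "phishing"),
    ("https://www.wikipedia.org/", "legitimate"),
]

def zero_shot_prompt(url: str) -> str:
    # Zero-shot: the task description alone, no labeled examples.
    return (
        "Classify the following URL as 'phishing' or 'legitimate'. "
        "Answer with a single word.\n"
        f"URL: {url}\nAnswer:"
    )

def few_shot_prompt(url: str) -> str:
    # Few-shot: prepend labeled demonstrations before the query URL.
    shots = "\n".join(f"URL: {u}\nAnswer: {label}" for u, label in FEW_SHOT_EXAMPLES)
    return (
        "Classify each URL as 'phishing' or 'legitimate'. "
        "Answer with a single word.\n"
        f"{shots}\nURL: {url}\nAnswer:"
    )

prompt = few_shot_prompt("http://login-update.example.net/account")
```

The resulting strings would be sent unchanged to each open-source model, with no fine-tuning, and the single-word completion parsed as the predicted label.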
35 pages, 1615 KiB  
Article
Toward Robust Security Orchestration and Automated Response in Security Operations Centers with a Hyper-Automation Approach Using Agentic Artificial Intelligence
by Ismail, Rahmat Kurnia, Zilmas Arjuna Brata, Ghitha Afina Nelistiani, Shinwook Heo, Hyeongon Kim and Howon Kim
Information 2025, 16(5), 365; https://doi.org/10.3390/info16050365 - 29 Apr 2025
Abstract
The evolving landscape of cybersecurity threats demands the modernization of Security Operations Centers (SOCs) to enhance threat detection, response, and mitigation. Security Orchestration, Automation, and Response (SOAR) platforms play a crucial role in addressing operational inefficiencies; however, traditional no-code SOAR solutions face significant limitations, including restricted flexibility, scalability challenges, inadequate support for advanced logic, and difficulties in managing large playbooks. These constraints hinder effective automation, reduce adaptability, and underutilize analysts’ technical expertise, underscoring the need for more sophisticated solutions. To address these challenges, we propose a hyper-automation SOAR platform powered by agentic-LLM, leveraging Large Language Models (LLMs) to optimize automation workflows. This approach shifts from rigid no-code playbooks to AI-generated code, providing a more flexible and scalable alternative while reducing operational complexity. Additionally, we introduce the IVAM framework, comprising three critical stages: (1) Investigation, structuring incident response into actionable steps based on tailored recommendations, (2) Validation, ensuring the accuracy and effectiveness of executed actions, (3) Active Monitoring, providing continuous oversight. By integrating AI-driven automation with the IVAM framework, our solution enhances investigation quality, improves response accuracy, and increases SOC efficiency in addressing modern cybersecurity threats. Full article
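The three IVAM stages can be sketched as a minimal control flow. This is a hypothetical skeleton only: in the platform described above, these stages would be driven by LLM-generated code, which is stubbed out here with fixed return values:

```python
# Hypothetical IVAM skeleton; stage bodies are stubs, not the platform's logic.
def investigate(incident: dict) -> list[str]:
    # Stage 1 (Investigation): turn an incident into ordered, actionable steps.
    return [f"isolate host {incident['host']}", "collect process list", "block IOC"]

def validate(step: str) -> bool:
    # Stage 2 (Validation): confirm an executed action had the intended effect
    # (trivially accepted in this sketch).
    return True

def active_monitoring(incident: dict, executed: list[str]) -> dict:
    # Stage 3 (Active Monitoring): keep continuous oversight of the asset.
    return {"host": incident["host"], "watch": True, "actions": executed}

incident = {"id": "INC-1", "host": "10.0.0.5"}
executed = [s for s in investigate(incident) if validate(s)]
status = active_monitoring(incident, executed)
```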
20 pages, 1902 KiB  
Article
Distantly Supervised Relation Extraction Method Based on Multi-Level Hierarchical Attention
by Zhaoxin Xuan, Hejing Zhao, Xin Li and Ziqi Chen
Information 2025, 16(5), 364; https://doi.org/10.3390/info16050364 - 29 Apr 2025
Abstract
Distantly Supervised Relation Extraction (DSRE) aims to automatically identify semantic relationships within large text corpora by aligning with external knowledge bases. Despite the success of current methods in automating data annotation, they introduce two main challenges: label noise and data long-tail distribution. Label noise results in inaccurate annotations, which can undermine the quality of relation extraction. The long-tail problem, on the other hand, leads to an imbalanced model that struggles to extract less frequent, long-tail relations. In this paper, we introduce a novel relation extraction framework based on multi-level hierarchical attention. This approach utilizes Graph Attention Networks (GATs) to model the hierarchical structure of the relations, capturing the semantic dependencies between relation types and generating relation embeddings that reflect the overall hierarchical framework. To improve the classification process, we incorporate a multi-level classification structure guided by hierarchical attention, which enhances the accuracy of both head and tail relation extraction. A local probability constraint is introduced to ensure coherence across the classification levels, fostering knowledge transfer from frequent to less frequent relations. Experimental evaluations on the New York Times (NYT) dataset demonstrate that our method outperforms existing baselines, particularly in the context of long-tail relation extraction, offering a comprehensive solution to the challenges of DSRE. Full article
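The local probability constraint described above can be illustrated with a small numeric sketch, assuming a two-level hierarchy in which each fine-grained relation has exactly one coarse parent. The logits are made up, and the GAT-based relation embeddings are out of scope here:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical two-level relation hierarchy: 2 coarse types, 5 fine relations.
parents = np.array([0, 0, 1, 1, 1])      # parent index of each fine relation
coarse_logits = np.array([1.2, 0.3])
fine_logits = np.array([0.5, 2.0, 0.1, 0.7, 1.5])

p_coarse = softmax(coarse_logits)
# Local probability constraint: normalize fine scores within each parent's
# children, then scale by the parent's probability, so the probability mass
# at the fine level is coherent with the coarse level.
p_fine = np.empty_like(fine_logits)
for parent in range(len(coarse_logits)):
    idx = parents == parent
    p_fine[idx] = p_coarse[parent] * softmax(fine_logits[idx])
```

Because each child block sums to its parent's probability, head-relation evidence at the coarse level constrains (and can transfer to) rare tail relations at the fine level.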
24 pages, 2522 KiB  
Article
Digital Requirements for Systems Analysts in Europe: A Sectoral Analysis of Online Job Advertisements and ESCO Skills
by Konstantinos Charmanas, Konstantinos Georgiou, Nikolaos Mittas and Lefteris Angelis
Information 2025, 16(5), 363; https://doi.org/10.3390/info16050363 - 29 Apr 2025
Abstract
Systems analysts can be considered a valuable part of organizations, as their responsibilities and contributions concern the improvement of information systems, which constitute an irreplaceable part of organizations. Thus, by exploring the current labor market of systems analysts, researchers can gather valuable knowledge to understand some invaluable societal needs. In this context, the objectives of this study are to investigate the sets of digital skills from the European Skills, Competences, Qualifications, and Occupations (ESCO) taxonomy required by systems analysts in Europe and examine the key characteristics of various relevant sectors. For this purpose, a tool combining topic extraction, machine learning, and statistical analysis is utilized. The outcomes prove that systems analysts may indeed possess different types of digital skills, where 12 distinct topics are discovered, and that the professional, scientific, and technical activities demand the most unique sets of digital skills across 17 sectors. Ultimately, the findings show that the numerous sectors indeed have divergent requirements and should be approached accordingly. Overall, this study can offer valuable guidelines for identifying both the general duties of systems analysts and the specific needs of each sector. Also, the presented tool and methods may provide ideas for exploring different domains associated with content information and distinct groups. Full article
27 pages, 10552 KiB  
Article
Enhancing Dongba Pictograph Recognition Using Convolutional Neural Networks and Data Augmentation Techniques
by Shihui Li, Lan Thi Nguyen, Wirapong Chansanam, Natthakan Iam-On and Tossapon Boongoen
Information 2025, 16(5), 362; https://doi.org/10.3390/info16050362 - 29 Apr 2025
Abstract
The recognition of Dongba pictographs presents significant challenges due to the limitations of traditional feature extraction methods, the high complexity of classification algorithms, and their limited generalization ability. This study proposes a convolutional neural network (CNN)-based image classification method to enhance the accuracy and efficiency of Dongba pictograph recognition. The research begins with collecting and manually categorizing Dongba pictograph images, followed by these preprocessing steps to improve image quality: normalization, grayscale conversion, filtering, denoising, and binarization. The dataset, comprising 70,000 image samples, is categorized into 18 classes based on shape characteristics and manual annotations. A CNN model is then trained using a dataset that is split into training (70% of all samples), validation (20%), and test (10%) sets. In particular, data augmentation techniques, including rotation, affine transformation, scaling, and translation, are applied to enhance classification accuracy. Experimental results demonstrate that the proposed model achieves a classification accuracy of 99.43% and consistently outperforms other conventional methods, with its performance peaking at 99.84% under optimized training conditions (specifically, 75 training epochs and a batch size of 512). This study provides a robust and efficient solution for automatically classifying Dongba pictographs, contributing to their digital preservation and scholarly research. By leveraging deep learning techniques, the proposed approach facilitates the rapid and precise identification of Dongba hieroglyphs, supporting the ongoing efforts in cultural heritage preservation and the broader application of artificial intelligence in linguistic studies. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining: Innovations in Big Data Analytics)
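The augmentation operations mentioned above (rotation, scaling, translation) can be sketched in NumPy. These are simplified stand-ins (90-degree rotations, integer-factor nearest-neighbour scaling, zero-padded shifts) rather than the paper's exact transforms:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a 32x32 binarized pictograph image.
glyph = rng.integers(0, 2, size=(32, 32)).astype(np.float32)

def rotate90(img, k=1):
    # Rotation in 90-degree steps (a cheap stand-in for arbitrary-angle rotation).
    return np.rot90(img, k)

def translate(img, dy, dx):
    # Shift by (dy, dx) with zero padding rather than wrap-around.
    out = np.zeros_like(img)
    h, w = img.shape
    ys, xs = slice(max(dy, 0), min(h, h + dy)), slice(max(dx, 0), min(w, w + dx))
    ys0, xs0 = slice(max(-dy, 0), min(h, h - dy)), slice(max(-dx, 0), min(w, w - dx))
    out[ys, xs] = img[ys0, xs0]
    return out

def scale_nearest(img, factor):
    # Nearest-neighbour upscaling by integer repetition, cropped back to size.
    up = np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
    return up[: img.shape[0], : img.shape[1]]

augmented = [rotate90(glyph), translate(glyph, 2, -3), scale_nearest(glyph, 2)]
```

Each augmented variant keeps the original label, multiplying the effective training set without new annotation work.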
35 pages, 644 KiB  
Review
Machine Learning in Baseball Analytics: Sabermetrics and Beyond
by Wenbing Zhao, Vyaghri Seetharamayya Akella, Shunkun Yang and Xiong Luo
Information 2025, 16(5), 361; https://doi.org/10.3390/info16050361 - 29 Apr 2025
Abstract
In this article, we provide a comprehensive review of machine learning-based sports analytics in baseball. This review is primarily guided by the following three research questions: (1) What baseball analytics problems have been studied using machine learning? (2) What data repositories have been used? (3) What and how machine learning techniques have been employed for these studies? The findings of these research questions lead to several research contributions. First, we provide a taxonomy for baseball analytics problems. According to the proposed taxonomy, machine learning has been employed to (1) predict individual game plays; (2) determine player performance; (3) estimate player valuation; (4) predict future player injuries; and (5) project future game outcomes. Second, we identify a set of data repositories for baseball analytics studies. The most popular data repositories are Baseball Savant and Baseball Reference. Third, we conduct an in-depth analysis of the machine learning models applied in baseball analytics. The most popular machine learning models are random forest and support vector machine. Furthermore, only a small fraction of studies have rigorously followed the best practices in data preprocessing, machine learning model training, testing, and prediction outcome interpretation. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining: Innovations in Big Data Analytics)
35 pages, 4874 KiB  
Article
A COVID Support App Demonstrating the Use of a Rapid Persuasive System Design Approach
by Rashmi P. Payyanadan, Linda S. Angell and Amanda Zeidan
Information 2025, 16(5), 360; https://doi.org/10.3390/info16050360 - 29 Apr 2025
Abstract
Background: The persuasive systems design approach draws together theories around persuasive technology and their psychological foundations to form, alter and/or reinforce compliance, attitudes, and/or behaviors, which have been useful in building health and wellness apps. But with pandemics such as COVID and their ever-changing landscape, there is a need for such design processes to be even more time sensitive, while maintaining the inclusion of empirical evidence and rigorous testing that are the basis for the approach’s successful deployment and uptake. Objective: In response to this need, this study applied a recently developed rapid persuasive systems design (R-PSD) process to the development and testing of a COVID support app. The aim of this effort was to identify concrete steps for when and how to build new persuasion features on top of existing features in existing apps to support the changing landscape of target behaviors from COVID tracing and tracking, to long-term COVID support, information, and prevention. Methods: This study employed a two-fold approach to achieve this objective. First, a rapid persuasive systems design framework was implemented. A technology scan of current COVID apps was conducted to identify apps that had employed PSD principles, in the context of an ongoing analysis of behavioral challenges and needs that were surfacing in public health reports and other sources. Second, a test case of the R-PSD framework was implemented in the context of providing COVID support by building a COVID support app prototype. The COVID support prototype was then evaluated and tested to assess the effectiveness of the integrated approach. Results: The results of the study revealed the potential success that can be obtained from the application of the R-PSD framework to the development of rapid release apps. 
Importantly, this application provides the first concrete example of how the R-PSD framework can be operationalized to produce a functional, user-informed app under real-world time and resource constraints. Further, the persuasive design categories enabled the identification of essential persuasive features required for app development that are intended to facilitate, support, or precipitate behavior change. The small sample study facilitated the quick iteration of the app design to ensure time sensitivity and empirical evidence-based application improvements. The R-PSD approach can serve as a guided and practical design approach for future rapid release apps particularly in relation to the development of support apps for pandemics or other time-urgent community emergencies. Full article
19 pages, 1433 KiB  
Article
Optimized Deep Learning for Mammography: Augmentation and Tailored Architectures
by Syed Ibrar Hussain and Elena Toscano
Information 2025, 16(5), 359; https://doi.org/10.3390/info16050359 - 29 Apr 2025
Abstract
This paper investigates the categorization of mammogram images into benign, malignant and normal categories, providing novel approaches based on Deep Convolutional Neural Networks to the early identification and classification of breast lesions. Multiple DCNN models were tested to see how well deep learning worked for difficult, multi-class categorization problems. These models were trained on pre-processed datasets with optimized hyperparameters (e.g., the batch size, learning rate, and dropout), which increased the precision of classification. Evaluation measures such as confusion matrices, accuracy, and loss demonstrated high classification efficiency with low overfitting, and the validation results aligned well with the training. DenseNet-201 and MobileNet-V3 Large displayed significant generalization skills, whilst EfficientNetV2-B3 and NASNet Mobile struck the optimum mix of accuracy and efficiency, making them suitable for practical applications. The use of data augmentation also improved the management of data imbalances, resulting in more accurate large-scale detection. Unlike prior approaches, the combination of the architectures, pre-processing approaches, and data augmentation improved the system’s accuracy, indicating that these models are suitable for medical imaging tasks that require transfer learning. The results have shown precise and accurate classifications in dealing with class imbalances and poor dataset quality. In particular, we have not defined a new framework for computer-aided diagnosis here, but we have reviewed a variety of promising solutions for future developments in this field. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
19 pages, 670 KiB  
Article
Quantifying Gender Bias in Large Language Models Using Information-Theoretic and Statistical Analysis
by Imran Mirza, Akbar Anbar Jafari, Cagri Ozcinar and Gholamreza Anbarjafari
Information 2025, 16(5), 358; https://doi.org/10.3390/info16050358 - 29 Apr 2025
Abstract
Large language models (LLMs) have revolutionized natural language processing across diverse domains, yet they also raise critical fairness and ethical concerns, particularly regarding gender bias. In this study, we conduct a systematic, mathematically grounded investigation of gender bias in four leading LLMs—GPT-4o, Gemini 1.5 Pro, Sonnet 3.5, and LLaMA 3.1:8b—by evaluating the gender distributions produced when generating “perfect personas” for a wide range of occupational roles spanning healthcare, engineering, and professional services. Leveraging standardized prompts, controlled experimental settings, and repeated trials, our methodology quantifies bias against an ideal uniform distribution using rigorous statistical measures and information-theoretic metrics. Our results reveal marked discrepancies: GPT-4o exhibits pronounced occupational gender segregation, disproportionately linking healthcare roles to female identities while assigning male labels to engineering and physically demanding positions. In contrast, Gemini 1.5 Pro, Sonnet 3.5, and LLaMA 3.1:8b predominantly favor female assignments, albeit with less job-specific precision. These findings demonstrate how architectural decisions, training data composition, and token embedding strategies critically influence gender representation. The study underscores the urgent need for inclusive datasets, advanced bias-mitigation techniques, and continuous model audits to develop AI systems that are not only free from stereotype perpetuation but actively promote equitable and representative information processing. Full article
(This article belongs to the Special Issue Fundamental Problems of Information Studies)
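A distribution-level bias score of the kind described above can be computed by comparing an observed gender distribution against the ideal uniform target with information-theoretic divergences. The observed distribution below is illustrative, not taken from the paper's results:

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence in bits; terms with p_i = 0 contribute 0.
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def js(p, q):
    # Jensen-Shannon divergence: symmetric, bounded by 1 bit.
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical observed gender shares for one occupation
# (e.g. [female, male, non-binary]) versus the ideal uniform target.
uniform = np.array([1 / 3, 1 / 3, 1 / 3])
observed = np.array([0.8, 0.15, 0.05])

bias_kl = kl(observed, uniform)
bias_js = js(observed, uniform)
```

Aggregating such scores across occupations and repeated trials gives a per-model bias profile that can be compared across LLMs.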
15 pages, 5001 KiB  
Article
Research on Tongue Image Segmentation and Classification Methods Based on Deep Learning and Machine Learning
by Bin Liu, Zeya Wang, Kang Yu, Yunfeng Wang, Haiying Zhang, Tingting Song and Hao Yang
Information 2025, 16(5), 357; https://doi.org/10.3390/info16050357 - 29 Apr 2025
Abstract
Tongue diagnosis is a crucial method in traditional Chinese medicine (TCM) for obtaining information about a patient’s health condition. In this study, we propose a tongue image segmentation method based on deep learning and a pixel-level tongue color classification method utilizing machine learning techniques such as support vector machine (SVM) and ridge regression. These two approaches together form a comprehensive framework that spans from tongue image acquisition to segmentation and analysis. This framework provides an objective and visualized representation of pixel-wise classification and proportion distribution within tongue images, effectively assisting TCM practitioners in diagnosing tongue conditions. It mitigates the reliance on subjective observations in traditional tongue diagnosis, reducing human bias and enhancing the objectivity of TCM diagnosis. The proposed framework consists of three main components: tongue image segmentation, pixel-wise classification, and tongue color classification. In the segmentation stage, we integrate the Segment Anything Model (SAM) into the overall segmentation network. This approach not only achieves an intersection over union (IoU) score above 0.95 across three tongue image datasets but also significantly reduces the labor-intensive annotation process required for training traditional segmentation models while improving the generalization capability of the segmentation model. For pixel-wise classification, we propose a lightweight pixel classification model based on SVM, achieving a classification accuracy of 92%. In the tongue color classification stage, we introduce a ridge regression model that classifies tongue color based on the proportion of different pixel categories. Using this method, the classification accuracy reaches 91.80%. 
The proposed approach enables accurate and efficient tongue image segmentation, provides an intuitive visualization of tongue color distribution, and objectively analyzes and quantifies the proportion of different tongue color categories. In the future, this framework holds potential for validation and optimization in clinical practice. Full article
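The final stage, mapping per-image pixel-category proportions to a tongue-color score with ridge regression, can be sketched with the closed-form solution w = (X^T X + alpha I)^{-1} X^T y. The data here are synthetic proportion vectors, not tongue images, and the target is a made-up continuous score standing in for the class decision:

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_pixel_classes = 100, 5
# Each row: proportions of the image's pixels falling in each color category
# (rows sum to 1, as produced by the pixel-wise classifier).
X = rng.dirichlet(np.ones(n_pixel_classes), size=n_images)
true_w = np.array([2.0, -1.0, 0.5, 0.0, 1.0])   # hypothetical scoring weights
y = X @ true_w + 0.01 * rng.normal(size=n_images)

# Closed-form ridge regression: w = (X^T X + alpha I)^{-1} X^T y.
alpha = 1e-3
w = np.linalg.solve(X.T @ X + alpha * np.eye(n_pixel_classes), X.T @ y)
pred = X @ w
```

Thresholding or binning the predicted score would then yield the discrete tongue-color class; the regularization term keeps the solve stable despite the proportions being collinear (each row sums to 1).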
33 pages, 36906 KiB  
Article
Making Images Speak: Human-Inspired Image Description Generation
by Chifaa Sebbane, Ikram Belhajem and Mohammed Rziza
Information 2025, 16(5), 356; https://doi.org/10.3390/info16050356 - 28 Apr 2025
Abstract
Despite significant advances in deep learning-based image captioning, many state-of-the-art models still struggle to balance visual grounding (i.e., accurate object and scene descriptions) with linguistic coherence (i.e., grammatical fluency and appropriate use of non-visual tokens such as articles and prepositions). To address these limitations, we propose a hybrid image captioning framework that integrates handcrafted and deep visual features. Specifically, we combine local descriptors—Scale-Invariant Feature Transform (SIFT) and Bag of Features (BoF)—with high-level semantic features extracted using ResNet50. This dual representation captures both fine-grained spatial details and contextual semantics. The decoder employs Bahdanau attention refined with an Attention-on-Attention (AoA) mechanism to optimize visual-textual alignment, while GloVe embeddings and a GRU-based sequence model ensure fluent language generation. The proposed system is trained on 200,000 image-caption pairs from the MS COCO train2014 dataset and evaluated on 50,000 held-out MS COCO pairs plus the Flickr8K benchmark. Our model achieves a CIDEr score of 128.3 and a SPICE score of 29.24, reflecting clear improvements over baselines in both semantic precision—particularly for spatial relationships—and grammatical fluency. These results validate that combining classical computer vision techniques with modern attention mechanisms yields more interpretable and linguistically precise captions, addressing key limitations in neural caption generation. Full article
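Additive (Bahdanau) attention of the kind used in the decoder above can be sketched in NumPy. The dimensions and random weights below are illustrative, and the Attention-on-Attention refinement is omitted:

```python
import numpy as np

def bahdanau_attention(features, state, W1, W2, v):
    # Additive attention: score_i = v^T tanh(W1 f_i + W2 s) for each region i.
    scores = np.tanh(features @ W1.T + state @ W2.T) @ v
    # Softmax over regions gives normalized attention weights.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    context = weights @ features   # weighted sum of visual features
    return context, weights

rng = np.random.default_rng(0)
n_regions, feat_dim, state_dim, attn_dim = 49, 64, 32, 16
features = rng.normal(size=(n_regions, feat_dim))  # stand-in for image region features
state = rng.normal(size=state_dim)                 # decoder GRU hidden state
W1 = rng.normal(size=(attn_dim, feat_dim))
W2 = rng.normal(size=(attn_dim, state_dim))
v = rng.normal(size=attn_dim)

context, weights = bahdanau_attention(features, state, W1, W2, v)
```

At each decoding step the context vector is concatenated with the word embedding and fed to the GRU, letting the model attend to different image regions per generated token.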
29 pages, 2763 KiB  
Review
A Review of Computer Vision Technology for Football Videos
by Fucheng Zheng, Duaa Zuhair Al-Hamid, Peter Han Joo Chong, Cheng Yang and Xue Jun Li
Information 2025, 16(5), 355; https://doi.org/10.3390/info16050355 - 28 Apr 2025
Abstract
In the era of digital advancement, the integration of Deep Learning (DL) algorithms is revolutionizing performance monitoring in football. Due to restrictions on monitoring devices during games to prevent unfair advantages, coaches are tasked to analyze players’ movements and performance visually. As a result, Computer Vision (CV) technology has emerged as a vital non-contact tool for performance analysis, offering numerous opportunities to enhance the clarity, accuracy, and intelligence of sports event observations. However, existing CV studies in football face critical challenges, including low-resolution imagery of distant players and balls, severe occlusion in crowded scenes, motion blur during rapid movements, and the lack of large-scale annotated datasets tailored for dynamic football scenarios. This review paper fills this gap by comprehensively analyzing advancements in CV, particularly in four key areas: player/ball detection and tracking, motion prediction, tactical analysis, and event detection in football. By exploring these areas, this review offers valuable insights for future research on using CV technology to improve sports performance. Future directions should prioritize super-resolution techniques to enhance video quality and improve small-object detection performance, collaborative efforts to build diverse and richly annotated datasets, and the integration of contextual game information (e.g., score differentials and time remaining) to improve predictive models. The in-depth analysis of current State-Of-The-Art (SOTA) CV techniques provides researchers with a detailed reference to further develop robust and intelligent CV systems in football. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)
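Most player/ball tracking pipelines of the kind surveyed above associate detections across frames by bounding-box overlap. As a minimal illustration (not taken from any specific reviewed paper), Intersection-over-Union and a greedy frame-to-frame matcher can be sketched as:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def greedy_match(tracks, detections, threshold=0.3):
    """Greedily associate current-frame detections with existing tracks by best IoU."""
    pairs, used = [], set()
    for t_idx, t_box in enumerate(tracks):
        best, best_iou = None, threshold
        for d_idx, d_box in enumerate(detections):
            if d_idx in used:
                continue
            score = iou(t_box, d_box)
            if score > best_iou:
                best, best_iou = d_idx, score
        if best is not None:
            used.add(best)
            pairs.append((t_idx, best))
    return pairs
```

Real trackers replace the greedy step with Hungarian assignment and add motion models, but the IoU association step is the common core.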

28 pages, 1881 KiB  
Article
Enabling Collaborative Forensic by Design for the Internet of Vehicles
by Ahmed M. Elmisery and Mirela Sertovic
Information 2025, 16(5), 354; https://doi.org/10.3390/info16050354 - 28 Apr 2025
Abstract
The progress in automotive technology, communication protocols, and embedded systems has propelled the development of the Internet of Vehicles (IoV). In this system, each vehicle acts as a sophisticated sensing platform that collects environmental and vehicular data. These data assist drivers and infrastructure engineers in improving navigation safety, pollution control, and traffic management. Digital artefacts stored within vehicles can serve as critical evidence in road crime investigations. Given the interconnected and autonomous nature of intelligent vehicles, the effective identification of road crimes and the secure collection and preservation of evidence from these vehicles are essential for the successful implementation of the IoV ecosystem. Traditional digital forensics has primarily focused on in-vehicle investigations. This paper addresses the challenges of extending artefact identification to an IoV framework and introduces the Collaborative Forensic Platform for Electronic Artefacts (CFPEA). The CFPEA framework implements a collaborative forensic-by-design mechanism that is designed to securely collect, store, and share artefacts from the IoV environment. It enables individuals and groups to manage artefacts collected by their intelligent vehicles and store them in a non-proprietary format. This approach allows crime investigators and law enforcement agencies to gain access to real-time and highly relevant road crime artefacts that have been previously unknown to them or out of their reach, while enabling vehicle owners to monetise the use of their sensed artefacts. The CFPEA framework assists in identifying pertinent roadside units and evaluating their datasets, enabling the autonomous extraction of evidence for ongoing investigations. Leveraging CFPEA for artefact collection in road crime cases offers significant benefits for solving crimes and conducting thorough investigations. Full article
(This article belongs to the Special Issue Information Sharing and Knowledge Management)

33 pages, 3397 KiB  
Article
Optimizing YouTube Video Visibility and Engagement: The Impact of Keywords on Fisheries’ Product Campaigns in the Supply Chain Sector
by Emmanouil Giankos, Nikolaos T. Giannakopoulos and Damianos P. Sakas
Information 2025, 16(5), 353; https://doi.org/10.3390/info16050353 - 27 Apr 2025
Abstract
YouTube has emerged as a powerful platform for digital content distribution, particularly in niche sectors such as fisheries and environmental sustainability. This study examines the impact of specific keywords on video visibility and engagement, focusing on fishery-related YouTube channels within the broader supply chain context. Using a statistical analysis with R software, this study isolates the influence of keywords while controlling for macro-characteristics such as video duration, title length, and description length. The findings reveal that while most structural video attributes do not significantly impact views, keyword optimization in video titles is crucial in improving discoverability. Additionally, a positive correlation between views and user engagement (likes) is confirmed, highlighting the role of interaction in content promotion. These insights provide actionable recommendations for content creators seeking to enhance their digital outreach while offering theoretical contributions to search engine optimization (SEO) and social media marketing strategies. Full article
(This article belongs to the Section Information Applications)
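The study performs its analysis in R; purely as an illustration of the general approach, the following Python sketch estimates a title-keyword indicator's effect on log-views while controlling for the macro-characteristics named above. All data, variable names, and coefficients here are synthetic inventions, not the paper's results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
duration = rng.uniform(60, 900, n)       # video length in seconds (control)
title_len = rng.integers(20, 80, n)      # title length (control)
desc_len = rng.integers(50, 500, n)      # description length (control)
keyword = rng.integers(0, 2, n)          # 1 = optimized keyword present in title

# Synthetic ground truth: only the keyword moves log-views (effect = 0.8).
log_views = 5.0 + 0.8 * keyword + rng.normal(0, 0.3, n)

# OLS with an intercept; controls are included so the keyword effect is isolated.
X = np.column_stack([np.ones(n), keyword, duration, title_len, desc_len])
beta, *_ = np.linalg.lstsq(X, log_views, rcond=None)
print(f"estimated keyword effect: {beta[1]:.2f}")  # close to the true 0.8
```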

15 pages, 29428 KiB  
Article
Color as a High-Value Quantitative Tool for PET/CT Imaging
by Michail Marinis, Sofia Chatziioannou and Maria Kallergi
Information 2025, 16(5), 352; https://doi.org/10.3390/info16050352 - 27 Apr 2025
Abstract
The successful application of artificial intelligence (AI) techniques for the quantitative analysis of hybrid medical imaging data such as PET/CT is challenged by the differences in the type of information and image quality between the two modalities. The purpose of this work was to develop color-based pre-processing methodologies for PET/CT data that could yield a better starting point for subsequent diagnosis and image processing and analysis. Two methods are proposed that are based on the encoding of Hounsfield Units (HU) and Standardized Uptake Values (SUVs) in separate transformed .png files as reversible color information in combination with .png basic information metadata based on DICOM attributes. The proposed methodologies were implemented in Python on Ubuntu Linux and pilot-tested on brain 18F-FDG PET/CT scans acquired with different PET/CT systems. The range of HUs and SUVs was mapped using novel weighted color distribution functions that allowed for a balanced representation of the data and an improved visualization of anatomic and metabolic differences. The pilot application of the proposed mapping codes yielded CT and PET images where it was easier to pinpoint variations in anatomy and metabolic activity and offered a potentially better starting point for the subsequent fully automated quantitative analysis of specific regions of interest or observer evaluation. It should be noted that the output .png files contained all the raw values and may be treated as raw DICOM input data. Full article
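The paper's weighted color distribution functions are not reproduced here, but the underlying idea of storing HU values reversibly in .png color channels can be illustrated with a simple byte-split encoding. This is a toy scheme under assumed conventions (offset of 1024, high byte in red, low byte in green), not the authors' mapping:

```python
import numpy as np

HU_OFFSET = 1024  # shifts the typical CT range [-1024, 3071] into [0, 4095]

def hu_to_rgb(hu):
    """Pack HU values losslessly: high byte in R, low byte in G, B left free."""
    u = (hu.astype(np.int32) + HU_OFFSET).clip(0, 65535).astype(np.uint16)
    r = (u >> 8).astype(np.uint8)
    g = (u & 0xFF).astype(np.uint8)
    b = np.zeros_like(r)
    return np.stack([r, g, b], axis=-1)

def rgb_to_hu(rgb):
    """Exact inverse: recombine the two bytes and undo the offset."""
    u = rgb[..., 0].astype(np.int32) * 256 + rgb[..., 1].astype(np.int32)
    return u - HU_OFFSET

hu = np.array([[-1000, 0], [40, 3000]])  # air, water, soft tissue, bone
assert np.array_equal(rgb_to_hu(hu_to_rgb(hu)), hu)  # round trip is lossless
```

Because the round trip is exact, such a .png really can stand in for the raw values, as the abstract notes.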

20 pages, 1792 KiB  
Article
A Lightweight Deep Learning Model for Profiled SCA Based on Random Convolution Kernels
by Yu Ou, Yongzhuang Wei, René Rodríguez-Aldama and Fengrong Zhang
Information 2025, 16(5), 351; https://doi.org/10.3390/info16050351 - 27 Apr 2025
Abstract
In deep learning-based side-channel analysis (DL-SCA), there may be a proliferation of model parameters as the number of trace power points increases, especially in the case of raw power traces. Determining how to design a lightweight deep learning model that can handle a trace with more power points and has fewer parameters and lower time costs for profiled SCAs appears to be a challenge. In this article, a DL-SCA model is proposed by introducing a non-trained DL technique called random convolutional kernels, which extracts leakage features in a manner similar to a transformer model. The model is then processed by a classifier with an attention mechanism, which finally outputs the probability vector for the candidate keys. Moreover, we analyze the performance and complexity of the random kernels and discuss how they work in theory. On several public AES datasets, the experimental results show that the number of required profiling traces and the number of trainable parameters are reduced by over 70% and 94%, respectively, compared with state-of-the-art works, while ensuring that the number of power traces required to recover the real key remains acceptable. Importantly, differing from previous SCA models, our architecture eliminates the dependency between the feature length of power traces and the number of trainable parameters, which allows the architecture to be applied to the case of raw power traces. Full article
(This article belongs to the Special Issue Hardware Security and Trust, 2nd Edition)
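The paper's exact construction is not reproduced here, but the general technique of non-trained random convolutional kernels (popularized as ROCKET for time series) convolves each trace with many fixed random kernels and keeps simple pooled statistics, such as the maximum and the proportion of positive values, as features. A NumPy sketch under those assumptions:

```python
import numpy as np

def random_kernel_features(traces, n_kernels=100, seed=0):
    """Untrained random 1-D convolution kernels; features = [max, PPV] per kernel."""
    rng = np.random.default_rng(seed)
    n, length = traces.shape
    feats = np.empty((n, 2 * n_kernels))
    for k in range(n_kernels):
        size = rng.choice([7, 9, 11])       # random kernel length
        w = rng.normal(0, 1, size)
        w -= w.mean()                       # zero-mean weights
        b = rng.uniform(-1, 1)              # random bias
        for i in range(n):
            conv = np.convolve(traces[i], w, mode="valid") + b
            feats[i, 2 * k] = conv.max()              # max pooling
            feats[i, 2 * k + 1] = (conv > 0).mean()   # proportion of positive values
    return feats

traces = np.random.default_rng(1).normal(0, 1, (4, 200))  # stand-in power traces
f = random_kernel_features(traces, n_kernels=10)
```

Because the kernels are never trained, the trainable-parameter count of the downstream classifier is independent of the trace length, which is the decoupling the abstract highlights.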

18 pages, 1644 KiB  
Article
The Exploration of Combining Hologram-like Images and Pedagogical Agent Gesturing
by Robert O. Davis, Joseph Vincent, Eun Byul Yang, Yong Jik Lee and Ji Hae Lee
Information 2025, 16(5), 350; https://doi.org/10.3390/info16050350 - 27 Apr 2025
Abstract
The split-attention principle suggests that separating onscreen information sources can overburden working memory and impede learning. While research has traditionally focused on the separation of images and text, relatively little is known about the impact of multiple simultaneous visual inputs. This study examined the split-attention principle in a multimedia environment featuring a pedagogical agent performing gestures, with hologram-like images either integrated centrally with the agent or spatially separated. A within-subjects design (N = 80) investigated the impact on satisfaction, cognitive load, and cued recall. The quantitative findings revealed no significant differences between the two spatial conditions. Preliminary qualitative insights from a limited sample of six individual interviews suggested that some participants may employ strategies to simplify complex designs and manage perceived cognitive load. Based on these limited qualitative observations, this research tentatively proposes the “pruning principle”, a metacognitive strategy where learners actively “prune” extraneous information to optimize cognitive resources. These findings underscore the importance of considering individual differences and metacognitive strategies in multimedia design. Full article

15 pages, 2021 KiB  
Article
Toward Annotation, Visualization, and Reproducible Archiving of Human–Human Dialog Video Recording Applications
by Verena Schreyer, Marco Xaver Bornschlegl and Matthias Hemmje
Information 2025, 16(5), 349; https://doi.org/10.3390/info16050349 - 26 Apr 2025
Abstract
The COVID-19 pandemic increased the number of video conferences, for example, through online teaching and home office meetings. Even in the medical environment, consultation sessions are now increasingly conducted in the form of video conferencing. This includes sessions between psychotherapists and one or more call participants (individual/group calls). To subsequently document and analyze patient conversations, as well as any other human–human dialog, it is possible to record these video conferences. This allows experts to concentrate better on the conversation during the dialog and to perform analysis afterward. Artificial intelligence (AI), and machine learning in particular, which has already driven many innovations, can support these subsequent analyses. Among other things, emotion recognition algorithms can be used to determine dialog participants’ emotions and record them automatically. This can alert experts to any noticeable sections of the conversation during subsequent analysis, thus simplifying the analysis process. As a result, experts can identify the cause of such sections based on emotion sequence data and exchange ideas with other experts within the context of an analysis tool. Full article
(This article belongs to the Special Issue Advances in Human-Centered Artificial Intelligence)

25 pages, 4027 KiB  
Article
Edge-Optimized Deep Learning Architectures for Classification of Agricultural Insects with Mobile Deployment
by Muhammad Hannan Akhtar, Ibrahim Eksheir and Tamer Shanableh
Information 2025, 16(5), 348; https://doi.org/10.3390/info16050348 - 25 Apr 2025
Abstract
The deployment of machine learning models on mobile platforms has ushered in a new era of innovation across diverse sectors, including agriculture, where such applications hold immense promise for empowering farmers with cutting-edge technologies. In this context, the threat posed by insects to crop yields during harvest has escalated, fueled by factors such as evolution and climate change-induced shifts in insect behavior. To address this challenge, smart insect monitoring systems and detection models have emerged as crucial tools for farmers and IoT-based systems, enabling interventions to safeguard crops. The primary contribution of this study lies in its systematic investigation of model optimization techniques for edge deployment, including Post-Training Quantization, Quantization-Aware Training, and Data Representative Quantization. As such, we address the crucial need for efficient, on-site pest detection tools in agricultural settings. We provide a detailed analysis of the trade-offs between model size, inference speed, and accuracy across different optimization approaches, ensuring practical applicability in resource-constrained farming environments. Our study explores various methodologies for model development, including the utilization of Mobile-ViT and EfficientNet architectures, coupled with transfer learning and fine-tuning techniques. Using the Dangerous Farm Insects Dataset, we achieve an accuracy of 82.6% and 77.8% on validation and test datasets, respectively, showcasing the efficacy of our approach. Furthermore, we investigate quantization techniques to optimize model performance for on-device inference, ensuring seamless deployment on mobile devices and other edge devices without compromising accuracy. The best quantized model, produced through Post-Training Quantization, was able to maintain a classification accuracy of 77.8% while significantly reducing the model size from 33 MB to 9.6 MB. 
To validate the generalizability of our solution, we extended our experiments to the larger IP102 dataset. The quantized model produced using Post-Training Quantization was able to maintain a classification accuracy of 59.6% while also reducing the model size from 33 MB to 9.6 MB, thus demonstrating that our solution maintains a competitive performance across a broader range of insect classes. Full article
(This article belongs to the Special Issue Intelligent Information Technology)
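Post-Training Quantization, the technique behind the reported 33 MB to 9.6 MB reduction, maps float32 weights to int8 with a scale and zero-point. A minimal NumPy sketch of the per-tensor affine scheme (real pipelines such as TensorFlow Lite add calibration data and per-channel scales; this is the core idea only):

```python
import numpy as np

def quantize_int8(w):
    """Affine (asymmetric) post-training quantization of a float32 tensor to int8."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0          # one step of the 256-level grid
    zero_point = round(-128 - lo / scale)     # int8 code that represents lo
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.default_rng(0).normal(0, 0.1, (64, 64)).astype(np.float32)
q, s, z = quantize_int8(w)
err = np.abs(dequantize(q, s, z) - w).max()   # error is on the order of one step s
print(f"float32: {w.nbytes} bytes, int8: {q.nbytes} bytes")
```

Storing one byte per weight instead of four gives the roughly 4x size reduction, with accuracy loss bounded by the quantization step.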

19 pages, 4763 KiB  
Article
Application of a KAN-LSTM Fusion Model for Stress Prediction in Large-Diameter Pipelines
by Zechao Li and Shiwei Qin
Information 2025, 16(5), 347; https://doi.org/10.3390/info16050347 - 25 Apr 2025
Abstract
Accurately predicting stress in large-diameter sewage pipelines is critical for ensuring their structural reliability and safety. To meet the safety requirements of large-diameter concrete pipes, we propose a hybrid model that integrates Kolmogorov-Arnold Networks (KAN) with Long Short-Term Memory (LSTM) neural networks. The model is trained and validated using actual pipeline monitoring data, ensuring that it accurately captures both the temporal dependencies and nonlinear stress patterns inherent in such systems. By modifying the fully connected layer of the original LSTM model, we develop a novel LSTM-KAN model and evaluate its performance through comprehensive predictive analysis. Comparisons with a traditional LSTM model reveal that the LSTM-KAN model—in which the fully connected layer is replaced by a KAN layer—achieves significantly lower loss and higher accuracy with fewer training iterations. Specifically, the proposed model attains a mean absolute error (MAE) of 0.033, a root mean square error (RMSE) of 0.035, and a coefficient of determination (R2) of 0.92, underscoring its superior accuracy and efficiency, and it can be used for the long-term prediction of stress in large-diameter pipes. Moreover, the integration of KAN significantly improves the nonlinear modeling capacity of the conventional LSTM, enabling the hybrid model to effectively capture complex stress variations under variable operating conditions. This work not only provides novel technical support for the application of deep learning in pipeline stress prediction but also offers a robust framework adaptable to other structural health monitoring applications. Full article
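The reported MAE, RMSE, and R² are standard error measures; for reference, they can be computed from predictions as follows (toy values, not the paper's data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Mean absolute error, root mean square error, and coefficient of determination."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    ss_res = (err ** 2).sum()                          # residual sum of squares
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()     # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

mae, rmse, r2 = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```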

23 pages, 812 KiB  
Article
Innovation in Manufacturing Within the Digital Intelligence Context: Examining Faultlines Through Information Processing
by Kangli Zhang and Jinwei Zhu
Information 2025, 16(5), 346; https://doi.org/10.3390/info16050346 - 25 Apr 2025
Abstract
In the context of digital intelligence, innovation is vital for manufacturing enterprises to establish sustainable competitive advantages. As the cornerstone of decision-making, the information-processing capability of top management teams plays an essential role in driving organizational success. Using panel data from A-Share manufacturing listed companies between 2015 and 2023, we implemented the faultline grouping calculations in R using hierarchical clustering and k-means algorithms. The empirical analysis was performed in STATA, where the Hausman test indicated that a fixed-effects model should be used. The results demonstrate that task-related faultlines, driven by factors such as educational background, tenure, career experience, and years of service, have a positive impact on innovation performance. In contrast, relationship-related faultlines influenced by gender and age exhibit a negative effect. Furthermore, long-term investment decision preferences mediate the relationship between faultlines and innovation performance. Performance expectation gaps amplify the positive influence of task-related faultlines and mitigate the negative effects of relationship-related faultlines. When the chairperson belongs to a minority subgroup rather than the majority subgroup, the faultline has a more pronounced impact on innovation performance. This study presents a novel framework for fostering innovation within the manufacturing industry under the digital intelligence context. By combining R programming with empirical analysis, we thoroughly examine how the characteristics of top management teams’ faultlines influence innovation performance through an information processing perspective. Our findings provide actionable insights for optimizing executive structures and aligning decision-making strategies, thereby advancing organizational effectiveness. Full article
(This article belongs to the Special Issue Decision Models for Economics and Business Management)
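The paper computes faultline groupings in R with hierarchical clustering and k-means. A related and widely used formalization, shown here only as an illustration and not necessarily the authors' metric, is Thatcher et al.'s Fau measure: the strongest two-subgroup split of a team, scored by the share of attribute variance that split explains. For small teams it can be computed exhaustively:

```python
import numpy as np
from itertools import combinations

def faultline_strength(attrs):
    """Fau: max over two-subgroup splits of between-group / total sum of squares.
    attrs: (members, attributes) matrix, attributes assumed pre-standardized."""
    attrs = np.asarray(attrs, float)
    n = len(attrs)
    grand = attrs.mean(axis=0)
    total = ((attrs - grand) ** 2).sum()
    if total == 0:
        return 0.0  # perfectly homogeneous team has no faultline
    best = 0.0
    for size in range(1, n // 2 + 1):
        for group in combinations(range(n), size):
            a = attrs[list(group)]
            b = attrs[[i for i in range(n) if i not in group]]
            between = (len(a) * ((a.mean(axis=0) - grand) ** 2).sum()
                       + len(b) * ((b.mean(axis=0) - grand) ** 2).sum())
            best = max(best, between / total)
    return best

# Two identical pairs with different profiles form a perfect faultline (Fau = 1).
team = [[0, 0], [0, 0], [1, 1], [1, 1]]
print(faultline_strength(team))
```

Clustering approaches such as k-means scale this idea to larger teams, where exhaustive enumeration of splits becomes infeasible.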

30 pages, 3620 KiB  
Article
Stroke Detection in Brain CT Images Using Convolutional Neural Networks: Model Development, Optimization and Interpretability
by Hassan Abdi, Mian Usman Sattar, Raza Hasan, Vishal Dattana and Salman Mahmood
Information 2025, 16(5), 345; https://doi.org/10.3390/info16050345 - 24 Apr 2025
Abstract
Stroke detection using medical imaging plays a crucial role in early diagnosis and treatment planning. In this study, we propose a Convolutional Neural Network (CNN)-based model for detecting strokes from brain Computed Tomography (CT) images. The model is trained on a dataset consisting of 2501 images, including both normal and stroke cases, and employs a series of preprocessing steps, including resizing, normalization, data augmentation, and splitting into training, validation, and test sets. The CNN architecture comprises three convolutional blocks followed by dense layers optimized through hyperparameter tuning to maximize performance. Our model achieved a validation accuracy of 97.2%, with precision and recall values of 96%, demonstrating high efficacy in stroke classification. Additionally, interpretability techniques such as Local Interpretable Model-agnostic Explanations (LIME), occlusion sensitivity, and saliency maps were used to visualize the model’s decision-making process, enhancing transparency and trust for clinical use. The results suggest that deep learning models, particularly CNNs, can provide valuable support for medical professionals in detecting strokes, offering both high performance and interpretability. The model demonstrates moderate generalizability, achieving 89.73% accuracy on an external, patient-independent dataset of 9900 CT images, underscoring the need for further optimization in diverse clinical settings. Full article
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
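Of the interpretability techniques mentioned, occlusion sensitivity is the simplest to sketch: slide a masking patch over the image and record how much the model's score drops at each position. The scoring function below is a toy stand-in for the trained CNN, used only so the sketch is self-contained:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8, baseline=0.0):
    """Heatmap of score drop when each patch-sized region is masked out."""
    h, w = image.shape
    ref = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = ref - score_fn(occluded)
    return heat

# Toy substitute for a classifier: "stroke score" = mean intensity of one region.
def toy_score(img):
    return img[8:16, 8:16].mean()

img = np.random.default_rng(0).uniform(0.2, 1.0, (32, 32))
heat = occlusion_map(img, toy_score, patch=8)
# The hottest cell is the region the toy model actually relies on.
```

With a real CNN, `score_fn` would be the predicted stroke probability, and overlapping strides give a smoother map at higher cost.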

25 pages, 1964 KiB  
Article
Hate Speech Detection and Online Public Opinion Regulation Using Support Vector Machine Algorithm: Application and Impact on Social Media
by Siyuan Li and Zhi Li
Information 2025, 16(5), 344; https://doi.org/10.3390/info16050344 - 24 Apr 2025
Abstract
Detecting hate speech in social media is challenging due to its rarity, high-dimensional complexity, and implicit expression via sarcasm or spelling variations, rendering linear models ineffective. In this study, the SVM (Support Vector Machine) algorithm is used to map text features from a low-dimensional to a high-dimensional space using kernel function techniques to meet complex nonlinear classification challenges. By maximizing the margin between classes to locate the optimal hyperplane and using kernel techniques to implicitly adjust the data distribution, the classification accuracy of hate speech detection is significantly improved. Data collection leverages social media APIs (Application Programming Interfaces) and customized crawlers with OAuth2.0 authentication and keyword filtering, ensuring relevance. Regular expressions validate data integrity, followed by preprocessing steps such as denoising, stop-word removal, and spelling correction. Word embeddings are generated using Word2Vec’s Skip-gram model, combined with TF-IDF (Term Frequency–Inverse Document Frequency) weighting to capture contextual semantics. A multi-level feature extraction framework integrates sentiment analysis via lexicon-based methods and BERT for advanced sentiment recognition. Experimental evaluations on two datasets demonstrate the SVM model’s effectiveness, achieving accuracies of 90.42% and 92.84%, recall rates of 88.06% and 90.79%, and average inference times of 3.71 ms and 2.96 ms. These results highlight the model’s ability to detect implicit hate speech accurately and efficiently, supporting real-time monitoring. This research contributes to creating a safer online environment by advancing hate speech detection methodologies. Full article
(This article belongs to the Special Issue Information Technology in Society)
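Two ingredients the abstract names, TF-IDF weighting and the RBF kernel's implicit high-dimensional similarity, can be sketched in plain Python. The sentences below are invented toy examples, and this is not the authors' pipeline (which adds Word2Vec embeddings, sentiment features, and a trained SVM on top):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF vectors over the corpus vocabulary (smoothed IDF, whitespace tokens)."""
    n = len(docs)
    df = Counter(w for d in docs for w in set(d.split()))  # document frequency
    vocab = sorted(df)
    idf = {w: math.log((1 + n) / (1 + df[w])) + 1 for w in vocab}
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        total = sum(tf.values())
        vecs.append([tf[w] / total * idf[w] for w in vocab])
    return vecs

def rbf_kernel(x, y, gamma=1.0):
    """Similarity in the implicit high-dimensional space used by a kernel SVM."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

docs = ["you people should disappear",      # toy "hateful" example
        "great match today",                # unrelated benign example
        "people should be kind"]            # benign example sharing vocabulary
v = tfidf(docs)
sim_01 = rbf_kernel(v[0], v[1])  # no shared words: low similarity
sim_02 = rbf_kernel(v[0], v[2])  # shared words: higher similarity
```

An SVM never computes the high-dimensional mapping explicitly; it only needs these pairwise kernel values, which is what makes the nonlinear decision boundary tractable.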

23 pages, 3846 KiB  
Article
Efficient Context-Preserving Encoding and Decoding of Compositional Structures Using Sparse Binary Representations
by Roman Malits and Avi Mendelson
Information 2025, 16(5), 343; https://doi.org/10.3390/info16050343 - 24 Apr 2025
Abstract
Despite their unprecedented success, artificial neural networks suffer from extreme opacity and a weakness in learning general knowledge from limited experience. Some argue that the key to overcoming those limitations in artificial neural networks is efficiently combining continuity with compositionality principles. While it is unknown how the brain encodes and decodes information in a way that enables both rapid responses and complex processing, there is evidence that the neocortex employs sparse distributed representations for this task. This is an active area of research. This work deals with one of the challenges in this field related to encoding and decoding nested compositional structures, which are essential for representing complex real-world concepts. One of the algorithms in this field is called context-dependent thinning (CDT). A distinguishing feature of CDT relative to other methods is that the CDT-encoded vector remains similar to each component input and combinations of similar inputs. In this work, we propose a novel encoding method termed CPSE, based on CDT ideas. In addition, we propose a novel decoding method termed CPSD, based on triadic memory. The proposed algorithms extend CDT by allowing both encoding and decoding of information, including the composition order. In addition, the proposed algorithms make it possible to optimize the amount of compute and memory needed to achieve the desired encoding/decoding performance. Full article
(This article belongs to the Special Issue Optimization Algorithms and Their Applications)
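As a rough illustration of the thinning idea behind CDT (a simplified variant for intuition only, not the published procedure and not the proposed CPSE/CPSD algorithms): superpose sparse binary components with OR, then keep only the bits that also survive an AND with randomly permuted copies of the superposition, repeating until a target density is reached. Every surviving bit comes from one of the components, which is what keeps the result similar to its inputs:

```python
import numpy as np

def thin(z, target_ones, seed=0):
    """Thin a sparse binary vector: OR together ANDs of z with permuted copies
    of itself until roughly `target_ones` bits survive (CDT-style)."""
    rng = np.random.default_rng(seed)
    out = np.zeros_like(z)
    while out.sum() < target_ones:
        perm = rng.permutation(len(z))
        out |= z & z[perm]  # a bit survives only where a permuted copy is also 1
    return out

rng = np.random.default_rng(1)
n, ones = 2048, 205
a = np.zeros(n, dtype=np.uint8); a[rng.choice(n, ones, replace=False)] = 1
b = np.zeros(n, dtype=np.uint8); b[rng.choice(n, ones, replace=False)] = 1
z = a | b                      # superposition of the two components
t = thin(z, target_ones=ones)  # thinned back toward single-component density
```

Because which bits survive depends on the whole superposition, the same component is thinned differently in different contexts, hence "context-dependent".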

18 pages, 1491 KiB  
Article
Using Natural Language Processing and Machine Learning to Detect Online Radicalisation in the Maldivian Language, Dhivehi
by Hussain Ibrahim, Ahmed Ibrahim and Michael N. Johnstone
Information 2025, 16(5), 342; https://doi.org/10.3390/info16050342 - 24 Apr 2025
Abstract
Early detection of online radical content is important for intelligence services to combat radicalisation and terrorism. The motivation for this research was the lack of language tools for the detection of radicalisation in the Maldivian language, Dhivehi. This research applied Machine Learning and Natural Language Processing (NLP) to detect online radicalisation content in Dhivehi, with the incorporation of domain-specific knowledge. The research used Machine Learning to evaluate the most effective technique for detection of radicalisation text in Dhivehi and used interviews with Subject Matter Experts (SMEs) and self-deradicalised individuals to validate the results, add contextual information and improve recognition accuracy. The contributions of this research to the existing body of knowledge include datasets in the form of labelled radical/non-radical text, a sentiment corpus of radical words, and primary interview data from self-deradicalised individuals, as well as a technique for detection of radicalisation text in Dhivehi for the first time using Machine Learning. We found that the Naïve Bayes algorithm worked best for the detection of radicalisation text in Dhivehi with an Accuracy of 87.67%, Precision of 85.35%, Recall of 92.52% and an F2 score of 91%. Including the radical words identified through the SME interviews as a count feature improved the performance of the ML algorithms, with Naïve Bayes improving by 9.57%. Full article
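The best-performing technique, multinomial Naïve Bayes over word counts, can be sketched with add-one (Laplace) smoothing in plain Python. The examples use invented English stand-in tokens; the actual work operates on Dhivehi text and adds the SME-derived radical-word count feature:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Multinomial Naive Bayes: per-class word counts plus class priors."""
    word_counts = defaultdict(Counter)
    class_counts = Counter(labels)
    for doc, y in zip(docs, labels):
        word_counts[y].update(doc.split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, class_counts, vocab

def predict_nb(model, doc):
    """Pick the class maximizing log prior + smoothed log likelihoods."""
    word_counts, class_counts, vocab = model
    n = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for y in class_counts:
        lp = math.log(class_counts[y] / n)
        total = sum(word_counts[y].values())
        for w in doc.split():
            lp += math.log((word_counts[y][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

docs = ["join the fight destroy them", "destroy the enemy now",
        "lovely weather in male today", "the fishing season starts today"]
labels = ["radical", "radical", "benign", "benign"]
model = train_nb(docs, labels)
```

The smoothing term is what keeps unseen words (common in a low-resource language) from zeroing out a class's probability.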

14 pages, 1318 KiB  
Article
Exploring the Application of Text-to-Image Generation Technology in Art Education at Vocational Senior High Schools in Taiwan
by Chin-Wen Liao, Hsiang-Wei Chen, Bo-Siang Chen, I-Chi Wang, Wei-Sho Ho and Wei-Lun Huang
Information 2025, 16(5), 341; https://doi.org/10.3390/info16050341 - 23 Apr 2025
Abstract
Exploring the potential of text-to-image generation technology in Taiwanese vocational high school art courses, this study employs a conceptual framework of technology integration, creative thinking, and metacognitive abilities, focusing on its effects on teaching strategies as well as students’ digital art creation skills and cognitive and creative development. The study was conducted through a multi-methodological approach that combines a systematic literature review with participatory action research and qualitative analysis. The results showed that integrating text-to-image technology with education boosted students’ interest in activities such as prompt design and project creation and suited themes like landscapes and conceptual art. Testing AI tools enhanced technical proficiency (average of 3.95/5), while pedagogy shifted to project-based learning, increasing engagement. Students’ digital art skills improved from 3.26 to 3.78 (16% growth), with creativity and originality (3.82/5), style diversity, visual complexity, and divergent thinking notably advanced. The technology also fostered metacognitive skills and critical thinking, proving to be an effective teaching aid beyond a mere digital tool. These findings provide a fresh theoretical viewpoint and instructional procedures for high school art education curricula anchored in technology, and highlight the importance of nurturing students’ innovativeness and adaptability within the contemporary digital age. Full article
(This article belongs to the Special Issue Generative AI Technologies: Shaping the Future of Higher Education)
17 pages, 13068 KiB  
Article
A Novel Involution-Based Lightweight Network for Fabric Defect Detection
by Zhenxia Ke, Lingjie Yu, Chao Zhi, Tao Xue and Yuming Zhang
Information 2025, 16(5), 340; https://doi.org/10.3390/info16050340 - 23 Apr 2025
Abstract
For automatic fabric defect detection with deep learning, a large training set covering diverse textures and defect forms is often required. However, the computational cost of convolutional neural network (CNN)-based models is very high. This research proposes an involution-enabled Faster R-CNN network by [...] Read more.
For automatic fabric defect detection with deep learning, a large training set covering diverse textures and defect forms is often required. However, the computational cost of convolutional neural network (CNN)-based models is very high. This research proposes an involution-enabled Faster R-CNN network built on the bottleneck structure of the residual network. Involution has two advantages over convolution: first, it can capture a larger receptive field in the spatial dimension; second, parameters are shared along the channel dimension to reduce information redundancy, thus reducing the parameter count and computation. Detection performance is evaluated by parameter count (Params), floating-point operations (FLOPs), and average precision (AP) on a collected dataset containing 6308 defective fabric images. The experimental results demonstrate that the proposed involution-based network yields a lighter model, with Params reduced to 31.21 M and FLOPs decreased to 176.19 G, compared to Faster R-CNN’s 41.14 M Params and 206.68 G FLOPs. It also slightly improves the detection of large defects, increasing the AP value from 50.5% to 51.1%. The findings of this research offer a promising solution for efficient fabric defect detection in practical textile manufacturing. Full article
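The abstract does not spell out the involution operator itself; as a rough illustration of the idea it names (a per-pixel kernel shared across channel groups, rather than a channel-specific kernel shared across positions), a minimal single-image NumPy sketch might look like the following. This is not the paper's implementation: the `kernel_gen` callable is a stand-in for the small kernel-generating network used in practice.

```python
import numpy as np

def involution(x, kernel_gen, k=3, groups=1):
    """Minimal single-image involution sketch.

    x          : (C, H, W) feature map
    kernel_gen : callable mapping a (C,) pixel vector to a
                 (groups, k, k) kernel (stand-in for the small
                 kernel-generating network)
    """
    c, h, w = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))  # zero-pad H and W
    out = np.zeros_like(x, dtype=float)
    cg = c // groups  # channels per group share one spatial kernel
    for i in range(h):
        for j in range(w):
            ker = kernel_gen(x[:, i, j])         # kernel depends on the pixel
            patch = xp[:, i:i + k, j:j + k]      # (C, k, k) neighborhood
            for g in range(groups):
                sl = slice(g * cg, (g + 1) * cg)
                out[sl, i, j] = (patch[sl] * ker[g]).sum(axis=(1, 2))
    return out
```

With `kernel_gen` returning a constant uniform kernel, this reduces to a mean filter; the parameter saving comes from sharing each generated kernel across all channels in a group instead of learning one kernel per channel.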
20 pages, 780 KiB  
Article
A Fuzzy-Neural Model for Personalized Learning Recommendations Grounded in Experiential Learning Theory
by Christos Troussas, Akrivi Krouska, Phivos Mylonas and Cleo Sgouropoulou
Information 2025, 16(5), 339; https://doi.org/10.3390/info16050339 - 23 Apr 2025
Abstract
Personalized learning is a defining characteristic of current education, with flexible and adaptable experiences that respond to individual learners’ requirements and approaches to learning. Traditional implementations of educational theories—such as Kolb’s Experiential Learning Theory—often follow rule-based approaches, offering predefined structures but lacking adaptability [...] Read more.
Personalized learning is a defining characteristic of current education, offering flexible, adaptable experiences that respond to individual learners’ requirements and approaches to learning. Traditional implementations of educational theories, such as Kolb’s Experiential Learning Theory, often follow rule-based approaches, offering predefined structures but lacking adaptability to dynamically changing learner behavior. In contrast, AI-based approaches such as artificial neural networks (ANNs) are highly adaptable but lack interpretability. In this work, a new fuzzy-ANN model is developed that combines fuzzy logic with ANNs to recommend learning activities, overcoming the weaknesses of both approaches. In the first stage, fuzzy logic maps Kolb’s learning-style dimensions onto continuous membership values, providing a flexible and more interpretable representation of learners’ preferred approaches to learning. These fuzzy weights are then processed by an ANN, which refines the learning recommendations through pattern analysis and adaptive learning. To let recommendations adapt and develop over time, a Weighted Sum Model (WSM) combines learner activity trends and real-time feedback to dynamically update the proposed activity recommendations. Experimental evaluation in an educational environment shows that the model effectively generates personalized, evolving experiences for learners, in harmony with their requirements and activity trends. Full article
(This article belongs to the Special Issue Advances in Human-Centered Artificial Intelligence)
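As a rough sketch of how the two ingredients the abstract names, continuous fuzzy membership values and a Weighted Sum Model, could fit together: the triangular membership function and the criterion names below are illustrative assumptions, not the authors' implementation.

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 outside (a, c), peaking at 1 when x = b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def wsm_rank(candidates, weights):
    """Rank activities by a Weighted Sum Model over normalized criteria.

    candidates : dict  activity -> {criterion: score in [0, 1]}
    weights    : dict  criterion -> weight (weights sum to 1)
    """
    score = lambda crit: sum(w * crit[k] for k, w in weights.items())
    return sorted(candidates, key=lambda a: score(candidates[a]), reverse=True)

# Hypothetical criteria: a fuzzified learning-style match, recent feedback,
# and an activity trend, combined with fixed WSM weights.
weights = {"style_match": 0.5, "recent_feedback": 0.3, "activity_trend": 0.2}
candidates = {
    "simulation": {"style_match": tri_membership(0.7, 0.0, 0.8, 1.0),
                   "recent_feedback": 0.9, "activity_trend": 0.4},
    "reading":    {"style_match": tri_membership(0.2, 0.0, 0.8, 1.0),
                   "recent_feedback": 0.5, "activity_trend": 0.6},
}
ranking = wsm_rank(candidates, weights)
```

In the paper's pipeline an ANN would sit between the membership values and the final scores; here the WSM is applied directly to keep the sketch self-contained.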