Information, Volume 16, Issue 7 (July 2025) – 110 articles

Cover Story: This paper presents a vision system for a UR5 cobot using a webcam and Python-based control for automated object sorting from 2D images. The system detects, classifies, and locates objects based on shape, using OpenCV, NumPy, and scikit-learn with an MLP classifier. Calibration included lens distortion correction, hand-in-eye setup, and virtual plane definition for 3D position estimation. Identification methods involved Hu moment contour similarity, SIFT with FLANN, and MLP-based neural networks. Performance was evaluated using accuracy, precision, sensitivity, specificity, and F1-score, with the MLP outperforming the classical methods in all metrics. The approach aligns with Industry 4.0, enabling efficient, flexible vision systems using accessible tools without requiring deep expertise in computer vision or machine learning.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
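As a rough illustration of the cover story's Hu-moment contour-similarity step, the sketch below compares two shapes with OpenCV's built-in matcher, which scores contours by their Hu moment invariants. The file names and acceptance threshold are placeholders, not details from the paper.

```python
import cv2

def largest_contour(path):
    """Load an image, binarise it with Otsu's threshold, and return its largest contour."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

# matchShapes compares contours via their Hu moment invariants;
# lower scores mean more similar shapes.
template = largest_contour("template_part.png")   # illustrative paths
candidate = largest_contour("camera_frame.png")
score = cv2.matchShapes(template, candidate, cv2.CONTOURS_MATCH_I1, 0.0)
print("same shape" if score < 0.1 else "different shape", score)
```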
24 pages, 327 KiB  
Article
Trust in Generative AI Tools: A Comparative Study of Higher Education Students, Teachers, and Researchers
by Elena Đerić, Domagoj Frank and Marin Milković
Information 2025, 16(7), 622; https://doi.org/10.3390/info16070622 - 21 Jul 2025
Viewed by 1229
Abstract
Generative AI (GenAI) tools, including ChatGPT, Microsoft Copilot, and Google Gemini, are rapidly reshaping higher education by transforming how students, educators, and researchers engage with learning, teaching, and academic work. Despite their growing presence, the adoption of GenAI remains inconsistent, largely due to the absence of universal guidelines and trust-related concerns. This study examines how trust, defined across three key dimensions (accuracy and relevance, privacy protection, and nonmaliciousness), influences the adoption and use of GenAI tools in academic environments. Using survey data from 823 participants across different academic roles, this study employs multiple regression analysis to explore the relationship between trust, user characteristics, and behavioral intention. The results reveal that trust is primarily experience-driven. Frequency of use, duration of use, and self-assessed proficiency significantly predict trust, whereas demographic factors, such as gender and academic role, have no significant influence. Furthermore, trust emerges as a strong predictor of behavioral intention to adopt GenAI tools. These findings reinforce trust calibration theory and extend the UTAUT2 framework to the context of GenAI in education. This study highlights that fostering appropriate trust through transparent policies, privacy safeguards, and practical training is critical for enabling responsible, ethical, and effective integration of GenAI into higher education. Full article
(This article belongs to the Section Artificial Intelligence)
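A minimal sketch of the kind of multiple regression the trust study describes, using statsmodels. The data file and column names are invented for illustration and do not reflect the authors' actual instrument.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey frame: one row per respondent; all column names
# are invented stand-ins for the study's measured variables.
df = pd.read_csv("genai_survey.csv")

# Trust regressed on experience-related predictors and demographics,
# mirroring the reported finding that experience, not demographics,
# predicts trust.
model = smf.ols(
    "trust ~ use_frequency + use_duration + proficiency + C(gender) + C(role)",
    data=df,
).fit()
print(model.summary())
```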
33 pages, 11180 KiB  
Article
New Permutation-Free Quantum Circuits for Implementing 3- and 4-Qubit Unitary Operations
by Artyom M. Grigoryan
Information 2025, 16(7), 621; https://doi.org/10.3390/info16070621 - 21 Jul 2025
Cited by 1 | Viewed by 431
Abstract
The article presents the quantum signal-induced heap transform (QsiHT) method of the QR-decomposition of multi-qubit operations. This transform can be generated by a given signal, by using different paths, or orders, of processing the data. We propose using the concept of the fast path of calculation of the QsiHT and applying such transforms on each stage of the matrix decomposition. This allows us to build quantum circuits for multi-qubit unitary operations without permutations. Unitary operations with real and complex matrices are considered. The cases of 3- and 4-qubit operations are described in detail with quantum circuits. These circuits use a maximum of 28 and 120 Givens rotation gates for 3- and 4-qubit real operations, respectively. All rotations are performed only on adjacent bit planes. For complex unitary operations, each Givens gate is paired with two Z-rotation gates. These two types of rotations and the global phase gate form a universal gate set for multi-qubit operations. The presented approach can be used for implementing quantum circuits for n qubits when n ≥ 2, with a maximum of (4^n/2 − 2^(n−1)) Givens rotations and no permutations. Full article
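For intuition on the rotation count quoted above, the NumPy sketch below performs a QR-style reduction of a random real 8 × 8 (3-qubit) matrix using Givens rotations acting only on adjacent row pairs. This is an analogy for the adjacent-bit-plane constraint, not a reproduction of the QsiHT construction; the maximum count (4^3/2 − 2^2 = 28) matches the figure in the abstract.

```python
import numpy as np

def adjacent_givens_qr(U):
    """Reduce U to upper-triangular form with Givens rotations that act
    only on adjacent row pairs, counting the rotations used."""
    A = U.astype(float).copy()
    n = A.shape[0]
    count = 0
    for col in range(n - 1):
        for row in range(n - 1, col, -1):    # sweep each column bottom-up
            if A[row, col] == 0.0:
                continue
            a, b = A[row - 1, col], A[row, col]
            r = np.hypot(a, b)
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])  # rotation on rows row-1, row
            A[row - 1:row + 1, :] = G @ A[row - 1:row + 1, :]
            count += 1
    return A, count

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))   # a random real 8x8 orthogonal matrix
R, used = adjacent_givens_qr(Q)
print(used)   # at most 8*7/2 = 28 rotations, matching 4**3/2 - 2**2
```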
19 pages, 4037 KiB  
Article
YOLO-MFD: Object Detection for Multi-Scenario Fires
by Fuchuan Mo, Shen Liu, Sitong Wu, Ruiyuan Chen and Tiecheng Song
Information 2025, 16(7), 620; https://doi.org/10.3390/info16070620 - 21 Jul 2025
Viewed by 341
Abstract
Fire refers to a disaster caused by combustion that is uncontrolled in the temporal and spatial dimensions, occurring in diverse complex scenarios where timely and effective detection is crucial. However, existing fire detection methods are often challenged by the deformation of smoke and flames, resulting in missed detections. It is difficult to accurately extract fire features in complex backgrounds, and small targets, such as small flames, pose further detection difficulties. To address this, this paper proposes the YOLO-Multi-scenario Fire Detector (YOLO-MFD) for multi-scenario fire detection. Firstly, to resolve missed detections caused by the deformation of smoke and flames, a Scale Adaptive Perception Module (SAPM) is proposed. Secondly, to counter the suppression of salient fire features by complex backgrounds, a Feature Adaptive Weighting Module (FAWM) is introduced to enhance the feature representation of fire. Finally, considering the difficulty in detecting small flames, a fine-grained Small Object Feature Extraction Module (SOFEM) is developed. Additionally, given the scarcity of multi-scenario fire datasets, this paper constructs a Multi-scenario Fire Dataset (MFDB). Experimental results on MFDB demonstrate that the proposed YOLO-MFD achieves a good balance between effectiveness and efficiency, delivering effective fire detection across various scenarios. Full article
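YOLO-MFD's SAPM, FAWM, and SOFEM modules are not public here, but a generic fine-tuning workflow of this kind underlies such detectors. The sketch below assumes the ultralytics package; the dataset config fire_mfdb.yaml and the image path are invented stand-ins.

```python
from ultralytics import YOLO

# Generic fine-tuning workflow only: the paper's custom modules are not
# reproduced, and fire_mfdb.yaml is a hypothetical dataset config with
# "smoke" and "flame" classes.
model = YOLO("yolo11n.pt")                     # pretrained YOLO11 nano weights
model.train(data="fire_mfdb.yaml", epochs=100, imgsz=640)

metrics = model.val()                          # mAP etc. on the validation split
results = model.predict("warehouse_cam.jpg", conf=0.25)
for box in results[0].boxes:
    print(int(box.cls), float(box.conf))       # class id and confidence per detection
```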
11 pages, 656 KiB  
Article
Adaptive Multi-Gradient Guidance with Conflict Resolution for Limited-Sample Regression
by Yu Lin, Jiaxiang Lin, Keju Zhang, Qin Zheng, Liqiang Lin and Qianqian Chen
Information 2025, 16(7), 619; https://doi.org/10.3390/info16070619 - 21 Jul 2025
Viewed by 311
Abstract
Recent studies report that gradient guidance extracted from a single-reference model can improve Limited-Sample regression. However, one reference model may not capture all relevant characteristics of the target function, which can restrict the capacity of the learner. To address this issue, we introduce the Multi-Gradient Guided Network (MGGN), an extension of single-gradient guidance that combines gradients from several reference models. The gradients are merged through an adaptive weighting scheme, and an orthogonal-projection step is applied to reduce potential conflicts between them. Experiments on sine regression are used to evaluate the method. The results indicate that MGGN achieves higher predictive accuracy and improved stability than existing single-gradient guidance and meta-learning baselines, benefiting from the complementary information provided by multiple reference models. Full article
(This article belongs to the Section Artificial Intelligence)
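The orthogonal-projection step for conflicting gradients resembles projection schemes such as PCGrad: when two gradients point in opposing directions (negative dot product), one is projected onto the normal plane of the other before merging. The sketch below shows that idea with fixed weights standing in for the paper's adaptive weighting scheme.

```python
import numpy as np

def combine_gradients(grads, weights=None):
    """Merge reference-model gradients; when two gradients conflict
    (negative dot product), project one onto the normal plane of the
    other before taking the weighted sum."""
    grads = [g.astype(float).copy() for g in grads]
    n = len(grads)
    weights = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dot = grads[i] @ grads[j]
            if dot < 0:  # conflicting directions: remove the opposing component
                grads[i] -= dot / (grads[j] @ grads[j]) * grads[j]
    return sum(w * g for w, g in zip(weights, grads))

g1 = np.array([1.0, 2.0])
g2 = np.array([-1.0, 0.5])   # partially conflicts with g1
print(combine_gradients([g1, g2]))
```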
20 pages, 6748 KiB  
Article
YOLO-SSFA: A Lightweight Real-Time Infrared Detection Method for Small Targets
by Yuchi Wang, Minghua Cao, Qing Yang, Yue Zhang and Zexuan Wang
Information 2025, 16(7), 618; https://doi.org/10.3390/info16070618 - 20 Jul 2025
Viewed by 589
Abstract
Infrared small target detection is crucial for military surveillance and autonomous driving. However, complex scenes and weak signal characteristics make the identification of such targets particularly difficult. This study proposes YOLO-SSFA, an enhanced You Only Look Once version 11 (YOLOv11) model with three modules: Scale-Sequence Feature Fusion (SSFF), LiteShiftHead detection head, and Noise Suppression Network (NSN). SSFF improves multi-scale feature representation through adaptive fusion; LiteShiftHead boosts efficiency via sparse convolution and dynamic integration; and NSN enhances localization accuracy by focusing on key regions. Experiments on the HIT-UAV and FLIR datasets show mAP50 scores of 94.9% and 85%, respectively. These findings showcase YOLO-SSFA’s strong potential for real-time deployment in challenging infrared environments. Full article
20 pages, 437 KiB  
Article
Post-Quantum Key Exchange and Subscriber Identity Encryption in 5G Using ML-KEM (Kyber)
by Qaiser Khan, Sourav Purification and Sang-Yoon Chang
Information 2025, 16(7), 617; https://doi.org/10.3390/info16070617 - 19 Jul 2025
Viewed by 534
Abstract
5G addresses user privacy concerns in cellular networking by encrypting a subscriber identifier with elliptic-curve-based encryption and then transmitting it as ciphertext known as a Subscriber Concealed Identifier (SUCI). However, an adversary equipped with a quantum computer can break a discrete-logarithm-based elliptic curve algorithm. Consequently, user privacy in 5G is at stake against quantum attacks. In this paper, we study the incorporation of post-quantum ciphers in the SUCI calculation both at the user equipment and at the core network, which involves a shared-key exchange followed by use of the resulting key for the ID encryption. We experiment on different hardware platforms to analyze the PQC key exchange and encryption using NIST-standardized CRYSTALS-Kyber (now called ML-KEM after NIST's standardization selection). Our analyses focus on the performances and compare the Kyber-based key exchange and encryption with the current (pre-quantum) elliptic curve Diffie–Hellman (ECDH). The performance analyses are critical because mobile networking involves resource-limited and battery-operated mobile devices. We measure and analyze not only the time and CPU-processing performances but also the energy and power performances. Our analyses show that Kyber-512 is the most efficient and even has better performance (i.e., faster computations and lower energy consumption) than ECDH. Full article
(This article belongs to the Special Issue Public Key Cryptography and Privacy Protection)
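A schematic of the SUCI-style flow described above: a KEM-derived shared key encrypts the subscriber identifier. The KEM calls are left as comments against a hypothetical keygen/encaps/decaps interface, with AES-GCM (from the pyca cryptography package) standing in for the profile's symmetric cipher; the identifier string is invented.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def conceal_supi(supi: bytes, shared_key: bytes) -> tuple[bytes, bytes]:
    """Encrypt the subscriber identifier with a KEM-derived shared key
    (AES-GCM stands in here for the profile's symmetric cipher)."""
    nonce = os.urandom(12)
    return nonce, AESGCM(shared_key[:32]).encrypt(nonce, supi, None)

def reveal_supi(nonce: bytes, ct: bytes, shared_key: bytes) -> bytes:
    return AESGCM(shared_key[:32]).decrypt(nonce, ct, None)

# Flow sketch, with `kem` standing in for any ML-KEM-512 implementation
# exposing the standard keygen/encaps/decaps interface (hypothetical):
#   ek, dk   = kem.keygen()          # home network publishes ek
#   key, ct  = kem.encaps(ek)        # UE encapsulates a shared key
#   nonce, c = conceal_supi(b"imsi-001010123456789", key)   # UE -> network
#   key2     = kem.decaps(dk, ct)    # network recovers the same key
#   assert reveal_supi(nonce, c, key2) == b"imsi-001010123456789"

# Runnable demo with a throwaway key in place of the KEM output:
demo_key = os.urandom(32)
nonce, ct = conceal_supi(b"imsi-001010123456789", demo_key)
assert reveal_supi(nonce, ct, demo_key) == b"imsi-001010123456789"
```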
22 pages, 1195 KiB  
Article
Private Blockchain-Driven Digital Evidence Management Systems: A Collaborative Mining and NFT-Based Framework
by Butrus Mbimbi, David Murray and Michael Wilson
Information 2025, 16(7), 616; https://doi.org/10.3390/info16070616 - 17 Jul 2025
Viewed by 442
Abstract
Secure Digital Evidence Management Systems (DEMSs) are crucial for law enforcement agencies, because traditional systems are prone to tampering and unauthorised access. Blockchain technology, particularly private blockchains, offers a solution by providing a centralised and tamper-proof system. This study proposes a private blockchain using Proof of Work (PoW) to securely manage digital evidence. Miners are assigned specific nonce ranges to accelerate the mining process, called collaborative mining, to address the scalability challenges in DEMSs. Transaction data includes digital evidence to generate a Non-Fungible Token (NFT). Miners use NFTs to solve the puzzle according to the assigned difficulty level d, so as to generate a hash using SHA-256 and add it to the ledger. Users can verify the integrity and authenticity of records by re-generating the hash and comparing it with the one stored in the ledger. Our results show that the data was verified with 100% precision. The mining time was 2.5 s, and the nonce iterations were as high as 80 × 10³ for d = 5. This approach improves the scalability and integrity of digital evidence management by reducing the overall mining time. Full article
(This article belongs to the Special Issue Blockchain and AI: Innovations and Applications in ICT)
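The collaborative-mining idea, each miner searching a disjoint nonce slice for a SHA-256 hash with d leading zeros, can be sketched in a few lines of standard-library Python. The evidence payload, nonce ranges, and difficulty below are illustrative, not the paper's parameters.

```python
import hashlib

def mine_range(evidence_hash: str, start: int, end: int, d: int):
    """Search an assigned nonce range for a SHA-256 digest with d leading
    hex zeros; disjoint ranges mean miners never duplicate work."""
    target = "0" * d
    for nonce in range(start, end):
        digest = hashlib.sha256(f"{evidence_hash}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
    return None  # no valid nonce in this slice (possible, though unlikely here)

# NFT-style token for a piece of evidence (illustrative payload).
evidence = hashlib.sha256(b"case-042/photo-001.jpg").hexdigest()

# Three "miners", each assigned a disjoint slice of the nonce space.
ranges = [(0, 40_000), (40_000, 80_000), (80_000, 120_000)]
for start, end in ranges:
    found = mine_range(evidence, start, end, d=4)
    if found:
        nonce, digest = found
        print(f"block sealed with nonce {nonce}: {digest}")
        break
```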
20 pages, 1606 KiB  
Article
Brain Tumour Segmentation Using Choquet Integrals and Coalition Game
by Makhlouf Derdour, Mohammed El Bachir Yahiaoui, Moustafa Sadek Kahil, Mohamed Gasmi and Mohamed Chahine Ghanem
Information 2025, 16(7), 615; https://doi.org/10.3390/info16070615 - 17 Jul 2025
Viewed by 349
Abstract
Artificial Intelligence (AI) and computer-aided diagnosis (CAD) have revolutionised various aspects of modern life, particularly in the medical domain. These technologies enable efficient solutions for complex challenges, such as accurately segmenting brain tumour regions, which significantly aid medical professionals in monitoring and treating patients. This research focuses on segmenting glioma brain tumour lesions in MRI images by analysing them at the pixel level. The aim is to develop a deep learning-based approach that enables ensemble learning to achieve precise and consistent segmentation of brain tumours. While many studies have explored ensemble learning techniques in this area, most rely on aggregation functions like the Weighted Arithmetic Mean (WAM) without accounting for the interdependencies between classifier subsets. To address this limitation, the Choquet integral is employed for ensemble learning, along with a novel evaluation framework for fuzzy measures. This framework integrates coalition game theory, information theory, and Lambda fuzzy approximation. Three distinct fuzzy measure sets are computed using different weighting strategies informed by these theories. Based on these measures, three Choquet integrals are calculated for segmenting different components of brain lesions, and their outputs are subsequently combined. The BraTS-2020 online validation dataset is used to validate the proposed approach. Results demonstrate superior performance compared with several recent methods, achieving Dice Similarity Coefficients of 0.896, 0.851, and 0.792 and 95% Hausdorff distances of 5.96 mm, 6.65 mm, and 20.74 mm for the whole tumour, tumour core, and enhancing tumour core, respectively. Full article
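For readers unfamiliar with the aggregation operator, here is a minimal discrete Choquet integral over three classifier scores. The fuzzy measure values are invented and merely illustrate how a coalition of classifiers can be weighted beyond its additive share, which is what distinguishes this operator from a weighted arithmetic mean.

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of per-classifier scores with respect to a
    fuzzy measure mu: a dict mapping frozensets of classifier indices to
    [0, 1], with mu(empty set) = 0 and mu(full set) = 1."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])  # ascending
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])   # classifiers scoring >= scores[i]
        total += (scores[i] - prev) * mu[coalition]
        prev = scores[i]
    return total

# Three classifiers voting "tumour" for one pixel; the measure rewards the
# {0, 2} coalition beyond its additive share (all values invented).
mu = {
    frozenset(): 0.0,
    frozenset({0}): 0.35, frozenset({1}): 0.30, frozenset({2}): 0.35,
    frozenset({0, 1}): 0.60, frozenset({0, 2}): 0.85, frozenset({1, 2}): 0.60,
    frozenset({0, 1, 2}): 1.0,
}
print(choquet_integral([0.9, 0.4, 0.7], mu))   # 0.725
```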
24 pages, 2281 KiB  
Article
Multilayer Network Modeling for Brand Knowledge Discovery: Integrating TF-IDF and TextRank in Heterogeneous Semantic Space
by Peng Xu, Rixu Zang, Zongshui Wang and Zhuo Sun
Information 2025, 16(7), 614; https://doi.org/10.3390/info16070614 - 17 Jul 2025
Viewed by 302
Abstract
In the era of homogenized competition, brand knowledge has become a critical factor that influences consumer purchasing decisions. However, traditional single-layer network models fail to capture the multi-dimensional semantic relationships embedded in brand-related textual data. To address this gap, this study proposes a BKMN framework integrating TF-IDF and TextRank algorithms for comprehensive brand knowledge discovery. By analyzing 19,875 consumer reviews of a mobile phone brand from the JD website, we constructed a tri-layer network comprising TF-IDF-derived keywords, TextRank-derived keywords, and their overlapping nodes. The model incorporates co-occurrence matrices and centrality metrics (degree, closeness, betweenness, eigenvector) to identify semantic hubs and interlayer associations. The results reveal that consumers prioritize attributes such as "camera performance", "operational speed", "screen quality", and "battery life". Notably, the overlap layer exhibits the highest node centrality, indicating convergent consumer focus across algorithms. The network demonstrates small-world characteristics (average path length = 1.627) with strong clustering (average clustering coefficient = 0.848), reflecting cohesive consumer discourse around key features. This study also proposes the Mul-LSTM model for sentiment analysis of reviews, achieving 93% sentiment classification accuracy and revealing that consumers hold predominantly positive attitudes towards the brand's cell phones, which provides a quantitative basis for enterprises to understand users' emotional tendencies and optimize brand word-of-mouth management. This research advances brand knowledge modeling by synergizing heterogeneous algorithms and multilayer network analysis. Its practical implications include enabling enterprises to pinpoint competitive differentiators and optimize marketing strategies. Future work could extend the framework to incorporate sentiment dynamics and cross-domain applications in smart home or cosmetic industries. Full article
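A toy sketch of the tri-layer construction: TF-IDF keywords, TextRank keywords (PageRank over a word co-occurrence graph), and their overlap. The three reviews stand in for the study's 19,875, and the graph construction here is deliberately simplified.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

reviews = [
    "camera quality is excellent and battery life is long",
    "fast operational speed but battery drains quickly",
    "screen quality and camera performance are impressive",
]  # invented stand-ins for the JD review corpus

# Layer 1: top terms by summed TF-IDF weight.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(reviews)
weights = X.sum(axis=0).A1
terms = vec.get_feature_names_out()
tfidf_layer = {t for t, w in sorted(zip(terms, weights), key=lambda p: -p[1])[:5]}

# Layer 2: TextRank = PageRank over a co-occurrence graph
# (words co-occurring within the same review are linked).
vocab = set(terms)
g = nx.Graph()
for doc in reviews:
    tokens = [t for t in doc.split() if t in vocab]
    g.add_edges_from((a, b) for i, a in enumerate(tokens) for b in tokens[i + 1:] if a != b)
ranks = nx.pagerank(g)
textrank_layer = {t for t, _ in sorted(ranks.items(), key=lambda p: -p[1])[:5]}

# Overlap layer: the convergent keywords both algorithms agree on.
print(tfidf_layer & textrank_layer)
```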
29 pages, 758 KiB  
Article
Value Co-Creation for E-Government Services in Small Island Developing Nations: A Case Study
by Wilford Gibson Lol, Krassie Petrova and Sarita Pais
Information 2025, 16(7), 613; https://doi.org/10.3390/info16070613 - 17 Jul 2025
Viewed by 323
Abstract
The adoption of e-government services in Small Island Developing Nations (SIDNs) aims to enhance public service efficiency, inclusiveness, and quality. However, e-government service development in SIDNs faces some significant constraints, including limited resources, geographical isolation, low digital literacy levels, and inadequate technological infrastructure. This study investigates value co-creation approaches in e-government service, aiming to identify specific value co-creation processes and methods to support sustainable e-government initiatives in SIDN settings. The study applies a qualitative approach; based on the thematic analysis of interviews with government stakeholders, it identifies contextual factors and conditions that influence e-government value co-creation processes in SIDNs and strategies for sustainable e-government service value co-creation. This study contributes a value co-creation framework that applies participatory design, agile development, collaborative governance, socio-technical thinking, and technology adaptation as methods for the design and implementation of flexible and inclusive e-government services that are responsive to local needs, resilient to challenges, and sustainable over time. The framework can be used by policymakers and practitioners to facilitate sustainable digital transformation in SIDNs through collaborative governance, active participation, and civic engagement with innovative technologies. Full article
(This article belongs to the Section Information Applications)
17 pages, 497 KiB  
Article
Generative Data Modelling for Diverse Populations in Africa: Insights from South Africa
by Sally Sonia Simmons, John Elvis Hagan, Jr. and Thomas Schack
Information 2025, 16(7), 612; https://doi.org/10.3390/info16070612 - 17 Jul 2025
Viewed by 359
Abstract
Studies on the demography and health of racially diverse African populations are scarce, particularly due to lingering data challenges. Generative data modelling has emerged as a valuable solution to this burden. The study, therefore, examined the efficacy of Conditional Tabular GAN (CTGAN), CopulaGAN, and Tabular Variational Autoencoder (TVAE) for generating synthetic but realistic demographic and health data. This study employed the World Health Organisation Study on global AGEing and adult health (SAGE) Wave 1 South African data (n = 4227). Information missing from SAGE Wave 1, including demographic (e.g., race, age) and health (e.g., hypertension, blood pressure) indicators, was imputed using Generative Adversarial Imputation Nets (GAIN). CopulaGAN, CTGAN, and TVAE, sourced from the sdv 1.24.1 Python library, generated 104,227 synthetic records based on the SAGE data constituents. The outcomes were assessed with similarity and machine learning (XGBoost) augmentation metrics (sourced from the sdmetrics 0.21.0 Python library), including column shapes and overall and precision ratio scores. Generally, the GAIN imputations resulted in data with properties comparable to the original and with no missing information. CTGAN's overall quality of performance (89.20%) was above that of TVAE (86.50%) and CopulaGAN (88.45%). These findings underscore the usefulness of generative data modelling in addressing data quality challenges in diverse populations to enhance actionable health research and policy implementation. Full article
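The sdv library the authors cite exposes CTGAN roughly as below (sdv 1.x API; version details may differ). The input file, inferred schema, and epoch count are assumptions for illustration.

```python
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import CTGANSynthesizer
from sdv.evaluation.single_table import evaluate_quality

# Assumed input: a SAGE-like table with demographic and health columns
# (the file name is invented).
real = pd.read_csv("sage_wave1_imputed.csv")

metadata = SingleTableMetadata()
metadata.detect_from_dataframe(real)          # infer column types from the data

synth = CTGANSynthesizer(metadata, epochs=300)
synth.fit(real)
synthetic = synth.sample(num_rows=104_227)    # record count used in the study

# Similarity report (column shapes etc.), akin to the sdmetrics scores cited.
report = evaluate_quality(real, synthetic, metadata)
synthetic.to_csv("sage_synthetic.csv", index=False)
```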
17 pages, 2533 KiB  
Article
Oscillator-Based Processing Unit for Formant Recognition
by Tamás Rudner-Halász, Wolfgang Porod and Gyorgy Csaba
Information 2025, 16(7), 611; https://doi.org/10.3390/info16070611 - 16 Jul 2025
Viewed by 233
Abstract
Oscillatory neural networks have so far been successfully applied to a number of computing problems, such as associative memories, or to handle computationally hard tasks. In this paper, we show how to use oscillators to process time-dependent waveforms with minimal or no preprocessing. Since preprocessing and first-layer processing are often the most power-hungry steps in neural networks, our findings may open new doors to simple and power-efficient edge-AI devices. Full article
(This article belongs to the Special Issue Neuromorphic Engineering and Machine Learning)
18 pages, 957 KiB  
Article
CHTopo: A Multi-Source Large-Scale Chinese Toponym Annotation Corpus
by Peng Ye, Yujin Jiang and Yadi Wang
Information 2025, 16(7), 610; https://doi.org/10.3390/info16070610 - 16 Jul 2025
Viewed by 414
Abstract
Toponyms are fundamental geographical resources characterized by their spatial attributes, distinct from general nouns. While natural language provides rich toponymic data beyond traditional surveying methods, its qualitative ambiguity and inherent uncertainty challenge systematic extraction. Traditional toponym recognition methods based on part-of-speech tagging only focus on the surface-level features of words, failing to effectively handle complex scenarios such as alias nesting, metonymy ambiguity, and mixed punctuation. This leads to the loss of toponym semantic integrity and deviations in geographic entity recognition. This study proposes a set of Chinese toponym annotation specifications that integrate spatial semantics. By leveraging the XML markup language, it deeply combines the spatial location characteristics of toponyms with linguistic features, and designs fine-grained annotation rules to address the limitations of traditional methods in semantic integrity and geographic entity recognition. On this basis, by integrating multi-source corpora from the Encyclopedia of China: Chinese Geography and People’s Daily, a large-scale Chinese toponym annotation corpus (CHTopo) covering five major categories of toponyms has been constructed. The performance of this annotated corpus was evaluated through toponym recognition, exploring the construction methods of a large-scale, diversified, and high-coverage Chinese toponym annotated corpus from the perspectives of applicability and practicality. CHTopo is conducive to providing foundational support for geographic information extraction, spatial knowledge graphs, and geoparsing research, bridging linguistic and geospatial intelligence. Full article
(This article belongs to the Special Issue Text Mining: Challenges, Algorithms, Tools and Applications)
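A sketch of what consuming a fine-grained XML toponym annotation might look like. The tag and attribute names below are invented for illustration, not CHTopo's actual specification.

```python
import xml.etree.ElementTree as ET

# Hypothetical annotation combining spatial attributes with a nested alias,
# illustrating the kind of structure XML markup makes possible.
sample = """
<sentence>
  <toponym category="administrative" lat="39.9042" lon="116.4074">
    北京 <alias>京</alias>
  </toponym>
  is the capital.
</sentence>
"""

root = ET.fromstring(sample)
for top in root.iter("toponym"):
    name = (top.text or "").strip()
    print(name, top.get("category"), top.get("lat"), top.get("lon"))
    for alias in top.iter("alias"):
        print("  alias:", alias.text)   # nested alias kept with its parent toponym
```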
24 pages, 1605 KiB  
Article
Quantum-Secure Coherent Optical Networking for Advanced Infrastructures in Industry 4.0
by Ofir Joseph and Itzhak Aviv
Information 2025, 16(7), 609; https://doi.org/10.3390/info16070609 - 15 Jul 2025
Viewed by 533
Abstract
Modern industrial ecosystems, particularly those embracing Industry 4.0, increasingly depend on coherent optical networks operating at 400 Gbps and beyond. These high-capacity infrastructures, coupled with advanced digital signal processing and phase-sensitive detection, enable real-time data exchange for automated manufacturing, robotics, and interconnected factory systems. However, they introduce multilayer security challenges—ranging from hardware synchronization gaps to protocol overhead manipulation. Moreover, the rise of large-scale quantum computing intensifies these threats by potentially breaking classical key exchange protocols and enabling the future decryption of stored ciphertext. In this paper, we present a systematic vulnerability analysis of coherent optical networks that use OTU4 framing, Media Access Control Security (MACsec), and 400G ZR+ transceivers. Guided by established risk assessment methodologies, we uncover critical weaknesses affecting management plane interfaces (e.g., MDIO and I2C) and overhead fields (e.g., Trail Trace Identifier, Bit Interleaved Parity). To mitigate these risks while preserving the robust data throughput and low-latency demands of industrial automation, we propose a post-quantum security framework that merges spectral phase masking with multi-homodyne coherent detection, strengthened by quantum key distribution for key management. This layered approach maintains backward compatibility with existing infrastructure and ensures forward secrecy against quantum-enabled adversaries. The evaluation results show a substantial reduction in exposure to timing-based exploits, overhead field abuses, and cryptographic compromise. By integrating quantum-safe measures at the optical layer, our solution provides a future-proof roadmap for network operators, hardware vendors, and Industry 4.0 stakeholders tasked with safeguarding next-generation manufacturing and engineering processes. Full article
50 pages, 9734 KiB  
Article
Efficient Hotspot Detection in Solar Panels via Computer Vision and Machine Learning
by Nayomi Fernando, Lasantha Seneviratne, Nisal Weerasinghe, Namal Rathnayake and Yukinobu Hoshino
Information 2025, 16(7), 608; https://doi.org/10.3390/info16070608 - 15 Jul 2025
Viewed by 798
Abstract
Solar power generation is rapidly emerging within renewable energy due to its cost-effectiveness and ease of deployment. However, improper inspection and maintenance lead to significant damage from unnoticed solar hotspots. Even with inspections, factors like shadows, dust, and shading cause localized heat, mimicking hotspot behavior. This study emphasizes interpretability and efficiency, identifying key predictive features through feature-level and what-if analysis. It evaluates model training and inference times to assess effectiveness in resource-limited environments, aiming to balance accuracy, generalization, and efficiency. Using Unmanned Aerial Vehicle (UAV)-acquired thermal images from five datasets, the study compares five Machine Learning (ML) models and five Deep Learning (DL) models. Explainable AI (XAI) techniques guide the analysis, with a particular focus on MPEG (Moving Picture Experts Group)-7 features for hotspot discrimination, supported by statistical validation. A Medium Gaussian SVM achieved the best trade-off, with 99.3% accuracy and an 18 s inference time. Feature analysis revealed blue chrominance as a strong early indicator of hotspot detection. Statistical validation across datasets confirmed the discriminative strength of MPEG-7 features. This study revisits the assumption that DL models are inherently superior, presenting an interpretable alternative for hotspot detection and highlighting the potential impact of domain mismatch. Model-level insight shows that both absolute and relative temperature variations are important in solar panel inspections. The relative decrease in "blueness" provides a crucial early indication of faults, especially in low-contrast thermal images where distinguishing normal warm areas from actual hotspots is difficult. Feature-level insight highlights how subtle changes in color composition, particularly reductions in blue components, serve as early indicators of developing anomalies. Full article
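A "Medium Gaussian SVM" corresponds in scikit-learn terms to an RBF-kernel SVC with a moderate kernel width. The sketch below trains one on fabricated patch features, with a blue-chrominance column echoing the study's finding; the feature values are synthetic stand-ins, not MPEG-7 descriptors.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Invented features per thermal patch: mean temperature, local contrast,
# and mean blue chrominance (Cb); hotspots run hotter and "less blue".
n = 400
X_normal = np.column_stack([rng.normal(35, 3, n), rng.normal(5, 1, n), rng.normal(120, 8, n)])
X_hot = np.column_stack([rng.normal(55, 5, n), rng.normal(9, 2, n), rng.normal(95, 8, n)])
X = np.vstack([X_normal, X_hot])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# RBF kernel with moderate gamma; scaling first keeps the kernel width meaningful.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")
```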
23 pages, 3820 KiB  
Article
A Fundamental Statistics Self-Learning Method with Python Programming for Data Science Implementations
by Prismahardi Aji Riyantoko, Nobuo Funabiki, Komang Candra Brata, Mustika Mentari, Aviolla Terza Damaliana and Dwi Arman Prasetya
Information 2025, 16(7), 607; https://doi.org/10.3390/info16070607 - 15 Jul 2025
Viewed by 490
Abstract
The increasing demand for data-driven decision making to maintain the innovations and competitiveness of organizations highlights the need for data science education across academia and industry. At its core is a solid understanding of statistics, which is necessary for conducting a thorough analysis of data and deriving valuable insights. Unfortunately, conventional statistics learning often lacks practice in real-world applications using computer programs, causing a separation between conceptual knowledge of statistics equations and hands-on skills. Integrating statistics learning into Python programming can offer an effective solution to this problem, as Python has become essential in data science implementations, with extensive and versatile libraries. In this paper, we present a self-learning method for fundamental statistics through Python programming for data science studies. Unlike conventional approaches, our method integrates three types of interactive problems—the element fill-in-blank problem (EFP), grammar-concept understanding problem (GUP), and value trace problem (VTP)—in the Programming Learning Assistant System (PLAS). This combination allows students to write code, understand concepts, and trace output values while obtaining instant feedback, so that they can improve retention, knowledge, and practical skills in learning statistics using Python programming. For evaluations, we generated 22 instances using source codes for fundamental statistics topics and assigned them to 40 first-year undergraduate students at UPN Veteran Jawa Timur, Indonesia. Statistical analysis methods were utilized to analyze the students' learning performances. The results show a significant difference (p < 0.05) between the students who solved our generated problems and those who did not, confirming that the method can effectively assist students in self-learning fundamental statistics using Python programming for data science implementations. Full article
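An illustration of what a value trace problem (VTP) instance might look like: the learner predicts each printed value before running the code. The dataset is invented; the statistics functions are standard NumPy/SciPy.

```python
import numpy as np
from scipy import stats

# VTP-style exercise: predict each printed value before executing.
scores = np.array([62, 71, 71, 75, 80, 84, 90])

print("mean:   ", np.mean(scores))                           # 76.142857...
print("median: ", np.median(scores))                         # 75.0
print("mode:   ", stats.mode(scores, keepdims=False).mode)   # 71
print("std:    ", np.std(scores, ddof=1))                    # sample standard deviation
```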
16 pages, 15700 KiB  
Article
Towards Reshaping Children’s Habits: Vitalia’s AR-Gamified Approach
by Vasileios Arampatzakis, Vasileios Sevetlidis, Vasiliki Derri, Milena Raffi and George Pavlidis
Information 2025, 16(7), 606; https://doi.org/10.3390/info16070606 - 15 Jul 2025
Viewed by 423
Abstract
This paper presents the design, development, and pilot deployment of Vitalia, an AR-gamified application targeting the formation of healthy habits in primary education children. Developed within the EU DUSE project, Vitalia integrates physical activity, nutritional education, and immersive storytelling into a gamified framework to promote sustained behavioral change. Grounded in evidence-based behavior change models and co-designed with health, nutrition, and physical activity experts, the system envisions high daily engagement rates and measurable knowledge improvements. The concept positions Vitalia as a scalable model for child-centric, ethically responsible digital health interventions, with the potential to be integrated into school curricula and public health strategies. Full article
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)
33 pages, 14272 KiB  
Article
Defly Compass Trend Analysis Methodology: Quantifying Trend Detection to Improve Foresight in Strategic Decision Making
by Mabel López Bordao, Antonia Ferrer Sapena, Carlos A. Reyes Pérez and Enrique A. Sánchez Pérez
Information 2025, 16(7), 605; https://doi.org/10.3390/info16070605 - 14 Jul 2025
Viewed by 478
Abstract
We present a new method for trend analysis that integrates traditional foresight techniques with advanced data processing and artificial intelligence. It addresses the challenge of analyzing large volumes of information while preserving expert insight. The hybrid methodology combines computational analysis with expert validation across four phases: literature review, information systematization, trend identification, and analysis. Tools like Voyant Tools 2.6.18 and NotebookLM are used for semantic and statistical exploration. Among them, we highlight the use of the Defly Compass tool, a natural language processing tool based on semantic projections and developed by our team. The method produces mixed results, including both conceptual conclusions and quantifiable, reproducible outcomes adaptable to diverse contexts. Comparative case studies in agriculture, education, and public health identified key patterns within and across sectors. Cross-domain validation revealed universal trends such as digital infrastructure, data integration, and equity. Designed for accessibility, the method enables small, non-specialized teams to combine computational tools with expert knowledge for strategic decision making in complex environments. Full article
15 pages, 632 KiB  
Article
Architecture of an Efficient Environment Management Platform for Experiential Cybersecurity Education
by David Arnold, John Ford and Jafar Saniie
Information 2025, 16(7), 604; https://doi.org/10.3390/info16070604 - 14 Jul 2025
Viewed by 384
Abstract
Testbeds are widely used in experiential learning, providing practical assessments and bridging classroom material with real-world applications. However, manually managing and provisioning student lab environments consumes significant preparation time for instructors. The growing demand for advanced technical skills, such as network administration and cybersecurity, is leading to larger class sizes. This stresses testbed resources and necessitates continuous design updates. To address these challenges, we designed an efficient Environment Management Platform (EMP). The EMP is composed of a set of 4 Command Line Interface scripts and a Web Interface for secure administration and bulk user operations. Based on our testing, the EMP significantly reduces setup time for student virtualized lab environments. Through a cybersecurity learning environment case study, we found that setup is completed in 15 s for each student, a 12.8-fold reduction compared to manual provisioning. When considering a class of 20 students, the EMP realizes a substantial saving of 62 min in system configuration time. Additionally, the software-based management and provisioning process ensures the accurate realization of lab environments, eliminating the errors commonly associated with manual configuration. This platform is applicable to many educational domains that rely on virtual machines for experiential learning. Full article
(This article belongs to the Special Issue Digital Systems in Higher Education)
35 pages, 6888 KiB  
Article
AirTrace-SA: Air Pollution Tracing for Source Attribution
by Wenchuan Zhao, Qi Zhang, Ting Shu and Xia Du
Information 2025, 16(7), 603; https://doi.org/10.3390/info16070603 - 13 Jul 2025
Viewed by 420
Abstract
Air pollution source tracing is vital for effective pollution prevention and control, yet traditional methods often require large amounts of manual data, have limited cross-regional generalizability, and present challenges in capturing complex pollutant interactions. This study introduces AirTrace-SA (Air Pollution Tracing for Source Attribution), a novel hybrid deep learning model designed for the accurate identification and quantification of air pollution sources. AirTrace-SA comprises three main components: a hierarchical feature extractor (HFE) that extracts multi-scale features from chemical components, a source association bridge (SAB) that links chemical features to pollution sources through a multi-step decision mechanism, and a source contribution quantifier (SCQ) based on the TabNet regressor for the precise prediction of source contributions. Evaluated on real air quality datasets from five cities (Lanzhou, Luoyang, Haikou, Urumqi, and Hangzhou), AirTrace-SA achieves an average R² of 0.88 (ranging from 0.84 to 0.94 across 10-fold cross-validation), an average mean absolute error (MAE) of 0.60 (ranging from 0.46 to 0.78 across five cities), and an average root mean square error (RMSE) of 1.06 (ranging from 0.51 to 1.62 across ten pollution sources). The model outperforms baseline models such as 1D CNN and LightGBM in terms of stability, accuracy, and cross-city generalization. Feature importance analysis identifies the main contributions of source categories, further improving interpretability. By reducing the reliance on labor-intensive data collection and providing scalable, high-precision source tracing, AirTrace-SA offers a powerful tool for environmental management that supports targeted emission reduction strategies and sustainable development. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining: Innovations in Big Data Analytics)
22 pages, 1199 KiB  
Article
Less Is More: Analyzing Text Abstraction Levels for Gender and Age Recognition Across Question-Answering Communities
by Alejandro Figueroa
Information 2025, 16(7), 602; https://doi.org/10.3390/info16070602 - 13 Jul 2025
Viewed by 229
Abstract
In social networks like community Question-Answering (cQA) services, members interact with each other by asking and answering each other's questions. This way they find counsel and solutions to very specific real-life situations. Thus, it is safe to say that community fellows log into this kind of social network with the goal of satisfying information needs that cannot be readily resolved via traditional web searches. And in order to expedite this process, these platforms also allow registered, and many times unregistered, internauts to browse their archives. As a means of encouraging fruitful interactions, these websites need to be efficient when displaying contextualized/personalized material and when connecting unresolved questions to people willing to help. Here, demographic factors (i.e., gender) together with frontier deep neural networks have proved to be instrumental in adequately overcoming these challenges. In fact, current approaches have demonstrated that it is perfectly plausible to achieve high gender classification rates by inspecting profile images or textual interactions. This work advances this body of knowledge by leveraging lexicalized dependency paths to control the level of abstraction across texts. Our qualitative results suggest that cost-efficient approaches exploit distilled frontier deep architectures (i.e., DistilRoBERTa) and coarse-grained semantic information embodied in the first three levels of the respective dependency tree. Our outcomes also indicate that relative/prepositional clauses conveying geographical locations, relationships, and finance yield a marginal contribution when they show up deep in dependency trees. Full article
(This article belongs to the Section Information Applications)
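The depth-of-abstraction idea can be sketched with spaCy by keeping only tokens in the first three levels of the dependency tree. The model name and example sentence are illustrative; this is not the author's pipeline.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes this model is installed

def tokens_up_to_depth(doc, max_depth=3):
    """Keep tokens within the first max_depth levels of the dependency
    tree, the coarse-grained abstraction the study found cost-efficient."""
    def depth(tok):
        d = 0
        while tok.head is not tok:   # in spaCy the root is its own head
            tok = tok.head
            d += 1
        return d
    return [t.text for t in doc if depth(t) < max_depth]

doc = nlp("My sister who lives in Paris asked a question about car loans yesterday.")
print(tokens_up_to_depth(doc))   # drops material deep inside the relative clause
```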
22 pages, 493 KiB  
Article
Improving Performance of Automatic Keyword Extraction (AKE) Methods Using PoS Tagging and Enhanced Semantic-Awareness
by Enes Altuncu, Jason R. C. Nurse, Yang Xu, Jie Guo and Shujun Li
Information 2025, 16(7), 601; https://doi.org/10.3390/info16070601 - 13 Jul 2025
Viewed by 436
Abstract
Automatic keyword extraction (AKE) has gained more importance with the increasing amount of digital textual data that modern computing systems process. It has various applications in information retrieval (IR) and natural language processing (NLP), including text summarisation, topic analysis and document indexing. This paper proposes a simple but effective post-processing-based universal approach to improving the performance of any AKE method, via an enhanced level of semantic-awareness supported by PoS tagging. To demonstrate the performance of the proposed approach, we considered word types retrieved from a PoS tagging step and two representative sources of semantic information—specialised terms defined in one or more context-dependent thesauri, and named entities in Wikipedia. The above three steps can simply be added to the end of any AKE method as part of a post-processor, which re-evaluates all candidate keywords following context-specific and semantic-aware criteria. For five state-of-the-art (SOTA) AKE methods, our experimental results with 17 selected datasets showed that the proposed approach improved their performance both consistently (up to 100% in terms of improved cases) and significantly (between 10.2% and 53.8%, with an average of 25.8%, in terms of F1-score and across all five methods), especially when all three enhancement steps are used. Our results have profound implications considering that the proposed approach can be easily applied to any AKE method with the standard output (candidate keywords and scores) and can easily be extended further. Full article
(This article belongs to the Special Issue Information Extraction and Language Discourse Processing)
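A minimal sketch of the PoS-tagging part of such a post-processor, using NLTK: re-evaluate any AKE method's candidate keywords and keep those matching noun/adjective patterns. The candidate list is invented, and the thesaurus and Wikipedia checks described above would slot in alongside the same filter.

```python
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_filter(candidates):
    """Post-process an AKE method's (keyword, score) output: keep only
    candidates whose tokens are all nouns or adjectives, a common
    keyword pattern."""
    kept = []
    for phrase, score in candidates:
        tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(phrase))]
        if all(tag.startswith(("NN", "JJ")) for tag in tags):
            kept.append((phrase, score))
    return kept

# Candidate keywords with scores, in the standard output format of AKE methods.
candidates = [("information retrieval", 0.91), ("is widely", 0.40), ("topic analysis", 0.77)]
print(pos_filter(candidates))   # the verb/adverb candidate is rejected
```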
24 pages, 939 KiB  
Review
Advances in Amazigh Language Technologies: A Comprehensive Survey Across Processing Domains
by Oussama Akallouch, Mohammed Akallouch and Khalid Fardousse
Information 2025, 16(7), 600; https://doi.org/10.3390/info16070600 - 13 Jul 2025
Viewed by 793
Abstract
The Amazigh language, spoken by millions across North Africa, presents unique computational challenges due to its complex morphological system, dialectal variation, and multiple writing systems. This survey examines technological advances over the past decade across four key domains: natural language processing, speech recognition, optical character recognition, and machine translation. We analyze the evolution from rule-based systems to advanced neural models, demonstrating how researchers have addressed resource constraints through innovative approaches that blend linguistic knowledge with machine learning. Our analysis reveals uneven progress across domains, with optical character recognition reaching high maturity levels while machine translation remains constrained by limited parallel data. Beyond technical metrics, we explore applications in education, cultural preservation, and digital accessibility, showing how these technologies enable Amazigh speakers to participate in the digital age. This work illustrates that advancing language technology for marginalized languages requires fundamentally different approaches that respect linguistic diversity while ensuring digital equity. Full article
37 pages, 618 KiB  
Systematic Review
Interaction, Artificial Intelligence, and Motivation in Children’s Speech Learning and Rehabilitation Through Digital Games: A Systematic Literature Review
by Chra Abdoulqadir and Fernando Loizides
Information 2025, 16(7), 599; https://doi.org/10.3390/info16070599 - 12 Jul 2025
Viewed by 733
Abstract
The integration of digital serious games into speech learning (rehabilitation) has demonstrated significant potential in enhancing accessibility and inclusivity for children with speech disabilities. This review of the state of the art examines the role of serious games, Artificial Intelligence (AI), and Natural Language Processing (NLP) in speech rehabilitation, with a particular focus on interaction modalities, engagement autonomy, and motivation. We have reviewed 45 selected studies. Our key findings show how intelligent tutoring systems, adaptive voice-based interfaces, and gamified speech interventions can empower children to engage in self-directed speech learning, reducing dependence on therapists and caregivers. The diversity of interaction modalities, including speech recognition, phoneme-based exercises, and multimodal feedback, demonstrates how AI and Assistive Technology (AT) can personalise learning experiences to accommodate diverse needs. Furthermore, the incorporation of gamification strategies, such as reward systems and adaptive difficulty levels, has been shown to enhance children’s motivation and long-term participation in speech rehabilitation. The gaps identified show that despite advancements, challenges remain in achieving universal accessibility, particularly regarding speech recognition accuracy, multilingual support, and accessibility for users with multiple disabilities. This review advocates for interdisciplinary collaboration across educational technology, special education, cognitive science, and human–computer interaction (HCI). Our work contributes to the ongoing discourse on lifelong inclusive education, reinforcing the potential of AI-driven serious games as transformative tools for bridging learning gaps and promoting speech rehabilitation beyond clinical environments. Full article
21 pages, 965 KiB  
Article
Emotional Responses to Racial Violence: Analyzing Sentiments and Emotions Among Black Women in Missouri
by Ivy Smith and Sheretta T. Butler-Barnes
Information 2025, 16(7), 598; https://doi.org/10.3390/info16070598 - 12 Jul 2025
Viewed by 414
Abstract
This study examines the emotional responses of Black women in Missouri regarding incidents of racial violence in the United States. Grounded in an analysis of self-reported emotions, this study explores how Black women (n = 384, Mage = 37) express their emotional experiences in response to racial violence. Utilizing the Multiple Affect Adjective Checklist-Revised (MAACL-R), sentiment analysis was used to assess the overall emotional tone of participants’ responses, while emotion analysis was used to identify specific emotions expressed. The findings highlight the complexities of Black women’s emotional responses, considering factors such as coping mechanisms, racial identity beliefs, spirituality and religiosity, and resilience and strength. By applying computational methods to analyze these emotions, this study reveals how racial violence shapes sentiment and emotional expression patterns. Furthermore, it highlights the significance of acknowledging the complex ways Black women navigate and process racial violence. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
16 pages, 2741 KiB  
Article
EVOCA: Explainable Verification of Claims by Graph Alignment
by Carmela De Felice, Carmelo Fabio Longo, Misael Mongiovì, Daniele Francesco Santamaria and Giusy Giulia Tuccari
Information 2025, 16(7), 597; https://doi.org/10.3390/info16070597 - 11 Jul 2025
Viewed by 402
Abstract
The paper introduces EVOCA—Explainable Verification Of Claims by Graph Alignment—a hybrid approach that combines NLP (Natural Language Processing) techniques with the structural advantages of knowledge graphs to manage and reduce the amount of evidence required to evaluate statements. The approach leverages the explicit and interpretable structure of semantic graphs, which naturally represent the semantic structure of a sentence—or a set of sentences—and explicitly encode the relationships among different concepts, thereby facilitating the extraction and manipulation of relevant information. The primary objective of the proposed tool is to condense the evidence into a short sentence that preserves only the salient and relevant information of the target claim. This process eliminates superfluous and redundant information, which could affect the performance of the subsequent verification task, and provides useful information to explain the outcome. To achieve this, EVOCA generates a sub-graph in AMR (Abstract Meaning Representation) representing the tokens of the claim–evidence pair that exhibit high semantic similarity. The structured representation offered by the AMR graph not only aids in identifying the most relevant information but also improves the interpretability of the results. The resulting sub-graph is converted back into natural language with the SPRING AMR tool, producing a concise but meaning-rich "sub-evidence" sentence. The output can be processed by lightweight language models to determine whether the evidence supports, contradicts, or is neutral about the claim. The approach is tested on the 4297 sentence pairs of the Climate-BERT-fact-checking dataset, and the promising results are discussed. Full article
16 pages, 1730 KiB  
Article
Retail Demand Forecasting: A Comparative Analysis of Deep Neural Networks and the Proposal of LSTMixer, a Linear Model Extension
by Georgios Theodoridis and Athanasios Tsadiras
Information 2025, 16(7), 596; https://doi.org/10.3390/info16070596 - 11 Jul 2025
Viewed by 887
Abstract
Accurate retail demand forecasting is integral to the operational efficiency of any retail business. Since demand evolves over time, demand prediction is a time-series forecasting problem, which may be addressed in a univariate manner via statistical methods and simple machine learning approaches, or in a multivariate fashion using generic deep learning forecasters that are well established in other fields. This study analyzes, optimizes, trains, and tests such forecasters, namely the Temporal Fusion Transformer and the Temporal Convolutional Network, alongside the recently proposed Time-Series Mixer, to accurately forecast retail demand on a dataset of historical sales in 45 stores with their accompanying features. Moreover, the present work proposes a novel extension of the Time-Series Mixer architecture, the LSTMixer, which utilizes an additional Long Short-Term Memory block to achieve better forecasts. The results indicate that the proposed LSTMixer model is the best predictor, whilst all the other aforementioned models outperform common statistical and machine learning methods. An ablation test is also performed to confirm that the extension within the LSTMixer design is responsible for the improved results. The findings promote the use of deep learning models for retail demand forecasting and establish LSTMixer as a viable and efficient option. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) for Economics and Business Management)
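The abstract specifies only that LSTMixer adds a Long Short-Term Memory block to the Time-Series Mixer; the exact wiring is defined in the paper. The PyTorch sketch below shows one plausible arrangement, with a single TSMixer-style block (time-mixing and feature-mixing linear layers) feeding an LSTM whose final hidden state drives the forecast head; all layer sizes and the wiring itself are illustrative assumptions.

```python
# Plausible sketch of the "Time-Series Mixer plus LSTM" idea: one
# TSMixer-style block followed by an LSTM whose last hidden state feeds the
# forecast head. Sizes and wiring are assumptions; the paper defines the
# real LSTMixer.
import torch
import torch.nn as nn

class LSTMixerSketch(nn.Module):
    def __init__(self, seq_len: int, n_features: int, horizon: int, hidden: int = 64):
        super().__init__()
        self.time_mix = nn.Linear(seq_len, seq_len)        # mixes across time steps
        self.feat_mix = nn.Linear(n_features, n_features)  # mixes across features
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features)
        x = self.time_mix(x.transpose(1, 2)).transpose(1, 2)  # time mixing
        x = torch.relu(self.feat_mix(x))                      # feature mixing
        _, (h, _) = self.lstm(x)                              # LSTM block extension
        return self.head(h[-1])                               # (batch, horizon)

model = LSTMixerSketch(seq_len=52, n_features=8, horizon=4)
demand = torch.randn(16, 52, 8)   # a year of weekly sales features, 16 series
print(model(demand).shape)        # torch.Size([16, 4])
```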
25 pages, 2297 KiB  
Article
Detecting Fake News in Urdu Language Using Machine Learning, Deep Learning, and Large Language Model-Based Approaches
by Muhammad Shoaib Farooq, Syed Muhammad Asadullah Gilani, Muhammad Faraz Manzoor and Momina Shaheen
Information 2025, 16(7), 595; https://doi.org/10.3390/info16070595 - 10 Jul 2025
Viewed by 660
Abstract
Fake news is false or misleading information that looks like real news and spreads through traditional and social media. It has a substantial impact on social life, especially in politics. In Pakistan, where Urdu is the main language, detecting fake news in Urdu is difficult because few effective systems exist for the language. This study addresses the problem by creating a detailed pipeline and training models using machine learning, deep learning, and large language models (LLMs). The research uses document- and class-level features to detect fake news in Urdu. Different models were tested, including machine learning models such as Naïve Bayes and the Support Vector Machine (SVM), as well as deep learning models such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, which used embedding techniques. The study also used advanced models such as BERT and GPT to improve the detection process. These models were first evaluated on the Bend-the-Truth dataset, where CNN achieved an F1 score of 72%, Naïve Bayes scored 78%, and the BERT Transformer achieved the highest F1 score of 79%. To further validate the approach, the models were tested on a more diverse dataset, Ax-to-Grind, where both SVM and LSTM achieved an F1 score of 89%, while BERT outperformed them with an F1 score of 93%. Full article
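Of the models listed, the classical TF-IDF-plus-SVM baseline is the simplest to reproduce. The scikit-learn sketch below shows that pipeline end to end with placeholder data; the Bend-the-Truth and Ax-to-Grind corpora, and any Urdu-specific preprocessing, must be supplied separately.

```python
# Minimal sketch of one classical baseline from the study: TF-IDF features
# feeding a linear SVM, evaluated with F1. The toy strings below stand in
# for the Urdu corpora, which must be obtained separately.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder documents and labels (1 = fake, 0 = real).
texts = [f"sample fake story {i}" for i in range(4)] + \
        [f"sample real story {i}" for i in range(4)]
labels = [1] * 4 + [0] * 4

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, stratify=labels, random_state=0
)
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(X_train, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test)))
```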
29 pages, 901 KiB  
Article
Requirement Analysis for a Qualifications-Based Learning Model Platform Using Quantitative and Qualitative Methods
by Simon Wetzel, Jennifer Roeder, Adrian Vogler and Matthias Hemmje
Information 2025, 16(7), 594; https://doi.org/10.3390/info16070594 - 10 Jul 2025
Viewed by 307
Abstract
Continuous learning is fundamental to professional and personal growth, so robust digital solutions are required to meet adaptable and scalable educational demands. The Qualifications-Based Learning Model (QBLM) is a framework for qualifications-based learning that defines a software architecture and data models. However, the existing implementations of QBLM lack horizontal scalability and flexibility, resulting in tightly coupled components that limit the adaptability of the model to the evolving needs of learners and institutions. Therefore, a new Qualifications-Based Learning Platform (QBLM Platform) is planned, which extends the QBLM approach with a modular software architecture that enables flexible service integration, scalability, and operational resilience. Designing such a platform requires a requirements analysis. By employing both quantitative and qualitative research methods, comprising a survey and expert interviews, the requirements for a QBLM Platform are identified. The results of the research are used to define the essential features and characteristics not only of the QBLM Platform but also of learning platforms in general. Full article
17 pages, 1937 KiB  
Article
Hybrid Deep Learning Model for Improved Glaucoma Diagnostic Accuracy
by Nahum Flores, José La Rosa, Sebastian Tuesta, Luis Izquierdo, María Henriquez and David Mauricio
Information 2025, 16(7), 593; https://doi.org/10.3390/info16070593 - 10 Jul 2025
Viewed by 404
Abstract
Glaucoma is an irreversible neurodegenerative disease that affects the optic nerve, leading to partial or complete vision loss. Early and accurate detection is crucial to prevent vision impairment, which necessitates the development of highly precise diagnostic tools. Deep learning (DL) has emerged as a promising approach for glaucoma diagnosis, with models trained on datasets of fundus images. To improve detection accuracy, we propose a hybrid model for glaucoma detection that combines multiple DL models with two fine-tuning strategies and uses a majority voting scheme to determine the final prediction. In experiments, the hybrid model achieved a detection accuracy of 96.55%, a sensitivity of 98.84%, and a specificity of 94.32%. Integrating datasets was found to improve performance compared to using them separately, even with transfer learning. Compared to individual DL models, the hybrid model achieved a 20.69% improvement in accuracy over the best individual model on a single dataset, a 13.22% improvement with transfer learning across all datasets, and a 1.72% improvement when applied to all datasets. These results demonstrate the potential of hybrid DL models to detect glaucoma more accurately than individual models. Full article
(This article belongs to the Section Artificial Intelligence)
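The combination rule described (majority voting over the predictions of several fine-tuned models) reduces to a few lines of NumPy. In the sketch below, each row holds one model's binary votes (1 = glaucoma) for a batch of fundus images; the three models and the sample votes are illustrative assumptions.

```python
# Sketch of the majority-voting scheme: each fine-tuned model casts a binary
# vote (1 = glaucoma, 0 = healthy) per fundus image, and the ensemble
# returns the majority label. Models and votes here are illustrative.
import numpy as np

def majority_vote(votes: np.ndarray) -> np.ndarray:
    """votes: (n_models, n_images) array of 0/1 predictions -> (n_images,) labels."""
    return (votes.sum(axis=0) > votes.shape[0] / 2).astype(int)

votes = np.array([
    [1, 0, 1, 1],   # model A predictions for four images
    [1, 0, 0, 1],   # model B
    [0, 0, 1, 1],   # model C
])
print(majority_vote(votes))  # [1 0 1 1]
```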