Information, Volume 16, Issue 6 (June 2025) – 90 articles

Cover Story: This study explores how Large Language Models (LLMs) are reshaping education by enhancing accessibility, inclusivity, and personalized learning. By analyzing AI’s transformative potential, the research demonstrates how LLMs break down barriers tied to language diversity, learning disabilities, and socioeconomic disparities. These intelligent systems adapt content dynamically, addressing both educational and emotional needs while preserving the vital role of educators. The study advocates for a balanced approach—leveraging LLMs as pedagogical aids under teacher guidance—ensuring ethical oversight, cultural sensitivity, and emotional support. While acknowledging challenges like data privacy and bias, it underscores the necessity of responsible AI integration to democratize education and foster equity.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for published papers, which are available in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
50 pages, 3777 KiB  
Article
Intelligent Teaching Recommendation Model for Practical Discussion Course of Higher Education Based on Naive Bayes Machine Learning and Improved k-NN Data Mining Algorithm
by Xiao Zhou, Ling Guo, Rui Li, Ling Liu and Juan Pan
Information 2025, 16(6), 512; https://doi.org/10.3390/info16060512 - 19 Jun 2025
Abstract
Aiming at the existing problems in practical teaching in higher education, we construct an intelligent teaching recommendation model for a higher education practical discussion course based on naive Bayes machine learning and an improved k-NN data mining algorithm. Firstly, we establish the naive Bayes machine learning algorithm to achieve accurate classification of the students in the class and then implement student grouping based on this accurate classification. Then, relying on the student grouping, we use the matching features between the students’ interest vector and the practical topic vector to construct an intelligent teaching recommendation model based on an improved k-NN data mining algorithm, in which the optimal complete binary encoding tree for the discussion topic is modeled. Based on the encoding tree model, an improved k-NN algorithm recommendation model is established to match the student group interests and recommend discussion topics. The experimental results prove that our proposed recommendation algorithm (PRA) can accurately recommend discussion topics for different student groups, match the interests of each group to the greatest extent, and improve the students’ enthusiasm for participating in practical discussions. As for the control groups of the user-based collaborative filtering recommendation algorithm (UCFA) and the item-based collaborative filtering recommendation algorithm (ICFA), under the experimental conditions of the single dataset and multiple datasets, the PRA has higher accuracy, recall rate, precision, and F1 value than the UCFA and ICFA and has better recommendation performance and robustness.
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
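To make the matching step above concrete, here is a minimal, hypothetical sketch of interest-to-topic matching by cosine similarity; the group vector, topic names, and weights are invented for illustration, and the paper's naive Bayes grouping and encoding-tree k-NN stages are not reproduced:

```python
import math

def cosine(u, v):
    # cosine similarity between an interest vector and a topic vector
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def recommend(group_interest, topics, k=2):
    # rank candidate discussion topics by similarity to the group's interests
    ranked = sorted(topics, key=lambda name: cosine(group_interest, topics[name]), reverse=True)
    return ranked[:k]

# hypothetical group interest vector and topic feature vectors
group = [0.9, 0.1, 0.4]
topics = {"databases": [1.0, 0.0, 0.3], "networks": [0.1, 0.9, 0.2], "ai": [0.7, 0.2, 0.6]}
suggested = recommend(group, topics)
```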
25 pages, 24372 KiB  
Article
Data-Driven Machine Learning-Informed Framework for Model Predictive Control in Vehicles
by Edgar Amalyan and Shahram Latifi
Information 2025, 16(6), 511; https://doi.org/10.3390/info16060511 - 19 Jun 2025
Abstract
A machine learning framework is developed to interpret vehicle subsystem status from sensor data, providing actionable insights for adaptive control systems. Using the vehicle’s suspension as a case study, inertial data are collected from driving maneuvers, including braking and cornering, to seed a prototype XGBoost classifier. The classifier then pseudo-labels a larger exemplar dataset acquired from street and racetrack sessions, which is used to train an inference model capable of robust generalization across both regular and performance driving. An overlapping sliding-window grading approach with reverse exponential weighting smooths transient fluctuations while preserving responsiveness. The resulting real-time semantic mode predictions accurately describe the vehicle’s current dynamics and can inform a model predictive control system that can adjust suspension parameters and update internal constraints for improved performance, ride comfort, and component longevity. The methodology extends to other components, such as braking systems, offering a scalable path toward fully self-optimizing vehicle control in both conventional and autonomous platforms.
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
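The overlapping sliding-window grading with reverse exponential weighting can be sketched as a weighted vote; the mode labels, window size, and decay rate below are assumptions for illustration, not the authors' parameters:

```python
from collections import defaultdict

def smooth_modes(preds, window=5, decay=0.8):
    # Weighted vote over an overlapping sliding window: the newest frame gets
    # weight 1 and older frames decay geometrically ("reverse exponential"
    # weighting), so a one-frame misclassification is outvoted while a
    # sustained mode change is still picked up within a few frames.
    out = []
    for t in range(len(preds)):
        votes = defaultdict(float)
        for i in range(max(0, t - window + 1), t + 1):
            votes[preds[i]] += decay ** (t - i)
        out.append(max(votes, key=votes.get))
    return out

raw = ["cruise", "cruise", "brake", "cruise", "cruise", "corner", "corner"]
smoothed = smooth_modes(raw)  # the spurious "brake" frame is voted away
```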
1 pages, 128 KiB  
Correction
Correction: Rosero-Montalvo et al. A New Data-Preprocessing-Related Taxonomy of Sensors for IoT Applications. Information 2022, 13, 241
by Paul D. Rosero-Montalvo, Vivian F. López-Batista and Diego H. Peluffo-Ordóñez
Information 2025, 16(6), 510; https://doi.org/10.3390/info16060510 - 19 Jun 2025
Abstract
In the published publication [...]
(This article belongs to the Special Issue Pervasive Computing in IoT)
21 pages, 326 KiB  
Article
Incremental Weak Subgradient Methods for Non-Smooth Non-Convex Optimization Problems
by Narges Araboljadidi and Valentina De Simone
Information 2025, 16(6), 509; https://doi.org/10.3390/info16060509 - 19 Jun 2025
Abstract
Non-smooth, non-convex optimization problems frequently arise in modern machine learning applications, yet solving them efficiently remains a challenge. This paper addresses the minimization of functions of the form f(x) = ∑_{i=1}^{m} f_i(x), where each component is Lipschitz continuous but potentially non-smooth and non-convex. We extend the incremental subgradient method by incorporating weak subgradients, resulting in a framework better suited for non-convex objectives. We provide a comprehensive convergence analysis for three step size strategies: constant, diminishing, and a novel dynamic approach. Our theoretical results show that all variants converge to a neighborhood of the optimal solution, with the size of this neighborhood governed by the weak subgradient parameters. Numerical experiments on classification tasks with non-convex regularization, evaluated on the Breast Cancer Wisconsin dataset, demonstrate the effectiveness of the proposed approach. In particular, the dynamic step size method achieves superior practical performance, outperforming both classical and diminishing step size variants in terms of accuracy and convergence speed. These results position the incremental weak subgradient framework as a promising tool for scalable and efficient optimization in machine learning settings involving non-convex objectives.
(This article belongs to the Special Issue Emerging Research in Optimization and Machine Learning)
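As a toy illustration of the incremental subgradient update with a diminishing step size: the sketch below minimizes the convex non-smooth function f(x) = ∑_i |x − a_i|, where the weak subgradient coincides with the ordinary subgradient. The paper's framework additionally handles non-convex components, which this sketch does not exercise:

```python
import math

def sign(v):
    return (v > 0) - (v < 0)

def incremental_subgradient(anchors, x0=0.0, iters=200, a0=1.0):
    # minimize f(x) = sum_i |x - a_i| by cycling through the components,
    # taking one subgradient step per component (sign(x - a) is a subgradient
    # of |x - a|), with a diminishing step size a0 / sqrt(k + 1)
    x = x0
    for k in range(iters):
        step = a0 / math.sqrt(k + 1)
        for a in anchors:
            x -= step * sign(x - a)
    return x

# the minimizer of sum_i |x - a_i| is the median of the anchors (here 2.0)
x_star = incremental_subgradient([1.0, 2.0, 9.0])
```

The iterate oscillates around the minimizer with an amplitude on the order of the current step size, matching the "converges to a neighborhood" behavior described in the abstract.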
24 pages, 2001 KiB  
Article
Reliable Low-Latency Multicasting in MANET: A DTN7-Driven Pub/Sub Framework Optimizing Delivery Rate and Throughput
by Xinwei Liu and Satoshi Fujita
Information 2025, 16(6), 508; https://doi.org/10.3390/info16060508 - 18 Jun 2025
Abstract
This paper addresses the challenges of multicasting in Mobile Ad Hoc Networks (MANETs), where communication relies exclusively on direct interactions between mobile nodes without the support of fixed infrastructure. In such networks, efficient information dissemination is critical, particularly in scenarios where an event detected by one node must be reliably communicated to a designated subset of nodes. The highly dynamic nature of MANET, characterized by frequent topology changes and unpredictable connectivity, poses significant challenges to stable and efficient multicasting. To address these issues, we adopt a Publish/Subscribe (Pub/Sub) model that utilizes brokers as intermediaries for information dissemination. However, ensuring the robustness of broker-based multicasting in a highly mobile environment requires novel strategies to mitigate the effects of frequent disconnections and mobility-induced disruptions. To this end, we propose a framework based on three key principles: (1) leveraging the Disruption-Tolerant Networking Implementations of the Bundle Protocol 7 (DTN7) at the network layer to sustain message delivery even in the presence of intermittent connectivity and high node mobility; (2) dynamically generating broker replicas to ensure that broker functionality persists despite sudden node failures or disconnections; and (3) enabling brokers and their replicas to periodically broadcast advertisement packets to maintain communication paths and facilitate efficient data forwarding, drawing inspiration from Named Data Networking (NDN) techniques. To evaluate the effectiveness of our approach, we conduct extensive simulations using ns-3, examining its impact on message delivery reliability, latency, and overall network throughput. The results demonstrate that our method significantly reduces message delivery delays while improving delivery rates, particularly in high-mobility scenarios. Additionally, the integration of DTN7 at the bundle layer proves effective in mitigating performance degradation in environments where nodes frequently change their positions. Our findings highlight the potential of our approach in enhancing the resilience and efficiency of broker-assisted multicasting in MANET, making it a promising solution for real-world applications such as disaster response, military operations, and decentralized IoT networks.
(This article belongs to the Special Issue Wireless IoT Network Protocols, 3rd Edition)
16 pages, 21150 KiB  
Article
STCYOLO: Subway Tunnel Crack Detection Model with Complex Scenarios
by Jia Zhang, Hui Li, Weidong Song, Jinhe Zhang and Miao Shi
Information 2025, 16(6), 507; https://doi.org/10.3390/info16060507 - 18 Jun 2025
Abstract
The detection of tunnel cracks plays a vital role in ensuring structural integrity and driving safety. However, tunnel environments present significant challenges for crack detection, such as uneven lighting and shadow occlusion, which can obscure surface features and reduce detection accuracy. To address these challenges, this paper proposes a novel crack detection network named STCYOLO. First, a dynamic snake convolution (DSConv) mechanism is introduced to adaptively adjust the shape and size of convolutional kernels, allowing them to better align with the elongated and irregular geometry of cracks, thereby enhancing performance under challenging lighting conditions. To mitigate the impact of shadow occlusion, a Shadow Occlusion-Aware Attention (SOAA) module is designed to enhance the network’s ability to identify cracks hidden in shadowed regions. Additionally, a tiny crack upsampling (TCU) module is proposed, which reorganizes convolution kernels to more effectively preserve fine-grained spatial details during upsampling, thereby improving the detection of small and subtle cracks. The experimental results demonstrate that, compared to YOLOv8, our proposed method achieves a 2.85% improvement in mAP and a 3.02% increase in the F score on the crack detection dataset.
(This article belongs to the Special Issue Crack Identification Based on Computer Vision)
13 pages, 718 KiB  
Article
Application of Optimization Algorithms in Voter Service Module Allocation
by Edgar Jardón, Marcelo Romero and José-Raymundo Marcial-Romero
Information 2025, 16(6), 506; https://doi.org/10.3390/info16060506 - 18 Jun 2025
Abstract
Allocation models are essential tools for optimally distributing client requests across multiple services under defined restrictions and objective functions. This study evaluates several heuristics to address an allocation problem involving young individuals reaching voting age. A five-step methodology was implemented: defining variables, executing heuristics, compiling results, evaluating outcomes, and selecting the most effective heuristic. Using experimental data from the Mexican National Electoral Institute (INE), the study focuses on 88,107 individuals aged 17–18 in the 16 municipalities of the Toluca Valley, who can access any of the 10 INE service modules. Six heuristics were analyzed in sequence: genetic algorithm, ant colony optimization, local search, tabu search, simulated annealing, and greedy algorithm. The results indicate that the genetic algorithm significantly reduces the processing time when used as the initial heuristic. Furthermore, given the current capacity of the 10 INE modules, serving the entire target population would require nine working days. These findings align with principles of spatial justice and highlight the practical efficiency of heuristic-based solutions in administrative resource allocation. The main contribution of this study is the development and evaluation of a hybrid heuristic framework for allocating INE modules, demonstrating that combining multiple heuristics—with a genetic algorithm as the initial phase—significantly improves solution quality and computational efficiency.
(This article belongs to the Special Issue Optimization Algorithms and Their Applications)
14 pages, 264 KiB  
Article
A CPSO-BPNN-Based Analysis of Factors Influencing the Mental Health of Urban Youth
by Hu Xiang and Yong-Hong Lan
Information 2025, 16(6), 505; https://doi.org/10.3390/info16060505 - 17 Jun 2025
Abstract
The fast-paced lifestyle, high-pressure work environment, crowded traffic, and polluted air of urban environments often have a negative impact on urban youth’s mental health. Understanding the factors in urban environments that influence the mental health of young people, and the differences among groups, can help improve the adaptability and mental health of urban youth. Based on the 2024 report on the health status of urban youth in China, this paper first combines multiple linear regression with automated machine learning methods to evaluate the key factors through which different lifestyles and living environments influence the mental health of urban youth, and to rank those factors by importance. The results are obtained using a chaos particle swarm optimization-based back propagation neural network (CPSO-BPNN) model. The heterogeneity of the different types of urban youth groups is then analyzed. Finally, conclusions and recommendations are presented. This study provides theoretical support and a scientific decision-making reference for improving the adaptability and health of urban youth.
(This article belongs to the Special Issue Information Systems in Healthcare)
25 pages, 1224 KiB  
Article
Generative Jazz Chord Progressions: A Statistical Approach to Harmonic Creativity
by Adriano N. Raposo and Vasco N. G. J. Soares
Information 2025, 16(6), 504; https://doi.org/10.3390/info16060504 - 17 Jun 2025
Abstract
Jazz music has long been a subject of interest in the field of generative music. Traditional jazz chord progressions follow established patterns that contribute to the genre’s distinct sound. However, the demand for more innovative and diverse harmonic structures has led to the exploration of alternative approaches in music generation. This paper addresses the challenge of generating novel and engaging jazz chord sequences that go beyond traditional chord progressions. It proposes an unconventional statistical approach, leveraging a corpus of 1382 jazz standards, which includes key information, song structure, and chord sequences by section. The proposed method generates chord sequences based on statistical patterns extracted from the corpus, considering a tonal context while introducing a degree of unpredictability that enhances the results with elements of surprise and interest. The goal is to move beyond conventional and well-known jazz chord progressions, exploring new and inspiring harmonic possibilities. The evaluation of the generated dataset, which matches the size of the learning corpus, demonstrates a strong statistical alignment between distributions across multiple analysis parameters while also revealing opportunities for further exploration of novel harmonic pathways.
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
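A statistical generator of this kind can be sketched as weighted sampling from transition counts; the chord vocabulary and counts below are invented stand-ins for statistics mined from a corpus, not the authors' data, and the sketch ignores the key and song-structure information the paper uses:

```python
import random

# invented transition counts (how often chord A was followed by chord B)
transitions = {
    "Dm7": {"G7": 8, "Dm7": 1},
    "G7": {"Cmaj7": 9, "G7": 1},
    "Cmaj7": {"Dm7": 5, "A7": 3, "Cmaj7": 2},
    "A7": {"Dm7": 9, "A7": 1},
}

def sample_progression(start, length, rng):
    # sample each next chord in proportion to its observed transition count;
    # rare transitions stay possible, supplying the "element of surprise"
    chord, out = start, [start]
    for _ in range(length - 1):
        choices, weights = zip(*transitions[chord].items())
        chord = rng.choices(choices, weights=weights)[0]
        out.append(chord)
    return out

progression = sample_progression("Dm7", 8, random.Random(42))
```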
31 pages, 3895 KiB  
Article
Enhanced Pilot Attention Monitoring: A Time-Frequency EEG Analysis Using CNN–LSTM Networks for Aviation Safety
by Quynh Anh Nguyen, Nam Anh Dao and Long Nguyen
Information 2025, 16(6), 503; https://doi.org/10.3390/info16060503 - 17 Jun 2025
Abstract
Despite significant technological advancements in aviation safety systems, human-operator condition monitoring remains a critical challenge, with more than 75% of aircraft incidents stemming from attention-related perceptual failures. This study addresses a fundamental question in sensor-based condition monitoring: how can temporal- and frequency-domain EEG sensor data be optimally integrated to detect precursors of system failure in human–machine interfaces? We propose a three-stage diagnostic framework that mirrors industrial condition monitoring approaches. First, raw EEG sensor signals undergo preprocessing into standardized one-second epochs. Second, a novel hybrid feature-extraction methodology combines time- and frequency-domain features to create comprehensive sensor signatures of neural states. Finally, our dual-architecture CNN–LSTM model processes spatial patterns via CNNs while capturing temporal degradation signals via LSTMs, enabling robust classification in noisy operational environments. Our contributions include (1) a multimodal data fusion approach for EEG sensors that provides a more comprehensive representation of operator conditions, and (2) an artificial intelligence architecture that balances spatial and temporal analysis for the predictive maintenance of attention states. When validated on aviation-related EEG datasets, our condition monitoring system achieved significantly higher diagnostic accuracy across various noise conditions compared to existing approaches. The practical applications extend beyond theoretical improvement, offering a pathway to implement more reliable human–machine interface monitoring in critical systems, potentially preventing catastrophic failures by detecting condition anomalies before they propagate through the system.
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)
32 pages, 11893 KiB  
Article
Global Navigation Satellite System Spoofing Attack Detection Using Receiver Independent Exchange Format Data and Long Short-Term Memory Algorithm
by Alexandru-Gabriel Romaniuc, Vlad-Cosmin Vasile and Monica-Elena Borda
Information 2025, 16(6), 502; https://doi.org/10.3390/info16060502 - 17 Jun 2025
Abstract
Global Navigation Satellite Systems (GNSS) are widely used for positioning, navigation, and timing (PNT) applications, making them a critical infrastructure component. However, GNSS signals and receivers are vulnerable to several attacks that can expose the users to serious threats. The GNSS spoofing attack, for example, is one of the most widespread in this domain and is used to manipulate positioning and timing data by transmitting counterfeit signals. Thus, in this study, we propose a method for analyzing and detecting anomalies in RINEX observation data that is associated with spoofing attacks. The proposed method is based on Long Short-Term Memory (LSTM) networks and focuses on the observation parameters defined by the RINEX standard, which are computed in the Measurements block and subsequently used in the Navigation block of a GNSS receiver architecture. Attack detection involves processing GNSS observation codes and learning the temporal dependencies necessary to identify anomalies associated with GNSS signal spoofing. During the testing phase, the proposed method was applied to GNSS observation codes affected by spoofing, using an LSTM-based reconstruction approach. An ensemble strategy across grouped observation codes was used to identify temporal inconsistencies indicative of anomalies.
(This article belongs to the Special Issue Extended Reality and Cybersecurity)
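The LSTM reconstruction itself is beyond a short sketch, but the final stage, flagging observation windows whose reconstruction error is anomalously high, can be illustrated with an assumed mean-plus-k-sigma threshold (the error values below are invented, not the paper's data):

```python
import statistics

def fit_threshold(train_errors, k=3.0):
    # threshold on reconstruction error: mean + k standard deviations of the
    # errors observed on clean (non-spoofed) training data
    mu = statistics.mean(train_errors)
    sigma = statistics.stdev(train_errors)
    return mu + k * sigma

def detect(errors, threshold):
    # flag each window whose reconstruction error exceeds the threshold
    return [e > threshold for e in errors]

clean_errors = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13]  # assumed training errors
thr = fit_threshold(clean_errors)
flags = detect([0.11, 0.45, 0.10], thr)  # the 0.45 window is flagged
```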
16 pages, 8334 KiB  
Article
A Graph Laplacian Regularizer from Deep Features for Depth Map Super-Resolution
by George Gartzonikas, Evaggelia Tsiligianni, Nikos Deligiannis and Lisimachos P. Kondi
Information 2025, 16(6), 501; https://doi.org/10.3390/info16060501 - 17 Jun 2025
Abstract
Current depth map sensing technologies capture depth maps at low spatial resolution, rendering serious problems in various applications. In this paper, we propose a single depth map super-resolution method that combines the advantages of model-based methods and deep learning approaches. Specifically, we formulate a linear inverse problem which we solve by introducing a graph Laplacian regularizer. The regularization approach promotes smoothness and preserves the structural details of the observed depth map. We construct the graph Laplacian matrix by deploying latent features obtained from a pretrained deep learning model. The problem is solved with the Alternating Direction Method of Multipliers (ADMM). Experimental results show that the proposed approach outperforms existing optimization-based and deep learning solutions.
(This article belongs to the Special Issue Advances in Computer Graphics and Visual Computing)
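The effect of a graph Laplacian regularizer can be sketched on a tiny 1D "depth profile": minimize ||x - y||^2 + lam * x^T L x, whose edge form is lam * sum over edges of (x_i - x_j)^2, here by plain gradient descent. The values, graph, and solver are illustrative assumptions; the paper builds the Laplacian from deep features and solves with ADMM:

```python
def laplacian_smooth(y, edges, lam=1.0, lr=0.1, iters=500):
    # gradient descent on ||x - y||^2 + lam * x^T L x; for a graph Laplacian L
    # the quadratic form equals the sum over edges of (x_i - x_j)^2
    x = list(y)
    for _ in range(iters):
        grad = [2.0 * (x[i] - y[i]) for i in range(len(x))]
        for i, j in edges:
            g = 2.0 * lam * (x[i] - x[j])
            grad[i] += g
            grad[j] -= g
        x = [x[i] - lr * grad[i] for i in range(len(x))]
    return x

def total_variation(v):
    return sum(abs(a - b) for a, b in zip(v, v[1:]))

noisy = [0.0, 1.5, 1.0, 3.0, 4.0]          # invented noisy depth profile
chain = [(0, 1), (1, 2), (2, 3), (3, 4)]   # path-graph neighbourhood
smoothed = laplacian_smooth(noisy, chain)  # smoother, structure-preserving
```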
11 pages, 874 KiB  
Article
Psychometric Properties and Validation of the Chinese Adaption of the Affinity for Technology Interaction (ATI) Scale
by Denise Sogemeier, Ina Marie Koniakowsky, Sebastian Hergeth, Frederik Naujoks and Andreas Keinath
Information 2025, 16(6), 500; https://doi.org/10.3390/info16060500 - 16 Jun 2025
Abstract
The Affinity for Technology Interaction (ATI) scale has been widely used to assess the tendency to engage with technology. To enhance the scale’s applicability and facilitate cross-cultural research, it is essential to provide translations of the scale; a Chinese translation is still missing. A validation is also necessary, as culture can affect the psychometric properties of questionnaires, bearing the danger of applying inadequate measures. The aims of the present study are therefore to provide a Chinese translation of the ATI scale and to present non-parametric and parametric psychometric analyses of the translated version that examine the underlying factor structure. In contrast to the original scale, analyses of the Chinese version suggest a two-dimensional structure, with one dimension describing a passive interest in technology and another describing an active engagement with technology. The findings enable researchers to use the scale in Chinese-speaking populations, thereby advancing the understanding of human–technology interaction across different cultures.
22 pages, 2046 KiB  
Article
Optimizing IoT Intrusion Detection—A Graph Neural Network Approach with Attribute-Based Graph Construction
by Tien Ngo, Jiao Yin, Yong-Feng Ge and Hua Wang
Information 2025, 16(6), 499; https://doi.org/10.3390/info16060499 - 16 Jun 2025
Abstract
The inherent complexity and heterogeneity of the Internet of Things (IoT) ecosystem present significant challenges for developing effective intrusion detection systems. While graph deep-learning-based methods have shown promise in cybersecurity applications, existing approaches primarily construct graphs based on physical network connections, which may not effectively capture node representations. This paper proposes a Top-K Similarity Graph Framework (TKSGF) for IoT network intrusion detection. Instead of relying on physical links, the TKSGF constructs graphs based on Top-K attribute similarity, ensuring a more meaningful representation of node relationships. We employ GraphSAGE as the Graph Neural Network (GNN) model to effectively capture node representations while maintaining scalability. Furthermore, we conducted extensive experiments to analyze the impact of graph directionality (directed vs. undirected), different K values, and various GNN architectures and configurations on detection performance. Evaluations on binary and multi-class classification tasks using the NF-ToN IoT and NF-BoT IoT datasets from the Machine-Learning-Based Network Intrusion Detection System (NIDS) benchmark demonstrated that our proposed framework consistently outperformed traditional machine learning methods and existing graph-based approaches, achieving superior classification accuracy and robustness.
(This article belongs to the Special Issue Data Privacy Protection in the Internet of Things)
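Top-K attribute-similarity graph construction can be sketched as follows; the feature vectors and K are invented for illustration, cosine similarity is an assumed choice of similarity measure, and the GraphSAGE model that would consume the graph is not shown:

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def topk_similarity_graph(features, k=2):
    # connect each node to its k most similar peers by attribute similarity;
    # edges are directed from a node to each of its selected neighbours
    edges = set()
    for i, fi in enumerate(features):
        sims = sorted(((cosine(fi, fj), j) for j, fj in enumerate(features) if j != i), reverse=True)
        edges.update((i, j) for _, j in sims[:k])
    return edges

feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]  # assumed flow features
graph = topk_similarity_graph(feats, k=1)  # two similarity clusters emerge
```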
24 pages, 6698 KiB  
Article
From Spectrum to Image: A Novel Deep Clustering Network for Lactose-Free Milk Adulteration Detection
by Chong Zhang, Shankui Ding and Ying He
Information 2025, 16(6), 498; https://doi.org/10.3390/info16060498 - 16 Jun 2025
Abstract
Traditional clustering methods are often ineffective in extracting relevant features from high-dimensional, nonlinear near-infrared (NIR) spectra, resulting in poor accuracy in detecting lactose-free milk adulteration. In this paper, we introduce a clustering model based on the Gram angular field and convolutional depth manifold (GAF-ConvDuc). The Gram angular field accentuates variations in spectral absorption peaks, while convolutional depth manifold clustering captures local features between adjacent wavelengths, reducing the influence of noise and enhancing clustering accuracy. Experiments were performed on 2250 milk spectra using the GAF-ConvDuc model. Compared to K-means, the silhouette coefficient (SC) increased from 0.109 to 0.571, normalized mutual information (NMI) increased from 0.696 to 0.921, the adjusted Rand index (ARI) increased from 0.543 to 0.836, and accuracy (ACC) increased from 67.2% to 88.9%. Experimental results indicate that our method is superior to K-means, Variational Autoencoder (VAE) clustering, and other approaches. Without requiring pre-labeled data, the model achieves higher inter-cluster separation and more distinct clustering boundaries. These findings offer a robust solution for detecting lactose-free milk adulteration, crucial for food safety oversight.
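The Gram(ian) angular field step can be sketched directly: rescale the spectrum to [-1, 1], map each value to an angle via arccos, and form the matrix of summed-angle cosines. The input series below is invented, and the convolutional clustering stage is not reproduced:

```python
import math

def gramian_angular_field(series):
    # rescale to [-1, 1], encode each value as an angle, then form the
    # Gramian (summation) angular field: G[i][j] = cos(phi_i + phi_j)
    lo, hi = min(series), max(series)
    scaled = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in series]
    # clamp against floating-point drift before taking arccos
    phi = [math.acos(max(-1.0, min(1.0, s))) for s in scaled]
    return [[math.cos(pi_ + pj) for pj in phi] for pi_ in phi]

# a toy 3-point "spectrum"; the result is a symmetric 3x3 image-like matrix
gaf = gramian_angular_field([0.0, 0.5, 1.0])
```

Turning the 1D spectrum into this 2D matrix is what lets the subsequent convolutional stage treat each spectrum as an image.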
30 pages, 350 KiB  
Article
The Ethics of Data and Its Governance: A Discourse Theoretical Approach
by Bernd Carsten Stahl
Information 2025, 16(6), 497; https://doi.org/10.3390/info16060497 - 15 Jun 2025
Abstract
The rapidly growing amount and importance of data across all aspects of organisations and society have led to urgent calls for better, more comprehensive and applicable approaches to data governance. One key driver of this is the use of data in machine learning systems, which hold the promise of producing much social and economic good, but which simultaneously raise significant concerns. Calls for data governance thus typically have an ethical component. This can refer to specific ethical values that data governance is meant to preserve, most obviously in the area of privacy and data protection. More broadly, responsible data governance is seen as a condition of the development and use of ethical and trustworthy digital technologies. This conceptual paper takes the already existing ethical aspect of the data governance discourse as a point of departure and argues that ethics should play a more central role in data governance. Drawing on Habermas’s Theory of Communicative Action and using the example of neuro data, this paper argues that data shapes and is shaped by discourses. Data is at the core of our shared ontological positions and influences what we believe to be real and thus also what it means to be ethical. These insights can be used to develop guidance for the further development of responsible data governance. Full article
(This article belongs to the Special Issue Advances in Information Studies)
54 pages, 625 KiB  
Systematic Review
The Future Is Organic: A Deep Dive into Techniques and Applications for Real-Time Condition Monitoring in SASO Systems—A Systematic Review
by Tim Nolte and Sven Tomforde
Information 2025, 16(6), 496; https://doi.org/10.3390/info16060496 - 14 Jun 2025
Abstract
Condition Monitoring (CM) is a key component of Self-Adaptive and Self-Organizing (SASO) systems. By analyzing sensor data, CM enables systems to react to dynamic conditions, supporting the core principles of Organic Computing (OC): robustness, adaptability, and autonomy. This survey presents a structured overview of CM techniques, application areas, and input data. It also assesses the extent to which current approaches support self-* properties, real-time operation, and predictive functionality. Out of 284 retrieved publications, 110 were selected for detailed analysis. About 38.71% focus on manufacturing, 65.45% on system-level monitoring, and 6.36% on static structures. Most approaches (69.09%) use Machine Learning (ML), while only 18.42% apply Deep Learning (DL). Predictive techniques are used in 16.63% of the studies, with 38.89% combining prediction and anomaly detection. Although 58.18% implement some self-* features, only 42.19% present explicitly self-adaptive or self-organizing methods. A mere 6.25% incorporate feedback mechanisms. No study fully combines self-adaptation and self-organization. Only 5.45% report processing times; however, 1000 Hz can be considered a reasonable threshold for high-frequency, real-time CM. These results highlight a significant research gap and the need for integrated SASO capabilities in future CM systems—especially in real-time, autonomous contexts. Full article
(This article belongs to the Special Issue Data-Driven Decision-Making in Intelligent Systems)
29 pages, 5178 KiB  
Article
HASSDE-NAS: Heuristic–Adaptive Spectral–Spatial Neural Architecture Search with Dynamic Cell Evolution for Hyperspectral Water Body Identification
by Feng Chen, Baishun Su and Zongpu Jia
Information 2025, 16(6), 495; https://doi.org/10.3390/info16060495 - 13 Jun 2025
Abstract
The accurate identification of water bodies in hyperspectral images (HSIs) remains challenging due to hierarchical representation imbalances in deep learning models, where shallow layers overly focus on spectral features, boundary ambiguities caused by the relatively low spatial resolution of satellite imagery, and limited detection capability for small-scale aquatic features such as narrow rivers. To address these challenges, this study proposes Heuristic–Adaptive Spectral–Spatial Neural Architecture Search with Dynamic Cell Evolution (HASSDE-NAS). The architecture integrates three specialized units: a spectral-aware dynamic band selection cell suppresses redundant spectral bands, while a geometry-enhanced edge attention cell refines fragmented spatial boundaries. Additionally, a bidirectional fusion alignment cell jointly optimizes spectral and spatial dependencies. A heuristic cell search algorithm optimizes the network architecture through architecture stability, feature diversity, and gradient sensitivity analysis, which improves search efficiency and model robustness. Evaluated on the Gaofen-5 datasets from the Guangdong and Henan regions, HASSDE-NAS achieves overall accuracies of 92.61% and 96%, respectively. This approach outperforms existing methods in delineating narrow river systems and resolving water bodies with weak spectral contrast under complex backgrounds, such as vegetation or cloud shadows. By adaptively prioritizing task-relevant features, the framework provides an interpretable solution for hydrological monitoring and advances neural architecture search in intelligent remote sensing. Full article
21 pages, 608 KiB  
Article
A Machine Learning-Assisted Automation System for Optimizing Session Preparation Time in Digital Audio Workstations
by Bogdan Moroșanu, Marian Negru, Georgian Nicolae, Horia Sebastian Ioniță and Constantin Paleologu
Information 2025, 16(6), 494; https://doi.org/10.3390/info16060494 - 13 Jun 2025
Abstract
Modern audio production workflows often require significant manual effort during the initial session preparation phase, including track labeling, format standardization, and gain staging. This paper presents a rule-based and Machine Learning-assisted automation system designed to minimize the time required for these tasks in Digital Audio Workstations (DAWs). The system automatically detects and labels audio tracks, identifies and eliminates redundant fake stereo channels, merges double-tracked instruments into stereo pairs, standardizes sample rate and bit rate across all tracks, and applies initial gain staging using target loudness values derived from a Genetic Algorithm (GA)-based system, which optimizes gain levels for individual track types based on engineer preferences and instrument characteristics. By replacing manual setup processes with automated decision-making methods informed by Machine Learning (ML) and rule-based heuristics, the system reduces session preparation time by up to 70% in typical multitrack audio projects. The proposed approach highlights how practical automation, combined with lightweight Neural Network (NN) models, can optimize workflow efficiency in real-world music production environments. Full article
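The gain-staging step can be sketched in simplified form. The paper derives per-instrument loudness targets with a genetic algorithm; this illustration instead assumes fixed RMS targets in dBFS, and the target values and track data are invented:

```python
import numpy as np

def gain_to_target_rms(track, target_dbfs):
    """Return the linear gain that brings `track` to the target RMS level.

    A simplified stand-in for GA-derived per-instrument loudness targets:
    here the target is just an RMS level in dBFS.
    """
    rms = np.sqrt(np.mean(track ** 2))
    current_dbfs = 20 * np.log10(rms)
    return 10 ** ((target_dbfs - current_dbfs) / 20)

# Hypothetical per-track-type targets (invented values for illustration).
targets = {"kick": -12.0, "vocal": -16.0}
rng = np.random.default_rng(0)
kick = 0.05 * rng.standard_normal(48000)   # one second of quiet audio at 48 kHz
gain = gain_to_target_rms(kick, targets["kick"])
staged = kick * gain                       # track now sits at -12 dBFS RMS
```

A production system would more likely use integrated LUFS rather than plain RMS, but the staging logic is the same.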
(This article belongs to the Special Issue Optimization Algorithms and Their Applications)
18 pages, 512 KiB  
Article
Animate, or Inanimate, That Is the Question for Large Language Models
by Giulia Pucci, Fabio Massimo Zanzotto and Leonardo Ranaldi
Information 2025, 16(6), 493; https://doi.org/10.3390/info16060493 - 13 Jun 2025
Abstract
The cognitive core of human beings is closely connected to the concept of animacy, which significantly influences their memory, vision, and complex language comprehension. While animacy is reflected in language through subtle constraints on verbs and adjectives, it is also acquired and honed through non-linguistic experiences. In the same vein, we suggest that the limited capacity of LLMs to grasp natural language, particularly in relation to animacy, stems from the fact that these models are trained solely on textual data. Hence, the question this paper aims to answer arises: Can LLMs, in their digital wisdom, process animacy in a similar way to humans? We then propose a systematic analysis via prompting approaches. In particular, we probe different LLMs using controlled lexical contrasts (animate vs. inanimate nouns) and narrative contexts in which typically inanimate entities behave as animate. Results reveal that, although LLMs have been trained predominantly on textual data, they exhibit human-like behavior when faced with typical animate and inanimate entities, in alignment with earlier studies, specifically on seven LLMs selected from three major families—OpenAI (GPT-3.5, GPT-4), Meta (Llama2 7B, 13B, 70B), and Mistral (Mistral-7B, Mixtral). GPT models generally achieve the most consistent and human-like performance, and in some tasks, such as sentence plausibility and acceptability judgments, even surpass human baselines. Moreover, although to a lesser degree, the other models achieve comparable results. Hence, LLMs can adapt to unconventional situations, recognising atypically behaving entities as animate without needing the non-linguistic cognitive cues that humans rely on to process animacy. Full article
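The controlled lexical-contrast setup can be sketched as follows; the nouns, sentence frame, and field names are invented for illustration and do not reproduce the paper's actual prompts:

```python
# Minimal sketch of a controlled animacy contrast: each noun is slotted into
# the SAME sentence frame, so any difference in a model's judgment is
# attributable to animacy alone, not to the surrounding context.
ANIMATE = ["dog", "teacher", "bird"]
INANIMATE = ["rock", "chair", "lamp"]
TEMPLATE = "Rate the plausibility of: 'The {noun} decided to leave the room.'"

def build_probes(animate, inanimate, template):
    """Return one prompt record per noun, tagged with its animacy label."""
    return [
        {"noun": n, "animate": n in animate, "prompt": template.format(noun=n)}
        for n in animate + inanimate
    ]

probes = build_probes(ANIMATE, INANIMATE, TEMPLATE)
```

Each prompt would then be sent to the LLM under test and the ratings compared across the two noun classes.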
33 pages, 3147 KiB  
Article
Virtual Collaboration and E-Democracy During the Pandemic Era: Insights on Digital Engagement, Infrastructure, and Social Dynamics
by George Asimakopoulos, Hera Antonopoulou, Ioanna Giannoukou, Antonia Golfi, Ioanna Sataraki and Constantinos Halkiopoulos
Information 2025, 16(6), 492; https://doi.org/10.3390/info16060492 - 13 Jun 2025
Abstract
The COVID-19 pandemic accelerated virtual collaboration, reshaping digital communication, remote work, education, and e-democracy. This study examines the impact of these tools on digital citizen participation through a quantitative cross-sectional survey of n = 1122 participants across diverse demographics. Using stratified purposive sampling, descriptive statistics, correlation analyses, and segmentation by demographic and psychological factors, we analyzed how infrastructure quality, personality traits, and social dynamics influenced virtual engagement. While digital platforms have improved accessibility, findings reveal that they often fail to foster interpersonal trust and democratic deliberation. Statistical analyses demonstrated significant correlations between communication effectiveness and relationship quality (ρ = 0.387, p < 0.001), with distinct patterns emerging across age groups, community sizes, and personality types. Infrastructure disparities significantly impacted participation, particularly in rural areas (χ2 = 70.72, df = 12, p < 0.001, V = 0.145). Recommendations include enhancing digital infrastructure, developing adaptive e-governance platforms, and implementing trust-building mechanisms. Despite the limitations of self-reported data and the cross-sectional design, these insights contribute to building more inclusive digital governance frameworks. Future research should employ longitudinal approaches to explore evolving trends in e-democratic participation. Full article
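The association statistics reported above (a χ² test with Cramér's V effect size) can be computed from a contingency table as in this sketch; the table values and the variable pairing are invented for illustration:

```python
import numpy as np

def cramers_v(table):
    """Chi-square statistic and Cramér's V for a contingency table.

    Mirrors the kind of association test reported in the study, e.g.
    infrastructure quality (rows) versus engagement level (columns).
    """
    table = np.asarray(table, dtype=float)
    n = table.sum()
    # Expected counts under independence of rows and columns.
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    k = min(table.shape) - 1
    return chi2, np.sqrt(chi2 / (n * k))

# Invented 2x3 table: rows = urban/rural, cols = low/mid/high engagement.
tbl = [[30, 50, 40], [45, 35, 20]]
chi2, v = cramers_v(tbl)
```

A perfectly independent table yields χ² = 0 and V = 0; V near 0.145, as in the study, indicates a small but non-trivial association.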
25 pages, 1863 KiB  
Review
Deep Learning Segmentation Techniques for Atherosclerotic Plaque on Ultrasound Imaging: A Systematic Review
by Laura De Rosa, Serena L’Abbate, Eduarda Mota da Silva, Mauro Andretta, Elisabetta Bianchini, Vincenzo Gemignani, Claudia Kusmic and Francesco Faita
Information 2025, 16(6), 491; https://doi.org/10.3390/info16060491 - 13 Jun 2025
Abstract
Background: Atherosclerotic disease is the leading global cause of death, driven by progressive plaque accumulation in the arteries. Ultrasound (US) imaging, both conventional (CUS) and intravascular (IVUS), is crucial for the non-invasive assessment of atherosclerotic plaques. Deep learning (DL) techniques have recently gained attention as tools to improve the accuracy and efficiency of image analysis in this domain. This paper reviews recent advancements in DL-based methods for the segmentation, classification, and quantification of atherosclerotic plaques in US imaging, focusing on their performance, clinical relevance, and translational challenges. Methods: A systematic literature search was conducted in the PubMed, Scopus, and Web of Science databases, following PRISMA guidelines. The review included peer-reviewed original articles published up to 31 January 2025 that applied DL models for plaque segmentation, characterization, and/or quantification in US images. Results: A total of 53 studies were included, with 72% focusing on carotid CUS and 28% on coronary IVUS. DL architectures, such as UNet and attention-based networks, were commonly used, achieving high segmentation accuracy with average Dice similarity coefficients of around 84%. Many models provided reliable quantitative outputs (such as total plaque area, plaque burden, and stenosis severity index) with correlation coefficients often exceeding R = 0.9 compared to manual annotations. Limitations included the scarcity of large, annotated, and publicly available datasets; the lack of external validation; and the limited availability of open-source code. Conclusions: DL-based approaches show considerable promise for advancing atherosclerotic plaque analysis in US imaging. To facilitate broader clinical adoption, future research should prioritize methodological standardization, external validation, data and code sharing, and integrating 3D US technologies. Full article
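The Dice similarity coefficient that the reviewed studies average around 84% can be computed for two binary segmentation masks as follows; the toy plaque masks are invented:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|), the standard overlap metric for plaque masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Invented 4x4 masks: ground-truth plaque vs. a slightly over-segmented prediction.
truth = np.zeros((4, 4), int); truth[1:3, 1:3] = 1
pred = np.zeros((4, 4), int);  pred[1:3, 1:4] = 1
dsc = dice_coefficient(pred, truth)  # 2*4 / (6 + 4) = 0.8
```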
29 pages, 350 KiB  
Review
The Gaming Revolution in History Education: The Practice and Challenges of Integrating Game-Based Learning into Formal Education
by Chien-Hung Lai and Po-Yi Hu
Information 2025, 16(6), 490; https://doi.org/10.3390/info16060490 - 12 Jun 2025
Abstract
This study conducts a comprehensive literature review to explore the potential and challenges of integrating game-based learning (GBL) into formal history education. Given the increasing interest in the educational value of games, this review systematically examines academic research published over the past fifteen years. The analysis focuses on two major themes: (1) the development and theoretical underpinnings of history-related game-based learning, and (2) the difficulties encountered when implementing GBL in formal education systems, including issues related to curriculum alignment, teacher readiness, and instructional assessment. Drawing on 118 selected high-impact publications, this review identifies both the pedagogical benefits and the structural limitations of using historical games in the classroom. The findings highlight that while game-based learning holds promise in improving students’ engagement, motivation, and understanding of historical content, its practical implementation requires careful instructional design, sufficient resources, and alignment with national educational standards. This review concludes by proposing a set of strategic recommendations to guide future integration efforts of GBL into history education. As a literature review, this study does not involve empirical data collection but rather synthesizes existing research findings to inform educational practice and future inquiry. Full article
23 pages, 722 KiB  
Systematic Review
A Systematic Review of Large Language Models in Medical Specialties: Applications, Challenges and Future Directions
by Asma Musabah Alkalbani, Ahmed Salim Alrawahi, Ahmad Salah, Venus Haghighi, Yang Zhang, Salam Alkindi and Quan Z. Sheng
Information 2025, 16(6), 489; https://doi.org/10.3390/info16060489 - 12 Jun 2025
Abstract
This systematic review evaluates recent literature from January 2021 to March 2024 on large language model (LLM) applications across diverse medical specialties. Searching PubMed, Web of Science, and Scopus, we included 84 studies. LLMs were applied to tasks such as clinical natural language processing, medical decision support, education, and aiding diagnostic processes. While studies reported benefits such as improved efficiency and, in some specific NLP tasks, high accuracy above 90%, significant challenges persist concerning reliability, ethical implications, and performance consistency: accuracy in broader diagnostic support applications varied substantially, falling as low as 3%. The overall risk of bias was low in 72 of the included studies. Key findings highlight substantial heterogeneity in LLM performance across medical tasks and contexts, which, together with a lack of standardized methodologies, prevented meta-analysis. Future efforts should prioritize developing domain-specific LLMs using robust medical data and establishing rigorous validation standards to ensure their safe and effective clinical integration. Trial registration: PROSPERO (CRD42024561381). Full article
19 pages, 700 KiB  
Article
Driving International Collaboration Beyond Boundaries Through Hackathons: A Comparative Analysis of Four Hackathon Setups
by Alice Barana, Vasiliki Eirini Chatzea, Kelly Henao, Ania Maria Hildebrandt, Ilias Logothetis, Marina Marchisio Conte, Alexandros Papadakis, Alberto Rueda, Daniel Samoilovich, Georgios Triantafyllidis and Nikolas Vidakis
Information 2025, 16(6), 488; https://doi.org/10.3390/info16060488 - 12 Jun 2025
Abstract
Hackathon events have become increasingly popular in recent years as a modern tool for innovation in the education sector as they offer important learning advantages. Within the “INVITE” Erasmus+ project, four distinct hackathons were organized to bring together academic institutions, teachers, and students in the design of innovative international virtual and blended collaborations. In addition, as part of the “INVITE” project, an Open Interactive Digital Ecosystem (digital platform) was developed to facilitate hackathon organization and was tested within two of the events. The platform supports the hosting of action-training programs, providing a shared open-resource space where educators can contact peers and design projects. All four hackathons were held during 2024, and their duration and type (onsite, blended, hybrid, and online) varied significantly. However, all hackathon topics were related to sustainability, the SDGs, and the Green Agenda. In total, more than 220 participants enrolled in the four events, including students, researchers, and professors from different disciplines, age groups, and countries. All participants were given qualitative surveys to explore their satisfaction and experiences. The results compare the different hackathon setups to reveal valuable insights regarding the optimal design for higher education hackathons. Full article
31 pages, 550 KiB  
Review
Advances in Application of Federated Machine Learning for Oncology and Cancer Diagnosis
by Mohammad Nasajpour, Seyedamin Pouriyeh, Reza M. Parizi, Meng Han, Fatemeh Mosaiyebzadeh, Yixin Xie, Liyuan Liu and Daniel Macêdo Batista
Information 2025, 16(6), 487; https://doi.org/10.3390/info16060487 - 12 Jun 2025
Abstract
Machine learning has brought about a revolutionary transformation in healthcare. It has traditionally been employed to create predictive models through training on locally available data. However, privacy concerns can sometimes impede the collection and integration of data from diverse sources. Conversely, a lack of sufficient data may hinder the construction of accurate models, thereby limiting the ability to produce meaningful outcomes. Especially in the field of healthcare, collecting datasets centrally is challenging due to privacy concerns. Federated learning (FL) emerges as a sophisticated distributed machine learning approach that comes to the rescue in such scenarios. It allows multiple devices hosted at different institutions, like hospitals, to collaboratively train a global model without sharing raw data. In addition, each device retains its data securely and locally, addressing the challenges of time-consuming annotation and privacy concerns. In this paper, we conducted a comprehensive literature review aimed at identifying the most advanced federated learning applications in cancer research and clinical oncology analysis. Our main goal was to present a comprehensive overview of the development of federated learning in the field of oncology. Additionally, we discuss the challenges and future research directions. Full article
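The core federated step can be sketched as federated averaging (FedAvg): clients train locally, then only parameters travel to the server. The hospital counts and parameter vectors below are invented, and real FL systems add secure aggregation and many other safeguards:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine locally trained parameter vectors into a
    global model, weighting each hospital's update by its dataset size.

    Raw patient data never leaves the client; only model parameters are shared.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
    fractions = sizes / sizes.sum()
    return (stacked * fractions[:, None]).sum(axis=0)

# Three hypothetical hospitals with different amounts of local data.
updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
global_w = fed_avg(updates, [100, 100, 200])  # weights 0.25, 0.25, 0.5
```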
28 pages, 4278 KiB  
Article
The Interpretative Effects of Normalization Techniques on Complex Regression Modeling: An Application to Real Estate Values Using Machine Learning
by Debora Anelli, Pierluigi Morano, Francesco Tajani and Maria Rosaria Guarini
Information 2025, 16(6), 486; https://doi.org/10.3390/info16060486 - 11 Jun 2025
Abstract
The performance of machine learning models depends on several factors, including data normalization, which can significantly improve their accuracy. There are many normalization techniques, and none is universally suitable; the choice depends on the characteristics of the problem, the predictive task, and the needs of the model used. This study analyzes how normalization techniques influence the outcomes of real estate price regression models using machine learning to uncover complex relationships between urban and economic factors. Six normalization techniques are employed to assess how they affect the estimation of relationships between property value and factors like social degradation, resident population, per capita income, green spaces, building conditions, and degraded neighborhood presence. The study’s findings underscore the pivotal role of normalization in shaping the perception of variables, accentuating critical thresholds, or distorting anticipated functional relationships. The work is the first application of a methodological approach to define the best technique on the basis of two criteria: statistical reliability and empirical evidence of the functional relationships obtainable with each normalization technique. Notably, the study underscores the potential of machine-learning-based regression to circumvent the limitations of conventional models, thereby yielding more robust and interpretable results. Full article
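Three commonly compared normalization techniques can be sketched as follows; the income-like sample values are invented, and these three are merely representative of the family of techniques the study examines:

```python
import numpy as np

def normalize(x, method):
    """Three common normalization techniques; each changes the scale a
    regression model 'sees' for the same underlying variable."""
    x = np.asarray(x, dtype=float)
    if method == "minmax":        # rescale to [0, 1]
        return (x - x.min()) / (x.max() - x.min())
    if method == "zscore":        # zero mean, unit variance
        return (x - x.mean()) / x.std()
    if method == "robust":        # median/IQR, less sensitive to outliers
        q1, q3 = np.percentile(x, [25, 75])
        return (x - np.median(x)) / (q3 - q1)
    raise ValueError(f"unknown method: {method}")

# Invented per-capita-income sample (thousands), with one outlier.
income = np.array([18.0, 21.0, 22.0, 25.0, 90.0])
mm, zs, rb = (normalize(income, m) for m in ("minmax", "zscore", "robust"))
```

Note how the outlier compresses the min-max scale for the other points, while the robust variant leaves them spread out; this is exactly the kind of distortion of functional relationships the study measures.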
23 pages, 2407 KiB  
Article
Enhancing Quantum Information Distribution Through Noisy Channels Using Quantum Communication Architectures
by Francisco Delgado
Information 2025, 16(6), 485; https://doi.org/10.3390/info16060485 - 11 Jun 2025
Abstract
Quantum information transmission is subject to imperfections in communication processes and systems. These phenomena alter the original content due to decoherence and noise. However, suitable communication architectures incorporating quantum and classical redundancy can selectively remove these errors, boosting destructive interference. In this work, a selection of architectures based on path superposition or indefinite causal order were analyzed under appropriate configurations, alongside traditional methods such as classical redundancy, thus enhancing transmission. For that purpose, we examined a broad family of decoherent channels associated with the qubit chain transmission by passing through tailored arrangements or composite architectures of imperfect channels. The outcomes demonstrated that, when combined with traditional redundancy, these configurations could significantly improve the transmission across a substantial subset of the channels. For quantum key distribution purposes, two alternative bases were considered to encode the information chain. Because a control system must be introduced in the proposed architectures, two strategies for its disposal at the end of the communication process were compared: tracing and measurement. In addition, eavesdropping was also explored under a representative scenario, to quantify its impact on the most promising architecture analyzed. Thus, in terms of transmission quality and security, the analysis revealed significant advantages over direct transmission schemes. Full article
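As a minimal point of reference for the noise model, the sketch below pushes a single qubit through a depolarizing channel and computes the output fidelity. This illustrates plain direct transmission only, not the path-superposition or indefinite-causal-order architectures the paper analyzes, and the noise parameter is invented:

```python
import numpy as np

def depolarizing(rho, p):
    """Single-qubit depolarizing channel: with probability p the state is
    replaced by the maximally mixed state I/2."""
    return (1 - p) * rho + p * np.eye(2) / 2

def fidelity_pure(psi, rho):
    """Fidelity <psi|rho|psi> of a pure input state with the channel output."""
    return float(np.real(psi.conj() @ rho @ psi))

psi = np.array([1.0, 0.0])                    # |0>, one symbol of the qubit chain
rho_in = np.outer(psi, psi.conj())
rho_out = depolarizing(rho_in, 0.4)           # invented noise strength p = 0.4
f_direct = fidelity_pure(psi, rho_out)        # 1 - p/2 = 0.8 for |0>
```

The architectures studied in the paper aim to raise this direct-transmission fidelity by routing the state through composite channel arrangements.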
(This article belongs to the Section Information and Communications Technology)
19 pages, 546 KiB  
Article
Antecedents and Consequences of Flow Experience in Virtual Reality Tourism: A Path Analysis of Visit Intention
by Lei Zhou, Huaqing Zhou, Xiaotang Cui and Jing Zhao
Information 2025, 16(6), 484; https://doi.org/10.3390/info16060484 - 11 Jun 2025
Abstract
This study examines the psychological mechanisms underlying virtual reality (VR) tourism experiences through an integrated theoretical framework centered on flow experience and visit destination intention. Drawing upon flow theory, the research investigates how interactivity, perceived vividness, and telepresence influence flow experience and subsequently affect hedonic motivation and perceived visual appeal in VR tourism contexts. Using partial least squares structural equation modeling (PLS-SEM) analysis of data collected from 255 VR tourism users across major Chinese metropolitan centers, the study reveals that perceived vividness and telepresence significantly impact flow experience, while interactivity shows no significant effect. Flow experience demonstrates significant positive relationships with hedonic motivation and perceived visual appeal. Furthermore, hedonic motivation and perceived visual appeal significantly positively affect visit destination intention. The findings advance the theoretical understanding of VR tourism by illuminating the psychological pathways through which technological characteristics influence behavioral intentions. These results offer practical implications for destination marketers and VR tourism developers in designing more effective virtual experiences that enhance destination visit intentions. Full article
(This article belongs to the Special Issue Extended Reality and Its Applications)
24 pages, 354 KiB  
Article
Dynamic Mixture of Experts for Adaptive Computation in Character-Level Transformers
by Zhigao Huang, Musheng Chen and Shiyan Zheng
Information 2025, 16(6), 483; https://doi.org/10.3390/info16060483 - 11 Jun 2025
Abstract
This paper challenges the prevailing assumption that Mixture of Experts (MoE) consistently improves computational efficiency through a systematic evaluation of MoE variants in Transformer models. We implement and compare three approaches: basic MoE, top-k routing, and capacity-factored routing, each progressively addressing load-balancing challenges. Our experiments reveal critical trade-offs between performance and efficiency: while MoE models maintain validation performance comparable to baselines, they require significantly longer training times (a 50% increase) and demonstrate reduced inference speeds (up to 56% slower). Analysis of routing behavior shows that even with load-balancing techniques, expert utilization remains unevenly distributed. These findings provide empirical evidence that MoE’s computational benefits are highly dependent on model scale and task characteristics, challenging common assumptions about sparse architectures and offering crucial guidance for adaptive neural architecture design across different computational constraints. Full article
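The top-k routing variant can be sketched as follows; the token and expert counts are toy values, and this omits the load-balancing losses and capacity factors a full MoE layer would use:

```python
import numpy as np

def top_k_route(logits, k):
    """Top-k expert routing: each token keeps its k highest-scoring experts,
    with softmax gates renormalized over the kept set only."""
    # Indices of the k largest router logits per token.
    idx = np.argsort(logits, axis=-1)[:, -k:]
    kept = np.take_along_axis(logits, idx, axis=-1)
    # Softmax over the kept logits (numerically stabilized).
    gates = np.exp(kept - kept.max(axis=-1, keepdims=True))
    gates /= gates.sum(axis=-1, keepdims=True)
    return idx, gates

rng = np.random.default_rng(1)
logits = rng.standard_normal((5, 8))      # toy sizes: 5 tokens, 8 experts
experts, gates = top_k_route(logits, k=2)
```

Counting how often each expert index appears in `experts` over a batch is one simple way to observe the uneven utilization the paper reports.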
(This article belongs to the Section Information Processes)