Information, Volume 16, Issue 11 (November 2025) – 93 articles

Cover Story: Data is becoming the backbone of the digital economy, and data spaces are emerging as critical ecosystems for the secure, controlled, and ethical exchange of information. This article introduces a comprehensive reference architecture for data governance in data spaces, addressing essential dimensions such as data quality, security, metadata management, roles, responsibilities, and the entire data lifecycle. By translating high-level governance strategies into concrete architectural components, the framework provides practical guidance for organizations and governing bodies, enabling trust, interoperability, accountability, and effective data management, and guiding government bodies to formalize data strategies and principles in architectural components that establish the capabilities to be implemented within the data ecosystem.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
13 pages, 777 KB  
Article
AI-Powered Learning: Revolutionizing Education and Automated Code Evaluation
by Andrija Bernik, Danijel Radošević and Andrej Čep
Information 2025, 16(11), 1015; https://doi.org/10.3390/info16111015 - 20 Nov 2025
Abstract
The paper presents a case study on using artificial intelligence (AI) for preliminary grading of student programming assignments. By integrating our previously introduced learning programming interface Verificator with the Gemini 2.5 large language model via Google AI Studio, C++ student submissions were evaluated automatically and compared with teacher-assigned grades. The results showed moderate to high correlation, although the AI was stricter. The study demonstrates that AI tools can improve grading speed and consistency while highlighting the need for human oversight due to limitations in interpreting non-standard solutions. It also emphasizes ethical considerations such as transparency, bias, and data privacy in educational AI use. A hybrid grading model combining AI efficiency and human judgment is recommended.
(This article belongs to the Section Information and Communications Technology)
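For readers curious how such a grading pipeline is wired together, the sketch below outlines the general pattern of sending a C++ submission to Gemini through the google-generativeai Python SDK. The model identifier, rubric prompt, and 0–100 scale are illustrative assumptions, not the authors' actual Verificator integration.

```python
# Minimal sketch of LLM-assisted preliminary grading (illustrative only; the
# model name, prompt rubric, and 0-100 scale are assumptions, not the paper's setup).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key obtained from Google AI Studio
model = genai.GenerativeModel("gemini-2.5-flash")  # assumed model identifier

def grade_submission(cpp_source: str, task_description: str) -> str:
    prompt = (
        "You are grading a student C++ assignment.\n"
        f"Task: {task_description}\n"
        "Return a score from 0 to 100 and a one-sentence justification.\n\n"
        f"```cpp\n{cpp_source}\n```"
    )
    response = model.generate_content(prompt)
    return response.text  # parsed downstream and compared with teacher grades
```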

24 pages, 4402 KB  
Article
A Technology-Enhanced Learning Approach to Upskill Adult Educators: Design and Evaluation of a DigComp-Driven IoT MOOC
by Theodor Panagiotakopoulos, Fotis Lazarinis, Omiros Iatrellis, Yiannis Kiouvrekis and Achilles Kameas
Information 2025, 16(11), 1014; https://doi.org/10.3390/info16111014 - 20 Nov 2025
Abstract
This study presents the design, implementation, and evaluation of a Massive Open Online Course (MOOC) on the Internet of Things (IoT), developed to upskill adult educators by equipping them with both technical and pedagogical competencies. Following a structured, multi-phase instructional design model grounded in the DigComp framework and supported by Open Educational Resources (OERs), the course was delivered over three training cycles via a MOODLE-based platform. The research employed pre- and post-course competence tests to assess the course’s impact, as well as post-course surveys with both quantitative and qualitative elements to assess participant experiences. The findings indicate high levels of satisfaction and perceived effectiveness.

18 pages, 2980 KB  
Article
Prediction Multiscale Cross-Level Fusion U-Net with Combined Wavelet Convolutions for Thyroid Nodule Segmentation
by Shengzhi Liu, Haotian Tang, Junhao Zhao, Rundong Liu, Sirui Zheng, Kaiyao Hou, Xiyu Zhang, Fuyong Liu and Chen Ding
Information 2025, 16(11), 1013; https://doi.org/10.3390/info16111013 - 20 Nov 2025
Abstract
The precise segmentation of thyroid nodules in ultrasound images is essential for computer-aided diagnosis and treatment. Although various deep learning methods have been proposed, similar intensity distributions and variable nodule morphology often lead to blurred segmentation boundaries and missed detection of small nodules. To address this problem, we propose a multiscale cross-level fusion U-net with combined wavelet convolutions (MCFU-net) for thyroid nodule segmentation. Firstly, a multi-branch wavelet convolution (MBWC) block is designed, which decouples texture features through wavelet-domain multiresolution analysis and reorganizes cross-channel features, thereby enhancing context extraction and aggregation during the encoding stage. Secondly, a scale-selective atrous pyramid (SSAP) module based on multi-level dynamic perception is constructed to enhance the saliency of nodules of varying sizes and improve the detection of small nodules. Thirdly, to reduce the loss of fine-grained information during upsampling, a cross-level fusion module (CLFM) with hierarchical refinement mechanisms is designed, which progressively reconstructs ambiguous boundary areas through multistage upsampling. Experiments conducted on two public ultrasound datasets, TN3K and DDTI, demonstrate the effectiveness and superiority of our method, achieving Dice coefficients of 85.22% and 78.21% and IoU values of 74.25% and 64.23%, respectively.
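The Dice and IoU figures quoted above follow the standard overlap definitions for binary masks, which can be computed directly, as in this NumPy sketch (illustrative, not the authors' evaluation code):

```python
# Dice coefficient and IoU for binary segmentation masks (standard definitions,
# shown with NumPy; eps avoids division by zero on empty masks).
import numpy as np

def dice_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou
```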

26 pages, 2529 KB  
Article
Digital Innovation Through Behavioural Analytics: Evidence from Acquisition Channels and Engagement in Global Cruise Firms
by Dimitrios P. Reklitis, Nikolaos T. Giannakopoulos, Marina C. Terzi, Damianos P. Sakas, Stylianos K. Tountas, Nikos Kanellos and Panagiotis Reklitis
Information 2025, 16(11), 1012; https://doi.org/10.3390/info16111012 - 20 Nov 2025
Abstract
Digital transformation has reshaped how cruise firms acquire, engage and retain customers. However, existing research rarely integrates these behavioural dimensions within a unified analytical framework. This study applies a hybrid regression–Fuzzy Cognitive Mapping (FCM) approach to examine how acquisition channels, engagement indicators and online reputation metrics jointly shape website performance and digital innovation among leading global cruise operators. Using multi-source web-analytics data, regression models identify the direct predictive effects of organic, paid, referral and email channels, while FCM captures their non-linear feedback dynamics. Results reveal that visibility does not equate to engagement: organic and referral traffic drive exposure but not depth, whereas authority and reputation mediate engagement–performance relationships. Scenario simulations reveal asymmetric responses within the digital ecosystem. Consequently, balanced, knowledge-driven channel diversification emerges as a key strategic advantage. The findings extend the Knowledge-Based View (KBV) by conceptualising behavioural analytics as organisational knowledge resources that enable adaptive learning and digital innovation. The proposed framework contributes to both tourism analytics and information systems research, offering a scalable model for understanding how data-intensive service firms convert behavioural information into strategic knowledge and sustainable competitive advantage.
(This article belongs to the Special Issue Emerging Research in Knowledge Management and Innovation)
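The FCM component iterates concept activations over a signed, weighted influence graph until they settle. A minimal sketch of the standard sigmoid update rule is shown below; the three-concept map and its weights are invented for illustration, not taken from the study:

```python
# One standard Fuzzy Cognitive Map inference step: each concept updates by
# squashing the weighted sum of incoming neighbor activations.
import numpy as np

def fcm_step(a: np.ndarray, W: np.ndarray, lam: float = 1.0) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-lam * (a + W.T @ a)))  # sigmoid squashing

# Illustrative map: organic traffic -> engagement -> performance (-> feedback)
W = np.array([[0.0, 0.6, 0.0],
              [0.0, 0.0, 0.7],
              [0.2, 0.0, 0.0]])   # W[i, j] = influence of concept i on concept j
a = np.array([0.8, 0.3, 0.3])    # initial activations (a "scenario")
for _ in range(20):              # iterate toward a fixed point
    a = fcm_step(a, W)
```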

13 pages, 1955 KB  
Article
Perspective on the Role of AI in Shaping Human Cognitive Development
by Amin Abbosh, Adnan Al-Anbuky, Fei Xue and Sundus S. Mahmoud
Information 2025, 16(11), 1011; https://doi.org/10.3390/info16111011 - 20 Nov 2025
Abstract
The fourth industrial revolution, driven by Artificial Intelligence (AI) and Generative AI (GenAI), is rapidly transforming human life, with profound effects on education, employment, operational efficiency, social behavior, and lifestyle. While AI tools potentially offer unprecedented support in learning and problem-solving, their integration into education raises critical questions about cognitive development and long-term intellectual capacity. Drawing parallels to previous industrial revolutions that reshaped human biological systems, this paper explores how GenAI introduces a new level of abstraction that may relieve humans from routine cognitive tasks, potentially enhancing performance but also risking a cognitively sedentary condition. We position levels of abstraction as the central theoretical lens to explain when GenAI reallocates cognitive effort toward higher-order reasoning and when it induces passive reliance. We present a conceptual model of AI-augmented versus passive trajectories in cognitive development and demonstrate its utility through a simulation-platform case study, which exposes concrete failure modes and the critical role of expert interventions. Rather than a hypothesis-testing empirical study, this paper offers a conceptual synthesis and concludes with mitigation strategies organized by abstraction layer, along with platform-centered implications for pedagogy, curriculum design, and assessment.
(This article belongs to the Section Artificial Intelligence)

28 pages, 3332 KB  
Article
An Optimization-Based Aggregation Approach with Triangular Intuitionistic Fuzzy Numbers in High-Dimensional Multi-Attribute Decision-Making
by Yanshan Qian, Junda Qiu, Jiali Tang, Qi Liu, Chuanan Li and Senyuan Chen
Information 2025, 16(11), 1010; https://doi.org/10.3390/info16111010 - 19 Nov 2025
Abstract
We address information fusion and spatial structure modeling in high-dimensional fuzzy multi-attribute decision-making by proposing a novel framework that couples Triangular Intuitionistic Fuzzy Numbers (TIFNs) with the Plant Growth Simulation Algorithm (PGSA). The method first maps the experts’ triangular intuitionistic fuzzy information on each evaluation scheme into high-dimensional spatial points, yielding a structured representation of the decision information. Subsequently, the PGSA performs a dynamic global optimization search over the high-dimensional point cloud to determine the optimal aggregation point, realizing intelligent aggregation of heterogeneous fuzzy data from multiple sources. The approach overcomes the limitation of traditional linear aggregation in capturing the spatial distribution of information, improving the accuracy and consistency of decision results in high-dimensional, complex environments. The experimental results show that the proposed method outperforms mainstream aggregation methods on several evaluation indexes, such as weighted Hamming distance, correlation, information energy, and correlation coefficient. The proposed model provides a new technical path for the intelligent solution and theoretical expansion of high-dimensional fuzzy decision-making problems.

34 pages, 3169 KB  
Article
Cognitive Atrophy Paradox of AI–Human Interaction: From Cognitive Growth and Atrophy to Balance
by Igor Kabashkin
Information 2025, 16(11), 1009; https://doi.org/10.3390/info16111009 - 19 Nov 2025
Abstract
The rapid integration of artificial intelligence (AI) into professional, educational, and everyday cognitive processes has created a dual dynamic of cognitive growth and cognitive atrophy. This study introduces a unified theoretical and quantitative framework to analyze these opposing tendencies and their equilibrium, conceptualized as the cognitive co-evolution model. The model interprets human–AI interaction as a nonlinear process in which reflective engagement enhances metacognitive skills, while over-delegation to automation reduces analytical autonomy. To quantify this balance, the paper proposes the cognitive sustainability index (CSI) as a composite measure integrating five behavioral parameters representing autonomy, reflection, creativity, delegation, and reliance. Simulation examples and domain-specific illustrations, including the case of software developers, demonstrate how CSI values can reveal distinct cognitive zones ranging from atrophy to synergy. Building upon these findings, the paper develops the framework of applied cognitive management, which links cognitive monitoring with adaptive interventions across individual, educational, professional, and institutional levels. The results highlight the need for organizations and policymakers to monitor cognitive sustainability as a strategic indicator of digital transformation. Maintaining CSI above the sustainability threshold ensures that automation enhances rather than replaces human reasoning, creativity, and ethical responsibility. The study concludes by outlining methodological challenges and future research directions toward a quantitative science of cognitive sustainability and co-evolutionary human–AI ecosystems.
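As a rough illustration of how a composite index over the five named parameters might be computed, consider the sketch below. The equal weighting, the sign convention (delegation and reliance reduce sustainability), and the clamp to [0, 1] are all assumptions; the paper defines its own functional form:

```python
# Illustrative composite index over the five named behavioral parameters.
# Equal weights and the sign convention are assumptions, not the paper's formula.
def cognitive_sustainability_index(autonomy: float, reflection: float,
                                   creativity: float, delegation: float,
                                   reliance: float) -> float:
    positives = (autonomy + reflection + creativity) / 3.0   # growth-side factors
    negatives = (delegation + reliance) / 2.0                # atrophy-side factors
    return max(0.0, min(1.0, 0.5 + 0.5 * (positives - negatives)))  # clamp to [0, 1]

csi = cognitive_sustainability_index(0.7, 0.6, 0.5, 0.8, 0.9)
# compare against a sustainability threshold, e.g. 0.5, to classify the zone
```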

25 pages, 5621 KB  
Article
Balanced Neonatal Cry Classification: Integrating Preterm and Full-Term Data for RDS Screening
by Somaye Valizade Shayegh and Chakib Tadj
Information 2025, 16(11), 1008; https://doi.org/10.3390/info16111008 - 19 Nov 2025
Abstract
Respiratory distress syndrome (RDS) is one of the most serious neonatal conditions, frequently leading to respiratory failure and death in low-resource settings. Early detection is therefore critical, particularly where access to advanced diagnostic tools is limited. Recent advances in machine learning have enabled non-invasive neonatal cry diagnostic systems (NCDSs) for early screening. To the best of our knowledge, this is the first cry-based RDS detection study to include both preterm and full-term infants in a subject-balanced design, using 76 neonates (38 RDS, 38 healthy; 19 per subgroup) and 8534 expiratory cry segments (4267 per class). Cry waveforms were converted to mono, high-pass-filtered, and segmented to isolate expiratory units. Mel-Frequency Cepstral Coefficients (MFCCs) and Filterbank (FBANK) features were extracted and transformed into fixed-dimensional embeddings using a lightweight X-vector model with mean-SD or attention-based pooling, followed by a binary classifier. Model parameters were optimized via grid search. Performance was evaluated using accuracy, precision, recall, F1-score, and ROC–AUC under stratified 10-fold cross-validation. MFCC + mean–SD achieved 93.59 ± 0.48% accuracy, while MFCC + attention reached 93.53 ± 0.52% accuracy with slightly higher precision, reducing false RDS alarms and improving clinical reliability. To enhance interpretability, Integrated Gradients were applied to MFCC and FBANK features to reveal the spectral regions contributing most to the decision. Overall, the proposed NCDS reliably distinguishes RDS from healthy cries and generalizes across neonatal subgroups despite the greater variability in preterm vocalizations.
(This article belongs to the Special Issue Biomedical Signal and Image Processing with Artificial Intelligence)
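The MFCC-plus-mean-SD front end described above can be approximated in a few lines with librosa; the snippet below is a minimal sketch with assumed parameters (file name, sample rate, coefficient count, and pre-emphasis as a stand-in for the paper's high-pass filtering):

```python
# Expiratory-segment front end in the spirit of the paper: MFCC extraction and
# mean-SD pooling to a fixed-length vector. Parameters are illustrative.
import librosa
import numpy as np

y, sr = librosa.load("cry_segment.wav", sr=16000)   # placeholder file, assumed rate
y = librosa.effects.preemphasis(y)                  # stand-in for high-pass filtering
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # shape: (n_mfcc, n_frames)
embedding = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # mean-SD pooling
# `embedding` would feed the X-vector-style network and binary RDS classifier
```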

28 pages, 3335 KB  
Article
MDFA-AconvNet: A Novel Multiscale Dilated Fusion Attention All-Convolution Network for SAR Target Classification
by Jiajia Wang, Jun Liu, Pin Zhang, Qi Jia, Xin Yang, Shenyu Du and Xueyu Bai
Information 2025, 16(11), 1007; https://doi.org/10.3390/info16111007 - 19 Nov 2025
Abstract
Synthetic aperture radar (SAR) features all-weather, all-day imaging capabilities, long-range detection, and high resolution, making it indispensable for battlefield reconnaissance, target detection, and guidance. In recent years, deep learning has emerged as a prominent approach to SAR image target classification, owing to its hierarchical feature extraction, progressive refinement, and end-to-end learning capabilities. However, challenges such as the high cost of SAR data acquisition and the limited number of labeled samples often result in overfitting and poor model generalization. In addition, conventional convolutional layers typically operate with fixed receptive fields, making it difficult to simultaneously capture multiscale contextual information and dynamically focus on salient target features. To address these limitations, this paper proposes a novel architecture: the Multiscale Dilated Fusion Attention All-Convolution Network (MDFA-AconvNet). The model incorporates a multiscale dilated attention mechanism that significantly broadens the receptive field across varying target scales in SAR images without compromising spatial resolution, thereby enhancing multiscale feature extraction. Furthermore, by introducing both channel and spatial attention mechanisms, the model selectively emphasizes informative feature channels and spatial regions relevant to target recognition. These attention modules are seamlessly integrated into the All-Convolution Network (A-convNet) backbone, resulting in comprehensive performance improvements. Extensive experiments on the MSTAR dataset demonstrate that the proposed MDFA-AconvNet achieves a high classification accuracy of 99.38% across ten target classes, markedly outperforming the original A-convNet. These results highlight the model’s robustness to target variations and its significant potential for practical deployment, paving the way for more efficient SAR image classification and recognition systems.
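To make the core idea concrete, here is a minimal PyTorch sketch of a multi-branch dilated convolution block with squeeze-and-excitation-style channel attention. It illustrates the kind of module the abstract describes, not the published MDFA design:

```python
# Multi-branch dilated convolutions widen the receptive field without losing
# spatial resolution; a channel-attention gate then reweights the fused output.
import torch
import torch.nn as nn

class DilatedFusionBlock(nn.Module):
    def __init__(self, ch: int, dilations=(1, 2, 4)):
        super().__init__()
        # padding == dilation keeps the spatial size constant for 3x3 kernels
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in dilations
        )
        self.fuse = nn.Conv2d(ch * len(dilations), ch, 1)
        self.se = nn.Sequential(           # squeeze-and-excitation channel attention
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // 4, 1), nn.ReLU(),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid()
        )

    def forward(self, x):
        y = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        return y * self.se(y)              # emphasize informative channels
```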

21 pages, 2304 KB  
Article
Hierarchical Prompt Engineering and Task-Differentiated Low-Rank Adaptation for Artificial Intelligence-Generated Content Image Quality Assessment
by Minjuan Gao, Qiaorong Zhang, Chenye Song, Xuande Zhang and Yankang Li
Information 2025, 16(11), 1006; https://doi.org/10.3390/info16111006 - 19 Nov 2025
Abstract
Assessing the quality of Artificial Intelligence-Generated Content (AIGC) images remains a critical challenge, as conventional Image Quality Assessment (IQA) methods often fail to capture the semantic consistency between generated images and their textual prompts. This study aims to establish an interpretable and efficient multimodal framework for evaluating AIGC image quality. The research addresses three key scientific questions: how to leverage structured prompt semantics for more interpretable assessments, how to enable parameter-efficient yet accurate adaptation, and how to achieve unified handling of perceptual and semantic subtasks. To this end, we propose the Prompt-Enhanced Low-Rank Adaptation (PELA) framework, which integrates Hierarchical Prompt Engineering and Low-Rank Adaptation within a CLIP-based backbone. Hierarchical prompts encode multi-level semantics for fine-grained evaluation, while low-rank adaptation enables lightweight, task-specific optimization. Experiments conducted on AGIQA-1K, AGIQA-3K, and AIGCIQA-2023 datasets demonstrate that PELA achieves superior correlation with human perceptual judgments and sets new state-of-the-art results across multiple metrics. The findings confirm that combining structured prompt semantics with efficient adaptation offers a compact, interpretable, and scalable paradigm for multimodal image quality assessment.
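The low-rank adaptation ingredient is the textbook LoRA construction: a frozen pretrained linear layer plus a trainable low-rank update. A generic PyTorch sketch (not the authors' PELA code) looks like this:

```python
# Generic LoRA wrapper for a frozen linear layer: only the rank-r factors A, B
# are trained, so adaptation adds very few parameters.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: no-op at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```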

10 pages, 220 KB  
Article
Digital Yards, Tangible Gains: Evidence of Change in Third-Party Logistics Yard Performance
by Ziang Wang, Jinxuan Ma and Ting Wang
Information 2025, 16(11), 1005; https://doi.org/10.3390/info16111005 - 19 Nov 2025
Abstract
This study investigated the impact of a Yard Management System (YMS) implemented at a third-party logistics distribution center in the United States. Five years of operational data (2018–2022), including 72 monthly observations of inbound and outbound freight performance (measured in pounds) and detention occurrences (measured in US dollars), were analyzed using one-way ANOVA to assess pre- and post-implementation performance. The results indicated that the YMS significantly improved inbound and outbound freight volume, reduced detention occurrences, and enhanced operational efficiency within the third-party logistics distribution center. These findings suggest that YMS can be an effective tool for enhancing yard-level operational efficiency, reducing delays, and supporting broader supply chain optimization strategies in third-party logistics environments.
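The study's statistical test, a one-way ANOVA comparing pre- and post-implementation monthly observations, reduces to a single SciPy call in Python. The numbers below are synthetic placeholders, not the distribution center's data:

```python
# One-way ANOVA comparing monthly freight volume before vs. after the YMS,
# mirroring the study's design (synthetic values, not the 3PL's records).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
pre = rng.normal(1.00e6, 8e4, size=36)    # monthly pounds, pre-implementation
post = rng.normal(1.12e6, 8e4, size=36)   # monthly pounds, post-implementation
f_stat, p_value = f_oneway(pre, post)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```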
24 pages, 11339 KB  
Article
A Simulation Modeling of Temporal Multimodality in Online Streams
by Abdurrahman Alshareef
Information 2025, 16(11), 999; https://doi.org/10.3390/info16110999 - 18 Nov 2025
Abstract
Temporal variability in online streams arises in information systems where heterogeneous modalities exhibit varying latencies and delay distributions. Efficient synchronization strategies help to establish a reliable flow and ensure correct delivery. This work establishes a formal modeling foundation for addressing temporal dynamics in multimodal streams using a discrete-event system specification framework. The specification captures the different latencies and interarrival dynamics inherent in multimodal flows. The framework also incorporates a Markov variant to account for variations in delay processes, thereby capturing timing uncertainty within a single modality. The proposed models are modular, with built-in mechanisms for diverse temporal integration, thereby accommodating heterogeneity in information flows and communication. Various structural and behavioral forms can be flexibly represented and readily simulated. The devised experiments demonstrate, across several model permutations, the time-series behavior of individual stream components and of the overall composed system, highlighting performance metrics for both, quantifying composability and modular effects, and incorporating learnability into the simulation of multimodal streams. The primary motivation of this work is to improve the goodness of fit within formal simulation frameworks and to enable adaptive, learnable distribution modeling in multimodal settings that combine synthetic and real input data. We demonstrate the resulting errors and degradation when real sensor data are replaced with synthetic inputs at different dropping probabilities.
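A toy discrete-event sketch conveys the flavor of the setup: two modalities with different mean delays feed a single time-ordered event queue. This is an illustration in plain Python, not the paper's DEVS specification:

```python
# Two modalities with different exponential delay distributions merge into one
# time-ordered stream via a heapq event queue (illustrative parameters).
import heapq
import random

random.seed(0)
events = []  # (arrival_time, modality)
for modality, mean_delay in [("audio", 0.02), ("video", 0.04)]:
    t = 0.0
    for _ in range(100):
        t += random.expovariate(1.0 / mean_delay)  # interarrival + latency
        heapq.heappush(events, (t, modality))

while events:
    t, modality = heapq.heappop(events)  # time-ordered merged delivery
    # a synchronizer would buffer and align modalities here before consumption
```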

23 pages, 1145 KB  
Article
Fiscal Management and Artificial Intelligence as Strategies to Combat Corruption in Colombia
by Ana E. Monsalvo, Carlos M. Zuluaga-Pardo, Jaime A. Restrepo-Carmona, Lilibeth Aguilera-Pua, Juan C. Castaño, Edison F. Borda, Rosse M. Villamil, Hernán Felipe García and Luis Fletscher
Information 2025, 16(11), 998; https://doi.org/10.3390/info16110998 - 18 Nov 2025
Abstract
Corruption in Colombia remains a critical barrier to development, institutional trust, and equitable access to public services, despite legislative efforts such as the Anti-Corruption Statute. This article explores the intersection between fiscal management and artificial intelligence (AI) as integrated strategies for enhancing transparency, accountability, and risk assessment in public administration. Drawing on theoretical frameworks and empirical data from 2020 to 2022, this study analyzes the scale and impact of corruption and the effectiveness of oversight mechanisms led by the Comptroller General of the Republic (CGR). A key innovation examined is the implementation of a GPT-based scoring model that automates the evaluation of internal accounting controls in 219 public entities. By leveraging AI to support fiscal audits, Colombia demonstrates a scalable approach to modernizing anti-corruption practices. The study concludes with policy recommendations that emphasize digital transformation, institutional strengthening, citizen engagement, and capacity building to improve fiscal governance and reduce corruption.

34 pages, 1299 KB  
Article
Autoencoder-Based Poisoning Attack Detection in Graph Recommender Systems
by Quanqiang Zhou, Xi Zhao and Xiaoyue Zhang
Information 2025, 16(11), 1004; https://doi.org/10.3390/info16111004 - 18 Nov 2025
Abstract
Graph-based Recommender Systems (GRSs) model complex user–item relationships and offer improved accuracy and personalization in recommendations compared to traditional models. However, GRSs also face severe challenges from novel poisoning attacks, in which attackers manipulate the graph structure by injecting attack users and their interaction data, leading to misleading recommendations. Existing detection methods lack the ability to identify such attacks targeting graph-based systems. To address this, we propose AutoDAP, a novel autoencoder-based detection method for poisoning attacks in GRSs. AutoDAP first extracts key statistical features from user interaction data and fuses them with the original interaction information. An autoencoder architecture then processes this data: the encoder extracts deep features and feeds an output layer that produces classification probabilities, while the decoder reconstructs graph structure features. By jointly optimizing the classification and reconstruction losses, AutoDAP effectively integrates supervised and unsupervised signals, enhancing the detection of attack users. Evaluations on the MovieLens-10M dataset against various poisoning attacks, and on the Amazon dataset with real attack data, demonstrate AutoDAP’s superiority: it outperforms several representative baseline methods in both simulated (MovieLens) and real-world (Amazon) attack scenarios, demonstrating its effectiveness and robustness.
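The joint objective, a supervised classification loss plus an unsupervised reconstruction loss over a shared encoder, can be sketched in PyTorch as follows; the dimensions, architecture, and 0.5 loss weight are illustrative assumptions, not the published model:

```python
# Shared encoder feeding both a reconstruction decoder and an attack-user
# classifier; training jointly optimizes both losses.
import torch
import torch.nn as nn

class DetectorAE(nn.Module):
    def __init__(self, n_items: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_items, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_items)   # reconstructs interaction rows
        self.classifier = nn.Linear(hidden, 1)      # attack-user probability

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), torch.sigmoid(self.classifier(z))

model = DetectorAE(n_items=1000)
x = torch.rand(32, 1000)                  # user-item interaction rows (toy data)
y = torch.randint(0, 2, (32, 1)).float()  # 1 = injected attack user
recon, prob = model(x)
loss = nn.functional.binary_cross_entropy(prob, y) \
     + 0.5 * nn.functional.mse_loss(recon, x)  # supervised + unsupervised signals
```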

23 pages, 4124 KB  
Review
Defenses Against Adversarial Attacks on Object Detection: Methods and Future Directions
by Anant Thunuguntla, Prasad Tadepalli and Giuseppe Raffa
Information 2025, 16(11), 1003; https://doi.org/10.3390/info16111003 - 18 Nov 2025
Abstract
Object detection systems aim to classify and localize objects in an image or video. Over the past decade, we have seen how adversarial attacks can impact the performance of object detection systems and make them unusable in many situations. This survey summarizes different types of digital adversarial attacks bounded by Lp norms and the corresponding defense techniques for object detection systems. It categorizes the defenses into six groups, namely, preprocessing, adversarial training, detection of adversarial noise, architectural changes, ensemble defense, and certified defenses, and highlights the effectiveness of each technique. We end the paper with a discussion of the weaknesses of different defenses and possible approaches to make them stronger. Patch- or physical-based attacks are excluded from this survey, as they follow a different threat model.
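For readers new to the threat model, the simplest L-infinity-bounded digital attack in this family is FGSM, shown here as a generic PyTorch sketch on a classifier (attacks on detectors use task-specific losses, but the perturbation mechanics are the same):

```python
# Fast Gradient Sign Method: one L-infinity-bounded perturbation step in the
# direction that increases the loss (generic sketch, not from the survey).
import torch

def fgsm(model, x, y, loss_fn, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()                        # gradient w.r.t. input
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()  # stay in valid pixel range
```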

30 pages, 4825 KB  
Article
A Priority-Based Multiobjective Optimization Framework for Fair Profit Allocation in Cooperative Systems of Cross-Border E-Commerce Logistics Supply Chains
by Meng Zhang, Peng Jia and Yige Zhang
Information 2025, 16(11), 1002; https://doi.org/10.3390/info16111002 - 18 Nov 2025
Abstract
Cross-border e-commerce logistics supply chain alliances face the practical challenge of allocating profits in a way that is both fair and operationally controllable when member contributions and cooperation priorities jointly matter. This study proposes an integrated framework that first evaluates member contributions via a group decision-making (GDM) procedure to derive contribution weights, and then aggregates them to coalition-level importance scores to rank feasible cooperation structures. Building on these inputs, we formulate a Priority-Based Multiobjective Linear Programming (P-MOLP) model that performs tiered (priority) optimization under individual rationality and budget-balance constraints, thereby ensuring implementable allocations. Using alliance data, P-MOLP provides clearer structural differentiation than an ordinary goal-programming model: high-contribution, strategically central members receive larger shares, and low-contribution members receive less (“more-work-more-reward”). Unlike weighted-Shapley, which can violate individual rationality under extreme weights, P-MOLP is individually rational and budget-balanced, aligns better with observed practice, remains applicable when some coalition values are infeasible or missing, and attains ε-core near-stability. Priority weights serve as managerial levers to tune outcomes.
(This article belongs to the Section Information Theory and Methodology)
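Tiered (preemptive) optimization of this kind can be emulated by solving one linear program per priority level, fixing each level's optimum before moving to the next. The toy three-member example below uses SciPy and invented coefficients, not the paper's P-MOLP formulation:

```python
# Preemptive tiers: optimize priority 1, then optimize priority 2 subject to
# holding priority 1 at its optimal value (toy coefficients only).
from scipy.optimize import linprog

A_ub, b_ub = [[1, 1, 1]], [100]              # budget balance: shares <= total profit
bounds = [(10, None), (5, None), (5, None)]  # individual-rationality floors

# Tier 1: maximize member 1's share (linprog minimizes, so negate)
t1 = linprog(c=[-1, 0, 0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)

# Tier 2: maximize member 2's share while fixing tier-1's optimum
t2 = linprog(c=[0, -1, 0], A_ub=A_ub, b_ub=b_ub,
             A_eq=[[1, 0, 0]], b_eq=[t1.x[0]], bounds=bounds)
print(t2.x)  # resulting allocation after both tiers
```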

32 pages, 1807 KB  
Systematic Review
Artificial Intelligence and Crime in Latin America: A Multilingual Bibliometric Review (2010–2025)
by Félix Díaz, Nhell Cerna and Rafael Liza
Information 2025, 16(11), 1001; https://doi.org/10.3390/info16111001 - 18 Nov 2025
Abstract
Artificial intelligence is increasingly used to support public safety by predicting events, uncovering patterns, and informing decisions. In Latin America, where crime burdens are high and data systems are heterogeneous, a region-focused synthesis is needed to assess progress, identify gaps, and clarify operational implications. Accordingly, this PRISMA-guided, multilingual (English, Spanish, and Portuguese) bibliometric review synthesizes 146 peer-reviewed journal articles (2010–October 2025) to examine trends, methods, and application domains. Since 2018, publication output accelerated, peaking in 2024–2025. Regionally, Brazil leads within a multi-hub co-authorship network linking Latin American nodes to the United States and Spain; additional hubs include Colombia, Chile, Mexico, Ecuador, and Peru. Methodologically, three motifs dominate: temporal-dependence modeling; ensemble learners with cost-sensitive decision rules; and multimodal integration of remote sensing and computer vision with administrative data. At the application level, four families prevail: utility and fiscal-fraud analytics; environmental offenses with temporal modeling; cyber and platform-based analytics; and sensing, geospatial, and forensic workflows. However, evaluation practices are heterogeneous, with frequent risks of spatial or temporal leakage; moreover, reporting on fairness, accountability, and transparency is limited. In order to support responsible scaling, research directions include interoperable data governance, leakage-controlled and cost-sensitive evaluation, domain adaptation that accounts for spatial dependence, open and auditable benchmarks, and broader regional participation. To our knowledge, this review is one of the first multilingual, region-centered syntheses of artificial intelligence and crime in Latin America, and it establishes a reproducible baseline and an actionable evidence map that enable comparable, leakage-controlled evaluation and inform research, funding, and public safety policy in the region.

29 pages, 1481 KB  
Review
Business Resilience Through AI-Agent Automation for SMEs and Startups: A Review on Agile Marketing and CRM
by Hamed Hokmabadi, Seyed M. H. S. Rezvani, Hamid Hokmabadi and Nuno Marques de Almeida
Information 2025, 16(11), 1000; https://doi.org/10.3390/info16111000 - 18 Nov 2025
Abstract
Market volatility and resource constraints pose significant resilience challenges to small and medium-sized enterprises (SMEs). Although AI-agent automation, agile marketing, and customer relationship management (CRM) offer powerful individual solutions, their synergistic impact on SME resilience remains critically underexplored. This review bridges this gap by proposing an integrated, AI-driven resilience framework designed to enhance the adaptive capacity of smaller firms. Through a systematic analysis of 35 peer-reviewed articles, our study explicitly maps AI-agent automation, agile marketing, and CRM to the dynamic capabilities of sensing, seizing, and reconfiguring, clarifying the causal pathways to SME resilience. The framework defines key inputs (e.g., multi-channel customer data), processes (e.g., iterative sprints), and outputs (e.g., enhanced market responsiveness). We identify APIs and SaaS platforms as the critical technological backbone for implementation. The central finding is that this integrated model empowers SMEs to build dynamic resilience and achieve competitive parity through data-driven, automated workflows. Actionable recommendations include adopting API-first strategies, investing in workforce training, and prioritizing data security.

7 pages, 236 KB  
Article
Topological Formalism for Quantum Entanglement via B3 and S0 Mappings
by Sergio Manzetti
Information 2025, 16(11), 997; https://doi.org/10.3390/info16110997 - 17 Nov 2025
Abstract
We present two propositions and a theorem to establish a foundational framework for a novel perspective on quantum information framed in terms of differential geometry and topology. In particular, we show that the mapping to S0 naturally encodes the binary outcomes of entangled quantum states, providing a minimal yet powerful abstraction of quantum duality. Building on this, we introduce the concept of a discrete fiber bundle to represent quantum steering and correlations, where each fiber corresponds to the two possible measurement outcomes of entangled qubits. This construction offers a new topological viewpoint on quantum information, distinct from traditional Hilbert-space or metric-based approaches. The present work serves as a preliminary formulation of this framework, with further developments to follow.

21 pages, 1676 KB  
Article
Curriculum-Aware Cognitive Diagnosis via Graph Neural Networks
by Chensha Fu and Quanrong Fang
Information 2025, 16(11), 996; https://doi.org/10.3390/info16110996 - 17 Nov 2025
Abstract
Cognitive diagnosis is an important component of adaptive learning, as it infers learners’ latent knowledge states and enables tailored feedback. However, existing approaches often emphasize sequential modeling or latent factorization, while insufficiently incorporating curriculum structures that embody prerequisite relations. This gap constrains both predictive accuracy and pedagogical interpretability. To address this limitation, we propose a Curriculum-Aware Graph Neural Cognitive Diagnosis (CA-GNCD) framework that integrates curriculum priors into graph-based neural modeling. The framework combines graph representation learning, knowledge-prior fusion, and interpretability constraints to jointly capture relational dependencies among concepts and individual learner trajectories. Experiments on three widely used benchmark datasets, ASSISTments2017, EdNet-KT1, and Eedi, show that CA-GNCD achieves consistent improvements over classical probabilistic, psychometric, and recent neural baselines. On average, it improves AUC by more than 4.5 percentage points and exhibits relatively faster convergence, greater robustness to noisy conditions, and stronger cross-domain generalization. These results suggest that aligning diagnostic predictions with curriculum structures can enhance interpretability and reliability, offering implications for personalized learning support. While promising, further validation in diverse educational contexts is required to establish the generalizability and practical deployment of the proposed framework.
(This article belongs to the Section Artificial Intelligence)

28 pages, 1341 KB  
Article
HyEWCos: A Comparative Study of Hybrid Embedding and Weighting Techniques for Text Similarity in Short Subjective Educational Text
by Hendry Hendry, Tukino Tukino, Eko Sediyono, Ahmad Fauzi and Baenil Huda
Information 2025, 16(11), 995; https://doi.org/10.3390/info16110995 - 17 Nov 2025
Abstract
This study evaluates and contrasts the performance of varying combinations of embedding algorithms and weighting schemes in measuring perception-based text similarity using the Cosine Similarity approach. Within a structured experimental design, a hybrid model referred to as HyEWCos (Hybrid Embedding and Weighting for Cosine Similarity) was built, incorporating conventional embedding models (Word2Vec, FastText), transformer-based models (BERT, GPT), and statistical and linguistic word weighting schemes (TFIDF, BM25, POS-weighting, and N-weighting). The test results indicate that Word2Vec combined with the CBOW architecture and TFIDF weighting consistently returned the most reliable performance, with the lowest error values (RMSE and MAE of 0.9868) and the highest correlation with expert judgment (Pearson’s 0.524; Spearman’s 0.543). These results show that contextually conditioned distributional representation approaches are better at preserving the semantic subtlety of short, subjective texts than transformer models that are not fine-tuned. This work is unique in its evaluation framework because it integrates embedding and weighting approaches that have hitherto been examined mostly in isolation. The main contribution of the study is an experimental framework that serves as a foundation for building more stable and accurate text-based assessment systems. The research also demonstrates the need to choose representation methods according to data type and domain, and it opens the door to further research on adaptive hybrid models that combine the strengths of various approaches.
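The best-performing combination the study reports, CBOW Word2Vec vectors weighted by TF-IDF statistics and compared via cosine similarity, can be sketched as follows. The toy corpus and hyperparameters are assumptions, and idf values stand in for the study's exact weighting scheme:

```python
# CBOW Word2Vec + TF-IDF-weighted averaging + cosine similarity (toy sketch).
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the answer explains recursion well", "recursion is explained clearly"]
tokens = [d.split() for d in docs]

w2v = Word2Vec(tokens, vector_size=50, sg=0, min_count=1)  # sg=0 selects CBOW
tfidf = TfidfVectorizer().fit(docs)
weights = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))

def embed(words):
    # idf-weighted average of word vectors as a sentence representation
    vecs = [weights.get(w, 1.0) * w2v.wv[w] for w in words if w in w2v.wv]
    return np.mean(vecs, axis=0)

a, b = embed(tokens[0]), embed(tokens[1])
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```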

44 pages, 7333 KB  
Article
Understanding the Rise of Automated Machine Learning: A Global Overview and Topic Analysis
by George-Cristian Tătaru, Adriana Cosac, Ioana Ioanăș, Margareta-Stela Florescu, Mihai Orzan, Camelia Delcea and Liviu-Adrian Cotfas
Information 2025, 16(11), 994; https://doi.org/10.3390/info16110994 - 17 Nov 2025
Abstract
Automated Machine Learning (AutoML) has become an important area of modern artificial intelligence, enabling computers to automate the selection, training, and tuning of machine learning models and offering exciting opportunities for enhanced decision-making across various sectors. As the global adoption of machine learning technologies grows, the importance of understanding the development and proliferation of AutoML research grows as well, as highlighted by the increasing number of scientific papers published each year. The present paper explores the scientific literature associated with AutoML with the aim of highlighting emerging trends, key topics, and collaborative networks that have contributed to the rise of this field. Using data from the Institute for Scientific Information (ISI) Web of Science database, we analyzed 920 papers dedicated to AutoML research, extracted based on specific keywords. A key finding is the significant annual growth rate of 87.76%, which underscores the increasing interest of the academic community in AutoML. Furthermore, we employed n-gram analysis and reviewed the most cited papers in the database, providing a comprehensive bibliometric overview of the current state of AutoML research. Additionally, topic discovery has been conducted through the use of Latent Dirichlet Allocation (LDA) and BERTopic, showcasing the interest of the researchers in this area. The analysis is completed by a review of the most cited papers, as well as discussions of the papers in the research areas associated with AutoML. These findings offer valuable insights into the evolution of AutoML and highlight the key challenges and opportunities addressed by the academic community in this rapidly growing field.
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
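Topic discovery with LDA, one of the two techniques the authors apply, reduces to a few scikit-learn calls; the abstract snippets below are placeholders (the authors also used BERTopic):

```python
# LDA topic discovery over a toy set of abstracts (illustrative, not the
# study's 920-paper Web of Science corpus).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = ["automated machine learning pipeline search",
             "neural architecture search for automl systems",
             "hyperparameter optimization with bayesian methods"]
X = CountVectorizer(stop_words="english").fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)  # per-document topic mixture
```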

29 pages, 4291 KB  
Article
An AI-Based Sensorless Force Feedback in Robot-Assisted Minimally Invasive Surgery
by Doina Pisla, Nadim Al Hajjar, Gabriela Rus, Calin Popa, Bogdan Gherman, Andra Ciocan, Andrei Cailean, Corina Radu, Damien Chablat, Calin Vaida and Anca-Elena Iordan
Information 2025, 16(11), 993; https://doi.org/10.3390/info16110993 - 17 Nov 2025
Abstract
(1) Background: Most robotic MIS platforms lack native haptic feedback, leaving surgeons to infer tissue loads from vision alone—an especially risky limitation in esophageal procedures. (2) Methods: We develop a sensorless, image-only force-estimation pipeline that maps endoscopic video to tool–tissue forces using a lightweight EfficientNetV2B0 CNN. The model is trained on 9691 labeled frames from in vitro esophageal experiments and validated against an FT300 load cell. For intraoperative feasibility, the system is deployed as a plug-in on PARA-SILSROB, consuming the existing laparoscope feed and driving a commercial haptic device. The runtime processes every 10th frame of a 60 FPS stream (≈6 Hz updates) with ~15–20 ms per-prediction latency. (3) Results: On held-out tests, the model achieves MAE = 0.017 N and MSE = 0.0004 N², outperforming a recurrent CNN baseline while maintaining real-time performance on commodity hardware. Integrated evaluations confirm stable operation at the deployed update rate and low latency compatible with closed-loop haptics. (4) Conclusions: By avoiding distal force sensors and preserving sterile workflow, the approach is readily translatable and retrofit-friendly for current robotic platforms. The results support the practical feasibility of real-time, sensorless force feedback for robotic esophagectomy and related MIS tasks, with potential to reduce tissue trauma and enhance operative safety.
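The backbone named in the abstract is available off the shelf in Keras, so an image-to-force regression head can be sketched directly; the input size, pooling, and training configuration below are assumptions, not the deployed PARA-SILSROB pipeline:

```python
# EfficientNetV2B0 backbone with a scalar regression head for tool-tissue force.
import tensorflow as tf

base = tf.keras.applications.EfficientNetV2B0(
    include_top=False, input_shape=(224, 224, 3), pooling="avg")
force = tf.keras.layers.Dense(1)(base.output)   # scalar force estimate (N)
model = tf.keras.Model(base.input, force)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# trained on labeled endoscope frames; every 10th frame is fed at inference
```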

39 pages, 2153 KB  
Article
OSSAPTestingPlus: A Blockchain-Based Collaborative Framework for Enhancing Trust and Integrity in Distributed Agile Testing of Archaeological Photogrammetry Open-Source Software
by Omer Aziz, Muhammad Shoaib Farooq, Junaid Nasir Qureshi, Muhammad Faraz Manzoor and Momina Shaheen
Information 2025, 16(11), 992; https://doi.org/10.3390/info16110992 - 17 Nov 2025
Abstract
(1) Background: A blockchain-based framework for the distributed agile testing life cycle of Open-Source Software for Archaeological Photogrammetry (OSSAP) is an innovative approach that uses blockchain technology to optimize the OSSAP process. Previously, various methods have been employed to address communication and collaboration challenges in OSSAP, but they were inadequate in aspects such as trust, traceability, and security. Additionally, a significant cause of project failure was developers’ non-completion of unit testing, leading to delayed testing. (2) Methods: This article discusses the integration of blockchain technology into OSSAP and resolves critical concerns related to transparency, trust, coordination, testing, and communication. A novel approach is proposed based on a blockchain framework named OSSAPTestingPlus. (3) Results: The OSSAPTestingPlus framework utilizes blockchain technology to provide a secure and transparent platform for acceptance testing and payment verification. Moreover, by leveraging smart contracts on a private Ethereum blockchain, OSSAPTestingPlus ensures that the testing and development teams work toward a common goal and are compensated fairly for their contributions. (4) Conclusions: The experimental results conclusively show that this approach substantially improves transparency, trust, coordination, testing, and communication, and provides security for both the testing team and the development team engaged in the distributed agile OSSAP testing life cycle.
(This article belongs to the Special Issue Blockchain and AI: Innovations and Applications in ICT)

24 pages, 1156 KB  
Article
Efficient Transformer-Based Abstractive Urdu Text Summarization Through Selective Attention Pruning
by Muhammad Azhar, Adeen Amjad, Ghulam Farid, Deshinta Arrova Dewi and Malathy Batumalay
Information 2025, 16(11), 991; https://doi.org/10.3390/info16110991 - 16 Nov 2025
Abstract
In today’s data-driven world, automatic text summarization is essential for extracting insights from large data volumes. While extractive summarization is well-studied, abstractive summarization remains limited, especially for low-resource languages like Urdu. This study introduces process innovation through transformer-based models—Efficient-BART (EBART), Efficient-T5 (ET5), and Efficient-GPT-2 (EGPT-2)—optimized for Urdu abstractive summarization. Innovations include strategically removing inefficient attention heads to reduce computational complexity and improve accuracy. Theoretically, this pruning preserves structural integrity by retaining heads that capture diverse linguistic features, while eliminating redundant ones. Adapted from BART, T5, and GPT-2, these optimized models significantly outperform their originals in ROUGE evaluations, demonstrating the effectiveness of process innovation and optimization for Urdu natural language processing.
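Attention-head pruning of the kind described is supported natively by Hugging Face Transformers via prune_heads. The sketch below uses the English GPT-2 checkpoint and arbitrary head indices for illustration; the paper prunes its Urdu-adapted models based on measured head utility:

```python
# Structural removal of selected attention heads via Transformers' prune_heads.
# The checkpoint and head indices here are illustrative, not the paper's choices.
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")
# {layer_index: [head_indices]} judged inefficient by some importance measure
model.prune_heads({0: [0, 2], 5: [1]})
```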

29 pages, 4304 KB  
Review
From Pixels to Motion: A Systematic Analysis of Translation-Based Video Synthesis Techniques
by Pratim Saha and Chengcui Zhang
Information 2025, 16(11), 990; https://doi.org/10.3390/info16110990 - 16 Nov 2025
Abstract
Translation-based Video Synthesis (TVS) has emerged as a transformative technology that enables sophisticated manipulation and generation of dynamic visual content. This comprehensive survey systematically examines the evolution of TVS methodologies, encompassing both image-to-video (I2V) and video-to-video (V2V) translation approaches. We analyze the progression from domain-specific facial animation techniques to generalizable diffusion-based frameworks, investigating architectural innovations that address fundamental challenges in temporal consistency and cross-domain adaptation. Our investigation categorizes V2V methods into paired approaches, including conditional GAN-based frameworks and world-consistent synthesis, and unpaired approaches organized into five distinct paradigms: 3D GAN-based processing, temporal constraint mechanisms, optical flow integration, content-motion disentanglement learning, and extended image-to-image frameworks. Through comprehensive evaluation across diverse datasets, we analyze the performance using spatial quality metrics, temporal consistency measures, and semantic preservation indicators. We present a qualitative analysis comparing methods evaluated on identical benchmarks, revealing critical trade-offs between visual quality, temporal coherence, and computational efficiency. Current challenges persist in long-term temporal coherence, with future research directions identified in long-range video generation, audio-visual synthesis for enhanced realism, and development of comprehensive evaluation metrics that better capture human perceptual quality. This survey provides a structured understanding of methodological foundations, evaluation frameworks, and future research opportunities in TVS. We identify pathways for advancing cross-domain generalization, improving computational efficiency, and developing enhanced evaluation metrics for practical deployment, contributing to the broader understanding of temporal video synthesis technologies.
(This article belongs to the Special Issue Computer and Multimedia Technology)
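As one concrete instance of the temporal consistency measures surveyed here, the hypothetical helper below computes a flow-warped frame error with OpenCV: motion is estimated on the source video, generated frame t is warped forward, and the residual against generated frame t+1 is averaged. This is a simplified sketch; published metrics typically add occlusion masking, which is omitted for brevity.

```python
# Simplified flow-warping consistency check: estimate motion on the
# source video, warp generated frame t into the coordinates of t+1,
# and measure how far generated frame t+1 deviates from the warp.
import cv2
import numpy as np

def warping_error(gen_t, gen_t1, src_t, src_t1):
    """Flow-warped L1 error between consecutive generated frames
    (HxWx3 uint8 arrays). Lower values mean the generated video
    follows the source motion more faithfully."""
    g_prev = cv2.cvtColor(src_t, cv2.COLOR_BGR2GRAY)
    g_next = cv2.cvtColor(src_t1, cv2.COLOR_BGR2GRAY)
    # Backward flow: for each pixel of frame t+1, where it was at time t.
    flow = cv2.calcOpticalFlowFarneback(g_next, g_prev, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g_prev.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    warped = cv2.remap(gen_t, xs + flow[..., 0], ys + flow[..., 1],
                       interpolation=cv2.INTER_LINEAR)
    # Occlusion masking omitted for brevity.
    return float(np.abs(warped.astype(np.float32)
                        - gen_t1.astype(np.float32)).mean())
```

Averaging this error over all consecutive frame pairs yields a single per-video temporal consistency score.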
18 pages, 272 KB  
Article
Measuring Narrative Complexity Among Suicide Deaths in the National Violent Death Reporting System (2003–2021 NVDRS)
by Christina Chance, Alina Arseniev-Koehler, Vickie M. Mays, Kai-Wei Chang and Susan D. Cochran
Information 2025, 16(11), 989; https://doi.org/10.3390/info16110989 - 15 Nov 2025
Viewed by 313
Abstract
A widely used repository of violent death records is the U.S. Centers for Disease Control National Violent Death Reporting System (NVDRS). The NVDRS includes narrative data, which researchers frequently utilize to go beyond its structured variables. Prior work has shown that NVDRS narratives vary in length depending on decedent and incident characteristics, including race/ethnicity. Whether these length differences reflect differences in narrative information potential is unclear. We used the 2003–2021 NVDRS to investigate narrative length and complexity measures among 300,323 suicides varying in decedent and incident characteristics. To do so, we operationalized narrative complexity using three manifest measures: word count, sentence count, and dependency tree depth. We then employed regression methods to predict word counts and narrative complexity scores from decedent and incident characteristics. Both were consistently lower for black non-Hispanic decedents than for white non-Hispanic decedents. Although narrative complexity is just one aspect of narrative information potential, these findings suggest that the information in NVDRS narratives is more limited for some racial/ethnic minorities. Future studies, possibly leveraging large language models, are needed to develop robust measures to aid in determining whether narratives in the NVDRS have achieved their stated goal of fully describing the circumstances of suicide. Full article
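The three manifest measures are straightforward to reproduce; the sketch below computes them with spaCy. The model name en_core_web_sm and the choice of maximum per-sentence depth are assumptions for illustration; the authors' exact preprocessing and parser may differ.

```python
# Sketch of the three manifest complexity measures named above: word
# count, sentence count, and dependency tree depth (spaCy-based; not
# necessarily the authors' exact pipeline).
import spacy

nlp = spacy.load("en_core_web_sm")

def tree_depth(token, depth=1):
    """Depth of the dependency subtree rooted at `token`."""
    children = list(token.children)
    if not children:
        return depth
    return max(tree_depth(c, depth + 1) for c in children)

def narrative_complexity(text):
    doc = nlp(text)
    sents = list(doc.sents)
    return {
        "word_count": sum(1 for t in doc
                          if not t.is_punct and not t.is_space),
        "sentence_count": len(sents),
        # Max depth across sentences; mean depth is another common choice.
        "max_tree_depth": max(tree_depth(s.root) for s in sents) if sents else 0,
    }

print(narrative_complexity("The decedent, who had recently lost his job, "
                           "was found at home."))
```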
31 pages, 6557 KB  
Article
Sustaining CyberWater-VisTrails: A Case Study in Software Upgrades and Reengineering
by Drew Bieger, Ahmed Sheba, Adham M. Hassan, Sherif Aly, Ranran Chen, Xu Liang and Yao Liang
Information 2025, 16(11), 988; https://doi.org/10.3390/info16110988 - 15 Nov 2025
Viewed by 243
Abstract
This study focuses on the process of updating and upgrading a large-scale legacy software system to ensure its compatibility with modern computing environments. The evolution and maintenance of legacy software pose significant challenges in software engineering, especially given the rapid advancements in technology, computing platforms, and dependent libraries. These challenges become even more pronounced when new systems are built upon existing open-source software, which may become outdated due to discontinued maintenance or lack of community support. In this work, we examine the problem from a sustainable computing perspective through the case study of the CyberWater project—an innovative cyberinfrastructure framework designed to support open data access and open model integration in water science and engineering. CyberWater is built on top of VisTrails, an open-source scientific workflow system. VisTrails has not been actively maintained since 2017, requiring an upgrade to ensure CyberWater’s continued functionality, compatibility, and long-term sustainability. This paper presents our work on upgrading VisTrails, including the complete upgrade process, tools developed and utilized, testing strategies, and the final outcomes. We also share key experiences and lessons learned, with a focus on the sustainability challenges and considerations that arise when maintaining and evolving large-scale open-source software systems in scientific computing environments. Full article
(This article belongs to the Special Issue Optimization and Methodology in Software Engineering, 2nd Edition)
25 pages, 1230 KB  
Article
A Capability-Based Framework for Knowledge-Driven AI Innovation and Sustainability
by Márcia R. C. Santos, Luísa Cagica Carvalho and Edgar Francisco
Information 2025, 16(11), 987; https://doi.org/10.3390/info16110987 - 14 Nov 2025
Viewed by 558
Abstract
As artificial intelligence (AI) technologies increasingly shape sustainability agendas, organizations face the strategic challenge of aligning AI-driven innovation with long-term environmental and social goals. While academic interest in this intersection is growing, research remains fragmented and often lacks actionable insights into the organizational capabilities needed to operationalize sustainable AI innovation. This study addresses this gap by exploring how knowledge-based organizational capabilities—such as absorptive capacity, knowledge integration, organizational learning, and strategic leadership—support the alignment of AI initiatives with sustainability strategies. Grounded in the knowledge-based view of the firm, we conduct a bibliometric and thematic analysis of 216 peer-reviewed articles to identify emerging conceptual domains at the nexus of AI, innovation, and sustainability. The analysis reveals five dominant capability clusters: (1) data governance and decision intelligence; (2) policy-driven innovation and green transitions; (3) digital transformation through education and innovation; (4) collaborative adoption for sustainable outcomes; and (5) AI for smart cities and climate action. These clusters illuminate the multi-dimensional roles that knowledge management and organizational capabilities play in enabling responsible, impactful, and context-sensitive AI adoption. In addition to mapping the intellectual structure of the field, the study proposes a set of strategic and policy-oriented recommendations for applying these capabilities in practice. The findings offer both theoretical contributions and practical guidance for firms, policymakers, and educators seeking to embed sustainability into AI-driven transformation. This work advances the discourse on innovation and knowledge management by providing a structured, capability-based perspective for designing and implementing sustainable AI strategies. Full article
(This article belongs to the Special Issue Emerging Research in Knowledge Management and Innovation)
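As a rough illustration of how such capability clusters can be surfaced, the toy sketch below applies TF-IDF and k-means (k = 5, mirroring the five clusters reported) to a handful of placeholder abstracts standing in for the 216-article corpus; it is not the authors' bibliometric pipeline.

```python
# Toy thematic clustering sketch (not the authors' bibliometric tooling):
# TF-IDF vectors clustered with k-means, k = 5 to mirror the five
# capability clusters reported above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Placeholder corpus; in practice this would be the 216 abstracts.
abstracts = [
    "data governance and decision intelligence in AI-driven firms",
    "policy-driven innovation enabling green transitions",
    "digital transformation through education and innovation",
    "collaborative AI adoption for sustainable outcomes",
    "AI for smart cities and urban climate action",
    "organizational learning and absorptive capacity for AI strategy",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# Print the highest-weight terms in each cluster centroid.
terms = vec.get_feature_names_out()
for i, center in enumerate(km.cluster_centers_):
    top = center.argsort()[::-1][:5]
    print(f"cluster {i}:", ", ".join(terms[j] for j in top))
```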
20 pages, 342 KB  
Article
Synthetic Data Generation for Binary and Multi-Class Classification in the Health Domain
by Camila Guerreiro, Fátima Leal and Micaela Pinho
Information 2025, 16(11), 986; https://doi.org/10.3390/info16110986 - 14 Nov 2025
Viewed by 311
Abstract
The growing demand for data-driven solutions in healthcare is often hindered by limited access to high-quality datasets due to privacy concerns, data imbalance, and regulatory constraints. Synthetic data generation has emerged as a promising strategy to address these challenges by creating artificial yet statistically valid datasets that preserve the underlying patterns of real data without compromising patient confidentiality. This study explores methodologies for generating synthetic data tailored to binary and multi-class classification problems within the health domain. We employ advanced techniques such as probabilistic modelling, generative adversarial networks, and data augmentation strategies to replicate realistic feature distributions and class relationships. A comprehensive evaluation is conducted using benchmark healthcare datasets, measuring the fidelity, diversity, and utility of the synthetic data in downstream predictive modelling tasks. The original dataset consisted of 2125 imbalanced cases in both the binary and multi-class classification scenarios. Experimental results demonstrate that models trained on synthetic datasets achieve performance levels comparable to those trained on real data, particularly in scenarios with severe class imbalance. The findings underscore the potential of synthetic data as a privacy-preserving enabler for robust machine learning applications in healthcare, facilitating innovation while adhering to strict data protection regulations. Full article
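The fit/sample/evaluate loop described here can be sketched compactly. Below, a per-class multivariate Gaussian stands in for the probabilistic-modelling arm (GAN-based generators such as CTGAN slot into the same pattern), synthetic classes are balanced, and utility is checked by training on synthetic data and testing on real data (TSTR). The dataset is a stand-in generated with scikit-learn, not the 2125-case clinical dataset.

```python
# Minimal fit -> sample -> evaluate sketch: a per-class multivariate
# Gaussian stands in for the probabilistic-modelling arm, with utility
# checked train-on-synthetic / test-on-real (TSTR). The data below is a
# scikit-learn stand-in, not the study's clinical dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           weights=[0.7, 0.2, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Fit one Gaussian per class, then oversample so the synthetic
# training set is class-balanced.
rng = np.random.default_rng(0)
per_class = max(np.bincount(y_tr))
Xs, ys = [], []
for c in np.unique(y_tr):
    Xc = X_tr[y_tr == c]
    mu, cov = Xc.mean(axis=0), np.cov(Xc, rowvar=False)
    Xs.append(rng.multivariate_normal(mu, cov, size=per_class))
    ys.append(np.full(per_class, c))
X_syn, y_syn = np.vstack(Xs), np.concatenate(ys)

real = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
tstr = RandomForestClassifier(random_state=0).fit(X_syn, y_syn)
print("train-on-real  macro-F1:",
      f1_score(y_te, real.predict(X_te), average="macro"))
print("train-on-synth macro-F1:",
      f1_score(y_te, tstr.predict(X_te), average="macro"))
```

Comparable macro-F1 scores for the two classifiers would indicate that the synthetic data retained enough utility for downstream modelling, the criterion the abstract reports.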