Search Results (117)

Search Parameters:
Keywords = network management and orchestration

20 pages, 7704 KB  
Article
Seamless User-Generated Content Processing for Smart Media: Delivering QoE-Aware Live Media with YOLO-Based Bib Number Recognition
by Alberto del Rio, Álvaro Llorente, Sofia Ortiz-Arce, Maria Belesioti, George Pappas, Alejandro Muñiz, Luis M. Contreras and Dimitris Christopoulos
Electronics 2025, 14(20), 4115; https://doi.org/10.3390/electronics14204115 - 21 Oct 2025
Viewed by 314
Abstract
The increasing availability of User-Generated Content during large-scale events is transforming spectators into active co-creators of live narratives while simultaneously introducing challenges in managing heterogeneous sources, ensuring content quality, and orchestrating distributed infrastructures. A trial was conducted to evaluate automated orchestration, media enrichment, and real-time quality assessment in a live sporting scenario. A key innovation of this work is the use of a cloud-native architecture based on Kubernetes, enabling dynamic and scalable integration of smartphone streams and remote production tools into a unified workflow. The system also included advanced cognitive services, such as a Video Quality Probe for estimating perceived visual quality and an AI Engine based on YOLO models for detection and recognition of runners and bib numbers. Together, these components enable a fully automated workflow for live production, combining real-time analysis and quality monitoring, capabilities that previously required manual or offline processing. The results demonstrated consistently high Mean Opinion Score (MOS) values above 3 for 72.92% of the time, confirming acceptable perceived quality under real network conditions, while the AI Engine achieved strong performance with a Precision of 93.6% and Recall of 80.4%. Full article
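
As a rough, hypothetical illustration of the detection stage only (not the authors' implementation), the sketch below runs a YOLO model over frames of a live stream using the ultralytics package; the weights file, class names, confidence threshold, and stream URL are assumptions.

```python
# Hypothetical sketch of YOLO-based runner/bib detection on a video stream.
# The weights file ("bib_detector.pt"), class names, and ingest URL are
# assumptions; this is not the authors' code.
import cv2
from ultralytics import YOLO

model = YOLO("bib_detector.pt")  # assumed custom-trained weights

cap = cv2.VideoCapture("rtmp://example.org/live/stream")  # assumed ingest URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        cls_name = model.names[int(box.cls)]
        conf = float(box.conf)
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        if cls_name == "bib" and conf > 0.5:
            # A bib crop would be handed to a digit-recognition stage here.
            bib_crop = frame[y1:y2, x1:x2]
cap.release()
```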

19 pages, 685 KB  
Article
Intent-Based Resource Allocation in Edge and Cloud Computing Using Reinforcement Learning
by Dimitrios Konidaris, Polyzois Soumplis, Andreas Varvarigos and Panagiotis Kokkinos
Algorithms 2025, 18(10), 627; https://doi.org/10.3390/a18100627 - 4 Oct 2025
Viewed by 483
Abstract
Managing resource use in cloud and edge environments is crucial for optimizing performance and efficiency. Traditionally, this process is performed with detailed knowledge of the available infrastructure while being application-specific. However, it is common that users cannot accurately specify their applications’ low-level requirements, and they tend to overestimate them—a problem further intensified by their lack of detailed knowledge on the infrastructure’s characteristics. In this context, resource orchestration mechanisms perform allocations based on the provided worst-case assumptions, with a direct impact on the performance of the whole infrastructure. In this work, we propose a resource orchestration mechanism based on intents, in which users provide their high-level workload requirements by specifying their intended preferences for how the workload should be managed, such as prioritizing high capacity, low cost, or other criteria. Building on this, the proposed mechanism dynamically assigns resources to applications through a Reinforcement Learning method leveraging the feedback from the users and infrastructure providers’ monitoring system. We formulate the respective problem as a discrete-time, finite horizon Markov decision process. Initially, we solve the problem using a tabular Q-learning method. However, due to the large state space inherent in real-world scenarios, we also employ Deep Reinforcement Learning, utilizing a neural network for the Q-value approximation. The presented mechanism is capable of continuously adapting the manner in which resources are allocated based on feedback from users and infrastructure providers. A series of simulation experiments were conducted to demonstrate the applicability of the proposed methodologies in intent-based resource allocation, examining various aspects and characteristics and performing comparative analysis. Full article
(This article belongs to the Special Issue Emerging Trends in Distributed AI for Smart Environments)
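
As a rough sketch of the tabular Q-learning component described in the abstract, the snippet below shows an epsilon-greedy, one-step Q-learning update over intent-labelled states; the state encoding, action set, and reward signal are placeholders, not the paper's formulation.

```python
# Minimal tabular Q-learning sketch for intent-driven resource allocation.
# States, actions, and the reward are illustrative placeholders.
import random
from collections import defaultdict

actions = ["edge_small", "edge_large", "cloud_standard", "cloud_highcap"]
Q = defaultdict(float)          # Q[(state, action)] -> value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def choose_action(state):
    """Epsilon-greedy selection over the candidate placements."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning update from user/infrastructure feedback."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Example step: an intent favouring low cost is part of the observed state.
s = ("intent=low_cost", "load=medium")
a = choose_action(s)
update(s, a, reward=1.0, next_state=("intent=low_cost", "load=high"))
```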

12 pages, 284 KB  
Article
AI-Enabled Secure and Scalable Distributed Web Architecture for Medical Informatics
by Marian Ileana, Pavel Petrov and Vassil Milev
Appl. Sci. 2025, 15(19), 10710; https://doi.org/10.3390/app151910710 - 4 Oct 2025
Viewed by 585
Abstract
Current medical informatics systems face critical challenges, including limited scalability across distributed institutions, insufficient real-time AI-driven decision support, and lack of standardized interoperability for heterogeneous medical data exchange. To address these challenges, this paper proposes a novel distributed web system architecture for medical informatics, integrating artificial intelligence techniques and cloud-based services. The system ensures interoperability via HL7 FHIR standards and preserves data privacy and fault tolerance across interconnected medical institutions. A hybrid AI pipeline combining principal component analysis (PCA), K-Means clustering, and convolutional neural networks (CNNs) is applied to diffusion tensor imaging (DTI) data for early detection of neurological anomalies. The architecture leverages containerized microservices orchestrated with Docker Swarm, enabling adaptive resource management and high availability. Experimental validation confirms reduced latency, improved system reliability, and enhanced compliance with medical data exchange protocols. Results demonstrate superior performance with an average latency of 94 ms, a diagnostic accuracy of 91.3%, and enhanced clinical workflow efficiency compared to traditional monolithic architectures. The proposed solution successfully addresses scalability limitations while maintaining data security and regulatory compliance across multi-institutional deployments. This work contributes to the advancement of intelligent, interoperable, and scalable e-health infrastructures aligned with the evolution of digital healthcare ecosystems. Full article
(This article belongs to the Special Issue Data Science and Medical Informatics)
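
A minimal sketch of the first two stages of the described hybrid pipeline (PCA followed by K-Means), run on synthetic feature vectors standing in for DTI-derived features; the CNN stage is omitted and all parameter choices are assumptions.

```python
# Sketch of the PCA + K-Means stages of the hybrid pipeline described above,
# on synthetic data; the CNN classification stage is omitted.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))   # 200 scans x 64 extracted features

reduced = PCA(n_components=10).fit_transform(features)            # dimensionality reduction
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)

# Cluster labels could then be used to pre-sort cases before CNN analysis.
print(np.bincount(labels))
```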

29 pages, 2319 KB  
Article
Research on the Development of a Building Model Management System Integrating MQTT Sensing
by Ziang Wang, Han Xiao, Changsheng Guan, Liming Zhou and Daiguang Fu
Sensors 2025, 25(19), 6069; https://doi.org/10.3390/s25196069 - 2 Oct 2025
Viewed by 713
Abstract
Existing building management systems face critical limitations in real-time data integration, primarily relying on static models that lack dynamic updates from IoT sensors. To address this gap, this study proposes a novel system integrating MQTT over WebSocket with Three.js visualization, enabling real-time sensor-data binding to Building Information Models (BIM). The architecture leverages MQTT's lightweight publish-subscribe protocol for efficient communication and employs a TCP-based retransmission mechanism to ensure 99.5% data reliability in unstable networks. A dynamic topic-matching algorithm is introduced to automate sensor-BIM associations, reducing manual configuration time by 60%. The system's frontend, powered by Three.js, achieves browser-based 3D visualization with sub-second updates (280–550 ms latency), while the backend utilizes SpringBoot for scalable service orchestration. Experimental evaluations across diverse environments, including high-rise offices, industrial plants, and residential complexes, demonstrate the system's robustness: fire alarms are triggered within 2.1 s (22% faster than legacy systems), availability remains at 98.2% under 30% packet loss, and facility managers report a satisfaction score of 4.6/5. This work advances intelligent building management by bridging IoT data with interactive 3D models, offering a scalable solution for emergency response, energy optimization, and predictive maintenance in smart cities. Full article
(This article belongs to the Section Intelligent Sensors)
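
For illustration only, the snippet below subscribes to sensor readings over MQTT-over-WebSocket and parses topics into building/sensor identifiers in the spirit of the described topic matching; the broker address, port, WebSocket path, topic scheme, and payload format are all assumptions.

```python
# Illustrative MQTT-over-WebSocket subscriber that maps topics onto model
# element identifiers. Broker, port, path, and topic layout are assumptions.
# Written against paho-mqtt >= 2.0.
import json
import paho.mqtt.client as mqtt

TOPIC_FILTER = "building/+/sensor/#"   # assumed scheme: building/<floor>/sensor/<id>

def on_message(client, userdata, msg):
    # e.g. topic "building/3/sensor/temp-12" -> floor "3", sensor "temp-12"
    _, floor, _, sensor_id = msg.topic.split("/", 3)
    reading = json.loads(msg.payload)
    print(f"floor {floor}, sensor {sensor_id}: {reading}")
    # The reading would be bound to the matching BIM element and pushed to
    # the Three.js frontend here.

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, transport="websockets")
client.ws_set_options(path="/mqtt")            # assumed WebSocket path
client.on_message = on_message
client.connect("broker.example.org", 8083)     # assumed broker and port
client.subscribe(TOPIC_FILTER)
client.loop_forever()
```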

37 pages, 3222 KB  
Article
Unified Distributed Machine Learning for 6G Intelligent Transportation Systems: A Hierarchical Approach for Terrestrial and Non-Terrestrial Networks
by David Naseh, Arash Bozorgchenani, Swapnil Sadashiv Shinde and Daniele Tarchi
Network 2025, 5(3), 41; https://doi.org/10.3390/network5030041 - 17 Sep 2025
Viewed by 587
Abstract
The successful integration of Terrestrial and Non-Terrestrial Networks (T/NTNs) in 6G is poised to revolutionize demanding domains like Earth Observation (EO) and Intelligent Transportation Systems (ITSs). Still, it requires Distributed Machine Learning (DML) frameworks that are scalable, private, and efficient. Existing methods, such as Federated Learning (FL) and Split Learning (SL), face critical limitations in terms of client computation burden and latency. To address these challenges, this paper proposes a novel hierarchical DML paradigm. We first introduce Federated Split Transfer Learning (FSTL), a foundational framework that synergizes FL, SL, and Transfer Learning (TL) to enable efficient, privacy-preserving learning within a single client group. We then extend this concept to the Generalized FSTL (GFSTL) framework, a scalable, multi-group architecture designed for complex and large-scale networks. GFSTL orchestrates parallel training across multiple client groups managed by intermediate servers (RSUs/HAPs) and aggregates them at a higher-level central server, significantly enhancing performance. We apply this framework to a unified T/NTN architecture that seamlessly integrates vehicular, aerial, and satellite assets, enabling advanced applications in 6G ITS and EO. Comprehensive simulations using the YOLOv5 model on the Cityscapes dataset validate our approach. The results show that GFSTL not only achieves faster convergence and higher detection accuracy but also substantially reduces communication overhead compared to baseline FL, and critically, both detection accuracy and end-to-end latency remain essentially invariant as the number of participating users grows, making GFSTL especially well suited for large-scale heterogeneous 6G ITS deployments. We also provide a formal latency decomposition and analysis that explains this scaling behavior. This work establishes GFSTL as a robust and practical solution for enabling the intelligent, connected, and resilient ecosystems required for next-generation transportation and environmental monitoring. Full article
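
The two-level aggregation idea behind GFSTL (group-level averaging at RSUs/HAPs followed by central averaging) can be sketched as below; the split- and transfer-learning stages are omitted and the model weights are synthetic NumPy arrays, not the paper's setup.

```python
# Toy two-level aggregation in the spirit of the hierarchical GFSTL idea:
# intermediate servers average their group's client updates, and a central
# server averages the group models. Weights are synthetic.
import numpy as np

def average(weight_sets):
    """Element-wise mean of a list of weight vectors."""
    return np.mean(np.stack(weight_sets), axis=0)

# Assumed layout: 3 groups x 4 clients, each with a 10-parameter update.
rng = np.random.default_rng(1)
groups = [[rng.normal(size=10) for _ in range(4)] for _ in range(3)]

group_models = [average(clients) for clients in groups]   # RSU/HAP level
global_model = average(group_models)                       # central server level
print(global_model.shape)
```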

40 pages, 2568 KB  
Review
Intelligent Edge Computing and Machine Learning: A Survey of Optimization and Applications
by Sebastián A. Cajas Ordóñez, Jaydeep Samanta, Andrés L. Suárez-Cetrulo and Ricardo Simón Carbajo
Future Internet 2025, 17(9), 417; https://doi.org/10.3390/fi17090417 - 11 Sep 2025
Cited by 3 | Viewed by 4470
Abstract
Intelligent edge machine learning has emerged as a paradigm for deploying smart applications across resource-constrained devices in next-generation network infrastructures. This survey addresses the critical challenges of implementing machine learning models on edge devices within distributed network environments, including computational limitations, memory constraints, and energy-efficiency requirements for real-time intelligent inference. We provide comprehensive analysis of soft computing optimization strategies essential for intelligent edge deployment, systematically examining model compression techniques including pruning, quantization methods, knowledge distillation, and low-rank decomposition approaches. The survey explores intelligent MLOps frameworks tailored for network edge environments, addressing continuous model adaptation, monitoring under data drift, and federated learning for distributed intelligence while preserving privacy in next-generation networks. Our work covers practical applications across intelligent smart agriculture, energy management, healthcare, and industrial monitoring within network infrastructures, highlighting domain-specific challenges and emerging solutions. We analyze specialized hardware architectures, cloud offloading strategies, and distributed learning approaches that enable intelligent edge computing in heterogeneous network environments. The survey identifies critical research gaps in multimodal model deployment, streaming learning under concept drift, and integration of soft computing techniques with intelligent edge orchestration frameworks for network applications. These gaps directly manifest as open challenges in balancing computational efficiency with model robustness due to limited multimodal optimization techniques, developing sustainable intelligent edge AI systems arising from inadequate streaming learning adaptation, and creating adaptive network applications for dynamic environments resulting from insufficient soft computing integration. This comprehensive roadmap synthesizes current intelligent edge machine learning solutions with emerging soft computing approaches, providing researchers and practitioners with insights for developing next-generation intelligent edge computing systems that leverage machine learning capabilities in distributed network infrastructures. Full article
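
Two of the compression techniques surveyed (magnitude-based pruning and post-training dynamic quantization) can be illustrated on a toy PyTorch model; the layer sizes and sparsity level below are arbitrary choices for the sketch.

```python
# Magnitude pruning and post-training dynamic quantization on a toy model.
# Layer sizes and the 50% sparsity level are illustrative only.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Prune 50% of the smallest-magnitude weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")   # make the pruning permanent

# Dynamic quantization of the remaining Linear layers to int8.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized(torch.randn(1, 128)).shape)
```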

14 pages, 2916 KB  
Article
Temporal Molecular Signatures of Early Human Clavicle Fracture Healing: Characterization of Hematological, Cytokine, and miRNA Profiles
by Li Wan, Sandra Failer, Nadja Muehlhaupt, Christina Schwenk, Peter Biberthaler, Conrad Ketzer, Gregor Roemmermann, Olivia Bohe and Marc Hanschen
Int. J. Mol. Sci. 2025, 26(18), 8825; https://doi.org/10.3390/ijms26188825 - 10 Sep 2025
Viewed by 483
Abstract
Fracture healing failure affects millions globally, yet early molecular mechanisms remain poorly understood. This study aimed to characterize the initial fracture response by analyzing peripheral blood hematology, multiplex cytokine profiles, and microRNA (miRNA) expression in fracture hematoma within the first 5 days post-injury. In a prospective cohort of 64 patients with acute clavicle fractures, we assessed hematological parameters, cytokine levels via multiplex immunoassays, and miRNA expression through RNA sequencing and quantitative PCR (qPCR) validation. Fracture severity and time elapsed post-injury were key drivers of molecular response variability. Severe fractures (type C) were associated with older patient age and impaired hematological parameters, including reduced hemoglobin, erythrocyte counts, and hematocrit. Leukocyte counts declined over time, reflecting evolving systemic inflammation. Severity-dependent cytokines included eotaxin, interferon alpha-2 (IFNα2), interleukin-1 alpha (IL-1α), and macrophage inflammatory protein-1 alpha (MIP-1α), whereas interferon gamma-induced protein 10 (IP-10) and MIP-1α distinguished temporal healing phases. MiRNA profiling revealed 55 miRNAs with significant time-dependent expression changes (27 downregulated, 28 upregulated). Five key miRNAs (miR-140-5p, miR-181a-5p, miR-214-3p, miR-23a-3p, miR-98-5p) showed robust temporal patterns and enrichment in cytokine signaling pathways critical for bone repair. This work presents the first detailed molecular portrait of early human fracture healing, highlighting hematological, immune cytokine, and miRNA networks orchestrating repair. These insights provide a foundation for biomarker development to predict healing outcomes and support precision-targeted interventions in fracture management. Full article
(This article belongs to the Special Issue Bone Metabolism and Bone Diseases)

19 pages, 2520 KB  
Article
Research on a Blockchain-Based Quality and Safety Traceability System for Hymenopellis raphanipes
by Wei Xu, Hongyan Guo, Xingguo Zhang, Mingxia Lin and Pingzeng Liu
Sustainability 2025, 17(16), 7413; https://doi.org/10.3390/su17167413 - 16 Aug 2025
Cited by 1 | Viewed by 1198
Abstract
Hymenopellis raphanipes is a high-value edible fungus with a short shelf life and high perishability, which poses significant challenges for quality control and safety assurance throughout its supply chain. Ensuring effective traceability is essential for improving production management, strengthening consumer trust, and supporting brand development. This study proposes a comprehensive traceability system tailored to the full lifecycle of Hymenopellis raphanipes, addressing the operational needs of producers and regulators alike. Through detailed analysis of the entire supply chain, from raw material intake, cultivation, and processing to logistics and sales, the system defines standardized traceability granularity and a unique hierarchical coding scheme. A multi-layered system architecture is designed, comprising a data acquisition layer, network transmission layer, storage management layer, service orchestration layer, business logic layer, and user interaction layer, ensuring modularity, scalability, and maintainability. To address performance bottlenecks in traditional systems, a multi-chain collaborative traceability model is introduced, integrating a mainchain–sidechain storage mechanism with an on-chain/off-chain hybrid management strategy. This approach effectively mitigates storage overhead and enhances response efficiency. Furthermore, data integrity is verified through hash-based validation, supporting high-throughput queries and reliable traceability. Experimental results from its real-world deployment demonstrate that the proposed system significantly outperforms traditional single-chain models in terms of query latency and throughput. The solution enhances data transparency and regulatory efficiency, promotes sustainable practices in green agricultural production, and offers a scalable reference model for the traceability of other high-value agricultural products. Full article
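
A simplified sketch of the hash-based validation used to tie off-chain traceability records to on-chain digests; the record fields and canonicalization choice are illustrative assumptions, not the deployed system's format.

```python
# Simplified hash-based integrity check for off-chain traceability records:
# the chain stores only the SHA-256 digest of each full off-chain record.
# Record fields are invented examples.
import hashlib
import json

def record_digest(record: dict) -> str:
    """Deterministic SHA-256 digest of a traceability record."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

record = {"batch": "HR-2024-0815", "stage": "processing", "temp_c": 4.2}
on_chain_hash = record_digest(record)            # stored on the main/side chain

# Later, a verifier recomputes the digest from the off-chain store.
assert record_digest(record) == on_chain_hash     # tamper check passes
```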

32 pages, 2917 KB  
Article
Self-Adapting CPU Scheduling for Mixed Database Workloads via Hierarchical Deep Reinforcement Learning
by Suchuan Xing, Yihan Wang and Wenhe Liu
Symmetry 2025, 17(7), 1109; https://doi.org/10.3390/sym17071109 - 10 Jul 2025
Cited by 3 | Viewed by 1352
Abstract
Modern database systems require autonomous CPU scheduling frameworks that dynamically optimize resource allocation across heterogeneous workloads while maintaining strict performance guarantees. We present a novel hierarchical deep reinforcement learning framework augmented with graph neural networks to address CPU scheduling challenges in mixed database environments comprising Online Transaction Processing (OLTP), Online Analytical Processing (OLAP), vector processing, and background maintenance workloads. Our approach introduces three key innovations: first, a symmetric two-tier control architecture where a meta-controller allocates CPU budgets across workload categories using policy gradient methods while specialized sub-controllers optimize process-level resource allocation through continuous action spaces; second, graph neural network-based dependency modeling that captures complex inter-process relationships and communication patterns while preserving inherent symmetries in database architectures; and third, meta-learning integration with curiosity-driven exploration enabling rapid adaptation to previously unseen workload patterns without extensive retraining. The framework incorporates a multi-objective reward function balancing Service Level Objective (SLO) adherence, resource efficiency, symmetric fairness metrics, and system stability. Experimental evaluation through high-fidelity digital twin simulation and production deployment demonstrates substantial performance improvements: 43.5% reduction in p99 latency violations for OLTP workloads and 27.6% improvement in overall CPU utilization, with successful scaling to 10,000 concurrent processes maintaining sub-3% scheduling overhead. This work represents a significant advancement toward truly autonomous database resource management, establishing a foundation for next-generation self-optimizing database systems with implications extending to broader orchestration challenges in cloud-native architectures. Full article
(This article belongs to the Section Computer)
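
The multi-objective reward described in the abstract can be sketched as a weighted combination of normalized terms; the weights and the exact terms below are placeholders rather than the paper's actual function.

```python
# Sketch of a multi-objective scheduling reward combining SLO adherence,
# utilization, fairness, and stability. Weights and terms are placeholders.
def reward(slo_violations, cpu_util, fairness, throttle_events,
           w_slo=0.4, w_util=0.3, w_fair=0.2, w_stab=0.1):
    """All inputs are normalized to [0, 1]; higher reward is better."""
    return (w_slo * (1.0 - slo_violations)       # fewer p99 SLO violations
            + w_util * cpu_util                  # higher CPU utilization
            + w_fair * fairness                  # e.g. Jain's index across workloads
            + w_stab * (1.0 - throttle_events))  # fewer disruptive reallocations

print(reward(slo_violations=0.05, cpu_util=0.72, fairness=0.9, throttle_events=0.1))
```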

16 pages, 1966 KB  
Article
DRL-Driven Intelligent SFC Deployment in MEC Workload for Dynamic IoT Networks
by Seyha Ros, Intae Ryoo and Seokhoon Kim
Sensors 2025, 25(14), 4257; https://doi.org/10.3390/s25144257 - 8 Jul 2025
Cited by 1 | Viewed by 669
Abstract
The rapid increase in the deployment of Internet of Things (IoT) sensor networks has led to an exponential growth in data generation and an unprecedented demand for efficient resource management infrastructure. Ensuring end-to-end communication across multiple heterogeneous network domains is crucial to maintaining Quality of Service (QoS) requirements, such as low latency and high computational capacity, for IoT applications. However, limited computing resources at multi-access edge computing (MEC), coupled with increasing IoT network requests during task offloading, often lead to network congestion, service latency, and inefficient resource utilization, degrading overall system performance. This paper proposes an intelligent task offloading and resource orchestration framework to address these challenges, thereby optimizing energy consumption, computational cost, network congestion, and service latency in dynamic IoT-MEC environments. The framework introduces task offloading and a dynamic resource orchestration strategy, where task offloading to the MEC server ensures an efficient distribution of computation workloads. The dynamic resource orchestration process, Service Function Chaining (SFC) for Virtual Network Functions (VNFs) placement, and routing path determination optimize service execution across the network. To achieve adaptive and intelligent decision-making, the proposed approach leverages Deep Reinforcement Learning (DRL) to dynamically allocate resources and offload task execution, thereby improving overall system efficiency and addressing the optimal policy in edge computing. Deep Q-network (DQN), which is leveraged to learn an optimal network resource adjustment policy and task offloading, ensures flexible adaptation in SFC deployment evaluations. The simulation result demonstrates that the DRL-based scheme significantly outperforms the reference scheme in terms of cumulative reward, reduced service latency, lowered energy consumption, and improved delivery and throughput. Full article
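
As a minimal sketch of the DQN-based decision step, the snippet below performs epsilon-greedy action selection over candidate MEC nodes with a small PyTorch Q-network; the state features and action set are invented for illustration and the training loop is omitted.

```python
# Minimal DQN-style action selection for task offloading / VNF placement.
# State encoding (node loads, link delays) and action set are placeholders.
import random
import torch
import torch.nn as nn

N_STATE, N_ACTIONS = 8, 4        # 8 state features, 4 candidate MEC nodes

q_net = nn.Sequential(nn.Linear(N_STATE, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

def select_action(state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice of the MEC node for the next offloaded task."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

state = torch.rand(N_STATE)      # e.g. normalized node loads and link delays
print(select_action(state))
```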

16 pages, 467 KB  
Article
A Socially Assistive Robot as Orchestrator of an AAL Environment for Seniors
by Carlos E. Sanchez-Torres, Ernesto A. Lozano, Irvin H. López-Nava, J. Antonio Garcia-Macias and Jesus Favela
Technologies 2025, 13(6), 260; https://doi.org/10.3390/technologies13060260 - 19 Jun 2025
Viewed by 673
Abstract
Social robots in Ambient Assisted Living (AAL) environments offer a promising alternative for enhancing senior care by providing companionship and functional support. These robots can serve as intuitive interfaces to complex smart home systems, allowing seniors and caregivers to easily control their environment and access various assistance services through natural interactions. By combining the emotional engagement capabilities of social robots with the comprehensive monitoring and support features of AAL, this integrated approach can potentially improve the quality of life and independence of elderly individuals while alleviating the burden on human caregivers. This paper explores the integration of social robotics with ambient assisted living (AAL) technologies to enhance elderly care. We propose a novel framework where a social robot is the central orchestrator of an AAL environment, coordinating various smart devices and systems to provide comprehensive support for seniors. Our approach leverages the social robot’s ability to engage in natural interactions while managing the complex network of environmental and wearable sensors and actuators. In this paper, we focus on the technical aspects of our framework. A computational P2P notebook is used to customize the environment and run reactive services. Machine learning models can be included for real-time recognition of gestures, poses, and moods to support non-verbal communication. We describe scenarios to illustrate the utility and functionality of the framework and how the robot is used to orchestrate the AAL environment to contribute to the well-being and independence of elderly individuals. We also address the technical challenges and future directions for this integrated approach to elderly care. Full article

29 pages, 1145 KB  
Article
What Drives Successful Campus Living Labs? The Case of Utrecht University
by Claudia Stuckrath, Maryse M. H. Chappin and Ernst Worrell
Sustainability 2025, 17(12), 5506; https://doi.org/10.3390/su17125506 - 14 Jun 2025
Cited by 1 | Viewed by 1548
Abstract
Campus living labs (CLLs) foster sustainability within higher education institutions (HEIs), yet their institutional embedding remains challenging. Relying on the idea of strategic niche management (SNM), this paper examines three processes key to protected space development: vision articulation, social network building, and learning. This research explores the factors that enable the development of protected spaces for successful CLLs. Using an embedded case study approach, seven sustainability initiatives were analysed at Utrecht University, the Netherlands. We found that the perceived success in CLLs is related to sustainability outcomes, scaling pathways, and process outcomes. In addition, different groups of factors driving the development of protected spaces were identified: broad factors that contribute to all or multiple key processes, specific factors that support only one process, and peripheral factors that were less frequently mentioned. ‘Organisational culture’ appeared to be an important broad factor contributing to all key processes. ‘Resources’ and ‘Coordination’ were also important, specifically for social network building, but also mentioned as currently being absent by many. Finally, this paper contributes by incorporating a new factor, ‘Orchestration’, a subtle yet strategic form of coordination. It offers insights for HEIs aiming to develop CLLs as part of their sustainability strategy. Full article
(This article belongs to the Special Issue Sustainable Impact and Systemic Change via Living Labs)

23 pages, 2071 KB  
Systematic Review
Creating Value in Metaverse-Driven Global Value Chains: Blockchain Integration and the Evolution of International Business
by Sina Mirzaye Shirkoohi and Muhammad Mohiuddin
J. Theor. Appl. Electron. Commer. Res. 2025, 20(2), 126; https://doi.org/10.3390/jtaer20020126 - 2 Jun 2025
Cited by 3 | Viewed by 1591
Abstract
The convergence of blockchain and metaverse technologies is poised to redefine how Global Value Chains (GVCs) create, capture, and distribute value, yet scholarly insight into their joint impact remains scattered. Addressing this gap, the present study aims to clarify where, how, and under what conditions blockchain-enabled transparency and metaverse-enabled immersion enhance GVC performance. A systematic literature review (SLR), conducted according to PRISMA 2020 guidelines, screened 300 articles from ABI Global, Business Source Premier, and Web of Science records, yielding 65 peer-reviewed articles for in-depth analysis. The corpus was coded thematically and mapped against three theoretical lenses: transaction cost theory, resource-based view, and network/ecosystem perspectives. Key findings reveal the following: 1. digital twins anchored in immersive platforms reduce planning cycles by up to 30% and enable real-time, cross-border supply chain reconfiguration; 2. tokenized assets, micro-transactions, and decentralized finance (DeFi) are spawning new revenue models but simultaneously shift tax triggers and compliance burdens; 3. cross-chain protocols are critical for scalable trust, yet regulatory fragmentation—exemplified by divergent EU, U.S., and APAC rules—creates non-trivial coordination costs; and 4. traditional IB theories require extension to account for digital-capability orchestration, emerging cost centers (licensing, reserve backing, data audits), and metaverse-driven network effects. Based on these insights, this study recommends that managers adopt phased licensing and geo-aware tax engines, embed region-specific compliance flags in smart-contract metadata, and pilot digital-twin initiatives in sandbox-friendly jurisdictions. Policymakers are urged to accelerate work on interoperability and reporting standards to prevent systemic bottlenecks. Finally, researchers should pursue multi-case and longitudinal studies measuring the financial and ESG outcomes of integrated blockchain–metaverse deployments. By synthesizing disparate streams and articulating a forward agenda, this review provides a conceptual bridge for international business scholarship and a practical roadmap for firms navigating the next wave of digital GVC transformation. Full article

23 pages, 2239 KB  
Review
Molecular Mechanisms of Epithelial–Mesenchymal Transition in Retinal Pigment Epithelial Cells: Implications for Age-Related Macular Degeneration (AMD) Progression
by Na Wang, Yaqi Wang, Lei Zhang, Wenjing Yang and Songbo Fu
Biomolecules 2025, 15(6), 771; https://doi.org/10.3390/biom15060771 - 27 May 2025
Cited by 2 | Viewed by 1575
Abstract
Age-related macular degeneration (AMD), the leading cause of irreversible blindness worldwide, represents a complex neurodegenerative disorder whose pathogenesis remains elusive. At the core of AMD pathophysiology lies the retinal pigment epithelium (RPE), whose epithelial–mesenchymal transition (EMT) has emerged as a critical pathological mechanism driving disease progression. This transformative process, characterized by RPE cell dedifferentiation and subsequent extracellular matrix remodeling, is orchestrated through a sophisticated network of molecular interactions and cellular signaling cascades. Our review provides a comprehensive analysis of the molecular landscape underlying RPE EMT in AMD, with particular emphasis on seven interconnected pathological axes: (i) oxidative stress and mitochondrial dysfunction, (ii) hypoxia-inducible factor signaling, (iii) autophagic flux dysregulation, (iv) chronic inflammatory responses, (v) complement system overactivation, (vi) epigenetic regulation through microRNA networks, and (vii) key developmental signaling pathway reactivation. Furthermore, we evaluate emerging therapeutic strategies targeting EMT modulation, providing a comprehensive perspective on potential interventions to halt AMD progression. By integrating current mechanistic insights with therapeutic prospects, this review aims to bridge the gap between fundamental research and clinical translation in AMD management. Full article
(This article belongs to the Section Molecular Biology)

28 pages, 2049 KB  
Review
A Survey on Software Defined Network-Enabled Edge Cloud Networks: Challenges and Future Research Directions
by Baha Uddin Kazi, Md Kawsarul Islam, Muhammad Mahmudul Haque Siddiqui and Muhammad Jaseemuddin
Network 2025, 5(2), 16; https://doi.org/10.3390/network5020016 - 20 May 2025
Cited by 3 | Viewed by 3817
Abstract
The explosion of connected devices and data transmission in the Internet of Things (IoT) era places a substantial burden on the capabilities of cloud computing. Moreover, these IoT devices are mostly positioned at the edge of a network and are limited in resources. To address these challenges, distributed edge cloud computing networks have emerged. Because of the distributed nature of edge cloud networks, many research works consider software defined networks (SDNs) and network function virtualization (NFV) to be key enablers for managing, orchestrating, and load balancing resources. This article provides a comprehensive survey of these emerging technologies, focusing on SDN controllers, orchestration, and the function of artificial intelligence (AI) in enhancing the capabilities of controllers within edge cloud computing networks. More specifically, we present an extensive survey of research proposals on the integration of SDN controllers and orchestration with edge cloud networks. We further introduce a holistic overview of SDN-enabled edge cloud networks and an inclusive summary of edge cloud use cases and their key challenges. Finally, we address open challenges and potential research directions for further exploration in this vital research area. Full article
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)