Search Results (50)

Search Parameters:
Keywords = context-aware resource allocation

41 pages, 5116 KB  
Review
Towards 6G C-V2X Networks: A Comprehensive Survey on Mobility Management, Multi-RAT Coexistence, and Machine Learning (3M) Framework for C-ITS
by Malghalara Abdul Ali, Sajjad Ahmad Khan, Sultan Aldirmaz Colak, Selahattin Kosunalp and Teodor Iliev
Electronics 2026, 15(5), 1042; https://doi.org/10.3390/electronics15051042 - 2 Mar 2026
Abstract
Cooperative Intelligent Transport Systems (C-ITS) rely on emerging Vehicle-to-Everything (V2X) applications, such as Advanced Driving Systems (ADS) and Connected Autonomous Driving (CAD), to support effective road safety measures. These applications demand high reliability, high throughput, and low latency while exchanging significant amounts of data among vehicles End-to-End (E2E). However, current V2X communication technologies, such as DSRC and C-V2X, cannot meet these stringent demands on their own. Two or more Radio Access Technologies (RATs) are essential to guarantee the required Quality of Service (QoS) in high-density vehicular environments. To address this critical gap, this survey presents the 3M Framework: a hybrid vehicular architecture based on Multi-Radio Access Technology (M-RAT), Mobility Management, and Machine Learning (ML). The survey provides a detailed overview of V2X Multi-RAT evolution, analyzing the state of the art and its limitations in heterogeneous scenarios. We specifically highlight that existing Long Term Evolution (LTE)-based mobility management fails to meet V2X handover requirements for high-speed vehicles, necessitating a comprehensive overview of Vertical Handover (VHO). Furthermore, the survey details how integrating ML enables prediction of network states, supporting optimized context-aware decisions for connectivity and resource allocation, thereby reducing Handover Failures (HoFs) and enhancing reliability through techniques such as Deep Reinforcement Learning (DRL). Finally, based on a comprehensive review of existing methods, the paper identifies the critical research directions and challenges that must be addressed to realize intelligent, hyper-fast, and ultra-reliable Beyond 5G (B5G) and Sixth Generation (6G) V2X networks, providing a deeper foundation for future work. Full article
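The ML-driven, context-aware connectivity decisions this abstract describes can be illustrated with a small sketch. This is not the survey's framework: the metric names, weights, and values below are hypothetical, and a real system would score RATs from predicted network states rather than fixed numbers.

```python
# Toy context-aware RAT selection (illustrative only): each candidate
# Radio Access Technology is scored from predicted link metrics and the
# vehicle attaches to the best-scoring one. Names and weights are invented.

def score_rat(predicted, weights):
    """Weighted score: reliability and throughput reward, latency penalizes."""
    return (weights["reliability"] * predicted["reliability"]
            + weights["throughput"] * predicted["throughput_norm"]
            - weights["latency"] * predicted["latency_norm"])

def select_rat(candidates, weights):
    """Return the candidate RAT with the highest context score."""
    return max(candidates, key=lambda name: score_rat(candidates[name], weights))

weights = {"reliability": 0.5, "throughput": 0.3, "latency": 0.2}
candidates = {
    "C-V2X":  {"reliability": 0.95, "throughput_norm": 0.6, "latency_norm": 0.2},
    "DSRC":   {"reliability": 0.80, "throughput_norm": 0.3, "latency_norm": 0.1},
    "mmWave": {"reliability": 0.60, "throughput_norm": 1.0, "latency_norm": 0.4},
}
chosen = select_rat(candidates, weights)
```

A DRL agent, as surveyed in the article, would learn such a scoring policy from handover outcomes instead of using hand-set weights.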

24 pages, 1561 KB  
Article
Rough Sets Meta-Heuristic Schema for Inverse Kinematics and Path Planning of Surgical Robotic Arms
by Nizar Rokbani
Robotics 2026, 15(3), 52; https://doi.org/10.3390/robotics15030052 - 28 Feb 2026
Viewed by 76
Abstract
Surgical robots require sub-millimeter accuracy and reliable inverse kinematics across anatomies. Population-based metaheuristics address this, but static parameters can prevent them from reaching the precision needed for clinical use. This study introduces the Rough Sets Meta-Heuristic Schema (RSMS) for dynamic, context-aware control. RSMS categorizes agents (Elite, Boundary, Poor) via Rough Set discretization based on fitness and distribution, allocating resources accordingly without problem-specific heuristics. To demonstrate the approach’s effectiveness, RSMS was implemented within Particle Swarm Optimization and evaluated as a surgical robotics inverse kinematics solver and path planner. In simulations on three surgical problems, RS-PSO improved upon standard PSO, delivering more consistent convergence and higher success in tight search spaces. Statistical tests confirmed these improvements. Using a 7-DOF KUKA LBR iiwa robot and surgical benchmarks of landmark acquisition, spiral trajectory tracking, and constrained path following, RS-PSO achieved success rates of 100%, 67%, and 100%, respectively, meeting surgical requirements. The results demonstrate clinical gains in accuracy, consistency, and reproducibility for minimally invasive surgery. These findings support the practical advantages of RS-PSO and, more importantly, show that the RSMS framework can serve as a general, reusable tool to improve the robustness, precision, and reproducibility of many swarm-based metaheuristics for surgical robotics and other applications. Full article
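As a rough illustration of the RSMS idea (not the author's implementation), the sketch below partitions swarm agents into Elite/Boundary/Poor classes from their fitness ranking and allocates different search parameters per class. The fractions, the parameter table, and the use of simple ranking in place of Rough Set discretization are all assumptions.

```python
# Illustrative RSMS-style sketch: classify agents by fitness rank
# (lower fitness is better), then give each class its own PSO parameters
# so elites exploit while poor agents explore. Values are hypothetical.

def categorize(fitnesses, elite_frac=0.2, poor_frac=0.3):
    """Return one of 'Elite'/'Boundary'/'Poor' per agent."""
    order = sorted(range(len(fitnesses)), key=lambda i: fitnesses[i])
    n = len(fitnesses)
    n_elite = max(1, int(n * elite_frac))
    n_poor = max(1, int(n * poor_frac))
    labels = ["Boundary"] * n
    for i in order[:n_elite]:      # best agents
        labels[i] = "Elite"
    for i in order[-n_poor:]:      # worst agents
        labels[i] = "Poor"
    return labels

def allocate_params(labels):
    """Per-class search parameters (inertia values are invented)."""
    table = {"Elite": {"inertia": 0.4},
             "Boundary": {"inertia": 0.7},
             "Poor": {"inertia": 0.9}}
    return [table[label] for label in labels]

fitnesses = [0.1, 2.5, 0.9, 3.8, 1.2]
labels = categorize(fitnesses)
params = allocate_params(labels)
```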
(This article belongs to the Section AI in Robotics)

21 pages, 2079 KB  
Article
Assuring Brokerage Quality in the Cloud–Edge Continuum
by Evangelos Barmpas, Simeon Veloudis, Yiannis Verginadis and Iraklis Paraskakis
Future Internet 2026, 18(2), 107; https://doi.org/10.3390/fi18020107 - 19 Feb 2026
Viewed by 225
Abstract
The Cloud–Edge Continuum (CEC) has emerged as a paradigm for distributing computational resources across cloud, fog, and edge layers, enabling latency-sensitive applications to operate efficiently. However, ensuring the quality of service (QoS) brokerage in such environments remains a challenge. Existing frameworks primarily focus on resource management techniques such as allocation, scheduling, and offloading but fail to address the quality assurance of the brokerage process itself. This paper introduces SLA governance as a means of ensuring the quality of service brokerage by validating—through automated reasoning—Service Level Agreements (SLAs) against meta-quality constraints—high-level policies that define permissible QoS conditions. We propose an ontology-driven approach leveraging the ODRL ontology for SLA representation and capturing meta-quality constraints. Our method also enables introspective reasoning ensuring internal SLA consistency. Additionally, we integrate SLA governance with a real-time monitoring framework, the Event Management System (EMS), to continuously track workload performance and trigger SLA adaptation when necessary. This integration ensures that SLA-based brokerage decisions remain dynamic and context-aware. Full article
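The SLA-governance step (validating SLAs against meta-quality constraints) can be reduced to a toy check. The real approach reasons over ODRL-based ontologies with automated reasoning; the dict-based terms and bounds below are hypothetical stand-ins.

```python
# Minimal sketch of SLA governance as constraint checking: an SLA's QoS
# terms are validated against permissible meta-quality bounds. Term names
# and bounds are invented for illustration.

def validate_sla(sla, constraints):
    """Return violated term names; an empty list means the SLA is admissible."""
    violations = []
    for term, (lo, hi) in constraints.items():
        value = sla.get(term)
        if value is None or not (lo <= value <= hi):
            violations.append(term)
    return violations

constraints = {"availability": (0.99, 1.0), "latency_ms": (0, 50)}
ok_sla = {"availability": 0.995, "latency_ms": 20}
bad_sla = {"availability": 0.97, "latency_ms": 20}
```

In the paper's setting, the monitoring framework (EMS) would re-run such checks at runtime and trigger SLA adaptation on violation.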
(This article belongs to the Special Issue Cloud and Edge Computing for the Next-Generation Networks)

23 pages, 1074 KB  
Systematic Review
Soil Heavy Metals for Sustainable Risk Management: A Systematic Review and a Context-Aware Method Selection Framework
by Leqi Yang, Tianxiang Yue and Maohua Ma
Sustainability 2026, 18(4), 1893; https://doi.org/10.3390/su18041893 - 12 Feb 2026
Viewed by 171
Abstract
Sustainable land use requires precise monitoring of soil pollution, yet accurately predicting the spatial distribution of heavy metals often relies on post hoc accuracy comparisons with limited a priori diagnosis. To address the challenge of cost-effective environmental monitoring, we conducted a PRISMA-guided systematic review (2000–2024) and synthesized 135 studies to develop a mechanism-informed, context-aware method selection framework. Evidence revealed three regularities: (i) element–driver coupling is structured (Pb/Cd/Zn predominantly anthropogenic; Cr/Ni geogenic; As/Hg mixed), with dominant influence scales ranging from local to regional; (ii) model performance hinges on the alignment between algorithmic assumptions and context: hybrid machine learning models integrating multi-source covariates tend to excel under strong, non-stationary anthropogenic heterogeneity, whereas kriging variants are more robust when geogenic continuity holds; and (iii) applicability is jointly constrained by environmental context, data foundations, and management objectives. Building on these insights, we propose a three-step decision workflow: goal definition, contextual diagnosis, and method matching. This framework serves as a decision-support tool that shifts selection from trial and error to a priori alignment, optimizing resource allocation and enhancing the reliability of pollution assessments for sustainable soil remediation and policymaking. Full article
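The three-step workflow (goal definition, contextual diagnosis, method matching) can be caricatured as a rule table. The rules below loosely encode the regularities summarized in the abstract and are illustrative only, not the paper's framework.

```python
# Toy method-matching step: a "contextual diagnosis" dict is mapped to a
# candidate spatial-prediction method family. Rule set is invented, echoing
# the abstract: non-stationary anthropogenic settings favor hybrid ML,
# geogenic continuity favors kriging.

def match_method(context):
    """Map a contextual diagnosis to a candidate method family."""
    if context["dominant_source"] == "anthropogenic" and context["non_stationary"]:
        return "hybrid machine learning with multi-source covariates"
    if context["dominant_source"] == "geogenic" and not context["non_stationary"]:
        return "kriging variant"
    return "compare candidates via cross-validation"

diagnosis = {"dominant_source": "anthropogenic", "non_stationary": True}
method = match_method(diagnosis)
```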

62 pages, 1774 KB  
Review
Quantum-Enhanced Edge Intelligence Leveraging Large Language Models for Immersive Space–Aerial–Ground Communications: Survey, Challenges, and Open Issues
by Abhishek Gupta and Ajmery Sultana
Sensors 2026, 26(4), 1181; https://doi.org/10.3390/s26041181 - 11 Feb 2026
Viewed by 400
Abstract
The integration of unmanned aerial vehicles (UAVs), autonomous vehicles, and advanced satellite systems in sixth-generation (6G) networks is poised to redefine next-generation communications as well as next-generation intelligent transportation systems. This paper examines the convergence of UAVs, CubeSats, and terrestrial infrastructures that comprise the framework of Space–Aerial–Ground Integrated Networks (SAGINs) as vital enablers of the International Mobile Telecommunications (IMT)-2030 standards. This paper examines the role of UAVs in providing flexible and quickly deployable airborne connectivity. It also discusses how CubeSats enhance global coverage through low-latency relaying and resilient backhaul links from low Earth orbit (LEO). Additionally, the paper highlights how terrestrial systems contribute high-capacity, densely concentrated communication layers that support various end-user applications. By examining their interoperability and coordinated resource allocation, the paper underscores that the seamless interaction of SAGIN nodes is essential for achieving the ultra-reliable, intelligent, and pervasive communication capabilities envisioned by IMT-2030. As 6G aims for ultra-low latency, high reliability, and massive connectivity, UAVs and CubeSats emerge as key enablers for extending coverage and capacity, particularly in remote and dense urban regions. Furthermore, the role of large language models (LLMs) is explored for intelligent network management and real-time data optimization, while quantum communication is analyzed for ensuring security and minimizing latency. The integration of LLMs into quantum-enhanced edge intelligence for SAGINs represents an emerging research frontier for adaptive, high-throughput, and context-aware decision-making. By exploiting quantum-assisted parallelism and entanglement-based optimization, LLMs enhance the processing efficiency of multimodal data across space, aerial, and terrestrial nodes. 
This paper further investigates distributed quantum inference and multimodal sensor data fusion to enable resilient, self-optimizing communication systems comprising a high volume of data traffic, which is a critical bottleneck in the global connectivity transition. LLMs are envisioned as cognitive control centers capable of generating semantic representations for mission-critical communications that enhance energy efficiency, reliability, and adaptive learning at the edge. The findings of the survey reveal that quantum-enhanced LLMs overcome challenges pertaining to bandwidth allocation, dynamic routing, and interoperability in existing classical communication systems. Overall, quantum-empowered LLMs significantly assist intelligent, autonomous, and immersive communications in SAGIN, while enabling secure, privacy-preserving communication. Full article
(This article belongs to the Special Issue Vehicular Sensing for Improved Urban Mobility: 2nd Edition)

33 pages, 3090 KB  
Article
Vulnerability to Counterfeit Currency Fraud in Bulgaria: Public Competency Assessment in Identifying Genuine Lev Banknotes Before the Euro Cash Changeover
by Georgi Georgiev, Ivan Georgiev, Katina Kisyova and Slavi Georgiev
Soc. Sci. 2026, 15(2), 104; https://doi.org/10.3390/socsci15020104 - 9 Feb 2026
Viewed by 255
Abstract
This article examines vulnerability to counterfeit currency fraud in Bulgaria by assessing citizens’ competence in recognizing genuine banknotes of the national currency (BGN) prior to the introduction of euro banknotes in 2026. Counterfeit banknotes represent a form of economic crime in which individual victims’ losses are closely tied to their ability to authenticate cash in everyday transactions. Drawing on level-1 security features and guidelines of the Bulgarian National Bank, we developed a structured questionnaire to operationalize knowledge of key authenticity checks (hologram, intaglio printing, watermark, security thread, see-through register). The survey was administered online and on paper over a 20-day period (22 August–11 September 2025) and completed by 371 respondents from across the country. Using descriptive statistics tools, we identify three distinct groups: (i) highly competent respondents who reliably distinguish genuine from counterfeit banknotes; (ii) individuals with high self-reported confidence but inconsistent performance; and (iii) a particularly vulnerable group with low knowledge of security features, limited awareness of official guidance and low self-confidence. Vulnerability is significantly associated with lower education, residence in smaller settlements, lack of prior exposure to counterfeit banknotes and absence of contact with institutional information campaigns. The findings have direct implications for crime prevention and criminal justice policy: they provide an evidence base for targeted public awareness initiatives and risk-based allocation of resources aimed at protecting high-risk groups from currency-related fraud in the context of the monetary transition. Full article
(This article belongs to the Section Crime and Justice)

27 pages, 1434 KB  
Article
An ML-Based Approach to Leveraging Social Media for Disaster Type Classification and Analysis Across World Regions
by Mohammad Robel Miah, Lija Akter, Ahmed Abdelmoamen Ahmed, Louis Ngamassi and Thiagarajan Ramakrishnan
Computers 2026, 15(1), 16; https://doi.org/10.3390/computers15010016 - 1 Jan 2026
Viewed by 379
Abstract
Over the past decade, the frequency and impact of both natural and human-induced disasters have increased significantly, highlighting the urgent need for effective and timely relief operations. Disaster response requires efficient allocation of resources to the right locations and disaster types in a cost- and time-effective manner. However, during such events, large volumes of unverified and rapidly spreading information—especially on social media—often complicate situational awareness and decision-making. Consequently, extracting actionable insights and accurately classifying disaster-related information from social media platforms has become a critical research challenge. Machine Learning (ML) approaches have shown strong potential for categorizing disaster-related tweets, yet substantial variations in model accuracy persist across disaster types and regional contexts, suggesting that universal models may overlook linguistic and cultural nuances. This paper investigates the categorization and sub-categorization of natural disaster tweets using a labeled dataset of over 32,000 samples. Logistic Regression and Random Forest classifiers were trained and evaluated after comprehensive preprocessing to predict disaster categories and sub-categories. Furthermore, a country-specific prediction framework was implemented to assess how regional and cultural variations influence model performance. The results demonstrate strong overall classification accuracy, while revealing marked differences across countries, emphasizing the importance of context-aware, culturally adaptive ML approaches for reliable disaster information management. Full article
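The classification pipeline described above can be sketched in miniature. The paper trains Logistic Regression and Random Forest models on over 32,000 labeled tweets; the pure-Python keyword-count classifier and invented sample tweets below stand in only to show the tokenize/train/predict flow, not the paper's models.

```python
# Toy disaster-tweet classifier: bag-of-words counts per class, then label
# new text by summed keyword overlap. A stand-in for the paper's Logistic
# Regression / Random Forest pipeline; sample tweets are invented.
from collections import Counter
import re

def tokens(text):
    """Lowercase alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def train(samples):
    """samples: list of (text, label); returns per-label token counts."""
    model = {}
    for text, label in samples:
        model.setdefault(label, Counter()).update(tokens(text))
    return model

def classify(model, text):
    """Score each label by summed counts of the text's tokens."""
    return max(model, key=lambda lbl: sum(model[lbl][t] for t in tokens(text)))

samples = [
    ("flood water rising in the streets", "flood"),
    ("river flood warning issued", "flood"),
    ("earthquake shaking buildings downtown", "earthquake"),
    ("strong earthquake tremor felt", "earthquake"),
]
model = train(samples)
```

A country-specific framework, as in the paper, would train one such model per region so that local vocabulary shifts the learned counts.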
(This article belongs to the Special Issue Advances in Semantic Multimedia and Personalized Digital Content)

18 pages, 1101 KB  
Article
Power Management of a Wind-Powered Microgrid Based on Qualitative Needs
by Maryam Yaghoubirad and John Hall
Energies 2026, 19(1), 241; https://doi.org/10.3390/en19010241 - 31 Dec 2025
Viewed by 325
Abstract
Power management strategies for microgrids are typically designed around quantitative performance metrics such as cost, efficiency, and reliability. While effective in many settings, these approaches often do not fully account for qualitative, human-centric considerations, such as the relative importance or criticality of different loads. This limitation is especially relevant in remote or community-based energy systems, and becomes more pronounced in wind-powered microgrids, where variable generation and limited resources require flexible and context-aware operational decisions. In this work, a qualitatively driven power management framework is proposed that incorporates stakeholder-defined qualitative indices into microgrid energy allocation. A community-importance (CI) index is used to represent qualitative needs as normalized weighting factors, which are then used to guide power redistribution during supply–demand imbalances. The framework is demonstrated using a wind-powered microgrid with heterogeneous load types and is evaluated under different operating scenarios. The results show that the proposed approach supports prioritized and socially informed power allocation while preserving overall system feasibility. Rather than replacing conventional quantitative optimization, the framework acts as a complementary decision-support layer and is particularly well suited for microgrids serving remote or resource-constrained communities where qualitative priorities play an important role in operational planning. Full article
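A minimal sketch of CI-weighted redistribution under a supply deficit, assuming a single-pass proportional rule. The paper's framework is richer; the loads, weights, and numbers here are invented.

```python
# Simplified CI-weighted power redistribution: each load is served in
# proportion to its normalized community-importance (CI) weight, capped at
# its own demand. Single-pass rule (no redistribution of leftover capacity)
# and all numbers are illustrative.

def allocate(demands, ci_index, supply):
    """Return per-load power allocation (same units as `supply`)."""
    total_w = sum(ci_index.values())
    weights = {load: w / total_w for load, w in ci_index.items()}
    return {load: min(demands[load], weights[load] * supply) for load in demands}

demands = {"clinic": 30.0, "pump": 20.0, "lighting": 50.0}   # kW demanded
ci_index = {"clinic": 5, "pump": 3, "lighting": 2}           # stakeholder weights
plan = allocate(demands, ci_index, supply=60.0)              # only 60 kW available
```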

29 pages, 5168 KB  
Article
Effects of Dual-Operator Modes on Team Situation Awareness: A Non-Dyadic HMI Perspective in Intelligent Coal Mines
by Xiaofang Yuan, Xinxiang Zhang, Jiawei He and Linhui Sun
Appl. Sci. 2025, 15(24), 13222; https://doi.org/10.3390/app152413222 - 17 Dec 2025
Cited by 1 | Viewed by 399
Abstract
Under the context of non-dyadic human–machine interaction in intelligent coal mines, this study investigates the impact of different dyadic collaboration modes on Team Situation Awareness (TSA). Based on a simulated coal mine monitoring task, the experiment compares four working modes—Individual Operation, Supervised Operation, Cooperative Operation, and Divided-task Operation—across tasks of varying complexity. TSA was assessed using both objective (SAGAT) and subjective (SART) measures, alongside parallel evaluations of task performance and workload (NASA-TLX). The results demonstrate that, compared to Individual or Supervised Operation, both Cooperative and Divided-task Operation significantly enhance TSA and task performance. Cooperative Operation improves information integration and comprehension, while Divided-task Operation enhances response efficiency by enabling focused attention on role-specific demands. Moreover, dyadic collaboration reduces cognitive workload, with the task-sharing mode showing the lowest cognitive and temporal demands. The findings indicate that clear task structuring and real-time information exchange can alleviate cognitive bottlenecks and promote accurate environmental perception. Theoretically, this study extends the application of non-dyadic interaction theory to intelligent coal mine scenarios and empirically validates a “Collaboration Mode–TSA–Performance” model. Practically, it provides design implications for adaptive collaboration frameworks in high-risk, high-complexity industrial systems, highlighting the value of dynamic role allocation in optimizing cognitive resource utilization and enhancing operational safety. Full article

21 pages, 1279 KB  
Article
Visible Light Communication vs. Optical Camera Communication: A Security Comparison Using the Risk Matrix Methodology
by Ignacio Marin-Garcia, Victor Guerra, Jose Rabadan and Rafael Perez-Jimenez
Photonics 2025, 12(12), 1201; https://doi.org/10.3390/photonics12121201 - 5 Dec 2025
Viewed by 687
Abstract
Optical Wireless Communication (OWC) technologies are emerging as promising complements to radio-frequency systems, offering high bandwidth, spatial confinement, and license-free operation. Within this domain, Visible Light Communication (VLC) and Optical Camera Communication (OCC) represent two distinct paradigms with divergent performance and security profiles. While VLC leverages LED-photodiode links for high-speed data transfer, OCC exploits ubiquitous image sensors to decode modulated light patterns, enabling flexible but lower-rate communication. Despite their potential, both remain vulnerable to various attacks, including eavesdropping, jamming, spoofing, and privacy breaches. This work applies—and extends—the Risk Matrix (RM) methodology to systematically evaluate the security of VLC and OCC across reconnaissance, denial, and exploitation phases. Unlike prior literature, which treats VLC and OCC separately and under incompatible threat definitions, we introduce a unified, domain-specific risk framework that maps empirical channel behavior and attack feasibility into a common set of impact and likelihood indices. A normalized risk rank (NRR) is proposed to enable a direct, quantitative comparison of heterogeneous attacks and technologies under a shared reference scale. By quantifying risks for representative threats—including war driving, Denial of Service (DoS) attacks, preshared key cracking, and Evil Twin attacks—our analysis shows that neither VLC nor OCC is intrinsically more secure; rather, their vulnerabilities are context-dependent, shaped by physical constraints, receiver architectures, and deployment environments. VLC tends to concentrate confidentiality-driven exposure due to optical leakage paths, whereas OCC is more sensitive to availability-related degradation under adversarial load. 
Overall, the main contribution of this work is the first unified, standards-aligned, and empirically grounded risk-assessment framework capable of comparing VLC and OCC on a common security scale. The findings highlight the need for technology-aware security strategies in future OWC deployments and demonstrate how an adapted RM methodology can identify priority areas for mitigation, design, and resource allocation. Full article
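A normalized risk rank of the kind proposed can be sketched as an impact-likelihood product scaled to a shared maximum. The 1-5 scales and the example threat values below are hypothetical, not taken from the paper's assessment.

```python
# Sketch of a normalized risk rank (NRR): impact and likelihood indices on
# a shared ordinal scale are multiplied and divided by the scale maximum so
# heterogeneous attacks become directly comparable. Scores are invented.

def nrr(impact, likelihood, scale_max=5):
    """Risk in [0, 1]: (impact * likelihood) / scale_max^2."""
    return (impact * likelihood) / (scale_max * scale_max)

threats = {                      # (impact, likelihood) on a 1-5 scale
    "eavesdropping": (4, 3),
    "DoS": (3, 4),
    "evil_twin": (5, 2),
}
ranked = sorted(threats, key=lambda t: nrr(*threats[t]), reverse=True)
```

Ranking threats on one scale like this is what lets the paper compare confidentiality-driven VLC exposure against availability-driven OCC degradation directly.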

30 pages, 874 KB  
Review
Beyond Service Inventories: A Three-Dimensional Framework for Diagnosing Structural Barriers in Academic Library Research Dataset Management
by Mthokozisi Masumbika Ncube and Patrick Ngulube
Information 2025, 16(12), 1046; https://doi.org/10.3390/info16121046 - 1 Dec 2025
Viewed by 572
Abstract
Academic libraries have assumed expansive research data management (RDM) responsibilities, yet persistent dataset underutilisation suggests systemic disconnects between services and researcher needs. This scoping review applied a three-dimensional diagnostic framework to examine why libraries struggle to advance beyond consultative roles despite sustained investment. Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines, this review analysed 34 empirical studies (2015–2025). Electronic databases, key journals, and grey literature sources were systematically reviewed, with 65% of studies originating from high-income (Global North) contexts. The analysis integrated the Institutional Readiness Index (IRI), Service Maturity Level (SML), and Information Flow Efficiency (IFE) to assess library engagement with research datasets. Three structural patterns constrain effectiveness. First, a capacity-complexity mismatch emerges as libraries manage increasingly diverse datasets without proportional infrastructure scaling, creating bottlenecks in discoverability, interoperability, and preservation. Second, structural progression barriers appear, where advancement requires simultaneous development across infrastructure, staffing, governance, and engagement rather than sequential improvement. Third, an implementation gap separates Findable, Accessible, Interoperable, Reusable (FAIR) policy awareness from operational capacity, as most institutions demonstrate standards knowledge without technical operationalisation ability. These patterns form interdependent constraints: infrastructure limitations correlate with restricted services, which are associated with persistent researcher skill gaps, reduced engagement, and constrained resource allocation, reinforcing the initial deficits. 
The review framework provides diagnostic specificity for identifying whether constraints stem from readiness, maturity, or implementation failures. This study advances RDM scholarship by explaining stagnation patterns rather than cataloguing services, offering an empirically grounded diagnostic tool. However, the findings reflect predominantly high-resource contexts and require validation across diverse institutional settings. Full article
(This article belongs to the Section Information Processes)

18 pages, 1835 KB  
Article
Towards Robust Medical Image Segmentation with Hybrid CNN–Linear Mamba
by Xiao Ma and Guangming Lu
Electronics 2025, 14(23), 4726; https://doi.org/10.3390/electronics14234726 - 30 Nov 2025
Viewed by 702
Abstract
Problem: Medical image segmentation faces critical challenges in balancing global context modeling and computational efficiency. While conventional neural networks struggle with long-range dependencies, Transformers incur quadratic complexity. Although Mamba-based architectures achieve linear complexity, they lack adaptive mechanisms for heterogeneous medical images and demonstrate insufficient local feature extraction capabilities. Method: We propose Linear Context-Aware Robust Mamba (LCAR–Mamba) to address these dual limitations through adaptive resource allocation and enhanced multi-scale extraction. LCAR–Mamba integrates two synergistic modules: the Context-Aware Linear Mamba Module (CALM) for adaptive global–local fusion, and the Multi-scale Partial Dilated Convolution Module (MSPD) for efficient multi-scale feature refinement. Core Innovations: CALM module implements content-driven resource allocation through four-stage processing: (1) analyzing spatial complexity via gradient and activation statistics, (2) computing allocation weights to dynamically balance global and local processing branches, (3) parallel dual-path processing with linear attention and convolution, and (4) adaptive fusion guided by complexity weights. MSPD module employs statistics-based channel selection and multi-scale partial dilated convolutions to capture features at multiple receptive scales while reducing computational cost. Key Results: On ISIC2017 and ISIC2018 datasets, mIoU improvements of 0.81%/1.44% confirm effectiveness across 2D benchmarks. On the Synapse dataset, LCAR–Mamba achieves 85.56% DSC, outperforming the former best Mamba baseline by 0.48% with 33% fewer parameters. Significance: LCAR–Mamba demonstrates that adaptive resource allocation and statistics-driven multi-scale extraction can address critical limitations in linear-complexity architectures, establishing a promising direction for efficient medical image segmentation. Full article
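The CALM allocation step can be caricatured in one dimension: estimate complexity from gradient statistics, convert it to a weight, and fuse a global branch with a local branch. The real module operates on 2-D feature maps with linear attention; everything below is a deliberately simplified, hypothetical stand-in.

```python
# Toy 1-D caricature of content-driven allocation: spatial complexity is
# estimated from gradient statistics, squashed into a weight in (0, 1), and
# used to fuse a global branch with a local branch. Not the paper's module.

def complexity(signal):
    """Mean absolute first difference, squashed into (0, 1)."""
    grads = [abs(b - a) for a, b in zip(signal, signal[1:])]
    mean_grad = sum(grads) / len(grads)
    return mean_grad / (1.0 + mean_grad)

def fuse(global_out, local_out, w):
    """Complexity-weighted fusion: complex regions lean on the local branch."""
    return [w * loc + (1.0 - w) * glo for glo, loc in zip(global_out, local_out)]

signal = [0.0, 0.9, 0.1, 1.0, 0.2]      # a jagged, "complex" feature row
w = complexity(signal)                   # high gradients -> weight near 0.5
fused = fuse([0.5] * 5, signal, w)
```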
(This article belongs to the Special Issue Target Tracking and Recognition Techniques and Their Applications)

14 pages, 1345 KB  
Article
Fair and Energy-Efficient Charging Resource Allocation for Heterogeneous UGV Fleets
by Dimitris Ziouzios, Nikolaos Baras, Minas Dasygenis and Constantinos Tsanaktsidis
Computers 2025, 14(11), 473; https://doi.org/10.3390/computers14110473 - 1 Nov 2025
Viewed by 511
Abstract
This paper addresses the critical challenge of energy management for autonomous robots in the context of large-scale photovoltaic parks. The dynamic and vast nature of these environments, characterized by dense, structured rows of solar panels, introduces unique complexities, including uneven terrain, varied operational demands, and the need for equitable resource allocation among diverse robot fleets. The presented framework adapts and significantly extends the Affinity Propagation algorithm for strategic charging station placement within photovoltaic parks. The key contributions include: (1) a multi-attribute grid-based environment model that quantifies terrain difficulty and panel-specific obstacles; (2) an extended multi-factor scoring function that incorporates penalties for terrain inaccessibility and proximity to sensitive photovoltaic infrastructure; (3) a sophisticated, energy-aware consumption model that accounts for terrain friction, slope, and rolling resistance; and (4) a novel multi-agent fairness constraint that ensures equitable access to charging resources across heterogeneous robot sub-fleets. Through extensive simulations on synthesized photovoltaic park environments, it is demonstrated that the enhanced algorithm not only significantly reduces travel distance and energy consumption but also promotes a fairer, more efficient operational ecosystem, paving the way for scalable and sustainable robotic maintenance and inspection. Full article
(This article belongs to the Special Issue Advanced Human–Robot Interaction 2025)
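Contributions (2) and (4) above — the multi-factor site scoring and the multi-agent fairness constraint — can be sketched in a few lines. This is a minimal illustration under assumed names and weights (`site_score`, `best_site`, `fairness_gap`, and the weight values are all hypothetical), not the paper's actual formulation.

```python
def site_score(dist, terrain_difficulty, infra_proximity,
               w_dist=1.0, w_terrain=2.0, w_infra=3.0):
    """Lower is better: travel distance plus penalties for rough terrain
    and for closeness to sensitive panel infrastructure."""
    return (w_dist * dist + w_terrain * terrain_difficulty
            + w_infra * infra_proximity)

def best_site(candidates):
    """Pick the candidate grid cell, given as a (distance, terrain,
    infrastructure-proximity) tuple, with the lowest penalty score."""
    return min(candidates, key=lambda c: site_score(*c))

def fairness_gap(charge_dist_by_subfleet):
    """Spread between the best- and worst-served sub-fleets' mean
    charging distance; a fairness constraint would cap this gap."""
    means = [sum(v) / len(v) for v in charge_dist_by_subfleet.values()]
    return max(means) - min(means)
```

A placement loop would reject candidate stations whose selection drives `fairness_gap` above a chosen threshold, even if they minimize raw travel cost.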

32 pages, 3647 KB  
Article
AI Bias in Power Systems Domain—Exemplary Cases and Approaches
by Chijioke Eze, Abraham Ezema, Lara Roth, Zhiyu Pan, Ferdinanda Ponci and Antonello Monti
Energies 2025, 18(18), 4819; https://doi.org/10.3390/en18184819 - 10 Sep 2025
Abstract
This paper examines artificial intelligence (AI) bias in power systems applications through systematic analysis of three critical use cases: load forecasting, predictive maintenance, and ontology matching for system interoperability. While AI solutions show great potential for addressing complex power system challenges, they face adoption barriers due to biases that compromise fairness, reliability, and operational performance. Our investigation demonstrates how different bias types—including data representation, algorithmic, and sampling biases—manifest in power systems contexts, directly affecting grid efficiency, resource allocation, and socioeconomic equity across the electrical power and energy domain. For each use case, we provide quantitative evidence of bias impact and propose targeted mitigation strategies that emphasize data diversity, ensemble methods, explainable AI techniques, and fairness-aware algorithms. By establishing a comprehensive taxonomy of bias types relevant to power systems and developing practical mitigation frameworks, this work bridges the critical gap between abstract bias concepts and real-world power system applications. The resulting framework provides a structured approach for developing equitable, robust AI systems that align with power systems’ operational requirements while accelerating the responsible adoption of AI in safety-critical infrastructure. Full article
(This article belongs to the Special Issue Advances in Sustainable Power and Energy Systems: 2nd Edition)
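Two of the bias types the survey discusses — data-representation bias and unfair model behavior — are often monitored with simple metrics of the kind sketched below. The function names and the demographic-parity formulation are illustrative assumptions, not metrics taken from the paper.

```python
def representation_bias(sample_counts, population_shares):
    """Per-group gap between a dataset's share and the true population
    share; large absolute gaps flag sampling/representation bias
    (e.g. rural feeders underrepresented in load-forecast training data)."""
    total = sum(sample_counts.values())
    return {g: sample_counts[g] / total - population_shares[g]
            for g in sample_counts}

def demographic_parity_gap(preds_by_group):
    """Difference in positive-prediction rates between groups — one
    simple quantity a fairness-aware pipeline might track, e.g. for
    maintenance-priority predictions across service areas."""
    rates = [sum(p) / len(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)
```

Mitigation strategies such as reweighting or resampling aim to drive both quantities toward zero before deployment.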

32 pages, 1813 KB  
Article
Compressing and Decompressing Activities in Multi-Project Scheduling Under Uncertainty and Resource Flexibility
by Marzieh Aghileh, Anabela Tereso, Filipe Alvelos and Maria Odete Monteiro Lopes
Sustainability 2025, 17(18), 8108; https://doi.org/10.3390/su17188108 - 9 Sep 2025
Abstract
In multi-project environments characterized by resource constraints and high uncertainty, traditional scheduling approaches often fail to respond effectively to dynamic project conditions. Fixed activity durations and rigid resource allocations limit adaptability, leading to inefficiencies and delays. To address this, the paper proposes a novel heuristic-based scheduling method that compresses and decompresses activity durations dynamically within the context of multi-project scheduling under uncertainty and resource flexibility—while preserving resource and precedence feasibility. The technique integrates Critical Path Method (CPM) calculations with heuristic rules to identify candidate activities whose durations can be reduced or extended based on slack availability and resource effort profiles. The objective is to enhance scheduling flexibility, improve resource utilization, and better align project execution with organizational priorities and sustainability goals. Validated through a case study at an automotive company in Portugal, the method demonstrates its practical effectiveness in recalibrating schedules and balancing resource loads. This contribution offers a timely and necessary innovation for companies aiming to enhance responsiveness and competitiveness in increasingly complex project landscapes. It provides an actionable framework for dynamic schedule adjustment in multi-project environments, helping companies to respond more effectively to uncertainty and resource fluctuations. Importantly, the proposed approach also supports sustainability objectives in new product development and supply chain operations. For practitioners, the method offers a responsive and sustainable planning tool that supports real-time adjustments in project portfolios, enhancing resource visibility and execution resilience. For researchers, the study contributes a reproducible, Python-based implementation grounded in Design Science Research (DSR), addressing gaps in stochastic multi-project scheduling and sustainability-aware planning. Full article
(This article belongs to the Special Issue Achieving Sustainability in New Product Development and Supply Chain)
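The slack computation that drives candidate selection in the abstract is standard CPM. As a minimal illustration (the function and activity names are hypothetical, and activities are assumed to be given in topological order — this is not the paper's implementation), a forward/backward pass yields per-activity total slack:

```python
def cpm_slack(durations, preds):
    """Forward/backward pass of the Critical Path Method. Returns total
    slack per activity: zero-slack activities sit on the critical path,
    while positive-slack ones are candidates for decompression."""
    order = list(durations)  # assumed topologically ordered activity ids
    es = {}                  # earliest starts (forward pass)
    for a in order:
        es[a] = max((es[p] + durations[p] for p in preds[a]), default=0)
    makespan = max(es[a] + durations[a] for a in order)
    succs = {a: [b for b in order if a in preds[b]] for a in order}
    lf = {}                  # latest finishes (backward pass)
    for a in reversed(order):
        lf[a] = min((lf[s] - durations[s] for s in succs[a]),
                    default=makespan)
    return {a: lf[a] - (es[a] + durations[a]) for a in order}

def decompression_candidates(slack):
    """Activities whose duration can be extended without delaying the project."""
    return [a for a, s in slack.items() if s > 0]
```

For a chain A→B running in parallel with C, only C carries slack, so only C may be decompressed without extending the makespan.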
