Search Results (342)

Search Parameters:
Keywords = dynamic contextual information

31 pages, 1452 KB  
Article
A User-Centric Context-Aware Framework for Real-Time Optimisation of Multimedia Data Privacy Protection, and Information Retention Within Multimodal AI Systems
by Ndricim Topalli and Atta Badii
Sensors 2025, 25(19), 6105; https://doi.org/10.3390/s25196105 - 3 Oct 2025
Abstract
The increasing use of AI systems for face, object, action, scene, and emotion recognition raises significant privacy risks, particularly when processing Personally Identifiable Information (PII). Current privacy-preserving methods lack adaptability to users’ preferences and contextual requirements, and obfuscate user faces uniformly. This research proposes a user-centric, context-aware, and ontology-driven privacy protection framework that dynamically adjusts privacy decisions based on user-defined preferences, entity sensitivity, and contextual information. The framework integrates state-of-the-art recognition models for recognising faces, objects, scenes, actions, and emotions in real time on data acquired from vision sensors (e.g., cameras). Privacy decisions are directed by a contextual ontology grounded in Contextual Integrity theory, which classifies entities into private, semi-private, or public categories. Adaptive privacy levels are enforced through obfuscation techniques and a multi-level privacy model that supports user-defined red lines (e.g., “always hide logos”). The framework also proposes a Re-Identifiability Index (RII) using soft biometric features such as gait, hairstyle, clothing, skin tone, age, and gender, to mitigate identity leakage and to support fallback protection when face recognition fails. The experimental evaluation relied on sensor-captured datasets, which replicate real-world image sensors such as surveillance cameras. User studies confirmed that the framework was effective, with 85.2% of participants rating the obfuscation operations as highly effective and the remaining 14.8% rating them as adequately effective. Amongst these, 71.4% considered the balance between privacy protection and usability very satisfactory and 28% found it satisfactory. GPU acceleration was deployed to enable real-time performance of these models by reducing frame processing time from 1200 ms (CPU) to 198 ms. This ontology-driven framework employs user-defined red lines, contextual reasoning, and dual metrics (RII/IVI) to dynamically balance privacy protection with scene intelligibility. Unlike current anonymisation methods, the framework provides a real-time, user-centric, and GDPR-compliant method that operationalises privacy-by-design while preserving scene intelligibility. These features make the framework appropriate for a variety of real-world applications including healthcare, surveillance, and social media. Full article
(This article belongs to the Section Intelligent Sensors)
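
The abstract describes a Re-Identifiability Index (RII) computed from soft biometric cues (gait, hairstyle, clothing, skin tone, age, gender). As a rough illustration of how such an index could be scored and mapped to an obfuscation level, the sketch below combines per-attribute confidences with weights; the attribute names, weights, thresholds, and aggregation rule are assumptions for illustration, not the paper's formula.

```python
# Illustrative sketch of a soft-biometric re-identifiability score.
# Weights, thresholds, and the aggregation rule are assumptions, not the
# RII definition used in the paper.
SOFT_BIOMETRIC_WEIGHTS = {
    "gait": 0.25, "hairstyle": 0.15, "clothing": 0.20,
    "skin_tone": 0.10, "age": 0.15, "gender": 0.15,
}

def re_identifiability_index(attribute_confidences: dict[str, float]) -> float:
    """Weighted sum of per-attribute recognisability, clipped to [0, 1]."""
    score = sum(
        SOFT_BIOMETRIC_WEIGHTS[name] * min(max(conf, 0.0), 1.0)
        for name, conf in attribute_confidences.items()
        if name in SOFT_BIOMETRIC_WEIGHTS
    )
    return min(score, 1.0)

def required_privacy_level(rii: float, user_red_line: bool = False) -> str:
    """Map the index to an obfuscation level; thresholds are illustrative."""
    if user_red_line or rii >= 0.7:
        return "strong_obfuscation"      # e.g. full blur / pixelation
    if rii >= 0.4:
        return "partial_obfuscation"     # e.g. face-region masking
    return "no_obfuscation"

if __name__ == "__main__":
    detections = {"gait": 0.8, "clothing": 0.9, "age": 0.5, "gender": 0.95}
    rii = re_identifiability_index(detections)
    print(rii, required_privacy_level(rii))
```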

25 pages, 877 KB  
Article
Cyber Coercion Detection Using LLM-Assisted Multimodal Biometric System
by Abdulaziz Almehmadi
Appl. Sci. 2025, 15(19), 10658; https://doi.org/10.3390/app151910658 - 2 Oct 2025
Abstract
Cyber coercion, where legitimate users are forced to perform actions under duress, poses a serious insider threat to modern organizations, especially to critical infrastructure. Traditional security controls and monitoring tools struggle to distinguish coerced actions from normal user actions. In this paper, we propose a cyber coercion detection system that analyzes a user’s activity using an integrated large language model (LLM) to evaluate contextual cues from user commands or actions and current policies and procedures. If the LLM indicates coercion, behavioral methods, such as keystroke dynamics and mouse usage patterns, and physiological signals such as heart rate are analyzed to detect stress or anomalies indicative of duress. Experimental results indicate that the LLM-assisted multimodal approach can detect coercive activity both with and without explicit coercive communication, with the multimodal biometrics reinforcing the LLM’s confidence in cases where no coercive communication is detected. The proposed system may add a critical detection capability against coercion-based cyber-attacks, providing early warning signals that could inform defensive responses before damage occurs. Full article
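
To make the fusion idea concrete, here is a minimal sketch of blending an LLM-derived coercion probability with behavioural and physiological anomaly scores relative to a user's baseline; the feature names, weighting scheme, and thresholds are assumptions for illustration, not the system described in the paper.

```python
import numpy as np

def anomaly_z(value: float, baseline_mean: float, baseline_std: float) -> float:
    """Deviation of a live measurement from the user's enrolled baseline."""
    return abs(value - baseline_mean) / max(baseline_std, 1e-6)

def coercion_risk(llm_coercion_prob: float,
                  keystroke_interval_ms: float,
                  mouse_speed_px_s: float,
                  heart_rate_bpm: float,
                  baseline: dict) -> float:
    """Blend the LLM's contextual judgement with biometric anomaly evidence.

    When the LLM detects no coercive wording, the biometric term carries more
    of the decision; the weights are illustrative, not calibrated.
    """
    biometric = np.mean([
        anomaly_z(keystroke_interval_ms, *baseline["keystroke"]),
        anomaly_z(mouse_speed_px_s, *baseline["mouse"]),
        anomaly_z(heart_rate_bpm, *baseline["heart_rate"]),
    ])
    biometric = float(np.tanh(biometric / 3.0))   # squash to [0, 1)
    w_llm = 0.6 if llm_coercion_prob >= 0.5 else 0.3
    return w_llm * llm_coercion_prob + (1.0 - w_llm) * biometric

baseline = {"keystroke": (180.0, 25.0), "mouse": (420.0, 80.0), "heart_rate": (72.0, 6.0)}
print(coercion_risk(0.2, 260.0, 150.0, 95.0, baseline))
```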

16 pages, 3907 KB  
Article
Addressing Air Pollution Challenges: An Integrated Algorithmic Approach Towards Safeguarding Built Heritage
by Dimitrios Mitsos and Vassilis Poulopoulos
Algorithms 2025, 18(10), 619; https://doi.org/10.3390/a18100619 - 1 Oct 2025
Abstract
Air pollution poses significant risks to built heritage, yet traditional methods for diagnosing degradation patterns remain largely fragmented, often relying on isolated data streams and/or subjective comparative interpretations. This study proposes a novel modular workflow that integrates Raman spectroscopy and micro-XRF spectrometry data with user-defined contextual metadata to automate the characterisation of pollution-induced degradation layers on monuments. This method utilises algorithms for peak detection, dimensionality reduction, unsupervised machine learning clustering, variance analysis across centroids, and correlation analysis, as well as steps for data re-encoding and visualisation of the results, allowing for scalable and reproducible analyses on heterogeneous multidimensional datasets. Applied to case studies from Athens, Piraeus, and Eleusis, Greece, the workflow successfully identified pollution sources and degradation patterns, while also quantifying the contribution of features, including contextual variables such as surface orientation and sampling height. The results validate the method’s capacity to combine molecular and elemental data streams, to enhance interpretive clarity, and to minimise manual effort and subjectivity. This work showcases the potential of algorithmic approaches in cultural heritage diagnostics to adapt dynamically and incorporate additional datasets and informs future applications of automated methods in the broader field of heritage science. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)
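
The modular workflow (peak detection, dimensionality reduction, unsupervised clustering, correlation with contextual metadata) maps naturally onto standard scientific-Python tooling. The sketch below chains SciPy peak detection, PCA, and k-means on synthetic spectra; it is a generic illustration of the pipeline stages on invented data, not the authors' code, and all parameters are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for Raman / micro-XRF spectra: 60 samples x 1000 channels.
spectra = rng.normal(0.0, 0.05, size=(60, 1000))
spectra[:30, 400:410] += 1.0   # one group shares a gypsum-like band (invented)
spectra[30:, 700:710] += 1.0   # the other a different degradation marker

# Stage 1: peak detection per spectrum (prominence threshold is illustrative).
peak_positions = [find_peaks(s, prominence=0.5)[0] for s in spectra]

# Stage 2: dimensionality reduction of the raw spectra.
embedding = PCA(n_components=5, random_state=0).fit_transform(spectra)

# Stage 3: unsupervised clustering into candidate degradation-layer types.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)

# Stage 4: relate clusters to contextual metadata (e.g. sampling height).
height_m = np.concatenate([rng.uniform(0, 2, 30), rng.uniform(2, 6, 30)])
for k in np.unique(labels):
    idx = np.where(labels == k)[0]
    print(f"cluster {k}: mean sampling height {height_m[idx].mean():.2f} m, "
          f"typical peak count {np.mean([len(peak_positions[i]) for i in idx]):.1f}")
```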

18 pages, 1571 KB  
Article
Decision Support Systems for Time Series in Sport: Literature Review and Applied Example of Changepoint-Based Most Demanding Scenario Analysis in Basketball
by Xavier Schelling, Bartholomew Spencer, Victor Azalbert, Enrique Alonso-Perez-Chao, Carlos Sosa and Sam Robertson
Appl. Sci. 2025, 15(19), 10575; https://doi.org/10.3390/app151910575 - 30 Sep 2025
Abstract
Decision Support Systems (DSSs) are increasingly shaping high-performance sport by translating complex time series data into actionable insights for coaches and practitioners. This paper outlines a structured, five-stage DSS development pipeline, grounded in the Schelling and Robertson framework, and demonstrates its application in professional basketball. Using changepoint analysis, we present a novel approach to dynamically quantify Most Demanding Scenarios (MDSs) using high-resolution optical tracking data in this context. Unlike fixed-window methods, this approach adapts scenario duration to real performance, improving the ecological validity and practical interpretation of MDS metrics for athlete profiling, benchmarking, and training prescription. The system is realized as an interactive web dashboard, providing intuitive visualizations and individualized feedback by integrating validated workload metrics with contextual game information. Practitioners can rapidly distinguish normative from outlier performance periods, guiding recovery and conditioning strategies, and more accurately replicating game demands in training. While illustrated in basketball, the pipeline and principles are broadly transferable, offering a replicable blueprint for integrating context-aware analytics and enhancing data-driven decision-making in elite sport. Full article
(This article belongs to the Special Issue State-of-the-Art of Intelligent Decision Support Systems)
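
As a minimal illustration of changepoint-based segmentation of a workload trace (the paper's optical tracking data and exact model choices are not reproduced here), the sketch below applies PELT from the `ruptures` package to a synthetic intensity series and reports the highest-demand segment; the cost model, penalty, and data are assumptions.

```python
import numpy as np
import ruptures as rpt  # pip install ruptures

rng = np.random.default_rng(1)
# Synthetic per-second movement-intensity trace (e.g. speed in m/s) with
# three regimes standing in for game phases of differing physical demand.
signal = np.concatenate([
    rng.normal(2.0, 0.3, 300),   # low demand
    rng.normal(4.5, 0.5, 120),   # most demanding scenario candidate
    rng.normal(3.0, 0.4, 240),   # moderate demand
])

# PELT with an RBF cost; the penalty controls how many changepoints survive.
algo = rpt.Pelt(model="rbf", min_size=30, jump=5).fit(signal.reshape(-1, 1))
breakpoints = algo.predict(pen=10)

# Turn breakpoints into segments and report the most demanding one; unlike a
# fixed rolling window, segment durations adapt to the detected regimes.
segments = list(zip([0] + breakpoints[:-1], breakpoints))
most_demanding = max(segments, key=lambda seg: signal[seg[0]:seg[1]].mean())
print("segments:", segments)
print("most demanding scenario:", most_demanding,
      "mean intensity:", round(signal[most_demanding[0]:most_demanding[1]].mean(), 2))
```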

20 pages, 2545 KB  
Article
LG-UNet Based Segmentation and Survival Prediction of Nasopharyngeal Carcinoma Using Multimodal MRI Imaging
by Yuhao Yang, Junhao Wen, Tianyi Wu, Jinrang Dong, Yunfei Xia and Yu Zhang
Bioengineering 2025, 12(10), 1051; https://doi.org/10.3390/bioengineering12101051 - 29 Sep 2025
Abstract
Image segmentation and survival prediction for nasopharyngeal carcinoma (NPC) are crucial for clinical diagnosis and treatment decisions. This study presents an improved 3D-UNet-based model for NPC GTV segmentation, referred to as LG-UNet. The encoder introduces deep strip convolution and channel attention mechanisms to enhance feature extraction while avoiding spatial feature loss and anisotropic constraints. The decoder incorporates Dynamic Large Convolutional Kernel (DLCK) and Global Feature Fusion (GFF) modules to capture multi-scale features and integrate global contextual information, enabling precise segmentation of the tumor GTV in NPC MRI images. Risk prediction is performed on the segmented multi-modal MRI images using the Lung-Net model, with output risk factors combined with clinical data in the Cox model to predict metastatic probabilities for NPC lesions. Experimental results on 442 NPC MRI scans from Sun Yat-sen University Cancer Center showed DSC of 0.8223, accuracy of 0.8235, recall of 0.8297, and HD95 of 1.6807 mm. Compared to the baseline model, the DSC improved by 7.73%, accuracy increased by 4.52%, and recall improved by 3.40%. The combined model’s risk prediction showed C-index values of 0.756, with a 5-year AUC value of 0.789. This model can serve as an auxiliary tool for clinical decision-making in NPC. Full article
(This article belongs to the Section Biosignal Processing)
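
The encoder changes described (deep strip convolutions plus channel attention) can be illustrated with a small PyTorch module. The sketch below is a generic 2D stand-in under assumed channel sizes and kernel lengths; the actual LG-UNet encoder operates on 3D MRI volumes, so this is not the paper's implementation.

```python
import torch
import torch.nn as nn

class StripConvChannelAttention(nn.Module):
    """Horizontal + vertical strip convolutions followed by SE-style channel attention."""
    def __init__(self, channels: int, strip: int = 7, reduction: int = 4):
        super().__init__()
        pad = strip // 2
        self.strip_h = nn.Conv2d(channels, channels, (1, strip), padding=(0, pad), groups=channels)
        self.strip_v = nn.Conv2d(channels, channels, (strip, 1), padding=(pad, 0), groups=channels)
        self.mix = nn.Conv2d(channels, channels, 1)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.mix(self.strip_h(x) + self.strip_v(x))
        return x + feat * self.attn(feat)   # residual, channel-reweighted

x = torch.randn(1, 32, 64, 64)
print(StripConvChannelAttention(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```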

18 pages, 1003 KB  
Article
Weathering the Storm: Dynamic Capabilities and Supply Chain Agility in Supply Chain Resilience
by Marie Legg, Reginald A. Silver and Sungjune Park
Logistics 2025, 9(4), 136; https://doi.org/10.3390/logistics9040136 - 25 Sep 2025
Abstract
Background: Despite growing interest in supply chain resilience (SCRes), theoretical overlap between dynamic capabilities (DC) and supply chain agility (SCA) has complicated empirical analysis of their distinct roles. Additionally, the contextual role of information asymmetry (IA) in shaping resilience remains underexplored. This study addresses both issues by modeling DC hierarchically and examining IA as a moderator. Methods: Data were collected through a cross-sectional survey of 157 U.S.-based supply chain professionals. Partial least squares structural equation modeling (PLS-SEM) was used to examine the relationships among DC, SCA, IA, and SCRes. Results: SCA was a strong, direct predictor of SCRes. In contrast, DC showed no direct effect in the full model; however, in a hierarchical component model (HCM), DC, modeled as a higher-order construct, emerged as a significant predictor of SCRes. IA exerted a dual negative influence: it directly weakened SCRes and negatively moderated the relationship between DC and SCRes. Conclusions: This study makes two novel contributions. First, it resolves ambiguity between DC and SCA by empirically modeling DC as a higher-order construct that encompasses but remains distinct from SCA. Second, it introduces IA as a multidimensional barrier to resilience, demonstrating its direct and interactive effects. These findings provide new insight into capability design and contextual adaptation for SCRes in uncertain, information-constrained environments. Full article
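
The moderation finding (IA weakening the DC-to-SCRes link) is the kind of effect usually tested with an interaction term. The paper uses PLS-SEM; purely as a simpler illustration of the same logic, the sketch below fits an OLS model with a DC x IA interaction on synthetic survey-style data using `statsmodels`. The variable names, effect sizes, and data are all invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 157  # matches the survey's sample size; the data itself is synthetic
dc = rng.normal(0, 1, n)    # dynamic capabilities (standardised composite)
ia = rng.normal(0, 1, n)    # information asymmetry
sca = 0.6 * dc + rng.normal(0, 1, n)                               # agility partly driven by DC
scres = 0.5 * sca + 0.3 * dc - 0.25 * ia - 0.2 * dc * ia + rng.normal(0, 1, n)

df = pd.DataFrame({"dc": dc, "ia": ia, "sca": sca, "scres": scres})
# "dc * ia" expands to dc + ia + dc:ia; a negative dc:ia coefficient would
# mirror the reported negative moderation of the DC -> SCRes relationship.
model = smf.ols("scres ~ sca + dc * ia", data=df).fit()
print(model.summary().tables[1])
```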

27 pages, 9667 KB  
Article
REU-YOLO: A Context-Aware UAV-Based Rice Ear Detection Model for Complex Field Scenes
by Dongquan Chen, Kang Xu, Wenbin Sun, Danyang Lv, Songmei Yang, Ranbing Yang and Jian Zhang
Agronomy 2025, 15(9), 2225; https://doi.org/10.3390/agronomy15092225 - 20 Sep 2025
Viewed by 266
Abstract
Accurate detection and counting of rice ears serve as a critical indicator for yield estimation, but the complex conditions of paddy fields limit the efficiency and precision of traditional sampling methods. We propose REU-YOLO, a model specifically designed for rice ear images collected by UAV low-altitude remote sensing, to address issues such as high density, complex spatial distribution, and occlusion in field scenes. Initially, we combine the Additive Block containing Convolutional Additive Self-attention (CAS) and Convolutional Gated Linear Unit (CGLU) to propose a novel module called Additive-CGLU-C2F (AC-C2f) as a replacement for the original C2f in YOLOv8. This module captures contextual information between different image regions and improves the feature extraction ability of the model; we also introduce the DropBlock strategy to reduce overfitting and replace the original SPPF module with the SPPFCSPC-G module to enhance feature representation and the model's capacity to extract features across varying scales. We further propose a feature fusion network called Multi-branch Bidirectional Feature Pyramid Network (MBiFPN), which introduces a small object detection head and adjusts the head to focus more on small and medium-sized rice ear targets. By using adaptive average pooling and bidirectional weighted feature fusion, shallow and deep features are dynamically fused to enhance the robustness of the model. Finally, the Inner-PloU loss function is introduced to improve the adaptability of the model to rice ear morphology. On the self-developed dataset UAVR, REU-YOLO achieves a precision (P) of 90.76%, a recall (R) of 86.94%, an mAP0.5 of 93.51%, and an mAP0.5:0.95 of 78.45%, which are 4.22%, 3.76%, 4.85%, and 8.27% higher than the corresponding values obtained with YOLOv8s, respectively. Furthermore, three public datasets, DRPD, MrMT, and GWHD, were used to perform a comprehensive evaluation of REU-YOLO. The results show that REU-YOLO generalizes well and delivers more stable detection performance. Full article
(This article belongs to the Section Precision and Digital Agriculture)

24 pages, 2338 KB  
Article
DynaNet: A Dynamic Feature Extraction and Multi-Path Attention Fusion Network for Change Detection
by Xue Li, Dong Li, Jiandong Fang and Xueying Feng
Sensors 2025, 25(18), 5832; https://doi.org/10.3390/s25185832 - 18 Sep 2025
Viewed by 349
Abstract
Existing change detection methods often struggle with both inadequate feature fusion and interference from background noise when processing bi-temporal remote sensing imagery. These challenges are particularly pronounced in building change detection, where capturing subtle spatial and semantic dependencies is critical. To address these issues, we propose DynaNet, a dynamic feature extraction and multi-path attention fusion network for change detection. Specifically, we design a Dynamic Feature Extractor (DFE) that leverages a cross-temporal gating mechanism to amplify relevant change signals while suppressing irrelevant variations, enabling high-quality feature alignment. A Contextual Attention Module (CAM) is then employed to incorporate global contextual information, further enhancing the discriminative capability of change regions. Additionally, a Multi-Branch Attention Fusion Module (MBAFM) is introduced to model inter-scale semantic relationships through self- and cross-attention mechanisms, thereby improving the detection of fine-grained structural changes. To facilitate robust evaluation, we present a new benchmark dataset, Inner-CD, comprising 800 pairs of 256 × 256 bi-temporal satellite images with 0.5–2 m spatial resolution. Unlike existing datasets, Inner-CD features abundant buildings in both temporal images, with changes manifested as subtle morphological variations. Extensive experiments demonstrate that DynaNet achieves state-of-the-art performance, obtaining F1-scores of 90.92% on Inner-CD, 92.38% on LEVIR-CD, and 94.35% on WHU-CD. Full article
(This article belongs to the Section Sensing and Imaging)
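
The Dynamic Feature Extractor's cross-temporal gating idea (amplify change-relevant responses between the two acquisition dates, suppress irrelevant variation) can be sketched as a small PyTorch module. The gate form, channel sizes, and fusion step are assumptions for illustration, not the DynaNet code.

```python
import torch
import torch.nn as nn

class CrossTemporalGate(nn.Module):
    """Gate each temporal branch by evidence of change between the two inputs."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat_t1: torch.Tensor, feat_t2: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([feat_t1, feat_t2], dim=1))   # change-evidence gate
        feat_t1 = feat_t1 * g          # emphasise change-relevant responses
        feat_t2 = feat_t2 * g
        return self.fuse(torch.cat([feat_t1, feat_t2], dim=1))

t1, t2 = torch.randn(2, 1, 64, 128, 128).unbind(0)   # two aligned temporal feature maps
print(CrossTemporalGate(64)(t1, t2).shape)            # torch.Size([1, 64, 128, 128])
```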

20 pages, 2051 KB  
Article
A Study on the Evolution of Online Public Opinion During Major Public Health Emergencies Based on Deep Learning
by Yimin Yang, Julin Wang and Ming Liu
Mathematics 2025, 13(18), 3021; https://doi.org/10.3390/math13183021 - 18 Sep 2025
Viewed by 194
Abstract
This study investigates the evolution of online public opinion during the COVID-19 pandemic by integrating topic mining with sentiment analysis. To overcome the limitations of traditional short-text models and improve the accuracy of sentiment detection, we propose a novel hybrid framework that combines a GloVe-enhanced Biterm Topic Model (BTM) for semantic-aware topic clustering with a RoBERTa-TextCNN architecture for deep, context-rich sentiment classification. The framework is specifically designed to capture both the global semantic relationships of words and the dynamic contextual nuances of social media discourse. Using a large-scale corpus of more than 550,000 Weibo posts, we conducted comprehensive experiments to evaluate the model’s effectiveness. The proposed approach achieved an accuracy of 92.45%, significantly outperforming a transformer-based baseline representative of advanced contextual embedding models across multiple evaluation metrics, including precision, recall, F1-score, and AUC. These results confirm the robustness and stability of the hybrid design and demonstrate its advantages in balancing precision and recall. Beyond methodological validation, the empirical analysis provides important insights into the dynamics of online public discourse. User engagement is found to be highest for the topics directly tied to daily life, with discussions about quarantine conditions alone accounting for 42.6% of total discourse. Moreover, public sentiment proves to be highly volatile and event-driven; for example, the announcement of Wuhan’s reopening produced an 11% surge in positive sentiment, reflecting a collective emotional uplift at a major turning point of the pandemic. Taken together, these findings demonstrate that online discourse evolves in close connection with both societal conditions and government interventions. The proposed topic–sentiment analysis framework not only advances methodological research in text mining and sentiment analysis, but also has the potential to serve as a practical tool for real-time monitoring of online opinion. By capturing the fluctuations of public sentiment and identifying emerging themes, this study aims to provide insights that could inform policymaking by suggesting strategies to guide emotional contagion, strengthen crisis communication, and promote constructive public debate during health emergencies. Full article
(This article belongs to the Special Issue AI, Machine Learning and Optimization)
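
The classification half of such a pipeline (a TextCNN head over contextual token embeddings like RoBERTa's last hidden states) can be sketched compactly in PyTorch. To stay self-contained, the sketch takes pre-computed token embeddings as input; the kernel sizes, channel counts, and number of classes are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class TextCNNHead(nn.Module):
    """Convolutional sentiment head over a sequence of contextual token embeddings."""
    def __init__(self, embed_dim: int = 768, num_classes: int = 3,
                 kernel_sizes=(2, 3, 4), channels: int = 128):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.classifier = nn.Linear(channels * len(kernel_sizes), num_classes)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, embed_dim), e.g. RoBERTa's last hidden state
        x = token_embeddings.transpose(1, 2)                 # -> (batch, embed_dim, seq_len)
        pooled = [conv(x).relu().max(dim=-1).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=-1))    # class logits

embeddings = torch.randn(4, 64, 768)    # stand-in for encoder output on 4 posts
print(TextCNNHead()(embeddings).shape)  # torch.Size([4, 3])
```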

20 pages, 3989 KB  
Article
A2DSC-Net: A Network Based on Multi-Branch Dilated and Dynamic Snake Convolutions for Water Body Extraction
by Shuai Zhang, Chao Zhang, Qichao Zhao, Junjie Ma and Pengpeng Zhang
Water 2025, 17(18), 2760; https://doi.org/10.3390/w17182760 - 18 Sep 2025
Viewed by 256
Abstract
The accurate and efficient acquisition of the spatiotemporal distribution of surface water is of vital importance for water resource utilization, flood monitoring, and environmental protection. However, deep learning models often suffer from two major limitations when applied to high-resolution remote sensing imagery: the loss of small water body features due to encoder scale differences, and reduced boundary accuracy for narrow water bodies in complex backgrounds. To address these challenges, we introduce the A2DSC-Net, which offers two key innovations. First, a multi-branch dilated convolution (MBDC) module is designed to capture contextual information across multiple spatial scales, thereby enhancing the recognition of small water bodies. Second, a Dynamic Snake Convolution module is introduced to adaptively extract local features and integrate global spatial cues, significantly improving the delineation accuracy of narrow water bodies under complex background conditions. Ablation and comparative experiments were conducted under identical settings using the LandCover.ai and Gaofen Image Dataset (GID). The results show that A2DSC-Net achieves an average precision of 96.34%, average recall of 96.19%, average IoU of 92.8%, and average F1-score of 96.26%, outperforming classical segmentation models such as U-Net, DeepLabv3+, DANet, and PSPNet. These findings demonstrate that A2DSC-Net provides an effective and reliable solution for water body extraction from high-resolution remote sensing imagery. Full article
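
A multi-branch dilated convolution block of the general kind described (parallel branches with different dilation rates, fused to capture context at several spatial scales) can be sketched as follows; the dilation rates and channel counts are assumptions, not the A2DSC-Net MBDC configuration.

```python
import torch
import torch.nn as nn

class MultiBranchDilatedConv(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation, fused by a 1x1 conv."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        self.project = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch sees the same input at a different receptive-field size.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 128, 128)
print(MultiBranchDilatedConv(64, 64)(x).shape)  # torch.Size([1, 64, 128, 128])
```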

24 pages, 2616 KB  
Article
Symmetric Affix–Context Co-Attention: A Dual-Gating Framework for Robust POS Tagging in Low-Resource MRLs
by Yuan Qi, Samat Ali and Alim Murat
Symmetry 2025, 17(9), 1561; https://doi.org/10.3390/sym17091561 - 18 Sep 2025
Viewed by 296
Abstract
Part-of-speech (POS) tagging in low-resource, morphologically rich languages (LRLs/MRLs) remains challenging due to extensive affixation, high out-of-vocabulary (OOV) rates, and pervasive polysemy. We propose MRL-POS, a unified Transformer-CRF framework that dynamically selects informative affix features and integrates them with deep contextual embeddings via a novel dual-gating co-attention mechanism. First, a Dynamic Affix Selector adaptively adjusts n-gram ranges and frequency thresholds based on word length to ensure high-precision affix segmentation. Second, the Affix–Context Co-Attention Module employs two gating functions that conditionally amplify contextual dimensions with affix cues and vice versa, enabling robust disambiguation of complex and ambiguous forms. Third, Layer-Wise Attention Pooling aggregates multi-layer XLM-RoBERTa representations, emphasizing those most relevant for morphological and syntactic tagging. Evaluations on Uyghur, Kyrgyz, and Uzbek show that MRL-POS achieves an average F1 of 84.10%, OOV accuracy of 84.24%, and Poly-F1 of 72.14%, outperforming strong baselines by up to 8 F1 points. By explicitly modeling the symmetry between morphological affix cues and sentence-level context through a dual-gating co-attention mechanism, MRL-POS achieves a balanced fusion that both preserves local structure and captures global dependencies. Interpretability analyses confirm that 89.1% of the selected affixes align with linguistic expectations. This symmetric design not only enhances robustness in low-resource and agglutinative settings but also offers a general paradigm for symmetry-aware sequence labeling tasks. Full article

5 pages, 160 KB  
Proceeding Paper
Abductive Intelligence, Creativity, Generative AI: The Role of Eco-Cognitive Openness and Situatedness
by Lorenzo Magnani
Proceedings 2025, 126(1), 10; https://doi.org/10.3390/proceedings2025126010 - 17 Sep 2025
Viewed by 260
Abstract
I recently developed the concept of eco-cognitive openness and situatedness to explain how cognitive systems, whether human or artificial, engage dynamically with their surroundings to generate information and creative outcomes through abductive cognition. Human cognition demonstrates significant eco-cognitive openness, utilizing external resources like tools and cultural contexts to produce contextually rich hypotheses, sometimes highly creative via what I called “unlocked strategies.” Conversely, generative AI, such as large language models (LLMs) and image generators, employs “locked strategies,” relying on pre-existing datasets with minimal real-time environmental interaction—this leads to limited creativity. While these systems can yield some low-level degrees of creative outputs, their lack of human-like eco-cognitive openness restricts their ability to achieve high-level creative abductive feats, which remain a human strength, especially among the most talented. However, LLMs often outperform humans in routine cognitive tasks, exposing human intellectual limitations rather than AI deficiencies. Much human cognition is repetitive and imitative, resembling “stochastic parrots,” much like LLMs. Thus, LLMs are potent cognitive tools that can enhance human performance but also endanger creativity. Future AI developments, such as human–AI partnerships, could improve eco-cognitive openness, but risks like bias and overcomputationalization necessitate human oversight to ensure meaningful results. In collaborative settings, generative AI can serve as an epistemic mediator, narrowing the gap toward unlocked creativity. To safeguard human creativity, control over AI output must be maintained, embedding them in socio-cultural contexts. I also express concern that ethical and legal frameworks to mitigate AI’s negative impacts may fail to be enforced, risking “ethics washing” and “law washing.” Full article
21 pages, 37484 KB  
Article
Reconstructing Hyperspectral Images from RGB Images by Multi-Scale Spectral–Spatial Sequence Learning
by Wenjing Chen, Lang Liu and Rong Gao
Entropy 2025, 27(9), 959; https://doi.org/10.3390/e27090959 - 15 Sep 2025
Viewed by 455
Abstract
With rapid advancements in transformers, the reconstruction of hyperspectral images from RGB images, also known as spectral super-resolution (SSR), has made significant breakthroughs. However, existing transformer-based methods often struggle to balance computational efficiency with long-range receptive fields. Recently, Mamba has demonstrated linear complexity in modeling long-range dependencies and shown broad applicability in vision tasks. This paper proposes a multi-scale spectral–spatial sequence learning method, named MSS-Mamba, for reconstructing hyperspectral images from RGB images. First, we introduce a continuous spectral–spatial scan (CS3) mechanism to improve cross-dimensional feature extraction of the foundational Mamba model. Second, we propose a sequence tokenization strategy that generates multi-scale-aware sequences to overcome Mamba’s limitations in hierarchically learning multi-scale information. Specifically, we design the multi-scale information fusion (MIF) module, which tokenizes input sequences before feeding them into Mamba. The MIF employs a dual-branch architecture to process global and local information separately, dynamically fusing features through an adaptive router that generates weighting coefficients. This produces feature maps that contain both global contextual information and local details, ultimately reconstructing a high-fidelity hyperspectral image. Experimental results on the ARAD_1k, CAVE, and grss_dfc_2018 datasets demonstrate the performance of MSS-Mamba. Full article
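
The fusion idea behind the MIF module, a global branch and a local branch blended by an adaptive router that emits weighting coefficients, can be sketched in PyTorch as below; the branch designs and router form are assumptions for illustration, not the module itself.

```python
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    """Blend a global-context branch and a local-detail branch with learned weights."""
    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)   # local details
        self.global_ctx = nn.Sequential(                            # global context
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
        )
        self.router = nn.Sequential(                                # adaptive weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, 2, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local = self.local(x)
        glob = self.global_ctx(x).expand_as(x)
        w = torch.softmax(self.router(x), dim=1)   # (batch, 2, 1, 1) weighting coefficients
        return w[:, 0:1] * local + w[:, 1:2] * glob

x = torch.randn(2, 32, 48, 48)
print(DualBranchFusion(32)(x).shape)  # torch.Size([2, 32, 48, 48])
```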

29 pages, 20970 KB  
Article
A Semantic Energy-Aware Ontological Framework for Adaptive Task Planning and Allocation in Intelligent Mobile Systems
by Jun-Hyeon Choi, Dong-Su Seo, Sang-Hyeon Bae, Ye-Chan An, Eun-Jin Kim, Jeong-Won Pyo and Tae-Yong Kuc
Electronics 2025, 14(18), 3647; https://doi.org/10.3390/electronics14183647 - 15 Sep 2025
Viewed by 264
Abstract
Intelligent robotic systems frequently operate under stringent energy limitations, especially in complex and dynamic environments. To enhance both adaptability and reliability, this study introduces a semantic planning framework that integrates ontology-driven reasoning with energy awareness. The framework estimates energy consumption based on the platform-specific behavior of sensing, actuation, and computational modules while continuously updating place-level semantic representations using real-time execution data. These representations encode not only spatial and contextual semantics but also energy characteristics acquired from prior operational history. By embedding historical energy usage profiles into hierarchical semantic maps, this framework enables more efficient route planning and context-aware task assignment. A shared semantic layer facilitates coordinated planning for both single-robot and multi-robot systems, with the decisions informed by energy-centric knowledge. This approach remains hardware-independent and can be applied across diverse platforms, such as indoor service robots and ground-based autonomous vehicles. Experimental validation using a differential-drive mobile platform in a structured indoor setting demonstrates improvements in energy efficiency, the robustness of planning, and the quality of the task distribution. This framework effectively connects high-level symbolic reasoning with low-level energy behavior, providing a unified mechanism for energy-informed semantic decision-making. Full article
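
The core planning idea, route selection over a semantic place graph whose edges carry energy costs learned from prior traversals, can be illustrated with a tiny `networkx` example; the graph, place names, costs, and acceptance rule are invented for illustration and are not the framework's ontology or planner.

```python
import networkx as nx

# Semantic place graph: nodes are places, edge weights are expected energy (Wh)
# estimated from prior traversals; all numbers here are invented.
G = nx.Graph()
G.add_edge("charging_dock", "corridor_A", energy_wh=4.0)
G.add_edge("corridor_A", "lab_1", energy_wh=6.5)
G.add_edge("corridor_A", "corridor_B", energy_wh=3.0)
G.add_edge("corridor_B", "lab_1", energy_wh=2.5)   # longer but cheaper route
G.add_edge("corridor_B", "storage", energy_wh=5.0)

route = nx.shortest_path(G, "charging_dock", "lab_1", weight="energy_wh")
cost = nx.shortest_path_length(G, "charging_dock", "lab_1", weight="energy_wh")
print(route, cost)   # ['charging_dock', 'corridor_A', 'corridor_B', 'lab_1'] 9.5

# A simple energy-aware allocation rule: accept the task only if the battery
# margin covers the round trip plus a reserve (threshold is illustrative).
battery_wh, reserve_wh = 25.0, 8.0
accept = battery_wh - 2 * cost >= reserve_wh
print("accept task:", accept)
```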

18 pages, 3124 KB  
Article
TFHA: A Time–Frequency Harmonic Attention Framework for Analyzing Digital Management Strategy Impact Mechanisms
by Shu Cao and Can Zhou
Appl. Sci. 2025, 15(18), 9989; https://doi.org/10.3390/app15189989 - 12 Sep 2025
Viewed by 238
Abstract
In the era of digital transformation, understanding and quantifying the mechanisms by which management strategies influence organizational performance is a critical yet insufficiently addressed challenge. Existing analytical models often overlook the intertwined temporal dependencies, cross-frequency interactions, and heterogeneous contextual factors that shape strategic impacts in real-world settings. To address these limitations, we propose TFHA (Time–Frequency Harmonic Attention), a unified framework that integrates frequency-domain pattern decomposition, temporal context encoding, and multi-view representation learning to analyze and forecast strategy-driven performance outcomes in an interpretable manner. Specifically, a Fourier Frequency Attention module captures multi-scale periodic patterns underlying strategic behaviors, while a temporal feature embedding component encodes both static calendar effects and dynamic, event-triggered fluctuations. Furthermore, a Contrastive Time–Frequency Representation Enhancement module aligns semantic, behavioral, and quantitative perspectives to produce robust, context-aware representations. Experiments on four real-world datasets from digital tourism management platforms demonstrate that TFHA reduces MAE by up to 18.5% compared with strong baselines such as Autoformer, Informer, and ETSformer, while exhibiting strong robustness under input perturbations and cross-domain generalization. These results highlight TFHA’s potential as both a predictive tool and an analytical lens for revealing the time–frequency dynamics underpinning the effectiveness of digital brand management strategies in tourism contexts. Full article
(This article belongs to the Special Issue Advanced Methods for Time Series Forecasting)
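
A frequency-attention step of the general kind described (reweighting the FFT components of a series with a learned gate before transforming back) can be sketched with `torch.fft`; the gating network, shapes, and data are assumptions rather than the TFHA Fourier Frequency Attention module.

```python
import torch
import torch.nn as nn

class FourierFrequencyAttention(nn.Module):
    """Reweight each frequency bin of a series with a learned gate."""
    def __init__(self, seq_len: int):
        super().__init__()
        n_freq = seq_len // 2 + 1                     # rfft output length
        self.gate = nn.Sequential(nn.Linear(n_freq, n_freq), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len) univariate series, e.g. daily booking volume
        spec = torch.fft.rfft(x, dim=-1)              # complex spectrum
        weights = self.gate(spec.abs())               # attention over frequency bins
        return torch.fft.irfft(spec * weights, n=x.size(-1), dim=-1)

series = torch.randn(8, 96)    # 8 synthetic series of 96 time steps
print(FourierFrequencyAttention(96)(series).shape)   # torch.Size([8, 96])
```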
