Search Results (64)

Search Parameters:
Keywords = token frequency

17 pages, 3194 KB  
Article
Improved Real-Time Detection Transformer with Low-Frequency Feature Integrator and Token Statistics Self-Attention for Automated Grading of Stropharia rugoso-annulata Mushroom
by Yu-Hang He, Shi-Yun Duan and Wen-Hao Su
Foods 2025, 14(20), 3581; https://doi.org/10.3390/foods14203581 - 21 Oct 2025
Viewed by 288
Abstract
Manual grading of Stropharia rugoso-annulata mushroom is plagued by inefficiency and subjectivity, while existing detection models face inherent trade-offs between accuracy, real-time performance, and deployability on resource-constrained edge devices. To address these challenges, this study presents an Improved Real-Time Detection Transformer (RT-DETR) tailored for automated grading of Stropharia rugoso-annulata. Two innovative modules underpin the model: (1) the low-frequency feature integrator (LFFI), which leverages wavelet decomposition to preserve critical low-frequency global structural information, thereby enhancing the capture of large mushroom morphology; (2) the Token Statistics Self-Attention (TSSA) mechanism, which replaces traditional self-attention with second-moment statistical computations. This reduces complexity from O(n²) to O(n) and inherently generates interpretable attention patterns, augmenting model explainability. Experimental results demonstrate that the improved model achieves 95.2% mAP@0.5:0.95 at 262 FPS, with a substantial reduction in computational overhead compared to the original RT-DETR. It outperforms APHS-YOLO in both accuracy and efficiency, eliminates the need for non-maximum suppression (NMS) post-processing, and balances global structural awareness with local detail sensitivity. These attributes render it highly suitable for industrial edge deployment. This work offers an efficient framework for automated grading of large-target crops.
(This article belongs to the Section Food Engineering and Technology)
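The TSSA entry above turns on a complexity claim: replacing pairwise attention with aggregated token statistics drops the cost from O(n²) to O(n). A generic way to see this, not the paper's exact TSSA formulation (which uses second-moment statistics), is the associativity trick behind linear attention: grouping (QKᵀ)V as Q(KᵀV) builds a d×d summary once instead of an n×n matrix. All names below are illustrative.

```python
# Sketch: why reordering attention into a global statistic is linear in the
# sequence length n. Naive attention materializes an n x n matrix; grouping
# as Q (K^T V) accumulates a d x d statistic instead. No softmax here --
# softmax breaks the reordering, which is why linear-attention variants
# replace it with kernel- or statistic-based weightings.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    cols = list(zip(*b))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols] for row in a]

def transpose(a):
    return [list(r) for r in zip(*a)]

def linear_attention(q, k, v):
    """O(n*d^2): build the d x d statistic K^T V once, then apply Q to it."""
    stat = matmul(transpose(k), v)   # d x d, cost linear in n
    return matmul(q, stat)           # n x d

def quadratic_attention(q, k, v):
    """O(n^2*d): the naive grouping (Q K^T) V, for comparison."""
    return matmul(matmul(q, transpose(k)), v)
```

By associativity the two orderings produce identical outputs; only the cost differs.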

26 pages, 52162 KB  
Article
ASFT-Transformer: A Fast and Accurate Framework for EEG-Based Pilot Fatigue Recognition
by Jiming Liu, Yi Zhou, Qileng He and Zhenxing Gao
Sensors 2025, 25(19), 6256; https://doi.org/10.3390/s25196256 - 9 Oct 2025
Viewed by 568
Abstract
Objective evaluation of pilot fatigue is crucial for enhancing aviation safety. Although electroencephalography (EEG) is regarded as an effective tool for recognizing pilot fatigue, the direct application of deep learning models to raw EEG signals faces significant challenges due to issues such as massive data volume, excessively long training time, and model overfitting. Moreover, existing feature-based methods often suffer from data redundancy due to the lack of effective feature and channel selections, which compromises the model’s recognition efficiency and accuracy. To address these issues, this paper proposes a framework, named ASFT-Transformer, for fast and accurate detection of pilot fatigue. This framework first extracts time-domain and frequency-domain features from the four EEG frequency bands. Subsequently, it introduces a feature and channel selection strategy based on one-way analysis of variance and support vector machine (ANOVA-SVM) to identify the most fatigue-relevant features and pivotal EEG channels. Finally, the FT-Transformer (Feature Tokenizer + Transformer) model is employed for classification based on the selected features, transforming the fatigue recognition problem into a tabular data classification task. EEG data is collected from 32 pilots before and after actual simulator training to validate the proposed method. The results show that ASFT-Transformer achieved average accuracies of 97.24% and 87.72% based on cross-clip data partitioning and cross-subject data partitioning, which were significantly superior to several mainstream machine learning and deep learning models. Under the two types of cross-validation, the proposed feature and channel selection strategy not only improved the average accuracy by 2.45% and 8.07%, respectively, but also drastically reduced the average training time from above 1 h to under 10 min. This study offers civil aviation authorities and airline operators a tool to manage pilot fatigue objectively and effectively, thereby contributing to flight safety.
(This article belongs to the Section Biomedical Sensors)
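The ANOVA half of the ANOVA-SVM selection strategy described above ranks each candidate feature by how well it separates the classes. A minimal sketch of the one-way ANOVA F-statistic such a ranking relies on; the SVM stage is omitted, and the function and variable names are illustrative, not from the paper.

```python
# One-way ANOVA F-statistic for a single feature: ratio of between-class
# to within-class variance. Higher F = the feature's class means are more
# separated relative to its within-class spread, so it ranks higher.

def anova_f(groups):
    """groups: one list of feature values per class; returns the F-statistic."""
    k = len(groups)                          # number of classes
    n = sum(len(g) for g in groups)          # total samples
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares (k - 1 degrees of freedom)
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares (n - k degrees of freedom)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))
```

Scoring every feature this way and keeping the top scorers is the standard ANOVA filter step; selected features would then feed the SVM and FT-Transformer stages.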

31 pages, 4278 KB  
Article
Acoustic Analysis of Semi-Rigid Base Asphalt Pavements Based on Transformer Model and Parallel Cross-Gate Convolutional Neural Network
by Changfeng Hao, Min Ye, Boyan Li and Jiale Zhang
Appl. Sci. 2025, 15(16), 9125; https://doi.org/10.3390/app15169125 - 19 Aug 2025
Viewed by 456
Abstract
Semi-rigid base asphalt pavements, a common highway structure in China, often suffer from debonding defects which reduce road stability and shorten service life. In this study, a new method of road debonding detection based on the acoustic vibration method is proposed to address the needs of hidden debonding defects which are difficult to detect. The approach combines the Transformer model and the Transformer-based Parallel Cross-Gated Convolutional Neural Network (T-PCG-CNN) to classify and recognize semi-rigid base asphalt pavement acoustic data. Firstly, over a span of several years, an excitation device was designed and employed to collect acoustic data from different road types, creating a dedicated multi-sample dataset specifically for semi-rigid base asphalt pavements. Secondly, the improved Mel frequency cepstral coefficient (MFCC) feature and its first-order differential features (ΔMFCC) and second-order differential features (Δ²MFCC) are adopted as the input data of the network for different sample acoustic signal characteristics. Then, the proposed T-PCG-CNN model fuses the multi-frequency feature extraction advantage of a parallel cross-gate convolutional network and the long-time dependency capture ability of the Transformer model to improve the classification performance of different road acoustic features. Comprehensive experiments were conducted to analyze parameter sensitivity, feature combination strategies, and comparisons with existing classification algorithms. The results demonstrate that the proposed model achieves high accuracy and weighted F1 score. The confusion matrix indicates high per-class recall (including debonding), and the one-vs-rest ROC curves (AUC ≥ 0.95 for all classes) confirm strong class separability with low false-alarm trade-offs across operating thresholds. Moreover, the use of blockwise self-attention with global tokens and shared weight matrices significantly reduces model complexity and size. In the multi-type road data classification test, the classification accuracy reaches 0.9208 and the weighted F1 value reaches 0.9315, which is significantly better than the existing methods, demonstrating its generalizability in the identification of multiple road defect types.
(This article belongs to the Section Civil Engineering)
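The first- and second-order differential features (ΔMFCC, Δ²MFCC) mentioned above are conventionally computed with a regression over a small window of neighboring frames; applying the same operator twice yields the second-order deltas. A sketch under that standard HTK-style formula, which may differ in detail from the paper's implementation; edge handling by index clamping is one common choice among several.

```python
# Delta features via the standard regression formula over a +/-n frame
# window: delta_t = sum_i i*(c_{t+i} - c_{t-i}) / (2 * sum_i i^2).
# Edge frames are handled by clamping indices to the valid range.

def delta(frames, n=2):
    """frames: list of per-frame coefficient lists; returns delta features."""
    denom = 2 * sum(i * i for i in range(1, n + 1))
    out = []
    for t in range(len(frames)):
        row = []
        for c in range(len(frames[0])):
            acc = 0.0
            for i in range(1, n + 1):
                lo = frames[max(t - i, 0)][c]          # clamp at start
                hi = frames[min(t + i, len(frames) - 1)][c]  # clamp at end
                acc += i * (hi - lo)
            row.append(acc / denom)
        out.append(row)
    return out
```

Calling `delta(delta(mfcc))` on an MFCC sequence gives the Δ² features the abstract pairs with the static coefficients.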

18 pages, 1345 KB  
Article
Detecting Structural Changes in Bitcoin, Altcoins, and the S&P 500 Using the GSADF Test: A Comparative Analysis of 2024 Trends
by Azusa Yamaguchi
J. Risk Financial Manag. 2025, 18(8), 450; https://doi.org/10.3390/jrfm18080450 - 12 Aug 2025
Viewed by 2089
Abstract
Understanding structural regime shifts in crypto asset markets is vital for early detection of systemic risk. This study applies the Generalized Sup Augmented Dickey–Fuller (GSADF) test to daily high-frequency price data of five major crypto assets—BTC, ETH, SOL, AAVE, and BCH—from 2023 to 2025. The results reveal asset-specific structural breaks: BTC and BCH aligned with macroeconomic shocks, while DeFi tokens (e.g., AAVE, SOL) exhibited fragmented, project-driven shifts. The S&P 500 index, in contrast, showed no persistent regime shifts, indicating greater structural stability. To examine inter-asset linkages, we construct co-occurrence matrices based on GSADF breakpoints. These reveal strong co-explosivity between BTC and other assets, and unexpectedly weak synchronization between ETH and AAVE, underscoring the sectoral idiosyncrasies of DeFi tokens. While the GSADF test remains central to our analysis, we also employ a Markov Switching Model (MSM) as a secondary tool to capture short-term volatility clustering. Together, these methods provide a layered view of long- and short-term market dynamics. This study highlights crypto markets’ structural heterogeneity and proposes scalable computational frameworks for real-time monitoring of explosive behavior.
(This article belongs to the Section Risk)
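The GSADF test used above takes a supremum of ADF statistics over expanding sample windows to flag explosive (bubble-like) episodes. A toy illustration of the window-scanning idea only: it uses a lag-free ADF regression and a single fixed window end, and omits lag selection and the simulated critical values the published test requires. Names and the minimum window length are illustrative.

```python
# Toy sup-ADF scan: for the final observation, take the supremum of a
# lag-free ADF t-statistic over all backward-expanding windows above a
# minimum length. Explosive series (rho > 0) drive the statistic sharply
# positive; mean-reverting series keep it negative.

def adf_tstat(y):
    """t-statistic of rho in: diff(y_t) = alpha + rho * y_{t-1} + e_t (no lags)."""
    x = y[:-1]
    d = [b - a for a, b in zip(y, y[1:])]
    n = len(x)
    mx, md = sum(x) / n, sum(d) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxd = sum((v - mx) * (w - md) for v, w in zip(x, d))
    rho = sxd / sxx
    alpha = md - rho * mx
    resid = [w - alpha - rho * v for v, w in zip(x, d)]
    s2 = sum(e * e for e in resid) / (n - 2)      # residual variance
    return rho / (s2 / sxx) ** 0.5

def sup_adf(y, min_window=20):
    """Supremum of the ADF t-stat over backward-expanding windows ending at T."""
    return max(adf_tstat(y[s:]) for s in range(0, len(y) - min_window + 1))
```

The full GSADF statistic additionally rolls the window end forward and takes the sup over both window ends, then compares against bootstrapped or simulated critical values.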

18 pages, 27645 KB  
Article
Innovative Pedagogies for Industry 4.0: Teaching RFID with Serious Games in a Project-Based Learning Environment
by Pascal Vrignat, Manuel Avila, Florent Duculty, Christophe Bardet, Stéphane Begot and Pascale Marangé
Educ. Sci. 2025, 15(8), 953; https://doi.org/10.3390/educsci15080953 - 24 Jul 2025
Viewed by 728
Abstract
This work was conducted within the framework of French university reforms undertaken since 2022. Regardless of learning level and target audience, project-based learning has proved its effectiveness as a teaching strategy for many years. The novelty of the present contribution lies in the gamification of this learning method. A popular game, Trivial Pursuit, was adapted to enable students to acquire knowledge in a playful manner while preparing for upcoming technical challenges. Various technical subjects were chosen to create new cards for the game. A total of 180 questions and their answers were created. The colored tokens were then used to trace manufactured products. This teaching experiment was conducted as part of a project-based learning program with third-year Bachelor students (Electrical Engineering and Industrial Computing Department). The game components associated with the challenge proposed to the students comprised six key elements: objectives, challenges, mechanics, components, rules, and environment. Within the framework of the Industry 4.0 concept, this pedagogical activity focused on the knowledge, understanding, development, and application of an RFID (Radio Frequency Identification) system demonstrating the capabilities of this technology. This contribution outlines the various stages of the work assigned to the students. An industrial partner was also involved in this work.

24 pages, 3937 KB  
Article
HyperTransXNet: Learning Both Global and Local Dynamics with a Dual Dynamic Token Mixer for Hyperspectral Image Classification
by Xin Dai, Zexi Li, Lin Li, Shuihua Xue, Xiaohui Huang and Xiaofei Yang
Remote Sens. 2025, 17(14), 2361; https://doi.org/10.3390/rs17142361 - 9 Jul 2025
Cited by 1 | Viewed by 729
Abstract
Recent advances in hyperspectral image (HSI) classification have demonstrated the effectiveness of hybrid architectures that integrate convolutional neural networks (CNNs) and Transformers, leveraging CNNs for local feature extraction and Transformers for global dependency modeling. However, existing fusion approaches face three critical challenges: (1) insufficient synergy between spectral and spatial feature learning due to rigid coupling mechanisms; (2) high computational complexity resulting from redundant attention calculations; and (3) limited adaptability to spectral redundancy and noise in small-sample scenarios. To address these limitations, we propose HyperTransXNet, a novel CNN-Transformer hybrid architecture that incorporates adaptive spectral-spatial fusion. Specifically, the proposed HyperTransXNet comprises three key modules: (1) a Hybrid Spatial-Spectral Module (HSSM) that captures the refined local spectral-spatial features and models global spectral correlations by combining depth-wise dynamic convolution with frequency-domain attention; (2) a Mixture-of-Experts Routing (MoE-R) module that adaptively fuses multi-scale features by dynamically selecting optimal experts via Top-K sparse weights; and (3) a Spatial-Spectral Tokens Enhancer (SSTE) module that ensures causality-preserving interactions between spectral bands and spatial contexts. Extensive experiments on the Indian Pines, Houston 2013, and WHU-Hi-LongKou datasets demonstrate the superiority of HyperTransXNet.
(This article belongs to the Special Issue AI-Driven Hyperspectral Remote Sensing of Atmosphere and Land)

27 pages, 19258 KB  
Article
A Lightweight Multi-Frequency Feature Fusion Network with Efficient Attention for Breast Tumor Classification in Pathology Images
by Hailong Chen, Qingqing Song and Guantong Chen
Information 2025, 16(7), 579; https://doi.org/10.3390/info16070579 - 6 Jul 2025
Viewed by 727
Abstract
The intricate and complex tumor cell morphology in breast pathology images is a key factor for tumor classification. This paper proposes a lightweight breast tumor classification model with multi-frequency feature fusion (LMFM) to tackle the problem of inadequate feature extraction and poor classification performance. The LMFM utilizes wavelet transform (WT) for multi-frequency feature fusion, integrating high-frequency (HF) tumor details with high-level semantic features to enhance feature representation. The network’s ability to extract irregular tumor characteristics is further reinforced by dynamic adaptive deformable convolution (DADC). The introduction of the token-based Region Focus Module (TRFM) reduces interference from irrelevant background information. At the same time, the incorporation of a linear attention (LA) mechanism lowers the model’s computational complexity and further enhances its global feature extraction capability. The experimental results demonstrate that the proposed model achieves classification accuracies of 98.23% and 97.81% on the BreaKHis and BACH datasets, with only 9.66 M parameters.
(This article belongs to the Section Biomedical Information and Health)

36 pages, 6995 KB  
Article
HETMCL: High-Frequency Enhancement Transformer and Multi-Layer Context Learning Network for Remote Sensing Scene Classification
by Haiyan Xu, Yanni Song, Gang Xu, Ke Wu and Jianguang Wen
Sensors 2025, 25(12), 3769; https://doi.org/10.3390/s25123769 - 17 Jun 2025
Viewed by 707
Abstract
Remote Sensing Scene Classification (RSSC) is an important and challenging research topic. Transformer-based methods have shown encouraging performance in capturing global dependencies. However, recent studies have revealed that Transformers perform poorly in capturing high frequencies that mainly convey local information. To solve this problem, we propose a novel method based on High-Frequency Enhanced Vision Transformer and Multi-Layer Context Learning (HETMCL), which can effectively learn the comprehensive features of high-frequency and low-frequency information in visual data. First, Convolutional Neural Networks (CNNs) extract low-level spatial structures, and the Adjacent Layer Feature Fusion Module (AFFM) reduces semantic gaps between layers to enhance spatial context. Second, the High-Frequency Information Enhancement Vision Transformer (HFIE) includes a High-to-Low-Frequency Token Mixer (HLFTM), which captures high-frequency details. Finally, the Multi-Layer Context Alignment Attention (MCAA) integrates multi-layer features and contextual relationships. On UCM, AID, and NWPU datasets, HETMCL achieves state-of-the-art OA of 99.76%, 97.32%, and 95.02%, respectively, outperforming existing methods by up to 0.38%.
(This article belongs to the Special Issue Smart Image Recognition and Detection Sensors)

29 pages, 5553 KB  
Article
Data-Driven Multi-Scale Channel-Aligned Transformer for Low-Carbon Autonomous Vessel Operations: Enhancing CO2 Emission Prediction and Green Autonomous Shipping Efficiency
by Jiahao Ni, Hongjun Tian, Kaijie Zhang, Yihong Xue and Yang Xiong
J. Mar. Sci. Eng. 2025, 13(6), 1143; https://doi.org/10.3390/jmse13061143 - 9 Jun 2025
Viewed by 853
Abstract
The accurate prediction of autonomous vessel CO₂ emissions is critical for achieving IMO 2050 carbon neutrality and optimizing low-carbon maritime operations. Traditional models face limitations in real-time multi-source data analysis and dynamic cross-variable dependency modeling, hindering data-driven decision-making for sustainable autonomous shipping. This study proposes a Multi-scale Channel-aligned Transformer (MCAT) model, integrated with a 5G–satellite–IoT communication architecture, to address these challenges. The MCAT model employs multi-scale token reconstruction and a dual-level attention mechanism, effectively capturing spatiotemporal dependencies in heterogeneous data streams (AIS, sensors, weather) while suppressing high-frequency noise. To enable seamless data collaboration, a hybrid transmission framework combining satellite (Inmarsat/Iridium), 5G URLLC slicing, and industrial Ethernet is designed, achieving ultra-low latency (10 ms) and nanosecond-level synchronization via IEEE 1588v2. Validated on a 22-dimensional real autonomous vessel dataset, MCAT reduces prediction errors by 12.5% MAE and 24% MSE compared to state-of-the-art methods, demonstrating superior robustness under noisy scenarios. Furthermore, the proposed architecture supports smart autonomous shipping solutions by providing demonstrably interpretable emission insights through its dual-level attention mechanism (visualized via attention maps) for route optimization, fuel efficiency enhancement, and compliance with CII regulations. This research bridges AI-driven predictive analytics with green autonomous shipping technologies, offering a scalable framework for digitalized and sustainable maritime operations.
(This article belongs to the Special Issue Sustainable Maritime Transport and Port Intelligence)

19 pages, 2861 KB  
Article
The Classical Model of Type-Token Systems Compared with Items from the Standardized Project Gutenberg Corpus
by Martin Tunnicliffe and Gordon Hunter
Analytics 2025, 4(2), 16; https://doi.org/10.3390/analytics4020016 - 5 Jun 2025
Viewed by 922
Abstract
We compare the “classical” equations of type-token systems, namely Zipf’s laws, Heaps’ law and the relationships between their indices, with data selected from the Standardized Project Gutenberg Corpus (SPGC). Selected items all exceed 100,000 word-tokens and are trimmed to 100,000 word-tokens each. With the most egregious anomalies removed, a dataset of 8432 items is examined in terms of the relationships between the Zipf and Heaps’ indices computed using the Maximum Likelihood algorithm. Zipf’s second (size) law indices suggest that the types vs. frequency distribution is log–log convex, with the high and low frequency indices showing weak but significant negative correlation. Under certain circumstances, the classical equations work tolerably well, though the level of agreement depends heavily on the type of literature and the language (Finnish being notably anomalous). The frequency vs. rank characteristics exhibit log–log linearity in the “middle range” (ranks 100–1000), as characterised by the Kolmogorov–Smirnov significance. For most items, the Heaps’ index correlates strongly with the low frequency Zipf index in a manner consistent with classical theory, while the high frequency indices are largely uncorrelated. This is consistent with a simple simulation.
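Heaps' law, one of the "classical" equations compared above, says the number of distinct types V grows as K·n^β in the number of tokens n. The paper estimates its indices by maximum likelihood; the simpler log-log least-squares fit below sketches the same quantity for illustration, with names that are not from the paper.

```python
# Estimate the Heaps' index beta by tracking the vocabulary-growth curve
# V(n) of a token stream and fitting log V = log K + beta * log n by
# ordinary least squares.
import math

def heaps_index(tokens):
    """Estimate beta in V(n) = K * n^beta from a token sequence."""
    seen, xs, ys = set(), [], []
    for n, tok in enumerate(tokens, start=1):
        seen.add(tok)
        xs.append(math.log(n))          # log token count so far
        ys.append(math.log(len(seen)))  # log type count so far
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den                    # least-squares slope = beta
```

Sanity checks match the law's limits: an all-distinct stream gives β ≈ 1, a single repeated token gives β = 0, and a bounded vocabulary lands in between.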

18 pages, 4494 KB  
Article
MDFN: Enhancing Power Grid Image Quality Assessment via Multi-Dimension Distortion Feature
by Zhenyu Chen, Jianguang Du, Jiwei Li and Hongwei Lv
Sensors 2025, 25(11), 3414; https://doi.org/10.3390/s25113414 - 29 May 2025
Cited by 1 | Viewed by 708
Abstract
Low-quality power grid image data can greatly affect the effect of deep learning in the power industry. Therefore, adopting accurate image quality assessment techniques is essential for screening high-quality power grid images. Although current blind image quality assessment (BIQA) methods have made some progress, they usually use only one type of feature and ignore other factors that affect the quality of images, such as noise and brightness, which are highly relevant to low-quality power grid images with noise, underexposure, and overexposure. Therefore, we propose a multi-dimension distortion feature network (MDFN) based on CNN and Transformer, which considers high-frequency (edges and details) and low-frequency (semantic and structural) features of images, along with noise and brightness features, to achieve more accurate quality assessment. Specifically, the network employs a dual-branch feature extractor, where the CNN branch captures local distortion features and the Transformer branch integrates both local and global features. We argue that separating low-frequency and high-frequency components enables richer distortion features. Thus, we propose a frequency selection module (FSM) which extracts high-frequency and low-frequency features and updates these features to achieve global spatial information fusion. Additionally, previous methods only use the CLS token for predicting the quality score of the image. Considering the issues of severe noise and exposure in power grid images, we design an effective way to extract noise and brightness features and combine them with the CLS token for the prediction. The results of the experiments indicate that our method surpasses existing approaches across three public datasets and a power grid image dataset, which shows the superiority of our proposed method.
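The frequency selection module (FSM) above rests on separating a signal's low-frequency (structural) and high-frequency (edge/detail) content. A 1-D toy split using a box filter, purely to illustrate the decomposition; the FSM itself operates on 2-D feature maps and adds learned fusion, which is not reproduced here.

```python
# Toy frequency split: the local (box-filter) average is the low-frequency
# component, and the residual is the high-frequency component, so the two
# always sum back to the original signal.

def freq_split(signal, radius=2):
    """Return (low, high) where low is a clamped box-filter average."""
    low = []
    for t in range(len(signal)):
        window = signal[max(t - radius, 0): t + radius + 1]  # clamped window
        low.append(sum(window) / len(window))
    high = [s - l for s, l in zip(signal, low)]
    return low, high
```

The same idea in 2-D (blur for low frequencies, residual for high) is a common starting point for the kind of high/low-frequency feature branches the abstract describes.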

23 pages, 2439 KB  
Article
The Origin of Shared Emergent Properties in Discrete Systems
by Les Hatton and Greg Warr
Entropy 2025, 27(6), 561; https://doi.org/10.3390/e27060561 - 26 May 2025
Viewed by 1048
Abstract
Here, we propose that the shared emergent properties reproducibly observed in discrete systems can be explained by a theory that embeds the Conservation of Hartley–Shannon Information (CoHSI) in a statistical mechanics framework. Specific predictions of global properties that represent the most likely equilibrium state should be apparent in all qualifying systems, regardless of provenance. We demonstrate that these predictions of emergent global properties hold true in systems as disparate as collections of software written in the programming language C and collections of proteins. The implication is that the emergence of such shared properties is not driven by any specific local mechanism as the systems are so different. This raises the interesting prospect that important properties of biological systems (exemplified here by the length and multiplicity distributions of proteins) have little, if anything, to do with natural selection. Similarly, the size distribution of components and the frequency of tokens observed in computer software in C emerge as the most likely states, and are thus properties that are divorced from human agency, regardless of functionality.
(This article belongs to the Section Entropy and Biology)

19 pages, 303 KB  
Article
Representation and Processing of L2 Compositional Multiword Sequences: Effects of Token Frequency, Type Frequency, and Constituency
by Yingying Xu and Yang Yu
Behav. Sci. 2025, 15(6), 734; https://doi.org/10.3390/bs15060734 - 26 May 2025
Viewed by 405
Abstract
The present study investigates the effects of token frequency, type frequency, and constituency on L2 compositional multiword sequence (CMS) processing among 60 Chinese L2 English speakers at two proficiency levels, using an online phrasal decision task. The findings reveal the following: (1) Both proficiency groups exhibited a significant token frequency effect in their L2 phrasal and non-phrasal CMS processing, indicating that both sequence types hold psychological reality in L2 learners’ mental representations. (2) The type frequency effect was observed in the higher-proficiency groups’ processing of phrasal and non-phrasal CMSs with low token frequencies, yet it was more pronounced in the less proficient group’s processing of phrasal and non-phrasal CMSs with high token frequencies, indicating that the effect of type frequency operates on a gradient continuum rather than being strictly categorical. (3) Constituency emerged as a robust predictor of processing efficiency, with phrasal CMSs being processed more efficiently than their non-phrasal counterparts across nearly all frequency conditions and proficiency levels. This consistent advantage for phrasal structures underscores the fundamental role of structural integrity in L2 CMS processing. These findings contribute novel insights into the mechanisms underlying L2 CMS processing, while also offering practical pedagogical implications for enhancing L2 CMS acquisition.
(This article belongs to the Section Cognition)
21 pages, 1514 KB  
Article
Decoding the Dynamic Connectedness Between Traditional and Digital Assets Under Dynamic Economic Conditions
by Sahar Loukil, Aamir Aijaz Syed, Fadhila Hamza and Ahmed Jeribi
J. Theor. Appl. Electron. Commer. Res. 2025, 20(2), 97; https://doi.org/10.3390/jtaer20020097 - 9 May 2025
Cited by 5 | Viewed by 1229
Abstract
This study examines the dynamic interconnectedness between digital and traditional assets, with an emphasis on fiat currencies (such as JPY/USD and CHF/USD), cryptocurrencies (such as Bitcoin), and digital assets backed by gold (such as Tether Gold and Digix Gold Token) under various economic conditions. The study uses sophisticated techniques, including dynamic connectedness, quantile connectedness, and time-frequency connectedness analyses, to test non-linear and asymmetric interactions between various asset classes. The findings reveal that while cryptocurrencies, especially Bitcoin, frequently serve as net recipients of shocks during times of economic instability, gold and gold-backed assets are the primary shock transmitters. These findings highlight the increasing importance that digital assets play amid economic and geopolitical crises as well as their growing incorporation into the larger financial ecosystem. The study contributes to the literature on asset interconnection and provides implications for systemic risk management and financial stability; specifically, it offers insightful information for hedging and portfolio diversification techniques.
(This article belongs to the Special Issue Blockchain Business Applications and the Metaverse)

17 pages, 4969 KB  
Article
Temporal Decay Loss for Adaptive Log Anomaly Detection in Cloud Environments
by Lelisa Adeba Jilcha, Deuk-Hun Kim and Jin Kwak
Sensors 2025, 25(9), 2649; https://doi.org/10.3390/s25092649 - 22 Apr 2025
Cited by 1 | Viewed by 1238
Abstract
Log anomaly detection in cloud computing environments is essential for maintaining system reliability and security. While sequence modeling architectures such as LSTMs and Transformers have been widely employed to capture temporal dependencies in log messages, their effectiveness deteriorates in zero-shot transfer scenarios due to distributional shifts in log structures, terminology, and event frequencies, as well as minimal token overlap across datasets. To address these challenges, we propose an effective detection approach integrating a domain-specific pre-trained language model (PLM) fine-tuned on cybersecurity-adjacent data with a novel loss function, Loss with Decaying Factor (LDF). LDF introduces an exponential time decay mechanism into the training objective, ensuring a dynamic balance between historical context and real-time relevance. Unlike traditional sequence models that often overemphasize outdated information and impose high computational overhead, LDF constrains the training process by dynamically weighing log messages based on their temporal proximity, thereby aligning with the rapidly evolving nature of cloud computing environments. Additionally, the domain-specific PLM mitigates semantic discrepancies by improving the representation of log data across heterogeneous datasets. Extensive empirical evaluations on two supercomputing log datasets demonstrate that this approach substantially enhances cross-dataset anomaly detection performance. The main contributions of this study include: (1) the introduction of a Loss with Decaying Factor (LDF) to dynamically balance historical context with real-time relevance; and (2) the integration of a domain-specific PLM for enhancing generalization in zero-shot log anomaly detection across heterogeneous cloud environments.
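The Loss with Decaying Factor (LDF) described above weights training examples by temporal proximity via an exponential decay. A minimal sketch of that weighting idea; the per-sample losses, decay rate, and names are placeholders rather than the paper's exact objective.

```python
# Exponential time-decay weighting of a training loss: each sample's loss
# is scaled by exp(-lam * age), so recent log messages dominate the
# objective while older ones fade smoothly rather than being cut off.
import math

def decayed_loss(losses, ages, lam=0.1):
    """Weighted average of per-sample losses, weights = exp(-lam * age)."""
    weights = [math.exp(-lam * a) for a in ages]
    return sum(w * l for w, l in zip(weights, losses)) / sum(weights)
```

With all ages equal this reduces to the plain mean; as `lam` grows, the objective concentrates on the newest samples, which is the "real-time relevance" end of the trade-off the abstract describes.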
