Article

EffiShapeFormer: Shapelet-Based Sensor Time Series Classification with Dual Filtering and Convolutional-Inverted Attention

1 School of Mechanical Engineering, Xinjiang University, Urumqi 830017, China
2 United Laboratories of TT&C and Communication, Korla 841001, China
3 School of Big Data & Software Engineering, Chongqing University, Chongqing 400030, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2026, 26(1), 307; https://doi.org/10.3390/s26010307
Submission received: 8 November 2025 / Revised: 24 December 2025 / Accepted: 1 January 2026 / Published: 3 January 2026
(This article belongs to the Section Intelligent Sensors)

Abstract

In the field of sensors, time series classification holds significant importance for applications such as industrial monitoring, mechanical fault diagnosis, and action recognition. However, while existing models demonstrate excellent classification accuracy, they generally suffer from insufficient interpretability. Shapelet-based methods offer interpretability advantages, yet existing models like ShapeFormer suffer from high computational resource consumption and low training efficiency during the shapelet discovery and training phases, limiting their applicability in complex sensor time series classification tasks. To address this, we propose Efficiency ShapeFormer (EffiShapeFormer), an efficient time series classification framework built on the recent shapelet-based model ShapeFormer. During the Shapelet Discovery phase, EffiShapeFormer introduces a dual-filtering mechanism. The Coarse Screening module efficiently identifies discriminative shapelets, while the Class-specific Representation module models these features to extract class-specific characteristics. Subsequently, in the Generic Representation stage, the proposed Convolution-Inverted Attention (CIA) module achieves synergistic integration of local feature extraction and global dependency modeling to capture cross-category generic features. Finally, the model fuses class-specific and generic features to achieve efficient and accurate time series classification. Experimental results on 22 sensor time series datasets demonstrate that EffiShapeFormer achieves higher average accuracy and F1-scores than baseline models, validating the proposed method’s significant advantages in both efficiency and performance.

1. Introduction

Time series data, as a fundamental and ubiquitous data form, underpins a wide range of domains and is inherently characterized by its sequential order and temporal dependency. Among various data mining tasks, Time Series Classification (TSC) has gradually become a prominent research focus. In particular, time series data generated from sensors are widely utilized in industrial monitoring [1], mechanical fault diagnosis [2], environmental sensing [3], and action recognition [4]. However, sensor signals often exhibit complex characteristics such as pronounced long-range dependencies, non-stationarity, multi-scale structures, and heavy noise contamination. These challenges arise from the intricate intrinsic physical mechanisms underlying signal generation, substantial external environmental disturbances, and the coupling effects among multiple sensors [5]. Such characteristics not only hinder traditional time series processing methods from effectively capturing discriminative representations, but also pose severe challenges to the representation capacity and generalization performance of current deep learning models for sensor-based time series. Moreover, under resource-constrained computational scenarios, the demands for both model efficiency and interpretability become increasingly critical. Consequently, numerous approaches have been proposed in recent years to address the TSC problem more effectively. In 2017, Vaswani et al. introduced the groundbreaking Transformer [6] architecture—an entirely self-attention-based sequence modeling framework. Its success is largely attributed to high parallelism and strong global feature modeling capability. This ability to model global dependencies has also garnered widespread attention in the field of time series analysis.
However, despite the Transformer architecture’s excellence in capturing long-range dependencies and global semantic information, its direct application to time series tasks still encounters several challenges. The computational complexity of the standard self-attention mechanism grows quadratically with the sequence length, making it inefficient for modeling long time series. Moreover, inherent properties such as trend, periodicity, and multi-scale structures limit the effectiveness of purely global attention mechanisms. To overcome these limitations, numerous Transformer-based variants have been proposed to enhance temporal modeling efficiency and performance, including Informer [7], Reformer [8], Autoformer [9], Crossformer [10], and LightTS [11]. These models introduce mechanisms such as sparse attention, sequence decomposition, and architectural optimization, achieving remarkable improvements in long-sequence prediction and modeling tasks. Nevertheless, despite their advances in temporal dependency modeling, these methods remain limited in time series classification. Although they exhibit strong global modeling capabilities, their attention weights fail to directly indicate the discriminative importance of input features, resulting in limited interpretability. Consequently, it becomes difficult to uncover the decision-making rationale of such models in sensor signal analysis. To address these issues, researchers have begun exploring alternative frameworks that balance interpretability and classification performance for complex sensor-based time series data.
To balance model performance and interpretability, researchers have recently explored hybrid architectures that integrate traditional temporal feature extraction techniques with deep representation learning [12]. Compared to deep attention-based models, the Shapelet approach has attracted considerable attention due to its strong interpretability and outstanding discriminative capability [13,14]. Shapelets are short, highly class-discriminative subsequences that capture locally discriminative morphological variations and have been shown to play a pivotal role in time series classification [15,16].
Nevertheless, traditional Shapelet discovery methods typically depend on exhaustive searches and repeated distance computations across a vast number of candidate subsequences, leading to extremely high computational complexity and significant resource consumption [17]. This greatly restricts their applicability to high-dimensional and large-scale sensor datasets. To improve efficiency, ShapeFormer enhances the Offline Shapelet Discovery (OSD) [18] process, reducing the time required for shapelet extraction to a certain extent, while achieving remarkable temporal modeling performance through the integration of a Transformer architecture. However, when applied to sensor time series data characterized by strong noise, multi-scale structures, and non-stationary dynamics, ShapeFormer still encounters substantial computational overhead and distance metric costs during candidate evaluation. Furthermore, its standard Transformer encoder retains quadratic complexity with respect to sequence length, resulting in heavy training burdens and limited scalability for edge-device deployment. These limitations collectively define the core motivation of this study—developing a more efficient, interpretable, and resource-friendly model tailored for complex sensor time series classification tasks.
From a modeling perspective, shapelets and Transformer architectures address complementary aspects of sensor time series classification. Shapelets focus on capturing explicit and interpretable local discriminative patterns, whereas Transformer-based models are well suited for modeling long-range temporal dependencies and complex inter-variable interactions. This complementarity provides a natural motivation for integrating shapelet representations within a Transformer-based framework.
To address the aforementioned challenges, we propose EffiShapeFormer, a novel framework designed to substantially enhance both efficiency and classification accuracy in sensor time-series classification tasks. This framework introduces two key innovations:
(1)
Dual-layer Filtering Mechanism (DFM): We propose a two-stage screening strategy, implemented as an algorithmic filtering module within the Shapelet discovery phase. In the first stage, a coarse-grained rapid screening based on Euclidean distance is performed to eliminate candidate subsequences with limited discriminative potential. In the second stage, a refined evaluation combines Perceptual Subsequence Distance (PSD) [18,19] and information gain metrics to further select highly discriminative shapelets. This dual-layer design effectively reduces computational costs while maintaining strong discriminative capability. The filtered shapelets are then modeled in a Class-specific Representation module that employs a Transformer to capture category-specific characteristics.
(2)
Convolution-Inverted Attention (CIA) Module: We design a novel CIA module that integrates convolutional operations into the Transformer’s self-attention mechanism [6] to enhance local temporal pattern extraction. By inverting the attention dimension from the temporal axis to the variable axis, the module achieves bidirectional modeling of temporal and variable dependencies. This design not only reduces computational complexity but also strengthens the model’s ability to capture multi-variable interactions effectively.
The main contributions of this paper can be summarized in the following three aspects:
  • We propose a Dual-layer Filtering Mechanism that significantly reduces redundant computations in the Shapelet discovery process, enhancing efficiency;
  • We design a learnable neural network module, termed Convolutional-Inverted Attention (CIA), which is integrated into the proposed model to efficiently fuse temporal and variable dependencies, thereby improving scalability and classification accuracy;
  • We validate the model’s effectiveness on multiple sensor time series datasets. Experimental results demonstrate that our approach outperforms ShapeFormer in classification accuracy, computational efficiency, and interpretability, providing a scalable and practical solution for real-world time series analysis tasks and enabling faster and more interpretable feature extraction.
The remainder of this paper is organized as follows. Section 2 reviews time series classification models and related research based on Shapelet learning; Section 3 introduces the fundamental concepts and theoretical methods of this study; Section 4 details the overall architecture and core module design of the proposed method; Section 5 presents the experimental setup and analyzes the results; Section 6 summarizes the paper and outlines the directions of our future research.

2. Related Work

In this section, we summarize recent advancements in time series classification tasks, highlighting the strengths and limitations of existing methods to lay the groundwork for future research.

2.1. Time Series Classification Model

TSC aims to identify the category to which an input sequence belongs based on its dynamic change patterns. With the advancement of deep learning, numerous neural network models have been applied to classification tasks. Early research primarily employed structures such as Convolutional Neural Networks (CNNs) [20] and Recurrent Neural Networks (RNNs) [21]. A pioneering approach like the Multi-Channel Deep Convolutional Neural Network (MCDCNN) [22] applied CNNs to TSC. To better capture dependencies in long sequences, the Transformer architecture was introduced to time series analysis [5]. By enabling global feature interactions through its Self-Attention Mechanism, the Transformer no longer relies on strict sequence position modeling, significantly enhancing its representational power for lengthy sequences. Subsequently, numerous improved models emerged rapidly, including Informer [7], Autoformer [9], and Reformer [8] for long sequence modeling, as well as Crossformer [10] and LightTS [11] for multivariate time series. These models demonstrate superior performance in both prediction and classification tasks, further validating the potential of the Transformer architecture in time series modeling.
Beyond attention-driven models, a number of lightweight and efficient time series modeling approaches have emerged in recent years. The DLinear [23] model proposed by Zeng et al. decomposes time series into trend and seasonal components, modeling each separately through linear layers. This significantly reduces model complexity while maintaining strong predictive performance and interpretability. Wu et al.’s TimesNet [24] model transforms time series into two-dimensional images, using multi-period convolutional modules to extract pattern information across different time scales. This approach demonstrates enhanced capabilities in modeling periodicity and multi-scale structures. These methods offer novel perspectives for time series classification and, to some extent, break through the limitations of traditional deep learning models.
Despite demonstrating outstanding performance in time series classification, deep models still face significant bottlenecks due to high computational complexity and insufficient interpretability. Enhancing model transparency and efficiency has become a key research direction.

2.2. Shapelet-Based Time Series Methods

In recent years, research on the interpretability of time series has garnered increasing attention, with its core objective being to reveal the decision-making basis of deep models when processing temporal data [25,26,27]. Against this background, researchers have proposed various time series classification methods based on shapelets. Related studies have gradually reached a consensus: shapelets not only offer strong interpretability but also serve as a key factor in enhancing time series classification performance [13,14,15,28,29]. Early shapelet discovery methods typically enumerated all possible sub-sequences within a sequence, selecting the one with maximum information gain as a shapelet candidate [13]. This exhaustive strategy incurs extremely high computational costs. Subsequent research attempted to construct shapelets through random generation or the use of common sub-sequences [28,30]. However, such approaches often lack the correlation between positional information and variable levels, resulting in limited discriminative power [30]. Recently, the OSD [18] method and its improved variant, ShapeFormer, have made significant strides in enhancing shapelet quality while reducing computational overhead. However, existing approaches still face challenges such as high computational burden and insufficient global modeling in complex sensor data scenarios. Balancing local interpretability with global dependency modeling in high-dimensional temporal data remains an urgent research challenge.

2.3. Other Interpretable Time Series Classification Methods

In addition to shapelet-based and attention-driven models, several other interpretable time series classification approaches have been explored in the literature. Prototype-based methods construct class-level representations using one or multiple representative time series and perform classification based on similarity matching, providing a straightforward form of interpretability [31,32]. Symbolic methods, such as Symbolic Aggregate approXimation (SAX)-based and Symbolic Fourier Approximation (SFA)-based approaches, discretize time series into symbolic representations and conduct classification in the symbolic domain, where discriminative subsequences can be explicitly identified [33,34,35]. Moreover, Convolutional Neural Network (CNN)-based models have been combined with visualization techniques, such as class activation mapping and gradient-based attribution, to highlight salient temporal regions or variables that contribute to classification decisions [36,37].
Although existing interpretable time series classification methods provide valuable insights from different perspectives, they often exhibit inherent limitations when applied to complex and high-dimensional sensor data. Prototype-based and symbolic approaches typically emphasize global similarity or rely on predefined discretization schemes, which may fail to capture fine-grained local discriminative patterns under non-stationary and noisy conditions. CNN-based interpretability methods usually provide post-hoc explanations, where interpretability is not explicitly embedded into the model structure.
In contrast, shapelets represent explicit and semantically meaningful local subsequence patterns that enable intuitive interpretation at the pattern level, while the Transformer architecture is particularly effective at modeling long-range dependencies and global interactions in multivariate time series. However, existing studies rarely integrate shapelets as explicit representations within a Transformer-based modeling framework. This observation motivates our work to incorporate shapelet representations into a Transformer architecture, aiming to jointly capture interpretable local patterns and global temporal dependencies within a unified and efficient model.

3. Preliminaries

3.1. Single-Channel/Multi-Channel Time Series Classification

We represent a time series sample as $X \in \mathbb{R}^{L \times D}$, where $D$ denotes the number of channels (variables) and $L$ represents the length of the time series. All time series samples are of equal length $L$, which is obtained through standard preprocessing (e.g., sliding-window segmentation) applied to the raw sensor signals. When $D = 1$, it represents a single-channel time series; when $D > 1$, it represents a multi-channel time series. Here, $X = [X^1, \ldots, X^D]$, and each $X^d$ corresponds to the time series of channel $d$. Specifically, $X^d = [x_1^d, \ldots, x_L^d]$, where $x_t^d$ signifies the value of channel $d$ at time step $t$ within $X$. For a time series training dataset containing $N$ samples, we define it as $\mathcal{C} = \{(X^{(n)}, y^{(n)})\}_{n=1}^{N}$, where $X^{(n)} \in \mathbb{R}^{L \times D}$ represents the $n$th time series sample, $y^{(n)} \in \mathcal{Y}$ denotes its corresponding category label, and $\mathcal{Y}$ is the set of all labels. The Time Series Classification (TSC) task involves training a classifier $f_\theta: \mathbb{R}^{L \times D} \to \mathcal{Y}$ to predict the category of time series samples with unknown labels.

3.2. Shapelet

Given a time series sample $X \in \mathbb{R}^{L \times D}$, a shapelet $S_i$ is defined as a consecutive subsequence extracted from a single channel:
$$ S_i = X_{p_i^{start}:p_i^{end},\, d_i} = \left[ x_{p_i^{start}}^{d_i}, x_{p_i^{start}+1}^{d_i}, \ldots, x_{p_i^{end}}^{d_i} \right] \in \mathbb{R}^{\ell_i}, $$
where $d_i \in \{1, \ldots, D\}$ is the channel index, $p_i^{start}$ and $p_i^{end}$ are the start and end indices in the source series, and $\ell_i = p_i^{end} - p_i^{start} + 1 \le L$ is the shapelet length.
In addition, we store the meta information $(\ell_i, d_i, p_i^{start}, p_i^{end})$ for indexing and position-related operations, while all distance computations are conducted on the numeric subsequence $S_i$.

3.3. Perceptually Important Points (PIPs)

The PIP method was first proposed in [38]. For a time series $X$, we first construct a list of PIPs and add the first and last indices of $X$ to it (PIPs = [1, L]). Subsequently, we recursively search the sequence for the point with the maximum perpendicular distance (PD) from the line formed by two previously selected PIPs, and the index corresponding to this point is added to the list as a new PIP. This process is repeated until the desired number of PIPs is obtained.
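To make the recursive selection concrete, the following minimal NumPy sketch illustrates one common PIP formulation, in which each remaining point is scored by its perpendicular distance to the chord joining the two neighbouring PIPs already selected; the function names and this particular distance variant are illustrative assumptions rather than the exact implementation used here.

```python
import numpy as np

def perpendicular_distance(x1, y1, x2, y2, xt, yt):
    """Perpendicular distance from point (xt, yt) to the line through (x1, y1) and (x2, y2)."""
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return abs(slope * xt - yt + intercept) / np.sqrt(slope ** 2 + 1.0)

def extract_pips(series, n_pips):
    """Recursively select n_pips perceptually important points from a 1-D series."""
    L = len(series)
    pips = [0, L - 1]  # the first and last indices are always PIPs
    while len(pips) < min(n_pips, L):
        best_idx, best_dist = None, -1.0
        # scan every gap between adjacent PIPs for the point farthest from the connecting chord
        for left, right in zip(pips, pips[1:]):
            for t in range(left + 1, right):
                d = perpendicular_distance(left, series[left], right, series[right], t, series[t])
                if d > best_dist:
                    best_idx, best_dist = t, d
        if best_idx is None:  # no interior points remain
            break
        pips.append(best_idx)
        pips.sort()  # keep indices ordered so adjacent pairs define the chords
    return pips

# Example: following the setting used later in this paper, n_pips would be 0.2 * L.
# pips = extract_pips(np.sin(np.linspace(0, 6, 100)), n_pips=20)
```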

3.4. Euclidean Distance (ED)

Given a shapelet $S_i \in \mathbb{R}^{\ell_i}$ and an input time series $X$, we define all consecutive subsequences of length $\ell_i$ on the corresponding channel $d_i$ as:
$$ X_i^b = X_{b:b+\ell_i-1,\, d_i} = \left[ x_b^{d_i}, x_{b+1}^{d_i}, \ldots, x_{b+\ell_i-1}^{d_i} \right], \quad b = 1, 2, \ldots, L-\ell_i+1. $$
The Euclidean distance between the shapelet $S_i$ and the time series $X$ is defined as the minimum distance over all sliding windows:
$$ ED(S_i, X) = \min_{1 \le b \le L-\ell_i+1} \left\| S_i - X_i^b \right\|_2 = \min_{1 \le b \le L-\ell_i+1} \sqrt{ \sum_{t=0}^{\ell_i-1} \left( s_{i,t+1} - x_{b+t}^{d_i} \right)^2 }, $$
where $b$ denotes the sliding-window start index in $X$, $t$ indexes the position within a window, and $x_{b+t}^{d_i}$ denotes the value on channel $d_i$ at time $b+t$.
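As a direct counterpart to Equation (3), the short NumPy sketch below computes the minimum Euclidean distance between a shapelet and every same-length window of the corresponding channel; the names are illustrative.

```python
import numpy as np

def euclidean_distance(shapelet, channel_series):
    """Minimum Euclidean distance between a shapelet and all same-length
    sliding windows of a single-channel series (Equation (3))."""
    l = len(shapelet)
    return min(
        np.linalg.norm(shapelet - channel_series[b:b + l])  # ||S_i - X_i^b||_2
        for b in range(len(channel_series) - l + 1)
    )
```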

3.5. Multi-Head Attention Mechanism (MHA) [6]

Given an input time series embedding $X_{emb} \in \mathbb{R}^{L \times d_{emb}}$, where $X_{emb}$ represents the input embedding matrix of the time series, linear mappings produce the Query, Key, and Value matrices $Q = X_{emb} W^Q$, $K = X_{emb} W^K$, $V = X_{emb} W^V$, where $W^Q, W^K, W^V \in \mathbb{R}^{d_{emb} \times d_{emb}}$ are learnable projection matrices. Each head computes attention outputs over blocks of the feature dimensions, defined as:
$$ \text{head}_h = \text{Softmax}\left( \frac{Q_h K_h^{\top}}{\sqrt{d_k}} \right) V_h. $$
Here, $Q_h, K_h, V_h \in \mathbb{R}^{L \times d_k}$ and $\text{head}_h$ represents the output matrix of the $h$-th attention head. The factor $\sqrt{d_k}$ scales the scores to stabilize gradients, with $d_k = d_{emb}/H$, where $H$ is the total number of attention heads, and $\text{Softmax}(\cdot)$ is the standard softmax function used to compute attention weights. The above formula can be further expanded as:
$$ \text{head}_h = \text{Softmax}\left( \frac{X_{emb} W^{Q_h} \left( X_{emb} W^{K_h} \right)^{\top}}{\sqrt{d_k}} \right) X_{emb} W^{V_h}. $$
After concatenating the results from all attention heads, the final output is obtained through a linear mapping:
$$ \text{MHA}(X_{emb}) = \text{concat}(\text{head}_1, \ldots, \text{head}_H) W^O, $$
where $W^O \in \mathbb{R}^{d_{emb} \times d_{emb}}$ is the output projection matrix.
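For reference, a compact PyTorch sketch of the multi-head attention computation defined above; the class layout is a simplified illustration (no dropout or masking) rather than the exact implementation.

```python
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    """Standard multi-head self-attention over a sequence of length L with embedding size d_emb."""

    def __init__(self, d_emb: int, n_heads: int):
        super().__init__()
        assert d_emb % n_heads == 0
        self.h, self.d_k = n_heads, d_emb // n_heads
        self.w_q = nn.Linear(d_emb, d_emb, bias=False)
        self.w_k = nn.Linear(d_emb, d_emb, bias=False)
        self.w_v = nn.Linear(d_emb, d_emb, bias=False)
        self.w_o = nn.Linear(d_emb, d_emb, bias=False)  # output projection W^O

    def forward(self, x):  # x: (batch, L, d_emb)
        B, L, _ = x.shape
        # project and split into H heads of size d_k
        q = self.w_q(x).view(B, L, self.h, self.d_k).transpose(1, 2)
        k = self.w_k(x).view(B, L, self.h, self.d_k).transpose(1, 2)
        v = self.w_v(x).view(B, L, self.h, self.d_k).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / self.d_k ** 0.5   # (B, H, L, L), scaled by sqrt(d_k)
        heads = torch.softmax(scores, dim=-1) @ v            # (B, H, L, d_k)
        heads = heads.transpose(1, 2).reshape(B, L, self.h * self.d_k)  # concat heads
        return self.w_o(heads)
```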
In Table 1, we summarize the important notations and descriptions in the paper.

4. Methodology

4.1. Overall Architecture

Figure 1 presents the overall methodological framework proposed in this study. To address the high computational overhead introduced by information gain calculations during Shapelet candidate selection in the original Shapelet discovery module, as well as the low training efficiency of the Transformer’s self-attention mechanism when processing large-scale time series data, we introduce two key structural enhancements to the ShapeFormer model. These improvements are designed to significantly boost both computational efficiency and temporal modeling performance.
During the Shapelet discovery phase, we design a coarse screening strategy based on Euclidean distance to rapidly eliminate candidate subsequences with weak discriminative capability prior to detailed evaluation. This strategy significantly reduces the frequency of distance computations and information gain evaluations, thereby substantially decreasing the time cost of Shapelet mining.
In the general representation learning module, we propose a novel Convolution-Inverted Attention (CIA) neural network module. This design replaces the original two-layer convolutional structure with a single-layer convolutional architecture, thereby enhancing computational efficiency while retaining strong local feature extraction capability. Moreover, by introducing an inverse attention mechanism that shifts the computation dimension of self-attention from the temporal axis to the variable axis, the model can effectively capture inter-variable dependencies. This approach substantially reduces training time while preserving the model’s discriminative performance. The following sections will detail the specific modules of our method.

4.2. Coarse Screening in Shapelet Discovery

In the Shapelet Discovery module, we improve the Offline Shapelet Discovery (OSD) method. During the shapelet candidate extraction phase, we employ Perceptually Important Points (PIPs) to extract shapelets from the training set $\mathcal{C} = \{(X^{(n)}, y^{(n)})\}_{n=1}^{N}$ [38]. Specifically, we recursively search the time series $X$ for the new PIP with the maximum perpendicular distance from the line formed by two previously selected PIPs. When a new PIP is added to the PIP set, it is combined with its neighboring PIPs (three consecutive PIPs) to obtain new shapelet candidates. Thus, for a new PIP, up to three shapelets may be added to the shapelet candidate set [19,39]. In this paper, we adopt the same strategy as ShapeFormer [19], setting the number of PIPs to $n_{pip} = 0.2 \times L$, where $L$ is the time series length, and selecting up to $3 \times n_{pip}$ shapelet candidates. Each shapelet simultaneously stores its numerical segment, start and end positions, and associated variable channel information, providing data support for subsequent segment screening. Figure 2 shows an example of identifying the first 5 PIPs from the time series $X$ in the training dataset.
Although the PIP method effectively reduces the number of shapelet candidates, the computational burden remains significant during subsequent screening due to the need for repeated PSD and information gain calculations. To address this issue, this paper proposes a coarse-grained screening mechanism based on Euclidean distance. This approach is grounded in two key considerations: First, Euclidean distance itself is computationally straightforward, making it suitable for rapid preliminary screening of large-scale shapelet candidates. Second, from the perspective of shape similarity, Euclidean distance effectively reflects the discriminative potential of shapelet candidates. Although Euclidean distance is known to be sensitive to noise and scaling shifts, its use in the coarse-grained screening phase is justified by the fact that this stage focuses on rapidly filtering out obviously non-discriminative shapelets from large datasets. Since this phase is preliminary, the impact of noise is minimized as it only serves to reduce the pool of shapelet candidates. Additionally, by using more refined methods, such as information gain, in the subsequent fine-grained screening phase, we ensure that only the most discriminative shapelets are selected. Therefore, the use of Euclidean distance in the coarse screening phase effectively enhances the overall efficiency of the shapelet discovery process without significantly compromising the classification accuracy.
By employing the coarse screening mechanism to eliminate less discriminative candidates before fine-grained screening, we significantly enhance the overall efficiency of the discovery process. For ease of presentation in the coarse-grained screening stage, we introduce $S_{D_j,k}^{T}$ and $S_{D_j,k}^{O}$ to denote shapelet candidates indexed by channel $D_j$ and candidate index $k$ in the target and other classes, respectively. This is only an indexing notation and does not change the shapelet definition in Section 3; each $S_{D_j,k}^{(\cdot)}$ still corresponds to a numeric shapelet subsequence extracted from a single channel, together with its meta information (length and location). Consequently, all distance computations in this section are performed on the same numeric subsequences; the superscripts and subscripts are used solely for bookkeeping and for describing the coarse screening process succinctly. We categorize shapelet candidates extracted from the training set $\mathcal{C} = \{(X^{(n)}, y^{(n)})\}_{n=1}^{N}$ into two classes: $\langle X_i, S_{D_j,k}^{T} \rangle$ represents shapelet candidates on $X_i$ within the target class, while $\langle X_i, S_{D_j,k}^{O} \rangle$ denotes shapelet candidates on $X_i$ within the other classes, as illustrated in Figure 3. For $\langle X_i, S_{D_j,k}^{T} \rangle$, $i = 1, \ldots, n_C^T$ and $k = 1, \ldots, 3n_{pip}$, where $n_C^T$ indicates the number of samples in the target class, $D_j$ denotes the variable (channel) index, and $k$ indexes the shapelet candidates. For $\langle X_i, S_{D_j,k}^{O} \rangle$, $i = 1, \ldots, n_C^O$ and $k = 1, \ldots, 3n_{pip}$, where $n_C^O$ denotes the number of samples in the other classes. Accordingly, $\langle X_i, S_{D_j,k}^{(\cdot)} \rangle$ simply denotes evaluating candidate $S_{D_j,k}^{(\cdot)}$ on sample $X_i$ during screening.
Our coarse screening process is illustrated in Figure 4. For a given target-class candidate shapelet $S_{D,k}^{T}$ and time series samples $X_i$, its minimum Euclidean distances within the target class and across the other classes are defined as follows:
$$ D_{intra}(S_{D,k}^{T}) = \min_{i=1,\ldots,n_C^T} ED(S_{D,k}^{T}, X_i), $$
$$ D_{inter}(S_{D,k}^{T}) = \min_{i=1,\ldots,n_C^O} ED(S_{D,k}^{T}, X_i), $$
where $n_C^T$ and $n_C^O$ are the numbers of samples from the target class and from the other classes, respectively, and $ED(\cdot,\cdot)$ is the minimum Euclidean distance (Equation (3)).
We then calculate the average minimum distance $\bar{D}_{inter}(S_{D,k}^{T})$ of this shapelet across the other categories and define a discriminative metric $\delta(S_{D,k}^{T})$, based on the distance differences between samples of different categories, for filtering:
$$ \delta(S_{D,k}^{T}) = \frac{\bar{D}_{inter}(S_{D,k}^{T}) - D_{intra}(S_{D,k}^{T})}{\bar{D}_{inter}(S_{D,k}^{T})}. $$
A larger $\delta(\cdot)$ indicates that the candidate is more discriminative for separating the target class from the other classes. We rank candidates by $\delta(\cdot)$ in descending order and discard the bottom-ranked fraction $\beta$ of candidates, where $\beta$ is an experimental hyperparameter for which we conducted sensitivity experiments in Section 5.2.2.
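The sketch below illustrates, under illustrative container assumptions, how this coarse screening could be realized: each target-class candidate is scored with the δ metric defined above, using per-class minimum Euclidean distances, and the lowest-ranked fraction β of candidates is discarded. The helper functions, argument names, and the small epsilon guard are assumptions rather than the exact implementation.

```python
import numpy as np

def euclidean_distance(shapelet, channel_series):
    """Minimum sliding-window Euclidean distance (Equation (3))."""
    l = len(shapelet)
    return min(np.linalg.norm(shapelet - channel_series[b:b + l])
               for b in range(len(channel_series) - l + 1))

def coarse_screen(candidates, target_samples, other_samples_by_class, beta):
    """Rank target-class shapelet candidates by the delta metric and drop the bottom beta fraction.

    candidates: list of (subsequence, channel) pairs extracted via PIPs.
    target_samples: list of arrays of shape (L, D) from the target class.
    other_samples_by_class: dict mapping each non-target label to its list of samples.
    beta: fraction of lowest-ranked candidates to discard (e.g., 0.2).
    """
    scored = []
    for shapelet, channel in candidates:
        # intra-class distance: minimum ED over target-class samples
        d_intra = min(euclidean_distance(shapelet, x[:, channel]) for x in target_samples)
        # inter-class distance: per-class minimum ED, averaged over the other classes
        per_class_min = [min(euclidean_distance(shapelet, x[:, channel]) for x in samples)
                         for samples in other_samples_by_class.values()]
        d_inter_mean = float(np.mean(per_class_min))
        delta = (d_inter_mean - d_intra) / (d_inter_mean + 1e-12)  # delta metric defined above
        scored.append((delta, shapelet, channel))
    scored.sort(key=lambda item: item[0], reverse=True)  # larger delta = more discriminative
    keep = max(1, int(round(len(scored) * (1.0 - beta))))
    return [(s, c) for _, s, c in scored[:keep]]
```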
After the coarse screening process concludes, the retained shapelet candidates are designated as $\mathcal{S} = \{S_1, \ldots, S_G\}$ and enter the Fine Screening module. By calculating their Perceptual Subsequence Distance (PSD) with all instances $X \in \mathbb{R}^{L \times D}$ in the training data, the optimal information gain is identified to evaluate their discriminative capability. The shapelets with the highest information gain are selected as the final set $\mathcal{S}$ and stored in the shapelet pool.
$$ \text{PSD}(X, S_i) = \min_{b=1,\ldots,L-\ell_i+1} \text{CID}\left( X_{b:b+\ell_i-1,\, d_i},\, S_i \right), $$
here, $b$ denotes the sliding-window start index in $X$ (not the start index of the shapelet in its source series), $d_i$ and $\ell_i$ are the channel index and length of $S_i$, $X_{b:b+\ell_i-1,\, d_i}$ is the length-$\ell_i$ subsequence on channel $d_i$ starting at $b$, and $\text{CID}(\cdot,\cdot)$ signifies the complexity-invariant distance.
By introducing a correction factor related to the intrinsic pattern complexity of the sequence, this metric effectively enhances the robustness of traditional Euclidean distance in measuring morphological similarity. It has been demonstrated to improve the discriminative capability of shapelets in time series classification tasks [40].
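As a point of reference, the fine-screening distance can be sketched as follows: CID scales the Euclidean distance by a complexity-correction factor derived from the "stretched length" of each sequence [40], and the PSD takes the minimum CID over all sliding windows. Function names are illustrative, and this is a sketch rather than the exact implementation.

```python
import numpy as np

def complexity_estimate(seq):
    """Complexity estimate CE: the length of the line obtained by 'stretching' the sequence."""
    return np.sqrt(np.sum(np.diff(seq) ** 2))

def cid(a, b):
    """Complexity-invariant distance: Euclidean distance scaled by a complexity correction factor."""
    ed = np.sqrt(np.sum((a - b) ** 2))
    ce_a, ce_b = complexity_estimate(a), complexity_estimate(b)
    return ed * max(ce_a, ce_b) / (min(ce_a, ce_b) + 1e-12)

def psd(channel_series, shapelet):
    """Perceptual subsequence distance: minimum CID over all sliding windows of the channel."""
    l = len(shapelet)
    return min(cid(channel_series[b:b + l], shapelet)
               for b in range(len(channel_series) - l + 1))
```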

4.3. Class-Specific Representation

To deeply mine discriminative features highly correlated with categories within time series, we introduced a class-specific representation module into our model. Based on the self-attention mechanism of Transformers, this module constructs high-level feature representations by modeling the differential relationships between shapelets and input sequences.
Each $S_i$ in the final shapelet set $\mathcal{S}$ records its length $\ell_i$, channel index $d_i$, and position $(p_i^{start}, p_i^{end})$ within the original sequence. For an input sequence $X$, we compute the distances between $S_i$ and all subsequences of $X$ on channel $d_i$, restricting the search range to a neighborhood centered at $p_i^{start}$ with radius $w$. The subsequence with the smallest distance becomes the best-fit subsequence $I_i$:
$$ J_i = \arg\min_{b \in W(p_i^{start}, w)} \text{CID}\left( X_{b:b+\ell_i-1,\, d_i},\, S_i \right), $$
$$ I_i = X_{J_i:J_i+\ell_i-1,\, d_i}. $$
We linearly project both the shapelet $S_i$ and its best-fit subsequence $I_i$ into the same embedding space, $h_{S_i} = P_S(S_i)$ and $h_{I_i} = P_I(I_i)$, yielding their difference features $F_i = h_{I_i} - h_{S_i}$. Here, $F_i \in \mathbb{R}^{d_{speci}}$, $P_{(\cdot)}$ denotes a linear projection, and $d_{speci}$ represents the embedding size of the difference features. Subsequently, the difference features $F_i$ are combined with position embeddings to capture their sequential order. To better indicate the positional information of the shapelets, both the position indices $p_i^{start}$, $p_i^{end}$ and the channel index $d_i$ of each shapelet are mapped through a learnable linear projection to obtain their embeddings:
$$ \tilde{F}_i = F_i + \text{PE}(p_i^{start}) + \text{PE}(p_i^{end}) + \text{PE}(d_i). $$
Here, $\text{PE}(\cdot)$ is the position embedding function, which maps the start point, end point, and one-hot encoded channel into dense vectors via a learnable linear projection, thereby endowing the model with positional awareness.
All $\tilde{F}_i \in \mathbb{R}^{1 \times d_{speci}}$ are fed into the MHA of the Transformer encoder, where $G$ denotes the number of elements in $\mathcal{S}$. Given the projections $W^Q, W^K, W^V \in \mathbb{R}^{H \times d_{speci} \times (d_{speci}/H)}$, the attention weight from position $i$ to position $j$ is computed, ultimately yielding the output $Z^{speci} = \{Z_1^{speci}, \ldots, Z_G^{speci}\}$, where $Z_i^{speci} \in \mathbb{R}^{d_{speci}}$:
$$ \alpha_{i,j} = \text{Softmax}\left( \frac{(\tilde{F}_i W^Q)(\tilde{F}_j W^K)^{\top}}{\sqrt{d_{speci}}} \right), $$
$$ Z_i^{speci} = \sum_{j=1}^{G} \alpha_{i,j} (\tilde{F}_j W^V). $$
Due to the category-specific nature of these features, attention scores between samples of the same category are significantly higher than those between samples of different categories, thereby enhancing the model’s ability to distinguish between categories. Simultaneously, leveraging the local discriminative properties of shapelets, the difference features can identify representative key subsequences across different time segments and variable dimensions within the time series. This enables the model to more effectively capture temporal dependencies and cross-variable correlations within the sequence.
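A simplified PyTorch sketch of the class-specific difference embedding described above is given below; for brevity it matches the best-fit subsequence with plain squared Euclidean distance rather than CID, and the projection and embedding layers, along with their names, are assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class ShapeletDifferenceEmbedding(nn.Module):
    """Sketch of the class-specific difference feature F_i = h_{I_i} - h_{S_i} for one shapelet.

    The best-fit subsequence is searched in a window of radius w around the shapelet's recorded
    start position; plain squared Euclidean distance is used here for brevity instead of CID.
    """

    def __init__(self, shapelet_len: int, d_speci: int, seq_len: int, n_channels: int):
        super().__init__()
        self.proj_s = nn.Linear(shapelet_len, d_speci)  # P_S: shapelet projection
        self.proj_i = nn.Linear(shapelet_len, d_speci)  # P_I: best-fit subsequence projection
        self.pe_pos = nn.Linear(1, d_speci)             # start/end position embedding
        self.pe_chan = nn.Linear(n_channels, d_speci)   # one-hot channel embedding
        self.seq_len, self.n_channels = seq_len, n_channels

    def forward(self, x, shapelet, channel, p_start, p_end, w):
        # x: (L, D) sample; shapelet: (shapelet_len,) tensor taken from channel `channel`
        l = shapelet.numel()
        lo, hi = max(0, p_start - w), min(self.seq_len - l, p_start + w)
        dists = torch.stack([((x[b:b + l, channel] - shapelet) ** 2).sum()
                             for b in range(lo, hi + 1)])
        b_star = lo + int(torch.argmin(dists))          # J_i: best-fit start index
        best_fit = x[b_star:b_star + l, channel]        # I_i
        f = self.proj_i(best_fit) - self.proj_s(shapelet)   # difference feature F_i
        one_hot = torch.zeros(self.n_channels)
        one_hot[channel] = 1.0
        return (f
                + self.pe_pos(torch.tensor([float(p_start)]))
                + self.pe_pos(torch.tensor([float(p_end)]))
                + self.pe_chan(one_hot))                # F_i with position embeddings, shape (d_speci,)
```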

4.4. Generic Representation

To enhance the effectiveness of modeling multivariate time series features, we propose a novel universal feature extraction module—CIA (Convolution-Inverted Attention)—whose overall structure is illustrated in Figure 1a. The core concept of the CIA module is to achieve synergistic integration between local feature extraction and global variable correlation modeling. Traditional Transformers compute attention over the temporal dimension, which can capture long-term dependencies but incurs high computational overhead and tends to overlook inherent correlations between variables. Conversely, while convolutional operations efficiently extract local temporal patterns, their limited receptive field makes it difficult to model global dependencies.
Inspired by iTransformer [41], this module employs a dimensional conversion approach, treating variables as tokens and time points as features. This shifts the application dimension of Self-Attention from the temporal axis to the variable axis, as illustrated in Figure 5. This design enables the model to explicitly learn correlations between variables while leveraging one-dimensional convolutional layers to efficiently capture local morphological features in the temporal dimension. The CIA module achieves dual modeling of temporal and variable dependencies while maintaining computational efficiency, significantly enhancing the discriminative power and generalization capabilities of the general representation.
Unlike traditional iTransformer, the CIA module incorporates convolutional layers into the self-attention mechanism. The convolution operation allows the CIA module to achieve a stronger local receptive field, improving its ability to capture local temporal patterns. Additionally, the convolutional layers help reduce the computational cost, making the model more efficient when handling long time series. In contrast, iTransformer only inverts the attention dimension to model dependencies between time and variables, without incorporating convolution, limiting its ability to efficiently extract local features. This design is particularly important for sensor time series, which often exhibit strong local fluctuations, short-term transient patterns, and noise-contaminated dynamics. By introducing a convolutional layer before inverted attention, the CIA module explicitly captures local temporal variations that are typically under-modeled by the purely attention-based iTransformer, while preserving its ability to model global inter-variable dependencies.
One-Dimensional Convolution for Local Feature Extraction: For the time series X R L × D , we employ a convolutional module for local feature extraction. This convolutional block consists of a one-dimensional convolutional layer (Conv1D), batch normalization (BatchNorm), and a GELU activation function in sequence. The computational process is as follows:
$$ U = \text{GELU}(\text{BatchNorm}(\text{Conv1D}(X))). $$
The kernel dimensions of the convolution are $\mathbb{R}^{1 \times d_c}$, where $d_c$ is the kernel size of the convolution filter. The resulting universal features are $U \in \mathbb{R}^{L \times d_{gener}}$, where $d_{gener}$ is the feature dimension of the convolved output, which controls the subsequent number of tokens.
Inverse Attention Models Variable Dependencies: The overall structure is shown in Figure 6. After obtaining the features $U$ containing local information, we transpose the dimensions to treat variables as tokens and time points as features: $E = U^{\top} + P \in \mathbb{R}^{d_{gener} \times L}$, where $P \in \mathbb{R}^{d_{gener} \times L}$ is a learnable position encoding. To convert the time series embeddings into variable-token representations, we employ a Multi-Layer Perceptron (MLP) to map each variable’s time series embedding to dimension $d_{var}$, transforming each variable into a token [41,42]: $E = \text{MLP}(E) \in \mathbb{R}^{d_{gener} \times d_{var}}$, where $d_{var}$ denotes the mapping dimension. Consequently, we obtain $d_{gener}$ variable tokens. Subsequently, the feature $E$ is fed into the multi-head attention mechanism to learn correlations. Through the linear projection matrices $W^Q, W^K, W^V \in \mathbb{R}^{d_{var} \times d_{var}}$, the queries, keys, and values $Q = E W^Q$, $K = E W^K$, $V = E W^V \in \mathbb{R}^{d_{gener} \times d_{var}}$ are obtained, and $q_i, k_i \in \mathbb{R}^{d_{var}}$ serve as the query and key of a variable token. For any pair of variable tokens $i, j$, their pre-Softmax score is:
$$ A_{i,j} = \frac{q_i^{\top} k_j}{\sqrt{d_{var}}}. $$
The correlation between variable $i$ and variable $j$ in the projection is measured by $\alpha_{i,j}$, expressed in matrix form as:
$$ A = \frac{Q K^{\top}}{\sqrt{d_{var}}} \in \mathbb{R}^{d_{gener} \times d_{gener}}. $$
Next, the Softmax function yields the weight coefficients $\alpha_{i,:} = \text{Softmax}(A_{i,:}) \in \mathbb{R}^{d_{gener}}$. These weights are then applied to sum all values, resulting in the output $E^{gener} = [E_1^{gener}, \ldots, E_{d_{gener}}^{gener}] \in \mathbb{R}^{d_{gener} \times d_{var}}$:
$$ E_i^{gener} = \sum_{j=1}^{d_{gener}} \alpha_{i,j} V_j \in \mathbb{R}^{d_{var}}. $$
After obtaining the variable representation $E^{gener}$ updated through the self-attention mechanism, the model further performs an independent nonlinear mapping on the features of each variable token via a Feed-Forward Network (FFN) [6] to enhance its expressive capability. This process employs residual connections and Layer Normalization to maintain training stability:
$$ \tilde{E} = \text{LayerNorm}(E^{gener}), $$
$$ Z^{gener} = \text{LayerNorm}(\tilde{E} + \text{FFN}(\tilde{E})) \in \mathbb{R}^{d_{gener} \times d_{var}}, $$
where $\text{FFN}(\cdot)$ consists of two fully connected layers with a GELU activation function, performing a nonlinear feature transformation on each variable token. Since this module operates on variable tokens rather than a dedicated class token, we employ average pooling over the tokens to derive the final representation:
$$ Z^{gener} = \text{AvgPooling}(Z^{gener}). $$
Under this architecture, the self-attention weight matrix clearly reflects global correlations among variables, thereby enhancing model interpretability. The final output $Z^{gener} \in \mathbb{R}^{d_{var}}$ effectively integrates local temporal patterns with global variable dependencies.
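To summarize the data flow of the CIA block, the sketch below chains a Conv1D–BatchNorm–GELU stem with inverted attention over the $d_{gener}$ latent channels, an FFN with residual connection and LayerNorm, and average pooling. The kernel size, the single attention head, and the layer names are illustrative assumptions, not the exact implementation.

```python
import torch
import torch.nn as nn

class CIA(nn.Module):
    """Sketch of Convolution-Inverted Attention: Conv1D local features followed by
    self-attention applied across latent channels (variables as tokens, time as features)."""

    def __init__(self, n_channels: int, d_gener: int, d_var: int, seq_len: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, d_gener, kernel_size, padding=kernel_size // 2),
            nn.BatchNorm1d(d_gener),
            nn.GELU(),
        )
        self.pos = nn.Parameter(torch.zeros(d_gener, seq_len))                 # learnable P
        self.to_token = nn.Sequential(nn.Linear(seq_len, d_var), nn.GELU())    # MLP: time axis -> d_var
        self.attn = nn.MultiheadAttention(d_var, num_heads=1, batch_first=True)
        self.norm1 = nn.LayerNorm(d_var)
        self.norm2 = nn.LayerNorm(d_var)
        self.ffn = nn.Sequential(nn.Linear(d_var, d_var), nn.GELU(), nn.Linear(d_var, d_var))

    def forward(self, x):                           # x: (batch, L, D)
        u = self.conv(x.transpose(1, 2))            # (batch, d_gener, L): local temporal features
        e = self.to_token(u + self.pos)             # (batch, d_gener, d_var): variable tokens
        attn_out, attn_weights = self.attn(e, e, e) # inverted attention across d_gener tokens
        e_tilde = self.norm1(attn_out)
        z = self.norm2(e_tilde + self.ffn(e_tilde)) # (batch, d_gener, d_var)
        return z.mean(dim=1), attn_weights          # average pooling over tokens -> (batch, d_var)
```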

4.5. Classification Head

To synergistically leverage feature information across different levels, the model concatenates the category-specific representation $Z^{speci}$ with the general representation $Z^{gener}$ to form a fused representation $Z^{con}$, which serves as the input to the classification head. This fusion strategy enables the model to make more robust classification decisions by simultaneously utilizing the global variable correlations captured by the general representation and the local discriminative patterns revealed by the shapelets within the category-specific representation.
$$ Z^{con} = \text{concat}(Z^{speci}, Z^{gener}). $$
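A minimal sketch of this fusion step is shown below, assuming the class-specific representation has already been pooled into a single vector and that a single linear layer maps the concatenated representation to class logits; the exact head structure is an assumption.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Sketch of the classification head: concatenate the class-specific and generic
    representations and map the fused vector to class logits."""

    def __init__(self, d_speci: int, d_var: int, n_classes: int):
        super().__init__()
        self.fc = nn.Linear(d_speci + d_var, n_classes)

    def forward(self, z_speci, z_gener):            # (batch, d_speci), (batch, d_var)
        z_con = torch.cat([z_speci, z_gener], dim=-1)   # fused representation Z^con
        return self.fc(z_con)                       # class logits
```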

4.6. Big-$O$ Complexity Analysis

In this section, we provide an analysis of the computational complexity of the proposed modules (DFM, CIA, Transformer encoder) to support the claims of improved efficiency. The complexity of each module is evaluated using Big-$O$ notation, allowing for a clear understanding of the performance improvements over previous methods. Table 2 illustrates the complexity analysis for each module in EffiShapeFormer.
The DFM module involves two stages: Coarse Screening and Fine Screening. The Coarse Screening stage uses Euclidean distance calculations, which have a complexity of $O(L \cdot N)$, where $L$ is the time series length and $N$ is the number of shapelet candidates. The Fine Screening stage incorporates the Perceptual Subsequence Distance (PSD), which involves $O(L^2)$ operations due to the pairwise distance computations.
The CIA module modifies the Transformer self-attention mechanism by shifting the attention dimension from the temporal axis to the variable axis. The resulting quadratic complexity of the self-attention mechanism is $O(D^2 \cdot L)$, where $D$ is the number of variables and $L$ is the length of the sequence.
The Transformer encoder’s standard self-attention mechanism has a complexity of $O(L^2 \cdot D)$, where $L$ is the sequence length and $D$ is the dimensionality of the input.
By integrating the DFM and the CIA module, our model achieves a significant reduction in computational complexity, especially in comparison to previous methods such as ShapeFormer. The overall complexity of the EffiShapeFormer framework is reduced from $O(L^2 \cdot D)$ to $O(L \cdot N + D^2 \cdot L)$; since $D \ll L$, this demonstrates the efficiency improvements we have achieved.

5. Experiments

5.1. Experimental Settings

5.1.1. Datasets

This study evaluates the proposed method using three types of sensor time series datasets. One category employs single-channel time series datasets from the UCR [43] Archive. The UCR Archive comprises 85 distinct time series classification datasets covering various types such as bio-signals, action recognition, speech signals, and sensor signals. It stands as one of the most widely used benchmark libraries in time series classification research. We selected thirteen sensor-related datasets for experimentation, with most implementations mirroring configurations from prior studies.
The second category utilizes the UEA Archive multi-channel time series datasets. Comprising over 31 multi-channel time series classification datasets, the UEA [44] Archive spans diverse application scenarios including mechanical fault detection, human action recognition, medical signal analysis, and sensor monitoring. It stands as one of the most commonly used benchmark libraries in multi-channel time series classification research. We selected five datasets related to mechanical sensors for experimentation, with most implementations mirroring configurations employed in other studies.
The third category employs the Gearbox Dataset [45], a multi-channel mechanical dataset from Southeast University. This dataset was acquired from the Drivetrain Dynamic Simulator (DDS) and comprises two sub-datasets: bearings and gears. Data for four fault types were collected for bearings and gears under two operating conditions (speed-load configurations of 20-0 and 30-2). Each file contains 8 signals: 1—motor vibration; 2, 3, 4—vibration of the planetary gearbox in the x, y, and z directions; 5—motor torque; 6, 7, 8—vibration of the parallel gearbox in the x, y, and z directions. Fault type descriptions for the Bearingset and Gearset are given in Table 3.

5.1.2. Data Preprocessing

The UCR [43] and UEA [44] datasets have already been split into training and testing portions, with most components ready for direct experimentation. A validation set was selected from each training dataset using an 80/20 ratio. However, the DodgerLoopDay dataset contained a small number of missing values (NaN), which we repaired using mean imputation.
In this experiment, we primarily preprocessed the Gearbox [45] Dataset. Because the time series for each fault type in both the bearing and gear sub-datasets are very long, we reduced the computational burden while maintaining data representativeness by truncating each sequence to one sixty-fourth of its original length. We then performed non-overlapping sampling using a sliding window size of 1024. Subsequently, the first 80% of each fault category was used as the training set, and the remaining 20% as the test set. The validation set was processed identically to the UCR and UEA datasets. The processed bearing and gear data were then categorized into two operating conditions, yielding the final four datasets.
Beyond truncation and sliding-window sampling, no additional preprocessing (e.g., filtering, artifact removal, or normalization) was applied to preserve original signal characteristics and maintain format consistency with the UCR and UEA datasets. Detailed information for each data set is presented in Table 4.

5.1.3. Implementation Details

Our model was trained using the RAdam optimizer with an initial learning rate of $5 \times 10^{-2}$ and a weight decay of $5 \times 10^{-4}$. Training used a batch size of 64 and ran for 200 epochs, with all other parameters consistent with ShapeFormer. To ensure experimental fairness, we set the window size and the number of extracted shapelets to 100 and 10, respectively (except for the SonyAIBORobotSurface1, SonyAIBORobotSurface2, Libras, ERing, and RacketSports datasets, where the window size was set to 10) for both ShapeFormer and our method. Before the experiments, we performed hyperparameter tuning; after the final hyperparameters were determined, model training and testing proceeded. Training employed early stopping based on the validation set loss. All experiments were implemented in PyTorch 2.2.2 on Python 3.10.18 (computational infrastructure: Windows operating system, NVIDIA GeForce RTX 4090 GPU with 24 GB VRAM (NVIDIA Corporation, Santa Clara, CA, USA)).
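For clarity, the sketch below mirrors the training configuration described above (RAdam, learning rate 5 × 10⁻², weight decay 5 × 10⁻⁴, 200 epochs, early stopping on validation loss); the patience value and the helper and argument names are assumptions rather than the exact training script.

```python
import torch
import torch.nn as nn

def evaluate(model, loader, criterion):
    """Mean validation loss, used as the early-stopping criterion."""
    model.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            total += criterion(model(x), y).item() * len(y)
            n += len(y)
    return total / max(n, 1)

def train(model, train_loader, val_loader, epochs=200, patience=20):
    """Training loop following the optimizer settings described above (patience is an assumption)."""
    optimizer = torch.optim.RAdam(model.parameters(), lr=5e-2, weight_decay=5e-4)
    criterion = nn.CrossEntropyLoss()
    best_val, bad_epochs = float("inf"), 0
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:          # train_loader is assumed to yield batches of size 64
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
        val_loss = evaluate(model, val_loader, criterion)
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:     # early stopping on validation loss
                break
    return model
```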

5.1.4. Baselines

To validate the effectiveness and advanced nature of the proposed method, we selected the most representative time series models currently available as comparative benchmarks. Baseline methods for time series classification are summarized as follows:
(1)
Autoformer [9]: A time series Transformer based on autocorrelation mechanisms, capturing long-term dependencies through trend-seasonal decomposition to enhance long-sequence prediction performance.
(2)
Crossformer [10]: Models dependencies among multivariate time series via cross-dimensional attention mechanisms, enabling efficient learning of full-dimensional interactive features.
(3)
DLinear [23]: A linear decomposition-based time series modeling method achieving efficient forecasting through independent modeling of trend and seasonal components.
(4)
Informer [7]: An efficient Transformer employing ProbSparse self-attention and hierarchical distillation architecture for long-sequence time series forecasting.
(5)
iTransformer [41]: Replaces traditional time-dimension modeling with feature-dimension modeling for more efficient multivariate time series representation learning.
(6)
LightTS [11]: A lightweight time series model constructed using a simple MLP architecture combined with two downsampling strategies: interval sampling and continuous sampling. This approach leverages the observation that “time series downsampling often preserves key information,” significantly reducing computational complexity while maintaining accuracy.
(7)
PatchTST [46]: Inputs time series divided into local patches into a Transformer, enhancing local pattern capture and prediction stability.
(8)
Reformer [8]: An efficient Transformer variant that introduces locality-sensitive hashing (LSH) attention and reversible layers to significantly reduce memory and computational complexity, enabling scalable modeling of long time series sequences.
(9)
Shapeformer [19]: Combines shapelet feature extraction with the Transformer architecture to learn shape-aware representations for time series.
(10)
TimesNet [24]: Maps one-dimensional time series to two-dimensional tensors, modeling periodicity and local variations through multi-scale convolutions in a two-dimensional time-frequency space for universal temporal feature extraction.
In all baseline experiments, we strictly adhere to the parameter configurations specified in their original papers. Validation loss-based early stopping is employed throughout training to ensure fairness and comparability of experimental results.

5.1.5. Evaluation Metrics

To comprehensively evaluate the performance of the proposed method, this study employs multiple classification evaluation metrics, including Accuracy (ACC) and F1-Score (F1). We computed the average values of these metrics for each model and conducted a comprehensive ranking based on these averages to measure the overall classification performance. Additionally, to validate the computational efficiency of the model, we recorded the shapelet discovery time and total training time for both our method and the Shapeformer model under the same task for efficiency comparison.
$$ \text{ACC} = \frac{TP + TN}{TP + FP + FN + TN}, $$
$$ F1 = \frac{2 \times PR \times RE}{PR + RE}. $$
In these equations, T, F, P, and N represent true, false, positive, and negative, respectively; for example, TP denotes the number of true positives, while FN denotes the number of false negatives. PR and RE denote precision and recall, respectively.
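In practice these metrics can be computed directly, for example with scikit-learn; macro averaging of the F1-score across classes is an assumption for the multi-class case.

```python
from sklearn.metrics import accuracy_score, f1_score

def classification_metrics(y_true, y_pred):
    """Accuracy and macro-averaged F1-score for a set of predictions."""
    return {
        "ACC": accuracy_score(y_true, y_pred),
        "F1": f1_score(y_true, y_pred, average="macro"),
    }
```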

5.2. Experimental Results

5.2.1. Performance Evaluation

To comprehensively evaluate the performance of the proposed EffiShapeFormer method, we conducted systematic comparisons with baseline approaches across all experimental datasets. A total of ten representative baseline models were selected, and their classification accuracy (Accuracy) and F1-Score were recorded. The main experimental results are presented in Table 5, Table 6, Table 7 and Table 8. For ease of comparison, the best and second-best results are highlighted in bold and underlined, respectively.
As shown in Table 5 and Table 6, the proposed EffiShapeFormer model achieves significant performance improvements over the baseline ShapeFormer in terms of average classification accuracy, yielding an approximate 6% increase. This improvement highlights the superior feature extraction and representation capabilities of our proposed framework. Furthermore, EffiShapeFormer consistently ranks first among all baseline methods, indicating its strong adaptability and robustness across diverse datasets.
In terms of the F1-Score metric, as presented in Table 7 and Table 8, EffiShapeFormer also demonstrates superior performance, surpassing ShapeFormer by approximately 5.6% and achieving the highest overall average F1-Score among all comparative models. These results verify that EffiShapeFormer not only improves the classification precision but also maintains a better balance between precision and recall, reflecting its enhanced ability to handle imbalanced and complex time series patterns.
Although our method does not attain the best performance on every single dataset, it achieves Top-1 accuracy on 12 datasets and Top-2 accuracy on 2 datasets, demonstrating excellent generalization ability, stability, and competitiveness across multiple evaluation scenarios. Overall, these experimental results strongly validate the effectiveness and robustness of the proposed model in sensor-based time series classification tasks.
It can be observed from Table 5, Table 6, Table 7 and Table 8 that no single method consistently achieves the best performance across all datasets. This variability is closely related to the diverse characteristics of the evaluated datasets, as summarized in Table 4.
EffiShapeFormer demonstrates particularly strong performance on datasets with longer sequence lengths, multiple sensor channels, and clear local discriminative patterns, such as Bearing20/30, Gear20/30, Epilepsy, and SonyAIBORobotSurface. In these scenarios, the proposed Dual-layer Filtering Mechanism effectively selects informative shapelets, while the CIA module jointly captures local temporal dynamics and cross-variable dependencies, which aligns well with the intrinsic structure of multivariate sensor signals.
In contrast, for datasets with very short sequences, extremely limited training samples, or relatively weak local temporal structure, simpler models or methods with stronger inductive biases toward global similarity may occasionally achieve slightly better results. This observation is consistent with prior studies and highlights that dataset characteristics such as dimensionality, sequence length, and class complexity play a critical role in determining model effectiveness.
Overall, although EffiShapeFormer does not dominate every individual dataset, it achieves the best average performance across all evaluated benchmarks, indicating its robustness and adaptability across diverse sensor time series classification scenarios.

5.2.2. Hyperparameter Stability

In this method, an important hyperparameter—the coarse screening threshold $\beta$—needs to be adjusted. To analyze the impact of this parameter on model performance and verify its stability, we conducted systematic experiments across all datasets using different values of $\beta$ to evaluate the model’s classification performance. Specifically, the coarse screening threshold was chosen from the candidate set {0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.40, 0.50, 0.60, 0.70} to investigate the model’s sensitivity to this parameter and identify a suitable value that balances efficiency and performance. Table 9 summarizes the optimal threshold obtained for each sensor dataset. Under these threshold settings, the model achieved the highest accuracy and F1-Score on the corresponding dataset. Therefore, in subsequent experiments, we adopted these optimal threshold values for model training and evaluation. It is worth noting that $\beta$ is selected offline for each dataset based on training/validation performance. Once determined, the corresponding $\beta$ is fixed and consistently used throughout training and testing on that dataset, without any further adjustment at inference time.
Specifically, the choice of β depends on the characteristics of the dataset: (1) In datasets with higher noise levels or stronger non-stationarity, the discriminative distance distribution of candidate shapelets is more dispersed. In such cases, a larger β is required to avoid prematurely discarding potentially useful shapelets. (2) In datasets with strong intra-class consistency and clear inter-class differences, the coarse screening phase can reliably distinguish candidate shapelets, and a smaller β is sufficient to reduce computational cost while maintaining performance. (3) Variations in sequence length and channel dimensions across different datasets can also affect the number and statistical properties of shapelet candidates, thereby influencing the optimal choice of β .
Therefore, the variability of β across datasets is not due to instability in the method, but rather reflects the differences in the density of shapelet discriminative information under different data distributions.

5.2.3. Computational Efficiency Analysis

In the proposed method, a Coarse Screening module is incorporated during the shapelet discovery stage, and a Convolutional-Inverted Attention (CIA) module is integrated within the generic representation stage. Compared with the ShapeFormer model, our approach achieves higher overall computational efficiency while maintaining strong classification performance. To verify this advantage, we conducted a systematic comparative analysis of both shapelet discovery time and model training time on eight datasets.
Shapelet Discovery Time. The shapelet discovery time refers to the total duration spent during the shapelet discovery stage, encompassing both the extraction and filtering processes. As shown in Figure 7, our method consistently requires less discovery time than ShapeFormer on eight datasets. This improvement can be attributed to the introduced Coarse Screening mechanism, which effectively eliminates redundant or non-discriminative shapelet candidates in the early stage, thereby reducing unnecessary computation. The experimental results clearly demonstrate that the Coarse Screening module plays a crucial role in enhancing shapelet discovery efficiency and significantly accelerates the overall shapelet discovery process.
Model Training Time. To further evaluate the effectiveness of the CIA module in reducing computational costs during model training, we compared the training times of our method and ShapeFormer on eight datasets. As illustrated in Figure 7, our method exhibits substantially lower training time than ShapeFormer. This result indicates that the CIA module efficiently captures cross-class feature representations while reducing redundant parameter updates, thereby significantly lowering the computational burden during training. Overall, the results confirm that the proposed framework achieves an excellent balance between computational efficiency and classification performance.

5.3. Ablation Study

To further verify the effectiveness and contribution of each module in the proposed method, we conducted a systematic ablation study on all datasets. Specifically, we progressively removed or added key modules while keeping the remaining structure unchanged, and recorded the model’s average Accuracy and F1-Score under each configuration. This enables a quantitative assessment of each module’s impact on the overall performance. All comparisons were made with respect to the ShapeFormer baseline, thereby revealing the specific role of each component in enhancing the model’s discriminative capability and feature representation.
As shown in Table 10, the baseline ShapeFormer achieves an Accuracy of 0.7855 and an F1-Score of 0.7634 without any additional components. When the Coarse Screening module is introduced, the performance slightly decreases to an Accuracy of 0.7675 and an F1-Score of 0.7510, suggesting that coarse screening mainly reduces the interference of irrelevant shapelets but, when used alone, provides limited direct gains for classification. When incorporating the inverse-attention mechanism into ShapeFormer, the performance improves to an Accuracy of 0.7937 and an F1-Score of 0.7832, indicating that inverse attention can better model informative dependencies and enhance feature discrimination. Building upon this, the Convolutional-Inverted Attention (CIA) module further boosts the performance to an Accuracy of 0.8113 and an F1-Score of 0.7849, demonstrating that combining convolutional feature projection with inverse-attention-based interaction is more effective than using inverse attention alone. Finally, the Proposed Model, which integrates both the Coarse Screening mechanism and the CIA module, achieves the best overall performance with an Accuracy of 0.8456 and an F1-Score of 0.8263, confirming the complementary contributions of the proposed components while maintaining computational efficiency.

5.4. A Case Study of Epilepsy

To interpret the results of EffiShapeFormer, we adopt the Epilepsy dataset from the UEA archive [44], which contains four activity classes (Running, Walking, Sawing, and Seizure Mimicking) for human activity recognition; each instance comprises three channels. For each class, we select 10 shapelets for analysis. Specifically, we randomly choose a Sawing instance from the training set and select the top three shapelets from this class, and we additionally take one top shapelet from each of the other three classes for comparative visualization. The results are shown in Figure 8a, where S, R, W, and SM denote the Sawing, Running, Walking, and Seizure Mimicking classes, respectively. The suffix -01 (or -04) indicates the shapelet index within the corresponding class; for instance, S-01 refers to the first shapelet in the Sawing class, and SM-04 refers to the fourth shapelet in the Seizure Mimicking class. The outlined boxes indicate the best-fit subsequences matched by each shapelet. As can be observed, EffiShapeFormer is able to localize key subsequences across different channels and temporal positions and match them with the learned shapelets. Compared with shapelets from other classes, shapelets from the same class exhibit higher similarity to their best-fit subsequences, highlighting the model’s ability to capture class-discriminative local patterns in time series.
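The best-fit subsequences outlined in Figure 8a are obtained by sliding each shapelet over its channel and keeping the window with the smallest distance. A minimal sketch of this matching is given below, using the complexity-invariant distance (CID) [39] on which the perceptual subsequence distance of Table 1 is based; the helper names are ours, and the actual matching additionally restricts the search to a neighborhood of radius w around the shapelet's original position, which is omitted here for brevity.

```python
import numpy as np

def complexity(x: np.ndarray) -> float:
    """Complexity estimate CE(x): length of the line traced by consecutive differences."""
    return np.sqrt(np.sum(np.diff(x) ** 2))

def cid(q: np.ndarray, c: np.ndarray) -> float:
    """Complexity-invariant distance (Batista et al.): Euclidean distance scaled by the complexity ratio."""
    ed = np.linalg.norm(q - c)
    ce_q, ce_c = complexity(q), complexity(c)
    cf = max(ce_q, ce_c) / max(min(ce_q, ce_c), 1e-8)  # correction factor, guarded against zero
    return ed * cf

def best_fit_subsequence(channel: np.ndarray, shapelet: np.ndarray):
    """Return (start index, distance) of the window of `channel` closest to `shapelet`."""
    L = len(shapelet)
    dists = [cid(channel[b:b + L], shapelet) for b in range(len(channel) - L + 1)]
    b_star = int(np.argmin(dists))
    return b_star, dists[b_star]
```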
As shown in Figure 8b, we visualize the channel-wise attention response of the CIA module in EffiShapeFormer on the Epilepsy dataset, and compare the attention distributions at the early training stage (Epoch 0) and after convergence (Epoch 135). The rows and columns of the attention matrix correspond to the 48 latent feature channels produced by the CIA conv1d projection (the convolution mapping dimension is set to 48 in our experiments). On top of these latent channels, an inverse-attention mechanism is applied to model cross-channel dependencies. Specifically, each element A_{i,j} denotes the attention weight assigned to channel j when updating the representation of channel i, thereby characterizing the strength of inter-channel interactions.
To emphasize cross-channel relations and avoid self-correlation dominating the visualization, the diagonal entries are masked and shown as zero for plotting purposes only. At Epoch 0, the attention pattern is relatively diffuse and lacks stable structure, indicating that the model has not yet formed consistent cross-channel dependencies. In contrast, at Epoch 135, the attention map exhibits clear stripe-like structures: several bright vertical stripes suggest that a small subset of channels is consistently attended by many other channels, behaving as key channels in cross-channel interactions; meanwhile, some horizontal bands indicate that certain channels rely consistently on specific key channels during their updates. Overall, the converged attention evolves from an unstructured diffuse pattern to a sparser and more organized one, suggesting that conv1d provides compact latent channel representations, while inverse attention further promotes effective cross-channel dependencies and suppresses redundant interactions. This supports the capability of EffiShapeFormer to capture discriminative dependency patterns in the latent feature space.
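As a reading aid, the following matplotlib sketch reproduces the style of Figure 8b: a 48 × 48 channel-attention matrix is plotted with its diagonal masked so that cross-channel interactions dominate the colour scale. The random matrix in the usage example merely stands in for the attention weights extracted from the trained CIA module.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_channel_attention(attn: np.ndarray, title: str = "CIA channel attention"):
    """Plot a (C, C) attention matrix with the diagonal masked to zero,
    so that cross-channel interactions dominate the colour scale."""
    a = attn.copy()
    np.fill_diagonal(a, 0.0)                # mask self-attention for visualisation only
    fig, ax = plt.subplots(figsize=(4, 4))
    im = ax.imshow(a, cmap="viridis", aspect="equal")
    ax.set_xlabel("attended channel j")
    ax.set_ylabel("query channel i")
    ax.set_title(title)
    fig.colorbar(im, ax=ax, fraction=0.046)
    plt.tight_layout()
    plt.show()

# Illustrative call with a random 48 x 48 matrix standing in for the learned attention.
plot_channel_attention(np.random.rand(48, 48), title="Epoch 0 (illustrative)")
```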

6. Conclusions

In this study, we propose an efficient model architecture named EffiShapeFormer for sensor-based time series classification tasks. The model introduces a dual filtering mechanism in the Shapelet Discovery stage to efficiently select discriminative shapelets. In the Class-specific Representation module, the filtered shapelets are modeled to capture class-specific features, while in the Generic Representation stage, the proposed CIA module is employed to extract cross-class generic features. Finally, the model fuses the class-specific and generic representations to achieve efficient and accurate time series classification, significantly improving computational efficiency while maintaining high accuracy.
The ablation experiments verify the functional contributions of the proposed modules: the Coarse Screening module effectively reduces computational time during the shapelet discovery phase while maintaining high accuracy; the CIA module accelerates the training process and better captures global dependencies among variables. The combination of both modules not only enhances the overall efficiency of the model but also further improves classification performance.
Experimental results on 22 sensor datasets demonstrate that EffiShapeFormer achieves the best overall average performance and consistently competitive results compared with baseline models, highlighting the effectiveness and potential of our approach for sensor-based time series classification tasks. In future work, we plan to further exploit the interpretability of shapelets to explore their broader applicability across various sensor time series analysis scenarios.

Author Contributions

Conceptualization, S.W., J.B., L.W. and X.T.; methodology, S.W., J.B., L.W. and S.Z.; software, S.W., J.B., L.W. and H.W.; validation, J.B., S.W. and S.Z.; formal analysis, Q.Z., N.W. and X.T.; investigation, J.B. and S.W.; resources, L.L. and J.L.; data curation, J.B., S.W. and H.W.; writing—original draft preparation, S.W. and J.B.; writing—review and editing, J.B. and S.W.; visualization, X.L. and X.Z.; supervision, X.Y.; project administration, X.T., S.Z. and H.W.; funding acquisition, L.L. and J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Chongqing Municipal Economy and Information Technology Commission (grant No. YJX-2025001001008), the National Natural Science Foundation of China (grant Nos. 62477004, 62377040), the Fundamental Research Funds for the Central Universities (grant No. 2023CDJYGRH-YB08), and the General Program of the Chongqing Science and Health Joint Medical Research Project (grant No. 2023MSXM023).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The sensor data used in this study are publicly available from the UEA, UCR, and Gearbox databases.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Michau, G.; Hu, Y.; Palmé, T.; Fink, O. Feature learning for fault detection in high-dimensional condition monitoring signals. Proc. Inst. Mech. Eng. Part O J. Risk Reliab. 2020, 234, 104–115. [Google Scholar] [CrossRef]
  2. Zhang, M.; Xing, X.; Wang, W. Smart Sensor-Based Monitoring Technology for Machinery Fault Detection. Sensors 2024, 24, 2470. [Google Scholar] [CrossRef]
  3. Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and sensing for autonomous vehicles under adverse weather conditions: A survey. ISPRS J. Photogramm. Remote Sens. 2023, 196, 146–177. [Google Scholar] [CrossRef]
  4. Mekruksavanich, S.; Jitpattanakul, A. Lstm networks using smartphone data for sensor-based human activity recognition in smart homes. Sensors 2021, 21, 1636. [Google Scholar] [CrossRef]
  5. Ismail Fawaz, H.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Deep learning for time series classification: A review. Data Min. Knowl. Discov. 2019, 33, 917–963. [Google Scholar] [CrossRef]
  6. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
  7. Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 11106–11115. [Google Scholar]
  8. Kitaev, N.; Kaiser, Ł.; Levskaya, A. Reformer: The efficient transformer. arXiv 2020, arXiv:2001.04451. [Google Scholar] [CrossRef]
  9. Wu, H.; Xu, J.; Wang, J.; Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Adv. Neural Inf. Process. Syst. 2021, 34, 22419–22430. [Google Scholar]
  10. Zhang, Y.; Yan, J. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In Proceedings of the The Eleventh International Conference on Learning Representations, Kigali, Rwanda, 1–5 May 2023. [Google Scholar]
  11. Campos, D.; Zhang, M.; Yang, B.; Kieu, T.; Guo, C.; Jensen, C.S. Lightts: Lightweight time series classification with adaptive ensemble distillation. Proc. ACM Manag. Data 2023, 1, 1–27. [Google Scholar] [CrossRef]
  12. Ruiz, A.P.; Flynn, M.; Large, J.; Middlehurst, M.; Bagnall, A. The great multivariate time series classification bake off: A review and experimental evaluation of recent algorithmic advances. Data Min. Knowl. Discov. 2021, 35, 401–449. [Google Scholar] [CrossRef] [PubMed]
  13. Ye, L.; Keogh, E. Time series shapelets: A new primitive for data mining. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Paris, France, 28 June–1 July 2009; pp. 947–956. [Google Scholar]
  14. Grabocka, J.; Schilling, N.; Wistuba, M.; Schmidt-Thieme, L. Learning time-series shapelets. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 24–27 August 2014; pp. 392–401. [Google Scholar]
  15. Qu, E.; Wang, Y.; Luo, X.; He, W.; Ren, K.; Li, D. CNN kernels can be the best shapelets. In Proceedings of the Twelfth International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [Google Scholar]
  16. Wen, Y.; Ma, T.; Weng, L.; Nguyen, L.; Julius, A.A. Abstracted shapes as tokens-a generalizable and interpretable model for time-series classification. Adv. Neural Inf. Process. Syst. 2024, 37, 92246–92272. [Google Scholar]
  17. Hills, J.; Lines, J.; Baranauskas, E.; Mapp, J.; Bagnall, A. Classification of time series by shapelet transformation. Data Min. Knowl. Discov. 2014, 28, 851–881. [Google Scholar] [CrossRef]
  18. Le, X.M.; Tran, M.T.; Huynh, V.N. Learning perceptual position-aware shapelets for time series classification. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Grenoble, France, 19–23 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 53–69. [Google Scholar]
  19. Le, X.M.; Luo, L.; Aickelin, U.; Tran, M.T. Shapeformer: Shapelet transformer for multivariate time series classification. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Barcelona, Spain, 25–29 August 2024; pp. 1484–1494. [Google Scholar]
  20. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 2002, 86, 2278–2324. [Google Scholar] [CrossRef]
  21. Elman, J.L. Finding structure in time. Cogn. Sci. 1990, 14, 179–211. [Google Scholar] [CrossRef]
  22. Zheng, Y.; Liu, Q.; Chen, E.; Ge, Y.; Zhao, J.L. Time series classification using multi-channels deep convolutional neural networks. In Proceedings of the International Conference on Web-Age Information Management, Macau, China, 16–18 June 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 298–310. [Google Scholar]
  23. Zeng, A.; Chen, M.; Zhang, L.; Xu, Q. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 11121–11128. [Google Scholar]
  24. Wu, H.; Hu, T.; Liu, Y.; Zhou, H.; Wang, J.; Long, M. Timesnet: Temporal 2d-variation modeling for general time series analysis. arXiv 2022, arXiv:2210.02186. [Google Scholar]
  25. Theissler, A.; Spinnato, F.; Schlegel, U.; Guidotti, R. Explainable AI for time series classification: A review, taxonomy and research directions. IEEE Access 2022, 10, 100700–100724. [Google Scholar] [CrossRef]
  26. Kacprzyk, K.; Liu, T.; van der Schaar, M. Towards transparent time series forecasting. In Proceedings of the Twelfth International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [Google Scholar]
  27. Huang, B.; Jin, M.; Liang, Y.; Barthelemy, J.; Cheng, D.; Wen, Q.; Liu, C.; Pan, S. ShapeX: Shapelet-Driven Post Hoc Explanations for Time Series Classification Models. arXiv 2025, arXiv:2510.20084. [Google Scholar]
  28. Li, G.; Choi, B.; Xu, J.; Bhowmick, S.S.; Chun, K.P.; Wong, G.L.H. Shapenet: A shapelet-neural network approach for multivariate time series classification. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 8375–8383. [Google Scholar]
  29. Lines, J.; Davis, L.M.; Hills, J.; Bagnall, A. A shapelet transform for time series classification. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Beijing, China, 12–16 August 2012; pp. 289–297. [Google Scholar]
  30. Grabocka, J.; Wistuba, M.; Schmidt-Thieme, L. Fast classification of univariate and multivariate time series through shapelet discovery. Knowl. Inf. Syst. 2016, 49, 429–454. [Google Scholar] [CrossRef]
  31. Ghods, A.; Cook, D.J. PIP: Pictorial interpretable prototype learning for time series classification. IEEE Comput. Intell. Mag. 2022, 17, 34–45. [Google Scholar] [CrossRef] [PubMed]
  32. Ghosal, G.R.; Abbasi-Asl, R. Multi-modal prototype learning for interpretable multivariable time series classification. arXiv 2021, arXiv:2106.09636. [Google Scholar] [CrossRef]
  33. Lin, J.; Keogh, E.; Lonardi, S.; Chiu, B. A symbolic representation of time series, with implications for streaming algorithms. In Proceedings of the 8th ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery, Washington, DC, USA, 13 June 2003; pp. 2–11. [Google Scholar]
  34. Schäfer, P.; Högqvist, M. SFA: A symbolic fourier approximation and index for similarity search in high dimensional datasets. In Proceedings of the 15th International Conference on Extending Database Technology, Berlin, Germany, 26–30 March 2012; pp. 516–527. [Google Scholar]
  35. Schäfer, P. The BOSS is concerned with time series classification in the presence of noise. Data Min. Knowl. Discov. 2015, 29, 1505–1530. [Google Scholar] [CrossRef]
  36. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar]
  37. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
  38. Chung, F.L.K.; Fu, T.C.; Luk, W.P.R.; Ng, V.T.Y. Flexible time series pattern matching based on perceptually important points. In Proceedings of the Workshop on Learning from Temporal and Spatial Data in International Joint Conference on Artificial Intelligence, Seattle, WA, USA, 6 August 2001. [Google Scholar]
  39. Batista, G.E.; Wang, X.; Keogh, E.J. A complexity-invariant distance measure for time series. In Proceedings of the 2011 SIAM International Conference on Data Mining, SIAM, Mesa, AZ, USA, 28–30 April 2011; pp. 699–710. [Google Scholar]
  40. Kim, S.W.; Park, D.H.; Lee, H.G. Efficient processing of subsequence matching with the Euclidean metric in time-series databases. Inf. Process. Lett. 2004, 90, 253–260. [Google Scholar] [CrossRef]
  41. Liu, Y.; Hu, T.; Zhang, H.; Wu, H.; Wang, S.; Ma, L.; Long, M. itransformer: Inverted transformers are effective for time series forecasting. arXiv 2023, arXiv:2310.06625. [Google Scholar]
  42. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  43. Dau, H.A.; Bagnall, A.; Kamgar, K.; Yeh, C.C.M.; Zhu, Y.; Gharghabi, S.; Ratanamahatana, C.A.; Keogh, E. The UCR time series archive. IEEE/CAA J. Autom. Sin. 2019, 6, 1293–1305. [Google Scholar] [CrossRef]
  44. Bagnall, A.; Dau, H.A.; Lines, J.; Flynn, M.; Large, J.; Bostrom, A.; Southam, P.; Keogh, E. The UEA multivariate time series classification archive. arXiv 2018, arXiv:1811.00075. [Google Scholar] [CrossRef]
  45. Shao, S.; McAleer, S.; Yan, R.; Baldi, P. Highly accurate machine fault diagnosis using deep transfer learning. IEEE Trans. Ind. Inform. 2018, 15, 2446–2455. [Google Scholar] [CrossRef]
  46. Nie, Y. A Time Series is Worth 64 Words: Long-term Forecasting with Transformers. arXiv 2022, arXiv:2211.14730. [Google Scholar]
Figure 1. Framework of EffiShapeFormer. (a) Generic Representation: We propose Convolutional Inverse Attention (CIA), which employs a “dimension transposition” approach to treat variables as tokens and time points as features. This shifts the application dimension of self-attention from the temporal axis to the variable axis, significantly enhancing computational efficiency; (b) Shapelet Discovery: PIP is employed to extract shapelet candidates, which undergo coarse-grained screening before calculating distance and information gain to select the final shapelet; (c) Class-specific Representation: A self-attention-based shapelet learning mechanism captures the interactive relationship between shapelets and input sequences to learn discriminative feature representations.
Figure 2. The process of extracting the first 5 PIPs, where PD denotes the maximum Perpendicular Distance.
Figure 3. Representative examples of shapelet candidates in the training datasets. (a) illustrates the shapelets extracted from the target class, while (b) presents those extracted from the other classes.
Figure 4. Calculation of the intra-class and inter-class distances between shapelets and the training data X during the coarse screening process.
Figure 5. Token Dimension Conversion.
Figure 6. Overall Structure of Inverse Attention.
Figure 7. Comparison of discovery time, training time, accuracy, and F1-Score between our method and ShapeFormer on eight datasets. The horizontal axis denotes the dataset indices: 1—Car, 2—DodgerLoopWeekend, 3—ERing, 4—Libras, 5—Lightning7, 6—Plane, 7—RacketSports, 8—Trace.
Figure 8. (a) Shapelets and their best-fit subsequences on the Epilepsy dataset (a randomly selected Sawing instance). Pink: top three Sawing shapelets (S-01, S-04, S-08); cyan: top shapelets from other classes (SM-04, W-02, R-02). Shaded boxes indicate the corresponding best-fit subsequences. (b) Feature-channel attention map of the CIA module on the Epilepsy dataset (Epoch 0 vs. Epoch 135). Diagonal entries are masked (shown as zero) for visualization only to emphasize cross-channel attention.
Table 1. Important Notations and Descriptions.
Notation | Description
N | The number of training samples
x_t^d | Value at time step t on channel d
S_i | The i-th shapelet (a continuous subsequence from one channel)
d_i | The channel index of shapelet S_i
p_i^start, p_i^end | Start/end indices of shapelet S_i
ℓ_i | The length of shapelet S_i
b | The sliding-window start index in matching
X_i^b = X_{b:b+ℓ_i−1, d_i} | Length-ℓ_i subsequence on channel d_i starting at b
ED(·,·) | Euclidean distance
n_pip | The number of PIPs
S^(T)_{D_j,k}, S^(O)_{D_j,k} | The candidate indexed by channel D_j and candidate id k, from the target class (T) or other classes (O); does not redefine shapelets
n_{C_T}, n_{C_O} | The number of samples in the target class/other classes during coarse screening
D_intra(·), D_inter(·) | The minimum intra-/inter-class distance of a shapelet
δ(·) | Discriminative score based on the intra-/inter-class distance difference for coarse screening
β | Percentage threshold: discard the bottom β% of candidates ranked by δ(·)
S | Candidate set retained after coarse screening
S_i | The i-th final shapelet after fine screening
PSD(·,·) | Perceptual subsequence distance based on CID(·,·)
CID(·,·) | Complexity-invariant distance
I_i | Best-fit subsequence in X aligned to S_i
h_{S_i}, h_{I_i} | Embeddings of shapelet S_i and matched subsequence I_i via linear projections
PE(·) | Learnable embedding function for p_i^start, p_i^end, d_i
w | Neighborhood radius for local matching around p_i^start
Z_speci, Z_gener, Z_con | Class-specific, generic, and concatenated representations
H, h | Number of attention heads; index of the h-th head
Softmax(·) | Softmax function for attention normalization
Table 2. Complexity Analysis for Each Module in EffiShapeFormer.
Module | Complexity | Explanation
DFM | O(L·N) | Coarse Screening involves Euclidean distance calculation; Fine Screening uses Perceptual Subsequence Distance (PSD).
CIA | O(D²·L) | The modified self-attention mechanism reduces complexity by shifting the attention dimension.
Transformer Encoder | O(L²·D) | Traditional Transformer self-attention mechanism with quadratic complexity.
EffiShapeFormer | O(L·N + D²·L) | Combined efficiency of DFM and CIA, reducing the overall computational cost.
Table 3. Fault type descriptions for Bearingset and Gearset.
Location | Type | Description
Gearset | Chipped | Crack occurs in the gear feet
Gearset | Miss | Missing one of the feet in the gear
Gearset | Root | Crack occurs in the root of the gear feet
Gearset | Surface | Wear occurs on the surface of the gear
Bearingset | Ball | Crack occurs in the ball
Bearingset | Inner | Crack occurs in the inner ring
Bearingset | Outer | Crack occurs in the outer ring
Bearingset | Comb | Crack occurs in both the inner and outer rings
Table 4. Characteristics of the Datasets. # denotes number of.
Datasets | #Channels | Series Length | Num Classes | #Train | #Val | #Test
Bearing20 | 8 | 1024 | 4 | 163 | 41 | 48
Bearing30 | 8 | 1024 | 4 | 163 | 41 | 48
Car | 1 | 577 | 4 | 48 | 12 | 60
DodgerLoopDay | 1 | 288 | 7 | 62 | 16 | 80
DodgerLoopGame | 1 | 288 | 2 | 16 | 4 | 138
DodgerLoopWeekend | 1 | 1500 | 2 | 240 | 60 | 600
Earthquakes | 1 | 512 | 2 | 257 | 65 | 139
Epilepsy | 3 | 207 | 4 | 110 | 27 | 138
ERing | 4 | 65 | 6 | 24 | 6 | 270
Handwriting | 3 | 152 | 26 | 120 | 30 | 850
Libras | 2 | 45 | 15 | 144 | 36 | 180
Gear20 | 8 | 1024 | 4 | 163 | 41 | 48
Gear30 | 8 | 1024 | 4 | 163 | 41 | 48
Lightning2 | 1 | 637 | 2 | 48 | 12 | 61
Lightning7 | 1 | 319 | 7 | 56 | 14 | 73
Plane | 1 | 144 | 7 | 84 | 21 | 105
RacketSports | 6 | 30 | 4 | 121 | 30 | 152
SonyAIBORobotSurface1 | 1 | 70 | 2 | 16 | 4 | 601
SonyAIBORobotSurface2 | 1 | 65 | 2 | 21 | 6 | 953
StarLightCurves | 1 | 1024 | 3 | 800 | 200 | 8236
Trace | 1 | 275 | 4 | 80 | 20 | 100
Wafer | 1 | 152 | 2 | 800 | 200 | 6164
Table 5. Comparison of Classification Accuracy on Datasets (Part 1).
Dataset | Autoformer | Crossformer | DLinear | Informer | iTransformer | LightTS
Bearing20 | 0.2917 | 0.4583 | 0.2500 | 0.5000 | 0.3542 | 0.3125
Bearing30 | 0.2083 | 0.5625 | 0.6042 | 0.5208 | 0.5417 | 0.5833
Car | 0.2833 | 0.6833 | 0.7833 | 0.7333 | 0.8000 | 0.8000
DodgerLoopDay | 0.2987 | 0.6234 | 0.5584 | 0.6494 | 0.5065 | 0.6104
DodgerLoopGame | 0.5197 | 0.4409 | 0.6535 | 0.4803 | 0.5118 | 0.6929
DodgerLoopWeekend | 0.6349 | 0.0714 | 0.9524 | 0.9683 | 0.9762 | 0.9683
Earthquakes | 0.5540 | 0.6835 | 0.6835 | 0.7410 | 0.7554 | 0.7194
Epilepsy | 0.7681 | 0.8623 | 0.4565 | 0.7899 | 0.6739 | 0.8406
ERing | 0.6370 | 0.9444 | 0.8963 | 0.9481 | 0.9296 | 0.9000
Handwriting | 0.0529 | 0.1859 | 0.1365 | 0.2024 | 0.2035 | 0.1306
Libras | 0.7167 | 0.8556 | 0.6722 | 0.6611 | 0.8611 | 0.6722
Gear20 | 0.2708 | 0.7917 | 0.5833 | 0.9792 | 0.4583 | 0.5833
Gear30 | 0.2292 | 0.7500 | 0.7708 | 0.8333 | 0.7500 | 0.6875
Lightning2 | 0.5082 | 0.7377 | 0.6721 | 0.7377 | 0.7213 | 0.6885
Lightning7 | 0.2192 | 0.6712 | 0.6712 | 0.7397 | 0.6164 | 0.6712
Plane | 0.9524 | 0.9619 | 0.9810 | 0.9524 | 0.9714 | 0.9714
RacketSports | 0.7697 | 0.7632 | 0.7500 | 0.8882 | 0.7434 | 0.6711
SonyAIBORobotSurface1 | 0.4409 | 0.6606 | 0.5957 | 0.4293 | 0.4759 | 0.4210
SonyAIBORobotSurface2 | 0.8594 | 0.8562 | 0.8562 | 0.8153 | 0.8363 | 0.8468
StarLightCurves | 0.2736 | 0.8932 | 0.8985 | 0.8981 | 0.8565 | 0.9157
Trace | 0.5900 | 0.7700 | 0.5200 | 0.8800 | 0.5200 | 0.5500
Wafer | 0.9849 | 0.9933 | 0.9429 | 0.9927 | 0.9935 | 0.9940
Average-ACC | 0.5029 | 0.6918 | 0.6768 | 0.7428 | 0.6844 | 0.6923
Rank | 11 | 7 | 9 | 4 | 8 | 6
Table 6. Comparison of Classification Accuracy on Datasets (Part 2).
Dataset | PatchTST | Reformer | Shapeformer | TimesNet | Our
Bearing20 | 0.0833 | 0.5833 | 0.7500 | 0.6458 | 0.8744
Bearing30 | 0.1250 | 0.5625 | 0.9791 | 0.5208 | 0.9841
Car | 0.7833 | 0.7000 | 0.3000 | 0.7667 | 0.8167
DodgerLoopDay | 0.4675 | 0.5325 | 0.4875 | 0.5195 | 0.6000
DodgerLoopGame | 0.5197 | 0.6535 | 0.6304 | 0.4252 | 0.8623
DodgerLoopWeekend | 0.7619 | 0.9841 | 0.9348 | 0.9841 | 0.9855
Earthquakes | 0.7122 | 0.6547 | 0.7050 | 0.6043 | 0.7482
Epilepsy | 0.9638 | 0.8333 | 0.9565 | 0.8478 | 0.9783
ERing | 0.9593 | 0.9296 | 0.7852 | 0.9185 | 0.8704
Handwriting | 0.1318 | 0.2388 | 0.2671 | 0.2106 | 0.2365
Libras | 0.7444 | 0.6889 | 0.9000 | 0.7667 | 0.8444
Gear20 | 0.0833 | 1.0000 | 0.7500 | 0.8333 | 0.8333
Gear30 | 0.4167 | 0.8542 | 1.0000 | 0.7500 | 1.0000
Lightning2 | 0.6885 | 0.6721 | 0.7869 | 0.7213 | 0.7869
Lightning7 | 0.7123 | 0.7671 | 0.6164 | 0.6575 | 0.6575
Plane | 0.9714 | 0.9619 | 0.9810 | 0.9619 | 0.9905
RacketSports | 0.7237 | 0.8553 | 0.8750 | 0.8289 | 0.8618
SonyAIBORobotSurface1 | 0.5607 | 0.4326 | 0.8869 | 0.4293 | 0.9018
SonyAIBORobotSurface2 | 0.8468 | 0.8751 | 0.7964 | 0.8059 | 0.8741
StarLightCurves | 0.9230 | 0.9023 | 0.9105 | 0.8685 | 0.9299
Trace | 0.9400 | 0.8300 | 0.9900 | 0.8800 | 0.9700
Wafer | 0.9950 | 0.9919 | 0.9924 | 0.9943 | 0.9964
Average-ACC | 0.6415 | 0.7502 | 0.7855 | 0.7246 | 0.8456
Rank | 10 | 3 | 2 | 5 | 1
Table 7. Comparison of Classification F1-Score on Datasets (Part 1).
Dataset | Autoformer | Crossformer | DLinear | Informer | iTransformer | LightTS
Bearing20 | 0.2561 | 0.4833 | 0.2500 | 0.3750 | 0.3182 | 0.3155
Bearing30 | 0.1735 | 0.4932 | 0.5684 | 0.4161 | 0.4958 | 0.5417
Car | 0.2516 | 0.6896 | 0.7876 | 0.7455 | 0.8058 | 0.8032
DodgerLoopDay | 0.2733 | 0.5949 | 0.5494 | 0.6467 | 0.5139 | 0.5872
DodgerLoopGame | 0.4826 | 0.4319 | 0.6486 | 0.4517 | 0.3385 | 0.6851
DodgerLoopWeekend | 0.5279 | 0.0667 | 0.9426 | 0.9611 | 0.9706 | 0.9611
Earthquakes | 0.4712 | 0.5080 | 0.6602 | 0.4744 | 0.4818 | 0.5007
Epilepsy | 0.7646 | 0.8579 | 0.3925 | 0.7724 | 0.6549 | 0.8341
ERing | 0.6189 | 0.9440 | 0.8940 | 0.9472 | 0.9294 | 0.8984
Handwriting | 0.0388 | 0.1569 | 0.0939 | 0.1664 | 0.1652 | 0.0887
Libras | 0.6886 | 0.8515 | 0.6576 | 0.6500 | 0.8549 | 0.6608
Gear20 | 0.2594 | 0.7840 | 0.5611 | 0.9791 | 0.4405 | 0.5611
Gear30 | 0.1509 | 0.7412 | 0.7353 | 0.8298 | 0.7487 | 0.6792
Lightning2 | 0.4089 | 0.7208 | 0.6611 | 0.7289 | 0.6796 | 0.6799
Lightning7 | 0.1769 | 0.6115 | 0.6152 | 0.7345 | 0.5801 | 0.6151
Plane | 0.9437 | 0.9614 | 0.9804 | 0.9518 | 0.9703 | 0.9703
RacketSports | 0.7757 | 0.7768 | 0.7614 | 0.8939 | 0.7531 | 0.6816
SonyAIBORobotSurface1 | 0.3228 | 0.6519 | 0.5688 | 0.3003 | 0.3859 | 0.2986
SonyAIBORobotSurface2 | 0.8460 | 0.8510 | 0.8465 | 0.8096 | 0.8266 | 0.8322
StarLightCurves | 0.2621 | 0.8326 | 0.8612 | 0.8698 | 0.6294 | 0.8768
Trace | 0.5484 | 0.7421 | 0.4504 | 0.8753 | 0.5059 | 0.4846
Wafer | 0.9609 | 0.9827 | 0.8317 | 0.9808 | 0.9830 | 0.9844
Average-F1 | 0.4638 | 0.6697 | 0.6508 | 0.7073 | 0.6378 | 0.6609
Rank | 11 | 6 | 8 | 4 | 9 | 7
Table 8. Comparison of Classification F1-Score on Datasets (Part 2).
Dataset | PatchTST | Reformer | Shapeformer | TimesNet | Our
Bearing20 | 0.1067 | 0.5145 | 0.7433 | 0.6016 | 0.8695
Bearing30 | 0.1483 | 0.4932 | 0.9791 | 0.5025 | 0.9837
Car | 0.7857 | 0.7071 | 0.2544 | 0.7632 | 0.8109
DodgerLoopDay | 0.4678 | 0.5364 | 0.4830 | 0.4966 | 0.5992
DodgerLoopGame | 0.3931 | 0.6472 | 0.6157 | 0.4200 | 0.8597
DodgerLoopWeekend | 0.5542 | 0.9802 | 0.9202 | 0.9802 | 0.9815
Earthquakes | 0.4600 | 0.5723 | 0.5476 | 0.5822 | 0.4280
Epilepsy | 0.9642 | 0.8133 | 0.9567 | 0.8290 | 0.9779
ERing | 0.9594 | 0.9283 | 0.7832 | 0.9176 | 0.8686
Handwriting | 0.0929 | 0.1946 | 0.2490 | 0.1753 | 0.2128
Libras | 0.7425 | 0.6805 | 0.8992 | 0.7592 | 0.8436
Gear20 | 0.0645 | 1.0000 | 0.7500 | 0.8222 | 0.8179
Gear30 | 0.3354 | 0.8525 | 1.0000 | 0.6500 | 1.0000
Lightning2 | 0.6655 | 0.6611 | 0.7711 | 0.7058 | 0.7860
Lightning7 | 0.6735 | 0.7742 | 0.6133 | 0.6476 | 0.6245
Plane | 0.9703 | 0.9614 | 0.9783 | 0.9622 | 0.9876
RacketSports | 0.7344 | 0.8625 | 0.8830 | 0.8370 | 0.8699
SonyAIBORobotSurface1 | 0.5200 | 0.3068 | 0.8797 | 0.3003 | 0.9008
SonyAIBORobotSurface2 | 0.8324 | 0.8668 | 0.7861 | 0.7936 | 0.8681
StarLightCurves | 0.8948 | 0.8611 | 0.8850 | 0.8007 | 0.9314
Trace | 0.9359 | 0.8235 | 0.9890 | 0.8753 | 0.9671
Wafer | 0.9869 | 0.9784 | 0.9802 | 0.9853 | 0.9908
Average-F1 | 0.6040 | 0.7280 | 0.7703 | 0.7003 | 0.8263
Rank | 10 | 3 | 2 | 5 | 1
Table 9. Coarse Screening Threshold β for Experimental Dataset.
Dataset | β | Dataset | β | Dataset | β
Bearing20 | 0.20 | ERing | 0.10 | RacketSports | 0.60
Bearing30 | 0.50 | Handwriting | 0.60 | SonyAIBORobotSurface1 | 0.10
Car | 0.40 | Libras | 0.05 | SonyAIBORobotSurface2 | 0.25
DodgerLoopDay | 0.25 | Gear20 | 0.50 | StarLightCurves | 0.40
DodgerLoopGame | 0.50 | Gear30 | 0.30 | Trace | 0.10
DodgerLoopWeekend | 0.50 | Lightning2 | 0.25 | Wafer | 0.20
Earthquakes | 0.70 | Lightning7 | 0.10
Epilepsy | 0.60 | Plane | 0.40
Table 10. Comparative Performance Evaluation of Proposed Model and Ablation Variants on All Experimental Datasets. The best results are highlighted in bold.
Components | Accuracy | F1-Score
ShapeFormer (Baseline) | 0.7855 | 0.7634
Coarse Screening + ShapeFormer | 0.7675 | 0.7510
Inverse Attention + ShapeFormer | 0.7937 | 0.7832
CIA + ShapeFormer | 0.8113 | 0.7849
Proposed Model | 0.8456 | 0.8263
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
