Article

ProFusion: Multimodal Prototypical Networks for Few-Shot Learning with Feature Fusion

1 School of Computer and Information Engineering, Fuyang Normal University, Fuyang 236037, China
2 Anhui Engineering Research Center for Intelligent Computing and Information Innovation, Fuyang Normal University, Fuyang 236037, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(5), 796; https://doi.org/10.3390/sym17050796
Submission received: 14 March 2025 / Revised: 11 May 2025 / Accepted: 15 May 2025 / Published: 20 May 2025
(This article belongs to the Special Issue Symmetry and Asymmetry in Computer Vision and Graphics)

Abstract

Existing few-shot learning models leverage vision-language pre-trained models to alleviate the data scarcity problem. However, such models usually process visual and textual information separately, so inherent disparities between cross-modal features persist. Therefore, we propose the ProFusion model, which leverages multimodal pre-trained models and prototypical networks to construct multiple prototypes. Specifically, ProFusion generates image and text prototypes symmetrically using the visual encoder and text encoder, while integrating visual and textual information through a fusion module to create more expressive multimodal feature fusion prototypes. Additionally, we introduce an alignment module to ensure consistency between image and text prototypes. During inference, ProFusion calculates the similarity of a test image to the three types of prototypes separately and applies a weighted sum to generate the final prediction. Experiments demonstrate that ProFusion achieves outstanding classification performance on 15 benchmark datasets.

1. Introduction

With the rapid development of artificial intelligence technologies, deep learning has made significant advances in both natural language processing (NLP) and computer vision (CV). In the field of NLP, models based on the Transformer [1] architecture, such as BERT [2] and GPT-3 [3], have surpassed traditional methods in tasks such as machine translation, text generation, and sentiment analysis. In the field of CV, deep learning has driven progress in tasks such as image classification, object detection, and semantic segmentation. As one of the fundamental tasks in computer vision, image classification has also seen remarkable improvements. Traditional image classification methods [4,5,6] rely on large-scale labeled datasets, where deep neural networks learn feature representations of images from a substantial number of training samples. However, in practical applications, the acquisition of large-scale labeled data is often associated with high costs. In this context, few-shot learning (FSL) [7], as a cutting-edge research area, aims to address this data bottleneck. Its core objective is to enhance the model’s generalization ability and mitigate the reliance on large-scale labeled data, enabling the model to achieve good performance even with extremely limited training data. As an important branch of few-shot learning, few-shot image classification similarly seeks to accurately classify unlabeled samples using a limited number of labeled examples [8].
To achieve this goal, researchers have proposed various learning strategies, among which transfer learning, meta-learning, and multimodal learning are representative approaches. Transfer learning mitigates the problem of limited labeled data in the target domain by pretraining models on large-scale source data and transferring the acquired knowledge to the target task. In few-shot learning, GDA-CLIP [9] employs a transfer learning strategy by using the CLIP pre-trained model as the backbone network for feature extraction. Meta-learning aims to “learn to learn” by training models in a task-oriented manner, thereby enhancing their generalization ability to new tasks. For example, MAML [10] optimizes the initialization of model parameters through multi-task training, enabling rapid adaptation to new tasks with only a few samples. In addition, metric-based meta-learning methods (e.g., matching networks and prototypical networks) learn a shared embedding space, allowing classification based on similarity measurements between samples. Multimodal learning focuses on integrating information from different modalities (e.g., images and text) to construct richer feature representations. For instance, methods such as Tip-Adapter [11] and CoOp [12] incorporate class names as auxiliary information to compensate for the limitations of visual features. In short, transfer learning primarily focuses on transferring knowledge acquired from the source domain to the target domain, emphasizing “knowledge transfer”; meta-learning aims to improve the model’s ability to quickly adapt to and learn new tasks, that is, “learning how to learn”; and multimodal learning focuses on fully exploiting the symmetrical complementarity between modalities to enhance the model’s representation ability.
In recent years, an increasing number of researchers have applied vision-language pre-trained models (VLPs) [13,14,15] to few-shot image classification tasks. For example, Tip-Adapter [11] uses the CLIP [14] pre-trained model to construct a key-value cache model as an adapter. During inference, the model calculates the similarity between the test image and the adapter, as well as the text features, ultimately performing a weighted fusion to obtain the classification result. Proto-CLIP [16] also uses the CLIP pre-trained model to generate image and text prototypes and then compute the prediction scores for the test image based on these two prototypes. These methods demonstrate that current few-shot image classification approaches primarily emphasize the independent modeling of visual and textual modalities, leveraging cross-modal information matching to mitigate the challenge of limited training samples. However, such methods often fail to adequately address the inherent disparities between cross-modal features and tend to neglect the effective integration of multimodal information. Moreover, the feature extraction capability of pre-trained models plays a pivotal role in the overall performance of the few-shot image classification process. As shown in Figure 1, this paper compares the image feature extraction capabilities of two different pre-trained models: ResNet50 [14] and BEiT3-large-itc [17]. From the visualization results in Figure 1a, which shows the image features extracted by ResNet50, it can be seen that this model has a more scattered feature distribution, with unclear boundaries between categories, reflecting its limitations in distinguishing between classes. In contrast, in Figure 1b, which shows the image features extracted by BEiT3-large-itc, it can be seen that this model demonstrates a much more compact distribution, with more robust cohesion of features within the same category and more distinct boundaries between different categories, leading to a clearer class separation. Therefore, in practical applications, selecting the appropriate pre-trained model based on the task requirements plays a crucial role in improving classification accuracy and performance.
To solve such problems, we propose ProFusion, a few-shot image classification model that integrates multimodal features. The ProFusion model leverages the BEiT-3 [17] multimodal pre-trained model to enhance feature extraction capabilities and significantly improve category discrimination through the generated image prototypes, text prototypes, and symmetrically fused multimodal feature fusion prototypes. Specifically, inspired by prototypical networks [18], ProFusion employs a visual encoder and a text encoder to extract image and text features, then aggregates them to compute image and text prototypes. In addition, the model creates multimodal feature fusion prototypes for each category by symmetrically combining visual and text information, thereby alleviating the inherent disparities between cross-modal features.
To comprehensively evaluate the performance of the ProFusion model, we performed experiments on 15 publicly available datasets from various domains, demonstrating the model’s superiority. Our contributions can be summarized as follows.
  • We present ProFusion, a few-shot image classification model that draws inspiration from prototypical networks to construct multimodal class prototype representations.
  • We propose a cross-modal feature fusion method that utilizes the fusion module of multimodal pre-trained models to integrate visual and text information to create multimodal feature fusion prototypes, thereby alleviating the inherent disparities between cross-modal features.
  • We perform an experimental evaluation of the ProFusion model on 15 benchmark datasets, demonstrating its significant performance improvement over other models.

2. Related Work

2.1. Few-Shot Learning

Unlike traditional deep learning methods, FSL aims to enable models to adapt to new tasks and generalize effectively with limited training data [19,20,21]. To achieve this goal, various approaches have emerged in the FSL domain, focusing primarily on data augmentation [22], transfer learning [23], meta-learning [24], and multimodal learning [25]. Traditional data augmentation techniques generate new samples by applying transformations such as rotation, scaling, and random cropping to existing samples. Modern methods leverage advanced generative techniques. For example, the Generative Adversarial Network (GAN) [26] creates high-quality images through adversarial training between a generator and a discriminator. Variational Autoencoder (VAE) [27] encodes input images into latent variables and then decodes and reconstructs them to generate new images. Furthermore, models like DALL-E [28] enhance data diversity by generating images based on text descriptions. Transfer learning adapts pre-trained models on large-scale datasets to specific tasks by fine-tuning them on limited target data. Meta-learning is a method that enables quick adaptation to new tasks by “learning to learn”, leveraging knowledge from previous tasks to help the model generalize quickly. Lastly, multimodal learning symmetrically integrates information from multiple modalities, including text, images, and audio, to compensate for the limitations of single-modal information.

2.2. Pre-Trained Models

In recent years, with the exponential growth in computational resources and data scale, VLPs have achieved remarkable progress in both CV and NLP [14,17,29,30]. These models leverage large-scale datasets [31,32,33] to learn complex and symmetry-aware semantic relationships between images and text, allowing deep cross-modal understanding. For instance, CLIP enhances cross-modal retrieval by employing image–text contrastive training, maximizing the similarity of positive pairs while minimizing that of negative pairs. BEiT-3 [17] uses the Multiway Transformers architecture and a unified masked data modeling approach for pretraining. During training, BEiT-3 randomly masks parts of text tokens or image patches and trains the model to reconstruct these masked portions, thereby improving its feature learning capabilities. Similarly to CLIP, BEiT-3 incorporates visual and text encoders to process visual and text data separately. To adapt to multimodal tasks, BEiT-3 introduces a multimodal fusion module that leverages visual-language experts to integrate visual and text information. In general, due to their superior representation learning capabilities, multimodal pre-trained models have been extensively utilized across a wide range of downstream tasks, including visual question answering [34], image classification [35], image-text retrieval [36], image captioning [37], and visual reasoning [38].

2.3. Mainstream Methods

Over the past few years, the field of few-shot image classification has seen a surge in approaches leveraging vision-language pre-trained models as backbone networks, achieving remarkable progress. For example, Linear-probe CLIP [14] employs image features to train a logistic regression classifier for classification. CoOp [12] improves the performance of few-shot image classification by introducing learnable prompts. CLIP-Adapter [39] fine-tunes image and text embeddings using lightweight adapters. Tip-Adapter [11] proposes constructing a key-value cache model, where the features of the test image are matched to keys, and the resulting prediction scores are linearly combined with those based on text features to produce the final classification result. Proto-CLIP [16] leverages image and text features to construct image and text prototypes, comparing image features with class prototypes for classification in a few-shot setting. PMPro [40] improves few-shot classification by partially fine-tuning pre-trained model parameters and constructing symmetry-aware mixed-modal prototypes. MaPLe [41] introduces learnable prompts in both the text and image branches of CLIP and takes advantage of a coupling mechanism between the prompts to achieve better vision-language alignment. CALIP [42] enhances the zero-shot capability of the CLIP model by introducing a parameter-free attention mechanism, without the need for additional training or fine-tuning. GDA-CLIP [9] introduces a training-free adaptive method based on Gaussian Discriminant Analysis (GDA), assuming that class features follow Gaussian distributions with shared covariance, and formulates the classifier using class means and covariance through Bayesian inference. Whether through key-value cache matching or mixed-modal prototype construction, prior methods have primarily focused on processing either visual or textual information independently, overlooking the symmetrical integration of image and text information. Consequently, we introduce multimodal feature fusion prototypes to alleviate the inherent disparities between cross-modal features. Table 1 presents a comparative analysis between our proposed model, ProFusion, and other existing few-shot learning approaches.

3. Method

3.1. Overview

In few-shot image classification tasks, one-shot or two-shot tasks refer to settings in which the model learns from only one or two samples per class. Such tasks are commonly referred to as the “N-way K-shot” problem, where N denotes the number of classes and K the number of samples per class, with $K \ll N$. Given a few-shot classification task, the dataset is usually partitioned into a support set and a query set, and the knowledge learned from the support set is used to classify the query set. The support set is defined as $S = \{x_i^s, u_i^s, y_i^s\}_{i=1}^{M_s}$, where $M_s = N \times K$ is the total number of support samples, $x_i^s$ denotes the $i$-th support image, $u_i^s$ its category name, and $y_i^s \in \{1, 2, \ldots, N\}$ its category label. Meanwhile, the query set is defined as $Q = \{x_j^q, y_j^q\}_{j=1}^{M_q}$, where $x_j^q$ denotes the $j$-th query image, $y_j^q \in \{1, 2, \ldots, N\}$ its label, and $M_q$ the total number of query samples. We dedicate ourselves to constructing multiple prototypical representations for each category by leveraging multimodal pre-trained models to learn features from the support set, thereby providing a reference for classifying images in the query set.
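For illustration, the episode construction described above can be sketched in Python as follows; the function and its inputs are hypothetical, since the benchmarks used in this paper provide their own few-shot splits.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way, k_shot, q_per_class):
    """Sample an N-way K-shot episode from a list of (image, class_name, label) triples.
    Minimal illustration only; real benchmarks ship fixed few-shot splits."""
    by_class = defaultdict(list)
    for item in dataset:
        by_class[item[2]].append(item)            # group samples by label
    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for c in classes:
        picked = random.sample(by_class[c], k_shot + q_per_class)
        support.extend(picked[:k_shot])           # K labeled samples per class
        query.extend(picked[k_shot:])             # held-out samples to classify
    return support, query
```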

3.2. Multimodal Prototype

The BEiT-3 multimodal pre-trained model serves as the backbone network. Its visual encoder $E_v(\cdot)$ is used to extract image features, expressed as $v_i^s = E_v(x_i^s; \theta_v)$, where $v_i^s \in \mathbb{R}^{1 \times d}$, $d$ represents the feature dimension, and $\theta_v$ denotes the parameters of the visual encoder. For class names, we adopt a manually designed prompt template, such as “a photo of a {class}”, where “class” refers to the name of each category. Using this template, the text prompts describing the images are represented as $w_i = P_n(u_i^s)$, $n \in \{1, 2, \ldots, N\}$. Before extracting features from these prompts, the text is tokenized as $e_i = \mathrm{Tokenizer}(w_i)$. Subsequently, the text encoder $E_t(\cdot)$ is applied to extract text features, expressed as $t_i^s = E_t(e_i; \theta_t)$, where $t_i^s \in \mathbb{R}^{1 \times d}$, $d$ represents the feature dimension, and $\theta_t$ denotes the parameters of the text encoder.
The key challenge of few-shot image classification lies in the limited number of training samples, which makes traditional deep learning methods prone to overfitting. To address this issue, we adopt the design philosophy of prototypical networks [18], where image and text information is jointly mapped to a unified feature space and multimodal class prototypes are constructed as class anchors. Test samples are then classified based on their distances to these class prototypes, thereby avoiding overfitting. Therefore, we incorporate the prototypical network approach into the few-shot image classification task. Specifically, for the $k$-th category in the support set, the image prototype $\rho_k^v$ is calculated by averaging the image features of the samples in that category:
$$\rho_k^v = \frac{1}{M_k} \sum_{y_i^s = k} v_i^s \qquad (1)$$
where $\rho_k^v \in \mathbb{R}^{1 \times d}$, $d$ is the feature dimension, $y_i^s = k$ indicates samples with category label $k$, and $M_k$ is the total number of samples labeled $k$. Similarly, based on the text prompt features $t_i^s$, the corresponding text prototype $\rho_k^t$ is computed as follows:
$$\rho_k^t = \frac{1}{M_k} \sum_{y_i^s = k} t_i^s \qquad (2)$$
where $\rho_k^t \in \mathbb{R}^{1 \times d}$.
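As a minimal PyTorch sketch, assuming the support-set features have already been extracted by the frozen encoders and stacked into a tensor, the class-wise averaging of Equations (1) and (2) can be written as:

```python
import torch

def class_prototypes(features, labels, num_classes):
    """Average support-set features per class to obtain prototypes (Eqs. (1)-(2)).
    features: (M_s, d) tensor from the visual or text encoder; labels: (M_s,) in {0..N-1}."""
    d = features.size(1)
    protos = torch.zeros(num_classes, d, dtype=features.dtype)
    for k in range(num_classes):
        protos[k] = features[labels == k].mean(dim=0)   # rho_k = mean of class-k features
    return protos
```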

3.3. Multimodal Feature Fusion

Although existing models attempt to leverage multimodal information from images and text for few-shot classification, there are inherent disparities between the two modalities. Therefore, we employ a multimodal fusion module $E_f(\cdot)$, which integrates image and text information using a shared cross-modal self-attention mechanism along with a vision-language expert network. This process generates symmetrically fused multimodal fusion features, represented as $f_i^s = E_f(x_i^s, e_i; \theta_f)$, where $f_i^s \in \mathbb{R}^{1 \times d}$ and $\theta_f$ denotes the parameters of the fusion module. As shown in Figure 2, the fusion module is primarily composed of $L$ layers of Multiway Transformer blocks. Each layer incorporates self-attention mechanisms, residual connections, layer normalization, as well as vision and language expert networks. In the top $F$ layers of the Multiway Transformer, an additional vision-language expert network is introduced to facilitate joint modeling of visual and language information. Taking BEiT3-base-itc as an example, the model consists of 12 Multiway Transformer layers with a hidden size of 768 and 12 attention heads. In contrast, BEiT3-large-itc contains 24 Multiway Transformer layers with a hidden size of 1024 and 16 attention heads. In both variants, every layer is equipped with vision and language expert networks, and the top layers additionally incorporate vision-language expert networks. It is worth noting that the parameters of the fusion module are kept frozen during training.
Specifically, images are first divided into multiple patches and transformed into visual tokens, while text is tokenized into text tokens. In the fusion module, visual tokens and text tokens are processed separately by vision and language feed-forward networks. Simultaneously, a vision-language feed-forward network captures the cross-modal relationships between image and text. The shared self-attention mechanism aligns image and text information and allows them to interact, enabling deep fusion of multimodal data. Based on the fused features obtained from images and text, the multimodal feature fusion prototype for the $k$-th category, $\rho_k^f$, is calculated as follows:
$$\rho_k^f = \frac{1}{M_k} \sum_{y_i^s = k} f_i^s \qquad (3)$$
where $\rho_k^f \in \mathbb{R}^{1 \times d}$.
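The following sketch illustrates how the three prototype sets could be assembled; the callables visual_enc, text_enc, and fusion_enc stand in for the frozen BEiT-3 encoders and Multiway fusion module, and their interfaces are assumptions rather than the actual BEiT-3 API.

```python
import torch

@torch.no_grad()
def build_prototypes(images, text_tokens, labels, n_classes,
                     visual_enc, text_enc, fusion_enc):
    """Build image, text, and fused prototypes (Eqs. (1)-(3)) from one support set.
    The three encoder callables are placeholders for the frozen backbone."""
    feats = {"v": visual_enc(images),                 # (M_s, d) image features
             "t": text_enc(text_tokens),              # (M_s, d) prompt features
             "f": fusion_enc(images, text_tokens)}    # (M_s, d) jointly encoded features
    protos = {}
    for m, x in feats.items():
        protos[m] = torch.stack([x[labels == k].mean(dim=0)   # per-class mean = prototype
                                 for k in range(n_classes)])
    return protos
```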

3.4. Similarity Measurement

For query set images, we employ a lightweight adapter [16] to learn their features, avoiding overfitting due to the limited amount of data. The lightweight adapters include two designs, based on an MLP and on convolutions. The MLP-based adapter consists of a two-layer fully connected network for feature transformation: the first layer reduces the dimensionality of the input features, while the second layer maps them back to the original dimension. A fusion coefficient (e.g., 0.2) is then used to perform weighted residual fusion between the output and input features, preserving the original information. The convolution-based adapter includes three convolutional layers (1 × 1, 3 × 3, and 1 × 1), along with normalization layers and ReLU activation functions. The input features are first reshaped and then processed through the three convolutional layers. Finally, the output features are fused with the input features via a residual connection, effectively preserving the original information. On ImageNet, assuming an input feature dimension of 1024, the parameter sizes of the MLP- and convolution-based adapters are 0.30 M and 0.03 M, respectively. After the query image $x_j^q$ is processed by the visual encoder $E_v(\cdot)$ to extract features, it is passed through the lightweight adapter module, and a residual connection combines the new features with the original ones. Denoting the adapter by $A(\cdot)$, the feature representation of the query image is given by $v_j^q = A(E_v(x_j^q; \theta_v); \theta_a)$, where $v_j^q \in \mathbb{R}^{1 \times d}$ and the adapter parameters $\theta_a$ are trained using the support set.
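A minimal sketch of the MLP-based adapter is given below; the reduction factor and the exact form of the weighted residual blend are illustrative assumptions, with the fusion coefficient set to 0.2 as in the description above.

```python
import torch.nn as nn

class MLPAdapter(nn.Module):
    """Lightweight MLP adapter with weighted residual fusion.
    The reduction factor and blending form are illustrative choices."""
    def __init__(self, dim, reduction=4, alpha=0.2):
        super().__init__()
        self.alpha = alpha
        self.net = nn.Sequential(
            nn.Linear(dim, dim // reduction),   # down-project the input features
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim),   # map back to the original dimension
        )

    def forward(self, x):
        # one plausible weighted residual fusion of adapted and original features
        return self.alpha * self.net(x) + (1 - self.alpha) * x
```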
Based on the above, our objective is to classify the query image using the image prototype, text prototype, and multimodal feature fusion prototype, as illustrated in Equation (4):
$$P(\hat{y} = k \mid x_j^q, \rho) = \sum_{m \in \{v, t, f\}} \lambda_m \cdot P(\hat{y} = k \mid x_j^q, \rho^m) \qquad (4)$$
where $\rho$ represents the set of the three types of prototypes; $\rho^v = \{\rho_k^v \mid k \in \{1, 2, \ldots, N\}\}$, $\rho^t = \{\rho_k^t \mid k \in \{1, 2, \ldots, N\}\}$, and $\rho^f = \{\rho_k^f \mid k \in \{1, 2, \ldots, N\}\}$ denote the sets of image prototypes, text prototypes, and multimodal fusion prototypes, respectively. $P(\hat{y} = k \mid x_j^q, \rho^m)$ denotes the predicted probability for the query image based on the prototypes of modality $m$, where $\hat{y}$ is the predicted label. The weights are $\lambda_v = \alpha$, $\lambda_t = \beta$, and $\lambda_f = \gamma$, where $\alpha + \beta + \gamma = 1$ and $\alpha, \beta, \gamma \in [0, 1]$.
Specifically, the image prototypes $\rho_k^v$, text prototypes $\rho_k^t$, and multimodal feature fusion prototypes $\rho_k^f$ are treated as learnable parameters, which are fine-tuned using the support set. For a query image $x_j^q$, visual features $v_j^q$ are extracted using the visual encoder $E_v(\cdot)$ and the lightweight adapter $A(\cdot)$. The squared Euclidean distance between the query features $v_j^q$ and the prototype features is then calculated, and the results are converted into probabilities using the softmax function. The entire process can be represented by the following equation:
$$P(\hat{y} = k \mid x_j^q, \rho^m) = \frac{\exp\left(-\tau \cdot D(v_j^q, \rho_k^m)\right)}{\sum_{k'=1}^{N} \exp\left(-\tau \cdot D(v_j^q, \rho_{k'}^m)\right)} \qquad (5)$$
where $D(\cdot)$ denotes the squared Euclidean distance, $k \in \{1, 2, \ldots, N\}$, and $m \in \{v, t, f\}$ indicates the three modalities: image ($v$), text ($t$), and multimodal fusion ($f$). The temperature parameter $\tau \in \mathbb{R}^{+}$ controls the smoothness of the probability distribution: a larger $\tau$ sharpens the distribution, emphasizing the most similar categories, while a smaller $\tau$ makes the distribution smoother, allowing more categories to be considered as plausible choices. Finally, the predicted class probabilities computed from the image prototype $\rho_k^v$, text prototype $\rho_k^t$, and multimodal feature fusion prototype $\rho_k^f$ are weighted and summed using the hyperparameters $\alpha$, $\beta$, and $\gamma$ to obtain the final classification result, as shown in Figure 2.
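A compact sketch of Equations (4) and (5), assuming the prototypes and adapted query features are already available; the negative sign on the distance follows the softmax-over-negative-distance reading of Equation (5).

```python
import torch

def predict(v_q, protos, weights, tau=1.0):
    """Combine per-modality prototype classifiers (Eqs. (4)-(5)).
    v_q: (B, d) adapted query features; protos: dict m -> (N, d); weights: dict m -> lambda_m."""
    probs = 0.0
    for m, rho in protos.items():
        d2 = torch.cdist(v_q, rho, p=2) ** 2           # squared Euclidean distance to each prototype
        probs = probs + weights[m] * torch.softmax(-tau * d2, dim=1)
    return probs                                        # (B, N) final class probabilities
```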

3.5. Loss Function

To ensure the consistency of the image and text prototypes in the feature space, we introduce an image–text prototype alignment module. This module maximizes the similarity between the image and text prototypes of the same category, while minimizing the similarity between prototypes of different categories. We use the InfoNCE loss function [43] from contrastive learning to achieve this alignment:
$$\mathcal{L}_{ita}^{k}\left(\rho_k^{\xi}, \rho^{\zeta}\right) = -\log \frac{\exp\left(\rho_k^{\xi} \cdot \rho_k^{\zeta}\right)}{\sum_{k'=1}^{N} \exp\left(\rho_k^{\xi} \cdot \rho_{k'}^{\zeta}\right)} \qquad (6)$$
where $\cdot$ denotes the dot product and $(\xi, \zeta) \in \{(v, t), (t, v)\}$. $\mathcal{L}_{ita}^{k}(\rho_k^{\xi}, \rho^{\zeta})$ measures the similarity between the image prototypes and the text prototypes. The theoretical foundation of InfoNCE originates from the mutual information maximization principle in contrastive predictive coding (CPC). By implicitly estimating and optimizing the ratio between the conditional distribution and the marginal distribution (i.e., the density ratio), the model is able to capture global dependencies within the data. Its core mechanism is to maximize the similarity between positive sample pairs (context and true future observations) in the latent space, while minimizing the similarity between negative sample pairs (context and randomly sampled observations). Based on this principle, we apply the InfoNCE loss to align cross-modal prototype features, thereby enhancing the representational capacity of the prototypes. Additionally, to improve classification performance, we introduce a negative log-probability loss to optimize the model parameters. Its form is given by Equation (7):
$$\mathcal{L}_{\mathrm{cls}} = -\log P\left(\hat{y} = y_j^q \mid x_j^q, \rho\right) \qquad (7)$$
where $P(\hat{y} = k \mid x_j^q, \rho)$ is given by Equation (4) and $y_j^q$ is the true label of the query image $x_j^q$. By minimizing this loss, the model can more accurately predict the category of each sample. The total loss during training is therefore
$$\mathcal{L}_{\mathrm{total}} = -\frac{1}{M_q} \sum_{j=1}^{M_q} \log P\left(\hat{y} = y_j^q \mid x_j^q, \rho\right) + \frac{1}{N} \sum_{k=1}^{N} \mathcal{L}_{ita}^{k}\left(\rho_k^{\xi}, \rho^{\zeta}\right) \qquad (8)$$
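A minimal sketch of the training objective is shown below; whether the prototypes are L2-normalized before the dot product and how the two alignment directions are weighted are assumptions not specified above.

```python
import torch
import torch.nn.functional as F

def total_loss(probs, labels, proto_v, proto_t):
    """Classification loss plus image-text prototype alignment (Eqs. (6)-(8)).
    probs: (B, N) from the weighted prediction; proto_v / proto_t: (N, d) learnable prototypes."""
    cls = F.nll_loss(torch.log(probs + 1e-8), labels)        # -log P(y_hat = y | x, rho)
    logits = proto_v @ proto_t.t()                            # pairwise prototype similarities
    targets = torch.arange(proto_v.size(0), device=proto_v.device)  # matching classes are positives
    ita = 0.5 * (F.cross_entropy(logits, targets) +           # image-to-text direction
                 F.cross_entropy(logits.t(), targets))        # text-to-image direction
    return cls + ita
```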

4. Experiment

4.1. Experiment Setup

To validate the effectiveness of the ProFusion model, we evaluated it on 15 benchmark datasets: ImageNet [31], StanfordCars [44], UCF101 [45], Caltech101 [46], Flowers102 [47], SUN397 [48], DTD [49], EuroSAT [50], FGVCAircraft [51], OxfordPets [52], Food101 [53], ImageNetV2 [54], ImageNet-Sketch [55], ImageNet-A [56], and ImageNet-R [57]. In addition, to highlight the superiority of ProFusion, we compared it with several SOTA models, including CoOp [12], CLIP-Adapter [39], Tip-Adapter [11], Proto-CLIP [16], GDA-CLIP [9], and PMPro [40].
In the experiments, we used the BEiT3-base-itc and BEiT3-large-itc pre-trained models as the backbone networks. To increase data diversity, we applied random cropping and horizontal flipping to the support set. The ProFusion model was built using the PyTorch framework and trained on a single NVIDIA GeForce RTX 3090 GPU with a batch size of 256. The AdamW optimizer was used, with an initial learning rate of 0.0001, and the CosineAnnealingLR scheduler for learning rate adjustment.
Meanwhile, the hyperparameters $\alpha$, $\beta$, and $\gamma$, as mentioned in Equation (4), play a crucial role in improving the classification precision. To determine the optimal hyperparameter values for each dataset, we performed a grid search within the range $\alpha, \beta, \gamma \in [0.0, 1.0]$ with a step size of 0.1, while imposing the constraint $\alpha + \beta + \gamma = 1$ to limit the search space. We evaluated the performance of all candidate weight combinations on the validation set and selected the combination that yielded the best performance for the final testing phase. This approach facilitated a reasonable allocation of weights across modalities. The model was divided into two versions: ProFusion and ProFusion-F. For the ProFusion model, image prototypes, text prototypes, and multimodal feature fusion prototypes were constructed using the encoder, and these prototypes were directly used for classifying the query set. For the ProFusion-F model, the prototype feature parameters were fine-tuned using the support set to further enhance classification performance. Furthermore, for each dataset, we continued to use the text prompt templates selected in previous works [9,40], as shown in Table 2.
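The weight search can be sketched as follows; eval_fn is a hypothetical callback that returns validation accuracy for a given weight triple.

```python
import itertools

def grid_search_weights(eval_fn, step=0.1):
    """Search alpha, beta, gamma in [0, 1] with alpha + beta + gamma = 1 (step 0.1).
    eval_fn(alpha, beta, gamma) is assumed to return validation accuracy."""
    best, best_acc = None, -1.0
    grid = [round(i * step, 2) for i in range(int(1 / step) + 1)]
    for a, b in itertools.product(grid, grid):
        c = round(1.0 - a - b, 2)
        if c < 0:                        # enforce the simplex constraint
            continue
        acc = eval_fn(a, b, c)
        if acc > best_acc:
            best, best_acc = (a, b, c), acc
    return best, best_acc
```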

4.2. Comparison with SOTA Models

To validate the superiority of our ProFusion model in different scenarios, we present a performance comparison in Table 3, showcasing ProFusion against the SOTA few-shot learning models on 11 datasets in various shot settings. In the case of an extreme one-shot setting, ProFusion leverages the rich pre-trained knowledge of multimodal models by directly classifying test images using image prototypes, text prototypes, and multimodal feature fusion prototypes. Our model significantly outperforms existing models. For instance, on the DTD dataset, ProFusion achieves a one-shot classification accuracy of 62.41%, outperforming Tip-Adapter and Proto-CLIP, which attain 46.22% and 46.04%, respectively. This corresponds to an absolute improvement of 16.19% over Tip-Adapter and 16.37% over Proto-CLIP. Similarly, on the ImageNet dataset, ProFusion reaches an accuracy of 72.63%, surpassing Tip-Adapter (60.70%) by 11.93% and Proto-CLIP (60.31%) by 12.32%. On the StanfordCars dataset, ProFusion achieves a classification accuracy of 83.08%, outperforming GDA-CLIP (56.77%) by a substantial 26.31%. Furthermore, ProFusion-F, which fine-tunes the three types of prototypes using support sets, further enhances classification performance. In the 16-shot setting, ProFusion-F maintains its advantage, achieving 77.61% accuracy on the ImageNet dataset, an improvement of 12.10% over Tip-Adapter-F (65.51%) and 11.86% over Proto-CLIP-F (65.75%). On the Food101 dataset, ProFusion-F achieves a classification accuracy of 87.62%, outperforming PMPro (79.31%) by 8.31%. In general, our method demonstrates significant improvements in few-shot learning tasks, especially in limited-data and complex scenarios, highlighting its ability to effectively utilize multimodal pre-trained knowledge for improved classification accuracy.
However, under the one-shot setting on the fine-grained EuroSAT dataset, the high similarity between categories leads to insufficient discriminability of the support set samples, affecting the model’s classification capability. To enhance the stability of the results, we conducted multiple experiments and reported the average accuracy as the final result. Additionally, under the one-shot condition on both FGVCAircraft and EuroSAT, the fine-tuned ProFusion-F slightly underperforms the non-fine-tuned ProFusion. For example, on FGVCAircraft, ProFusion-F achieves 23.19%, slightly lower than ProFusion’s 23.82%. This phenomenon arises from the limited inter-class variability in fine-grained tasks, where fine-tuning with only a single support sample is prone to overfitting.
To enhance the generalization capability of the model, we incorporate text information from class names to compensate for the insufficiency of visual information. Inspired by prototypical networks, we construct three types of prototype as class anchors and classify test images based on their similarity, effectively mitigating overfitting. In addition, we adopt the BEiT-3 multimodal pre-trained model as the backbone network to extract features from both image and text information, thereby avoiding the overfitting issue that may arise from training a feature extractor from scratch. However, due to differences among datasets, the model exhibits varying performance across different datasets. As shown in Table 3, for conventional datasets (such as Flowers102, OxfordPets, Caltech101, etc.), where there are clear class differences, the model is able to easily extract discriminative features, leading to better recognition performance. For example, under the one-shot condition, ProFusion achieves an accuracy of 96.02% on the Caltech101 dataset. In contrast, for fine-grained datasets (such as FGVCAircraft, DTD, EuroSAT, etc.), the task complexity is higher, the class differences are subtle, and the image textures are abstract, making it difficult for the model to learn effective distinguishing features from a very limited number of samples, resulting in poorer performance. For example, under the 1-shot setting, ProFusion achieves an accuracy of only 23.82% on the FGVCAircraft dataset.
As shown in Figure 3, the average accuracy of ProFusion under 1-shot to 16-shot conditions is compared to that of the other SOTA models on the ImageNet dataset. ProFusion demonstrates significant performance advantages, achieving an average accuracy of 74.30%, which is significantly higher than CLIP-Adapter (62.17%), Tip-Adapter-F (62.97%) and Proto-CLIP-F (62.39%). Specifically, compared to GDA-CLIP (61.97%) and PMPro (63.11%), ProFusion’s accuracy improves by 12.33% and 11.19%, respectively. Meanwhile, ProFusion-F, which leverages the support set for fine-tuning, further boosts the average accuracy to 75.00%, a 0.70% improvement over ProFusion.
To comprehensively demonstrate the robustness of our model, we calculate and compare the average classification accuracy on 11 datasets under different shot settings. As shown in Table 4, ProFusion consistently outperforms other methods across all shot settings. Without requiring additional training, ProFusion achieves an accuracy of 73.55% for 1-shot and 80.46% for 16-shot, with an overall average accuracy of 77.46%. In contrast, ProFusion-F, which uses support set fine-tuning, further enhances performance across all shot settings, reaching 73.67% for 1-shot, 83.63% for 16-shot, and an impressive average accuracy of 78.82%. Moreover, the experimental results indicate that both ProFusion and ProFusion-F consistently surpass the SOTA models in different shot settings. For example, in the 1-shot setting, ProFusion outperforms Tip-Adapter-F by 8.95%, while ProFusion-F exceeds PMPro by 8.04%. In the 16-shot setting, ProFusion-F demonstrates a 7.80% improvement over Tip-Adapter-F. Overall, the average accuracy of ProFusion-F exceeds that of CALIP and GDA-CLIP by 8.06% and 9.29%, respectively, reflecting its advantage in few-shot classification tasks.
We also performed experiments to evaluate the performance of our model in out-of-distribution generalization. Specifically, we trained our model using a 16-shot setting on the ImageNet dataset. We then directly transferred the model to target datasets, including ImageNetV2, ImageNet-Sketch, ImageNet-A, and ImageNet-R. As shown in Table 5, we compared our method with CLIP, Tip-Adapter, Tip-Adapter-F, Proto-CLIP, Proto-CLIP-F, MaPLe and GDA-CLIP. The experimental results demonstrate that both ProFusion and ProFusion-F exhibit significant advantages in out-of-distribution generalization tasks. On the target datasets, our model consistently outperforms the other baselines, especially on ImageNet-R and ImageNet-Sketch, where ProFusion improves by 7.33% and ProFusion-F improves by 11.67%, showing remarkable performance gains over other models. However, on the ImageNet-V2 and ImageNet-A datasets, ProFusion performs relatively poorly, improving by only 1.88% and 0.24% compared to the second-place Tip-Adapter series methods, respectively. This is mainly due to the presence of challenging and adversarial samples in these two datasets, which increase the difficulty of classification. Overall, ProFusion-F achieves an average score of 66.04%, surpassing the 59.76% of CLIP and the 60.22% of GDA-CLIP, demonstrating the superiority of our method in out-of-distribution generalization tasks.

4.3. Ablation Study

In the ablation study presented in Table 6, we used BEiT3-large-itc as the backbone network ($K = 16$) to evaluate the impact of image prototypes, text prototypes, and multimodal feature fusion prototypes on model performance. Experimental results show that when image and text prototypes are used independently, there is no significant difference in classification accuracy. For example, using only image prototypes, the model achieves an accuracy of 97.61% on the Caltech101 dataset, which is comparable to using only text prototypes (97.65%). However, on regular datasets (such as OxfordPets and SUN397), where inter-class differences are more pronounced, the semantic information provided by text descriptions enables relatively accurate classification, resulting in better performance than unimodal image prototypes. When using both image and text prototypes, the accuracy on ImageNet increases to 77.48%, but classification accuracy drops on more challenging datasets such as EuroSAT (84.07%) and FGVCAircraft (42.00%). The primary reason is that, in fine-grained datasets, the semantic information conveyed by textual descriptions (e.g., “a photo of {class}”) differs from the actual visual information. The alignment module encourages the image prototypes to move closer to the text prototypes, which weakens the discriminative ability of the image prototypes and ultimately leads to lower classification accuracy than using unimodal image prototypes alone. To this end, we perform interactive fusion of image and text information through the shared attention mechanism of the fusion module and leverage the vision-language feed-forward network to capture the cross-modal relationships between images and text, thereby constructing the fused prototypes. After introducing the fused prototypes, the model shows improvements across multiple datasets. For example, on the FGVCAircraft dataset, the accuracy increases by 2.10% compared to using only image and text prototypes, reaching 44.10%. On the EuroSAT dataset, the accuracy improves by 3.73%, reaching 87.80%.
In Table 7, we compare the performance of the baseline fusion strategy with our proposed fusion strategy on 11 datasets. The baseline strategy performs a simple fusion of image prototypes and text prototypes using element-wise multiplication, while our method utilizes the fusion module of a multimodal pre-trained model to generate more information-rich fused prototypes. The experimental results show that on datasets such as ImageNet, OxfordPets, and Flowers102, the performance of both methods is comparable. However, on fine-grained datasets such as FGVCAircraft, EuroSAT, DTD, and UCF101, our method shows significant advantages. For example, on the FGVCAircraft dataset, the accuracy increases from 41.88% with the baseline to 44.10% with our model. On the EuroSAT dataset, our model improves by 3.69% over the baseline (84.11%). On the DTD dataset, the accuracy increases from 76.36% with the baseline to 77.54%. These results clearly show that a basic fusion of image and text features is insufficient to fully exploit the complex relationships between image and text, leading to significantly poorer performance on fine-grained tasks. In contrast, our proposed fusion strategy not only performs well on general datasets but also exhibits significant improvements on fine-grained datasets, fully demonstrating its superior ability to capture complex image–text associations and integrate multimodal information.
Figure 4 presents the image classification results using two different multimodal pre-trained models, BEiT3-base-itc and BEiT3-large-itc, as backbone networks. In general, the stronger the backbone network, the more discriminative the feature representations it can learn, thereby improving classification accuracy. For example, on the ImageNet dataset, when using the BEiT3-base-itc pre-trained model, the zero-shot accuracy of BEiT-3 is 68.96%; with the BEiT3-large-itc pre-trained model, the accuracy improves to 71.89%. Additionally, we compare our method with existing SOTA approaches, GDA-CLIP and Tip-Adapter, using the same pre-trained models as backbone networks. On ImageNet (K = 16), with the BEiT3-base-itc backbone, ProFusion-F achieves an accuracy of 73.28%, surpassing GDA-CLIP (71.84%) by 1.44%. When using the BEiT3-large-itc backbone, ProFusion-F reaches 77.61%, exceeding GDA-CLIP (76.45%) by 1.16% and Tip-Adapter-F (76.72%) by 0.89%. Similarly, on the UCF101 dataset (K = 16), with the BEiT3-base-itc backbone, ProFusion-F achieves an accuracy of 80.78%, improving by 1.98% over Tip-Adapter-F (78.80%). With the BEiT3-large-itc backbone, ProFusion-F reaches an accuracy of 86.12% in the 16-shot setting, outperforming GDA-CLIP (85.23%) by 0.89%.
Data augmentation is an important technique for improving the generalization ability of models. Applying random transformations to the support set increases the diversity of images and mitigates the model’s overfitting to the training data distribution. In Figure 5, we evaluate the impact of data augmentation on model performance on four datasets: UCF101, SUN397, Food101, and StanfordCars. The data augmentation used includes random cropping and random horizontal flipping (with a probability of 50%), followed by image normalization. The model without data augmentation applies only resizing and normalization to the images. The experimental results show that for the UCF101 dataset, data augmentation improves performance in most shot settings, with the largest improvement (0.87%) observed in the 8-shot setting. However, a slight performance drop (−0.63%) is observed in the 1-shot setting. Further analysis shows that under the 1-shot setting on UCF101, moderately reducing data augmentation increases accuracy to 75.28%, a 0.29% improvement over the setting without augmentation (74.99%) and a 0.92% improvement over the setting with augmentation (74.36%). This indicates that excessive data augmentation interferes with essential features of the original samples, ultimately reducing model performance. On the SUN397 dataset, the effect of data augmentation is relatively stable, providing noticeable improvements in both the 1-shot and 16-shot settings (0.66% for 1-shot and 0.50% for 16-shot), but almost no difference is observed in the 2-shot setting, with an improvement of only 0.03%. Generally, data augmentation can increase the diversity of samples in few-shot learning tasks, thus improving the generalization ability of the model to some extent; however, the specific effect is closely related to the characteristics of the dataset and the number of samples.
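For reference, the two input pipelines could look like the torchvision sketch below; the crop resolution and normalization statistics are assumptions rather than the paper's exact settings.

```python
from torchvision import transforms

# Support-set pipeline with the augmentations described above (assumed resolution and stats).
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),            # random cropping
    transforms.RandomHorizontalFlip(p=0.5),       # 50% horizontal flip
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

# Without augmentation: only resizing and normalization, as in the ablation.
eval_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```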

4.4. Visualization

To better illustrate the effectiveness of prototype features, we employ the t-SNE technique to visualize the multimodal feature fusion prototypes on both the validation and test sets of OxfordPets. In addition, we performed a comparative visualization of image and text prototypes on the EuroSAT test set. As shown in Figure 6a,b, the fusion prototypes of different categories are distributed near the cluster centers of their corresponding category test samples, demonstrating high consistency and robustness on both the validation and test sets. In the two-dimensional space after dimensionality reduction, image feature points from different categories form distinct and compact clusters, with the fusion prototypes positioned at the cluster centers of their respective categories. These results indicate that multimodal information fusion can effectively integrate information from different modalities, effectively representing each category. Furthermore, as illustrated in Figure 6c,d on the EuroSAT test set, both the image and text prototypes effectively serve as anchors for their respective categories. Our experimental findings further validate the effectiveness of the prototype network approach for few-shot image classification tasks.
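A minimal sketch of this kind of visualization, using scikit-learn's t-SNE on hypothetical feature arrays:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_prototypes(test_feats, test_labels, protos):
    """Project test features and fused prototypes into 2D with t-SNE (cf. Figure 6).
    test_feats: (M, d) array, test_labels: (M,), protos: (N, d) array; all assumed inputs."""
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(
        np.vstack([test_feats, protos]))
    pts, ctr = emb[:len(test_feats)], emb[len(test_feats):]
    plt.scatter(pts[:, 0], pts[:, 1], c=test_labels, s=8, cmap="tab10", alpha=0.6)
    plt.scatter(ctr[:, 0], ctr[:, 1], c="black", marker="*", s=120, label="prototype")
    plt.legend()
    plt.show()
```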
As shown in Figure 7, we visualize the image and text prototypes for 10 categories in the EuroSAT dataset, comparing the results before and after fine-tuning. During the fine-tuning process, we introduce an image–text alignment module to enhance the consistency of image and text prototypes in the feature space. From the visualization results, it is evident that Figure 7a represents the case without fine-tuning, where the image and text prototypes of the same category are noticeably distant in the feature space. In contrast, Figure 7b shows the results after fine-tuning with the alignment module, where the image prototypes and their corresponding text prototypes are much closer. The experimental results demonstrate that incorporating the image–text alignment module during fine-tuning helps to improve the consistency of prototypes within the same category, thereby enhancing the semantic alignment between cross-modal representations.

4.5. Similarity Measurement Methods

In classification tasks, the choice of an appropriate similarity measurement method is crucial, as different metrics can significantly impact model performance. To investigate this effect, we perform experiments on the DTD and UCF101 datasets, evaluating multiple commonly used similarity metrics in various shot settings. As shown in Table 8, the squared Euclidean distance (SED) consistently outperforms other similarity metrics, achieving the highest classification accuracy in all shot configurations. For example, in the 1-shot setting, SED achieves an accuracy of 63.36% on the DTD dataset, demonstrating a slight advantage over alternative methods. More importantly, in the 16-shot setting, SED achieves an accuracy of 77.54%, significantly exceeding other metrics. In comparison, cosine similarity (CS) and matrix multiplication (MM) exhibit a noticeable performance gap. Specifically, in the 16-shot setting, CS and MM achieve classification accuracies of 75.77% and 75.71% on the DTD dataset, respectively, which are approximately 1.8% lower than that of SED. These findings underscore the critical role of similarity measurement methods in classification tasks, as they fundamentally influence how the model quantifies relationships between samples, directly impacting its robustness and overall effectiveness.
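The three metrics compared in Table 8 can be expressed as follows; scores are oriented so that higher means more similar, which is an implementation choice rather than a detail taken from the paper.

```python
import torch
import torch.nn.functional as F

def similarity(q, protos, metric="sed"):
    """Score (B, d) query features against (N, d) prototypes with the compared metrics."""
    if metric == "sed":                                   # squared Euclidean distance (negated)
        return -torch.cdist(q, protos, p=2) ** 2
    if metric == "cs":                                    # cosine similarity
        return F.normalize(q, dim=1) @ F.normalize(protos, dim=1).t()
    if metric == "mm":                                    # raw dot product ("matrix multiplication")
        return q @ protos.t()
    raise ValueError(f"unknown metric: {metric}")
```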

4.6. Performance Analysis

We compare our model, ProFusion, with SOTA approaches, including Tip-Adapter, Tip-Adapter-F, Proto-CLIP-F, and GDA-CLIP, from multiple perspectives, such as training time, testing time, and classification accuracy. All models use a batch size of 256, and all experiments are performed on a single NVIDIA GeForce RTX 3090 GPU. As shown in Table 9, compared to fine-tuning-based methods such as Tip-Adapter-F and Proto-CLIP-F, ProFusion-F not only reduces training time but also achieves significantly higher accuracy, reaching 73.28%, an improvement of 4.61% over Tip-Adapter-F. Additionally, ProFusion is training-free and achieves an accuracy of 71.48%, a 3.65% improvement over Proto-CLIP-F, although with a significant increase in testing time. In summary, ProFusion and ProFusion-F are clearly superior in accuracy, while being somewhat less efficient in training and testing time.

4.7. Hyperparameter Analysis

We conducted experiments on the Caltech101, UCF101, DTD, EuroSAT, and OxfordPets datasets to investigate the impact of the hyperparameters $\alpha$, $\beta$, and $\gamma$ on classification performance. The experiments compare the average accuracy across multiple datasets with the same weight configuration. As shown in Figure 8, the impact of different prototype features on the final classification accuracy varies significantly across weight configurations. As shown in Figure 8a, image prototypes are crucial to the classification results, with the overall accuracy steadily increasing as the $\alpha$ ratio increases. Additionally, when the $\gamma$ ratio is between 0.2 and 0.8, the overall classification accuracy remains high and stable, with the average accuracy reaching its highest value of 85.63% when $\gamma$ is 0.8, indicating that the effective fusion of multimodal information enhances the generalization ability of the model. In contrast, when relying solely on the prediction probability of the text prototype (i.e., $\beta = 1$), the classification accuracy decreases, which restricts the ability of the model. The main reason is that, in some fine-grained datasets, simple textual information lacks sufficient discriminative power to effectively distinguish between visually similar categories, making it difficult for the model to make accurate predictions. Therefore, only by balancing and optimizing the weight distribution of the different prototype features in the prediction probabilities for test images can we fully leverage the advantages of multimodal information, thereby improving the model’s robustness.
In the classification task, ProFusion uses the grid-searched hyperparameters α , β , and γ to regulate the contributions of the image prototype, text prototype, and multimodal feature fusion prototype to the final prediction probability. We attempted to replace grid-searched hyperparameters with learnable parameters to reduce tuning time and computational overhead. However, experimental results indicate that this approach performs poorly, failing to achieve the expected model performance. The primary reason for this is the limited number of training samples, which leads to overfitting. As shown in Figure 8d, in the 1-shot setting, the performance gap between grid-searched hyperparameters ( α , β , γ ) and learnable parameters reaches 8.93%. As the number of samples increases, this gap gradually decreases. This indicates that the number of training samples plays a crucial role in the training of learnable parameters. However, in few-shot image classification, due to the extremely limited number of training samples and the large number of learnable parameters, overfitting is likely to occur. Therefore, in few-shot image classification, the grid-searched hyperparameters α , β , and γ are superior to the learnable parameters and serve as a relatively better choice.

5. Discussion

The superior performance of traditional machine learning models is primarily attributed to iterative training on large-scale datasets. However, this also limits their generalization ability when dealing with few-shot data. We propose the ProFusion model, which fully leverages the rich visual and textual knowledge embedded in multimodal pre-trained models to effectively overcome the limitations imposed by data scarcity in few-shot image classification tasks. As shown in Figure 1, powerful multimodal pre-trained models can extract features with distinct class separability, ensuring that samples from the same category exhibit a more compact distribution in the feature space, while clear and well-defined boundaries emerge between different categories. Consequently, even under limited data conditions, strong feature extraction capabilities enable effective classification. Furthermore, we draw inspiration from prototypical networks by representing each category using the mean multimodal features of multiple samples from the same category. As illustrated in Figure 6, the image prototypes, text prototypes, and symmetrically fused multimodal feature fusion prototypes are positioned at the center of the test image features corresponding to their respective categories, allowing for a more accurate representation of the category features.
Current models primarily focus on the independent utilization of visual or textual information, without fully exploring or achieving an efficient and symmetrical fusion of the two. As a result, there are still inherent discrepancies between cross-modal features. Therefore, we utilize the fusion module of multimodal pre-trained models to jointly encode image and text information. In the fusion module, images and text are processed separately through symmetrical visual and language feed-forward networks. At the same time, a visual-language feed-forward network is used to capture the cross-modal relationships between images and text. A shared self-attention module aligns and interacts with image and text information, thus achieving a deep fusion of multimodal image–text information, mitigating the inherent discrepancies between cross-modal features. In terms of experiments, this study was conducted on 15 public datasets and compared with current SOTA methods. The experimental results show that the proposed method outperforms other approaches and demonstrates significant performance improvements.
Although the proposed method (ProFusion) demonstrates superior performance compared to existing few-shot image classification methods, there is still room for improvement on certain tasks, such as FGVCAircraft. For example, analysis tools such as the confusion matrix can reveal the model’s prediction performance across different categories, providing a basis for targeted improvements. In addition, the quality of support set samples significantly impacts the performance of few-shot image classification models, so selecting high-quality training samples is a key factor in enhancing performance. Future work could also explore the integration of other multimodal fusion methods (e.g., BLIP, Flamingo, etc.) and leverage their cross-modal information processing mechanisms to further enhance the fusion performance of the model. Furthermore, besides commonly used modalities such as image and text, other modalities could be explored to improve the model’s performance in data-scarce scenarios. Finally, we anticipate that future research will combine existing large-scale multimodal model techniques with few-shot learning, advancing the development of the field.

6. Conclusions

To alleviate the problems of data scarcity and inherent disparities between cross-modal features, we introduce the ProFusion model for few-shot image classification. ProFusion utilizes multimodal pre-trained models and prototypical networks to construct multimodal prototypes and employs a fusion module to jointly encode images and text, generating fusion prototypes that mitigate the inherent disparities between cross-modal features. An alignment module is also introduced to ensure the consistency of the image and text prototypes. ProFusion calculates the similarity of test images to the three types of prototypes and generates the prediction through weighted summation. Compared to existing models, ProFusion demonstrates superior performance by integrating prototypes from multimodal information.

Author Contributions

Conceptualization, Z.C. and J.Z.; methodology, Z.C. and X.W.; validation, H.W. and Y.C.; investigation, H.W.; resources, J.Z.; data curation, Z.C.; writing—original draft preparation, Z.C.; writing—review and editing, J.Z.; visualization, Y.C.; supervision, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (61906044), the Natural Science Foundation of Anhui Province (2408085MF154), and the key projects of natural science research in Anhui colleges and universities (2023AH050406, 2023AH050418 and 2022AH051324).

Data Availability Statement

Data are contained within this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2017; Volume 30. [Google Scholar]
  2. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
  3. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models Are Few-Shot Learners. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2020; Volume 33, pp. 1877–1901. [Google Scholar]
  4. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  5. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar] [CrossRef]
  6. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  7. Wang, Y.; Yao, Q.; Kwok, J.T.; Ni, L.M. Generalizing from a Few Examples: A Survey on Few-Shot Learning. ACM Comput. Surv. 2019, 53, 63. [Google Scholar] [CrossRef]
  8. Liu, Y.; Zhang, H.; Zhang, W.; Lu, G.; Tian, Q.; Ling, N. Few-Shot Image Classification: Current Status and Research Trends. Electronics 2022, 11, 1752. [Google Scholar] [CrossRef]
  9. Wang, Z.; Liang, J.; Sheng, L.; He, R.; Wang, Z.; Tan, T. A hard-to-beat baseline for training-free clip-based adaptation. arXiv 2024, arXiv:2402.04087. [Google Scholar]
  10. Finn, C.; Abbeel, P.; Levine, S. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, Australia, 6–11 August 2017; PMLR: Sydney, Australia, 2017; pp. 1126–1135. [Google Scholar]
  11. Zhang, R.; Zhang, W.; Fang, R.; Gao, P.; Li, K.; Dai, J.; Qiao, Y.; Li, H. Tip-Adapter: Training-Free Adaption of CLIP for Few-Shot Classification. In Computer Vision—ECCV 2022, Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer Nature: Cham, Switzerland, 2022; pp. 493–510. [Google Scholar]
  12. Zhou, K.; Yang, J.; Loy, C.C.; Liu, Z. Learning to prompt for vision-language models. Int. J. Comput. Vis. 2022, 130, 2337–2348. [Google Scholar] [CrossRef]
  13. Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.-H.; Li, Z.; Duerig, T. Scaling Up Visual and Vision-Language Representation Learning with Noisy Text Supervision. In Proceedings of the International Conference on Machine Learning, Vienna, Austria, 18–24 July 2021; pp. 4904–4916. [Google Scholar]
  14. Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning Transferable Visual Models from Natural Language Supervision. In Proceedings of the International Conference on Machine Learning, Vienna, Austria, 18–24 July 2021; pp. 8748–8763. [Google Scholar]
  15. Yang, J.; Duan, J.; Tran, S.; Xu, Y.; Chanda, S.; Chen, L.; Zeng, B.; Chilimbi, T.; Huang, J. Vision-Language Pre-Training with Triple Contrastive Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 15671–15680. [Google Scholar]
  16. P, J.J.; Palanisamy, K.; Chao, Y.-W.; Du, X.; Xiang, Y. Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates, 14–18 October 2024. [Google Scholar]
  17. Wang, W.; Bao, H.; Dong, L.; Bjorck, J.; Peng, Z.; Liu, Q.; Aggarwal, K.; Mohammed, O.K.; Singhal, S.; Som, S.; et al. Image as a Foreign Language: BEiT Pretraining for Vision and Vision-Language Tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 19175–19186. [Google Scholar]
  18. Snell, J.; Swersky, K.; Zemel, R. Prototypical Networks for Few-Shot Learning. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
  19. Zhong, X.; Gu, C.; Ye, M.; Huang, W.; Lin, C.-W. Graph Complemented Latent Representation for Few-Shot Image Classification. IEEE Trans. Multimed. 2022, 25, 1979–1990. [Google Scholar] [CrossRef]
  20. Phaphuangwittayakul, A.; Guo, Y.; Ying, F. Fast Adaptive Meta-Learning for Few-Shot Image Generation. IEEE Trans. Multimed. 2021, 24, 2205–2217. [Google Scholar] [CrossRef]
  21. Guo, K.; Shen, C.; Hu, B.; Hu, M.; Kui, X. RSNet: Relation Separation Network for Few-Shot Similar Class Recognition. IEEE Trans. Multimed. 2022, 25, 3894–3904. [Google Scholar] [CrossRef]
  22. Shorten, C.; Khoshgoftaar, T.M. A Survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  23. Weiss, K.; Khoshgoftaar, T.M.; Wang, D. A Survey of Transfer Learning. J. Big Data 2016, 3, 9. [Google Scholar] [CrossRef]
  24. Peng, H. A Comprehensive Overview and Survey of Recent Advances in Meta-Learning. arXiv 2020, arXiv:2004.11149. [Google Scholar]
  25. Song, Y.; Wang, T.; Cai, P.; Mondal, S.K.; Sahoo, J.P. A Comprehensive Survey of Few-Shot Learning: Evolution, Applications, Challenges, and Opportunities. ACM Comput. Surv. 2023, 55, 271. [Google Scholar] [CrossRef]
  26. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2014; Volume 27. [Google Scholar]
  27. Kingma, D.P.; Welling, M. An Introduction to Variational Autoencoders. Found. Trends® Mach. Learn. 2019, 12, 307–392. [Google Scholar] [CrossRef]
  28. Reddy, M.D.M.; Basha, M.S.M.; Hari, M.M.C.; Penchalaiah, M.N. DALL-E: Creating Images from Text. UGC Care Group J. 2021, 8, 71–75. [Google Scholar]
  29. Li, J.; Li, D.; Xiong, C.; Hoi, S. BLIP: Bootstrapping Language-Image Pre-Training for Unified Vision-Language Understanding and Generation. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2022; pp. 12888–12900. [Google Scholar]
  30. Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; Sutskever, I. Zero-shot text-to-image generation. In Proceedings of the International Conference on Machine Learning, Online, 18–24 July 2021; pp. 8821–8831. [Google Scholar]
  31. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Li, F.-F. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  32. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Computer Vision—ECCV 2014, Proceedings of the 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Part V 13; Springer: Cham, Switzerland, 2014; pp. 740–755. [Google Scholar]
  33. Bandy, J.; Vincent, N. Addressing “documentation debt” in machine learning research: A retrospective datasheet for BookCorpus. arXiv 2021, arXiv:2105.05241. [Google Scholar]
  34. Yu, Z.; Zhao, J.; Guo, C.; Yang, Y. StableNet: Distinguishing the hard samples to overcome language priors in visual question answering. IET Comput. Vis. 2024, 18, 315–327. [Google Scholar] [CrossRef]
  35. Tang, Y.; Lin, Z.; Wang, Q.; Zhu, P.; Hu, Q. AMU-Tuning: Effective logit bias for CLIP-based few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 23323–23333. [Google Scholar]
  36. Bai, Y.; Xu, X.; Liu, Y.; Khan, S.; Khan, F.; Zuo, W.; Goh, R.S.M.; Feng, C.-M. Sentence-level prompts benefit composed image retrieval. arXiv 2023, arXiv:2310.05473. [Google Scholar]
  37. Wang, P.; Yang, A.; Men, R.; Lin, J.; Bai, S.; Li, Z.; Ma, J.; Zhou, C.; Zhou, J.; Yang, H. Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In Proceedings of the International Conference on Machine Learning, Baltimore, MD, USA, 17–23 July 2022; pp. 23318–23340. [Google Scholar]
  38. Gupta, T.; Kembhavi, A. Visual programming: Compositional visual reasoning without training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 14953–14962. [Google Scholar]
  39. Gao, P.; Geng, S.; Zhang, R.; Ma, T.; Fang, R.; Zhang, Y.; Li, H.; Qiao, Y. Clip-adapter: Better vision-language models with feature adapters. Int. J. Comput. Vis. 2024, 132, 581–595. [Google Scholar] [CrossRef]
  40. Su, Y.; Liu, X.; Zhao, Y.; Hong, R.; Wang, M. Partial-tuning based mixed-modal prototypes for few-shot classification. IEEE Trans. Multimed. 2024, 26, 9175–9186. [Google Scholar] [CrossRef]
  41. Khattak, M.U.; Rasheed, H.; Maaz, M.; Khan, S.; Khan, F.S. Maple: Multi-modal prompt learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar]
  42. Guo, Z.; Zhang, R.; Qiu, L.; Ma, X.; Miao, X.; He, X.; Cui, B. Calip: Zero-shot enhancement of clip with parameter-free attention. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023. [Google Scholar]
  43. Oord, A.v.d.; Li, Y.; Vinyals, O. Representation learning with contrastive predictive coding. arXiv 2018, arXiv:1807.03748. [Google Scholar]
  44. Krause, J.; Stark, M.; Deng, J.; Li, F.-F. 3D object representations for fine-grained categorization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 2–8 December 2013; pp. 554–561. [Google Scholar]
  45. Soomro, K.; Zamir, A.R.; Shah, M. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv 2012, arXiv:1212.0402. [Google Scholar]
  46. Li, F.; Fergus, R.; Perona, P. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. In Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop, Washington, DC, USA, 27 June–2 July 2004; p. 178. [Google Scholar]
  47. Nilsback, M.-E.; Zisserman, A. Automated flower classification over a large number of classes. In Proceedings of the Sixth Indian Conference on Computer Vision, Graphics & Image Processing, Washington, DC, USA, 16–19 December 2008; pp. 722–729. [Google Scholar]
  48. Xiao, J.; Hays, J.; Ehinger, K.A.; Oliva, A.; Torralba, A. SUN database: Large-scale scene recognition from abbey to zoo. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 3485–3492. [Google Scholar]
  49. Cimpoi, M.; Maji, S.; Kokkinos, I.; Mohamed, S.; Vedaldi, A. Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3606–3613. [Google Scholar]
  50. Helber, P.; Bischke, B.; Dengel, A.; Borth, D. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2217–2226. [Google Scholar] [CrossRef]
  51. Maji, S.; Rahtu, E.; Kannala, J.; Blaschko, M.; Vedaldi, A. Fine-grained visual classification of aircraft. arXiv 2013, arXiv:1306.5151. [Google Scholar]
  52. Parkhi, O.M.; Vedaldi, A.; Zisserman, A.; Jawahar, C.V. Cats and dogs. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 16–21 June 2012; pp. 3498–3505. [Google Scholar]
  53. Bossard, L.; Guillaumin, M.; Van Gool, L. Food-101 – Mining discriminative components with random forests. In Computer Vision—ECCV 2014, Proceedings of the 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Part VI; Springer: Cham, Switzerland, 2014; pp. 446–461. [Google Scholar]
  54. Recht, B.; Roelofs, R.; Schmidt, L.; Shankar, V. Do ImageNet classifiers generalize to ImageNet? In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 5389–5400. [Google Scholar]
  55. Wang, H.; Ge, S.; Lipton, Z.; Xing, E.P. Learning robust global representations by penalizing local predictive power. In Advances in Neural Information Processing Systems (NeurIPS); MIT Press: Cambridge, MA, USA, 2019. [Google Scholar]
  56. Hendrycks, D.; Zhao, K.; Basart, S.; Steinhardt, J.; Song, D. Natural adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15262–15271. [Google Scholar]
  57. Hendrycks, D.; Basart, S.; Mu, N.; Kadavath, S.; Wang, F.; Dorundo, E.; Desai, R.; Zhu, T.; Parajuli, S.; Guo, M.; et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 8340–8349. [Google Scholar]
Figure 1. Comparison of t-SNE visualization of image features extracted by different pre-trained models. The symbol (•), displayed in various colors, indicates image features from distinct categories. (a) t-SNE visualization of image features extracted by the ResNet50 pre-trained model. (b) t-SNE visualization of image features extracted by the BEiT3-large-itc pre-trained model.
Figure 2. Overview of the ProFusion model. ProFusion leverages a BEiT-3 multimodal pre-trained model with frozen parameters to construct image prototypes, text prototypes, and multimodal feature fusion prototypes, which are then fine-tuned. Given a query set image, the model first extracts its features and calculates the squared Euclidean distances to the three types of prototypes. Next, the results are converted into probabilities using the softmax function. Finally, a weighted sum of the three prediction probabilities is computed to generate the final classification result. P n denotes the text prompt template and SED stands for the squared Euclidean distance. P v , P t , and P f represent the prediction probabilities of the query set images based on the image prototype, text prototype, and fusion prototype, respectively.
Figure 3. The comparison of average accuracy between ProFusion and other SOTA models on the ImageNet dataset.
Figure 4. Ablation experiment on the backbone network. The experiments are performed on the ImageNet and UCF101 datasets using BEiT3-base-itc and BEiT3-large-itc as backbone networks. The backbone network used in (a,c) is BEiT3-base-itc, while the backbone network in (b,d) is BEiT3-large-itc.
Figure 5. Impact of image data augmentation strategies on model performance. Specifically, panels (ad) show experimental results with and without data augmentation strategies on the UCF101, SUN397, Food101, and StanfordCars datasets, respectively.
Figure 6. t-SNE visualizations of prototype features on the OxfordPets and EuroSAT datasets under the 16-shot setting. The symbol (•), shown in different colors, represents image features from distinct categories in the validation and test sets. (a) t-SNE visualization of multimodal fusion prototype features (✩) and image features from the validation set for the selected 20 categories on the OxfordPets dataset. (b) t-SNE visualization of multimodal fusion prototype features (✩) and image features from the test set for the selected 20 categories on the OxfordPets dataset. (c) t-SNE visualization of image prototype features (Δ) and image features from the test set for all categories on the EuroSAT dataset. (d) t-SNE visualization of text prototype features (◊) and image features from the test set for all categories on the EuroSAT dataset.
Figure 7. Visualization comparison of image and text prototypes before and after fine-tuning (FT), where the fine-tuning process incorporates an image–text alignment module. (a) t-SNE visualization of image and text prototypes before fine-tuning. (b) t-SNE visualization of image and text prototypes after introducing the image–text alignment module during fine-tuning. The numerical labels indicate the category numbers. Different colors of the symbols (Δ) and (✩) represent image prototypes and text prototypes of different categories, respectively; the same color indicates that the image prototype and text prototype belong to the same category.
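The exact form of the image–text alignment module is not restated here; as one hedged illustration, a symmetric InfoNCE-style objective [43] over paired image and text prototypes would encourage the kind of per-class consistency shown in panel (b). The function below is an assumption-laden sketch, not the paper’s implementation; tensor names and the temperature value are placeholders.

```python
import torch
import torch.nn.functional as F

def alignment_loss(img_protos, txt_protos, temperature=0.07):
    """img_protos, txt_protos: (C, D); row i of each corresponds to class i."""
    img = F.normalize(img_protos, dim=-1)
    txt = F.normalize(txt_protos, dim=-1)
    logits = img @ txt.t() / temperature           # (C, C) cosine-similarity matrix
    targets = torch.arange(img.size(0))            # diagonal entries are the matching pairs
    # Symmetric cross-entropy pulls matching image/text prototypes together
    # and pushes prototypes of different classes apart.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```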
Figure 8. The impact of different prototype features on classification accuracy in terms of their contribution to the predicted probabilities of test images. (a) The weight α assigned to the image prototypes. (b) The weight β assigned to the text prototypes. (c) The weight γ assigned to the fusion prototypes. The shaded region indicates the range between the maximum and minimum classification accuracy observed for different weight distributions. (d) The impact of grid-searched hyperparameters and learnable parameters on model performance.
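Panel (d) compares grid-searched hyperparameters with learnable parameters. A minimal sketch of a grid search over the weights is given below, assuming per-prototype probability matrices `P_v`, `P_t`, `P_f` of shape (N, C) computed on a validation split; the step size and variable names are illustrative rather than the paper’s settings.

```python
import itertools
import torch

def grid_search_weights(P_v, P_t, P_f, labels, step=0.1):
    """P_v, P_t, P_f: (N, C) probabilities from the three prototypes; labels: (N,)."""
    best_acc, best_w = 0.0, None
    grid = torch.arange(0.0, 1.0 + 1e-9, step)
    for a, b in itertools.product(grid, grid):
        g = 1.0 - a - b                    # constrain the three weights to sum to one
        if g < 0:
            continue
        pred = (a * P_v + b * P_t + g * P_f).argmax(dim=-1)
        acc = (pred == labels).float().mean().item()
        if acc > best_acc:
            best_acc, best_w = acc, (a.item(), b.item(), g.item())
    return best_acc, best_w
```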
Table 1. Comparison between ProFusion and existing state-of-the-art (SOTA) methods. “Image-Tuned” and “Text-Tuned” indicate fine-tuning of image and text features, respectively; “Image–Text AM” refers to the alignment of image and text features; “MF Fusion” denotes multimodal feature fusion. The symbols “✔” and “✘” indicate the presence or absence of the corresponding technique in each model.
Model | Image-Tuned | Text-Tuned | Image–Text AM | MF Fusion
CoOp [12]
CLIP-Adapter [39]
Tip-Adapter [11]
Proto-CLIP [16]
PMPro [40]
GDA-CLIP [9]
ProFusion (Ours)
Table 2. Number of categories, sample sizes for the validation and test sets, and corresponding text prompt templates for each dataset.
Dataset | Classes | Val Samples | Test Samples | Prompt Template
ImageNet [31] | 1000 | 50,000 | 50,000 | “itap of a {class}.”, “a bad photo of the {class}.”, “a origami {class}.”, “a photo of the large {class}.”, “a {class} in a video game.”, “art of the {class}.”, “a photo of the small {class}.”
FGVCAircraft [51] | 100 | 3333 | 3333 | “a photo of a {class}, a type of aircraft.”
OxfordPets [52] | 37 | 736 | 3699 | “a photo of a {class}, a type of pet.”
StanfordCars [44] | 196 | 1635 | 8041 | “a photo of a {class}.”
EuroSAT [50] | 10 | 5400 | 8100 | “a centered satellite photo of {class}.”
Caltech101 [46] | 100 | 1649 | 2465 | “a photo of a {class}.”
SUN397 [48] | 397 | 3970 | 19,850 | “a photo of a {class}.”
DTD [49] | 47 | 1128 | 1692 | “{class} texture.”
Flowers102 [47] | 102 | 1633 | 2463 | “a photo of a {class}, a type of flower.”
Food101 [53] | 101 | 20,200 | 30,300 | “a photo of {class}, a type of food.”
UCF101 [45] | 101 | 1898 | 3783 | “a photo of a person doing {class}.”
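As a brief illustration of how the templates in Table 2 are used, the snippet below instantiates prompts for a few hypothetical class names; the paper’s “{class}” placeholder is written as “{}” for Python’s str.format, and `text_encoder` in the comments is a placeholder for the frozen text branch. Averaging multiple templates per class is an assumption modeled on common CLIP-style practice, not a statement of the paper’s exact procedure.

```python
# e.g., the OxfordPets template from Table 2 with a few hypothetical class names.
templates = ["a photo of a {}, a type of pet."]
class_names = ["abyssinian", "bengal", "persian"]

prompts = [t.format(name) for t in templates for name in class_names]
print(prompts)

# For ImageNet, several templates are listed per class; a common practice is to
# encode every prompt and average the text features per class, e.g. (hypothetical):
#   text_feats  = text_encoder(prompts)                                        # (T * C, D)
#   text_protos = text_feats.view(len(templates), len(class_names), -1).mean(dim=0)
```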
Table 3. Comparison with SOTA models across 11 datasets.
Dataset | ImageNet | FGVC | Pets | Cars | EuroSAT | Caltech | SUN | DTD | Flowers | Food | UCF
Classes | 1000 | 100 | 37 | 196 | 10 | 100 | 397 | 47 | 102 | 101 | 101
Zero-shot CLIP [14] | 60.33 | 17.10 | 85.83 | 55.74 | 37.52 | 85.92 | 58.52 | 42.20 | 66.02 | 77.32 | 61.35
1-shot
CoOp [12] | 57.15 | 9.64 | 85.89 | 55.59 | 50.63 | 87.53 | 60.29 | 44.39 | 68.12 | 74.32 | 61.92
CLIP-Adapter [39] | 61.20 | 17.49 | 85.99 | 55.13 | 61.40 | 88.60 | 61.30 | 45.80 | 73.49 | 76.82 | 62.20
CALIP [42] | 61.51 | 18.81 | 86.93 | 57.75 | 66.69 | 89.28 | 63.16 | 50.17 | 76.38 | 77.63 | 65.59
Tip-Adapter [11] | 60.70 | 19.05 | 86.10 | 57.54 | 54.38 | 87.18 | 61.30 | 46.22 | 73.12 | 77.42 | 62.60
Tip-Adapter-F [11] | 61.13 | 20.22 | 87.00 | 58.86 | 59.53 | 89.33 | 62.50 | 49.65 | 79.98 | 77.51 | 64.87
Proto-CLIP [16] | 60.31 | 19.59 | 86.10 | 57.29 | 55.53 | 87.99 | 60.81 | 46.04 | 76.98 | 77.36 | 63.15
Proto-CLIP-F [16] | 60.32 | 19.50 | 85.72 | 57.34 | 54.93 | 88.07 | 60.83 | 35.64 | 77.47 | 77.34 | 63.07
Proto-CLIP-F-QT [16] | 59.12 | 16.26 | 83.62 | 52.77 | 61.95 | 88.48 | 61.43 | 32.27 | 68.53 | 75.16 | 62.44
GDA-CLIP [9] | 60.64 | 17.29 | 85.49 | 56.77 | 58.30 | 87.28 | 59.95 | 46.26 | 72.11 | 77.42 | 62.61
PMPro [40] | 60.57 | 22.23 | 85.94 | 60.14 | 66.04 | 89.66 | 64.04 | 49.76 | 80.19 | 76.72 | 66.61
ProFusion (Ours) | 72.63 | 23.82 | 90.22 | 83.08 | 65.76 | 96.02 | 72.13 | 62.41 | 85.55 | 85.42 | 72.03
ProFusion-F (Ours) | 73.04 | 23.19 | 90.79 | 84.14 | 62.17 | 97.24 | 73.75 | 63.36 | 82.41 | 85.91 | 74.36
2-shot
CoOp [12] | 57.81 | 18.68 | 82.64 | 58.28 | 61.50 | 87.93 | 59.48 | 45.15 | 77.51 | 72.49 | 64.09
CLIP-Adapter [39] | 61.52 | 20.10 | 86.73 | 58.74 | 63.90 | 89.37 | 63.29 | 51.48 | 81.61 | 77.22 | 67.12
CALIP [42] | 62.08 | 19.74 | 86.42 | 59.21 | 74.57 | 90.57 | 65.08 | 55.28 | 83.08 | 78.18 | 68.77
Tip-Adapter [11] | 60.96 | 21.21 | 87.03 | 57.93 | 61.68 | 88.44 | 62.70 | 49.47 | 79.13 | 77.52 | 64.74
Tip-Adapter-F [11] | 61.69 | 23.19 | 87.03 | 61.50 | 66.15 | 89.74 | 63.64 | 53.72 | 82.30 | 77.81 | 66.43
Proto-CLIP [16] | 60.64 | 22.14 | 87.38 | 60.01 | 64.89 | 89.05 | 63.12 | 51.06 | 83.39 | 77.34 | 67.46
Proto-CLIP-F [16] | 60.64 | 22.14 | 87.38 | 60.04 | 64.86 | 89.09 | 63.20 | 49.88 | 83.52 | 77.34 | 67.49
Proto-CLIP-F-QT [16] | 60.48 | 20.01 | 85.28 | 60.02 | 63.59 | 89.49 | 65.46 | 45.69 | 81.20 | 76.15 | 68.83
GDA-CLIP [9] | 61.23 | 22.45 | 86.69 | 59.30 | 67.88 | 88.63 | 63.64 | 51.26 | 89.21 | 77.87 | 66.28
PMPro [40] | 62.18 | 25.14 | 87.00 | 63.85 | 69.15 | 90.59 | 66.89 | 56.21 | 84.73 | 78.17 | 70.02
ProFusion (Ours) | 73.32 | 29.85 | 90.19 | 84.28 | 65.22 | 96.80 | 74.59 | 65.78 | 90.37 | 85.70 | 78.22
ProFusion-F (Ours) | 73.58 | 29.31 | 91.47 | 84.59 | 65.48 | 97.07 | 75.52 | 67.43 | 90.91 | 86.39 | 79.70
4-shot
CoOp [12] | 59.99 | 21.87 | 86.70 | 62.62 | 70.18 | 89.55 | 63.47 | 53.49 | 86.20 | 73.33 | 67.03
CLIP-Adapter [39] | 61.84 | 22.59 | 87.46 | 62.45 | 73.38 | 89.98 | 65.96 | 56.86 | 87.17 | 77.92 | 69.05
CALIP [42] | 63.06 | 24.96 | 87.63 | 62.43 | 78.05 | 91.38 | 67.22 | 59.63 | 88.44 | 78.39 | 71.22
Tip-Adapter [11] | 60.98 | 22.41 | 86.45 | 61.45 | 65.32 | 89.89 | 64.15 | 53.96 | 83.80 | 77.54 | 66.36
Tip-Adapter-F [11] | 62.52 | 25.80 | 87.54 | 64.57 | 74.12 | 90.56 | 66.21 | 57.39 | 88.83 | 78.24 | 70.55
Proto-CLIP [16] | 61.30 | 23.25 | 87.19 | 63.33 | 68.67 | 89.57 | 65.51 | 55.91 | 88.23 | 77.58 | 69.50
Proto-CLIP-F [16] | 61.30 | 23.31 | 86.95 | 63.34 | 68.52 | 89.62 | 65.57 | 57.21 | 88.27 | 77.58 | 69.55
Proto-CLIP-F-QT [16] | 61.80 | 27.63 | 87.11 | 66.24 | 80.64 | 91.81 | 68.09 | 56.86 | 89.85 | 76.94 | 70.16
GDA-CLIP [9] | 61.71 | 28.24 | 87.19 | 62.75 | 75.40 | 90.62 | 65.27 | 57.47 | 89.73 | 77.83 | 71.31
PMPro [40] | 63.02 | 26.97 | 89.15 | 66.40 | 74.49 | 92.45 | 68.80 | 61.23 | 89.44 | 78.77 | 74.17
ProFusion (Ours) | 74.13 | 30.87 | 90.87 | 85.07 | 73.09 | 97.28 | 76.61 | 69.50 | 94.07 | 85.97 | 80.70
ProFusion-F (Ours) | 74.78 | 34.41 | 91.52 | 86.33 | 76.21 | 97.65 | 77.34 | 70.69 | 95.13 | 86.73 | 82.87
8-shot
CoOp [12] | 61.56 | 26.13 | 85.32 | 68.43 | 76.73 | 90.21 | 65.52 | 59.97 | 91.18 | 71.82 | 71.94
CLIP-Adapter [39] | 62.68 | 26.25 | 87.65 | 67.89 | 77.93 | 91.40 | 67.50 | 61.00 | 91.72 | 78.04 | 73.30
CALIP [42] | 64.19 | 35.52 | 88.14 | 67.94 | 83.81 | 92.48 | 69.61 | 63.98 | 93.24 | 78.76 | 75.76
Tip-Adapter [11] | 61.45 | 25.59 | 87.03 | 62.93 | 67.95 | 89.83 | 65.62 | 58.63 | 87.98 | 77.76 | 68.68
Tip-Adapter-F [11] | 64.00 | 30.21 | 88.09 | 69.25 | 77.93 | 91.44 | 68.87 | 62.71 | 91.51 | 78.64 | 74.25
Proto-CLIP [16] | 62.12 | 27.63 | 88.04 | 64.93 | 69.42 | 90.22 | 67.37 | 59.34 | 92.08 | 77.90 | 71.08
Proto-CLIP-F [16] | 63.92 | 31.32 | 88.55 | 70.35 | 78.94 | 92.54 | 69.59 | 62.35 | 93.79 | 78.29 | 74.81
Proto-CLIP-F-QT [16] | 64.03 | 35.82 | 87.46 | 71.50 | 81.89 | 92.62 | 70.02 | 64.01 | 94.28 | 78.61 | 75.34
GDA-CLIP [9] | 62.46 | 34.07 | 89.12 | 68.93 | 81.70 | 91.35 | 68.58 | 62.77 | 93.23 | 78.43 | 75.74
PMPro [40] | 65.39 | 32.25 | 90.11 | 69.36 | 81.63 | 93.06 | 69.52 | 64.07 | 93.02 | 78.82 | 76.98
ProFusion (Ours) | 75.24 | 34.23 | 92.04 | 86.71 | 75.16 | 96.91 | 78.33 | 72.93 | 94.72 | 86.59 | 83.61
ProFusion-F (Ours) | 76.01 | 40.89 | 92.31 | 87.73 | 81.60 | 97.73 | 79.14 | 75.00 | 97.65 | 87.11 | 85.41
16-shot
CoOp [12] | 62.95 | 31.26 | 87.01 | 73.36 | 83.53 | 91.83 | 69.26 | 63.58 | 94.21 | 74.67 | 75.71
CLIP-Adapter [39] | 63.59 | 32.01 | 87.84 | 74.01 | 84.43 | 92.49 | 69.55 | 65.96 | 93.90 | 78.25 | 76.76
CALIP [42] | 68.81 | 45.44 | 89.29 | 76.61 | 88.67 | 94.19 | 71.32 | 68.44 | 96.97 | 79.34 | 80.20
Tip-Adapter [11] | 62.02 | 29.76 | 88.14 | 66.77 | 70.54 | 90.18 | 66.85 | 60.93 | 89.89 | 77.83 | 70.58
Tip-Adapter-F [11] | 65.51 | 35.55 | 89.70 | 75.74 | 84.54 | 92.86 | 71.47 | 66.55 | 94.80 | 79.43 | 78.03
Proto-CLIP [16] | 62.77 | 29.67 | 88.61 | 68.11 | 72.95 | 91.08 | 68.09 | 61.64 | 92.94 | 78.11 | 73.35
Proto-CLIP-F [16] | 65.75 | 37.56 | 89.62 | 75.25 | 83.53 | 93.43 | 71.94 | 68.56 | 95.78 | 79.09 | 77.50
Proto-CLIP-F-QT [16] | 65.91 | 40.65 | 89.34 | 76.76 | 86.59 | 93.59 | 72.19 | 68.50 | 96.35 | 79.34 | 78.11
GDA-CLIP [9] | 63.82 | 40.61 | 88.81 | 75.12 | 86.12 | 92.55 | 70.70 | 66.51 | 95.72 | 79.05 | 77.53
PMPro [40] | 65.39 | 36.12 | 91.47 | 74.08 | 86.81 | 93.91 | 71.85 | 68.79 | 95.53 | 79.31 | 79.28
ProFusion (Ours) | 76.18 | 37.78 | 92.04 | 87.68 | 74.46 | 97.16 | 78.82 | 74.05 | 95.45 | 86.90 | 84.56
ProFusion-F (Ours) | 77.61 | 44.10 | 92.94 | 89.57 | 87.80 | 98.13 | 80.37 | 77.54 | 98.17 | 87.62 | 86.12
Table 4. Average accuracies for 11 datasets under different shot settings, compared with SOTA models.
Model | 1-Shot | 2-Shot | 4-Shot | 8-Shot | 16-Shot | Average
Zero-shot CLIP [14] | Zero-Shot | 58.90
CoOp [12] | 59.59 | 62.32 | 66.77 | 69.89 | 73.40 | 66.39
CLIP-Adapter [39] | 62.67 | 65.55 | 68.61 | 71.40 | 74.44 | 68.53
CALIP [42] | 64.90 | 67.54 | 70.22 | 73.95 | 78.12 | 70.95
Tip-Adapter-F [11] | 64.60 | 66.65 | 69.67 | 72.45 | 75.83 | 69.84
Proto-CLIP-F [16] | 61.84 | 65.96 | 68.29 | 73.13 | 76.18 | 69.08
GDA-CLIP [9] | 62.69 | 66.77 | 69.77 | 73.31 | 76.05 | 69.72
PMPro [40] | 65.63 | 68.54 | 71.35 | 73.93 | 76.59 | 71.21
ProFusion (Ours) | 73.55 | 75.85 | 78.01 | 79.68 | 80.46 | 77.51
ProFusion-F (Ours) | 73.67 | 76.50 | 79.42 | 81.87 | 83.63 | 79.01
Table 5. Out-of-distribution Generalization.
Method | Train | Source | Target
ImageNet | −V2 | −Sketch | −A | −R | Avg
Zero-shot CLIP [14] | 68.79 | 62.25 | 48.38 | 50.71 | 77.71 | 59.76
Tip-Adapter [11] | 70.83 | 63.72 | 48.98 | 51.09 | 77.26 | 60.26
Tip-Adapter-F [11] | 73.75 | 66.00 | 49.14 | 50.67 | 77.93 | 60.94
Proto-CLIP [16] | 71.85 | 64.45 | 48.91 | 50.73 | 77.78 | 60.47
Proto-CLIP-F [16] | 73.77 | 62.43 | 48.38 | 50.64 | 77.78 | 59.81
MaPLe [41] | 70.72 | 64.07 | 49.15 | 50.90 | 76.98 | 60.28
GDA-CLIP [9] | 72.22 | 64.86 | 49.00 | 50.14 | 76.88 | 60.22
ProFusion (Ours) | 76.18 | 67.88 | 60.82 | 50.19 | 85.26 | 66.04
ProFusion-F (Ours) | 77.61 | 66.77 | 60.34 | 51.33 | 85.08 | 65.88
+3.84 | +1.88 | +11.67 | +0.24 | +7.33 | +5.10
Table 6. The impact of different types of prototypes on model performance. The symbols “✔” and “–” indicate the use and absence of different types of prototypes, respectively.
Prototype | Dataset
Image | Text | Fusion | ImageNet | FGVC | Pets | Cars | EuroSAT | Caltech | SUN | DTD | Flowers | Food | UCF
74.41 | 43.22 | 90.65 | 88.72 | 86.48 | 97.61 | 77.91 | 74.82 | 97.81 | 85.50 | 85.09
77.37 | 33.54 | 92.15 | 89.03 | 84.09 | 97.65 | 80.20 | 74.94 | 97.48 | 87.21 | 85.51
77.48 | 42.00 | 92.50 | 89.18 | 84.07 | 97.61 | 80.28 | 75.89 | 97.77 | 87.40 | 85.06
77.28 | 43.51 | 92.57 | 88.81 | 85.88 | 97.77 | 79.31 | 75.83 | 97.69 | 87.28 | 85.28
77.61 | 44.10 | 92.94 | 89.57 | 87.80 | 98.13 | 80.37 | 77.54 | 98.17 | 87.62 | 86.12
Table 7. The impact of different feature fusion strategies on model performance. The “Baseline” refers to the method of constructing the feature fusion prototype by element-wise multiplication of the image and text prototypes; “Ours” refers to the method of constructing the feature fusion prototype using the fusion module.
Method | ImageNet | FGVC | Pets | Cars | EuroSAT | Caltech | SUN | DTD | Flowers | Food | UCF
8-shot
Baseline | 76.01 | 40.23 | 92.15 | 87.46 | 80.53 | 97.57 | 78.95 | 73.82 | 97.65 | 86.99 | 83.74
Ours | 76.01 | 40.89 | 92.31 | 87.73 | 81.60 | 97.73 | 79.14 | 75.00 | 97.65 | 87.11 | 85.41
16-shot
Baseline | 76.80 | 41.88 | 92.45 | 88.21 | 84.11 | 97.69 | 79.83 | 76.36 | 98.05 | 87.44 | 85.70
Ours | 77.61 | 44.10 | 92.94 | 89.57 | 87.80 | 98.13 | 80.37 | 77.54 | 98.17 | 87.62 | 86.12
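To make the comparison in Table 7 concrete, the sketch below contrasts the element-wise multiplication baseline with a small learnable fusion network. The MLP is only an illustrative stand-in: the paper’s fusion module jointly encodes images and text inside the pre-trained model rather than fusing pre-computed prototypes, and the sizes used here are toy values.

```python
import torch
import torch.nn as nn

C, D = 5, 512                                      # toy numbers: 5 classes, 512-dim features
img_protos, txt_protos = torch.randn(C, D), torch.randn(C, D)

# Baseline from Table 7: element-wise multiplication of the two prototypes.
fusion_baseline = img_protos * txt_protos          # (C, D)

# Illustrative learnable alternative: concatenate and project with a small MLP.
fusion_mlp = nn.Sequential(nn.Linear(2 * D, D), nn.ReLU(), nn.Linear(D, D))
fusion_learned = fusion_mlp(torch.cat([img_protos, txt_protos], dim=-1))    # (C, D)
```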
Table 8. Classification results of different similarity measurement methods. MM denotes matrix multiplication; CS denotes cosine similarity; SED denotes squared Euclidean distance.
Dataset | Method | 1-Shot | 2-Shot | 4-Shot | 8-Shot | 16-Shot
UCF101 | MM | 73.62 | 78.56 | 82.61 | 85.36 | 85.64
UCF101 | CS | 73.62 | 79.20 | 82.53 | 85.46 | 85.78
UCF101 | SED | 74.36 | 79.70 | 82.87 | 85.41 | 86.12
DTD | MM | 62.35 | 66.61 | 68.26 | 73.88 | 75.71
DTD | CS | 63.30 | 66.43 | 68.26 | 73.88 | 75.77
DTD | SED | 63.36 | 67.43 | 70.69 | 75.00 | 77.54
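The three measures compared in Table 8 can be written compactly as follows; `q` and `P` are illustrative names for a query feature and the class prototypes, and the dimensions are toy values.

```python
import torch
import torch.nn.functional as F

def similarity_scores(q, P):
    """q: (D,) query feature; P: (C, D) class prototypes."""
    mm = P @ q                                            # matrix multiplication (dot product)
    cs = F.cosine_similarity(P, q.unsqueeze(0), dim=-1)   # cosine similarity
    sed = ((P - q.unsqueeze(0)) ** 2).sum(dim=-1)         # squared Euclidean distance
    # mm and cs are similarities (higher is better); sed is a distance, so -sed serves as a logit.
    return mm, cs, -sed

q, P = torch.randn(512), torch.randn(5, 512)
print([s.argmax().item() for s in similarity_scores(q, P)])
```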
Table 9. Comparison of ProFusion performance with SOTA models on the ImageNet dataset in terms of training time, testing time, and classification accuracy.
Method | Train. Set | Train. Time | Test. Time | Acc. (%) | Gain (%)
Tip-Adapter [11] | 0 | 0 | 3.4 min | 65.62 | 0
Tip-Adapter-F [11] | 16-shot | 6.4 min | 3.4 min | 68.67 | 3.05
Proto-CLIP-F [16] | 16-shot | 6.8 min | 0.2 min | 67.83 | 2.21
GDA-CLIP [9] | 0 | 0 | 2.7 min | 67.00 | 1.38
ProFusion (Ours) | 0 | 0 | 1.9 min | 71.48 | 5.86
ProFusion-F (Ours) | 16-shot | 4.19 min | 1.9 min | 73.28 | 7.66
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
