Article

DA-GAN: Dual Attention Generative Adversarial Network for Cross-Modal Retrieval

College of Information and Intelligence, Hunan Agricultural University, Changsha 410128, China
* Author to whom correspondence should be addressed.
Future Internet 2022, 14(2), 43; https://doi.org/10.3390/fi14020043
Submission received: 4 January 2022 / Revised: 12 January 2022 / Accepted: 13 January 2022 / Published: 27 January 2022
(This article belongs to the Special Issue Advances Techniques in Computer Vision and Multimedia)

Abstract

Cross-modal retrieval aims to search samples of one modality via queries of another modality, which is a hot issue in the multimedia community. However, two main challenges, i.e., the heterogeneity gap and semantic interaction across different modalities, have not been solved efficaciously. Reducing the heterogeneity gap can improve cross-modal similarity measurement, while modeling cross-modal semantic interaction can capture semantic correlations more accurately. To this end, this paper presents a novel end-to-end framework, called Dual Attention Generative Adversarial Network (DA-GAN). This technique is an adversarial semantic representation model with a dual attention mechanism, i.e., intra-modal attention and inter-modal attention. Intra-modal attention focuses on the important semantic features within a modality, while inter-modal attention explores the semantic interaction between different modalities and then represents the high-level semantic correlation more precisely. A dual adversarial learning strategy is designed to generate modality-invariant representations, which can reduce cross-modal heterogeneity efficiently. Experiments on three commonly used benchmarks show that DA-GAN performs better than its competitors.

1. Introduction

Cross-modal retrieval [1,2] is a hot issue in the field of multimedia [3]. As shown in Figure 1, it aims to find objects of one modality using queries of another modality. Recently, multimedia data [4] has been growing exponentially and is widely used in several scenarios, such as information retrieval, recommendation systems [5], social networks [6], etc. As a result, this problem has attracted increasing interest from a growing number of researchers.
The main challenge of cross-modal retrieval is how to eliminate the heterogeneity between multimedia objects and how to bridge the semantic gap [7,8] by understanding cross-modal consistent semantic concepts. In the existing literature, the classic way to overcome this challenge is to construct a common latent subspace [9], in which multimedia instances are represented in the same form and the semantic features can be aligned [10]. As a traditional approach, Canonical Correlation Analysis (CCA) [11] is adopted by many studies [12,13,14,15] to learn correlations between cross-modal instances with the same category label. Although these CCA-based methods are supported by classical statistical theory, they cannot represent complex non-linear semantic correlations. To overcome this limitation, non-linear extensions such as KCCA [11], RCCA [16], LPCCA [17], etc. have been proposed to enhance cross-modal representation.
Thanks to the powerful representation ability of deep learning models [18,19,20,21], cross-modal semantic representation learning has been boosted significantly. For instance, several CCA-based approaches, e.g., deep CCA [22], DisDCCA [23], and DCCAE [24], extend CCA by integrating it with DNNs. In recent years, attention mechanisms have been exploited to support cross-modal feature learning, discovering more significant semantic details from heterogeneous cross-modal representations. With the help of attention techniques, high-level semantics can be selectively focused on during learning, which augments semantic modeling and reduces the influence of noise on representation learning [25,26,27,28,29].
Our method. To implement the above idea, this paper proposes a new approach, named Dual Attention Generative Adversarial Network (DA-GAN). This method combines adversarial learning with intra-modal and inter-modal attention mechanisms to improve cross-modal representation capability. Specifically, the inputs are divided into three groups: an image-text pair $(I_i, T_i, L_i)$ with category label $L_i$, a group of images, and a group of texts with the same label $L_i$. For the generator, we utilize a visual CNN and a textual CNN to generate visual and textual feature vectors, respectively. These feature vectors are fed into a two-channel intra-attention model (one channel per modality) to learn intra-modal high-level semantic feature representations with the help of the groups of images and texts. On top of this model, a two-channel encoder implemented by DNNs learns modality-consistent representations, followed by an inter-attention model that captures the important semantic features across different modalities. In addition, a two-channel decoder reconstructs the feature representations for intra-modal adversarial learning, and two types of discriminators form a dual adversarial learning strategy to narrow the heterogeneity gap.
Contributions. This paper has three-fold contributions, which are listed as follows.
  • We propose a novel Dual Attention Generative Adversarial Network (DA-GAN) for cross-modal retrieval, which is an integration of the adversarial learning method with a dual attention mechanism.
  • To narrow the semantic gap and learn high-level semantic features, a dual attention mechanism is designed to capture important semantic features from cross-modal instances from both an intra-modal view and an inter-modal view, which enhances abstract concept learning across different modalities.
  • To reduce the heterogeneity gap, a cross-modal adversarial learning model is employed to learn consistent feature distributions via intra-modal and inter-modal adversarial losses.
Roadmap. The rest of this paper is organized as follows: related works on cross-modal retrieval, attention models, and generative adversarial networks are introduced in Section 2. In Section 3, the problem definition and related concepts are presented. In Section 4, we discuss the details of the proposed DA-GAN. Section 5 presents the experiments and the results. Finally, Section 6 concludes this paper.

2. Related Work

2.1. Cross-Modal Retrieval

The main challenge of cross-modal retrieval [30,31,32,33] is to diminish the heterogeneity gap and the semantic gap by learning a consistent semantic subspace, in which cross-modal similarity can be directly measured. The existing methods include CCA-based methods, deep learning-based methods, and hashing-based methods. We briefly review them as follows.
CCA-Based Methods. Rasiwasia et al. [34] were the first to use CCA [11] for cross-modal correlation learning. Following this work, several CCA-based methods have been proposed to enhance cross-modal representation learning. For example, Sharma et al. [14] studied a supervised extension of CCA, which is a general multi-view and kernelizable feature learning method. Pereira et al. [12] proposed three CCA-based approaches, namely correlation matching (CM), semantic matching (SM), and semantic correlation matching (SCM). Gong et al. [13] presented a three-view CCA model in which abstract semantic information is learned by a third view module to support semantic correlation learning. In [15], the cluster-CCA method is developed to generate discriminant cross-modal representations.
Deep Learning-Based Methods. Recently, deep learning [18,19,35] techniques have made great progress, which empowers multimedia analysis [36,37,38,39] and cross-modal representation [40,41]. To learn non-linear correlations from different data modalities, Andrew et al. [42] proposed to integrate deep neural networks into the CCA method; it is a two-channel model, with one channel for each modality. Benton et al. [22] introduced Deep Generalized Canonical Correlation Analysis (DGCCA) to learn non-linear transformations of arbitrarily many views. Gu et al. [43] designed generative processes to learn global and local features from cross-modal samples. Zhen et al. [44] introduced a method named Deep Supervised Cross-modal Retrieval (DSCMR) with a weight-sharing strategy to explore the cross-modal consistent relationship.

2.2. Attention Models

The attention mechanism [45] is widely applied in image captioning [46], action recognition [47], fine-grained image classification [48], visual question answering [49], cross-modal retrieval [25], etc. For example, Wu et al. [50] introduced a deep attention-based spatially recursive model to consider spatial dependencies during feature learning. Sudhakaran et al. [51] proposed the Long Short-Term Attention method to capture features from spatially relevant parts across video frames.
For cross-modal tasks, Peng et al. [25] proposed a modality-specific cross-modal similarity approach using a recurrent attention network. Wang et al. [52] designed a hierarchically aligned cross-modal attention (HACA) model to fuse both global and local temporal dynamics of different modalities. Xu et al. [26] developed a Cross-modal Attention with Semantic Consistency (CASC) method to realize local alignment and multi-label prediction for image-text matching. Liu et al. [53] proposed a cross-modal attention-guided erasing approach to comprehend and align cross-modal information for referring expression grounding. Huang et al. [54] used object-oriented encoders along with inter-modal and intra-modal attention networks to improve inter-modal dependencies. Fang et al. [27] introduced a subjective attention-based multi-task auxiliary cross-modal fusion method to enhance the robustness and contextual awareness of image fusion.

2.3. Generative Adversarial Network

The generative adversarial network (GAN) was devised by Goodfellow et al. [55]; it is a powerful generative model applied in various multimedia tasks [56]. Wang et al. [57] were the first to employ GAN to learn modality-invariant features that diminish cross-modal heterogeneity. Liu et al. [58] presented an adversarial learning-based image-text embedding method to make the distributions of different modalities consistent. Huang et al. [59] studied an adversarial-based transfer model to realize knowledge transfer and generate modality-indiscriminative representations.
With the support of GANs, many works have proposed effective cross-modal hashing methods to realize efficient retrieval in a binary Hamming space [60,61]. For example, in [62] a GAN-based semi-supervised cross-modal hashing approach is presented, which learns semantic correlations from unlabeled samples via a minimax game.

3. Preliminaries

In this section, the formal problem definition and related notions are presented. Then, we review the theory of generative adversarial networks, which is the basis of the proposed technique. Table 1 summarizes the mathematical notations used in this paper.

3.1. Problem Definition

This work considers two common modalities: image and text. Let $\mathcal{D} = \{(I_i, T_i, L_i)\}_{i=1}^{n}$ be a multimedia dataset that contains $n$ image-text pairs, where $I_i \in \mathbb{R}^{\lambda_I}$ and $T_i \in \mathbb{R}^{\lambda_T}$ represent the $i$-th image sample and text sample in their original spaces, respectively, and $\lambda_I$ and $\lambda_T$ are the dimensions of the original image and text spaces. Each pair is assigned a semantic label vector denoted as $L_i = (L_i^{(1)}, L_i^{(2)}, \ldots, L_i^{(\lambda_L)}) \in \mathbb{R}^{\lambda_L}$, where $\lambda_L$ is the number of semantic categories in $\mathcal{D}$. If $I_i$ and $T_i$ belong to the $c$-th semantic category, then $L_i^{(c)} = 1$; otherwise $L_i^{(c)} = 0$. Cross-modal retrieval aims to search multimedia instances that differ from the modality of the query $Q$ but are similar enough to $Q$. If the query is an image, denoted as $Q^I$, we call this type of cross-modal retrieval image-to-text (I2T) retrieval; otherwise, it is text-to-image (T2I) retrieval. In the following, the definitions of I2T and T2I retrieval are formulated.
Definition 1.
Cross-Modal Retrieval. Given a multimedia dataset $\mathcal{D} = \{(I_i, T_i, L_i)\}_{i=1}^{n}$ and two queries $Q^I$ and $Q^T$, the I2T retrieval is to return a set of results
$$\mathcal{R}_{I2T} = \left\{ T_j \,\middle|\, Sim(T_j, Q^I) \geq Sim(T, Q^I),\; T_j \in \mathcal{D},\; T \in \mathcal{D} \setminus \mathcal{R}_{I2T} \right\}_{j=1}^{k},$$
where $Sim(\cdot)$ denotes the similarity function and $k$ is the number of results. The T2I retrieval result set $\mathcal{R}_{T2I}$ is defined symmetrically by exchanging the roles of images and texts.
Apparently, Definition 1 indicates that the key problem of cross-modal retrieval is to realize the function $Sim(\cdot)$. However, due to the heterogeneity gap and the semantic gap, it is hard to measure the semantic similarity between instances of different modalities in their original spaces. Therefore, two non-linear mappings $\Phi_I(\cdot): \mathbb{R}^{\lambda_I} \rightarrow \mathbb{R}^{\lambda_C}$ and $\Phi_T(\cdot): \mathbb{R}^{\lambda_T} \rightarrow \mathbb{R}^{\lambda_C}$ need to be learned to transform images and texts into a $\lambda_C$-dimensional common semantic subspace. Thus, the heterogeneity of different modalities can be diminished and the cross-modal representations can be described by a set of semantic concepts $\mathcal{C} = \{C_l\}_{l=1}^{\lambda_C}$. As a result, cross-modal similarity can be measured accurately by the following function.
Definition 2.
Cross-Modal Similarity Function. Given a multimedia dataset $\mathcal{D}$, an image $I \in \mathcal{D}$ and a text $T \in \mathcal{D}$, the cross-modal similarity between $I$ and $T$ is defined as
$$Sim(I, T) = \frac{\sum_{i=1}^{\lambda_C} \Phi_I(I)^{(i)} \times \Phi_T(T)^{(i)}}{\sqrt{\sum_{i=1}^{\lambda_C} \big(\Phi_I(I)^{(i)}\big)^2} \times \sqrt{\sum_{i=1}^{\lambda_C} \big(\Phi_T(T)^{(i)}\big)^2}},$$
where $\Phi_I(I)$ and $\Phi_T(T)$ denote the cross-modal representations in the common semantic subspace, and $\Phi_I(I)^{(i)}$ and $\Phi_T(T)^{(i)}$ are the $i$-th elements of the respective representation vectors.
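To make Definitions 1 and 2 concrete, the following minimal PyTorch sketch (not part of the original paper) scores and ranks candidates in the common subspace; the callables phi_I and phi_T stand in for the learned mappings $\Phi_I(\cdot)$ and $\Phi_T(\cdot)$, and all tensor shapes are illustrative assumptions.

import torch
import torch.nn.functional as F

def cross_modal_similarity(phi_I, phi_T, image, text):
    # Cosine similarity in the learned common subspace (Definition 2).
    # phi_I and phi_T are assumed to be callables returning lambda_C-dimensional vectors.
    u = phi_I(image)   # image representation in the common subspace
    v = phi_T(text)    # text representation in the common subspace
    return F.cosine_similarity(u.unsqueeze(0), v.unsqueeze(0)).item()

def i2t_retrieval(phi_I, phi_T, query_image, candidate_texts, k=10):
    # Return the indices of the top-k candidate texts for an image query (I2T);
    # T2I retrieval is obtained by swapping the roles of the two mappings.
    q = F.normalize(phi_I(query_image), dim=-1)
    c = F.normalize(torch.stack([phi_T(t) for t in candidate_texts]), dim=-1)
    scores = c @ q   # cosine similarity of each candidate with the query
    return torch.topk(scores, k=min(k, len(candidate_texts))).indices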
To learn these two non-linear mappings, we propose a deep architecture by using adversarial learning, which generates modality-invariant representations from multi-modality data and realizes cross-modal semantic augmentation via a dual attention mechanism.

3.2. Review of Generative Adversarial Networks

As a powerful technique, generative adversarial networks (GANs) [55] have been utilized in many multimedia tasks, such as image synthesis, video generation, motion generation, face aging, etc. A GAN consists of two components: a generator $G(\cdot; \theta_G)$ and a discriminator $D(\cdot; \theta_D)$, where $\theta_G$ and $\theta_D$ are the model parameter vectors. During training, the generator $G(\cdot; \theta_G)$ tries to make synthetic images more realistic to fool the discriminator $D(\cdot; \theta_D)$, while the discriminator $D(\cdot; \theta_D)$ strives to distinguish fake samples from real samples. In other words, $G(\cdot; \theta_G)$ and $D(\cdot; \theta_D)$ are diametrically opposed to each other.
Specifically, let $I$ be a real image sample obeying the natural data distribution $P_{data}(I)$, and $z \in \mathbb{R}^{\lambda_z}$ be a random noise vector generated from the distribution $P_z(z)$. After being fed into the generator $G(\cdot; \theta_G)$, $z$ is transformed into a synthetic sample $G(z; \theta_G)$ that obeys the generative distribution $P_G$. The discriminator receives the real sample $I$ and the synthetic sample $G(z; \theta_G)$ as inputs and outputs the discriminant result $D(G(z; \theta_G); \theta_D)$, the probability that the input is a real sample rather than one produced by the generator. This adversarial process can be formulated as
$$\arg \min_{G(\cdot;\theta_G)} \max_{D(\cdot;\theta_D)} \mathcal{L}_{GAN}\big(G(\cdot;\theta_G), D(\cdot;\theta_D)\big) = \mathbb{E}_{I \sim P_{data}(I)}\big[\log D(I;\theta_D)\big] + \mathbb{E}_{z \sim P_z(z)}\big[\log\big(1 - D(G(z;\theta_G);\theta_D)\big)\big],$$
where $\mathbb{E}_{I \sim P_{data}(I)}[\cdot]$ and $\mathbb{E}_{z \sim P_z(z)}[\cdot]$ denote mathematical expectations:
$$\mathbb{E}_{I \sim P_{data}(I)}\big[\log D(I;\theta_D)\big] = \int_{I} P_{data}(I) \log D(I;\theta_D)\, dI,$$
$$\mathbb{E}_{z \sim P_z(z)}\big[\log\big(1 - D(G(z;\theta_G);\theta_D)\big)\big] = \int_{z} P_z(z) \log\big(1 - D(G(z;\theta_G);\theta_D)\big)\, dz.$$
In the training process, the generator $G(\cdot;\theta_G)$ synthesizes images as authentic as possible to fool the discriminator $D(\cdot;\theta_D)$ by minimizing the loss function, while the discriminator $D(\cdot;\theta_D)$ does its utmost to distinguish fake samples from real samples by maximizing the loss function, shown as follows:
$$\arg \min_{G(\cdot;\theta_G)} \mathcal{L}_{GAN}\big(G(\cdot;\theta_G), D(\cdot;\theta_D)\big) = \int_{z} P_z(z) \log\big(1 - D(G(z;\theta_G);\theta_D)\big)\, dz,$$
$$\arg \max_{D(\cdot;\theta_D)} \mathcal{L}_{GAN}\big(G(\cdot;\theta_G), D(\cdot;\theta_D)\big) = \int_{I} P_{data}(I) \log D(I;\theta_D)\, dI + \int_{z} P_z(z) \log\big(1 - D(G(z;\theta_G);\theta_D)\big)\, dz.$$
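The sketch below is a minimal PyTorch rendering of the alternating updates described above, included only to make the minimax game concrete. The network sizes, the noise dimension, and the non-saturating generator loss (a common practical substitute for directly minimizing $\log(1 - D(G(z)))$) are assumptions, not details of the reviewed formulation.

import torch
import torch.nn as nn

z_dim, x_dim = 100, 784   # illustrative noise and data dimensions
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
D = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def gan_step(real_batch):
    m = real_batch.size(0)
    ones, zeros = torch.ones(m, 1), torch.zeros(m, 1)
    # Discriminator ascent: maximize log D(I) + log(1 - D(G(z))).
    z = torch.randn(m, z_dim)
    d_loss = bce(D(real_batch), ones) + bce(D(G(z).detach()), zeros)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()
    # Generator step: try to fool the discriminator (non-saturating surrogate).
    z = torch.randn(m, z_dim)
    g_loss = bce(D(G(z)), ones)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()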

4. Methodology

In this section, we discuss the proposed Dual Attention Generative Adversarial Network (DA-GAN). This method learns cross-modal non-linear mappings in an adversarial manner, in which a dual attention mechanism is developed to mine important semantic details to bridge the heterogeneity gap and the semantic gap. Section 4.1 gives an overview of DA-GAN, while Section 4.2 and Section 4.3 discuss multi-modal feature learning and adversarial learning with the dual attention mechanism. The implementation details are described in Section 4.4.

4.1. Overview of DA-GAN

Figure 2 illustrates the framework of DA-GAN. It consists of three layers: the input layer, generation layer, and discrimination layer.
The Input Layer. The input layer is responsible for training data preparation. To capture more semantic knowledge, two types of samples are selected from the training dataset. The one type is the image-text sample pairs $\{(I_i, T_i, L_i)\}_{i=1}^{n}$; the other is a group of images $\{(I_j, L_i)\}_{j=1}^{m}$ and a group of texts $\{(T_j, L_i)\}_{j=1}^{m}$ that share the same semantic label. They are fed into the generation layer to produce the common semantic representations.
The Generation Layer. The generation layer is a deep cross-modal generative model with intra-modal attention (intra-attention) and inter-modal attention (inter-attention). Specifically, the visual and textual features are extracted by a two-channel multi-modal feature learning model, ImgCNN$(\cdot; \theta_{Fea}^I)$ and TxtCNN$(\cdot; \theta_{Fea}^T)$, one channel per modality, where $\theta_{Fea}^I$ and $\theta_{Fea}^T$ denote parameter vectors. For the image modality, it consists of several convolutional layers, which generate the visual convolutional representations $\xi_I^{(i)}$ and $\bar{\xi}_I^{(i)}$ of the inputs $I_i$ and $\{I_j\}_{j=1}^{m}$, respectively. For the text modality, the feature learning model consists of a word2vec model that produces word embeddings, followed by a combination of a bidirectional LSTM [63] (BiLSTM) and a textual CNN that outputs the textual convolutional representations $\xi_T^{(i)}$ and $\bar{\xi}_T^{(i)}$. A two-channel intra-attention model (one channel per modality) is proposed to capture the important semantic details of each category. It receives the convolutional representation pairs $(\xi_I^{(i)}, \bar{\xi}_I^{(i)})$ and $(\xi_T^{(i)}, \bar{\xi}_T^{(i)})$, generates intra-attention masks for both image and text, and outputs the attention-aware representations $\hat{\xi}_I^{(i)}$ and $\hat{\xi}_T^{(i)}$. To narrow the heterogeneity gap, a two-channel encoder with a weight-sharing strategy over the two branches follows the intra-attention model. Under the weight-sharing constraint, it generates $\lambda_C$-dimensional visual and textual representations $F_I^{(i)} \in \mathbb{R}^{\lambda_C}$ and $F_T^{(i)} \in \mathbb{R}^{\lambda_C}$, which are fed into an inter-attention model to realize cross-modal semantic feature augmentation. In addition, a two-channel decoder (one channel per modality) is employed to reconstruct the image and text representations $\zeta_I^{(i)}$ and $\zeta_T^{(i)}$ from the distribution-consistent representations $F_I^{(i)}$ and $F_T^{(i)}$.
The Discrimination Layer. In the discrimination layer, there are three types of discriminators, i.e., the semantic category discriminator $D_S(\cdot; \theta_S)$, the intra-modal discriminator $D_{Intra}(\cdot; \theta_{Intra})$, and the inter-modal discriminator $D_{Inter}(\cdot; \theta_{Inter})$, which conduct semantic, intra-modality, and inter-modality discrimination, respectively. $D_S(\cdot; \theta_S)$ and $D_{Intra}(\cdot; \theta_{Intra})$ are two-channel models (one channel per modality). The former predicts the semantic labels of the convolutional representations $\xi_I^{(i)}$ and $\xi_T^{(i)}$, as well as the common semantic representations $F_I^{(i)}$ and $F_T^{(i)}$, via the semantic discrimination loss. The latter distinguishes the reconstructed representations $\zeta_I^{(i)}$ and $\zeta_T^{(i)}$ from the convolutional representations $\xi_I^{(i)}$ and $\xi_T^{(i)}$ via the intra-modality discrimination loss. The inter-modal discriminator $D_{Inter}(\cdot; \theta_{Inter})$ aims to discriminate the outputs of the inter-attention model, i.e., $F_I^{(i)}$ and $F_T^{(i)}$, from the image and text modalities.

4.2. Multi-Modal Feature Learning

The multi-modal feature learning model consists of two channels: a visual feature learning model ImgCNN$(\cdot; \theta_{Fea}^I)$ and a textual feature learning model TxtCNN$(\cdot; \theta_{Fea}^T)$, which generate the convolutional representations of image and text samples.

4.2.1. Visual Feature Learning

The visual feature learning model projects visual samples from the original data space into the convolutional feature space. Formally, $\xi_I^{(i)} = \text{ImgCNN}(I_i; \theta_{Fea}^I)$, $\xi_I^{(i)} = (\xi_I^{(i)}(1), \xi_I^{(i)}(2), \ldots, \xi_I^{(i)}(\gamma)) \in \mathbb{R}^{\gamma}$. We use a pre-trained AlexNet [64] to implement visual feature learning and refine this model on the training dataset via a squared loss. Suppose the training set $\mathcal{D} = \{(I_i, T_i, L_i)\}_{i=1}^{n}$ contains $n$ image samples; the ground-truth probability vector of the $i$-th sample is denoted as $p(I_i) = L_i / \|L_i\|_1$, where $\|\cdot\|_1$ is the L1 norm, and the predicted probability vector is $\hat{p}(I_i) = (\hat{p}_i(1), \hat{p}_i(2), \ldots, \hat{p}_i(\lambda_L))$. Thus, the objective function is
$$\arg \min_{\theta_{Fea}^I} \mathcal{L}_{Fine}\big(\theta_{Fea}^I\big) = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{\lambda_L} \big(p_i(j) - \hat{p}_i(j)\big)^2.$$
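As a rough sketch (not the authors' code), this fine-tuning step can be written as follows; the use of torchvision's AlexNet, a softmax over the replaced 4096-to-$\lambda_L$ layer to obtain the predicted probabilities, and the illustrative hyperparameters are assumptions.

import torch
import torch.nn as nn
from torchvision import models

num_categories = 10   # lambda_L, dataset dependent
backbone = models.alexnet(weights="IMAGENET1K_V1")
backbone.classifier[6] = nn.Linear(4096, num_categories)   # replace the last FC layer

def squared_loss(logits, labels):
    # Squared loss between predicted and ground-truth probability vectors,
    # with p(I_i) = L_i / ||L_i||_1 as defined above.
    p_true = labels / labels.sum(dim=1, keepdim=True).clamp(min=1.0)
    p_pred = torch.softmax(logits, dim=1)
    return ((p_pred - p_true) ** 2).sum(dim=1).mean()

optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-4)
images = torch.randn(8, 3, 227, 227)   # a random 227 x 227 mini-batch stands in for real patches
labels = torch.zeros(8, num_categories)
labels[torch.arange(8), torch.randint(0, num_categories, (8,))] = 1.0
loss = squared_loss(backbone(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()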

4.2.2. Textual Feature Learning

The textual feature learning model is a combination of a Word2Vec model, a BiLSTM model, and a textual convolutional network [65]. It generates textual convolutional representations, i.e., $\xi_T^{(i)} = \text{TxtCNN}(T_i; \theta_{Fea}^T)$, $\xi_T^{(i)} = (\xi_T^{(i)}(1), \xi_T^{(i)}(2), \ldots, \xi_T^{(i)}(\gamma)) \in \mathbb{R}^{\gamma}$. More concretely, a Word2Vec model $\text{Word2Vec}(\cdot; \theta_{w2v})$ generates an $\epsilon$-dimensional word embedding $w_j \in \mathbb{R}^{\epsilon}$ for each word in $T_i$. Suppose the length of each text sample $T_i \in \mathcal{D}$ is $l$ (padded if necessary); then its embedding is denoted as
$$E_i(1, l) = \text{Word2Vec}\big(T_i; \theta_{w2v}\big) = w_1 \bowtie w_2 \bowtie \cdots \bowtie w_l,$$
where $\bowtie$ denotes the vector concatenation operator. The word embeddings are fed into a BiLSTM model to encode the contextual semantic information from both the previous and future context in the forward and reverse directions, $h(t) = \text{BiLSTM}(E(1, l); \theta_{Bi})$, $h(t) \in \mathbb{R}^{\lambda_B}$.
The following textual CNN model receives $h(t)$ at time $t$ and encodes local semantic information. Let the convolutional kernels be $\{K_j\}_{j=1}^{\kappa}$ with size $\lambda_B \times m$. For the $d$-th window of the input vector covered by the $j$-th kernel $K_j$, namely $(h(t), h(t+1), \ldots, h(t+m-1))$, the value of the convolution is:
$$\hat{h}_j^{(d)}(t) = \sigma\left(\sum_{i=0}^{m-1} h(t+i) * K_j + \beta\right),$$
where $\sigma(\cdot): \mathbb{R} \rightarrow \mathbb{R}$ denotes an activation function, $*$ denotes the convolutional operator, and $\beta$ is a bias term. For the $j$-th kernel, the result of the convolution over all windows of the vector $h(t)$ is
$$\hat{h}_j(t) = \left(\hat{h}_j^{(1)}(t), \hat{h}_j^{(2)}(t), \ldots, \hat{h}_j^{(l-m+1)}(t)\right).$$
Then, a max pooling operation is conducted on all the vectors $(\hat{h}_1(t), \hat{h}_2(t), \ldots, \hat{h}_\kappa(t))$ as follows:
$$\big(\dot{h}_1(t), \dot{h}_2(t), \ldots, \dot{h}_\kappa(t)\big) = \text{MaxPooling}\big(\hat{h}_1(t), \hat{h}_2(t), \ldots, \hat{h}_\kappa(t)\big) = \big(max(\hat{h}_1(t)), max(\hat{h}_2(t)), \ldots, max(\hat{h}_\kappa(t))\big),$$
where $max(\cdot)$ is the function that chooses the maximal element of a vector. This $\kappa$-dimensional vector is fed into the last FC layer with drop-out to restrain over-fitting:
$$\big(\xi_T^{(i)}(1), \xi_T^{(i)}(2), \ldots, \xi_T^{(i)}(\gamma)\big) = W_{fc} \times \big(\dot{h}_1(t), \dot{h}_2(t), \ldots, \dot{h}_\kappa(t)\big) \odot \Omega + \beta,$$
where $W_{fc}$ denotes the parameters of the FC layer, $\beta$ is the bias term, $\odot$ denotes the element-wise multiplication operator, and $\Omega$ is a mask to realize drop-out.
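A compact sketch of this Word2Vec to BiLSTM to textual-CNN pipeline is given below; the hidden size, the number of kernels, and the initialization of the embedding table from a pre-trained Word2Vec model are assumptions made for illustration.

import torch
import torch.nn as nn

class TxtCNN(nn.Module):
    # Word embeddings -> BiLSTM -> 1-D convolution -> max pooling -> FC with dropout.
    def __init__(self, vocab_size, emb_dim=300, hidden=512, kernels=256,
                 window=3, out_dim=4096, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # load Word2Vec vectors here in practice
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, kernels, kernel_size=window)
        self.fc = nn.Sequential(nn.Dropout(dropout), nn.Linear(kernels, out_dim))

    def forward(self, token_ids):                        # token_ids: (batch, l), with l >= window
        e = self.embed(token_ids)                        # (batch, l, emb_dim)
        h, _ = self.bilstm(e)                            # (batch, l, 2 * hidden)
        c = torch.relu(self.conv(h.transpose(1, 2)))     # (batch, kernels, l - window + 1)
        pooled = c.max(dim=2).values                     # max pooling over windows
        return self.fc(pooled)                           # (batch, out_dim) convolutional representation

# Example: xi_T = TxtCNN(vocab_size=30000)(torch.randint(0, 30000, (4, 20)))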

4.2.3. Semantic Grouping of Samples

As described in Section 4.1, for each pair $(I_i, T_i, L_i)$, the input layer produces a group of images and a group of texts that belong to the same semantic category as $(I_i, T_i, L_i)$. In other words, it randomly samples $\alpha$ images $\{(I_j, L_i)\}_{j=1}^{\alpha}$ and $\alpha$ texts $\{(T_j, L_i)\}_{j=1}^{\alpha}$ according to the semantic label $L_i$ from the training set $\mathcal{D}$. These two groups are then fed into the visual and textual feature learning models, respectively, i.e.,
$$\{\xi_I^{(i)}(j)\}_{j=1}^{\alpha} = \text{ImgCNN}\big(\{I_j\}_{j=1}^{\alpha}; \theta_{Fea}^I\big), \quad \{\xi_T^{(i)}(j)\}_{j=1}^{\alpha} = \text{TxtCNN}\big(\{T_j\}_{j=1}^{\alpha}; \theta_{Fea}^T\big).$$
The final convolutional representations of the two groups are the averages of the individual representations, i.e.,
$$\bar{\xi}_I^{(i)} = \frac{1}{\alpha} \sum_{j=1}^{\alpha} \xi_I^{(i)}(j), \quad \bar{\xi}_T^{(i)} = \frac{1}{\alpha} \sum_{j=1}^{\alpha} \xi_T^{(i)}(j).$$
In this work, $\bar{\xi}_I^{(i)}$ and $\bar{\xi}_T^{(i)}$ are used to represent the common semantic features of the category labeled by $L_i$.
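The grouping step itself is a simple averaging of same-category representations; a short sketch is given below, where the feature_model callable and the choice to stop gradients through the group branch are assumptions.

import torch

def group_representation(feature_model, samples):
    # Average the convolutional representations of the alpha same-category
    # samples to obtain the group feature (xi-bar) used by intra-attention.
    with torch.no_grad():   # assumption: no gradient flows through the group branch
        feats = torch.stack([feature_model(s.unsqueeze(0)).squeeze(0) for s in samples])
    return feats.mean(dim=0)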

4.3. Adversarial Learning with Dual Attention

In DA-GAN, a novel dual attention mechanism is proposed to learn more discriminative representations by modeling intra-modal and inter-modal semantic correlations with two attention models: intra-attention and inter-attention. In addition, three types of discriminative models are integrated into the framework to achieve modality-invariant representations in an adversarial manner.

4.3.1. Intra-Attention

The intra-attention model aims to learn more discriminative feature representations by modeling intra-modal semantic correlations. In our method, it is a two-channel model, one channel per modality. Since images and texts are processed in the same way, we take the image intra-attention as an example. Consider the feature representation pair $(\xi_I^{(i)}, \bar{\xi}_I^{(i)})$, $\xi_I^{(i)}, \bar{\xi}_I^{(i)} \in \mathbb{R}^{x \times y \times d}$, where $x$, $y$, and $d$ denote the width, height, and depth of the tensors. For convenience of discussion, we reshape these two tensors as $\xi_I^{(i)} = (\xi_I^{(i)}(1), \xi_I^{(i)}(2), \ldots, \xi_I^{(i)}(p))$ and $\bar{\xi}_I^{(i)} = (\bar{\xi}_I^{(i)}(1), \bar{\xi}_I^{(i)}(2), \ldots, \bar{\xi}_I^{(i)}(p))$, where $p = x \times y$ is the number of spatial positions of each tensor. The semantic correlation between $\xi_I^{(i)}$ and $\bar{\xi}_I^{(i)}$ can be modeled by the semantic correlation matrix $M_I^{(i)} \in \mathbb{R}^{p \times p}$:
$$M_I^{(i)} = \begin{pmatrix} M_I(1)(1) & M_I(1)(2) & \cdots & M_I(1)(p) \\ M_I(2)(1) & M_I(2)(2) & \cdots & M_I(2)(p) \\ \vdots & \vdots & \ddots & \vdots \\ M_I(p)(1) & M_I(p)(2) & \cdots & M_I(p)(p) \end{pmatrix}^{(i)}, \quad M_I(j)(k) = \xi_I^{(i)} \otimes \bar{\xi}_I^{(i)} = \frac{\xi_I^{(i)}(j)}{\big\|\xi_I^{(i)}(j)\big\|_2} \cdot \frac{\bar{\xi}_I^{(i)}(k)}{\big\|\bar{\xi}_I^{(i)}(k)\big\|_2}, \quad j, k = 1, 2, \ldots, p,$$
where $\|\cdot\|_2$ is the L2 norm and the notation $\otimes$ is called semantic correlation multiplication. Obviously, $M_I^{(i)}$ encodes the semantic correlation between the single sample $I_i$ and the corresponding group $\{I_j\}_{j=1}^{\alpha}$. We reshape it in the following form:
$$M_I^{(i)} = \big(m_I^{(i)}(1), m_I^{(i)}(2), \ldots, m_I^{(i)}(p)\big),$$
where $m_I^{(i)}(j) \in \mathbb{R}^{p}$ is the encoding of the semantic correlation between the local single-sample feature representation $\xi_I^{(i)}(j)$ and all the group-sample feature representations $\{\bar{\xi}_I^{(i)}(k)\}_{k=1}^{p}$. Therefore, the local semantic correlation between a specific feature representation $\xi_I^{(i)}$ and the average semantic representation $\bar{\xi}_I^{(i)}$ of the corresponding category can be measured directly.
The intra-attention map $A_I^{(i)}$ is generated from the semantic correlation matrix $M_I^{(i)}$ by learning a convolutional operation that fuses the semantic correlations between the local single-sample feature vector $\xi_I^{(i)}(j)$ and all the group-sample features $\{\bar{\xi}_I^{(i)}(k)\}_{k=1}^{p}$. Specifically, let $K_I^{(i)} \in \mathbb{R}^{p \times 1}$ be the convolutional kernel, which is learned from the inputs $(\xi_I^{(i)}, \bar{\xi}_I^{(i)})$ by meta learning as follows:
$$K_I^{(i)} = W_2 \times \sigma\left(W_1 \times \left(\frac{1}{p}\sum_{j=1}^{p} M_I(1)(j),\; \frac{1}{p}\sum_{j=1}^{p} M_I(2)(j),\; \ldots,\; \frac{1}{p}\sum_{j=1}^{p} M_I(p)(j)\right)\right),$$
where $W_1$ and $W_2$ denote the model parameter matrices and $\sigma(\cdot)$ is a non-linear activation function; here we employ the ReLU function. Then a softmax operation is conducted on the convolution result to generate the intra-attention map $A_I^{(i)} \in \mathbb{R}^{x \times y}$:
$$A_I^{(i)} = \big(A_I^{(i)}(1), A_I^{(i)}(2), \ldots, A_I^{(i)}(p)\big), \quad A_I^{(i)}(j) = \frac{\exp\left(\frac{1}{\Gamma} K_I^{(i)} \times m_I^{(i)}(j)\right)}{\sum_{j=1}^{p} \exp\left(\frac{1}{\Gamma} K_I^{(i)} \times m_I^{(i)}(j)\right)},$$
where $\Gamma$ is the temperature hyperparameter that influences the entropy. In the same way, the intra-attention map of the text modality $A_T^{(i)} \in \mathbb{R}^{x \times y}$ is obtained. Finally, a residual attention mechanism is utilized to calculate the results for both modalities:
$$\hat{\xi}_I^{(i)} = \xi_I^{(i)} \odot \big(1 + A_I^{(i)}\big), \quad \hat{\xi}_T^{(i)} = \xi_T^{(i)} \odot \big(1 + A_T^{(i)}\big),$$
where $\odot$ is the element-wise multiplication.
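The sketch below renders the intra-attention computation (correlation matrix, kernel from row means, temperature softmax of Equation (16), residual re-weighting) in PyTorch. Treating $W_1$ and $W_2$ as small linear layers, the hidden width, and the temperature value are assumptions; the inputs are the position-wise feature matrices described above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class IntraAttention(nn.Module):
    def __init__(self, p, hidden=128, temperature=1.0):
        super().__init__()
        self.w1 = nn.Linear(p, hidden)        # W_1 in Equation (15)
        self.w2 = nn.Linear(hidden, p)        # W_2 produces the p-dimensional kernel K
        self.temperature = temperature        # Gamma in Equation (16)

    def forward(self, xi, xi_bar):
        # xi, xi_bar: (p, d) matrices of position-wise features, p = x * y.
        xi_n = F.normalize(xi, dim=1)
        bar_n = F.normalize(xi_bar, dim=1)
        M = xi_n @ bar_n.t()                                   # (p, p) semantic correlation matrix
        K = self.w2(torch.relu(self.w1(M.mean(dim=1))))        # kernel learned from row means
        A = torch.softmax((M @ K) / self.temperature, dim=0)   # attention weight per position
        return xi * (1 + A.unsqueeze(1))                       # residual attention re-weighting

The same module, applied once per channel, yields the attention-aware representations for the image and text modalities.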
Following the intra-attention model, a two-channel encoder, $E(\cdot; \theta_{Enc}^I)$ and $E(\cdot; \theta_{Enc}^T)$, generates the common representations $F_I^{(i)}$ and $F_T^{(i)}$ from $\hat{\xi}_I^{(i)}$ and $\hat{\xi}_T^{(i)}$. In this model, a weight-sharing constraint is applied to the last few layers to learn the cross-modal consistent joint distribution, which diminishes heterogeneity effectively.

4.3.2. Inter-Attention

To realize semantic augmentation in the common representation subspace, an inter-attention model is designed to learn the semantic relationship between image and text, i.e.,
$$\big(\hat{F}_I^{(i)}, \hat{F}_T^{(i)}\big) = \text{InterAtt}\big(F_I^{(i)}, F_T^{(i)}; \theta_{Inter}\big).$$
Similar to the intra-attention mechanism, it calculates the cross-modal semantic correlation matrix $U^{(i)}$ from $F_I^{(i)}$ and $F_T^{(i)}$:
$$U(j)(k) = F_I^{(i)} \otimes F_T^{(i)} = \frac{F_I^{(i)}(j)}{\big\|F_I^{(i)}(j)\big\|_2} \cdot \frac{F_T^{(i)}(k)}{\big\|F_T^{(i)}(k)\big\|_2}, \quad j, k = 1, 2, \ldots, p,$$
and then generates two correlation matrices:
$$U_I^{(i)} = U^{(i)} = \big(u_I^{(i)}(1), u_I^{(i)}(2), \ldots, u_I^{(i)}(p)\big), \quad U_T^{(i)} = \big(U^{(i)}\big)^{\top} = \big(u_T^{(i)}(1), u_T^{(i)}(2), \ldots, u_T^{(i)}(p)\big).$$
Similar to Equation (14), $u_I^{(i)}(j) \in \mathbb{R}^{p}$ encodes the semantic correlation between the local image feature vector $F_I^{(i)}(j)$ at the $j$-th position and all the text feature vectors $\{F_T^{(i)}(k)\}_{k=1}^{p}$, while $u_T^{(i)}(j) \in \mathbb{R}^{p}$ encodes the semantic correlation between the local text feature vector $F_T^{(i)}(j)$ at the $j$-th position and all the image feature vectors $\{F_I^{(i)}(k)\}_{k=1}^{p}$. Then, two convolutional kernels $\hat{K}_I^{(i)}$ and $\hat{K}_T^{(i)}$ are learned in the same way as Equation (15), and the inter-attention maps $\hat{A}_I^{(i)}$ and $\hat{A}_T^{(i)}$ for image and text are obtained by Equation (16). Thus, more discriminative cross-modal representations $\hat{F}_I^{(i)}$ and $\hat{F}_T^{(i)}$ are achieved by the residual attention mechanism.
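Under the assumption that $F_I^{(i)}$ and $F_T^{(i)}$ are likewise arranged as position-wise feature matrices, inter-attention can reuse the same correlation, kernel, softmax, and residual steps, with each modality attending over the other. A sketch using the IntraAttention module from the previous subsection (the module reuse itself is an assumption):

def inter_attention(F_I, F_T, attn_I, attn_T):
    # attn_I and attn_T are IntraAttention-style modules: each channel builds
    # its correlation matrix against the other modality's features.
    F_I_hat = attn_I(F_I, F_T)   # image features re-weighted by cross-modal correlation
    F_T_hat = attn_T(F_T, F_I)   # text features re-weighted by cross-modal correlation
    return F_I_hat, F_T_hat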

4.3.3. Discriminative Model

Three types of discrimination models are integrated into the DA-GAN framework: (1) a semantic discriminator $D_S(\cdot; \theta_S)$ to realize semantic discrimination, (2) a two-channel intra-modal discriminator $D_I(\cdot; \theta_D^I)$ and $D_T(\cdot; \theta_D^T)$, and (3) a two-channel inter-modal discriminator $\hat{D}_I(\cdot; \hat{\theta}_D^I)$ and $\hat{D}_T(\cdot; \hat{\theta}_D^T)$ to realize intra-modal and inter-modal adversarial learning.
Semantic Discriminator. The semantic discriminator $D_S(\cdot; \theta_S)$ is used to recognize the semantic category of an instance in the common semantic representation subspace. To this end, a two-channel network with a softmax function is added on top of the inter-attention model (one channel per modality), which takes $F_I^{(i)}$ and $F_T^{(i)}$ as inputs and outputs the predicted probability distributions $P_I(F_I^{(i)})$ and $P_T(F_T^{(i)})$ to calculate the semantic discrimination loss:
$$\mathcal{L}_{Sem}\big(\theta_S\big) = -\frac{1}{m} \sum_{i=1}^{m} L_i \cdot \Big(\log P_I\big(F_I^{(i)}\big) + \log P_T\big(F_T^{(i)}\big)\Big),$$
where $\theta_S = (\theta_G^I, \theta_G^T, \theta_C)$ denotes the parameter vector of this model, $\theta_C$ is the parameter vector of the classifier, and $\theta_G^I$ and $\theta_G^T$ denote the parameter vectors of the image and text generation models, respectively, i.e., $\theta_G^I = (\theta_{Fea}^I, \theta_{Intra}^I, \theta_{Enc}^I, \theta_{Inter}^I)$ and $\theta_G^T = (\theta_{Fea}^T, \theta_{Intra}^T, \theta_{Enc}^T, \theta_{Inter}^T)$.
Intra-Modal Discriminator. The intra-modal discriminator tries to distinguish the real representations $\xi_I^{(i)}$ ($\xi_T^{(i)}$) from the intra-attention branch and the synthetic representations $\zeta_I^{(i)}$ ($\zeta_T^{(i)}$) from the decoder. For simplicity, we denote this branch network as $GAN_1$, whose objective function is:
$$\arg \min_{G_I, G_T} \max_{D_I, D_T} \mathcal{L}_{GAN_1}\big(\theta_G^I, \theta_G^T, \theta_D^I, \theta_D^T\big) = \mathbb{E}_{I \sim P_I(I)}\big[\log D_I\big(I; \theta_D^I\big)\big] + \mathbb{E}_{I \sim P_I(I)}\big[\log\big(1 - D_I\big(G_I\big(I; \theta_G^I\big); \theta_D^I\big)\big)\big] + \mathbb{E}_{T \sim P_T(T)}\big[\log D_T\big(T; \theta_D^T\big)\big] + \mathbb{E}_{T \sim P_T(T)}\big[\log\big(1 - D_T\big(G_T\big(T; \theta_G^T\big); \theta_D^T\big)\big)\big].$$
Inter-Modal Discriminator. Similar to the intra-modal discriminator, the inter-modal discriminator has two channels. The subnetwork for the image modality recognizes the visual common representation as the real sample; by contrast, the subnetwork for the text modality recognizes the textual common representation as the real sample. This branch of the adversarial network is denoted as $GAN_2$. The objective function is:
$$\arg \min_{G_I, G_T} \max_{\hat{D}_I, \hat{D}_T} \mathcal{L}_{GAN_2}\big(\theta_G^I, \theta_G^T, \hat{\theta}_D^I, \hat{\theta}_D^T\big) = \mathbb{E}_{(I, T) \sim (P_I(I), P_T(T))}\Big[\log \hat{D}_I\big(G_I\big(I; \theta_G^I\big); \hat{\theta}_D^I\big) - \log \hat{D}_I\big(G_T\big(T; \theta_G^T\big); \hat{\theta}_D^I\big) + \log \hat{D}_T\big(G_T\big(T; \theta_G^T\big); \hat{\theta}_D^T\big) - \log \hat{D}_T\big(G_I\big(I; \theta_G^I\big); \hat{\theta}_D^T\big)\Big].$$
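To make the two adversarial branches concrete, the sketch below writes the per-modality discriminator objectives of $GAN_1$ and $GAN_2$ as binary cross-entropy terms, a standard surrogate for the log-likelihood updates of Equations (26)-(29). The helper names, the assumption that the discriminator outputs are sigmoid probabilities, the omission of the extra conditioning on $\xi$ used in Equations (28)-(29), and the decision to detach the generator outputs inside these discriminator-only losses are all assumptions.

import torch
import torch.nn.functional as F

def gan1_d_loss(D_intra, xi, zeta):
    # Intra-modal discriminator term for one modality: the convolutional
    # representation xi is treated as real, the reconstruction zeta as fake.
    real_score = D_intra(xi.detach())      # detached: this loss only updates the discriminator
    fake_score = D_intra(zeta.detach())
    return (F.binary_cross_entropy(real_score, torch.ones_like(real_score))
            + F.binary_cross_entropy(fake_score, torch.zeros_like(fake_score)))

def gan2_d_loss(D_hat, F_own, F_other):
    # Inter-modal discriminator term for one channel: the common representation
    # of the channel's own modality is real, the other modality's is fake.
    real_score = D_hat(F_own.detach())
    fake_score = D_hat(F_other.detach())
    return (F.binary_cross_entropy(real_score, torch.ones_like(real_score))
            + F.binary_cross_entropy(fake_score, torch.zeros_like(fake_score)))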

4.3.4. Optimization

According to the above discussion, the DA-GAN model can be optimized by the following objective functions:
$$\arg \min_{G_I, G_T} \max_{D_I, D_T, \hat{D}_I, \hat{D}_T} \mathcal{L}_{GAN_1}\big(\theta_G^I, \theta_G^T, \theta_D^I, \theta_D^T\big) + \mathcal{L}_{GAN_2}\big(\theta_G^I, \theta_G^T, \hat{\theta}_D^I, \hat{\theta}_D^T\big),$$
$$\arg \min_{\theta_S} \mathcal{L}_{Sem}\big(\theta_S\big).$$
For the discrimination in $GAN_1$, the intra-modal discriminator takes the convolutional representation $\xi_I^{(i)}$ ($\xi_T^{(i)}$) and the reconstructed representation $\zeta_I^{(i)}$ ($\zeta_T^{(i)}$) from the decoder as inputs. It maximizes the log-likelihood for discriminating the real data $\xi_I^{(i)}$ ($\xi_T^{(i)}$) from the synthetic data $\zeta_I^{(i)}$ ($\zeta_T^{(i)}$) by stochastic gradient ascent:
$$\theta_D^I \leftarrow \theta_D^I + \eta \nabla_{\theta_D^I} \frac{1}{m} \sum_{i=1}^{m} \Big(\log D_I\big(\xi_I^{(i)}; \theta_D^I\big) + \log\big(1 - D_I\big(\zeta_I^{(i)}; \theta_D^I\big)\big)\Big),$$
$$\theta_D^T \leftarrow \theta_D^T + \eta \nabla_{\theta_D^T} \frac{1}{m} \sum_{i=1}^{m} \Big(\log D_T\big(\xi_T^{(i)}; \theta_D^T\big) + \log\big(1 - D_T\big(\zeta_T^{(i)}; \theta_D^T\big)\big)\Big).$$
For the discrimination in $GAN_2$, the subnetwork for the image modality receives the image common representation $F_I^{(i)}$ as the real instance and the text common representation $F_T^{(i)}$ as the fake instance. The stochastic gradient ascent is calculated as:
$$\hat{\theta}_D^I \leftarrow \hat{\theta}_D^I + \eta \nabla_{\hat{\theta}_D^I} \frac{1}{m} \sum_{i=1}^{m} \Big(\log \hat{D}_I\big(F_I^{(i)}, \xi_I^{(i)}; \hat{\theta}_D^I\big) + \log\big(1 - \hat{D}_I\big(F_T^{(i)}, \xi_I^{(i)}; \hat{\theta}_D^I\big)\big)\Big),$$
$$\hat{\theta}_D^T \leftarrow \hat{\theta}_D^T + \eta \nabla_{\hat{\theta}_D^T} \frac{1}{m} \sum_{i=1}^{m} \Big(\log \hat{D}_T\big(F_T^{(i)}, \xi_T^{(i)}; \hat{\theta}_D^T\big) + \log\big(1 - \hat{D}_T\big(F_I^{(i)}, \xi_T^{(i)}; \hat{\theta}_D^T\big)\big)\Big).$$
For the two-channel generative model, the aim is to generate data from the original samples that fit the real semantic distribution as closely as possible by minimizing the objective function. Both subnetworks are optimized by stochastic gradient descent (SGD) as follows:
$$\theta_G^I \leftarrow \theta_G^I - \eta \nabla_{\theta_G^I} \frac{1}{m} \sum_{i=1}^{m} \Big(\log \hat{D}_T\big(F_I^{(i)}, \xi_T^{(i)}; \hat{\theta}_D^T\big) + \log D_I\big(\zeta_I^{(i)}; \theta_D^I\big)\Big),$$
$$\theta_G^T \leftarrow \theta_G^T - \eta \nabla_{\theta_G^T} \frac{1}{m} \sum_{i=1}^{m} \Big(\log \hat{D}_I\big(F_T^{(i)}, \xi_I^{(i)}; \hat{\theta}_D^I\big) + \log D_T\big(\zeta_T^{(i)}; \theta_D^T\big)\Big).$$
In addition, the generative model is optimized by the semantic discrimination loss to learn abstract semantic concepts:
$$\theta_S \leftarrow \theta_S - \eta \nabla_{\theta_S}\left(-\frac{1}{m} \sum_{i=1}^{m} L_i \cdot \Big(\log P_I\big(F_I^{(i)}\big) + \log P_T\big(F_T^{(i)}\big)\Big)\right),$$
where $\eta$ denotes the learning rate and $m$ denotes the number of samples in each mini-batch.
The pseudocode for optimizing the proposed model is shown in Algorithm 1. Before training $GAN_1$ and $GAN_2$, we pre-train the multi-modal feature learning model and the intra-attention model for both image and text on the training set, which prevents instability when training $GAN_1$ and $GAN_2$. The minimax game is implemented with Adam [66].
Algorithm 1: Pseudocode of optimizing DA-GAN
1: Initialization: a training set $\mathcal{D} = \{(I_i, T_i, L_i)\}_{i=1}^{n}$, mini-batch size $m$, the number of generative model training steps $k$, learning rate $\eta$.
2: Pre-train ImgCNN$(\cdot; \theta_{Fea}^I)$ and IntraAtt$_I(\cdot; \theta_{Intra}^I)$;
3: Pre-train TxtCNN$(\cdot; \theta_{Fea}^T)$ and IntraAtt$_T(\cdot; \theta_{Intra}^T)$;
4: repeat until convergence:
5:   for k steps do
6:     Update the parameters of the generator for image, $\theta_G^I$, by Equation (30);
7:     Update the parameters of the generator for text, $\theta_G^T$, by Equation (31);
8:     Update the parameters of the generators for both image and text, $\theta_G^I$ and $\theta_G^T$, by Equation (32);
9:   end for
10:   Update the parameters of the intra-modal discriminator for image, $\theta_D^I$, by Equation (26);
11:   Update the parameters of the intra-modal discriminator for text, $\theta_D^T$, by Equation (27);
12:   Update the parameters of the inter-modal discriminator for image, $\hat{\theta}_D^I$, by Equation (28);
13:   Update the parameters of the inter-modal discriminator for text, $\hat{\theta}_D^T$, by Equation (29);
14: Output: the optimized DA-GAN model.
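A compact PyTorch-style rendering of Algorithm 1 is sketched below; it reuses the gan1_d_loss and gan2_d_loss helpers sketched in Section 4.3.3. The module container names, the batch dictionary keys, the classifier argument, and the non-saturating form used for the generator-side adversarial terms are all illustrative assumptions about how Equations (26)-(32) can be realized, not the authors' implementation.

import torch

def train_dagan(generators, classifier, disc, optimizers, batches, k_steps=1):
    # generators(batch) is assumed to return a dict with the representations defined
    # above (xi_I, xi_T, zeta_I, zeta_T, F_I, F_T); disc is assumed to hold the four
    # discriminators D_I, D_T, D_hat_I, D_hat_T, each ending in a sigmoid.
    opt_G, opt_D_intra, opt_D_inter = optimizers

    def semantic_loss(out, labels):
        # Semantic discrimination term over the common representations (Section 4.3.3).
        log_p_I = torch.log_softmax(classifier(out["F_I"]), dim=1)
        log_p_T = torch.log_softmax(classifier(out["F_T"]), dim=1)
        return -(labels * (log_p_I + log_p_T)).sum(dim=1).mean()

    def generator_adv(out):
        # Generator-side adversarial terms: fool the opposite-modality inter-modal
        # discriminator and the own intra-modal discriminator (non-saturating form).
        eps = 1e-8
        return -(torch.log(disc.D_hat_T(out["F_I"]) + eps).mean()
                 + torch.log(disc.D_I(out["zeta_I"]) + eps).mean()
                 + torch.log(disc.D_hat_I(out["F_T"]) + eps).mean()
                 + torch.log(disc.D_T(out["zeta_T"]) + eps).mean())

    for batch in batches:
        # Lines 5-9 of Algorithm 1: k generator updates (Equations (30)-(32)).
        for _ in range(k_steps):
            out = generators(batch)
            g_loss = generator_adv(out) + semantic_loss(out, batch["labels"])
            opt_G.zero_grad(); g_loss.backward(); opt_G.step()
        # Lines 10-11: intra-modal discriminators (Equations (26)-(27)).
        out = generators(batch)
        d1 = (gan1_d_loss(disc.D_I, out["xi_I"], out["zeta_I"])
              + gan1_d_loss(disc.D_T, out["xi_T"], out["zeta_T"]))
        opt_D_intra.zero_grad(); d1.backward(); opt_D_intra.step()
        # Lines 12-13: inter-modal discriminators (Equations (28)-(29)).
        d2 = (gan2_d_loss(disc.D_hat_I, out["F_I"], out["F_T"])
              + gan2_d_loss(disc.D_hat_T, out["F_T"], out["F_I"]))
        opt_D_inter.zero_grad(); d2.backward(); opt_D_inter.step()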

4.4. Implementation Details

Multi-Modal Feature Learning Model. The image feature learning model is implemented with AlexNet [64] pre-trained on the ImageNet dataset. Each input is resized to 256 × 256 without cropping, and 227 × 227 patches are extracted randomly from the inputs. The 4096-dimensional feature maps from the fc7 layer are treated as the outputs. To improve the learning performance, we fine-tune this model on the training dataset via the squared loss. The mini-batch size is 128, and the learning rates of the convolutional layers and fully-connected layers are set to 0.001 and 0.002, respectively. The momentum, weight decay, and drop-out rate are set to 0.9, 0.0005, and 0.5, respectively. The textual feature learning model includes a Skip-gram word2vec model pre-trained on a Wikipedia corpus containing over 1.8 billion words, which outputs 300-dimensional word vectors. The textual CNN contains convolutional kernels of size 3 × 300, followed by a one-layer fully-connected network with 4096 dimensions and a drop-out rate of 0.5 to avoid over-fitting; its learning rate is set to 0.01.
Encoder and Decoder. The two-channel encoder is implemented by a two-layer fully-connected network. For each channel, both FC layers are 1024-dimensional, and the weights of the second layer are shared across the two branches to model the cross-modal joint distribution. Each branch of the decoder is a two-layer fully-connected network whose layer dimensions are 1024 and 4096, respectively.
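A sketch of this weight-sharing encoder is given below; the ReLU activations between layers are an assumption.

import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    # Two-channel encoder: a modality-specific first FC layer per branch and a
    # second 1024-d FC layer whose weights are shared by both branches to model
    # the cross-modal joint distribution.
    def __init__(self, in_dim=4096, hidden=1024):
        super().__init__()
        self.fc1_img = nn.Linear(in_dim, hidden)
        self.fc1_txt = nn.Linear(in_dim, hidden)
        self.fc2_shared = nn.Linear(hidden, hidden)   # shared across modalities

    def forward(self, xi_img, xi_txt):
        F_I = self.fc2_shared(torch.relu(self.fc1_img(xi_img)))
        F_T = self.fc2_shared(torch.relu(self.fc1_txt(xi_txt)))
        return F_I, F_T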
Intra-Modal and Inter-Modal Discriminators. Each branch of the intra-modal discriminator is constructed with one FC layer. To discriminate the convolutional representations from the reconstructed representations, the former are labeled with tag 1 and the latter with tag 0. For the inter-modal discriminator, both channels are two-layer fully-connected networks: the first layer has 1024 dimensions, and the second layer, with a sigmoid activation function, calculates the predicted score for each input representation. For the image channel, the common representations of the image modality are labeled 1 and those of the text modality are labeled 0; for the text channel, these two types of representations are labeled in the opposite way.

5. Experiments

5.1. Datasets

All the experiments are conducted on three widely-used benchmark datasets: Wikipedia [34], NUS-WIDE [67] and Pascal Sentences [68]. Some image and text samples of these three datasets are shown in Figure 3.

5.2. Competitors

We compare the proposed DA-GAN with 13 competitors, including 6 traditional cross-modal retrieval approaches, i.e., CCA [69], KCCA [11], MCCA [70], MvDA [71], MvDA-VC [72] and JRL [73], as well as 7 deep learning-based approaches, i.e., DCCA [42], DCCAE [24], CCL [74], CMDN [75], ACMR [57], DSCMR [44], and CM-GANs [76]. Brief introductions of them are given below.
  • CCA [69] is a statistical method that is to learn linear correlations between samples of different modalities.
  • KCCA [11] is a non-linear extension of CCA, which employs kernel function to improve the performance of common subspace learning.
  • MCCA [70] is a generalization of CCA to more than two views, which is used to recognize similar patterns across multiple domains.
  • MvDA [71] jointly learns multiple view-specific linear transforms so as to construct a common subspace for multiple views.
  • MvDA-VC [72] is an extension of MvDA with view consistency, which utilizes the structural similarity of views corresponding to the same object.
  • JRL [73] uses sparse projection matrix and semi-supervised regularization to explore correlations of labeled and unlabeled cross-modal samples.
  • DCCA [42] uses deep neural networks to learn non-linear correlations. It has two separate DNNs, one branch per modality.
  • DCCAE [24] is a DCCA extension that integrates CCA model and autoencoder-based model to realize multi-view representation learning.
  • CCL [74] uses a hierarchical network to combine multi-grained fusion and cross-modal correlation exploitation. It includes two learning stages to realize representation learning and intrinsic relevance exploitation.
  • CMDN [75] contains two learning stages to model the complementary separate representation of different modalities, and combines cross-modal representations to generate rich cross-media correlation.
  • ACMR [57] is an adversarial learning-based method that constructs a common subspace for different modalities by generating modality-invariant representations.
  • DSCMR [44] exploits semantic discriminative features from both label space and common representation space by supervised learning, and minimizes modality invariance loss via weight-sharing to generate modality-invariant representation.
  • CM-GANs [76] model cross-modal joint distributions with two parallel GANs to generate modality-invariant representations.

5.3. Performance Metrics

Two tasks are considered, i.e., (1) I2T retrieval and (2) T2I retrieval, both of which are defined in Definition 1. We utilize PR curves and mAP scores to measure the retrieval performance:
$$Pr = \frac{TP}{TP + FP}, \quad Re = \frac{TP}{TP + FN},$$
$$mAP = \frac{1}{|Q|} \sum_{i=1}^{|Q|} AP(Q_i),$$
where $TP$, $FP$, and $FN$ denote true positives, false positives, and false negatives, respectively, and $AP(Q_i)$ is the average precision of the $i$-th query.
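For reference, a minimal NumPy sketch of these metrics follows; computing AP over the full ranked list (no cut-off) is an assumption made for brevity.

import numpy as np

def average_precision(relevance):
    # AP for one query; `relevance` is a 0/1 list in ranked retrieval order.
    relevance = np.asarray(relevance, dtype=float)
    hits = np.cumsum(relevance)
    precisions = hits / (np.arange(len(relevance)) + 1)
    return float((precisions * relevance).sum() / max(relevance.sum(), 1.0))

def mean_average_precision(ranked_relevance_lists):
    # mAP over a set of queries, as in the formula above.
    return float(np.mean([average_precision(r) for r in ranked_relevance_lists]))

# Example: two queries with relevance of retrieved items in ranked order.
# mean_average_precision([[1, 0, 1, 1], [0, 1, 0, 0]])  # approximately 0.65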

5.4. Experimental Results

5.4.1. Results on Wikipedia Dataset

The mAP scores of DA-GAN and the 13 competitors on the Wikipedia dataset are reported in Table 2. For the I2T and T2I tasks, the proposed DA-GAN outperforms all these state-of-the-art methods with mAP scores of 54.3% and 63.9%, respectively, higher than the two best competitors, i.e., DSCMR [44] (I2T mAP = 52.1%) and CM-GANs [76] (T2I mAP = 62.1%). Besides, the average mAP of DA-GAN is the highest, 3% higher than that of CM-GANs. The main reason is that the combination of intra- and inter-modal attention captures more single-modal and cross-modal semantic correlations. Although both DSCMR and CM-GANs extract semantic information by supervised learning, they do not learn the inter-modal semantic correlation effectively enough to realize cross-modal semantic augmentation. On the other hand, the deep learning-based methods generally outperform the traditional ones, except for DCCA and DCCAE, whose mAPs (I2T mAP = 44.4% and 43.5%, T2I mAP = 39.6% and 38.5%) are a bit lower than those of JRL (I2T mAP = 44.9%, T2I mAP = 41.8%).
Figure 4 and Figure 5 illustrate the I2T and T2I mAP scores of each category on the Wikipedia dataset, and the average mAP scores are shown in Figure 6. Obviously, for all these approaches, there are large differences between the retrieval precisions of different categories. Specifically, for both the I2T and T2I tasks, the performance on "biology", "geography & places", "sport & recreation" and "warfare" is better than on the other categories. That is mainly because the samples in these categories are semantically independent of the other categories and have more obvious distinguishing features. In contrast, the categories "art & architecture", "history" and "royalty & nobility" are related to each other in abstract semantics, and their samples have more confusing features. From Figure 4 and Figure 5, it is clear that DA-GAN has better semantic recognition ability. For example, the highest I2T and T2I mAP scores of DA-GAN on "biology", "sport & recreation" and "warfare" are near 83% and 85%, higher than those of competitive rivals such as DSCMR (I2T mAP = 78%, T2I mAP = 73%), CCL (I2T mAP = 73%, T2I mAP = 69%) and CM-GANs (I2T mAP = 74%, T2I mAP = 82%).
Figure 7a,d show the I2T and T2I precisions of DA-GAN and the competitors at different recall levels, respectively. For both the I2T and T2I tasks, DA-GAN has the highest precision at all levels of recall, which exhibits the performance improvement brought by adversarial learning with a dual attention mechanism. DSCMR and CM-GANs remain the two most competitive rivals, but they cannot beat DA-GAN at any recall value.

5.4.2. Results on Nus-Wide Dataset

The mAP scores of DA-GAN and the competitors on NUS-WIDE are reported in Table 3. Compared with the results on Wikipedia, the precisions of all these methods are relatively higher. The proposed method performs well on this dataset, beating CM-GANs (I2T mAP = 78.1%, T2I mAP = 72.4%, Aver. mAP = 75.3%) and DSCMR (I2T mAP = 61.1%, T2I mAP = 61.5%, Aver. mAP = 61.3%) with I2T mAP = 79.7%, T2I mAP = 75.2% and Aver. mAP = 77.5%. This indicates that the dual attention mechanism can discover more important semantic features between different modalities and generate more discriminant representations. On the other hand, we observe that the performance of the other traditional and deep learning-based approaches is far behind our method, even though their precisions are obviously higher than their results on Wikipedia.
The PR curves of DA-GAN and the state-of-the-art methods are presented in Figure 7b,e. We can find that the trends of the precisions on NUS-WIDE differ from those on Wikipedia. For the I2T task (shown in Figure 7b), the precisions of DA-GAN and the competitors decline noticeably in the interval [0.0, 0.2]. After that, the downward trend becomes gentle. When the recall is larger than 0.8, fast performance degradation occurs, except for three traditional methods, i.e., CCA, KCCA, and MCCA. At all levels of recall, the precision of DA-GAN is higher than that of all the rivals. For the T2I task (shown in Figure 7e), the performance of all these approaches shows a gradual downward trend. Although the precision of CM-GANs is slightly higher than that of our method in the interval [0.1, 0.2], it cannot beat DA-GAN when the recall is larger than 0.2. The retrieval accuracies of the other approaches, as expected, are much lower than those of DA-GAN.

5.4.3. Results on Pascal Sentences Dataset

The comparison of mAP scores of DA-GAN and the 13 state-of-the-art methods on the Pascal Sentences dataset is shown in Table 4. Once again, DA-GAN is the winner of this contest, achieving I2T mAP = 72.9%, T2I mAP = 73.5% and average mAP = 73.2%, surpassing the runner-up DSCMR (I2T mAP = 71.0%, T2I mAP = 72.2%, average mAP = 71.6%) by 1.9%, 1.3% and 1.6%, respectively. Different from the above comparisons, CM-GANs (I2T mAP = 61.2%, T2I mAP = 61.0%, average mAP = 61.1%) performs evidently worse than DA-GAN and DSCMR. As analyzed above, the performance improvement mainly comes from the integration of intra- and inter-modal attention as well as adversarial learning.
Figure 8, Figure 9 and Figure 10 illustrate the I2T, T2I and average mAP scores of each approach on the 20 categories of the Pascal Sentences dataset, respectively. For both the I2T and T2I tasks, all these approaches have poor cross-modal retrieval performance in some categories, such as "bottle" and "chair", mainly because the objects in these categories are relatively small. By contrast, the precisions on "aeroplane", "bird", "cat", "horse", "motorbike", "sheep" and "train" are obviously higher, since these samples contain much more discriminative semantic features. Specifically, for the I2T task, the mAP of DA-GAN reaches nearly 90%, 91% and 92% on "aeroplane", "cat" and "train", respectively. For the T2I task, it achieves nearly 92%, 93% and 95% on these three categories. From Figure 10, we observe that the semantic recognition performance of DA-GAN is the best among these 14 approaches.
Figure 7c,f show the PR curves of DA-GAN and the 13 state-of-the-art methods on Pascal Sentences for the I2T and T2I tasks, respectively. On both tasks, it is clear that the performance trends of DA-GAN and CM-GANs are very similar. Although CM-GANs shows good performance, it cannot surpass our method. For the I2T task, the precision of DA-GAN declines slowly as the recall increases from 0.2 to 0.8, and drops sharply after that. In contrast, the performance of our method shows a more significant downward trend on the T2I task, but it is still the best.

6. Conclusions

We present a new deep adversarial model for cross-modal retrieval, called Dual Attention Generative Adversarial Network (DA-GAN). This method utilizes a novel dual attention mechanism to focus on important semantic details in both a uni-modal and a cross-modal manner, which can effectively learn high-level semantic interactions across different modalities. Besides, a dual adversarial learning method that learns modality-consistent representations is proposed to reduce the heterogeneity gap. Comprehensive experiments on three commonly used multimedia datasets demonstrate the strong performance of the proposed method.

Author Contributions

Conceptualization, L.C. and L.Z.; methodology, L.C. and H.Z.; software, L.C. and L.Z.; validation, L.C. and X.Z.; formal analysis, H.Z.; investigation, X.Z. and L.C.; resources, H.Z. and X.Z.; data curation, L.Z.; writing—original draft preparation, L.C.; writing—review and editing, X.Z.; visualization, L.Z.; supervision, X.Z. and H.Z.; project administration, X.Z.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Key Research and Development Program of Hunan Province (2020NK2033), and the National Natural Science Foundation of China (62072166).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

This work is supported by the Key Research and Development Program of Hunan Province (2020NK2033) and the National Natural Science Foundation of China (62072166).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Y. Survey on deep multi-modal data analytics: Collaboration, rivalry, and fusion. ACM Trans. Multimed. Comput. Commun. Appl. 2021, 17, 1–25. [Google Scholar] [CrossRef]
  2. Ranjan, V.; Rasiwasia, N.; Jawahar, C.V. Multi-label cross-modal retrieval. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 4094–4102. [Google Scholar]
  3. Chen, Y.; Ren, P.; Wang, Y.; de Rijke, M. Bayesian personalized feature interaction selection for factorization machines. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Paris, France, 21–25 July 2019; pp. 665–674. [Google Scholar]
  4. Wu, Y.; Yang, Y. Exploring Heterogeneous Clues for Weakly-Supervised Audio-Visual Video Parsing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 1326–1335. [Google Scholar]
  5. Chen, Y.; Wang, Y.; Ren, P.; Wang, M.; de Rijke, M. Bayesian feature interaction selection for factorization machines. Artif. Intell. 2022, 302, 103589. [Google Scholar] [CrossRef]
  6. Zhang, C.; Wang, Y.; Zhu, L.; Song, J.; Yin, H. Multi-graph heterogeneous interaction fusion for social recommendation. ACM Trans. Inf. Syst. 2021, 40, 1–26. [Google Scholar] [CrossRef]
  7. Gu, C.; Bu, J.; Zhou, X.; Yao, C.; Ma, D.; Yu, Z.; Yan, X. Cross-modal Image Retrieval with Deep Mutual Information Maximization. arXiv 2021, arXiv:2103.06032. [Google Scholar]
  8. Zhang, C.; Song, J.; Zhu, X.; Zhu, L.; Zhang, S. Hcmsl: Hybrid cross-modal similarity learning for cross-modal retrieval. ACM Trans. Multimed. Comput. Commun. Appl. 2021, 17, 1–22. [Google Scholar] [CrossRef]
  9. Zhang, C.; Zhong, Z.; Zhu, L.; Zhang, S.; Cao, D.; Zhang, J. M2guda: Multi-metrics graph-based unsupervised domain adaptation for cross-modal Hashing. In Proceedings of the 2021 International Conference on Multimedia Retrieval, Taipei, Taiwan, 21–24 August 2021; pp. 674–681. [Google Scholar]
  10. Thomas, C.; Kovashka, A. Preserving semantic neighborhoods for robust cross-modal retrieval. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 317–335. [Google Scholar]
  11. Hardoon, D.R.; Szedmák, S.; Shawe-Taylor, J. Canonical correlation analysis: An overview with application to learning methods. Neural Comput. 2004, 16, 2639–2664. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Pereira, J.C.; Coviello, E.; Doyle, G.; Rasiwasia, N.; Lanckriet, G.R.G.; Levy, R.; Vasconcelos, N. On the role of correlation and abstraction in cross-modal multimedia retrieval. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 521–535. [Google Scholar] [CrossRef] [Green Version]
  13. Gong, Y.; Ke, Q.; Isard, M.; Lazebnik, S. A multi-view embedding space for modeling internet images, tags, and their semantics. Int. Comput. Vis. 2014, 106, 210–233. [Google Scholar] [CrossRef] [Green Version]
  14. Sharma, A.; Kumar, A.; Daume, H.; Jacobs, D.W. Generalized multiview analysis: A discriminative latent space. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2160–2167. [Google Scholar]
  15. Rasiwasia, N.; Mahajan, D.; Mahadevan, V.; Aggarwal, G. Cluster canonical correlation analysis. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, AISTATS 2014, Reykjavik, Iceland, 22–25 April 2014. [Google Scholar]
  16. Lopez-Paz, D.; Sra, S.; Smola, A.; Ghahramani, Z.; Schölkopf, B. Randomized nonlinear component analysis. In Proceedings of the International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 1359–1367. [Google Scholar]
  17. Sun, T.; Chen, S. Locality preserving cca with applications to data visualization and pose estimation. Image Vis. Comput. 2007, 25, 531–543. [Google Scholar] [CrossRef] [Green Version]
  18. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  19. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [Green Version]
  20. Wang, Y.; Lin, X.; Wu, L.; Zhang, W. Effective multi-query expansions: Collaborative deep networks for robust landmark retrieval. IEEE Trans. Image Process. 2017, 26, 1393–1404. [Google Scholar] [CrossRef]
  21. Qian, B.; Wang, Y.; Hong, R.; Wang, M.; Shao, L. Diversifying inference path selection: Moving-mobile-network for landmark recognition. IEEE Trans. Image Process. 2021, 30, 4894–4904. [Google Scholar] [CrossRef]
  22. Benton, A.; Khayrallah, H.; Gujral, B.; Reisinger, D.; Zhang, S.; Arora, R. Deep generalized canonical correlation analysis. arXiv 2017, arXiv:1702.02519. [Google Scholar]
  23. Elmadany, N.E.D.; He, Y.; Guan, L. Multiview learning via deep discriminative canonical correlation analysis. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 2409–2413. [Google Scholar]
  24. Wang, W.; Arora, R.; Livescu, K.; Bilmes, J.A. On deep multi-view representation learning. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; Volume 37, pp. 1083–1092. [Google Scholar]
  25. Peng, Y.; Qi, J.; Yuan, Y. Modality-specific cross-modal similarity measurement with recurrent attention network. IEEE Trans. Image Process. 2018, 27, 5585–5599. [Google Scholar] [CrossRef] [Green Version]
  26. Xu, X.; Wang, T.; Yang, Y.; Zuo, L.; Shen, H.T. Cross-modal attention with semantic consistence for image-text matching. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 5412–5425. [Google Scholar] [CrossRef]
  27. Fang, A.; Zhao, X.; Zhang, Y. Cross-modal image fusion theory guided by subjective visual attention. arXiv 2019, arXiv:1912.10718. [Google Scholar]
  28. Zhu, L.; Zhang, C.; Song, J.; Liu, L.; Zhang, S.; Li, Y. Multi-graph based hierarchical semantic fusion for cross-modal representation. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME), Shenzhen, China, 5–9 July 2021; pp. 1–6. [Google Scholar]
  29. Wu, L.; Wang, Y.; Gao, J.; Wang, M.; Zha, Z.-J.; Tao, D. Deep coattention-based comparator for relative representation learning in person re-identification. IEEE Trans. Neural Netw. Learn. 2020, 32, 722–735. [Google Scholar] [CrossRef]
  30. Wang, K.; He, R.; Wang, L.; Wang, W.; Tan, T. Joint feature selection and subspace learning for cross-modal retrieval. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 2010–2023. [Google Scholar] [CrossRef]
  31. Zhu, L.; Long, J.; Zhang, C.; Yu, W.; Yuan, X.; Sun, L. An efficient approach for geo-multimedia cross-modal retrieval. IEEE Access 2019, 7, 180571–180589. [Google Scholar] [CrossRef]
  32. Zhu, L.; Song, J.; Zhu, X.; Zhang, C.; Zhang, S.; Yuan, X. Adversarial learning-based semantic correlation representation for cross-modal retrieval. IEEE Multimed. 2020, 27, 79–90. [Google Scholar] [CrossRef]
  33. Wang, C.; Yang, H.; Meinel, C. Deep semantic mapping for cross-modal retrieval. In Proceedings of the 2015 IEEE 27th International Conference on Tools with Artificial Intelligence (ICTAI), Vietri sul Mare, Italy, 9–11 November 2015; pp. 234–241. [Google Scholar]
34. Rasiwasia, N.; Pereira, J.C.; Coviello, E.; Doyle, G.; Lanckriet, G.R.; Levy, R.; Vasconcelos, N. A new approach to cross-modal multimedia retrieval. In Proceedings of the 18th ACM International Conference on Multimedia, Firenze, Italy, 25–29 October 2010; pp. 251–260. [Google Scholar]
  35. Ngiam, J.; Khosla, A.; Kim, M.; Nam, J.; Lee, H.; Ng, A.Y. Multimodal deep learning. In Proceedings of the 28th International Conference on Machine Learning, Bellevue, WA, USA, 28 June–2 July 2011. [Google Scholar]
  36. Wu, L.; Wang, Y.; Shao, L. Cycle-consistent deep generative hashing for cross-modal retrieval. IEEE Trans. Image Process. 2018, 28, 1602–1612. [Google Scholar] [CrossRef] [Green Version]
37. Zhao, L.; Chen, Z.; Yang, L.T.; Deen, M.J.; Wang, Z.J. Deep semantic mapping for heterogeneous multimedia transfer learning using co-occurrence data. ACM Trans. Multimed. Comput. Commun. Appl. 2019, 15, 1–21. [Google Scholar] [CrossRef]
  38. Wang, Y.; Zhang, W.; Wu, L.; Lin, X.; Fang, M.; Pan, S. Iterative views agreement: An iterative low-rank based structured optimization method to multi-view spectral clustering. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016; pp. 2153–2159. [Google Scholar]
39. Zhang, W.; Yao, T.; Zhu, S.; Saddik, A.E. Deep learning-based multimedia analytics: A review. ACM Trans. Multimed. Comput. Commun. Appl. 2019, 15, 1–26. [Google Scholar] [CrossRef]
40. Wei, Y.; Zhao, Y.; Lu, C.; Wei, S.; Liu, L.; Zhu, Z.; Yan, S. Cross-modal retrieval with CNN visual features: A new baseline. IEEE Trans. Cybern. 2016, 47, 449–460. [Google Scholar] [CrossRef]
  41. Zhu, L.; Song, J.; Wei, X.; Yu, H.; Long, J. Caesar: Concept augmentation based semantic representation for cross-modal retrieval. Multimed. Tools Appl. 2020, 1, 1–31. [Google Scholar] [CrossRef]
42. Andrew, G.; Arora, R.; Bilmes, J.A.; Livescu, K. Deep canonical correlation analysis. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; Volume 28, pp. 1247–1255. [Google Scholar]
  43. Gu, J.; Cai, J.; Joty, S.R.; Niu, L.; Wang, G. Look, imagine and match: Improving textual-visual cross-modal retrieval with generative models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7181–7189. [Google Scholar]
44. Zhen, L.; Hu, P.; Wang, X.; Peng, D. Deep supervised cross-modal retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 10394–10403. [Google Scholar]
  45. Gao, P.; Jiang, Z.; You, H.; Lu, P.; Hoi, S.C.H.; Wang, X.; Li, H. Dynamic fusion with intra- and inter-modality attention flow for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 6639–6648. [Google Scholar]
  46. Xu, K.; Ba, J.; Kiros, R.; Cho, K.; Courville, A.; Salakhudinov, R.; Zemel, R.; Bengio, Y. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 2048–2057. [Google Scholar]
47. Liu, J.; Wang, G.; Hu, P.; Duan, L.-Y.; Kot, A.C. Global context-aware attention LSTM networks for 3D action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1647–1656. [Google Scholar]
  48. Xiao, T.; Xu, Y.; Yang, K.; Zhang, J.; Peng, Y.; Zhang, Z. The application of two-level attention models in deep convolutional neural network for fine-grained image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 842–850. [Google Scholar]
  49. Lu, J.; Yang, J.; Batra, D.; Parikh, D. Hierarchical question-image co-attention for visual question answering. In Advances in Neural Information Processing Systems; 2016; pp. 289–297.
  50. Wu, L.; Wang, Y.; Li, X.; Gao, J. Deep attention-based spatially recursive networks for fine-grained visual recognition. IEEE Trans. Cybern. 2019, 49, 1791–1802. [Google Scholar] [CrossRef]
51. Sudhakaran, S.; Escalera, S.; Lanz, O. LSTA: Long short-term attention for egocentric action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9954–9963. [Google Scholar]
  52. Wang, X.; Wang, Y.-F.; Wang, W.Y. Watch, listen, and describe: Globally and locally aligned cross-modal attentions for video captioning. In Proceedings of the NAACL-HLT, New Orleans, LA, USA, 1–6 June 2018; pp. 795–801. [Google Scholar]
  53. Liu, X.; Wang, Z.; Shao, J.; Wang, X.; Li, H. Improving referring expression grounding with cross-modal attention-guided erasing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1950–1959. [Google Scholar]
54. Huang, P.-Y.; Chang, X.; Hauptmann, A.G. Improving what cross-modal retrieval models learn through object-oriented inter- and intra-modal attention networks. In Proceedings of the 2019 on International Conference on Multimedia Retrieval, Ottawa, ON, Canada, 10–13 June 2019; pp. 244–252. [Google Scholar]
  55. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Advances in Neural Information Processing Systems; 2014; pp. 2672–2680. Available online: https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf (accessed on 3 January 2022).
  56. Xu, X.; He, L.; Lu, H.; Gao, L.; Ji, Y. Deep adversarial metric learning for cross-modal retrieval. World Wide Web 2019, 22, 657–672. [Google Scholar] [CrossRef]
57. Wang, B.; Yang, Y.; Xu, X.; Hanjalic, A.; Shen, H.T. Adversarial cross-modal retrieval. In Proceedings of the 2017 ACM on Multimedia Conference, Mountain View, CA, USA, 23–27 October 2017; Liu, Q., Lienhart, R., Wang, H., Chen, S.K., Boll, S., Chen, Y.P., Friedland, G., Li, J., Yan, S., Eds.; ACM: New York, NY, USA, 2017; pp. 154–162. [Google Scholar]
  58. Liu, R.; Zhao, Y.; Wei, S.; Zheng, L.; Yang, Y. Modality-invariant image-text embedding for image-sentence matching. ACM Trans. Multimed. Comput. Commun. Appl. 2019, 15, 1–19. [Google Scholar] [CrossRef]
  59. Huang, X.; Peng, Y.; Yuan, M. MHTN: Modal-adversarial hybrid transfer network for cross-modal retrieval. IEEE Trans. Cybern. 2020, 50, 1047–1059. [Google Scholar] [CrossRef] [Green Version]
  60. Zheng, F.; Tang, Y.; Shao, L. Hetero-manifold regularisation for cross-modal hashing. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1059–1071. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  61. Wang, Y.; Lin, X.; Wu, L.; Zhang, W.; Zhang, Q. LBMCH: Learning bridging mapping for cross-modal hashing. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, Santiago, Chile, 9–13 August 2015; Baeza-Yates, R., Lalmas, M., Moffat, A., Ribeiro-Neto, B.A., Eds.; ACM: New York, NY, USA, 2015; pp. 999–1002. [Google Scholar]
  62. Zhang, J.; Peng, Y.; Yuan, M. SCH-GAN: Semi-supervised cross-modal hashing by generative adversarial network. IEEE Trans. Cybern. 2020, 50, 489–502. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  63. Graves, A.; Mohamed, A.; Hinton, G.E. Speech recognition with deep recurrent neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 6645–6649. [Google Scholar]
  64. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1106–1114. [Google Scholar]
  65. Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; Kuksa, P.P. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 2011, 12, 2493–2537. [Google Scholar]
  66. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
67. Chua, T.-S.; Tang, J.; Hong, R.; Li, H.; Luo, Z.; Zheng, Y. NUS-WIDE: A real-world web image database from National University of Singapore. In Proceedings of the ACM International Conference on Image and Video Retrieval, Fira, Greece, 8–10 July 2009; ACM: New York, NY, USA, 2009; p. 48. [Google Scholar]
68. Rashtchian, C.; Young, P.; Hodosh, M.; Hockenmaier, J. Collecting image annotations using Amazon’s Mechanical Turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, Los Angeles, CA, USA, 6 June 2010; Association for Computational Linguistics: Stroudsburg, PA, USA, 2010; pp. 139–147. [Google Scholar]
  69. Hotelling, H. Relations between two sets of variates. Biometrika 1936, 28, 321–377. [Google Scholar] [CrossRef]
  70. Rupnik, J.; Shawe-Taylor, J. Multi-view canonical correlation analysis. In Proceedings of the Conference on Data Mining and Data Warehouses (SiKDD 2010), Ljubljana, Slovenia, 12 October 2010; pp. 1–4. [Google Scholar]
71. Kan, M.; Shan, S.; Zhang, H.; Lao, S.; Chen, X. Multi-view discriminant analysis. In Proceedings of the Computer Vision—ECCV 2012—12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Fitzgibbon, A.W., Lazebnik, S., Perona, P., Sato, Y., Schmid, C., Eds.; Proceedings, Part I, ser. Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7572, pp. 808–821. [Google Scholar]
72. Kan, M.; Shan, S.; Zhang, H.; Lao, S.; Chen, X. Multi-view discriminant analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 188–194. [Google Scholar] [CrossRef]
73. Zhai, X.; Peng, Y.; Xiao, J. Learning cross-media joint representation with sparse and semisupervised regularization. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 965–978. [Google Scholar] [CrossRef]
  74. Peng, Y.; Qi, J.; Huang, X.; Yuan, Y. CCL: Cross-modal correlation learning with multigrained fusion by hierarchical network. IEEE Trans. Multimed. 2018, 20, 405–420. [Google Scholar] [CrossRef] [Green Version]
  75. Peng, Y.; Huang, X.; Qi, J. Cross-media shared representation by hierarchical learning with multiple deep networks. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016; pp. 3846–3853. [Google Scholar]
76. Peng, Y.; Qi, J. CM-GANs: Cross-modal generative adversarial networks for common representation learning. ACM Trans. Multimed. Comput. Commun. Appl. 2019, 15, 1–24. [Google Scholar] [CrossRef]
Figure 1. Illustration of cross-modal retrieval.
Figure 2. The framework of DA-GAN. The input layer feeds two types of samples into the generation layer: (1) the image-text sample pairs $\{I_i, T_i, L_i\}_{i=1}^{n}$, and (2) for each pair, a group of images $\{I_j, L_i\}_{j=1}^{m}$ and a group of texts $\{T_j, L_i\}_{j=1}^{m}$ with the same semantic label, selected from the multimedia dataset. The generation layer consists of a two-channel CNN-based multi-modal feature learning model, a two-channel intra-attention model, a two-channel encoder, a two-channel decoder, and an inter-attention model. The discrimination layer includes a two-channel intra-modal discriminator that discriminates between the convolutional feature representation and the common semantic representation, a two-channel semantic discriminator, and an inter-modal discriminator that distinguishes the common semantic representations of the two modalities.
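To make the data flow in Figure 2 concrete, the sketch below arranges the generation layer and one discriminator as PyTorch-style modules. It is a minimal illustration under our own assumptions: the module names, layer sizes, pooling, and the simple scaled dot-product attention are ours, not the authors' implementation.

```python
# Minimal PyTorch sketch of the DA-GAN layout in Figure 2.
# All names, sizes and the attention form are illustrative assumptions.
import torch
import torch.nn as nn

class IntraAttention(nn.Module):
    """Self-attention over the feature positions of a single modality."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, x):                                   # x: (batch, regions, dim)
        scores = self.query(x) @ self.key(x).transpose(1, 2) / x.size(-1) ** 0.5
        return torch.softmax(scores, dim=-1) @ x            # attention-aware features

class Encoder(nn.Module):
    """Maps attention-aware features into the common semantic space."""
    def __init__(self, dim, common_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, common_dim), nn.ReLU(),
                                 nn.Linear(common_dim, common_dim))

    def forward(self, x):
        return self.net(x.mean(dim=1))                      # pool regions, then project

class DAGANGenerator(nn.Module):
    """Two-channel generation layer: intra-attention + encoder per modality,
    an inter-modal re-weighting step, and decoders that reconstruct the inputs."""
    def __init__(self, img_dim=2048, txt_dim=300, common_dim=512):
        super().__init__()
        self.img_att, self.txt_att = IntraAttention(img_dim), IntraAttention(txt_dim)
        self.img_enc, self.txt_enc = Encoder(img_dim, common_dim), Encoder(txt_dim, common_dim)
        self.img_dec = nn.Linear(common_dim, img_dim)
        self.txt_dec = nn.Linear(common_dim, txt_dim)

    def forward(self, img_feat, txt_feat):
        f_i = self.img_enc(self.img_att(img_feat))          # common representation of the image
        f_t = self.txt_enc(self.txt_att(txt_feat))          # common representation of the text
        # Inter-modal attention, here a simple similarity-based re-weighting
        # of each common representation by the other modality (our choice).
        u = torch.softmax(f_i * f_t, dim=-1)
        return u * f_i, u * f_t, self.img_dec(f_i), self.txt_dec(f_t)

class ModalityDiscriminator(nn.Module):
    """Inter-modal discriminator: predicts whether a common representation
    came from the image channel or the text channel."""
    def __init__(self, common_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(common_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, f):
        return torch.sigmoid(self.net(f))

# Shape check with random tensors standing in for CNN/text-CNN feature maps.
gen, disc = DAGANGenerator(), ModalityDiscriminator()
img = torch.randn(4, 49, 2048)                              # e.g., 7x7 convolutional regions
txt = torch.randn(4, 30, 300)                               # e.g., 30 word vectors
f_i, f_t, rec_i, rec_t = gen(img, txt)
print(f_i.shape, disc(f_t).shape)                           # torch.Size([4, 512]) torch.Size([4, 1])
```

In the full model, adversarial losses from the intra-modal, semantic and inter-modal discriminators would be combined with reconstruction and classification terms to train the generation layer; the sketch only fixes the shapes of the building blocks.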
Figure 3. Some image and text samples of Wikipedia, NUS-WIDE and Pascal Sentences.
Figure 4. The mAP of the I2T task for each category on the Wikipedia dataset for our method DA-GAN and the competitors.
Figure 5. The mAP of the T2I task for each category on the Wikipedia dataset for our method DA-GAN and the competitors.
Figure 6. The average mAP for each category on the Wikipedia dataset for our method DA-GAN and the competitors.
Figure 7. The PR curves of our method DA-GAN and the competitors on the Wikipedia, NUS-WIDE and Pascal Sentences datasets. (a–c) show the PR curves of the I2T task on Wikipedia, NUS-WIDE and Pascal Sentences; (d–f) report the PR curves of the T2I task.
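As background for Figure 7, the snippet below shows one standard way to trace a precision–recall curve for a retrieval direction such as I2T: rank the whole gallery for every query, then average precision and recall over queries at each rank cut-off. It is a generic sketch under simplifying assumptions (single-label ground truth, cosine similarity on L2-normalised common representations), not the evaluation script used in the paper.

```python
# Generic precision-recall curve for one retrieval direction (e.g., I2T).
import numpy as np

def pr_curve(query_emb, gallery_emb, query_labels, gallery_labels):
    """Return precision and recall, averaged over queries, at every cut-off."""
    sims = query_emb @ gallery_emb.T                            # (n_query, n_gallery)
    order = np.argsort(-sims, axis=1)                           # best match first
    relevant = query_labels[:, None] == gallery_labels[order]   # (n_query, n_gallery)
    cum_rel = np.cumsum(relevant, axis=1)
    ranks = np.arange(1, gallery_emb.shape[0] + 1)
    precision = cum_rel / ranks                                 # precision@k per query
    recall = cum_rel / np.maximum(relevant.sum(axis=1, keepdims=True), 1)
    return precision.mean(axis=0), recall.mean(axis=0)

# Toy usage with random embeddings standing in for DA-GAN representations.
rng = np.random.default_rng(0)
q = rng.normal(size=(20, 512)); q /= np.linalg.norm(q, axis=1, keepdims=True)
g = rng.normal(size=(100, 512)); g /= np.linalg.norm(g, axis=1, keepdims=True)
p, r = pr_curve(q, g, rng.integers(0, 10, 20), rng.integers(0, 10, 100))
print(p[:3], r[:3])
```

Sampling these averaged values at fixed recall levels gives curves of the kind plotted in Figure 7.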
Figure 8. The mAP of the I2T task for each category on the Pascal Sentences dataset for our method DA-GAN and the competitors.
Figure 9. The mAP of the T2I task for each category on the Pascal Sentences dataset for our method DA-GAN and the competitors.
Figure 10. The average mAP for each category on the Pascal Sentences dataset for the proposed DA-GAN and the state-of-the-art competitors.
Table 1. The mathematical notations.
Notation | Definition
$D$ | a multimedia dataset
$I_i$ | the i-th image sample
$T_i$ | the i-th text sample
$L_i$ | a label vector
$Q$ | a cross-modal query
$R$ | the set of results
$\Phi(\cdot)$ | a non-linear mapping
$C$ | the set of semantic concepts
$\theta$ | the parameter vector of the model
$\xi_I^{(i)}$ | the i-th visual convolutional representation
$\xi_T^{(i)}$ | the i-th textual convolutional representation
$\xi_I^{(i)}$ | the attention-aware representation of image $I_i$
$\xi_T^{(i)}$ | the attention-aware representation of text $T_i$
$F_I^{(i)}$ | the cross-modal common semantic representation of image $I_i$
$F_T^{(i)}$ | the cross-modal common semantic representation of text $T_i$
$F_I^{(i)}$ | the attention-aware cross-modal common semantic representation of image $I_i$
$F_T^{(i)}$ | the attention-aware cross-modal common semantic representation of text $T_i$
$h$ | a hidden vector
$K$ | a convolutional kernel
$M$ | a semantic correlation matrix
$A$ | an attention map
$U$ | a cross-modal semantic correlation matrix
$\zeta_I^{(i)}$ | the reconstructed representation of the i-th image
$\zeta_T^{(i)}$ | the reconstructed representation of the i-th text
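Read alongside Figure 2, the notation composes into a simple per-sample processing chain for the image channel (the text channel is symmetric). The mappings $\mathrm{CNN}_I$, $\mathrm{Enc}_I$ and $\mathrm{Dec}_I$, and the tilde marking the attention-aware features, are our own shorthand for the convolutional extractor, encoder and decoder, not symbols taken from the paper:

$$
\xi_I^{(i)} = \mathrm{CNN}_I(I_i; \theta), \qquad
\tilde{\xi}_I^{(i)} = A \odot \xi_I^{(i)}, \qquad
F_I^{(i)} = \mathrm{Enc}_I\big(\tilde{\xi}_I^{(i)}\big), \qquad
\zeta_I^{(i)} = \mathrm{Dec}_I\big(F_I^{(i)}\big),
$$

where $A$ is the intra-modal attention map and $\odot$ denotes element-wise weighting.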
Table 2. The comparison results (mAP@50 in %) with 13 competitors on the Wikipedia dataset. The best performance values are in bold font.
Traditional Method | I2T | T2I | Aver.
CCA [69] | 13.4 | 13.3 | 13.4
KCCA [11] | 19.8 | 18.6 | 19.2
MCCA [70] | 34.1 | 30.7 | 32.4
MvDA [71] | 33.7 | 30.8 | 32.3
MvDA-VC [72] | 38.8 | 35.8 | 37.3
JRL [73] | 44.9 | 41.8 | 43.4
Deep Learning-Based Method | I2T | T2I | Aver.
DCCA [42] | 44.4 | 39.6 | 42.0
DCCAE [24] | 43.5 | 38.5 | 41.0
CCL [74] | 50.4 | 45.7 | 48.1
CMDN [75] | 48.7 | 42.7 | 45.7
ACMR [57] | 47.7 | 43.4 | 45.6
DSCMR [44] | 52.1 | 47.8 | 49.9
CM-GANs [76] | 50.0 | 62.1 | 56.1
The Proposed Method | I2T | T2I | Aver.
DA-GAN | 54.3 | 63.9 | 59.1
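The mAP@50 values in Tables 2–4 are the mean, over all queries, of the average precision computed on the top-50 ranked results. A minimal NumPy sketch of this metric is given below; it assumes single-label ground truth and cosine similarity, and normalises each query's AP by the number of relevant items retrieved in the top R, which is one common convention and may differ slightly from the paper's exact protocol.

```python
# mAP@R for one retrieval direction: average precision over the top-R
# ranked gallery items, then the mean over all queries. Our own sketch.
import numpy as np

def map_at_r(query_emb, gallery_emb, query_labels, gallery_labels, r=50):
    sims = query_emb @ gallery_emb.T
    order = np.argsort(-sims, axis=1)[:, :r]                # top-R ranking per query
    aps = []
    for q in range(order.shape[0]):
        rel = (gallery_labels[order[q]] == query_labels[q]).astype(float)
        if rel.sum() == 0:
            aps.append(0.0)                                 # no relevant item in top R
            continue
        prec_at_k = np.cumsum(rel) / np.arange(1, len(rel) + 1)
        aps.append(float((prec_at_k * rel).sum() / rel.sum()))
    return float(np.mean(aps))
```

For I2T, image representations serve as queries and text representations as the gallery; T2I swaps the roles, and "Aver." is the mean of the two directions.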
Table 3. The comparison results (mAP@50 in %) with 13 competitors on the NUS-WIDE dataset. The best performance values are in bold font.
Traditional Method | I2T | T2I | Aver.
CCA [69] | 37.8 | 39.4 | 38.6
KCCA [11] | 36.2 | 39.4 | 37.8
MCCA [70] | 44.8 | 46.2 | 45.5
MvDA [71] | 50.1 | 52.6 | 51.3
MvDA-VC [72] | 52.6 | 55.7 | 54.2
JRL [73] | 58.6 | 59.8 | 59.2
Deep Learning-Based Method | I2T | T2I | Aver.
DCCA [42] | 53.2 | 54.9 | 54.0
DCCAE [24] | 51.1 | 54.0 | 52.5
CCL [74] | 50.6 | 53.5 | 52.1
CMDN [75] | 49.2 | 51.5 | 50.4
ACMR [57] | 58.8 | 59.9 | 59.3
DSCMR [44] | 61.1 | 61.5 | 61.3
CM-GANs [76] | 78.1 | 72.4 | 75.3
The Proposed Method | I2T | T2I | Aver.
DA-GAN | 79.7 | 75.2 | 77.5
Table 4. The comparison results (mAP@50 in %) with 13 competitors on the Pascal Sentences dataset. The best performance values are in bold font.
Traditional Method | I2T | T2I | Aver.
CCA [69] | 22.5 | 22.7 | 22.6
KCCA [11] | 43.3 | 39.8 | 41.6
MCCA [70] | 66.4 | 48.9 | 55.45
MvDA [71] | 59.4 | 62.6 | 61.0
MvDA-VC [72] | 64.8 | 67.3 | 66.1
JRL [73] | 52.7 | 53.4 | 53.1
Deep Learning-Based Method | I2T | T2I | Aver.
DCCA [42] | 67.8 | 67.7 | 67.8
DCCAE [24] | 68.0 | 67.1 | 67.5
CCL [74] | 57.6 | 56.1 | 56.9
CMDN [75] | 54.4 | 52.6 | 53.5
ACMR [57] | 67.1 | 67.6 | 67.3
DSCMR [44] | 71.0 | 72.2 | 71.6
CM-GANs [76] | 61.2 | 61.0 | 61.1
The Proposed Method | I2T | T2I | Aver.
DA-GAN | 72.9 | 73.5 | 73.2
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
