Joint Sentiment Part Topic Regression Model for Multimodal Analysis
Abstract
1. Introduction
- We propose a joint sentiment part topic regression model (JSP). Our method automatically infers the sentiment polarity of a document from inter-modal information and sentiment labels.
- We add a sentiment label layer to the framework to make the analysis results more reliable. Features from both modalities, together with sentiment labels, are used to explore the internal relationship between modality and sentiment, addressing the problem of multimodal content sentiment analysis.
- The proposed model is tested on four real-world datasets of different orders of magnitude. The evaluation results demonstrate that our model performs effective sentiment analysis on real-world multimodal data and compares favorably with state-of-the-art sentiment recognition methods.
2. Related Works
2.1. Single Modality Sentiment Analysis
2.1.1. Textual Sentiment Analysis
2.1.2. Visual Sentiment Analysis
2.2. Multimodal Sentiment Analysis
2.3. Multimodal Latent Dirichlet Allocation
3. Method
3.1. Proposed Method
3.1.1. The Overview of Proposed Method
3.1.2. Data Representation
3.1.3. Joint Sentiment Part Topic Regression Model (JSP)
The generative process of JSP is as follows (a simulation sketch in code follows the list):
- For the sentiment label l:
  - Select a per-document sentiment distribution πd | γ ∼ Dir(γ);
  - Find the sentiment label l | πd ∼ Mult(πd).
- For each text word r ∈ {1, …, N}:
  - Find a topic distribution θd,l | α ∼ Dir(α) and a word distribution φ | β ∼ Dir(β);
  - Find a topic z in document d, with z ∼ Mult(θd,l);
  - Figure out the real text word r from the multinomial distribution with parameter φ.
- Introduce the Gaussian linear regression variable x, whose mean is a linear function of the text-topic proportions, and let the image-topic proportions be given by the normalized x.
- For the image feature w:
  - Choose an image-topic assignment s | x ∼ Mult(x);
  - Choose the image feature w from the multinomial distribution with the parameter of the selected image topic.
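To make the generative story concrete, below is a minimal simulation sketch in Python/NumPy. All dimensions, hyperparameter values, and the regression weights W are illustrative assumptions rather than values from the paper, and the softmax mapping from x to image-topic proportions is one plausible reading of the regression step.

```python
import numpy as np

rng = np.random.default_rng(0)

S, K = 2, 10               # sentiment labels and topics (assumed sizes)
V_text, V_img = 500, 200   # text / visual vocabulary sizes (assumed)
gamma, alpha, beta = 1.0, 0.1, 0.01   # Dirichlet hyperparameters (assumed)

# Corpus-level distributions: phi[l, z] over text words, psi[s] over visual words.
phi = rng.dirichlet(np.full(V_text, beta), size=(S, K))
psi = rng.dirichlet(np.full(V_img, beta), size=K)
# Hypothetical regression weights mapping text-topic proportions to image-topic scores.
W = rng.normal(size=(K, K))

def generate_document(n_words=50, n_regions=30):
    # Sentiment layer: pi_d ~ Dir(gamma); sentiment label l ~ Mult(pi_d).
    pi_d = rng.dirichlet(np.full(S, gamma))
    l = rng.choice(S, p=pi_d)

    # Text layer: theta_{d,l} ~ Dir(alpha); per word, z ~ Mult(theta), r ~ Mult(phi[l, z]).
    theta = rng.dirichlet(np.full(K, alpha))
    z = rng.choice(K, size=n_words, p=theta)
    words = np.array([rng.choice(V_text, p=phi[l, k]) for k in z])

    # Regression layer: x is Gaussian with mean linear in the empirical
    # text-topic proportions z_bar (softmax link assumed for illustration).
    z_bar = np.bincount(z, minlength=K) / n_words
    x = rng.normal(W @ z_bar, 1.0)
    p_img = np.exp(x - x.max())
    p_img /= p_img.sum()

    # Image layer: s | x ~ Mult(softmax(x)); visual word w ~ Mult(psi[s]).
    s = rng.choice(K, size=n_regions, p=p_img)
    w_feats = np.array([rng.choice(V_img, p=psi[k]) for k in s])
    return l, words, w_feats

label, text_words, visual_words = generate_document()
print(label, text_words[:5], visual_words[:5])
```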
3.1.4. Model Optimization
4. Mathematical Inference
5. Experiments
5.1. Datasets
- Flickr [53]: Flickr is one of the most commonly used multimodal sentiment datasets. It contains 297,462 weakly labeled image–text pairs, of which 143,542 are labeled negative, covering 1211 adjective–noun pairs (ANPs). Images corresponding to each ANP can be queried through the Flickr API, and the sentiment labels of the image–text pairs are derived from the corresponding ANP.
- Flickr-ML [54]: Jie et al. [53] constructed Flickr-ML, which contains 21,037 image–text pairs, of which 10,145 carry negative labels. The dataset is a manually relabeled subset of Flickr and therefore has more accurate sentiment labels. Jie et al. randomly selected 30,000 image–text pairs from the original Flickr, split evenly between positive and negative tags, and then had five annotators label the pairs to calibrate the sentiment labels, yielding the strongly labeled Flickr-ML dataset.
- Twitter [55]: After filtering the original 183,456 items crawled via the relevant API, 24,795 image–text pairs were obtained, of which 12,357 are negative.
- Visual sentiment ontology (VSO) [56]: This dataset contains 603 pictures split into 21 types. Ziyuan Zhao et al. extracted mid-level visual features to obtain similarity coefficients, and the data were weakly labeled by a support vector machine (SVM). We screened out 564 qualified items, including 276 negative ones; 400 of them are used for training, and the remaining 164 serve as test data for evaluating model performance.
- Review Text Content (RTC) [57]: This dataset consists of restaurant reviews collected and manually annotated by Ganu et al. It covers 150 different restaurants and more than 3200 sentences, which are divided into six broad categories; each sentence carries one or more category labels.
- Subjective MR (subjMR) [58]: The subjective MR used here is a newer version by Pang and Lee, which screens out opinion-bearing data from the original MR dataset and contains more than 2000 documents.
- MDS (Multi-Domain Sentiment) [59]: This dataset was built by Blitzer et al. from Amazon product reviews in four domains: electronics, kitchenware, DVDs, and books. Each category contains more than 2000 reviews, split evenly between positive and negative labels.
5.2. Baseline Models
- Text only [60]: Uses only text features through a logistic regression model; the text topic is extracted and the sentiment is then classified by an SVM.
- Visual only [61]: Uses only deep visual features through a logistic regression model.
- CCR [62]: A cross-modality consistent regression model for joint textual–visual sentiment analysis, which uses a progressively trained CNN to extract image features and uses title information to represent the text.
- JST [54]: The joint sentiment topic model, a probabilistic framework based on LDA that infers both sentiments and topics in documents. Unlike JS-mmLDA, JST is a single-modality sentiment analysis model, used only to process textual information.
- T-LSTM-E [62]: Tree-structured long short-term memory embedding, an image–text joint sentiment analysis model that integrates a tree-structured LSTM (T-LSTM) with a visual attention mechanism to capture the correlation between image and text.
- TFN (Tensor Fusion Network) [63]: A deep multimodal sentiment analysis method that models intra-modality and inter-modality dynamics together in a joint framework.
5.3. Preprocessing
5.4. Experiment Evaluation Metrics
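The comparative tables below report recall, accuracy, F-score, and precision for binary (positive/negative) sentiment classification. As a reference point, here is a minimal sketch of computing these four metrics with scikit-learn; the labels are hypothetical, for illustration only.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical ground-truth and predicted sentiment labels (1 = positive, 0 = negative);
# illustrative values, not data from the paper.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F-score  :", f1_score(y_true, y_pred))
```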
5.5. Experiment Results and Analysis
5.5.1. Comparative Experiments with Different Datasets
5.5.2. Ablation Experiments
5.5.3. Experiments with Different Parameter Settings
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Blei, D.M.; Jordan, M.I. Modeling annotated data. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, Toronto, ON, Canada, 14 July 2003. [Google Scholar]
- Bollen, J.; Mao, H.; Zeng, X. Twitter mood predicts the stock market. J. Comput. Sci. 2011, 2, 1–8. [Google Scholar] [CrossRef] [Green Version]
- Li, X.; Xie, H.; Chen, L.; Wang, J.; Deng, X. News impact on stock price return via sentiment analysis. Knowl. Based Syst. 2014, 69, 14–23. [Google Scholar] [CrossRef]
- Kagan, V.; Stevens, A.; Subrahmanian, V.S. Using twitter sentiment to forecast the 2013 pakistani election and the 2014 Indian election. IEEE Intell. Syst. 2015, 30, 2–5. [Google Scholar] [CrossRef]
- Ibrahim, M.; Abdillah, O.; Wicaksono, A.F.; Adriani, M. Buzzer detection and sentiment analysis for predicting presidential election results in a twitter nation. In Proceedings of the IEEE International Conference on Data Mining Work-shop, ICDMW 2015, Atlantic City, NJ, USA, 14–17 November 2015; pp. 1348–1353. [Google Scholar]
- Caragea, C.; Squicciarini, A.C.; Stehle, S.; Neppalli, K.; Tapia, A.H. Mapping moods: Geo-mapped sentiment analysis during hurricane sandy. In Proceedings of the 11th International Conference on Information Systems for Crisis Response and Management, University Park, PA, USA, 18–21 May 2014. [Google Scholar]
- Yadav, S.; Ekbal, A.; Saha, S.; Bhattacharyya, P. Medical sentiment analysis using social media: Towards building a patient assisted system. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, 7–12 May 2018. [Google Scholar]
- Cambria, E. Affective computing and sentiment analysis. IEEE Intell. Syst. 2016, 31, 102–107. [Google Scholar] [CrossRef]
- dos Santos, C.N.; Gatti, M. Deep convolutional neural networks for sentiment analysis of short texts. In Proceedings of the 25th International Conference on Computational Linguistics, Dublin, Ireland, 23–29 August 2014. [Google Scholar]
- Saif, H.; He, Y.; Fernandez, M.; Alani, H. Contextual semantics for sentiment analysis of Twitter. Inf. Process. Manag. 2016, 52, 5–19. [Google Scholar] [CrossRef] [Green Version]
- You, Q.; Luo, J.; Jin, H.; Yang, J. Robust image sentiment analysis using progressively trained and domain transferred deep networks. arXiv 2015, arXiv:1509.06041. [Google Scholar]
- Sun, M.; Yang, J.; Wang, K.; Shen, H. Discovering affective regions in deep convolutional neural networks for visual sentiment prediction. In Proceedings of the 2016 IEEE International Conference on Multimedia and Expo (ICME), Seattle, WA, USA, 29 August 2016; pp. 1–6. [Google Scholar]
- You, Q.; Luo, J.; Jin, H.; Yang, J. Cross-modality consistent regression for joint visual-textual sentiment analysis of social multimedia. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining—WSDM ’16, San Francisco, CA, USA, 19 February 2016; pp. 13–22. [Google Scholar]
- Carneiro, G.; Vasconcelos, N. Formulating semantic image annotation as a supervised learning problem. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05)—Workshops, San Diego, CA, USA, 25 July 2005. [Google Scholar]
- Li, J.; Wang, J. Automatic linguistic indexing of pictures by a statistical modeling approach. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1075–1088. [Google Scholar] [CrossRef] [Green Version]
- Barnard, K.; Duygulu, P.; de Freitas, N.; Forsyth, D.; Blei, D.M.; Jordan, M.I. Matching words and pictures. J. Mach. Learn. Res. 2003, 3, 1107–1135. [Google Scholar]
- Feng, S.; Manmatha, R.; Lavrenko, V. Multiple Bernoulli relevance models for image and video annotation. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2004, Washington, DC, USA, 19 July 2004. [Google Scholar]
- Lavrenko, V.; Manmatha, R.; Jeon, J. A model for learning the semantics of pictures. In Advances in Neural Information Processing Systems (NIPS); MIT Press: Cambridge, MA, USA, 2003. [Google Scholar]
- Mullen, T.; Collier, N. Sentiment analysis using support vector machines with diverse information sources. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, Barcelona, Spain, 25–26 July 2004. [Google Scholar]
- Maas, A.L.; Daly, R.E.; Pham, P.T.; Huang, D.; Ng, A.Y.; Potts, C. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, OR, USA, 19–24 June 2011. [Google Scholar]
- Remus, R. Asvuniofleipzig: Sentiment analysis in twitter using data-driven machine learning techniques. In Proceedings of the 7th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2013, Atlanta, GA, USA, 14–15 June 2013. [Google Scholar]
- Wang, X.; Wei, F.; Liu, X.; Zhou, M.; Zhang, M. Topic sentiment analysis in twitter: A graph-based hashtag sentiment classification approach. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management—CIKM ’11, Glasgow, Scotland, UK, 24–28 October 2011; pp. 1031–1040. [Google Scholar]
- Pang, B.; Lee, L.; Vaithyanathan, S. Thumbs up? Sentiment classification using machine learning techniques. arXiv 2002, arXiv:cs/0205070. [Google Scholar]
- Baccianella, S.; Esuli, A.; Sebastiani, F. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of the International Conference on Language Resources and Evaluation, LREC 2010, Valletta, Malta, 17–23 May 2010. [Google Scholar]
- Taboada, M.; Brooke, J.; Tofiloski, M.; Voll, K.; Stede, M. Lexicon-based methods for sentiment analysis. Comput. Linguist. 2011, 37, 267–307. [Google Scholar] [CrossRef]
- Kanayama, H.; Nasukawa, T. Fully automatic lexicon expansion for domain-oriented sentiment analysis. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing—EMNLP ’06, Stroudsburg, PA, USA, 15 July 2006; pp. 355–363. [Google Scholar]
- Tumasjan, A.; Sprenger, T.O.; Sandner, P.G.; Welpe, I.M. Predicting elections with Twitter: What 140 characters reveal about political sentiment. In Proceedings of the Fourth International Conference on Weblogs and Social Media, ICWSM 2010, Washington, DC, USA, 23–26 May 2010. [Google Scholar]
- Turney, P.D. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. arXiv 2002, arXiv:cs/0212032. [Google Scholar]
- Schouten, K.; Frasincar, F.; Jong, F.D. Ontology-enhanced aspect-based sentiment analysis. In Proceedings of the International Conference on Web Engineering; Springer: Cham, Switzerland, 2017. [Google Scholar]
- Glorot, X.; Bordes, A.; Bengio, Y. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, WA, USA, 28 June–2 July 2011; pp. 513–520. [Google Scholar]
- Tang, D.; Wei, F.; Qin, B.; Liu, T.; Zhou, M. Coooolll: A deep learning system for twitter sentiment classification. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Dublin, Ireland, 23–24 August 2014; pp. 208–212. [Google Scholar]
- Kim, Y. Convolutional neural networks for sentence classification. arXiv 2014, arXiv:1408.5882. [Google Scholar]
- Yang, Z.; Yang, D.; Dyer, C.; He, X.; Smola, A.; Hovy, E. Hierarchical attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: Human language technologies, San Diego, CA, USA, 12–17 June 2016. [Google Scholar]
- Siersdorfer, S.; Minack, E.; Deng, F.; Hare, J. Analyzing and predicting sentiment of images on the social web. In Proceedings of the 18th ACM International Conference on Multimedia—MM ’10, Firenze, Italy, 25–29 October 2010; pp. 715–724. [Google Scholar]
- Zhao, S.; Gao, Y.; Jiang, X. Exploring principles-of-art features for image emotion recognition. In Proceedings of the 22nd ACM International Conference on Multimedia—MM ’14, Orlando, FL, USA, 3–7 November 2014. [Google Scholar]
- Borth, D.; Ji, R.; Chen, T.; Breuel, T.; Chang, S.-F. Large-scale visual sentiment ontology and detectors using adjective noun pairs. In Proceedings of the 21st ACM international conference on Multimedia—MM ’13, Barcelona, Spain, 24 October 2013; pp. 223–232. [Google Scholar]
- Yuan, J.; McDonough, S.; You, Q.; Luo, J. Sentribute: Image sentiment analysis from a mid-level perspective. In Proceedings of the Second International Workshop on Issues of Sentiment Discovery and Opinion Mining—WISDOM ’13, Chicago, IL, USA, 11 August 2013; pp. 10–18. [Google Scholar]
- Xu, N.; Mao, W. MultiSentiNet: A deep semantic network for multimodal sentiment analysis. In Proceedings of the 26th ACM International Conference on Information and Knowledge Management, CIKM 2017, Singapore, 6–10 November 2017. [Google Scholar]
- Xu, C.; Cetintas, S.; Lee, K.-C.; Li, L.-J. Visual sentiment prediction with deep convolutional neural networks. arXiv 2014, arXiv:1411.5731. [Google Scholar]
- Poria, S.; Chaturvedi, I.; Cambria, E.; Hussain, A. Convolutional MKL based multimodal emotion recognition and sentiment analysis. In Proceedings of the 16th IEEE International Conference on Data Mining, ICDM 2016, Barcelona, Spain, 12–15 December 2016. [Google Scholar]
- Poria, S.; Peng, H.; Hussain, A.; Howard, N.; Cambria, E. Ensemble application of convolutional neural networks and multiple kernel learning for multimodal sentiment analysis. Neurocomputing 2017, 261, 217–230. [Google Scholar] [CrossRef]
- Majumder, N.; Hazarika, D.; Gelbukh, A.; Cambria, E.; Poria, S. Multimodal sentiment analysis using hierarchical fusion with context modeling. Knowl. Based Syst. 2018, 161, 124–133. [Google Scholar] [CrossRef] [Green Version]
- Huang, F.; Zhang, X.; Zhao, Z.; Xu, J.; Li, Z. Image-text sentiment analysis via deep multimodal attentive fusion. Knowl. Based Syst. 2019, 167, 26–37. [Google Scholar] [CrossRef]
- Hazarika, D.; Gorantla, S.; Poria, S.; Zimmermann, R. Self-attentive feature-level fusion for multimodal emotion detection. In Proceedings of the 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), Miami, FL, USA, 28 June 2018; pp. 196–201. [Google Scholar]
- Brady, K.; Gwon, Y.; Khorrami, P.; Godoy, E.; Campbell, W.; Dagli, C.; Huang, T.S. Multi-modal audio, video and physiological sensor learning for continuous emotion prediction. In Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge—AVEC ’16, Amsterdam, The Netherlands, 16 October 2016; pp. 97–104. [Google Scholar]
- Huang, J.; Li, Y.; Tao, J.; Lian, Z.; Wen, Z.; Yang, M.; Yi, J. Continuous multimodal emotion prediction based on long short term memory recurrent neural network. In Proceedings of the 7th Annual Workshop on Audio/Visual Emotion Challenge—AVEC ’17, Mountain View, CA, USA, 22 October 2017; pp. 11–18. [Google Scholar]
- You, Q.; Jin, H.; Luo, J. Visual sentiment analysis by attending on local image regions. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
- Kawde, P.; Verma, G.K. Multimodal affect recognition in V-A-D space using deep learning. In Proceedings of the 2017 International Conference On Smart Technologies For Smart Nation (SmartTechCon), Bangalore, India, 14 May 2017; pp. 890–895. [Google Scholar]
- Cen, P.; Zhang, K.; Zheng, D. Sentiment analysis using deep learning approach. J. Artif. Intell. 2020, 2, 17–27. [Google Scholar] [CrossRef]
- Putthividhy, D.; Attias, H.T.; Nagarajan, S.S. Topic regression multi-modal Latent Dirichlet Allocation for image annotation. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 3408–3415. [Google Scholar]
- Blei, D.M.; Ng, A.Y.; Jordan, M.I. Latent dirichlet allocation. J. Mach. Learn. Res. 2003, 3, 993–1022. [Google Scholar]
- Lin, C.; He, Y.; Everson, R.M.; Rüger, S. Weakly supervised joint sentiment-topic detection from text. IEEE Trans. Knowl. Data Eng. 2011, 24, 1134–1145. [Google Scholar] [CrossRef] [Green Version]
- Ding, W.; Song, X.; Guo, L.; Xiong, Z.; Hu, X. A Novel Hybrid HDP-LDA Model for Sentiment Analysis. In Proceedings of the 2013 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), Atlanta, GA, USA, 17–20 November 2013; Volume 1, pp. 329–336. [Google Scholar]
- Katsurai, M.; Satoh, S. Image sentiment analysis using latent correlations among visual, textual, and sentiment views. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 2837–2841. [Google Scholar]
- Ganu, G.; Elhadad, N.; Marian, A. Beyond the stars: Improving rating predictions using review text content. In Proceedings of the Twelfth International Workshop on the Web and Databases, Providence, RI, USA, 28 June 2009. [Google Scholar]
- Pang, B.; Lee, L. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics—ACL ’04, Barcelona, Spain, 21–26 July 2004; pp. 271–278. [Google Scholar]
- Blitzer, J.; Dredze, M.; Pereira, F. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistic, Prague, Czech Republic, 23–30 June 2007; pp. 440–447. [Google Scholar]
- Le, Q.V.; Mikolov, T. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21–26 June 2014. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- You, Q.; Cao, L.; Jin, H.; Luo, J. Robust visual-textual sentiment analysis: When attention meets tree-structured recursive neural networks. In Proceedings of the 24th ACM International Conference on Multimedia—MM ’16, Amsterdam, The Netherlands, 15–19 October 2016; pp. 1008–1017. [Google Scholar]
- Zadeh, A.; Chen, M.; Poria, S.; Cambria, E.; Morency, L.-P. Tensor fusion network for multimodal sentiment analysis. arXiv 2017, arXiv:1707.07250. [Google Scholar]
- Steyvers, M.; Griffiths, T. Probabilistic topic models. In Handbook of Latent Semantic Analysis; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2007; pp. 424–440. [Google Scholar] [CrossRef]
- Minka, T. Estimating a Dirichlet Distribution. Available online: https://tminka.github.io/papers/dirichlet/ (accessed on 17 October 2020).
Method | Recall | Accuracy | F-Score | Precision |
---|---|---|---|---|
Proposed method | 0.854 | 0.847 | 0.819 | 0.856 |
Text only | 0.734 | 0.722 | 0.694 | 0.674 |
Visual only | 0.742 | 0.732 | 0.712 | 0.693 |
CCR (cross-modality consistent regression model) | 0.823 | 0.822 | 0.821 | 0.832 |
JST (joint sentiment topic mode) | 0.844 | 0.817 | 0.823 | 0.832 |
T-LSTM-E (tree-structured long short-term memory embedding) | 0.851 | 0.844 | 0.830 | 0.843 |
TFN (tensor fusion network) | 0.867 | 0.832 | 0.832 | 0.844 |
Method | Recall | Accuracy | F-Score | Precision |
---|---|---|---|---|
Proposed method | 0.874 | 0.865 | 0.827 | 0.864 |
Text only | 0.769 | 0.776 | 0.743 | 0.722 |
Visual only | 0.821 | 0.789 | 0.752 | 0.743 |
CCR | 0.816 | 0.838 | 0.827 | 0.837 |
JST | 0.826 | 0.834 | 0.826 | 0.843 |
T-LSTM-E | 0.862 | 0.857 | 0.833 | 0.862 |
TFN | 0.882 | 0.863 | 0.852 | 0.867 |
Method | Recall | Accuracy | F-Score | Precision |
---|---|---|---|---|
Proposed method | 0.859 | 0.842 | 0.836 | 0.843 |
Text only | 0.711 | 0.721 | 0.685 | 0.687 |
Visual only | 0.734 | 0.724 | 0.710 | 0.704 |
CCR | 0.824 | 0.830 | 0.819 | 0.847 |
JST | 0.832 | 0.823 | 0.814 | 0.833 |
T-LSTM-E | 0.833 | 0.835 | 0.843 | 0.863 |
TFN | 0.855 | 0.843 | 0.833 | 0.857 |
Method | Recall | Accuracy | F-Score | Precision |
---|---|---|---|---|
Proposed method | 0.843 | 0.842 | 0.869 | 0.865 |
Text only | 0.725 | 0.764 | 0.724 | 0.711 |
Visual only | 0.756 | 0.774 | 0.745 | 0.732 |
CCR | 0.823 | 0.822 | 0.842 | 0.853 |
JST | 0.832 | 0.834 | 0.848 | 0.859 |
T-LSTM-E | 0.851 | 0.833 | 0.853 | 0.872 |
TFN | 0.866 | 0.840 | 0.877 | 0.888 |