# Taylor-ChOA: Taylor-Chimp Optimized Random Multimodal Deep Learning-Based Sentiment Classification Model for Course Recommendation


## Abstract


## 1. Introduction

## 2. Related Work

**(a) Hierarchical Approach:**

**(b) Deep Learning Approach:**

**(c) Query-based Approach:**

**(d) Other Approaches:**

## 3. Proposed Method

#### 3.1. Acquisition of Input Data

#### 3.2. Matrix Construction

**Course preference matrix**: The input data ${D}_{s}$ are acquired from the dataset and arranged into the course preference matrix ${U}_{i}$. Each course has a specific ID, denoted the service ID, and the ID of the scholar who searched for the specific course is recorded in the visitor preference matrix. The list of courses searched by scholars is given by

**Course preference binary matrix**: Once the course preference matrix ${U}_{i}$ is generated, the course preference binary matrix ${B}^{{U}_{i}}$ is constructed based on the courses preferred, with each entry denoted as 0 or 1. For each course, the corresponding binary values are given in a binary sequence: if a scholar preferred a course, it is represented as 1; otherwise, it is represented as 0. The course preference binary matrix is expressed as
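As a concrete illustration, the course preference binary matrix can be built from a toy scholar-course search log. The scholar and course names below are hypothetical stand-ins, not entries from the E-Khool dataset:

```python
import numpy as np

# Hypothetical toy log: which courses each scholar searched for / preferred.
preferences = {
    "scholar_1": ["math101", "ml201"],
    "scholar_2": ["ml201"],
    "scholar_3": ["math101", "stats150", "ml201"],
}
courses = sorted({c for prefs in preferences.values() for c in prefs})

# Course preference binary matrix B^{U_i}: entry is 1 if the scholar
# preferred the course, and 0 otherwise.
B = np.zeros((len(preferences), len(courses)), dtype=int)
for i, scholar in enumerate(sorted(preferences)):
    for course in preferences[scholar]:
        B[i, courses.index(course)] = 1

print(courses)   # ['math101', 'ml201', 'stats150']
print(B)
```

The course subscription binary matrix of the next subsection is built the same way, with "visited" in place of "preferred".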

**Course subscription matrix**: The course subscription matrix $U{L}_{j}$ specifies the scholars who search for a particular course. Thus, the courses searched by a scholar are given as

**Course subscription binary matrix**: After generating the course subscription matrix $U{L}_{j}$, the course subscription binary matrix ${B}^{U{L}_{j}}$ is constructed based on the courses subscribed, with each entry represented as either 0 or 1. For each course, the corresponding binary values for the subscribed course are given in a binary sequence: if the scholar searched for a course, it is denoted as 1; otherwise, it is denoted as 0. The course subscription binary matrix is given as

#### 3.3. Course Grouping Using DEC Algorithm

**Clustering with KL divergence**: Considering an initial estimate of the cluster centroids $\{\ell_j\}_{j=1}^{k}$ and a non-linear mapping ${f}_{\theta}$, an unsupervised two-step algorithm is devised for improving the clustering process. In the first phase, a soft assignment is computed between the embedded points and the cluster centroids. In the second phase, the deep mapping ${f}_{\theta}$ is updated and the cluster centroids are refined using the current high-confidence assignments through an auxiliary target distribution. This procedure is performed iteratively until the convergence condition is satisfied.

**Soft assignment**: Here, Student's t-distribution is used as a kernel for measuring the similarity between the centroid ${\ell}_{j}$ and the embedded point ${S}_{i}$.

**KL divergence optimization**: KL divergence optimization refines the clusters iteratively by learning from their high-confidence assignments with the help of the auxiliary target distribution. It computes the KL divergence loss between the auxiliary target distribution ${a}_{i}$ and the soft assignment ${b}_{i}$.
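The two DEC steps above can be sketched numerically. This is a minimal illustration of the soft assignment, target distribution, and KL loss, following the standard DEC formulation [24]; the random embeddings and centroids are placeholders for the network's actual outputs:

```python
import numpy as np

def soft_assignment(S, centroids, alpha=1.0):
    """Student's t-kernel similarity between embedded points S (n x d)
    and cluster centroids (k x d), normalized per point."""
    d2 = ((S[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (n, k)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Auxiliary target that sharpens high-confidence assignments."""
    w = q ** 2 / q.sum(axis=0)           # square, then normalize per cluster
    return w / w.sum(axis=1, keepdims=True)

def kl_divergence(p, q):
    """KL(P || Q): the clustering loss that is minimized."""
    return float((p * np.log(p / q)).sum())

rng = np.random.default_rng(0)
S = rng.normal(size=(6, 2))              # toy embedded points
centroids = rng.normal(size=(3, 2))      # toy initial centroids
q = soft_assignment(S, centroids)
p = target_distribution(q)
loss = kl_divergence(p, q)
```

In training, the loss gradient would be backpropagated to update both $f_\theta$ and the centroids until convergence.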

#### 3.4. Course Matching Using RV Coefficient

**User query**: When the user query arrives, the sequence of queries is given as

**Binary query sequence**: The sequence of queries is transformed to binary query sequence formulated as

**Course matching using RV coefficient**: Course matching is performed using the RV coefficient by considering the grouped course sequence $G$ and the binary query sequence ${B}^{{Q}_{z}}$. The RV coefficient is a multivariate generalization of the squared Pearson correlation coefficient and takes values within the range of 0 to 1. It measures the proximity of two sets of points represented in matrix form. The RV coefficient equation is given as follows:
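A minimal sketch of the RV coefficient in its standard trace form (the matrices here are random placeholders for the grouped course and query matrices):

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two matrices with the same number of rows:
    a multivariate generalization of the squared Pearson correlation."""
    X = X - X.mean(axis=0)          # column-center both configurations
    Y = Y - Y.mean(axis=0)
    Sx, Sy = X @ X.T, Y @ Y.T       # cross-product (configuration) matrices
    num = np.trace(Sx @ Sy)
    den = np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))
    return num / den

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
Y = rng.normal(size=(5, 2))
r = rv_coefficient(X, Y)            # lies in [0, 1]; equals 1 for X vs X
```

Because both cross-product matrices are positive semi-definite, the numerator is non-negative and the Cauchy-Schwarz inequality bounds the ratio by 1, which is why the coefficient stays in [0, 1].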

#### 3.5. Relevant Scholar Retrieval

**Best course group**: The best course group ${R}_{c}$ for the relevant scholar retrieval is expressed as

**Binary best course group**: For each best course group, the corresponding binary values for the retrieved best course are given in a binary sequence. If the best course is retrieved by the scholar, it is indicated as 1, otherwise it is denoted as 0.

**Matching query and best course group using the Bhattacharyya coefficient**: Once the scholar has retrieved the best course, the binary query sequence ${B}^{{Q}_{z}}$ and the binary best course group ${B}^{{R}_{c}}$ are compared using the Bhattacharyya coefficient. The Bhattacharyya distance computes the similarity of two probability distributions, and the equation is expressed as
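A small sketch of the Bhattacharyya coefficient and distance for discrete distributions; the binary sequences below are toy examples, normalized into distributions before comparison:

```python
import math

def bhattacharyya_coefficient(p, q):
    """Bhattacharyya coefficient of two discrete probability distributions."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def bhattacharyya_distance(p, q):
    """D_B = -ln(BC); 0 for identical distributions, larger when dissimilar."""
    return -math.log(bhattacharyya_coefficient(p, q))

def normalize(bits):
    """Turn a binary sequence into a discrete distribution."""
    s = sum(bits)
    return [b / s for b in bits]

query = normalize([1, 0, 1, 1])   # toy binary query sequence B^{Q_z}
group = normalize([1, 1, 1, 0])   # toy binary best course group B^{R_c}
bc = bhattacharyya_coefficient(query, group)
```

A coefficient near 1 (distance near 0) indicates a close match between the query and the course group.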

**Algorithm 1** Pseudo-code of the course review framework.

```
Input:  UserID D_s, ItemID D_c, Review R, Query Q_s, cluster size C_s = 3
Parameters: U_i  course preference matrix;  G  best-clustered course group;
            R_C  relevant scholars retrieved;  n  courses in the optimal clustered group;
            B^U  course preference binary matrix;  n  number of scholars;
            m  number of courses;  k  total number of preferred courses
Output: Best course C_b

Begin
    Read input (D_s, D_c, R)
    B^{U_i}, B^{UL_j} <- U_i(D_s, D_c)
    G <- DEC(B^{UL_j}, cluster size = 3)
    G   <- course matching phase()
    R_C <- relevant scholar phase()
    C_b <- matched scholar phase()

    // Course preference matrix phase
    B^{U_i}(D_s, D_c):  if the scholar searched the course then 1 else 0
    B^{UL_j}(D_s, D_c): if course m was visited by the scholar then 1 else 0
    Generate Q_z based on B^{UL_j}

    // Course matching phase
    RV_grp <- []
    for j = 1 to G do
        Sum_RVval <- 0
        for i = 1 to len(h) do
            Sum_RVval <- Sum_RVval + RVcoeff()
        end for
        RV_grp.append(Sum_RVval)
    end for
    G <- max(RV_grp)

    // Relevant scholar phase
    R_C <- []
    for j = 1 to len(h) do
        C <- scholars who viewed the courses
        R_C.append(B^{U_i}(C))
    end for
    return R_C

    // Matched scholar phase
    C_b <- []
    for j = 1 to len(R_C) do
        C_b.append(Bhattacharyya(B^{Q_z}, B^{R_C}))
    end for
    Sort by min(C_b)
    return C_b
End
```

#### 3.6. Sentiment Classification

**Acquisition of significant features for sentiment classification**: The significant features, such as SentiWordNet-based statistical features, classification-specific features, and TF-IDF features, are extracted from the best course ${C}_{b}$ for improving the course recommendation process. The extracted features are elucidated below.

**(a) SentiWordNet-based statistical features**: SentiWordNet [28] groups the words into multiple sets of synonyms, called synsets. Every synset is associated with a polarity score, such as positive or negative. The scores take a value between 0 and 1, and their summation provides a value of 1 for every synset. By considering the scores provided, it is feasible to decide whether the estimation is positive or negative. The words present in the SentiWordNet database are based on the parts of speech attained from WordNet, and it utilizes a program to apply the scores to every word. The weight tuning of positive and negative score values can be expressed as
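To illustrate how synset scores drive the polarity decision, here is a toy sketch. The lexicon entries below are made-up stand-ins, not real SentiWordNet values; in the real lexicon each synset carries (positive, negative, objective) scores summing to 1:

```python
# Hypothetical stand-in for SentiWordNet entries: (positive, negative, objective).
toy_sentiwordnet = {
    "good":   (0.75, 0.00, 0.25),
    "bad":    (0.00, 0.62, 0.38),
    "course": (0.00, 0.00, 1.00),
}

def review_polarity(tokens, lexicon):
    """Sum positive minus negative scores over known tokens;
    a result > 0 indicates an overall positive review."""
    pos = sum(lexicon[t][0] for t in tokens if t in lexicon)
    neg = sum(lexicon[t][1] for t in tokens if t in lexicon)
    return pos - neg

print(review_polarity(["good", "course"], toy_sentiwordnet))   # 0.75
```

In practice, the scores would come from the SentiWordNet database after part-of-speech tagging, rather than from a hand-written dictionary.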

**(b) Classification-specific features**: The various classification-specific features, such as capitalized words, numerical words, punctuation marks, and elongated words, are explained below.

**(c) TF-IDF**: TF-IDF [29] is used to create a composite weight for every term in each of the review data. TF measures how frequently a term occurs in review data, whereas IDF measures how significant a term is. The TF-IDF score is computed as
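A minimal TF-IDF sketch over tokenized reviews, using raw term frequency and IDF $= \ln(N/\mathrm{df})$ (one common variant; the paper's exact weighting formula may differ). The review tokens are illustrative:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document TF-IDF scores: tf = count / doc length,
    idf = ln(N / document frequency)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))              # count each term once per document
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores

reviews = [["great", "course", "content"],
           ["course", "was", "boring"],
           ["great", "teaching"]]
scores = tf_idf(reviews)
```

Terms appearing in every review get an IDF of 0, so only discriminative terms contribute to the composite weight.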

#### 3.7. Sentiment Classification Using Proposed TaylorChOA-Based RMDL

**(a) Architecture of RMDL**: RMDL [25] is a robust method that comprises three basic deep learning models, namely deep neural networks (DNN), recurrent neural networks (RNN), and a convolutional neural network (CNN) model. The structure of RMDL is presented in Figure 2.

**Long short-term memory (LSTM)**: LSTM is a class of RNN that is used to maintain long-term relevancy in an improved manner. The LSTM network effectively addresses the vanishing gradient issue. LSTM consists of a chain-like structure and utilizes multiple gates for handling large amounts of data. The step-by-step procedure of the LSTM cell is expressed as follows:
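For completeness, the standard LSTM gate equations (the textbook formulation, with $W$ and $b$ the usual weights and biases, $\sigma$ the logistic sigmoid, and $\odot$ the element-wise product) can be written as:

```latex
\begin{aligned}
f_t &= \sigma\!\left(W_f\,[h_{t-1}, x_t] + b_f\right) && \text{forget gate}\\
i_t &= \sigma\!\left(W_i\,[h_{t-1}, x_t] + b_i\right) && \text{input gate}\\
\tilde{c}_t &= \tanh\!\left(W_c\,[h_{t-1}, x_t] + b_c\right) && \text{candidate state}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{cell state}\\
o_t &= \sigma\!\left(W_o\,[h_{t-1}, x_t] + b_o\right) && \text{output gate}\\
h_t &= o_t \odot \tanh(c_t) && \text{hidden state}
\end{aligned}
```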

**Gated recurrent unit (GRU)**: GRU is a gating strategy for RNNs that consists of two gates. The GRU has no separate internal memory, and the step-by-step procedure for the GRU cell is given as
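The standard GRU cell equations (textbook formulation, same notation as the LSTM equations above) read:

```latex
\begin{aligned}
z_t &= \sigma\!\left(W_z\,[h_{t-1}, x_t]\right) && \text{update gate}\\
r_t &= \sigma\!\left(W_r\,[h_{t-1}, x_t]\right) && \text{reset gate}\\
\tilde{h}_t &= \tanh\!\left(W\,[r_t \odot h_{t-1},\, x_t]\right) && \text{candidate activation}\\
h_t &= (1 - z_t)\odot h_{t-1} + z_t \odot \tilde{h}_t && \text{hidden state}
\end{aligned}
```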

**(b) Training of RMDL using the proposed TaylorChOA**: The training of RMDL [25] is performed using the developed optimization method, termed TaylorChOA, which is designed by incorporating the Taylor concept into ChOA. ChOA [27] is motivated by the hunting behavior of chimps and is well suited to problems that demand fast convergence, such as training high-dimensional neural networks. In addition, its independent groups use different mechanisms to update the parameters, so that chimps with diverse competence explore the search space, and these dynamic strategies effectively balance global and local search. The Taylor concept [26] exploits the preliminary dataset and the standard form of the system for validating the Taylor series expansion up to a specific degree. Incorporating the Taylor series into ChOA improves the effectiveness of the developed scheme and reduces the computational complexity. The algorithmic procedure of the proposed TaylorChOA is illustrated below.

**Algorithm 2** Pseudo-code of the proposed TaylorChOA algorithm.

```
Input:  Z_i
Output: Z_chimp(s + 1)

Initialize the chimp population
Initialize the parameters v, u, x, and r
Determine the position of each chimp
while (s < N) do                 // N = maximum number of iterations
    for each chimp do
        Extract the group of the chimp
        Use the grouping mechanism to update v, u, and r
    end for
    for each search chimp do
        if (w < 0.5) then
            if (|x| < 1) then
                Update the position of the search agent using Equation (56)
            else if (|x| > 1) then
                Choose a random search agent
            end if
        else                     // w >= 0.5
            Update the position of the search agent using the chaotic value
        end if
    end for
    Update v, u, x, and r
    s <- s + 1
end while
Return the best solution
```
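To make the update loop concrete, here is a minimal sketch of one standard ChOA position update for a single chimp [27]. It is illustrative only: it does not reproduce the paper's Taylor-modified update (Equation (56)), and the fixed attacker position, linear decay schedule, and plain-random stand-in for the chaotic term are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def choa_step(position, attacker, f):
    """One basic ChOA position update for a single chimp.
    f decays over iterations; r1, r2 are uniform random draws."""
    r1, r2 = rng.random(position.shape), rng.random(position.shape)
    a = 2.0 * f * r1 - f               # controls exploration vs exploitation
    c = 2.0 * r2
    m = rng.random(position.shape)     # stand-in for the chaotic map value
    d = np.abs(c * attacker - m * position)   # driving distance to the prey
    return attacker - a * d

position = rng.normal(size=3)
attacker = np.zeros(3)                 # assume the attacker (prey) at the origin
for s in range(50):
    f = 2.0 * (1 - s / 50)             # linearly decays from 2 to 0
    position = choa_step(position, attacker, f)
```

As `f` decays, `|a|` shrinks below 1 and the chimps increasingly exploit the neighborhood of the attacker rather than explore.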

## 4. Systems Implementation and Evaluation

#### 4.1. Description of Datasets

#### 4.2. Experimental Setup

#### 4.3. Evaluation Metrics

**Precision:** This is the proportion of true positives to all predicted positives, and the precision measure is expressed as

**Recall:** Recall defines the proportion of true positives to the sum of false negatives and true positives, and the equation is given as

**F1-score:** This is a statistical measure of the accuracy of a test based on recall and precision, which is given as
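The three metrics above can be computed directly from binary labels; the toy labels below are illustrative:

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1]
p, r, f = precision_recall_f1(y_true, y_pred)   # p = 0.75, r = 0.75, f = 0.75
```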

#### 4.4. Baseline Methods

- HSACN [14]: Hierarchical Self-Attentive Convolution Network, formulated to learn item and user representations from reviews.
- MCNN [20]: Multichannel deep convolutional neural network for recommender systems.
- Query Optimization [22]: A query optimization method for course recommendation, designed to categorize action verbs at a more precise level.
- DCBVN [21]: Demand-aware Collaborative Bayesian Variational Network for course recommendation.
- Proposed TaylorChOA-based RMDL: The proposed model, developed for recommending the finest courses.

## 5. Results and Discussion

#### 5.1. Results Based on E-Khool Dataset, with Respect to Number of Iterations (10 to 50)

#### 5.1.1. Performance Analysis Based on Cluster Size = 3

#### 5.1.2. Performance Analysis Based on Cluster Size = 4

**Comparison of existing methods and the proposed TaylorChOA-based RMDL using E-Khool dataset, in terms of precision, recall, and F1-score:**

#### 5.1.3. Comparative Analysis Based on Cluster Size = 3 in Terms of Precision, Recall, and F1-Score Using E-Khool Dataset

#### 5.1.4. Comparative Analysis Based on Cluster Size = 4 in Terms of Precision, Recall, and F1-Score Using E-Khool Dataset

#### 5.2. Results Based on Coursera Course Dataset with Respect to the Number of Iterations (10 to 50)

#### 5.2.1. Performance Analysis Based on Cluster Size = 3

#### 5.2.2. Performance Analysis Based on Cluster Size = 4

**Comparison of existing methods and the proposed TaylorChOA-based RMDL using Coursera Course Dataset, in terms of precision, recall, and F1-Score**

#### 5.2.3. Analysis Based on Cluster Size = 3 in Terms of Precision, Recall, and F1-Score

#### 5.2.4. Analysis Based on Cluster Size = 4 in Terms of Precision, Recall, and F1-Score

## 6. Conclusions and Future Work

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## Abbreviations

| Abbreviation | Meaning |
|---|---|
| ChOA | Chimp Optimization Algorithm |
| DCBVN | Demand-aware Collaborative Bayesian Variational Network |
| DÉCOR | Deep learning-enabled Course Recommender System |
| DNN | Deep Neural Networks |
| GRU | Gated Recurrent Unit |
| HANCI | Hierarchical Attention Network Oriented towards Crowd Intelligence |
| HSACN | Hierarchical Self-Attentive Convolution Network |
| LSTM | Long Short-Term Memory |
| MCNN | Multi-model Convolutional Neural Network |
| NLP | Natural Language Processing |
| RMDL | Random Multi-model Deep Learning |
| RNN | Recurrent Neural Network |

## References

1. Wen-Shung Tai, D.; Wu, H.-J.; Li, P.-H. Effective e-learning recommendation system based on self-organizing maps and association mining. Electron. Libr. 2008, 26, 329–344.
2. Persky, A.M.; Joyner, P.U.; Cox, W.C. Development of a course review process. Am. J. Pharm. Educ. 2012, 76, 130.
3. Guanchen, W.; Kim, M.; Jung, H. Personal customized recommendation system reflecting purchase criteria and product reviews sentiment analysis. Int. J. Electr. Comput. Eng. 2021, 11, 2399–2406.
4. Gunawan, A.; Cheong, M.L.F.; Poh, J. An Essential Applied Statistical Analysis Course using RStudio with Project-Based Learning for Data Science. In Proceedings of the 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE), Wollongong, Australia, 4–7 December 2018; pp. 581–588.
5. Assami, S.; Daoudi, N.; Ajhoun, R. A Semantic Recommendation System for Learning Personalization in Massive Open Online Courses. Int. J. Recent Contrib. Eng. Sci. IT 2020, 8, 71–80.
6. Hua, Z.; Wang, Y.; Xu, X.; Zhang, B.; Liang, L. Predicting corporate financial distress based on integration of support vector machine and logistic regression. Expert Syst. Appl. 2007, 33, 434–440.
7. Aher, S.B.; Lobo, L. Best combination of machine learning algorithms for course recommendation system in e-learning. Int. J. Comput. Appl. 2012, 41.
8. Tarus, J.K.; Niu, Z.; Mustafa, G. Knowledge-based recommendation: A review of ontology-based recommender systems for e-learning. Artif. Intell. Rev. 2018, 50, 21–48.
9. Zhang, H.; Huang, T.; Lv, Z.; Liu, S.; Zhou, Z. MCRS: A course recommendation system for MOOCs. Multimed. Tools Appl. 2018, 77, 7051–7069.
10. Li, Q.; Kim, J. A Deep Learning-Based Course Recommender System for Sustainable Development in Education. Appl. Sci. 2021, 11, 8993.
11. Almahairi, A.; Kastner, K.; Cho, K.; Courville, A. Learning distributed representations from reviews for collaborative filtering. In Proceedings of the 9th ACM Conference on Recommender Systems, Vienna, Austria, 16–20 September 2015; pp. 147–154.
12. Yang, C.; Zhou, W.; Wang, Z.; Jiang, B.; Li, D.; Shen, H. Accurate and Explainable Recommendation via Hierarchical Attention Network Oriented Towards Crowd Intelligence. Knowl.-Based Syst. 2021, 213, 106687.
13. Zheng, L.; Noroozi, V.; Yu, P.S. Joint deep modeling of users and items using reviews for recommendation. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, Cambridge, UK, 6–10 February 2017; pp. 425–434.
14. Zeng, H.; Ai, Q. A Hierarchical Self-attentive Convolution Network for Review Modeling in Recommendation Systems. arXiv 2020, arXiv:2011.13436.
15. Dong, X.; Ni, J.; Cheng, W.; Chen, Z.; Zong, B.; Song, D.; Liu, Y.; Chen, H.; De Melo, G. Asymmetrical hierarchical networks with attentive interactions for interpretable review-based recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 7667–7674.
16. Wang, H.; Wu, F.; Liu, Z.; Xie, X. Fine-grained interest matching for neural news recommendation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Seattle, WA, USA, 5–10 July 2020; pp. 836–845.
17. Bansal, T.; Belanger, D.; McCallum, A. Ask the GRU: Multi-task learning for deep text recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, Boston, MA, USA, 15–19 September 2016; pp. 107–114.
18. Tay, Y.; Luu, A.T.; Hui, S.C. Multi-pointer co-attention networks for recommendation. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 2309–2318.
19. Bai, Y.; Li, Y.; Wang, L. A Joint Summarization and Pre-Trained Model for Review-Based Recommendation. Information 2021, 12, 223.
20. Da'u, A.; Salim, N.; Rabiu, I.; Osman, A. Recommendation system exploiting aspect-based opinion mining with deep learning method. Inf. Sci. 2020, 512, 1279–1292.
21. Wang, C.; Zhu, H.; Zhu, C.; Zhang, X.; Chen, E.; Xiong, H. Personalized Employee Training Course Recommendation with Career Development Awareness. In Proceedings of the Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; pp. 1648–1659.
22. Rafiq, M.S.; Jianshe, X.; Arif, M.; Barra, P. Intelligent query optimization and course recommendation during online lectures in E-learning system. J. Ambient Intell. Humaniz. Comput. 2021, 12, 10375–10394.
23. Sulaiman, M.S.; Tamizi, A.A.; Shamsudin, M.R.; Azmi, A. Course recommendation system using fuzzy logic approach. Indones. J. Electr. Eng. Comput. Sci. 2020, 17, 365–371.
24. Xie, J.; Girshick, R.; Farhadi, A. Unsupervised deep embedding for clustering analysis. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 478–487.
25. Kowsari, K.; Heidarysafa, M.; Brown, D.E.; Meimandi, K.J.; Barnes, L.E. RMDL: Random multimodel deep learning for classification. In Proceedings of the 2nd International Conference on Information System and Data Mining, Lakeland, FL, USA, 9–11 April 2018; pp. 19–28.
26. Mangai, S.A.; Sankar, B.R.; Alagarsamy, K. Taylor series prediction of time series data with error propagated by artificial neural network. Int. J. Comput. Appl. 2014, 89, 41–47.
27. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338.
28. Ohana, B.; Tierney, B. Sentiment classification of reviews using SentiWordNet. In Proceedings of the IT&T Conference, Dublin, Ireland, 22–23 October 2009.
29. Christian, H.; Agus, M.P.; Suhartono, D. Single document automatic text summarization using term frequency-inverse document frequency (TF-IDF). ComTech Comput. Math. Eng. Appl. 2016, 7, 285–294.

**Figure 1.** An illustration of TaylorChOA-based RMDL for sentiment analysis-based course recommendation.

**Figure 2.** An illustration of random multimodel deep learning for sentiment analysis-based course recommendation.

**Figure 3.** Performance analysis with cluster size = 3 using E-Khool dataset: (**a**) precision, (**b**) recall, and (**c**) F1-score.

**Figure 4.** Performance analysis with cluster size = 4 using E-Khool dataset: (**a**) precision, (**b**) recall, and (**c**) F1-score.

**Figure 5.** Comparative analysis with cluster size = 3 using E-Khool dataset: (**a**) precision, (**b**) recall, and (**c**) F1-score.

**Figure 6.** Comparative analysis with cluster size = 4 using E-Khool dataset: (**a**) precision, (**b**) recall, and (**c**) F1-score.

**Figure 7.** Performance analysis with cluster size = 3 using Coursera Course Dataset: (**a**) precision, (**b**) recall, and (**c**) F1-score.

**Figure 8.** Performance analysis with cluster size = 4 using Coursera Course Dataset: (**a**) precision, (**b**) recall, and (**c**) F1-score.

**Figure 9.** Comparative analysis with cluster size = 3 using Coursera Course Dataset: (**a**) precision, (**b**) recall, and (**c**) F1-score.

**Figure 10.** Comparative analysis with cluster size = 4 using Coursera Course Dataset: (**a**) precision, (**b**) recall, and (**c**) F1-score.

**Table 1.**Comparison of proposed TaylorChOA-based RMDL with existing methods using E-Khool dataset, in terms of precision, recall, and F1-score.

| Cluster Size | Metric | HSACN | MCNN | Query Opt. | DCBVN | Proposed Method |
|---|---|---|---|---|---|---|
| 3 | Precision | 0.684 | 0.784 | 0.814 | 0.896 | 0.925 |
| 3 | Recall | 0.685 | 0.805 | 0.854 | 0.925 | 0.944 |
| 3 | F1-score | 0.684 | 0.794 | 0.833 | 0.910 | 0.934 |
| 4 | Precision | 0.674 | 0.798 | 0.825 | 0.905 | 0.936 |
| 4 | Recall | 0.695 | 0.814 | 0.854 | 0.925 | 0.941 |
| 4 | F1-score | 0.685 | 0.806 | 0.839 | 0.915 | 0.938 |

**Table 2.**Comparison of proposed TaylorChOA-based RMDL with existing methods using Coursera Course dataset, in terms of precision, recall, and F1-score.

| Cluster Size | Metric | HSACN | MCNN | Query Opt. | DCBVN | Proposed Method |
|---|---|---|---|---|---|---|
| 3 | Precision | 0.672 | 0.772 | 0.798 | 0.877 | 0.908 |
| 3 | Recall | 0.665 | 0.795 | 0.837 | 0.914 | 0.928 |
| 3 | F1-score | 0.667 | 0.776 | 0.813 | 0.899 | 0.919 |
| 4 | Precision | 0.667 | 0.776 | 0.813 | 0.899 | 0.919 |
| 4 | Recall | 0.676 | 0.798 | 0.839 | 0.907 | 0.926 |
| 4 | F1-score | 0.674 | 0.788 | 0.825 | 0.899 | 0.925 |

| Dataset | Time | HSACN | MCNN | Query Opt. | DCBVN | Proposed Method |
|---|---|---|---|---|---|---|
| E-Khool | Seconds | 182.41 | 180.41 | 162.25 | 145.36 | 127.25 |
| Coursera Course | Seconds | 192.45 | 187.52 | 170.54 | 153.25 | 133.84 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Banbhrani, S.K.; Xu, B.; Lin, H.; Sajnani, D.K.
Taylor-ChOA: Taylor-Chimp Optimized Random Multimodal Deep Learning-Based Sentiment Classification Model for Course Recommendation. *Mathematics* **2022**, *10*, 1354.
https://doi.org/10.3390/math10091354
