A Quantitative Social Network Analysis of the Character Relationships in the Mahabharata
Abstract
1. Introduction
Research Aim
2. Materials and Methods
2.1. Processing Pipeline
2.2. Corpus and Text Preprocessing
2.2.1. Establishing Ground Truths
2.2.2. Word Vectors
2.3. Social Network Construction
2.3.1. Locally Weighted KNN Graph Social Network
Algorithm 1: lw-KNN procedure
Input: Word vectors as matrix. Output: Modified KNN matrix.
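The body of Algorithm 1 is not reproduced in this excerpt. As a minimal sketch of the general idea, the following builds a symmetric weighted KNN graph from word vectors; the function name `lw_knn_graph`, the cosine-similarity weighting, and the neighborhood size `k` are illustrative assumptions, not the paper's exact local-weighting step.

```python
import numpy as np

def lw_knn_graph(vectors, k=3):
    """Build a symmetric weighted KNN graph from word vectors.

    Each node keeps edges to its k most cosine-similar neighbors;
    edge weights are cosine similarities rescaled into [0, 1].
    """
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = (unit @ unit.T + 1.0) / 2.0  # cosine similarity mapped into [0, 1]
    np.fill_diagonal(sims, 0.0)         # exclude self-loops
    n = len(vectors)
    adj = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sims[i])[-k:]  # indices of the k most similar nodes
        adj[i, nbrs] = sims[i, nbrs]
    return np.maximum(adj, adj.T)  # symmetrize: keep an edge if either endpoint chose it
```

The symmetrization step makes the directed k-nearest-neighbor relation into an undirected social network, which is what the centrality and clustering analyses below operate on.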
2.3.2. Co-Occurrence-Derived Social Network
2.3.3. Ground Truth Social Network
2.4. Social Network Analysis
2.4.1. Individual Character-to-Character Relationships
2.4.2. Social Network Structure with Centralities
- Betweenness centrality—the extent to which a node lies on the shortest paths between all other pairs of nodes [38].
- Closeness centrality—the inverse of the sum of shortest-path distances from a node to all other nodes [39].
- Degree centrality—the number of edges incident to a node [40].
- Eigenvector centrality (normalized)—the extent to which a node is connected to other nodes that themselves have high centrality scores [41].
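Three of these four centralities can be sketched directly from a weighted adjacency matrix, assuming a connected graph (betweenness is omitted here because it requires shortest-path counting, e.g., Brandes' algorithm; in practice a library such as NetworkX provides all four). The function names below are illustrative.

```python
import numpy as np

def degree_centrality(A):
    """Degree centrality: the number of edges a node possesses, normalized by n - 1."""
    n = A.shape[0]
    return (A > 0).sum(axis=1) / (n - 1)

def closeness_centrality(A):
    """Closeness centrality: inverse of the average shortest-path distance to other nodes."""
    n = A.shape[0]
    dist = np.where(A > 0, 1.0, np.inf)  # unweighted hop distances
    np.fill_diagonal(dist, 0.0)
    for k in range(n):  # Floyd-Warshall all-pairs shortest paths
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return (n - 1) / dist.sum(axis=1)

def eigenvector_centrality(A, iters=200):
    """Eigenvector centrality via power iteration (normalized to unit length)."""
    M = A + np.eye(A.shape[0])  # spectral shift so iteration converges on bipartite graphs
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = M @ x
        x /= np.linalg.norm(x)
    return x
```

For a three-node path graph 0–1–2, the middle node scores highest on all three measures, matching the intuition that it mediates the only path between the endpoints.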
2.4.3. Community Detection with Spectral Clustering
Algorithm 2: Shi-Malik spectral clustering method
Input: Similarity matrix. Output: Clusters.
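Algorithm 2's body is likewise not reproduced here. As a minimal sketch, the two-way Shi-Malik normalized cut solves the generalized eigenproblem (D − W)v = λDv and splits nodes by the sign of the second-smallest eigenvector (the Fiedler vector); the function name is illustrative, and the paper's multi-cluster application would cluster several of these eigenvectors instead.

```python
import numpy as np
from scipy.linalg import eigh

def shi_malik_bipartition(W):
    """Two-way Shi-Malik normalized cut on a weighted adjacency matrix W.

    Solves the generalized eigenproblem (D - W) v = lambda * D v and splits
    the nodes by the sign of the second-smallest eigenvector (Fiedler vector).
    """
    D = np.diag(W.sum(axis=1))
    L = D - W                   # unnormalized graph Laplacian
    vals, vecs = eigh(L, D)     # eigenvalues returned in ascending order
    fiedler = vecs[:, 1]        # second-smallest generalized eigenvector
    return (fiedler > 0).astype(int)
```

On a toy graph with two densely connected triangles joined by one weak edge, the sign split recovers the two communities (up to label permutation), which is why label matching (e.g., the Hungarian method) is needed before scoring detected groups against ground truth.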
2.5. Experimental Setup
- LSA word vectors are obtained by varying the dimension from 100 to 1000 for the full-text corpus and 10 to 100 for the summary text corpus.
- From each set of word vectors in both texts, a social network is constructed.
- Similarly, a social network using co-occurrence analysis of the full text is constructed.
- For all constructed social networks, the following are computed:
  - For each character in a network, the character-to-character F-score, precision, and recall are computed with respect to the ground-truth social network.
  - For all nodes in a network, the four centralities are computed. Then, for each centrality, the overall RMSE and correlation (with p-value) between the constructed network and the ground-truth social network are computed.
- The character-to-character performance metrics are computed for each group against the ground truth social network.
- The performance of the detected groups using spectral clustering is computed against the ground truth Pandavas and Kauravas groups.
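The evaluation metrics listed above can be sketched as follows; `edge_prf` scores a predicted relationship (edge) set against the ground-truth set, and `centrality_agreement` compares two centrality vectors with RMSE and Pearson correlation. Both function names are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

def edge_prf(pred_edges, true_edges):
    """Precision, recall, and F-score of a predicted edge set vs. the ground truth."""
    pred, true = set(pred_edges), set(true_edges)
    tp = len(pred & true)                        # correctly predicted relationships
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(true) if true else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return f, precision, recall

def centrality_agreement(pred, truth):
    """RMSE and Pearson correlation (with p-value) between two centrality vectors."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    rmse = float(np.sqrt(np.mean((pred - truth) ** 2)))
    corr, p = pearsonr(pred, truth)
    return rmse, corr, p
```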
3. Results
3.1. Social Networks
3.2. Character-to-Character Relationships
3.3. Social Network Centralities
- Betweenness: The mean RMSE (x̄ = 0.162) of the full-text lw-KNN networks was lower than the mean RMSE (x̄ = 0.188) of the summary-text lw-KNN networks (t(18) = −12.33, p < 0.001). The full-text lw-KNN had its lowest error (RMSE = 0.159) at a vector dimension of 500, which was lower than both the co-occurrence network (RMSE = 0.179) and the summary-text lw-KNN (RMSE = 0.183). The summary-text lw-KNN first reached its lowest error at a vector dimension of 70.
- Closeness: The mean RMSE (x̄ = 0.033) of the summary-text lw-KNN networks was lower than the mean RMSE (x̄ = 0.036) of the full-text lw-KNN networks (t(18) = 4.31, p < 0.001). The summary-text lw-KNN had the lowest error (RMSE = 0.033) at a vector dimension of 70, the full-text lw-KNN had the second-lowest error (RMSE = 0.034) at a vector dimension of 500, and the co-occurrence network had the highest error (RMSE = 0.037).
- Degree: The mean RMSE (x̄ = 0.063) of the full-text lw-KNN networks was lower than the mean RMSE (x̄ = 0.070) of the summary-text networks (t(18) = −7.31, p < 0.001). The full-text lw-KNN had the lowest error (RMSE = 0.060) at a vector dimension of 500, followed by the co-occurrence network (RMSE = 0.065) and the summary-text lw-KNN (RMSE = 0.068) at a vector dimension of 50.
- Eigenvector: The mean RMSE (x̄ = 0.069) of the full-text lw-KNN networks was lower than the mean RMSE (x̄ = 0.076) of the summary-text lw-KNN networks (t(18) = −6.08, p < 0.001). The full-text lw-KNN had the lowest error (RMSE = 0.066) at a vector dimension of 500, followed by the summary-text lw-KNN (RMSE = 0.068) at a vector dimension of 50, and the co-occurrence network (RMSE = 0.074).
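As a sanity check, the betweenness comparison can be recomputed from the rounded RMSE values reported in the centrality table of Section 3.3: a two-sample t-test over the ten vector dimensions per network gives t ≈ −12 with df = 18, consistent in sign and magnitude with the reported t(18) = −12.33 (the small difference comes from rounding in the tabulated values).

```python
from scipy.stats import ttest_ind

# Betweenness RMSE across the ten vector dimensions (rounded values
# from the centrality results table in Section 3.3)
full_text = [0.166, 0.159, 0.162, 0.162, 0.159, 0.161, 0.163, 0.164, 0.164, 0.165]
summary_text = [0.183, 0.202, 0.187, 0.191, 0.189, 0.191, 0.183, 0.183, 0.183, 0.183]

# Two-sample t-test with pooled variance: df = 10 + 10 - 2 = 18
t_stat, p_value = ttest_ind(full_text, summary_text)
```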
3.4. Spectral Clustering Community Detection
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Elson, D.; Dames, N.; McKeown, K. Extracting Social Networks from Literary Fiction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, Uppsala, Sweden, 11–16 July 2010; Association for Computational Linguistics: Uppsala, Sweden, 2010; pp. 138–147. [Google Scholar]
- Grayson, S.; Mulvany, M.; Wade, K.; Meaney, G.; Greene, D. Novel2Vec: Characterising 19th Century Fiction via Word Embeddings. In Proceedings of the 24th Irish Conference on Artificial Intelligence and Cognitive Science (AICS’16), University College Dublin, Dublin, Ireland, 20–21 September 2016; pp. 68–79. [Google Scholar]
- Kerr, S. Jane Austen in vector space: Applying vector space models to 19th century literature. In Proceedings of the JADH 2016 Conference, Tokyo, Japan, 12–14 September 2016; pp. 19–22. [Google Scholar]
- Grayson, S.; Mulvany, M.; Wade, K.; Meaney, G.; Greene, D. Exploring the Role of Gender in 19th Century Fiction Through the Lens of Word Embeddings. In Proceedings of the Language, Data, and Knowledge; Gracia, J., Bond, F., McCrae, J.P., Buitelaar, P., Chiarcos, C., Hellmann, S., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 358–364. [Google Scholar]
- Agarwal, A.; Kotalwar, A.; Rambow, O. Automatic Extraction of Social Networks from Literary Text: A Case Study on Alice in Wonderland. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, Nagoya, Japan, 20–23 November 2013; Asian Federation of Natural Language Processing: Nagoya, Japan, 2013; pp. 1202–1208. [Google Scholar]
- Alexander, S. Social Network Analysis and the Scale of Modernist Fiction. In Modernism/Modernity Print Plus; Johns Hopkins University Press: Baltimore, MD, USA, 2019. [Google Scholar]
- Butts, C.T. Social network analysis: A methodological introduction. Asian J. Soc. Psychol. 2008, 11, 13–41. [Google Scholar] [CrossRef]
- Das, D.; Das, B.; Mahesh, K. A computational analysis of Mahabharata. In Proceedings of the 13th International Conference on Natural Language Processing, Varanasi, India, 17–18 December 2016; pp. 219–228. [Google Scholar]
- Londhe, S. A Tribute to Hinduism: Thoughts and Wisdom Spanning Continents and Time About India and Her Culture; Pragun Publication: New Delhi, India, 2008. [Google Scholar]
- Kestemont, M.; Daelemans, W.; De Pauw, G. Weigh your words—Memory-based lemmatization for Middle Dutch. Lit. Linguist. Comput. 2010, 25, 287–301. [Google Scholar] [CrossRef]
- Pettersson, E.; Megyesi, B.; Tiedemann, J. An SMT approach to automatic annotation of historical text. NEALT Proc. Ser. 2013, 18, 54–69. [Google Scholar]
- Wilkens, M. Digital humanities and its application in the study of literature and culture. Comp. Lit. 2015, 67, 11–20. [Google Scholar] [CrossRef]
- Lea, R. The Big Question: Are Books Getting Longer? 2015. Available online: https://www.theguardian.com/books/2015/dec/10/are-books-getting-longer-survey-marlon-james-hanya-yanagihara (accessed on 4 June 2021).
- von Luxburg, U. A tutorial on spectral clustering. Stat. Comput. 2007, 17, 395–416. [Google Scholar] [CrossRef]
- Ganguli, K.M. The Complete Mahabharata in English; Bharata Press: Calcutta, India, 1884. [Google Scholar]
- Hinduism. Available online: https://sacred-texts.com/hin/index.htm (accessed on 21 June 2021).
- Chakravarti, B. Penguin Companion to the Mahabharata; Penguin: London, UK, 2007. [Google Scholar]
- Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.S.; Dean, J. Distributed representations of words and phrases and their compositionality. In Proceedings of the Advances in Neural Information Processing Systems; Burges, C.J., Bottou, L., Welling, M., Ghahramani, Z., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2013; Volume 26. [Google Scholar]
- Pennington, J.; Socher, R.; Manning, C.D. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1532–1543. [Google Scholar] [CrossRef]
- Gultepe, E.; Kamkarhaghighi, M.; Makrehchi, M. Document classification using convolutional neural networks with small window sizes and latent semantic analysis. Web Intell. 2020, 18, 239–248. [Google Scholar] [CrossRef]
- Levy, O.; Goldberg, Y.; Dagan, I. Improving Distributional Similarity with Lessons Learned from Word Embeddings. Trans. Assoc. Comput. Linguist. 2015, 3, 211–225. [Google Scholar] [CrossRef]
- Scott, J. Trend report social network analysis. Sociology 1988, 22, 109–127. [Google Scholar] [CrossRef]
- Knoke, D.; Yang, S. Social Network Analysis; SAGE Publications: Thousand Oaks, CA, USA, 2019. [Google Scholar]
- Freeman, L.C. Centrality in social networks conceptual clarification. Soc. Netw. 1978, 1, 215–239. [Google Scholar] [CrossRef]
- Brass, D.J. Being in the Right Place: A Structural Analysis of Individual Influence in an Organization. Adm. Sci. Q. 1984, 29, 518–539. [Google Scholar] [CrossRef]
- Gansner, E.R.; Koren, Y.; North, S. Graph drawing by stress majorization. In Proceedings of the Graph Drawing: 12th International Symposium, GD 2004, New York, NY, USA, 29 September–2 October 2004; Revised Selected Papers 12. Springer: Berlin/Heidelberg, Germany, 2005; pp. 239–250. [Google Scholar]
- Eppstein, D.; Paterson, M.S.; Yao, F.F. On Nearest-Neighbor Graphs. Discret. Comput. Geom. 1997, 17, 263–282. [Google Scholar] [CrossRef]
- Eppstein, D.; Erickson, J. Iterated nearest neighbors and finding minimal polytopes. Discret. Comput. Geom. 1994, 11, 321–350. [Google Scholar] [CrossRef]
- Boiman, O.; Shechtman, E.; Irani, M. In defense of Nearest-Neighbor based image classification. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; IEEE: Anchorage, AK, USA, 2008; pp. 1–8. [Google Scholar] [CrossRef]
- Preparata, F.P.; Shamos, M.I. Computational Geometry: An Introduction; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
- Bordag, S. A comparison of co-occurrence and similarity measures as simulations of context. In Proceedings of the International Conference on Intelligent Text Processing and Computational Linguistics, La Rochelle, France, 7–13 April 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 52–63. [Google Scholar]
- Lu, W.; Cheng, Q.; Lioma, C. Fixed versus dynamic co-occurrence windows in TextRank term weights for information retrieval. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval; Association for Computing Machinery, New York, NY, USA, 12–16 August 2012; SIGIR ’12. pp. 1079–1080. [Google Scholar] [CrossRef]
- Reddy, B.R.; Suresh, A.; Mani, M.R.; Kumar, V.V. Classification of Textures Based on Features Extracted from Preprocessing Images on Random Windows. Int. J. Adv. Sci. Technol. 2009, 9, 9–18. [Google Scholar]
- Berry, D.; Widder, S. Deciphering microbial interactions and detecting keystone species with co-occurrence networks. Front. Microbiol. 2014, 5. [Google Scholar] [CrossRef] [PubMed]
- Araújo, M.B.; Rozenfeld, A.; Rahbek, C.; Marquet, P.A. Using species co-occurrence networks to assess the impacts of climate change. Ecography 2011, 34, 897–908. [Google Scholar] [CrossRef]
- Liu, H.; Cong, J. Language clustering with word co-occurrence networks based on parallel texts. Chin. Sci. Bull. 2013, 58, 1139–1144. [Google Scholar] [CrossRef]
- Qiu, J.P.; Dong, K.; Yu, H.Q. Comparative study on structure and correlation among author co-occurrence networks in bibliometrics. Scientometrics 2014, 101, 1345–1360. [Google Scholar] [CrossRef]
- Barthelemy, M. Betweenness centrality in large complex networks. Eur. Phys. J. B 2004, 38, 163–168. [Google Scholar] [CrossRef]
- Cohen, E.; Delling, D.; Pajor, T.; Werneck, R.F. Computing classic closeness centrality, at scale. In Proceedings of the Second Edition of the ACM Conference on Online Social Networks—COSN ’14, New York, NY, USA, 1–2 October 2014; ACM Press: Dublin, Ireland, 2014; pp. 37–50. [Google Scholar] [CrossRef]
- Bródka, P.; Skibicki, K.; Kazienko, P.; Musiał, K. A degree centrality in multi-layered social network. In Proceedings of the 2011 International Conference on Computational Aspects of Social Networks (CASoN), Salamanca, Spain, 19–21 October 2011; pp. 237–242. [Google Scholar] [CrossRef]
- Bonacich, P. Some unique properties of eigenvector centrality. Soc. Netw. 2007, 29, 555–564. [Google Scholar] [CrossRef]
- Li, Y.; He, K.; Kloster, K.; Bindel, D.; Hopcroft, J. Local Spectral Clustering for Overlapping Community Detection. ACM Trans. Knowl. Discov. Data 2018, 12, 1–27. [Google Scholar] [CrossRef]
- van Gennip, Y.; Hunter, B.; Ahn, R.; Elliott, P.; Luh, K.; Halvorson, M.; Reid, S.; Valasik, M.; Wo, J.; Tita, G.E.; et al. Community Detection Using Spectral Clustering on Sparse Geosocial Data. SIAM J. Appl. Math. 2013, 73, 67–83. [Google Scholar] [CrossRef]
- Van Lierde, H.; Chow, T.W.S.; Chen, G. Scalable Spectral Clustering for Overlapping Community Detection in Large-Scale Networks. IEEE Trans. Knowl. Data Eng. 2020, 32, 754–767. [Google Scholar] [CrossRef]
- Verma, D.; Meila, M. A comparison of spectral clustering algorithms. Univ. Wash. Tech Rep UWCSE030501 2003, 1, 1–18. [Google Scholar]
- Ng, A.; Jordan, M.; Weiss, Y. On Spectral Clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems; Dietterich, T., Becker, S., Ghahramani, Z., Eds.; MIT Press: Cambridge, MA, USA, 2001; Volume 14. [Google Scholar]
- Shi, J.; Malik, J. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 888–905. [Google Scholar] [CrossRef]
- Yu, S.X.; Shi, J. Multiclass spectral clustering. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; Volume 1, pp. 313–319. [Google Scholar] [CrossRef]
- Munkres, J. Algorithms for the Assignment and Transportation Problems. J. Soc. Ind. Appl. Math. 1957, 5, 32–38. [Google Scholar] [CrossRef]
- Ratner, B. The correlation coefficient: Its values range between +1/−1, or do they? J. Target. Meas. Anal. Mark. 2009, 17, 139–142. [Google Scholar] [CrossRef]
- Labatut, V.; Bost, X. Extraction and Analysis of Fictional Character Networks: A Survey. ACM Comput. Surv. 2019, 52, 1–40. [Google Scholar] [CrossRef]
- Masías, V.H.; Baldwin, P.; Laengle, S.; Vargas, A.; Crespo, F.A. Exploring the prominence of Romeo and Juliet’s characters using weighted centrality measures. Digit. Scholarsh. Humanit. 2017, 32, 837–858. [Google Scholar] [CrossRef]
- Hutchinson, S.; Datla, V.; Louwerse, M. Social networks are encoded in language. In Proceedings of the Annual Meeting of the Cognitive Science Society, Sapporo, Japan, 1–4 August 2012; Cognitive Science Society: Boston, MA, USA, 2012; Volume 34, pp. 491–496. [Google Scholar]
- Hutchinson, S.; Louwerse, M. Extracting Social Networks from Language Statistics. Discourse Process. 2018, 55, 607–618. [Google Scholar] [CrossRef]
- Valente, T.W.; Coronges, K.; Lakon, C.; Costenbader, E. How Correlated Are Network Centrality Measures? Connect. (Tor. Ont.) 2008, 28, 16–26. [Google Scholar]
- Tas, O.; Kiyani, F. A survey automatic text summarization. Press. Procedia 2007, 5, 205–213. [Google Scholar] [CrossRef]
- Haque, M.M.; Pervin, S.; Begum, Z. Literature review of automatic single document text summarization using NLP. Int. J. Inno. Appl. Stud. 2013, 3, 857–865. [Google Scholar]
- Gupta, V.; Lehal, G.S. A Survey of Text Summarization Extractive Techniques. Int. J. Emerg. Technol. Web Intell. 2010, 2, 258–268. [Google Scholar] [CrossRef]
- Kazantseva, A. Automatic Summarization of Short Fiction. Ph.D. Thesis, University of Ottawa, Ottawa, ON, Canada, 2007. [Google Scholar] [CrossRef]
- Lin, C.Y. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out; Association for Computational Linguistics: Barcelona, Spain, 2004; pp. 74–81. [Google Scholar]
- Papineni, K.; Roukos, S.; Ward, T.; Zhu, W.J. Bleu: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, PA, USA, 7–12 July 2002; Association for Computational Linguistics: Philadelphia, PA, USA, 2002; pp. 311–318. [Google Scholar] [CrossRef]
- Deerwester, S.; Dumais, S.T.; Furnas, G.W.; Landauer, T.K.; Harshman, R. Indexing by latent semantic analysis. J. Am. Soc. Inf. Sci. 1990, 41, 391–407. [Google Scholar] [CrossRef]
- Jorge-Botana, G.; Olmos, R.; Luzón, J.M. Bridging the theoretical gap between semantic representation models without the pressure of a ranking: Some lessons learnt from LSA. Cogn. Process. 2020, 21, 1–21. [Google Scholar] [CrossRef] [PubMed]
- Yang, F. An extraction and representation pipeline for literary characters. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 22 February–1 March 2022; Volume 36, pp. 13146–13147. [Google Scholar]
- Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient estimation of word representations in vector space. arXiv 2013, arXiv:1301.3781. [Google Scholar] [CrossRef]
- McCann, B.; Bradbury, J.; Xiong, C.; Socher, R. Learned in Translation: Contextualized Word Vectors. In Advances in Neural Information Processing Systems; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30. [Google Scholar]
- Buddhi, D.; Joshi, A.; Negi, P. Language Model Based Related Word Prediction from an Indian Epic-Mahabharata. In Proceedings of the International Interdisciplinary Humanitarian Conference for Sustainability (IIHC), Bengaluru, India, 18–19 November 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1468–1474. [Google Scholar]
- Gadesha, V.; Joshi, K.; Naik, S. Estimating Related Words Computationally Using Language Model from the Mahabharata an Indian Epic. In ICT Analysis and Applications: Proceedings of ICT4SD 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 627–638. [Google Scholar]
- Chandra, R.; Ranjan, M. Artificial intelligence for topic modelling in Hindu philosophy: Mapping themes between the Upanishads and the Bhagavad Gita. PLoS ONE 2022, 17, e0273476. [Google Scholar] [CrossRef]
| Pandavas (Label 1) | Kauravas (Label 0) |
|---|---|
| Abhimanyu | Ashwathama |
| Arjuna | Bhishma |
| Bheema | Dhritarashtra |
| Ghatotkacha | Drona |
| Krishna | Duryodhana |
| Nakula | Dushasana |
| Sahadeva | Jayadratha |
| Shikhandi (Amba) | Kripa |
| Yudhishthira | Sanjay |
| | Shakuni |
| Network | Vector Dimension | Category 1 F | Category 1 P | Category 1 R | Category 10 F | Category 10 P | Category 10 R | Category 15 F | Category 15 P | Category 15 R |
|---|---|---|---|---|---|---|---|---|---|---|
| Full-text lw-KNN | 100 | 0.645 | 0.788 | 0.573 | 0.766 | 0.860 | 0.712 | 0.788 | 0.812 | 0.765 |
| | 200 | 0.676 | 0.817 | 0.600 | 0.784 | 0.878 | 0.725 | 0.909 | 0.938 | 0.882 |
| | 300 | 0.655 | 0.780 | 0.588 | 0.784 | 0.872 | 0.730 | 0.848 | 0.875 | 0.824 |
| | 400 | 0.655 | 0.770 | 0.594 | 0.783 | 0.866 | 0.735 | 0.812 | 0.812 | 0.812 |
| | 500 | 0.666 | 0.785 | 0.605 | 0.785 | 0.866 | 0.740 | 0.812 | 0.812 | 0.812 |
| | 600 | 0.667 | 0.785 | 0.607 | 0.785 | 0.866 | 0.740 | 0.812 | 0.812 | 0.812 |
| | 700 | 0.666 | 0.785 | 0.607 | 0.785 | 0.866 | 0.740 | 0.812 | 0.812 | 0.812 |
| | 800 | 0.663 | 0.782 | 0.603 | 0.789 | 0.873 | 0.743 | 0.812 | 0.812 | 0.812 |
| | 900 | 0.654 | 0.772 | 0.596 | 0.783 | 0.866 | 0.736 | 0.812 | 0.812 | 0.812 |
| | 1000 | 0.649 | 0.765 | 0.594 | 0.774 | 0.853 | 0.734 | 0.812 | 0.812 | 0.812 |
| Summary-text lw-KNN | 10 | 0.614 | 0.632 | 0.643 | 0.705 | 0.672 | 0.767 | 0.667 | 0.562 | 0.818 |
| | 20 | 0.616 | 0.611 | 0.674 | 0.681 | 0.634 | 0.770 | 0.714 | 0.625 | 0.833 |
| | 30 | 0.603 | 0.600 | 0.649 | 0.683 | 0.646 | 0.750 | 0.714 | 0.625 | 0.833 |
| | 40 | 0.593 | 0.591 | 0.641 | 0.671 | 0.632 | 0.748 | 0.759 | 0.688 | 0.846 |
| | 50 | 0.589 | 0.591 | 0.628 | 0.672 | 0.632 | 0.744 | 0.759 | 0.688 | 0.846 |
| | 60 | 0.591 | 0.588 | 0.640 | 0.679 | 0.639 | 0.762 | 0.759 | 0.688 | 0.846 |
| | 70 | 0.581 | 0.580 | 0.621 | 0.661 | 0.625 | 0.730 | 0.759 | 0.688 | 0.846 |
| | 80 | 0.581 | 0.580 | 0.621 | 0.661 | 0.625 | 0.730 | 0.759 | 0.688 | 0.846 |
| | 90 | 0.581 | 0.580 | 0.621 | 0.661 | 0.625 | 0.730 | 0.759 | 0.688 | 0.846 |
| | 100 | 0.581 | 0.580 | 0.621 | 0.661 | 0.625 | 0.730 | 0.759 | 0.688 | 0.846 |
| Co-occurrence | - | 0.615 | 0.777 | 0.567 | 0.709 | 0.739 | 0.717 | 0.741 | 0.625 | 0.909 |
| Network | Vector Dimension | Pandavas F | Pandavas P | Pandavas R | Kauravas F | Kauravas P | Kauravas R |
|---|---|---|---|---|---|---|---|
| Full-text lw-KNN | 100 | 0.747 | 0.814 | 0.708 | 0.562 | 0.766 | 0.463 |
| | 200 | 0.765 | 0.828 | 0.724 | 0.603 | 0.807 | 0.499 |
| | 300 | 0.748 | 0.809 | 0.711 | 0.579 | 0.757 | 0.488 |
| | 400 | 0.748 | 0.797 | 0.720 | 0.579 | 0.748 | 0.490 |
| | 500 | 0.749 | 0.790 | 0.730 | 0.598 | 0.781 | 0.502 |
| | 600 | 0.752 | 0.790 | 0.737 | 0.597 | 0.781 | 0.502 |
| | 700 | 0.752 | 0.790 | 0.737 | 0.596 | 0.781 | 0.500 |
| | 800 | 0.752 | 0.790 | 0.737 | 0.590 | 0.776 | 0.495 |
| | 900 | 0.752 | 0.790 | 0.737 | 0.574 | 0.758 | 0.480 |
| | 1000 | 0.747 | 0.782 | 0.737 | 0.569 | 0.751 | 0.478 |
| Summary-text lw-KNN | 10 | 0.665 | 0.613 | 0.758 | 0.573 | 0.647 | 0.548 |
| | 20 | 0.656 | 0.587 | 0.794 | 0.583 | 0.630 | 0.577 |
| | 30 | 0.664 | 0.602 | 0.781 | 0.553 | 0.599 | 0.541 |
| | 40 | 0.643 | 0.578 | 0.774 | 0.551 | 0.602 | 0.533 |
| | 50 | 0.633 | 0.578 | 0.737 | 0.554 | 0.602 | 0.538 |
| | 60 | 0.655 | 0.585 | 0.787 | 0.539 | 0.590 | 0.521 |
| | 70 | 0.633 | 0.578 | 0.735 | 0.538 | 0.582 | 0.528 |
| | 80 | 0.633 | 0.578 | 0.735 | 0.538 | 0.582 | 0.528 |
| | 90 | 0.633 | 0.578 | 0.735 | 0.538 | 0.582 | 0.528 |
| | 100 | 0.633 | 0.578 | 0.735 | 0.538 | 0.582 | 0.528 |
| Network | V. Dim. | Betw. RMSE | Betw. Corr. | Betw. p | Close. RMSE | Close. Corr. | Close. p | Deg. RMSE | Deg. Corr. | Deg. p | Eig. RMSE | Eig. Corr. | Eig. p |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Full-text lw-KNN | 100 | 0.166 | 0.399 | 0.08 | 0.036 | 0.405 | 0.08 | 0.065 | 0.445 | 0.05 | 0.075 | 0.351 | 0.13 |
| | 200 | 0.159 | 0.447 | 0.05 | 0.033 | 0.522 | 0.02 | 0.058 | 0.582 | 0.01 | 0.067 | 0.540 | 0.01 |
| | 300 | 0.162 | 0.442 | 0.05 | 0.036 | 0.444 | 0.05 | 0.062 | 0.523 | 0.02 | 0.068 | 0.500 | 0.02 |
| | 400 | 0.162 | 0.456 | 0.04 | 0.035 | 0.472 | 0.04 | 0.061 | 0.555 | 0.01 | 0.067 | 0.530 | 0.02 |
| | 500 | 0.159 | 0.470 | 0.04 | 0.034 | 0.509 | 0.02 | 0.060 | 0.571 | 0.01 | 0.066 | 0.544 | 0.01 |
| | 600 | 0.161 | 0.459 | 0.04 | 0.035 | 0.476 | 0.03 | 0.062 | 0.536 | 0.01 | 0.069 | 0.501 | 0.02 |
| | 700 | 0.163 | 0.445 | 0.05 | 0.035 | 0.476 | 0.03 | 0.062 | 0.533 | 0.02 | 0.069 | 0.501 | 0.02 |
| | 800 | 0.164 | 0.434 | 0.06 | 0.036 | 0.445 | 0.05 | 0.064 | 0.508 | 0.02 | 0.070 | 0.467 | 0.04 |
| | 900 | 0.164 | 0.433 | 0.06 | 0.036 | 0.441 | 0.05 | 0.064 | 0.499 | 0.03 | 0.071 | 0.454 | 0.04 |
| | 1000 | 0.165 | 0.434 | 0.06 | 0.037 | 0.421 | 0.06 | 0.066 | 0.470 | 0.04 | 0.073 | 0.419 | 0.07 |
| Summary-text lw-KNN | 10 | 0.183 | 0.276 | 0.24 | 0.035 | 0.397 | 0.08 | 0.072 | 0.420 | 0.07 | 0.077 | 0.389 | 0.09 |
| | 20 | 0.202 | 0.197 | 0.41 | 0.035 | 0.406 | 0.08 | 0.075 | 0.419 | 0.07 | 0.081 | 0.347 | 0.13 |
| | 30 | 0.187 | 0.260 | 0.27 | 0.033 | 0.474 | 0.03 | 0.068 | 0.506 | 0.02 | 0.074 | 0.448 | 0.05 |
| | 40 | 0.191 | 0.259 | 0.27 | 0.033 | 0.466 | 0.04 | 0.071 | 0.484 | 0.03 | 0.078 | 0.402 | 0.08 |
| | 50 | 0.189 | 0.262 | 0.26 | 0.033 | 0.479 | 0.03 | 0.068 | 0.505 | 0.02 | 0.074 | 0.456 | 0.04 |
| | 60 | 0.191 | 0.254 | 0.28 | 0.033 | 0.465 | 0.04 | 0.070 | 0.487 | 0.03 | 0.074 | 0.453 | 0.04 |
| | 70 | 0.183 | 0.311 | 0.18 | 0.033 | 0.469 | 0.04 | 0.070 | 0.491 | 0.03 | 0.076 | 0.426 | 0.06 |
| | 80 | 0.183 | 0.311 | 0.18 | 0.033 | 0.469 | 0.04 | 0.070 | 0.491 | 0.03 | 0.076 | 0.426 | 0.06 |
| | 90 | 0.183 | 0.311 | 0.18 | 0.033 | 0.469 | 0.04 | 0.070 | 0.491 | 0.03 | 0.076 | 0.426 | 0.06 |
| | 100 | 0.183 | 0.311 | 0.18 | 0.033 | 0.469 | 0.04 | 0.070 | 0.491 | 0.03 | 0.076 | 0.426 | 0.06 |
| Co-occurrence | - | 0.179 | 0.355 | 0.12 | 0.037 | 0.451 | 0.05 | 0.065 | 0.516 | 0.01 | 0.072 | 0.466 | 0.04 |
| Network | Vector Dimension | F | P | R |
|---|---|---|---|---|
| Full-text lw-KNN | 100 | 0.495 | 0.515 | 0.516 |
| | 200 | 0.520 | 0.581 | 0.625 |
| | 300 | 0.697 | 0.717 | 0.736 |
| | 400 | 0.697 | 0.717 | 0.736 |
| | 500 | 0.642 | 0.672 | 0.702 |
| | 600 | 0.749 | 0.763 | 0.771 |
| | 700 | 0.749 | 0.763 | 0.771 |
| | 800 | 0.700 | 0.707 | 0.707 |
| | 900 | 0.700 | 0.707 | 0.707 |
| | 1000 | 0.700 | 0.707 | 0.707 |
| Summary-text lw-KNN | 10 | 0.540 | 0.540 | 0.542 |
| | 20 | 0.500 | 0.505 | 0.505 |
| | 30 | 0.583 | 0.626 | 0.667 |
| | 40 | 0.540 | 0.571 | 0.583 |
| | 50 | 0.540 | 0.571 | 0.583 |
| | 60 | 0.540 | 0.571 | 0.583 |
| | 70 | 0.540 | 0.571 | 0.583 |
| | 80 | 0.540 | 0.571 | 0.583 |
| | 90 | 0.540 | 0.571 | 0.583 |
| | 100 | 0.540 | 0.571 | 0.583 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Gultepe, E.; Mathangi, V. A Quantitative Social Network Analysis of the Character Relationships in the Mahabharata. Heritage 2023, 6, 7009-7030. https://doi.org/10.3390/heritage6110366