Editorial

Knowledge Modelling and Learning through Cognitive Networks

Massimo Stella 1,* and Yoed N. Kenett 2,*
1 CogNosco Lab, Department of Computer Science, University of Exeter, Exeter EX4 4PY, UK
2 Faculty of Industrial Engineering and Management, Technion-Israel Institute of Technology, Haifa 3200003, Israel
* Authors to whom correspondence should be addressed.
Big Data Cogn. Comput. 2022, 6(2), 53; https://doi.org/10.3390/bdcc6020053
Submission received: 7 May 2022 / Accepted: 11 May 2022 / Published: 13 May 2022
(This article belongs to the Special Issue Knowledge Modelling and Learning through Cognitive Networks)
Knowledge modelling is a growing field at the fringe of computer science, psychology and network science [1,2]. This research area aims to build models of knowledge that can provide interpretable insights starting from data and its associations, commonalities, recurrent patterns and correlations. Historically, artificial intelligence (AI) contributed vastly to the field through models like artificial neural networks, e.g., recurrent neural networks or deep learning, as methods able to extract knowledge and learn from data, cf. [1]. Recent advancements in fields like network and data science have supported the creation of novel approaches to knowledge modelling, inspired by theoretical frameworks of cognition and language processing: cognitive networks are mental representations of knowledge where nodes represent concepts and links indicate conceptual associations, e.g., concepts that sound similar or that are related according to a given semantic definition, cf. [3,4].
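As a concrete illustration of this definition, the sketch below builds a tiny cognitive network in Python with networkx, where nodes are concepts and links carry a label for the type of association (semantic or phonological). The word list and links are invented toy data, not material from the cited studies.

```python
# Minimal sketch of a cognitive network: nodes are concepts, links are
# labelled conceptual associations. All words and edges are toy examples.
import networkx as nx

G = nx.Graph()

# Semantic associations (hypothetical free-association data).
semantic_edges = [("cat", "dog"), ("dog", "bone"), ("cat", "milk")]
G.add_edges_from(semantic_edges, relation="semantic")

# Phonological associations (concepts sounding similar).
phonological_edges = [("cat", "bat"), ("cat", "cut")]
G.add_edges_from(phonological_edges, relation="phonological")

# Unlike the weights of an artificial neural network, every node and edge
# has a direct interpretation: a concept and a specific type of association.
print(G["cat"])                 # neighbours of "cat" with relation labels
print(nx.degree_centrality(G))  # an interpretable structural measure
```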
Despite both being referred to as “networks”, artificial neural networks (ANNs) and cognitive networks (CNs) are two distinct frameworks that nonetheless work well in synergy. On the one hand, ANNs encapsulate latent correlations in the data within their network structure, making it difficult to identify what nodes and their interconnections represent [5]. On the other hand, CNs form one-to-one mappings of knowledge units, e.g., nodes represent specific concepts and links map specific types of conceptual associations [4]. Whereas CNs are evidently more interpretable and can be tuned to map specific aspects of human associative knowledge (e.g., semantic memory structure and its influence over cognitive traits [6]), they lack the generalisability and aptitude for learning from data that ANNs possess, thanks also to training and fine-tuning [5]. Another important difference is that ANNs focus on prediction by updating weights between layers, while CNs focus on representing the complexity of systems via graphs [3,6].
The synergy of these two approaches can open new ways of modelling knowledge and learning that are interpretable and can also account for unseen data [3,6]. For instance, the structure of CNs can produce novel features that then power artificial intelligence techniques inspired by human knowledge, with relevant advancements for natural language processing, automatic assessment of personality traits and other phenomena like emotional distress, as highlighted in the papers published within this Special Issue (SI).
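The sketch below illustrates, under toy assumptions, how interpretable features extracted from a network (degree, clustering, closeness) might feed a standard classifier; the graph and the binary labels are placeholders, not data from any contribution in this SI.

```python
# Hedged sketch: derive interpretable network features and feed them to a
# standard classifier. The graph and labels are illustrative placeholders.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

np.random.seed(0)
G = nx.les_miserables_graph()          # stand-in for a real cognitive network
nodes = list(G.nodes())

degree = nx.degree_centrality(G)
clustering = nx.clustering(G)
closeness = nx.closeness_centrality(G)

X = np.array([[degree[n], clustering[n], closeness[n]] for n in nodes])
y = np.random.randint(0, 2, size=len(nodes))   # placeholder binary labels

clf = LogisticRegression().fit(X, y)
# The coefficients stay interpretable: each one weighs a named network property.
print(dict(zip(["degree", "clustering", "closeness"], clf.coef_[0])))
```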
The current SI reports on recent developments in applying CNs and ANNs to achieve intelligent systems and data insights. This SI represents a multidisciplinary collection of 11 contributions using CNs, ANNs or novel combinations of the two, organised mainly along three lines: (i) text processing and social media analysis, (ii) artificial intelligence for natural language processing and (iii) brain science and cognitive psychology.
Hassani and colleagues [C1] reviewed text mining techniques for understanding features of large volumes of text with the assistance of quantitative AI techniques. The authors also reviewed cutting-edge methods for understanding text sentiment (valence for psychologists), i.e., pleasantness/displeasure as expressed in language. The review critically covered the many advances in the field and underlined the need for novel cognitively inspired methods, building a bridge between words in texts and ideas in the mind.
Stella and colleagues [C2] introduced network-based methods for identifying not only sentiment but also emotions in social media data. Focusing on the Italian twittersphere in the aftermath of the first COVID-19 lockdown, the authors reconstructed online stances towards COVID-19-related hashtags and their emotional profiles. Emotional states as complex as trust, fear and anger were found to surround the same hashtag in different ways, according to contextual knowledge that was modelled as a cognitive network.
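A heavily simplified sketch of the general idea, not the authors' pipeline: hashtags co-occurring in the same tweet are linked, and a miniature emotion lexicon attaches an emotional profile to each hashtag. All tweets, hashtags and lexicon entries below are invented.

```python
# Toy hashtag co-occurrence network with per-hashtag emotional profiles.
from itertools import combinations
from collections import Counter
import networkx as nx

tweets = [
    {"hashtags": ["#lockdown", "#italy"], "words": ["fear", "stay", "home"]},
    {"hashtags": ["#lockdown", "#covid19"], "words": ["trust", "doctors"]},
]
emotion_lexicon = {"fear": "fear", "trust": "trust"}   # toy NRC-style lexicon

G = nx.Graph()
profiles = {h: Counter() for t in tweets for h in t["hashtags"]}
for t in tweets:
    for a, b in combinations(t["hashtags"], 2):        # co-occurring hashtags
        weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)
    for word in t["words"]:                            # emotions around them
        if word in emotion_lexicon:
            for h in t["hashtags"]:
                profiles[h][emotion_lexicon[word]] += 1

print(G.edges(data=True))
print(profiles["#lockdown"])   # e.g. Counter({'fear': 1, 'trust': 1})
```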
Pano and Kashef [C3] also worked on COVID-19 tweets, but from the perspective of monitoring conversations explicitly related to bitcoin. The authors tested 13 strategies for correlating textual data with bitcoin prices and identified a list of methodological working assumptions and limitations affecting predictions, showcasing a link between social discourse and price fluctuations, but only over short time spans.
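The paper's use of VADER suggests the following naive sketch of one such strategy: score each tweet's compound sentiment, average it per day and correlate the result with the daily price. The tweets and prices below are invented placeholders, and the correlation over three toy days is purely illustrative.

```python
# One naive VADER-based strategy: mean daily compound sentiment vs. price.
import numpy as np
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

daily_tweets = {   # invented placeholder tweets
    "2020-04-01": ["BTC to the moon!", "selling everything, this is bad"],
    "2020-04-02": ["great day for bitcoin", "bullish on BTC"],
    "2020-04-03": ["worried about this dip", "BTC looks weak today"],
}
daily_price = {"2020-04-01": 6666.0, "2020-04-02": 6793.0,
               "2020-04-03": 6733.0}   # invented placeholder prices (USD)

analyzer = SentimentIntensityAnalyzer()
days = sorted(daily_tweets)
sentiment = [np.mean([analyzer.polarity_scores(t)["compound"]
                      for t in daily_tweets[d]]) for d in days]
prices = [daily_price[d] for d in days]

# Pearson correlation between mean daily sentiment and price.
print(np.corrcoef(sentiment, prices)[0, 1])
```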
Prakash and colleagues [C4] used human-centred machine learning to predict the efficacy of treatment from networks built from brain data. The authors showed how recurrent neural networks were able to learn network features and achieve an accuracy of almost 78% in correctly classifying individuals according to their self-perceived efficacy of treatment. These findings open novel avenues for measuring psychological constructs from brain data, contributing to bridging the brain and mind aspects of human cognition.
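The sketch below illustrates the general recipe rather than the authors' implementation: an EEG-like correlation matrix is thresholded into a graph, graph-theory features are extracted and passed to a classifier. The random "EEG" data and the logistic-regression stand-in (the paper used recurrent neural networks) are explicit assumptions.

```python
# Hedged sketch: graph-theory features from EEG-like data, then classification.
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def graph_features(eeg_window, threshold=0.3):
    corr = np.corrcoef(eeg_window)                  # channel-by-channel correlation
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    G = nx.from_numpy_array(adj)
    return [nx.density(G),
            nx.average_clustering(G),
            nx.global_efficiency(G)]

# 40 fake trials: 16 "EEG channels" x 256 samples each, with random labels.
X = np.array([graph_features(rng.normal(size=(16, 256))) for _ in range(40)])
y = rng.integers(0, 2, size=40)                     # placeholder class labels

clf = LogisticRegression().fit(X, y)                # stand-in for an RNN
print(clf.score(X, y))
```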
Sermet and Demir [C5] outlined how cognitive, textual and social data might be combined in the AI pipeline of smart assistants, i.e., AI systems that extract insights from input data, predict trends and manage conversations in natural language. The authors underlined how their cognitive computing approach enabled reusability and reliability and also discussed the relevance of their smart assistant in managing COVID-19 health data.
Sboev and colleagues [C6] introduced a context-dependent framework that incorporates contextual semantic features of concepts in texts in an interpretable way and in synergy with pre-existing transformer networks. The authors' approach enables natural language processing where explicit features of language and context are both accessible to experimenters, improving model interpretability as well as performance. Deploying their architecture on Russian-language reviews of medications, the authors report on the importance of contextual features for accurate predictions.
Fatima and colleagues [C7] used network features and recurrent neural networks to predict psychological constructs, i.e., depression, anxiety and stress. The authors used emotional recalls and psychometric data to train an AI to spot depression, anxiety and stress levels from word combinations. Their cognitive embedding assessed word centrality and semantic distance in a network representation of associative knowledge among roughly 36,000 English words. The authors validated the AI on a set of suicide notes through the circumplex model of affect.
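A minimal illustration of this feature idea, not the authors' code: shortest-path distances in an associative word network from a person's recalled words to target concepts serve as interpretable inputs for a downstream model. The tiny network and word lists below are hypothetical stand-ins for the roughly 36,000-word network used in the paper.

```python
# Toy semantic-distance features from emotional recalls.
import networkx as nx

# Hypothetical associative word network.
G = nx.Graph([("sad", "cry"), ("cry", "alone"), ("worry", "exam"),
              ("exam", "stress"), ("alone", "stress"), ("happy", "smile")])

targets = ["sad", "worry", "stress"]      # hypothetical target concepts
recalled = ["alone", "exam", "cry"]       # hypothetical emotional recall

def mean_distance(word, targets, graph):
    dists = [nx.shortest_path_length(graph, word, t)
             for t in targets if nx.has_path(graph, word, t)]
    return sum(dists) / len(dists) if dists else float("inf")

# One interpretable feature per recalled word; these would feed the ML model.
features = {w: mean_distance(w, targets, G) for w in recalled}
print(features)
```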
Nilsson and colleagues [C8] used CNs to model the mental lexicon of children with typical development and adolescents with intellectual disabilities. The authors found that adolescents with intellectual disabilities produced less modular, more clustered and less spread-out layouts of conceptual associations than children with typical development. The authors also discussed the interpretation of these differences and the potential role played by context and education.
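The network measures mentioned here can be computed directly with networkx, as in the sketch below; the two small-world graphs are placeholders standing in for the semantic networks estimated from each group's data.

```python
# Compare modularity, clustering and average shortest path length (ASPL)
# between two toy graphs standing in for group-level semantic networks.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def describe(G, label):
    comms = greedy_modularity_communities(G)
    print(label,
          "modularity=%.2f" % modularity(G, comms),
          "clustering=%.2f" % nx.average_clustering(G),
          "ASPL=%.2f" % nx.average_shortest_path_length(G))

group_a = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=1)  # placeholder
group_b = nx.connected_watts_strogatz_graph(30, 6, 0.1, seed=1)  # denser, more clustered

describe(group_a, "group A:")
describe(group_b, "group B:")
```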
Dresp-Langley [C9] used network features related to connectivity, resilience and information processing as explorative dimensions of self-organisation, i.e., the ability of a system to evolve dynamically towards a working conformation. The author showed how brain networks evolve towards self-organisation while minimising system complexity and enhancing resilience and adaptiveness, with implications also for cognitive computing.
Vitevitch and colleagues [C10] used numerical simulations to bridge CN structure and language processing. The authors explored three representations of human memory based on different phonological similarities between concepts. Simulations showed how activation spreading across network links can account for many effects observed in empirical experiments on phonotactic knowledge and spoken word recognition. Their work underlines how cognitive networks can effectively model and test processes related to language understanding and use.
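A hedged, simplified sketch of spreading activation on a phonological network: activation starts at a heard word and diffuses along similarity links for a few steps, with a retention parameter controlling how much stays put. The words, edges and parameter values are toy choices, not the simulation settings used in the paper.

```python
# Simplified spreading-activation dynamics on a toy phonological network.
import networkx as nx

G = nx.Graph([("speech", "peach"), ("peach", "beach"), ("beach", "bead"),
              ("speech", "speed"), ("speed", "bead")])

activation = {n: 0.0 for n in G}
activation["speech"] = 1.0           # the word being recognised
retention = 0.5                      # fraction of activation kept per step

for _ in range(3):                   # three spreading steps
    new = {n: retention * a for n, a in activation.items()}
    for n, a in activation.items():
        spread = (1 - retention) * a / max(G.degree(n), 1)
        for nb in G.neighbors(n):
            new[nb] += spread
    activation = new

# Higher final activation for phonological neighbours of "speech".
print(sorted(activation.items(), key=lambda x: -x[1]))
```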
Siew and colleagues [C11] adopted CNs to investigate the presence of stereotypical socio-cognitive representations of gender roles in Western movies from 1940 to 2019. The authors used word co-occurrences in movie synopses to capture syntactic relationships and semantic frames, blending natural language processing and cognitive network science methods. Their analysis identified the prevalence of stereotypical representations of female characters, who were more entrenched in family and romance jargon than their male counterparts. This approach opens new ways to quantify gender stereotypes as represented in cultural products.
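A toy sketch of the co-occurrence idea: words appearing in the same synopsis are linked, and the neighbourhoods of gendered terms are compared against a small family/romance word list. The synopses, word lists and the crude share measure below are invented for illustration and are far simpler than the authors' analysis.

```python
# Toy word co-occurrence network from invented synopses, plus a crude
# measure of how much gendered terms connect to family/romance words.
from itertools import combinations
import networkx as nx

synopses = ["she falls in love and plans her wedding",
            "he leads the mission and fights the enemy",
            "she cares for her family during the war"]
family_romance = {"love", "wedding", "family"}   # toy stereotype word list

G = nx.Graph()
for sentence in synopses:
    for a, b in combinations(set(sentence.split()), 2):
        weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

def stereotype_share(term):
    nbrs = set(G.neighbors(term)) if term in G else set()
    return len(nbrs & family_romance) / max(len(nbrs), 1)

print("she:", stereotype_share("she"), "he:", stereotype_share("he"))
```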
Overall, our SI demonstrates the strengths and great potential of converging network science, data science, natural language processing, machine learning and artificial intelligence to study knowledge representation and related phenomena. Human knowledge is a complex system that, traditionally, could only be examined indirectly. Rapid advances in computational and analytical methodologies are quickly deepening our understanding of its complexity. We hope that our SI is one step forward in such a direction, a direction that harnesses state-of-the-art computational tools in the quest to better understand human knowledge.

List of Contributors:

C1. Hassani, H.; Beneki, C.; Unger, S.; Mazinani, M.T.; Yeganegi, M.R. Text Mining in Big Data Analytics. Big Data Cogn. Comput. 2020, 4, 1.
C2. Stella, M.; Restocchi, V.; De Deyne, S. #lockdown: Network-Enhanced Emotional Profiling in the Time of COVID-19. Big Data Cogn. Comput. 2020, 4, 14.
C3. Pano, T.; Kashef, R. A Complete VADER-Based Sentiment Analysis of Bitcoin (BTC) Tweets during the Era of COVID-19. Big Data Cogn. Comput. 2020, 4, 33.
C4. Prakash, B.; Baboo, G.K.; Baths, V. A Novel Approach to Learning Models on EEG Data Using Graph Theory Features—A Comparative Study. Big Data Cogn. Comput. 2021, 5, 39.
C5. Sermet, Y.; Demir, I. A Semantic Web Framework for Automated Smart Assistants: A Case Study for Public Health. Big Data Cogn. Comput. 2021, 5, 57.
C6. Sboev, A.; Selivanov, A.; Moloshnikov, I.; Rybka, R.; Gryaznov, A.; Sboeva, S.; Rylkov, G. Extraction of the Relations among Significant Pharmacological Entities in Russian-Language Reviews of Internet Users on Medications. Big Data Cogn. Comput. 2022, 6, 10.
C7. Fatima, A.; Li, Y.; Hills, T.T.; Stella, M. DASentimental: Detecting Depression, Anxiety, and Stress in Texts via Emotional Recall, Cognitive Networks, and Machine Learning. Big Data Cogn. Comput. 2021, 5, 77.
C8. Nilsson, K.; Palmqvist, L.; Ivarsson, M.; Levén, A.; Danielsson, H.; Annell, M.; Schöld, D.; Socher, M. Structural Differences of the Semantic Network in Adolescents with Intellectual Disability. Big Data Cogn. Comput. 2021, 5, 25.
C9. Dresp-Langley, B. Seven Properties of Self-Organization in the Human Brain. Big Data Cogn. Comput. 2020, 4, 10.
C10. Vitevitch, M.S.; Niehorster-Cook, L.; Niehorster-Cook, S. Exploring How Phonotactic Knowledge Can Be Represented in Cognitive Networks. Big Data Cogn. Comput. 2021, 5, 47.
C11. Kumar, A.M.; Goh, J.Y.Q.; Tan, T.H.H.; Siew, C.S.Q. Gender Stereotypes in Western Movies and Their Evolution over Time: Insights from Network Analysis. Big Data Cogn. Comput. 2022, 6, 50.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fensel, D. Ontologies; Springer: Berlin, Germany, 2001; pp. 11–18.
2. Stella, M. Text-mining forma mentis networks reconstruct public perception of the STEM gender gap in social media. PeerJ Comput. Sci. 2020, 6, e295.
3. Hills, T.T.; Kenett, Y.N. Is the mind a network? Maps, vehicles, and skyhooks in cognitive network science. Top. Cogn. Sci. 2022, 14, 189–208.
4. Siew, C.S.; Wulff, D.U.; Beckage, N.M.; Kenett, Y.N. Cognitive network science: A review of research on cognition through the lens of network representations, processes, and dynamics. Complexity 2019, 2019, 2108423.
5. Aggarwal, C.C. Neural Networks and Deep Learning; Springer: San Francisco, CA, USA, 2018; Volume 10, p. 978-3.
6. Stella, M.; Kenett, Y.N. Viability in multiplex lexical networks and machine learning characterizes human creativity. Big Data Cogn. Comput. 2019, 3, 45.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
