Communication

Using LLM to Identify Pillars of the Mind Within Physics Learning Materials

Faculty of Mathematics, Physics and Informatics, Comenius University Bratislava, 842 48 Bratislava, Slovakia
* Author to whom correspondence should be addressed.
Digital 2025, 5(4), 47; https://doi.org/10.3390/digital5040047
Submission received: 14 July 2025 / Revised: 26 September 2025 / Accepted: 29 September 2025 / Published: 2 October 2025
(This article belongs to the Collection Multimedia-Based Digital Learning)

Abstract

Artificial intelligence tools are quickly being applied in many areas of science, including the learning sciences. Learning requires various types of thinking, sustained by distinct sets of neural networks in the brain. Labelling these systems gives us tools to manage them. This paper presents a pilot application of Large Language Models (LLMs) to physics textbook analysis, grounded in a well-developed neural network theory known as the Five Pillars of the Mind. The domain-specific networks, innate sense, and the five pillars provide a framework with which to examine how physics is learnt. For example, one can identify which pillars are active when discussing a physics concept. However, identifying which pillars belong to which physics concept may be significantly influenced by the analyst's bias, and manual identification can be too time-consuming for longer, more complex texts involving physics concepts. Using LLMs to identify pillars could therefore enhance the application of this framework to physics education. This article presents a case study in which we used selected Large Language Models to identify pillars within eight pages of learning material concerning forces, aimed at 12- to 14-year-old pupils. We used ChatGPT 4o and ChatGPT o4-mini, as well as MAXQDA AI Assist. Results from these models were compared with the authors' manual analysis. Precision, recall, and F1-Score were used to evaluate the results quantitatively. MAXQDA AI Assist obtained the best results, with a precision of 1.00, a recall of 0.67, and an F1-Score of 0.80. Both OpenAI models hallucinated and falsely identified several concepts, resulting in low precision and, consequently, a low F1-Score. As expected, the reasoning model ChatGPT o4-mini scored roughly twice as high as ChatGPT 4o. The method proved promising, and its future development has the potential to provide research teams with analyses not only of written learning materials, but also of pupils' written work and their video-recorded activities.

1. Introduction

Educational sciences are significantly influenced by technological advancements, in relation to both imaging methods and artificial intelligence. The development of artificial intelligence has enabled the everyday use of Large Language Models by the broad public. This development substantially changes everyone's daily activities, including those of teachers and pupils. Researchers are also challenged to use it wisely, to maintain academic honesty, and to consider the complexity of truth [1,2]. The development of neuroimaging methods in the late 20th century provided a technical basis for the emergence of new fields of study examining the brain, such as cognitive neuroscience [3]. This field studies the mutual influence of cognitive development and brain development. Understanding this mutual influence, along with the processes occurring in the brain during learning, enables a deeper comprehension of learning itself and a verification of many theories within cognitive science. As an example of a study within cognitive neuroscience, we present the work of Mason, Schumacher, and Just [4], who identified the brain locations associated with 45 physics concepts, ranging from basic to advanced. Interestingly, the same parts of the brain are active when thinking about concepts as different as wavelength, acceleration, dark matter, and cosmology; all of these are measurable quantities. New neuroimaging methods have enhanced the quality of this research, fulfilling people's natural desire to know themselves better.
An example of the analysis and synthesis of outcomes from studies within cognitive neuroscience is the theory of the Five Pillars of the Mind [5], which was formulated on the basis of over 2000 neuroscientific studies. One of the roots of this theory was the finding that symbolic representations share the same or similar pathways in the brain [6]. This theory provides a framework that can be used to look at learning in general. Each pillar represents a different neural network that one uses and develops throughout one's life. The author defines these pillars (symbols, patterns, order, categories, and relationships) using examples from language and math. By definition, there is no hierarchy between the pillars. Symbols represent objects, processes, feelings, meanings, or functions. Patterns are models, recurring designs, or organisational structures used to guide people in completing tasks, such as work patterns, school schedules, good (or bad) behaviour, and other routines. Order is the organisation or disposition of things or people in relation to each other based on a specific arrangement, method, direction, or structure [5]. Categories are divisions and classifications of things that share qualities, while relationships express connections, relations, links, or associations between two or more objects, people, or concepts [5]. The theory of the Five Pillars of the Mind proved to be useful; it was developed further in the book Writing, Thinking, and the Brain [7].
In our previous work, we identified examples of pillars in physics using concepts from fluid mechanics [8]. Prior to that study, we analysed selected parts of Slovak and international physics textbooks concerning the inclined plane and related physics concepts, such as forces or Newton's laws, to identify pillars that might be developed with them. We focused on identifying pillars that could be developed with key physics concepts discussed within learning materials. Table 1 and Figure 1 present a summary of the results, illustrating the contribution of various physics concepts to the development of each pillar within the discussed topic. The percentages in parentheses indicate the proportion of concepts developing a particular pillar out of the total number of discussed concepts.
Within the topic of inclined planes, 14 key physics concepts were identified, aimed at 11- to 16-year-olds, whereas for fluid mechanics, 11 key physics concepts were identified, aimed at 15- to 16-year-olds. These results lead us to hypothesise that we primarily develop symbols and relationships when discussing physics concepts. However, the identification of pillars can be strongly biased, depending on the knowledge and prior experience of the person identifying them [9,10]. This bias could be reduced by using Large Language Models [11] or a larger research team.
Within this article, we explore what is technically possible, while also considering what makes pedagogical sense [12]. We explore the possibilities of using selected Large Language Models to identify key pillars. ChatGPT is an artificial intelligence chatbot that has been trained using Reinforcement Learning from Human Feedback [13]. Its conversational format accommodates multiple meanings and multiple concepts, which is important in learning material [14]. OpenAI states that ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers and that there are biases in the training data [15]. Another OpenAI product, ChatGPT o4-mini, is the latest small o-series model, optimised for quick and effective reasoning. Reasoning models are supposed to excel in "complex problem solving, coding, scientific reasoning, and multi-step planning for agentic workflow" [15]. ChatGPT 4o and ChatGPT o4-mini both accept text and image inputs, and, compared to ChatGPT o4-mini, ChatGPT 4o benefits from very precise instructions [16]. Reasoning models such as ChatGPT o4-mini are supposed to execute tasks with high accuracy and precision, which makes them suitable for domains requiring expertise [17]. We decided to use ChatGPT 4o and ChatGPT o4-mini because of these specifications and because they are widely used Large Language Models, available free of charge for a limited number of requests per day.
MAXQDA is software used for qualitative data analysis, and AI Assist has recently been added to its functions. Integrating a large language model into this software enables researchers to summarise uncoded and coded data, suggest codes and subcodes, code documents, and interact with documents or coded segments [18]. AI Assist utilises a deterministic model, which has limited creativity and fewer hallucinations, as the results are derived solely from the uploaded data corpus. Hallucinations may occur once a researcher's queries extend beyond the dataset. It has been developed specifically for qualitative data analysis, which makes it more suitable for this task than general-purpose generative language models [19].

2. Methodology

2.1. Task Definition and Dataset Preparation

The LLMs were asked to identify specific objects within the given learning material. We chose a standard physics textbook [20] that has been used in many curriculum designs over the years. We selected an eight-page section of the textbook (pp. 8–15) concerning forces, especially the introduction to forces, aimed at children aged 12 years. The first part of the learning material discusses what a force is, how it can be represented, the types of forces that exist, and how forces are measured. The second part explains balanced, unbalanced, and resultant forces, presenting Newton's Laws of Motion without explicitly naming them. The third part discusses friction, how it can be reduced, and how it can be useful. The fourth and final part covers gravity, explaining the link between gravity, mass, and weight, as well as how one's weight differs on different planets. The whole learning material contains only one equation [20]:
Weight = mass × gravitational field strength
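As a worked illustration (our own numbers, not taken from the textbook): a pupil with a mass of 50 kg on Earth, where the gravitational field strength is approximately 9.8 N/kg, has a weight of

\[ W = m \times g = 50\ \mathrm{kg} \times 9.8\ \mathrm{N/kg} = 490\ \mathrm{N}, \]

whereas on the Moon, where g ≈ 1.6 N/kg, the same pupil would weigh only about 80 N. This is the sense in which the learning material explains that one's weight differs on different planets.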
As already noted, we used ChatGPT 4o, ChatGPT o4-mini, and MAXQDA24 AI Assist to identify pillars. The preparation of the dataset that was later evaluated is shown in Figure 2. ChatGPT 4o and ChatGPT o4-mini, both available free of charge for a limited number of daily requests, were assigned the same task. The process began with uploading the document for analysis, followed by the provision of a detailed explanation of the pillars as defined by Tokuhama-Espinosa [5]. The models were then instructed to identify the pillars within the attached learning material. Upon receiving the initial outputs, we further asked the models to identify the physics concepts discussed within the learning material and to associate each concept with the corresponding pillar(s) developed with that concept. MAXQDA AI Assist, an add-on to the MAXQDA24 software, was used in its free version. A MAXQDA24 project was created, the document was uploaded for analysis, and five codes, each representing one of the pillars, were defined. The definitions of Tokuhama-Espinosa [5] were used as the code descriptions. An example of a code definition in the MAXQDA24 software is displayed in Figure 3. Subsequently, the AI Assist coding function was employed by selecting the document to be coded and assigning one of the predefined codes. This step was repeated five times, each time selecting a different pillar code.
MAXQDA AI Assist can also be used to chat with the document. We utilised this feature to identify the physics concepts within the learning material and then coded the document accordingly. Consequently, we identified overlaps between the codes concerning concepts and the codes concerning pillars, thereby obtaining output in the form of concept–pillar associations.
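For readers who wish to script the ChatGPT part of this workflow rather than use the chat interface (as was done in this study), the following is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, file name, and pillar-definition placeholder are illustrative assumptions, not the exact prompts used in the study.

```python
# Minimal sketch of the prompting workflow via the OpenAI Python SDK.
# Assumptions: the study used the ChatGPT web interface; the model name,
# prompts, and file name below are illustrative, not the study's own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder for Tokuhama-Espinosa's definitions of the five pillars.
PILLAR_DEFINITIONS = "..."

# Plain-text export of the analysed textbook section (hypothetical file).
with open("forces_pp08-15.txt", encoding="utf-8") as f:
    learning_material = f.read()

messages = [
    {
        "role": "system",
        "content": (
            "You analyse physics learning materials using the theory of the "
            "Five Pillars of the Mind. Use these definitions of the pillars:\n"
            + PILLAR_DEFINITIONS
        ),
    },
    {
        "role": "user",
        "content": (
            "Identify the physics concepts discussed in the following "
            "learning material and, for each concept, list the pillar(s) "
            "it develops.\n\n" + learning_material
        ),
    },
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```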

2.2. Evaluation Criteria and Comparative Framework

We present a comparison and evaluation of the performance of two different categories of AI models: OpenAI products and a MAXQDA product. Three widely used evaluation metrics [21,22] were applied to assess their performance: precision, recall, and F1-Score (defined after the list below). To compare the models' performance on this specific task, the following comparison fields were chosen:
(a) Physics concepts identified within the learning material.
(b) Pillars assigned to the identified physics concepts.
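For reference, with TP, FP, and FN denoting true positives, false positives, and false negatives with respect to the authors' reference identification, the three metrics are defined in the standard way:

\[
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}.
\]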

2.3. Authors’ Identification

The authors' identification of pillars and concepts was used as a reference document. The authors first identified the pillars independently using the MAXQDA24 software. These identifications were compared and discussed until a unified identification was obtained. The same protocol was followed for the identification of the concepts developed within the learning material. Overlapping coded segments for pillars and concepts were identified, and a summary table of the authors' identification was created. An example of overlapping coded segments for pillars and concepts is displayed in Figure 4. The red-highlighted segment is coded with the concept code "Force", and within this segment, the pillar codes "Symbols" and "Relationships" were applied. This means that within this segment the concept of force is discussed, and the pillars symbols and relationships are potentially developed.
The authors identified 13 physics concepts (see Table 2) discussed throughout the entire learning material. Table 2 presents a contingency table with physics concepts as rows and Pillars of the Mind as columns. The table also summarises the number of coded segments for each pillar and the percentage of the total coded segments that overlap with physics concept codes. It is important to note that the overall number of coded segments for the pillars was higher than shown in Table 2, because not every coded pillar segment overlaps with a physics concept code. For example, the structural element Objectives (Figure 4) was coded with the code "Order", as it provides a structure to guide pupils through the learning material.
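The percentage row in Table 2 is consistent with each pillar's share of all 92 overlapping coded segments; for example, for symbols,

\[ \frac{27}{92} \approx 0.29, \]

and analogously 9/92 ≈ 0.10 for patterns, 7/92 ≈ 0.08 for order, 20/92 ≈ 0.22 for categories, and 29/92 ≈ 0.32 for relationships.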
Figure 5 presents a bar chart and a pie chart illustrating the authors’ identification of the intersection between coded segments for pillars and physics concepts. The graphs indicate that symbols and relationships are significantly more developed than patterns and order, consistent with the other learning materials mentioned above. According to the authors’ analysis, categories are developed more frequently within this learning material than in the above materials.

3. Results

This study examined the performance of selected LLMs in identifying physics concepts and Pillars of the Mind within a section of learning material. The best performance in identifying physics concepts (see Table 3) was achieved by MAXQDA AI Assist, with a precision of 1.00, a recall of 0.667, and an F1-Score of 0.80. ChatGPT 4o and ChatGPT o4-mini falsely identified several physics concepts, leading to a precision of 0.692 for ChatGPT o4-mini and only 0.308 for ChatGPT 4o. ChatGPT 4o identified 13 concepts, but only four of them were correct. A mention of electric and magnetic forces as different types of forces caused ChatGPT 4o to identify Ohm's Law and electromagnetic induction as physics concepts discussed in the material. ChatGPT 4o also identified wave motion and experimental error analysis as discussed concepts, although neither these terms nor any related concepts appear in the learning material. ChatGPT o4-mini likewise identified 13 concepts, of which four were incorrect: terminal velocity, circular motion, Hooke's Law, and scientific investigation. Although these concepts were not discussed, some sections of the material are linked to them; for example, measurement of force with a force meter was mentioned, but no experiments were described, and tension in a rope was mentioned, but Hooke's Law was not discussed.
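As a quick consistency check, the F1-Scores in Table 3 follow from the reported precision and recall values; a minimal sketch in Python (the function name is ours):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# (precision, recall) as reported in Table 3
models = {
    "ChatGPT 4o": (0.308, 0.364),
    "ChatGPT o4-mini": (0.692, 0.600),
    "MAXQDA AI Assist": (1.000, 0.667),
}
for name, (p, r) in models.items():
    print(f"{name}: F1 = {f1_score(p, r):.3f}")
# ChatGPT 4o: F1 = 0.334  (Table 3 gives 0.333, computed from unrounded inputs)
# ChatGPT o4-mini: F1 = 0.643
# MAXQDA AI Assist: F1 = 0.800
```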
Another task assigned to the LLMs was pillar identification. They were asked to assign pillars to the previously identified physics concepts discussed in the learning material. The results of all LLMs were seemingly consistent in the distribution of physics concepts per pillar. Figure 6 displays a bar graph presenting the number of identified physics concepts associated with each of the Five Pillars of the Mind, as determined by the various models. The models compared include two summaries from the same version of ChatGPT 4o, MAXQDA AI Assist, ChatGPT o4-mini, and the authors' manual analysis. Order was the most consistently identified pillar; categories, on the other hand, showed the highest variability. It is important to note that ChatGPT 4o 1 and ChatGPT 4o 2 are two distinct summaries from the same version of ChatGPT 4o, with different results.
The conclusions drawn from this comparison are as follows:
  I. Relationships are, according to the LLMs, the most frequently developed pillar.
  II. Order is mostly identified in the structure of the learning material.
  III. The authors' manual identification and the LLMs' identification are not consistent.
  IV. ChatGPT is not consistent when performing a task several times.
Conclusion IV is best illustrated in Figure 7, which displays the two summaries of ChatGPT 4o as a bar plot of the distribution of physics concepts developed per pillar. This graph was obtained solely from the exact answers of ChatGPT 4o.
To examine this inconsistency further, we calculated, for each pillar, the odds of its identification in both summaries, and plotted the calculated odds (see Figure 8). According to the odds, the highest inconsistency is for relationships: in summary 1, the odds for relationships are 0.33, whereas in summary 2 they are 0.53. Considering the odds, ChatGPT 4o is most consistent when identifying patterns.
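The computation of the odds is not spelled out above; assuming odds in the standard sense, for a pillar assigned to k of the n identified physics concepts,

\[ \mathrm{odds} = \frac{p}{1 - p} = \frac{k}{n - k}, \qquad p = \frac{k}{n}, \]

so, under this reading, the odds of 0.33 for relationships in summary 1 correspond to p ≈ 0.25 (about a quarter of the concepts), and the odds of 0.53 in summary 2 correspond to p ≈ 0.35.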

4. Conclusions

The main objective of this study was to evaluate the performance of LLMs in a specific task: the identification of pillars and physics concepts in a section of learning material. We selected ChatGPT 4o, ChatGPT o4-mini, and MAXQDA AI Assist to perform this task. The last two performed consistently when identifying physics concepts, but MAXQDA AI Assist scored higher, with a precision of 1.0, a recall of 0.67, and an F1-Score of 0.80. ChatGPT o4-mini scored roughly twice as high as ChatGPT 4o, with a precision of 0.692, a recall of 0.60, and an F1-Score of 0.643. Comparison of the identification of pillars reveals that relationships are the most developed pillar in physics learning materials, with order mostly identified in the structure of the learning material that guides pupils through it. Human manual analysis of learning materials is more detailed; pillars are assigned to smaller text segments, leading to deviations from the LLM results. While this study provides valuable insight into the use of LLMs for research on physics education, it has some limitations. The relatively small sample size (one section of learning material) may limit the generalisability of the findings, particularly any generalisation about the most developed pillars within physics education. Further studies with larger and more diverse samples are needed to validate and extend the results presented in this paper. Despite these limitations, the findings contribute important knowledge on the use of LLMs for physics education research and offer a foundation for a method we are developing for the effective and precise identification of Pillars of the Mind within physics learning materials. While most of the research on educational applications of AI focuses on pupils' work, teaching, and learning [23,24], we are trying to use AI to research the structure of thinking and the development of textbooks, an area that is still in its early stages. The study 'The Vision of University Students from the Educational Field in the Integration of ChatGPT' [25] shows that students do not yet envision this kind of analysis.
The method of utilising LLMs described in this article was also discussed with T. Tokuhama-Espinosa (Harvard University), author of Five Pillars of the Mind (Figure 9), who found this approach interesting and promising.
Such a method will provide research teams with more accurate and time-efficient analysis not only of learning materials, but also of pupils' answers, lab work, and recordings of lessons, all of which are necessary for theoretically grounded research on applying cognitive neuroscience to physics education. In our research, a "pillar map" of the learning material is a map that can be used by the author of the learning material or by an expert analysing such material. However, to improve metacognition, it can also be prepared in a form accessible to teachers [26] and learners. An impact can be forecast in a more efficient, brain-friendly curriculum, and also in connection with Self-Regulated Learning in its new form emerging through ChatGPT [27]. By making the brain-friendly cognitive architecture of a topic explicit, learners can better plan, monitor, and evaluate their own progress. One example of a textbook that attempts this is Sciences by P. Morris and P. Deo [28]. In the near future, we may connect the theory of the Five Pillars of the Mind with the theory of phenomenological primitives, which A. diSessa proposed some decades ago [29]; however, without powerful digital technologies, it has not been possible to apply this theory broadly. For an examination of the development of neuroconstructivism, we refer the reader to one of the first books on this topic, The Natural History of the Mind by G.R. Taylor [30].
Of course, AI must be implemented carefully in the educational context [31]. The importance of trust in the utilisation of AI in educational research is well discussed in [32].

Author Contributions

Conceptualization, P.D. and D.Č.; methodology, D.Č.; software, P.D.; validation, P.D. and D.Č.; formal analysis, P.D.; investigation, D.Č.; resources, P.D. and D.Č.; data curation, D.Č.; writing—original draft preparation, P.D. and D.Č.; writing—review and editing, P.D.; visualization, P.D. and D.Č.; supervision, P.D.; project administration, P.D.; funding acquisition, P.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the EU NextGenerationEU through the Recovery and Resilience Plan for Slovakia under the project No. 09I03-03-V04-00093.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author, as the textbook analysed is subject to copyright.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Marx, V. Quest for AI literacy. Nat. Methods 2024, 21, 1412–1415.
  2. Birhane, A.; Kasirzadeh, A.; Leslie, D.; Wachter, S. Science in the age of large language models. Nat. Rev. Phys. 2023, 5, 277–280.
  3. Carew, T.; Magsamen, S. Neuroscience and education: An ideal partnership for producing evidence-based solutions to guide 21st century learning. Neuron 2010, 67, 685–688.
  4. Mason, R.A.; Schumacher, R.A.; Just, M. The Neuroscience of Advanced Scientific Concepts. npj Sci. Learn. 2021, 6, 29.
  5. Tokuhama-Espinosa, T. Five Pillars of the Mind: Redesigning Education to Suit the Brain; W. W. Norton: New York, NY, USA, 2019.
  6. Dehaene, S. Reading in the Brain: The New Science of How We Read; Penguin Books: London, UK, 2009.
  7. Tokuhama-Espinosa, T.; Nazareno, J.R.S.; Rappleye, C. Writing, Thinking, and the Brain; Teachers College Press: New York, NY, USA, 2025.
  8. Červeňová, D.; Demkanin, P. The theory of Five Pillars of the Mind and Physics Education. J. Phys. Conf. Ser. 2025, 2950, 012009.
  9. Elster, A.; Sagiv, L. Personal Values and Cognitive Biases. J. Personal. 2024.
  10. Kakinohana, R.K.; Pilati, R. Differences in decisions affected by cognitive biases: Examining human values, need for cognition, and numeracy. Psicol. Reflex. Crit. 2023, 36, 26.
  11. Khan, S.M.F.A.; Shehawy, Y.M. Perceived AI Consumer-Driven Decision Integrity: Assessing Mediating Effect of Cognitive Load and Response Bias. Technologies 2025, 13, 374.
  12. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39.
  13. Lo, C.K. What Is the Impact of ChatGPT on Education? A Rapid Review of the Literature. Educ. Sci. 2023, 13, 410.
  14. Rospigliosi, P. Artificial intelligence in teaching and learning: What questions should we ask of ChatGPT? Interact. Learn. Environ. 2023, 31, 1–3.
  15. OpenAI. Introducing ChatGPT. Available online: https://openai.com/index/chatgpt/ (accessed on 20 April 2025).
  16. OpenAI. Reasoning Guide. Available online: https://platform.openai.com/docs/guides/reasoning?api-mode=responses (accessed on 20 April 2025).
  17. OpenAI. Reasoning Best Practices. Available online: https://platform.openai.com/docs/guides/reasoning-best-practices (accessed on 20 April 2025).
  18. VERBI Software. MAXQDA 24 Online Manual. 2023. Available online: https://www.maxqda.com/download/manuals/MAX24-Online-Manual-Complete-EN.pdf (accessed on 20 April 2025).
  19. Kuckartz, U.; Rädiker, S. Integrating AI in Qualitative Content Analysis. 2024. Available online: https://qca-method.net/documents/Kuckartz-Raediker-2024-Integrating-AI-in-Qualitative-Content-Analysis.pdf (accessed on 1 May 2025).
  20. Reynolds, H. Complete Physics for Cambridge Secondary 1; Oxford University Press: Oxford, UK, 2013; ISBN 9780198394426.
  21. Li, I.; Fabbri, A.R.; Tung, R.R.; Radev, D.R. What should I learn first: Introducing LectureBank for NLP education and prerequisite chain learning. Proc. AAAI Conf. Artif. Intell. 2019, 33, 6674–6681.
  22. Reales, D.; Manrique, R.; Grévisse, C. Core concept identification in educational resources via knowledge graphs and large language models. SN Comput. Sci. 2024, 5, 1029.
  23. Zhu, W.; Wei, L.; Qin, Y. Artificial Intelligence in Education (AIEd): Publication Patterns, Keywords, and Research Focuses. Information 2025, 16, 725.
  24. Garzón, J.; Patiño, E.; Marulanda, C. Systematic Review of Artificial Intelligence in Education: Trends, Benefits, and Challenges. Multimodal Technol. Interact. 2025, 9, 84.
  25. Cebrián Cifuentes, S.; Guerrero Valverde, E.; Checa Caballero, S. The Vision of University Students from the Educational Field in the Integration of ChatGPT. Digital 2024, 4, 648–659.
  26. Tokuhama-Espinosa, T.; Nouri, A. Teachers' Mind, Brain, and Education Literacy: A Survey of Scientists' Views. Mind Brain Educ. 2023, 17, 170–174.
  27. Ng, D.T.K.; Tan, C.W.; Leung, J.K.L. Empowering student self-regulated learning and science education through ChatGPT: A pioneering pilot study. Br. J. Educ. Technol. 2024, 55, 1328–1353.
  28. Morris, P.; Deo, P. Sciences. MYP by Concept 1; Hodder Education: London, UK, 2020; ISBN 9781471880377.
  29. diSessa, A.A.; Levin, M. Processes of Building Theories of Learning: Three Contrasting Cases. In Engaging with Contemporary Challenges Through Science Education Research; Levrini, O., Tasquier, G., Amin, T.G., Branchetti, L., Levin, M., Eds.; Contributions from Science Education Research; Springer: Cham, Switzerland, 2021; Volume 9.
  30. Taylor, G.R. The Natural History of the Mind; Elsevier-Dutton Publishing: New York, NY, USA, 1979; ISBN 0-525-16424-3.
  31. Bai, L.; Liu, X.; Su, J. ChatGPT: The cognitive effects on learning and memory. Brain-x 2023, 1, e30.
  32. Đerić, E.; Frank, D.; Milković, M. Trust in Generative AI Tools: A Comparative Study of Higher Education Students, Teachers, and Researchers. Information 2025, 16, 622.
Figure 1. Bar chart summarising identifications of pillars developed in learning materials within different topics (authors).
Figure 2. Dataset preparation design (authors).
Figure 3. Example of code definition (code memo) in MAXQDA software (authors).
Figure 4. MAXQDA24 coded segments overlap: example (authors).
Figure 5. Graphical representation of the authors' identification of physics concepts and Pillars of the Mind within the learning material.
Figure 6. A bar chart illustrating the distribution of physics concepts across pillars using various models and methods (authors).
Figure 7. Illustration of the inconsistency of ChatGPT 4o in the performance of a task: bar chart (authors).
Figure 8. Illustration of the inconsistency of ChatGPT 4o in the performance of a task: odds plot (authors).
Figure 9. Validation of the approaches used in this article; online, 21 August 2025 (authors).
Table 1. Summary of identifications of pillars developed in learning materials within different topics (authors).

| Pillar        | Discussed Topic | N (%)     |
| ------------- | --------------- | --------- |
| Symbols       | Inclined plane  | 7 (50.0%) |
|               | Fluid mechanics | 6 (54.5%) |
| Patterns      | Inclined plane  | 4 (28.6%) |
|               | Fluid mechanics | 3 (27.3%) |
| Order         | Inclined plane  | 4 (28.6%) |
|               | Fluid mechanics | 2 (18.2%) |
| Categories    | Inclined plane  | 4 (28.6%) |
|               | Fluid mechanics | 1 (9.1%)  |
| Relationships | Inclined plane  | 8 (57.0%) |
|               | Fluid mechanics | 8 (72.3%) |
Table 2. Authors' identification of physics concepts and Pillars of the Mind within the learning material.

| Concept                      | Symbols | Patterns | Order | Categories | Relationships |
| ---------------------------- | ------- | -------- | ----- | ---------- | ------------- |
| Gravitational field strength | 2       | 0        | 0     | 1          | 3             |
| Gravitational field          | 1       | 0        | 0     | 0          | 0             |
| Mass                         | 0       | 0        | 0     | 1          | 3             |
| 3rd Newton's Law of Motion   | 0       | 1        | 1     | 0          | 0             |
| 2nd Newton's Law of Motion   | 0       | 0        | 1     | 2          | 1             |
| 1st Newton's Law of Motion   | 4       | 4        | 1     | 3          | 4             |
| Measurement                  | 2       | 0        | 0     | 1          | 3             |
| Resistive forces             | 2       | 1        | 0     | 2          | 0             |
| Friction                     | 1       | 0        | 2     | 3          | 1             |
| Weight                       | 3       | 0        | 1     | 1          | 4             |
| Gravitational force          | 3       | 1        | 1     | 1          | 5             |
| Force                        | 9       | 2        | 0     | 5          | 5             |
| Sum                          | 27      | 9        | 7     | 20         | 29            |
| %                            | 0.29    | 0.10     | 0.08  | 0.22       | 0.32          |
Table 3. Evaluation metrics of the performance of selected LLMs in physics concept identification within the learning material.

| Model            | Precision | Recall | F1-Score |
| ---------------- | --------- | ------ | -------- |
| ChatGPT 4o       | 0.308     | 0.364  | 0.333    |
| ChatGPT o4-mini  | 0.692     | 0.600  | 0.643    |
| MAXQDA AI Assist | 1.000     | 0.667  | 0.800    |
