Higher Mathematics Education and AI Prompt Patterns: Examples from Selected University Classes
Abstract
1. Introduction and Literature Review
- RQ1: How do students evaluate AI-supported mathematics activities designed using structured prompt patterns (engagement, perceived usefulness, clarity of interaction, and reflection)?
- RQ2: How is students’ evaluation associated with prior experience using AI tools (regular vs. occasional/none)?
- RQ3: How do students’ perceptions differ between the two implementations (complex numbers vs. conditional probability; Applied Mathematics vs. Computer Science cohorts) when the same questionnaire is applied?
2. General Prompt Patterns
- Zero-Shot Prompting—AI responds without prior examples, testing its general knowledge and reasoning skills [10].
- Few-Shot Prompting—AI is provided with a few examples, allowing it to adapt the tone, structure, and reasoning of its response [11].
- Chain-of-Thought Prompting—AI reveals intermediate reasoning steps rather than only the final result, which enhances transparency in problem-solving [12].
- Persona-Based Prompting—AI assumes a specific role (e.g., teacher, student, historical figure, or coach) to shape its responses accordingly [7].
- Constraint-Based Prompting—The response is restricted by format or structure (e.g., bullet list, specific word limit) [12].
- Negative Prompting—Specifies what AI should avoid, such as jargon or overly technical language [13].
- Data-Driven Prompting—AI analyzes numerical data, tables, or code fragments to generate responses based on evidence [15].
- Game-Play Prompting—The interaction takes the form of a game, quiz, or challenge to increase engagement.
- Template Prompting—AI responses follow predefined templates for consistency and evaluation.
- Cognitive Verifier—AI not only generates an answer but also verifies the correctness of the reasoning.
- Adaptive Quiz Prompting—Questions dynamically adjust to the learner’s level of proficiency [16].
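As a concrete illustration of how several of these general patterns differ only in the way the same task is wrapped, the sketch below builds the corresponding prompt texts in Python. The task, the wording, and the pattern labels are our own illustrative choices, not a fixed specification; sending the prompts to a model is left to whatever chat client is actually used.

```python
# Illustrative construction of prompts following several general patterns.
# The task and wording are examples only; any chat-model client can send them.

task = "Compute (2 + 3i)(1 - i) and give the result in algebraic form."

prompts = {
    # Zero-shot: the task alone, with no examples.
    "zero-shot": task,

    # Few-shot: worked examples precede the task so the model imitates them.
    "few-shot": (
        "Example: (1 + i)(1 - i) = 1 - i + i - i^2 = 2\n"
        "Example: (2 + i) + (3 - 4i) = 5 - 3i\n"
        f"Now solve: {task}"
    ),

    # Chain-of-thought: explicitly request intermediate reasoning steps.
    "chain-of-thought": f"{task} Show every intermediate step before the final answer.",

    # Persona-based: a role that shapes tone and depth of the response.
    "persona": f"You are a patient university tutor. {task} Explain it to a first-year student.",

    # Constraint-based: restrict the output format.
    "constraint": f"{task} Answer in at most two sentences, ending with the result in the form a + bi.",

    # Negative: state what the model should avoid.
    "negative": f"{task} Avoid jargon and do not use matrix or exponential notation.",
}

for name, text in prompts.items():
    print(f"--- {name} ---\n{text}\n")
```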
2.1. Educational Prompt Patterns
- Knowledge Building: The first category includes patterns aimed at developing and organizing knowledge. Socratic Reversal involves the student taking on the role of the teacher and explaining the material to the AI; this reversal of roles promotes deeper understanding through verbalization and justification. The Feynman Prompt encourages the student to explain content in the simplest possible way, as if teaching a complete beginner, which helps verify the student’s level of understanding and identify gaps in knowledge. In the Gap Finder pattern, the AI analyzes the student’s responses and points out areas that require further work, serving a diagnostic function and supporting individualized learning. The final pattern in this category, Concept Contrast, consists of comparing concepts with similar meanings or functions, which helps develop analytical skills and supports more structured knowledge acquisition.
- Active Practice: The second category includes patterns designed to support the practical application of knowledge. In Show-Then-Do, the AI initially presents a correct example of a solution, after which the student completes a similar task independently, facilitating the transfer of knowledge into practice. Error Injection involves the AI deliberately introducing an error that the student must identify and correct, fostering self-correction skills and critical thinking. Test Me refers to the AI generating quizzes that assess student progress, reinforcing knowledge and supporting ongoing monitoring of learning. The Rewrite Challenge pattern asks the student to transform a correct solution into an equivalent alternative form, developing cognitive flexibility and deepening understanding of the content.
- Creative Thinking: The third category focuses on patterns that stimulate creativity. In Many Ways, the student looks for multiple possible approaches to solving a problem, encouraging divergent thinking. Analogy Builder involves creating analogies to explain complex or abstract concepts, thus supporting knowledge transfer and deeper comprehension. Debate Simulation presents two opposing viewpoints in the form of a simulated debate, allowing students to explore different perspectives and develop argumentation skills.
- Reflection and Metacognition: The final category encompasses patterns that support self-reflection and conscious engagement with the learning process. Confidence Check asks the student to assess their confidence in an answer, after which the AI provides commentary, helping to develop cognitive awareness and self-regulation. In AI-as-Coach, the AI acts as a mentor, helping the student plan their learning and develop strategies to improve their study habits. Reverse Role Play involves the student acting as the teacher while the AI takes the role of the student, promoting reflection and enhancing the student’s ability to explain material clearly. Finally, Learning Diary encourages students to keep a reflective journal with AI support, fostering systematic self-reflection and the development of metacognitive competencies.
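To show how these student-facing patterns can be operationalized in practice, the sketch below stores a few of them as prompt templates keyed by pattern name and fills in a topic. The template wording is our own example, not a prescribed formulation; instructors would adapt it to their course.

```python
# Illustrative prompt templates for selected educational patterns.
# The wording is an example only and would be adapted to the course.

EDU_PATTERNS = {
    "Socratic Reversal": (
        "I will now act as the teacher. Ask me to explain {topic} to you, "
        "and keep asking 'why?' until my explanation is complete and justified."
    ),
    "Feynman Prompt": (
        "Explain {topic} as if I had never seen it before, using only everyday "
        "language and one concrete example."
    ),
    "Error Injection": (
        "Show a solution to a problem on {topic} that contains exactly one "
        "deliberate mistake. Ask me to find and correct it before revealing it."
    ),
    "Test Me": (
        "Give me a five-question quiz on {topic}, one question at a time, "
        "and comment briefly on each of my answers."
    ),
    "Confidence Check": (
        "After each of my answers on {topic}, ask how confident I am on a 1-5 "
        "scale and compare my confidence with the actual correctness."
    ),
}

def build_prompt(pattern: str, topic: str) -> str:
    """Fill a pattern template with a concrete topic."""
    return EDU_PATTERNS[pattern].format(topic=topic)

print(build_prompt("Feynman Prompt", "the trigonometric form of a complex number"))
```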
2.2. Teacher-Oriented Patterns
- Curriculum Generator, in which the AI supports the teacher in planning a course or lesson cycle by taking into account learning objectives, curriculum standards, and available time, and then proposing the sequence of topics and appropriate instructional materials.
- Misconception Map, which enables the identification of common misconceptions and typical errors made by students regarding a given topic. This allows the teacher to anticipate potential difficulties and prepare strategies for addressing them, thereby increasing instructional effectiveness.
- Scaffold Builder, which generates a sequence of tasks or examples with increasing levels of difficulty. Students can develop new skills smoothly, avoiding overly abrupt cognitive jumps that might reduce motivation.
- Differentiation Designer, which adapts the same content to different student ability levels. The AI modifies, for example, text length, vocabulary range, or the number of hints, thus supporting instructional differentiation.
- Rubric-Based Feedback, in which the AI evaluates the student’s work based on predefined criteria, generating descriptive feedback that highlights strengths and areas requiring improvement. The teacher retains control over the process by approving or modifying the AI’s suggestions.
- Adaptive Test Generation, which makes it possible to create tests that automatically adjust difficulty to the student’s abilities. This enables more precise assessment of knowledge and supports personalized educational development.
- AI as Peer Reviewer, in which the system acts as a “peer”. Instead of formal grading, it focuses on offering support, suggesting improvements, and encouraging revision, making the evaluation process less stressful and more constructive.
- Curriculum Generator (example: complex numbers, 90 min session). Prompt example: “Propose a lesson plan that introduces the imaginary unit i, operations on complex numbers, and the trigonometric form. Include two diagnostic questions, three worked examples, five student exercises with increasing difficulty, and a five-minute exit quiz. Ensure common misconceptions are addressed”.
- Misconception Map (example: conditional probability). Prompt example: “List frequent student errors when applying Bayes’ rule and the law of total probability (e.g., confusing P(A|B) with P(B|A), forgetting normalization, mixing priors and likelihoods). For each misconception, propose a short counterexample and a corrective prompt”.
- Scaffold Builder (example: Bayes’ rule progression). Prompt example: “Create a sequence of six tasks: start with reading probabilities from a contingency table, then computing P(A|B), then total probability, then Bayes’ rule in a medical-test setting, ending with a student-designed scenario. Add one hint per task that can be revealed progressively”.
- Adaptive Test Generation (example: complex numbers operations). Prompt example: “Generate a short quiz where difficulty adapts based on student answers: start with addition/multiplication, then division and conjugates, then argument/modulus, then one task involving geometric interpretation. After each response, ask a one-sentence justification to discourage guessing”.
- AI as Peer Reviewer (example: solution/explanation quality). Prompt example: “Act as a peer reviewer of my solution: check for missing assumptions, unclear steps, and notation errors; suggest a cleaner mathematical explanation without providing a completely new solution”.
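The Adaptive Test Generation pattern listed above is essentially a small control loop: the difficulty level moves up after a correct answer and down after an incorrect one. The sketch below shows that loop in Python with a hand-written question bank; the bank, the four levels, and the stopping rule are illustrative assumptions, and in practice the questions and grading would come from the AI through prompts like the one quoted above.

```python
# Simplified illustration of the Adaptive Test Generation pattern:
# difficulty rises after a correct answer and falls after a wrong one.
# The question bank and grading are placeholders for AI-generated content.

QUESTION_BANK = {
    1: [("Compute (1 + 2i) + (3 - i).", "4 + i")],
    2: [("Compute (2 + i) / (1 - i).", "0.5 + 1.5i")],
    3: [("Give the modulus and argument of 1 + i.", "sqrt(2), pi/4")],
    4: [("Describe the set |z - i| <= 1 geometrically.", "disk of radius 1 centred at i")],
}

def adaptive_quiz(answer_fn, n_questions: int = 5, start_level: int = 1) -> int:
    """Ask n_questions, adjusting the level within [1, 4]; return the final level."""
    level = start_level
    for _ in range(n_questions):
        question, expected = QUESTION_BANK[level][0]
        correct = answer_fn(question, expected)   # grading delegated to the caller
        level = min(level + 1, 4) if correct else max(level - 1, 1)
    return level

# Example run with a mock student who answers everything correctly.
final_level = adaptive_quiz(lambda question, expected: True)
print("Final difficulty level:", final_level)
```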
2.3. Recommendations and Best Practices
- Combining multiple prompt types (e.g., quiz + reflection + coaching).
- Adapting prompts to the learner’s experience and cognitive level.
- Maintaining flexibility—AI should encourage creativity rather than rigid responses.
- Teaching students to critically evaluate AI-generated answers.
2.4. Operationalization of the Teaching Experience
3. Description of Lesson Scenarios Within the Module “Fundamentals of Operations on Complex Numbers”
3.1. Part 1—Introduction to Complex Numbers
- Conceptual Warm-up: Analysis of the equation x² + 1 = 0 and introduction of the idea of the imaginary number through historical analogies (“just as negative numbers were once discovered”).
- Computational Exercises: Addition, subtraction, multiplication, and division of complex numbers in algebraic form z = a + bi, implemented using the Show-Then-Do, Prompt Me First, and Step-by-Step Debugger patterns.
- Quiz and Reflection: Short tasks based on the Test Me prompt, paraphrasing of answers (Explain-Back), and analysis of gaps (What Did I Miss).
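As a quick self-check for this part, the four algebraic-form operations can be reproduced with Python's built-in complex type; the specific numbers below are illustrative and not taken from the class materials.

```python
# Checking algebraic-form arithmetic with Python's built-in complex numbers.
# Python writes the imaginary unit as j, so 2 + 3i is entered as 2 + 3j.

z1 = 2 + 3j
z2 = 1 - 1j

print(z1 + z2)          # (3+2j)        addition
print(z1 - z2)          # (1+4j)        subtraction
print(z1 * z2)          # (5+1j)        multiplication
print(z1 / z2)          # (-0.5+2.5j)   division (multiply by the conjugate of z2)
print(z1.conjugate())   # (2-3j)        conjugate used in the division rule
```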
3.2. Part 2—Trigonometric Form of a Complex Number
- Calculate the modulus and argument of a complex number;
- Express complex numbers in the form z = |z|(cos φ + i sin φ);
- Perform multiplication and division of complex numbers in trigonometric form, applying the rules of angle addition and subtraction.
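These conversions can be verified with Python's cmath module. The snippet below computes modulus and argument, rebuilds the trigonometric form, and checks the angle-addition rule for multiplication on one pair of numbers; the example values are our own, chosen only for illustration.

```python
import cmath
import math

z = 1 + 1j

# Modulus and argument (polar coordinates of z).
r, phi = cmath.polar(z)          # r = |z|, phi = arg(z)
print(r, phi)                    # 1.414..., 0.785... (= sqrt(2), pi/4)

# Trigonometric form: z = r (cos(phi) + i sin(phi)).
rebuilt = r * (math.cos(phi) + 1j * math.sin(phi))
print(rebuilt)                   # approximately (1+1j)

# Multiplication in trigonometric form: moduli multiply, arguments add.
w = cmath.rect(2, math.pi / 6)   # the number with |w| = 2, arg(w) = pi/6
r2, phi2 = cmath.polar(z * w)
print(r2, phi2)                  # 2*sqrt(2) and pi/4 + pi/6
```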
3.3. Part 3—Sets of Complex Numbers
- Flipped Diagnosis: AI assesses students’ prior understanding through diagnostic questions such as “What does the expression mean?”.
- Modeling and Practice: Recognition and description of geometric sets such as circles, annuli, half-planes, and sectors; work with incorrect examples (Error Injection) and exploratory questions (What-If, e.g., “How does the set change if we add the condition ?”).
- Reflection and Design: Creation and explanation of an original example of a complex-number set, supported by the AI-as-Coach and Reflection Prompt patterns.
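The geometric sets used in this part reduce to simple conditions on |z − z₀|, Re z, Im z, or arg z, so membership can be tested with a few one-line predicates. The concrete sets below are illustrative examples, not the exact exercises used in class.

```python
import cmath

# Membership tests for typical geometric sets in the complex plane
# (the particular sets chosen here are illustrative examples).

def in_disk(z, z0, r):            # closed disk |z - z0| <= r
    return abs(z - z0) <= r

def in_annulus(z, z0, r1, r2):    # annulus r1 <= |z - z0| <= r2
    return r1 <= abs(z - z0) <= r2

def in_half_plane(z):             # right half-plane Re(z) > 0
    return z.real > 0

def in_sector(z, a, b):           # sector a <= arg(z) <= b
    return a <= cmath.phase(z) <= b

z = 2 + 1j
print(in_disk(z, 0, 3), in_annulus(z, 0, 1, 3), in_half_plane(z),
      in_sector(z, 0, cmath.pi / 2))   # True True True True
```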
3.4. Summary of the Module
3.5. Analysis of the Evaluation Survey Results for the Module on “Fundamentals of Operations on Complex Numbers”
- Regular users: 56 students;
- Occasional users: 40 students;
- Never used AI before: 4 students.
- Regular users provided the highest overall evaluation (mean overall score: 7.5/10).
- Occasional users were more cautious (mean: 6.6/10).
- Students without prior AI experience rated the module positively (mean: 7.0/10), although the group is very small.
3.5.1. Part B: Impact of AI on Engagement and Learning (B1–B8)
- B5: AI encouraged me to think independently and explain my solutions. Mean: 3.93; of students selected 4 or 5.
- B1: Learning with the use of AI was engaging for me. Mean: 3.92; of students selected 4 or 5; only gave a low rating (1 or 2).
- B2: AI helped me better understand the concept of complex numbers. Mean: 3.84; of students selected 4 or 5.
- B3: Working with AI helped me notice my own calculation mistakes faster. Mean: 3.72; selected 4 or 5.
- B8: The classes developed my ability to reflect on my mathematical thinking. Mean: 3.70; selected 4 or 5.
- B6: Interaction with AI was clear and logical. Mean: 3.58; selected 4 or 5.
- B4: Work patterns supported my learning. Mean: 3.53; selected 4 or 5; gave a low rating (1 or 2).
- B7: Working with AI was a better experience compared to traditional exercises without AI. Mean: 3.02; selected 4 or 5; gave a low rating.
3.5.2. Part C: Evaluation of Didactic Modules (C1–C5)
- C1: Introduction to complex numbers: Mean 4.24; of students selected 4 or 5, only gave 1 or 2.
- C4: Quizzes and interactive AI-based tests: Mean 3.99; selected 4 or 5, selected 1 or 2.
- C2: Trigonometric form: Mean 3.95; selected 4 or 5, selected 1 or 2.
- C5: Reflection and discussion of errors: Mean 3.91; selected 4 or 5, selected 1 or 2.
- C3: Sets of complex numbers: Mean 3.72; selected 4 or 5, selected 1 or 2.
3.5.3. Most Frequent Themes in Open-Ended Responses (Part D)
Positive Themes
- Learning from errors and feedback: Students valued AI’s support in identifying mistakes, correcting them, and providing explanatory feedback.
- Quizzes and interactive tests: Many respondents described quizzes and “check-yourself” tasks as the most beneficial form of work.
- Dialogue and interaction with AI: Students appreciated being able to ask follow-up questions, explore different explanations, and engage in conversation.
- Clarity and structure: AI-generated notes and explanations were often described as clear, logical, and easy to follow.
- Examples and analogies: Concrete examples and conceptual analogies were reported as especially supportive in building intuition.
Negative Themes
- Errors, “hallucinations”, and limited trust: Some students encountered incorrect or misleading answers and expressed reduced trust in AI-generated explanations.
- Communication difficulties: Certain participants struggled with prompt formulation or interpreting AI’s responses.
- Preference for traditional classes: A noticeable group emphasized that AI cannot replace a human instructor, citing better interaction, motivation and clarity.
3.5.4. Overall Evaluation and Future Perspectives (Part E)
- Mean overall score: 7.12/10.
- Range: 2–10.
- Most common ratings: 8 (31 students), 7 (23 students), 6 (15 students).
- Yes: 59 students.
- Undecided: 27 students.
- No: 14 students.
4. Description of Lesson Scenario Within the Module “Conditional Probabilities”
4.1. Part A—Flipped Diagnosis
- Conceptual Warm-up: A guided discussion in which students are asked to state the definition and meaning of the conditional probability of an event by revisiting familiar ideas. They restate the definition of P(A|B) in their own words and contrast joint vs. marginal probabilities.
- Computational Exercises: Short, targeted tasks in which students select an appropriate representation (tree, table, or Venn diagram) for a two-step process and compute a basic conditional probability. These are implemented through quick Show-Then-Do interactions and Prompt-Me-First nudges.
- Quiz and Reflection: A brief closing exchange where students explain the domain condition for conditional probability and restate the concept in their own language (Explain-Back). The AI highlights any misconceptions (“What Did I Miss?”), preparing students for the more involved applications of total probability and Bayes’ rule in later sections.
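A quick numerical illustration of the joint vs. marginal vs. conditional distinction, computed from a small contingency table; the counts and event labels below are invented for demonstration and are not taken from the class materials.

```python
# Conditional probability read off a small contingency table (invented counts).
# Rows: event A (e.g., "passed"); columns: event B (e.g., "attended tutorial").

counts = {("A", "B"): 30, ("A", "not B"): 20,
          ("not A", "B"): 10, ("not A", "not B"): 40}
total = sum(counts.values())                                   # 100 observations

p_joint = counts[("A", "B")] / total                           # P(A and B) = 0.30
p_B = (counts[("A", "B")] + counts[("not A", "B")]) / total    # P(B) = 0.40

p_A_given_B = p_joint / p_B                                    # P(A | B) = 0.75
print(p_joint, p_B, p_A_given_B)
```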
4.2. Part B—Modeling and Practice (Show-Then-Do and Error Injection)
- Modeling Tasks: Students analyze short scenarios (population groups, email filtering, diagnostic tests) and select a suitable representation (tree, table, or Venn diagram).
- Guided computation: The AI uses Show-Then-Do prompts to demonstrate a structure and then asks students to replicate it. Small corrections, reminders about normalization, and clarifications about joint vs. conditional probabilities support accurate execution.
- Error Awareness: Students examine typical mistakes (e.g., confusing P(A|B) with P(B|A)) and briefly explain the correct reasoning. This reinforces conceptual clarity.
4.3. Part C—Application and Reflection
- Applied Computation: Students work through scenarios such as quality control, computing both overall event probabilities via total probability and posteriors via Bayes’ rule.
- Interpretation: Students explain their results in plain language, identify where Bayes’ rule is used in the solution, and justify their diagram choice.
- Reflection and Transfer: Learners design a simple scenario of their own (priors, likelihoods, posterior query) and sketch an appropriate representation, reinforcing the ability to transfer the method to new problems.
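The quality-control scenario in Part C reduces to two formulas, the law of total probability and Bayes' rule. The short computation below illustrates both; the production shares and defect rates are our own made-up numbers, not data from the classes.

```python
# Illustrative quality-control computation: total probability and Bayes' rule.
# The priors and defect rates below are invented for demonstration only.

# Three production lines and their shares of total output (priors P(L_i)).
prior = {"line_A": 0.50, "line_B": 0.30, "line_C": 0.20}

# Probability that an item is defective given the line (likelihoods P(D | L_i)).
defect_rate = {"line_A": 0.01, "line_B": 0.02, "line_C": 0.05}

# Law of total probability: P(D) = sum_i P(D | L_i) * P(L_i).
p_defect = sum(prior[line] * defect_rate[line] for line in prior)
print(f"P(defective) = {p_defect:.3f}")          # 0.021

# Bayes' rule: P(L_i | D) = P(D | L_i) * P(L_i) / P(D).
posterior = {line: prior[line] * defect_rate[line] / p_defect for line in prior}
print({line: round(p, 3) for line, p in posterior.items()})
# line_C produces only 20% of the items but accounts for roughly 48% of the defects.
```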
4.4. Summary of the Module
4.5. Analysis of the Evaluation Survey Results for the Module on “Conditional Probabilities”
- Regular users: 155 students;
- Occasional users: 58 students;
- Never used AI before: 0 students.
- Regular users provided the highest overall evaluation (mean overall score: 8.44/10).
- Occasional users were more cautious (mean: 7.69/10).
4.5.1. Part B: Impact of AI on Engagement and Learning (B1–B8)
- Learning with the use of AI was engaging for me. Mean: 4.15, high ratings 4–5: 75.59%, low ratings 1–2: 5.63%.
- Work patterns (e.g., “Show first, then do”, “Explain in your own words”) supported my learning. Mean: 4.11, high ratings 4–5: 78.40%, low ratings 1–2: 4.69%.
- Interaction with AI was clear and logical. Mean: 4.11, high ratings 4–5: 75.59%, low ratings 1–2: 8.45%.
- AI helped me better understand the concept of conditional probabilities. Mean: 4.00, high ratings 4–5: 71.36%, low ratings 1–2: 7.51%.
- Working with AI helped me notice my own calculation mistakes faster. Mean: 3.95, high ratings 4–5: 69.48%, low ratings 1–2: 11.74%.
- The classes developed my ability to reflect on my mathematical thinking. Mean: 3.70, high ratings 4–5: 62.91%, low ratings 1–2: 12.21%.
- AI encouraged me to think independently and explain solutions. Mean: 3.67, high ratings 4–5: 60.56%, low ratings 1–2: 16.43%.
- Working with AI was a better experience compared to traditional exercises without AI. Mean: 3.56, high ratings 4–5: 50.23%, low ratings 1–2: 16.90%.
4.5.2. Part C: Evaluation of Didactic Modules (C1, C4, C5)
- C1—Conditional probabilities: mean 4.15 (4–5: , 1–2: ).
- C4—Quizzes and interactive tests: mean 4.14 (4–5: , 1–2: ).
- C5—Reflection and error analysis: mean 4.14 (4–5: , 1–2: ).
4.5.3. Most Frequent Themes in Open-Ended Responses (Part D)
- Immediate feedback and error correction: Many students highlighted the usefulness of AI in correcting mistakes, checking intermediate steps, and offering clarifying feedback. Error correction was frequently described as the most valuable activity for consolidating understanding.
- Dialogue and iterative questioning: Students appreciated being able to ask follow-up questions, request clarifications, or explore alternative explanations. The conversational aspect helped them unpack difficult concepts and verify their reasoning.
- Quizzes and structured tasks: Interactive tasks such as quizzes and “check-yourself” items were viewed as effective for practice and self-evaluation. Several respondents emphasized that these activities helped them monitor progress and reinforce the material.
- Fast and accessible explanations: Participants noted the advantage of receiving immediate responses and step-by-step explanations. The speed and availability of AI were repeatedly mentioned as beneficial for understanding probabilities and related concepts.
- Message limits and interaction constraints: A commonly mentioned frustration was reaching the platform’s message limit (ChatGPT), which interrupted the flow of problem-solving or explanation.
- Communication and formatting difficulties: Several students struggled with articulating prompts clearly or using the correct notation (especially LaTeX or formula formatting). These issues sometimes made the interaction slower or less effective.
- Preference for traditional guidance: A portion of the students expressed that, although helpful, AI cannot replace a human instructor. They emphasized that traditional teachers provide motivation, clearer explanations, and more personalized feedback.
- Occasional inconsistencies: Some students reported that AI explanations were sometimes incomplete, too general, or not well adapted to the specific problem. These inconsistencies required additional verification or repeated prompts, reducing their confidence in certain answers.
4.5.4. Overall Evaluation and Future Perspectives (Part E)
- Mean overall score: 8.23.
- Range: 1–10.
- Most common ratings: 8 (58 students), 9 (52 students), 10 (47 students).
- Yes: 175 students.
- No: 14 students.
- Undecided: 24 students.
5. Comparative Discussion Between Modules
5.1. Categorization of Responses
- “It was much easier to understand the topic thanks to unlimited time and the ability to ask even the simplest questions”.
- “Very good—I could get extensive feedback and felt that the tutor focused only on me”.
- “AI as a tutor is like a personal teacher who answers every question”.
- “In my opinion, AI as a tutor is a very good solution. It’s like a private tutor who is infinitely patient”.
- “10/10”.
- “AI can serve as a supplement to traditional instructors”.
- “Quite good”.
- “I wouldn’t compare AI directly to a teacher—it’s a unique and interesting way to explore topics”.
- “Positive—it won’t replace the teacher, but it’s a good addition”.
- “It’s fine and patient, but human instructors bring more energy and passion”.
- “A traditional instructor is better”.
- “I prefer traditional classes. Human contact is important to me”.
- “AI doesn’t work well long-term”.
- “Nothing can replace a human”.
- “Traditional teachers are definitely better”.
5.2. Group Differences in Perceptions of AI-Based Learning
- B1 (Learning with AI was engaging for me). Romanian students rated the engaging nature of learning with AI significantly higher than Polish students (Mann–Whitney test, p = 0.017). Rank sums (RO: 35,123.5; PL: 14,017.5) further confirm the higher evaluations of the Romanian group.
- B3 (Thanks to working with AI, I noticed my own errors in calculations more quickly). In this case as well, Romanian students scored higher (p = 0.035), indicating that they perceived AI more often as a tool that facilitated the detection of their own errors.
- B4 (Working patterns (e.g., “Show first, then do”, “Explain in your own words”) made learning easier for me). This item showed one of the strongest differences: Romanian students clearly rated the impact of AI work structures on learning effectiveness higher (p = 0.001).
- B6 (Interaction with the AI was understandable and logical). Here, also, Romanian students gave significantly higher scores (p = 0.001), suggesting that their perception of the clarity and coherence of AI interaction was more positive.
- B7 (Working with AI was a better experience than traditional exercises without AI). Romanian students more frequently considered AI-based experiences superior to traditional exercises (p = 0.001).
5.3. Influence of Experience on the Obtained Results–Statistical Analysis
- Yes, regularly.
- Occasionally.
- Never.
- EX1—students with high experience (response: Yes, regularly);
- EX2—students with low or moderate experience (responses: Occasionally or Never).
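Differences between EX1 and EX2 can be tested with the same Mann–Whitney U test used for the cohort comparison in Section 5.2. The sketch below shows how such a comparison is typically computed with SciPy; the Likert-style ratings are synthetic placeholders, not the actual survey data.

```python
# Sketch of the group comparison used in this section: a Mann-Whitney U test
# on Likert-style item ratings. The ratings below are synthetic, not survey data.

from scipy.stats import mannwhitneyu

# Synthetic 1-5 ratings for one questionnaire item in the two experience groups.
ex1 = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]   # "Yes, regularly" (EX1)
ex2 = [3, 4, 2, 3, 4, 3, 2, 4, 3, 3]   # "Occasionally" or "Never" (EX2)

stat, p_value = mannwhitneyu(ex1, ex2, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```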
5.4. Contextual Factors and Implementation Differences
6. Conclusions
- AI is generally viewed as engaging and supportive, particularly in fostering independent thinking and clarifying complex concepts.
- Students highly value quizzes, structured explanations, and opportunities to correct mistakes.
- AI is not considered a replacement for traditional teaching, but, rather, a useful complementary tool.
- Prior experience with AI significantly shapes evaluations.
- Students are open to further integration of AI in mathematics education, provided its limitations are acknowledged and it is used in ways that stimulate reflection and active learning.
7. Future Research Directions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A. Evaluation Survey After Completing the Module
| No. | Statement | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|
| B1 | Learning with the use of AI was engaging for me. | □ | □ | □ | □ | □ |
| B2 | AI helped me better understand the concept of a complex number/conditional probabilities. | □ | □ | □ | □ | □ |
| B3 | Working with AI helped me notice my own calculation mistakes faster. | □ | □ | □ | □ | □ |
| B4 | Work patterns (e.g., “Show first, then do”, “Explain in your own words”) supported my learning. | □ | □ | □ | □ | □ |
| B5 | AI encouraged me to think independently and explain solutions. | □ | □ | □ | □ | □ |
| B6 | Interaction with AI was clear and logical. | □ | □ | □ | □ | □ |
| B7 | Working with AI was a better experience compared to traditional exercises without AI. | □ | □ | □ | □ | □ |
| B8 | The classes developed my ability to reflect on my mathematical thinking. | □ | □ | □ | □ | □ |
| No. | Module Component | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|
| C1 | Part 1–Introduction to complex numbers/Conditional probabilities | □ | □ | □ | □ | □ |
| C2 | Part 2–Trigonometric form/- | □ | □ | □ | □ | □ |
| C3 | Part 3–Sets of complex numbers/- | □ | □ | □ | □ | □ |
| C4 | Quizzes and interactive AI-based tests | □ | □ | □ | □ | □ |
| C5 | Reflection and discussion of errors | □ | □ | □ | □ | □ |
- What helped you the most in understanding the topic of complex numbers while working with AI?
- What was the most difficult aspect of working with AI for you?
- How do you evaluate the role of AI as a “tutor” compared with a traditional instructor?
- Which patterns or forms of work (e.g., quizzes, correcting mistakes, dialogue) were the most valuable for you?
- Would you like similar AI-enhanced classes to appear in other mathematics courses?□ Yes □ No □ I have no opinion
| Category | Pattern | Action Description | Educational Goal/Effect |
|---|---|---|---|
| Knowledge Building | Socratic Reversal | The student assumes the role of the teacher and explains the material to the AI. | Deepening understanding through verbalization and justification. |
| | Feynman Prompt | The student explains content in the simplest possible manner, as for a beginner. | Verifying the level of understanding and identifying knowledge gaps. |
| | Gap Finder | The AI analyzes the student’s responses and identifies areas requiring improvement. | Diagnosing difficulties and individualizing support. |
| | Concept Contrast | Comparing concepts with similar meanings or functions. | Developing analytical skills and structuring knowledge. |
| Active Practice | Show-Then-Do | The AI presents a correct example, and the student completes an analogous task. | Transferring knowledge into independent practice. |
| | Error Injection | The AI intentionally introduces an error, and the student must identify and correct it. | Developing self-correction skills and critical thinking. |
| | Test Me | The AI generates quizzes to assess student progress. | Reinforcing knowledge and ongoing progress monitoring. |
| | Rewrite Challenge | The student transforms a correct solution into an equivalent form. | Enhancing cognitive flexibility and deepening understanding. |
| Creative Thinking | Many Ways | Searching for various methods of solving a single problem. | Stimulating creativity and divergent thinking. |
| | Analogy Builder | Creating analogies to explain difficult concepts. | Knowledge transfer and deepened understanding of abstract content. |
| | Debate Simulation | The AI presents two opposing viewpoints in debate form. | Developing argumentation and evaluating multiple perspectives. |
| Reflection and Metacognition | Confidence Check | The student evaluates their confidence in an answer, and the AI provides commentary. | Developing cognitive awareness and self-regulation. |
| | AI-as-Coach | The AI acts as a mentor supporting learning planning. | Strengthening autonomy and learning strategies. |
| | Reverse Role Play | The student plays the role of the teacher, and the AI plays the role of the student. | Deepening reflection and improving explanation skills. |
| | Learning Diary | The student keeps a reflection journal with AI support. | Systematic self-reflection and developing metacognitive competencies. |
References
1. Stańdo, J.; Fechner, Ż.; Dąbrowicz-Tlałka, A.; Kujawska, K.; Musielak, M.M. Exploring AI Chatbots for Learning Mathematics: Students’ Perspectives on Accuracy and Educational Value. In Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium, Blue Sky, and WideAIED, Proceedings of the 26th International Conference, AIED 2025, Palermo, Italy, 22–26 July 2025; Springer: Cham, Switzerland, 2025; pp. 282–288.
2. Mohamed, M.Z.B.; Hidayat, R.; Suhaizi, N.N.B.; Sabri, N.B.M.; Mahmud, M.K.H.B.; Baharuddin, S.N.B. Artificial Intelligence in Mathematics Education: A Systematic Literature Review. Int. Electron. J. Math. Educ. 2022, 17, em0694.
3. Yi, L.; Liu, D.; Jiang, T.; Xian, Y. The Effectiveness of AI on K-12 Students’ Mathematics Learning: A Systematic Review and Meta-Analysis. Int. J. Sci. Math. Educ. 2024, 23, 1105–1126.
4. Łupińska-Dubicka, A.; Mozyrska, D. ChatGPT w nauczaniu programowania: Doświadczenia studentów i wyzwania dydaktyczne [ChatGPT in teaching programming: Students’ experiences and didactic challenges]. In Wybrane Zagadnienia Informatyki Technicznej; Oficyna Wydawnicza Politechniki Białostockiej: Białystok, Poland, 2025; ISBN 978-83-68673-08-1.
5. Kasneci, E.; Seßler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Hüllermeier, E.; et al. ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education. Learn. Individ. Differ. 2023, 103, 102274.
6. Lee, D.; Palmer, E. Prompt engineering in higher education: A systematic review to help inform curricula. Int. J. Educ. Technol. High. Educ. 2025, 22, 7.
7. White, J.; Fu, Q.; Hays, S.; Sandborn, M.; Olea, C.; Gilbert, H.; Elnashar, A.; Spencer-Smith, J.; Schmidt, D.C. A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. arXiv 2023, arXiv:2302.11382.
8. Sahoo, P.; Singh, A.K.; Saha, S.; Jain, V.; Mondal, S.S.; Chadha, A. A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications. arXiv 2024, arXiv:2402.07927.
9. Naskręcki, B.; Ono, K. Mathematical discovery in the age of artificial intelligence. Nat. Phys. 2025, 21, 1504–1506.
10. Kojima, T.; Gu, S.S.; Reid, M.; Matsuo, Y.; Iwasawa, Y. Large Language Models are Zero-Shot Reasoners. arXiv 2022, arXiv:2205.11916.
11. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models are Few-Shot Learners. arXiv 2020, arXiv:2005.14165.
12. Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Ichter, B.; Xia, F.; Chi, E.; Le, Q.; Zhou, D. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. arXiv 2022, arXiv:2201.11903.
13. Ban, Y.; Wang, R.; Zhou, T.; Cheng, M.; Gong, B.; Hsieh, C. Understanding the Impact of Negative Prompts: When and How Do They Take Effect? In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024.
14. Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.L.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. Training Language Models to Follow Instructions with Human Feedback. arXiv 2022, arXiv:2203.02155.
15. Salesforce. What Is Prompt Grounding?—A Generative AI Tutorial. 2024. Available online: https://www.salesforce.com/blog/what-is-grounding/ (accessed on 17 December 2025).
16. Kabir, M.R.; Lin, F.O. An LLM-Powered Adaptive Practicing System. In Proceedings of the LLM@AIED, Tokyo, Japan, 3–7 July 2023.
| Category | PL Group | RO Group |
|---|---|---|
| Positive | ∼35–40% | ∼35–40% |
| Neutral | ∼25–30% | ∼30–35% |
| Negative | ∼30–35% | ∼35–40% |
| Unclear | ∼10% | ∼5% |
| | B1 | B2 | B3 | B4 | B5 | B6 | B7 | B8 |
|---|---|---|---|---|---|---|---|---|
| RO Group | 4.150 | 3.995 | 3.953 | 4.113 | 3.671 | 4.113 | 3.563 | 3.704 |
| PL Group | 3.920 | 3.840 | 3.720 | 3.530 | 3.930 | 3.580 | 3.020 | 3.700 |
| Total | 4.077 | 3.946 | 3.879 | 3.927 | 3.754 | 3.942 | 3.390 | 3.703 |
| M–W test (p) | 0.017 | 0.265 | 0.035 | 0.001 | 0.067 | 0.001 | 0.001 | 0.958 |
| | C4 | C5 | Final Reflection |
|---|---|---|---|
| RO Group | 4.136 | 4.141 | 8.235 |
| PL Group | 3.990 | 3.910 | 7.120 |
| Total | 4.089 | 4.067 | 7.879 |
| M–W test (p) | 0.147 | 0.054 | 0.001 |
| Item | EX1 | EX2 | p-Value |
|---|---|---|---|
| B1 | 4.313 | 3.588 | 0.001 |
| B2 | 4.161 | 3.500 | 0.001 |
| B3 | 4.047 | 3.529 | 0.001 |
| B4 | 4.066 | 3.637 | 0.001 |
| B5 | 3.754 | 3.755 | 0.922 |
| B6 | 4.123 | 3.569 | 0.001 |
| B7 | 3.592 | 2.971 | 0.001 |
| B8 | 3.773 | 3.559 | 0.122 |
| C4 | 4.194 | 3.873 | 0.006 |
| C5 | 4.180 | 3.833 | 0.001 |
| Overall rate | 8.190 | 7.235 | 0.001 |