Article

A Qualitative-Content-Analytical Approach to the Quality of Primary Students’ Questions: Testing a Competence Level Model and Exploring Selected Influencing Factors

by Yannick Schilling 1,*, Leonie Hillebrand 1 and Miriam Kuckuck 2
1 Department of Geography and General Studies, University of Wuppertal, 42119 Wuppertal, Germany
2 Didactics of General Studies, University of Wuppertal, 42119 Wuppertal, Germany
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(9), 1003; https://doi.org/10.3390/educsci14091003
Submission received: 30 May 2024 / Revised: 9 September 2024 / Accepted: 10 September 2024 / Published: 12 September 2024

Abstract

There is a consensus on the importance of students’ questions in educational contexts due to their diverse potential to promote learning. Engaging with students’ questions in primary school is highly relevant, as it fosters critical thinking, encourages curiosity, and cultivates a deeper understanding of the subject matter. At the same time, research findings agree that students’ questions about the subject matter are rare. Research on the quality of students’ questions in the classroom mostly focuses on secondary or higher education; for primary schools, there is a research gap, although questions can be used in primary school lessons to improve learning processes. Against this background, the present study takes up a competence level model for assessing the quality of students’ questions in General Studies and evaluates its use in a qualitative–explorative setting on questions from a non-probabilistic opportunity sample (n = 477). The results of the analysis are further used to look for indications of the influence of grade level and subject matter on the competence levels, and they also allow conclusions to be drawn for primary school teacher education. The competence level model, in modified form, turns out to be a reliable instrument for assessing the competence levels of questions. In addition, a weak positive correlation was found between the competence level of the questions and the students’ grade level. The conclusion is that there is a need for tailored support across different grade levels. The detected lack of a consistent connection with the subject matter highlights the importance of diverse instructional approaches.

1. Introduction

1.1. Students’ Questions: Importance in Educational Contexts and Research Findings

When educational researchers refer to questions in the context of teaching and education, the focus is usually on teachers’ questions. Research on students’ questions is much rarer [1,2,3,4]. A similar status quo can be reconstructed in the classroom: teachers’ questions dominate the lessons, and questions asked by students are rare events [1,5]. These facts contradict the international, educationally legitimized claim that students’ questions should be stimulated by the teacher and included in the lesson [6,7]. Asking questions is closely linked to critical thinking and therefore also to the OECD Learning Framework 2030 [8,9]. However, this claim does not refer to all utterances that are syntactically marked as questions, but rather to questions that are aimed at the subject matter. Such questions are also referred to as epistemic questions, as they aim to generate knowledge [3]. Within learning processes, students’ questions are associated with various learning-promoting potentials: “Despite the lack of student questioning, the literature indicates good theoretical, empirical, and policy reasons for the importance of students’ generating questions to support their learning” [10] (p. 60). Summaries of these potentials can be found in Chin and Osborne [11], Miller and Brinkmann [12], as well as Schilling and Kuckuck [13].
Depending on the educational institution and the subject, different importance is attached to students’ questions. This article focuses on primary schools in Germany and the subject of General Studies. General Studies is an interdisciplinary school subject that integrates the subject-specific approaches of multiple related sciences [14,15]. It combines content from social sciences, natural sciences, and technology [16,17,18]. In an international context, different subject concepts and designations can be found [19,20]. In addition to its multi-perspectivity, General Studies is committed to supporting students in exploring their living environment independently [17,21]. This central idea is also the basis for the constructivist, problem- and action-oriented understanding of teaching and learning [15]. Furthermore, General Studies is based on an inclusive educational approach [22]. On the one hand, students’ questions are therefore seen as a means of accessing students’ learning requirements [13,17]. On the other hand, lessons that take up students’ questions enable participation and thus fulfill a central criterion of an inclusive school subject [23].
With reference to all those potentials, the question of the status quo of the quantity and quality of students’ questions in the classroom arises. First of all, it should be noted that the majority of research findings are outdated. Overall, the findings are consistent in that the number of questions asked by students is low, as is their quality [1,2,3,11,24]. For learning processes, however, the quantity is much less important than the quality of the questions [25,26]. Therefore, the following section (Section 1.2) focuses on the assessment of the quality of students’ questions. The reference to "students" in this paper includes primary school students of all genders.
Another important question arises with regard to possible influencing factors on the creation and quality of questions that have been identified in previous research. It is assumed that there are multiple influencing factors [26,27]: Age, prior knowledge, interest in the subject matter, and metacognitive and communicative-linguistic skills are relevant factors with regard to the students [4,27,28,29,30,31,32]. The teacher’s lesson design and the classroom atmosphere are just as important as the subject matter itself [27,33].

1.2. Classification Systems for Assessing the Quality of (Students’) Questions

As this study is concerned with the primary education sector, it should first be noted that most studies relate to secondary or higher education. Most research on classifying or assessing the quality of students’ questions is based on Bloom’s (revised) taxonomy of educational objectives [1,29,34,35,36]. On this basis, Graesser, Person, and Huber [34,37] developed a taxonomy, rooted in cognitive psychology, with 18 question categories. These categories differentiate questions with regard to the expected length of the answer. The authors distinguish between short-answer and long-answer questions. Some of the long-answer questions also have a deep-reasoning character. Deep-reasoning questions ask about reasoning patterns in logical, causal, or goal-oriented systems and are therefore considered to be of higher quality [37]. Scardamalia and Bereiter [38] distinguish between basic information questions and wonderment questions. While the former type includes questions about basic orientation information, questions of the “wonderment” type reflect curiosity, skepticism, or knowledge-based speculation. Niegemann and Stadler [36] assume five quality levels of questions. The spectrum ranges from questions that do not intend any learning (quality level 0) to questions that intend long answers and have a deep-reasoning character (quality level 4). The taxonomies and models used in each case cannot be presented in detail here due to their complexity; an overview can be found in Chin and Osborne [11]. Differences in classification systems, countries with different education systems, and questioners result in a lack of comparability of the results. For this reason, the results of these studies are not presented in detail here. With the same argumentation, Brinkmann [26] states that existing instruments are only partially applicable in the primary education sector. She was the first to take up the resulting research desideratum for the subject of General Studies at the primary level: she developed a competence level model for the qualitative differentiation of students’ questions in General Studies. Brinkmann’s [26] model is presented in Section 1.3, as it forms the basis for this study.

1.3. Brinkmann’s Competence Level Model for Analyzing the Level of Abstraction of Students’ Questions

With the intention of identifying distinguishing criteria to differentiate the level of abstraction of students’ questions, Brinkmann [26] carried out a preliminary study. Using qualitative content analysis, she examined 711 students’ questions that were recorded in writing by the researcher over a period of five years (2003–2008) in five different classes (grade levels 2–4) at a primary school in Germany. All of the questions were asked at the beginning of various lessons in General Studies. Brinkmann’s [26] aim was to identify the students’ ideas and thought patterns by analyzing the questions. This resulted in four distinguishing criteria. These criteria refer to the awareness [39] of the subject matter expressed in the questions. The first distinguishing criterion is the level of prior knowledge of the subject matter. In dichotomous form, prior knowledge can either be present or absent. The use of technical terms, for example, is an indicator of existing prior knowledge. Secondly, a distinction can be made between a narrow and a broad focus of attention. Questions with a narrow focus of attention are aimed at specific details, such as names, quantifications, or superlatives. They usually require a short answer. With a broad focus of attention, the intention is to open up a large number of partial aspects, and the answer has to be more extensive. If a student asks a question about the size of the earth, for example, this question is based on a narrow focus of attention. If a student asks why the moon always looks different, the focus of attention is assumed to be broad. The extent to which the question is aimed at a conceptual understanding is analyzed with the third distinguishing criterion. This criterion involves assessing the extent to which a question expresses the intention to explore causes, discover connections, or understand modes of operation. Again, in a dichotomous manner, this criterion can either be fulfilled or not fulfilled. The question about the ever-changing appearance of the moon is intended to explore causes and therefore aims at a conceptual understanding. The fourth and final distinguishing criterion relates to a philosophical horizon. This includes questions to which there is no clear answer. It is also possible to ask about dimensions or topics whose answers cannot be deduced from largely established bodies of knowledge. In this case, it is necessary to struggle for an interpretative reality of one’s own. There is no philosophical horizon in either of the example questions used so far. However, such a horizon is present in the following example: “Why do planets exist if you can’t live on them?”.
The interplay of these four criteria and their respective characteristics results in five competence levels. The more aspects of awareness that are visible in a question, the higher it is placed in the competence level model. As this is a competence level model, higher competence levels correspond to higher levels of abstraction of the questions. The competence levels are in turn assigned to specific question types and example questions. At competence level 1, neither prior knowledge nor the intention of a conceptual understanding nor a philosophical horizon is visible, and the focus of attention is narrow. The highest level of abstraction is reached at competence level 5: questions that show prior knowledge, have a broad focus of attention, intend a conceptual understanding, and indicate a philosophical horizon are classified here.
Figure 1 schematically summarizes the structure of Brinkmann’s [26] competence level model.
Brinkmann [26] used this model to analyze 137 students’ questions on the subject matter of outer space. The primary school students were between 8 and 9 years old. The results of the analysis are presented below in the form of relative frequencies of the competence levels: 33.6% of the questions were assigned to competence level 1. This is followed by the questions at competence level 2 with a share of 24.1% and the questions at competence level 3 with 16.8%. A slightly higher value of 17.5% was recorded for competence level 4. The fifth and highest competence level was the least frequent at 8%. When interpreting her findings, the author refers to the many factors that may influence the quality of students’ questions (see also Section 1.1). Both the competence level model and the influencing factors are the subject of the present study.

1.4. Aim of the Study

This study aims to answer three research questions. The first question relates to insights that can be gained during the process of analyzing the questions. Brinkmann [26] states that her model is not a finished product; she assumes, for example, that other learning objects require the addition of further question types. For this reason, the first research question focuses on the applicability of the model to a different sample and on determining a possible need for modification.
RQ 1: To what extent is Brinkmann’s competence level model [26] suitable for analyzing students’ questions from a different sample? What modifications are necessary?
Further data exploration is carried out on the basis of analysis results in the form of the questions assigned to the competence levels. The students’ grade level and the subject matter were selected as independent variables to provide an indication of their influence on the competence levels of the questions.
RQ 2: Are there any indications of connections between the identified competence levels of the questions and the students’ grade level?
RQ 3: Are there any indications of connections between the identified competence levels of the questions and the subject matter?

2. Materials and Methods

2.1. Research Design and Sample

Due to the limited amount of available research, this study is based on a qualitative–explorative research design [40,41]. A total of 21 prospective primary school teachers studying for a Master’s degree in General Studies were prepared to stimulate students’ questions as part of the introductory lesson of a new series of lessons in General Studies. These trained prospective primary school teachers then each conducted one introductory lesson at 21 different primary schools between October 2022 and January 2023. The selection of school locations in North Rhine-Westphalia, Germany, could not be influenced. North Rhine-Westphalia was chosen because it is the federal state with the most primary schools in Germany [42]. As the selection of school locations could not be influenced, this is a non-probabilistic opportunity sample [43]. Beyond the objective of encouraging students to ask questions, the only requirement was that the questions should be recorded in writing. To ensure that all questions were recorded in full, a teacher from the school was also present and recorded the questions in writing as well. The independent variables, grade level (as a proxy for the approximate age of the students) and the subject matter of the introductory lesson, were also recorded. In North Rhine-Westphalia, primary school comprises four school years (grade levels). Students start school at the age of 6 and thus enter the first grade level [44]. This means that students in the first grade are between 6 and 7 years old; accordingly, students in the fourth grade can be assumed to be between 9 and 10 years old. The 21 introductory lessons thus resulted in 21 data sets of students’ questions. These differ in terms of the composition of the learning group, the age of the students, the subject matter, the planned course of the lesson, and many further aspects. The number of questions asked per data set ranges from 0 to 67. All background variables per data set are listed in Appendix A (see Table A1). The names of the data sets are made up of the subject matter and the grade level; if the subject matter and grade level are identical, letters are added to differentiate between them. A total of 477 students’ questions can be used for analysis. Grade level 1 is represented by three data sets and grade level 2 by four data sets. Grade levels 3 and 4 are both included in the sample with seven data sets each. The subject matters are fundamentally different; however, the subject matters “space” (3×) and “electricity” (2×) are represented more than once.

2.2. Instruments and Data Analysis

To answer the research questions, the competence level model by Brinkmann [26] described in Section 1.3 was used to analyze the students’ questions. Two researchers trained in using the model carried out the analysis. They pursued the primary goal of clearly assigning each question to a competence level; the assignment to a question type was of secondary relevance. A first analysis drew attention to some questions that could not be clearly assigned. These included questions (a) that were incomprehensible in terms of language or content or that had no recognizable connection to the subject matter. In addition, there were questions (b) that could be assigned to a competence level based on the four distinguishing criteria but did not correspond to any existing question type. As a result, modifications were made to the competence level model before starting a second pass of analysis. Regarding questions of type (a), a competence level 0 was introduced, similar to Niegemann and Stadler’s [36] model. Niegemann and Stadler [36] place questions at quality level 0 that cannot be analyzed for various reasons; questions of this type were also excluded from further analysis by Moore et al. [35]. To capture questions of type (b), further question types were added to competence levels 1 and 2. The adapted competence level model is attached (see Table A2, Table A3, Table A4, Table A5 and Table A6). All changes compared to the competence level model by Brinkmann [26] are marked with an asterisk.
As already described in Section 1.3, the assignment to one of the five competence levels results from the characteristics of the four distinguishing criteria. However, to filter out questions of type (a) in advance, each question must first be checked to determine whether it is understandable in terms of language and content and has a recognizable reference to the subject matter. If this is not the case, the question is assigned to competence level 0 and not analyzed further. If, on the other hand, the question is understandable in terms of language and content and makes a recognizable reference to the subject matter, the distinguishing criteria can be considered. In a content-analytical, interpretative process [45,46,47], the following key questions were used to decide on the characteristics of the distinguishing criteria (a schematic sketch of the resulting assignment logic follows the list):
  • Prior knowledge [characteristics: not visible | visible]: Does the question reveal any prior knowledge that goes beyond everyday knowledge?
  • Focus of attention [characteristics: narrow | broad]: Is the focus of attention narrow or broad in terms of the expected response? Does the question relate to a specific detail (narrow focus of attention) or is it necessary to explore many partial aspects to answer it (broad focus of attention)?
  • Intention of conceptual understanding [characteristics: not visible | visible]: Does the question express the intention to fathom causes, discover connections, or understand modes of operation?
  • Philosophical horizon [characteristics: not visible | visible]: Is there a clear answer for this question? Does it touch on topics whose answers cannot be obtained from largely established bodies of knowledge? Do we have to struggle for our interpretative reality?
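To make the decision logic more tangible, the following minimal Python sketch condenses the coding procedure into a simple rule: level 0 for non-analyzable questions, otherwise one level per visible “higher” characteristic, as described in Section 1.3. This is only an additive approximation for illustration; in the actual model, levels are assigned via the question types listed in Tables A2–A6, some of which deviate from a strict counting rule, and the analysis itself is interpretative rather than mechanical. All names (class, fields, function) are hypothetical and not taken from the study.

```python
from dataclasses import dataclass

@dataclass
class QuestionCoding:
    """Illustrative record of one coded question (all field names are hypothetical)."""
    text: str
    understandable: bool            # comprehensible in language/content and related to the subject matter
    prior_knowledge: bool           # prior knowledge beyond everyday knowledge visible
    broad_focus: bool               # focus of attention: True = broad, False = narrow
    conceptual_understanding: bool  # intention of conceptual understanding visible
    philosophical_horizon: bool     # philosophical horizon visible

def approximate_competence_level(q: QuestionCoding) -> int:
    """Simplified, additive reading of the modified model: level 0 for
    non-analyzable questions, otherwise 1 plus the number of visible
    'higher' characteristics. The published model assigns levels via
    question types, so some types deviate from this counting rule."""
    if not q.understandable:
        return 0
    visible = sum([q.prior_knowledge, q.broad_focus,
                   q.conceptual_understanding, q.philosophical_horizon])
    return 1 + visible  # 1 (none visible) ... 5 (all four visible)

# A question coded like example 2 below (prior knowledge, broad focus,
# conceptual understanding, no philosophical horizon) would land at level 4.
example = QuestionCoding("Example 2 (Faraday cage)", True, True, True, True, False)
print(approximate_competence_level(example))  # -> 4
```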
Figure 2 shows the modified competence level model in schematic form.
After completing the second pass of analysis, there was no further need to modify the model. The two researchers compared the competence level assigned to each question. In the event of a deviation, the researchers agreed on a competence level in the sense of consensus coding [46]. The deviations arose from differing assessments of the characteristics of the distinguishing criteria. In cases of shared doubt, the lower competence level was selected so as not to over-interpret the questions.
To illustrate the analysis process and results, two questions from the sample are analyzed below as contrastive examples. While in example 1 (see Table 1) no prior knowledge beyond everyday knowledge is recognizable, the use of the technical term “Faraday cage” in example 2 (see Table 2) certainly indicates (conceptual) prior knowledge. The focus of attention can also be assessed differently in each case. The question about the number of spines on hedgehogs is aimed at a specific detail, resulting in a narrow focus of attention. In example 2, a more complex answer is required, resulting in a broad focus of attention. The situation is similar with the intention of conceptual understanding: the question about the Faraday cage suggests that the student has encountered something within their comprehension process that cannot be easily integrated. The student does not just want to ask for a number, but to understand a new aspect. There is no philosophical horizon in either example.
To check the reliability of the analysis instrument, 25% of the material (120 questions) was given to a third trained person to calculate the intercoder reliability of the assignment to the competence levels [46,48]. The weighted Cohen’s kappa resulted in an intercoder agreement of 0.821. This very good agreement indicates a reliable analysis instrument [48,49]. Upon completion of the qualitative content analysis, each individual question was assigned to a competence level in the form of a numerical value (0–5). This procedure is a quantification of qualitative data [50,51,52]. As the competence level model is based on an ordinal scale, statistical calculations are possible to a limited extent [53]. This fact is used to statistically investigate research question 2. To identify indications of correlations between the competence level of the questions and the students’ grade level, the Spearman rank correlation is calculated using SPSS statistics software, version 29 (IBM) [53,54]. Diagrams created using Excel software, version 2021 (Microsoft), are used to visualize other findings.
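The study reports these calculations from SPSS and Excel; purely as an illustration of the same two steps, the following Python sketch computes a weighted Cohen’s kappa for the double-coded subset and the Spearman rank correlation between competence level and grade level. The file name, column names, and the linear weighting scheme are assumptions for the sketch, not details taken from the study.

```python
import pandas as pd
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical input: one row per question with the consensus competence
# level (0-5), the third coder's level for the 25% reliability subset,
# and the grade level (1-4). Column names and file layout are assumed.
df = pd.read_csv("coded_questions.csv")  # columns: level, level_coder3, grade

# Intercoder reliability on the double-coded subset: weighted Cohen's kappa.
# Linear weights respect the ordinal distance between competence levels;
# the study does not specify which weighting scheme was used.
subset = df.dropna(subset=["level_coder3"])
kappa = cohen_kappa_score(subset["level"], subset["level_coder3"], weights="linear")
print(f"weighted Cohen's kappa: {kappa:.3f}")

# Spearman rank correlation between competence level and grade level (RQ 2).
rho, p = spearmanr(df["level"], df["grade"])
print(f"Spearman's rho: {rho:.3f}, p = {p:.3f}, n = {len(df)}")
```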

3. Results

3.1. To What Extent Is Brinkmann’s Competence Level Model [26] Suitable for Analyzing Questions from a Different Sample? What Modifications Are Necessary? (RQ 1)

Some of the results relating to the first research question have already been reported in Section 2.2, as they were obtained from the analysis process. In the course of using Brinkmann’s [26] competence level model to analyze the questions from the present sample, two modifications were necessary: new question types had to be added to competence levels 1 and 2, and it became apparent that a competence level 0 is needed for this sample. The modified competence level model in Appendix A (see Table A2, Table A3, Table A4, Table A5 and Table A6) therefore also represents a result.
The relative frequency of the competence levels (see Figure 3) can be seen as a second result of the analysis. Particularly striking is the fact that none of the 477 questions were assigned to competence level 5. Competence levels 1 (202 questions) and 3 (160 questions) were assigned most frequently. The proportion of competence level 1 corresponds to slightly more than 40% of all questions. Competence levels 0 (43 questions), 2 (40 questions), and 4 (32 questions) are each represented by less than 10%.
The results of the analysis enable further data exploration. While the relative frequency of the competence levels is considered in Figure 3, Figure 4 visualizes the absolute frequency of the competence levels per data set (n = 21). The data sets, which already differ in terms of the quantity of questions, can thus be compared in terms of the distribution of questions across the competence levels. In addition to the clearly visible differences in the number of questions, the presentation invites a closer look at certain data sets. For example, the low number of questions in data set “Human senses_3” and the predominant proportion of questions at competence level 0 there are striking. In data set “Christmas_3” there are only questions at competence level 1, and in data set “Space_4-a” no questions were recorded at all.

3.2. Are There Any Indications of Connections between the Identified Competence Levels of the Questions and the Students’ Grade Level? (RQ 2)

The grade level is used as a guiding indicator for the students’ age. Sorting the 21 data sets by grade level results in the following sub-samples:
  • Grade level 1 = 72 questions
  • Grade level 2 = 53 questions
  • Grade level 3 = 151 questions
  • Grade level 4 = 158 questions
Figure 5 shows the relative frequency of the competence levels separately for each of the four sub-samples:
There are different distributions of competence levels per grade level. For example, the proportion of questions at competence level 0 is highest in grade level 2, while it is considerably lower in the other grade levels. Questions at competence level 1 clearly dominate in grade level 1 with 70%. In grade level 2, their share is less than half of that, at around 27%. The proportion rises again in grade level 3 to approx. 35% and in grade level 4 to approx. 44%. Compared to competence level 3, questions at competence level 2 are rather rare overall. Their share only exceeds 10% in grade levels 1 and 4; in grade levels 2 and 3, it is almost identical at around 5%. Questions at competence level 3 are most strongly represented in grade levels 2 and 3 at around 43%. In grade level 4 (approx. 30%) and grade level 1 (just over 10%), the proportion of competence level 3 questions is lower. As competence level 5 could not be assigned to any of the questions, the highest assigned competence level is the fourth. Competence level 4 is represented in grade level 1 with under 3%, in grade level 2 not at all, in grade level 3 with approx. 8%, and in grade level 4 with approx. 9%.
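The per-grade distributions visualized in Figure 5 amount to a simple cross-tabulation of the coded data. As a minimal sketch, continuing the hypothetical coded_questions.csv layout assumed above, the relative frequencies per grade level could be computed as follows; column names remain assumptions.

```python
import pandas as pd

df = pd.read_csv("coded_questions.csv")  # assumed columns: level (0-5), grade (1-4)

# Relative frequency (in %) of each competence level within each grade level,
# i.e., the kind of per-sub-sample distribution shown in Figure 5.
distribution = (
    pd.crosstab(df["grade"], df["level"], normalize="index") * 100
).round(1)
print(distribution)
```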
The calculated Spearman’s rho correlation coefficient [53] allows the following conclusion: The competence level of the questions correlates significantly with the grade level, rs = 0.133, p = 0.004, n = 477. According to Cohen [55], this is a weak effect.

3.3. Are There Any Indications of Connections between the Identified Competence Levels of the Questions and the Subject Matter? (RQ 3)

To identify indications of connections between the distribution of questions across the competence levels and the subject matter, the three available data sets on the same subject matter (“space”) are considered. These can be compared with Brinkmann’s results [26] as a reference (see Figure 6). The following sub-samples are taken into account:
  • data set “Space_4-a” | grade level 4 | 0 questions
  • data set “Space_4-b” | grade level 4 | 33 questions
  • data set “Space_3” | grade level 3 | 67 questions
  • Brinkmann’s data set [26] | grade level 3 | 137 questions
In a direct comparison of the two data sets from grade level 4, it is initially noticeable that no questions were asked in data set “Space_4-a”. Thus, only the relative frequencies of the data sets “Space_4-b” and “Space_3” can be compared with the reference data. While competence levels 2 (with just under 10%) and 4 (with approx. 13%) have similar proportions, the other competence levels are represented differently. Competence level 0 is only represented in data set “Space_3” with approx. 7%. The proportion of competence level 1 in data set “Space_4-b” (almost 75%) is considerably higher compared to data set “Space_3” (just under 20%). Competence level 3 is represented in data set “Space_3” with just over 50%, while in data set “Space_4-b” only approx. 3% of the questions are located at this competence level. Brinkmann’s [26] reference data set shows completely different distributions of competence levels, including questions at competence level 5.

4. Discussion

4.1. To What Extent Is Brinkmann’s Competence Level Model [26] Suitable for Analyzing Questions from a Different Sample? What Modifications Are Necessary? (RQ 1)

Regarding research question 1, it can be summarized that Brinkmann’s [26] competence level model had to be modified for its application to the present sample. The need to extend the model by adding further question types presumably results from the different subject matters within the sample. This confirms Brinkmann’s [26] assumption that different subject matters require other (sub-)types of questions. The extension of the model by a competence level 0 for questions that cannot be analyzed further can be attributed to different methodological framework conditions. Brinkmann [26] used method triangulation to resolve semantic ambiguities in some questions. This made it possible, for example, to ask students explicitly in interview situations in order to understand their intentions. This option was not available in the present study. For this reason, the results at hand can only be compared to a limited extent with the results of Brinkmann’s [26] study. It remains open to what extent deviating competence levels are assigned when (not) triangulating.
To ensure a high quality of the research results, the analysis was carried out independently by two researchers. Questions assigned to different competence levels were then discussed and assigned in the sense of consensus coding [46]. In cases of doubt, the lower competence level was chosen so as not to overrate the questions. Conversely, it is therefore possible that the competence level of individual questions was underestimated. A very high reliability of the analysis instrument was confirmed by comparing the consensually generated results with the results of a third independent person (see Section 2.2). To further optimize the analysis procedure, it seems reasonable to examine not only the intercoder agreement with regard to the assigned competence levels but also, similar to Moore et al. [35], the agreement at the level of the distinguishing criteria.
These distinguishing criteria developed by Brinkmann [26] should also be discussed. For example, regarding the criterion “prior knowledge”, it is not clear when knowledge that goes beyond everyday knowledge can be assumed; the mere use of a technical term can also occur without reflection. Brinkmann [26] was able to address this problem with her method triangulation. Nevertheless, the model does not distinguish between understood and unreflective use of technical terms, nor does it address incorrect (pre-)concepts that are expressed within the questions. The focus of attention criterion attributes intentions regarding the expected scope of the answer to the students asking the questions. The extent to which this corresponds to the students’ actual intentions cannot be determined without method triangulation. The sample questions from Brinkmann’s [26] survey, whose intentions can be considered communicatively validated [47], were helpful for the present study.
The analysis results in terms of the relative frequency of the competence levels in relation to all questions (n = 477) can only be compared with existing studies to a limited extent, as studies other than Brinkmann’s [26] used different analysis instruments. However, the trend that the lower competence levels occur considerably more frequently than the higher ones can be confirmed [2,11,27,34,36].
The dominance of questions at competence levels 1 and 3 is striking, with a cumulative relative frequency of around 75% in the entire data set (n = 477). According to Brinkmann’s [26] competence level model, this distribution allows the conclusion that no prior knowledge is recognizable in these questions. The fact that no question was assigned to competence level 5 can possibly be explained by the underestimation of individual questions or by other aspects related to the sample. A comparison of the relative frequencies of the competence levels of the individual data sets (see Figure 4) raises various questions, as there are large differences in the distributions. This finding shows the necessity of including further background variables in a follow-up study, which would have to take place under controlled conditions.

4.2. Are There Any Indications of Connections between the Identified Competence Levels of the Questions and the Students’ Grade Level? (RQ 2)

The intention of research question 2 was to pursue further data exploration based on the analysis results and the background variable “grade level”, which is available for each data set. The comparison of the relative frequencies of the competence levels per grade level did not show a consistent picture. The Spearman rank correlation, however, shows a weak positive correlation between the competence level and the grade level [53,55]. This statistical indication of a correlation should be discussed in the context of the present sample: both the number of available data sets per grade level and the number of questions per data set differ, so the statistical significance must be put into perspective. For the present sample, however, it can be stated that higher competence levels tend to be weakly associated with a higher grade level. An isolated examination of the relative frequency of competence level 1 across the four grade levels nevertheless shows that its proportion is second highest in grade level 4. This underlines the thesis that the quality of students’ questions is influenced by multiple factors and that age alone is not the decisive factor [26,27].
The relative frequencies of the competence levels of the questions in grade level 3 from the present sample (n = 151) can be compared with the data from Brinkmann’s [26] study (n = 137; see Section 1.3). Comparing the relative frequencies of competence level 1 in the two samples shows that their proportions are similar, whereas the frequencies for all other competence levels differ clearly. This finding further suggests that a particular grade level (in this case, as a proxy for a comparable age group) does not allow any conclusions to be drawn about the distribution of competence levels. Such a conclusion in turn strengthens the thesis of multifactorial influence [26,27].

4.3. Are There Any Indications of Connections between the Identified Competence Levels of the Questions and the Subject Matter? (RQ 3)

The third research question ties in with the second research question and also serves to explore the data further. The fact that the variable “subject matter” can be considered at all in the present study is due to the random overlap of a few subject matters within the non-probabilistic opportunity sample [43]. Before the results are taken up again, the following should be noted: in contrast to the background variable of grade level, the subject matter is less well-defined. A supposedly identical subject matter can result in completely different accentuations of the lesson unit, while the possible age range for a grade level is limited. The examination of the three data sets on the subject matter “space” from the present sample did not reveal any clear indications of correlations between the topic and the relative frequencies of the competence levels. A direct comparison with Brinkmann’s [26] data also reveals no parallels. It should also be noted that the three data sets from the present sample are based on different grade levels. This again suggests that a specific subject matter alone (in this case, the subject matter of space) does not allow any conclusions to be drawn about the distribution of competence levels. The many other influencing factors that may have affected the questions in the sample are considered in the following section.

4.4. Further Aspects of Research Design and Methods

Now that the results have been discussed, supplementary aspects of the research design and methodology are considered. The qualitative–explorative setting takes the limited state of research into account [1,2,26]. In addition to testing the analytical instrument, patterns were searched for in a very heterogeneous sample of 477 students’ questions with the partial aid of statistical (quantitative) calculations. The non-probabilistic opportunity sample results from the context of data collection [43]. The addition of quantifying aspects to qualitative content analysis is well established [46,50]. However, to obtain valid statistical findings, a much more controlled setting is required. The results of the analysis process also show that an analysis tool focusing exclusively on the questions themselves limits the scope of the results.
The questions were stimulated and collected by 21 different trained students in the Master’s program. They were supported by a teacher who was present to ensure that the data were complete. The quality of the children’s questions may have been affected by the fact that the questions were stimulated by prospective primary school teachers who, despite their prior training for this task, had not yet completed their professional qualification. The effects of such training [13,56] and of expertise differences between experts and novices [57,58] need to be evaluated elsewhere. Apart from the requirement for the prospective primary school teachers to stimulate the students to ask questions as a prelude to a new series of lessons, there were no other requirements. The sample is correspondingly heterogeneous.

4.5. Implications

Concerning the analysis of the quality of students’ questions using the (modified) competence level model by Brinkmann [26], it was shown that the model is suitable for a reliable classification of questions into competence levels. Through the interaction of the various distinguishing criteria, such as prior knowledge, the model allows tentative conclusions about the students’ level of competence. In the present study, Brinkmann’s [26] assumption of a need for modification depending on the subject matter was confirmed. This implies that such an instrument can never be complete. Furthermore, the added value of method triangulation, serving as communicative validation, in minimizing the scope for interpretation of the questions is evident [47]. The orientation toward examples is helpful, but more concrete decision rules would be desirable, for example, to assess whether a question contains prior knowledge. The partly diffuse distribution of questions across competence levels also suggests that further data or surveys are needed to assess the quality of students’ questions. However, as the questions are the outcome of a classroom situation, the context in which they are created is quite complex [59]. This complexity should also be reflected in an analysis tool or in the interaction of several analysis tools. Competence level models that combine different aspects are also considered a promising approach for didactic diagnostics in inclusive contexts of teaching and learning [60].
Subsequent research projects should focus on the effects of method triangulation and the extent to which it leads to deviating assignments of competence levels against the background of the present competence level model. It would also be interesting to categorize the questions of the present sample using other analytical instruments to determine the extent to which the rather low quality of the students’ questions can be replicated there. In the present study, only the students’ first questions [61] from the start of a new lesson unit were examined. The analysis of the second questions [61] from the further course of the lesson unit can provide important insights into learning gains. This was also one of Brinkmann’s [26] intentions: with the help of the competence level model, students’ questions can be used to evaluate the possible increase in awareness of the subject matter at different points in the lesson. The researcher herself published another study on this [62].
Even if the findings obtained here cannot be generalized, two assumptions can be made: Younger students may be just as capable as older students of asking complex questions in primary schools (see Section 4.2). Furthermore, the subject matter does not appear to have a decisive influence on the quality of the questions (see Section 4.3). Accordingly, other factors or their interaction in the complex structure of the lesson are decisive. These factors and their interaction should be the subject of future research projects.
The previously reconstructed [2,11,27,34,36] and here replicated finding that the quality of students’ questions tends to be low is a reason for teachers to take this into account: Formulating questions about the subject matter is a demanding requirement for students [13] and needs to be practiced [63]. Only in this way is it possible to utilize the potential for teaching and learning described in Section 1.1. With appropriate support, it is therefore possible for students to formulate questions at higher competence levels [64,65]. Against the background of the discourse on critical thinking, a higher complexity of questions is certainly to be welcomed [8].
Concerning the sample, there is an encouraging finding: If students are explicitly stimulated by the teacher to ask questions, they do so. The diverse potential for supporting learning processes can be used only if this is the case. Evaluating the quality of the questions is an important contribution to this.

Author Contributions

Conceptualization, Y.S. and M.K.; methodology, Y.S.; validation, Y.S., L.H. and M.K.; formal analysis, Y.S., L.H. and M.K.; investigation, Y.S. and L.H.; resources, Y.S. and M.K.; data curation, Y.S.; writing—original draft preparation, Y.S.; writing—review and editing, Y.S., L.H. and M.K.; visualization, Y.S.; supervision, M.K.; project administration, Y.S.; funding acquisition, Y.S. and M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the “Qualitätsoffensive Lehrerbildung”, a joint initiative by the federal and state governments of Germany, which aims to improve the quality of teacher education. The program is funded by the Federal Ministry of Education and Research (BMBF). The authors are responsible for the content of this publication; grant number 01JA1807.

Institutional Review Board Statement

Ethical review and approval were waived for this study because, according to the German legislation on research involving human subjects, ethical approval is only required when sensitive data are collected, when physical interventions are performed, or when subjects could be harmed. Before the start of the study, all participants (21 prospective primary school teachers) were informed of the aims of the study, that participation was voluntary, that by transmitting the data they were giving their consent to participate voluntarily, that participation could be discontinued at any time, and that full anonymity was guaranteed. Due to the fact that no personal data was collected from the students, full anonymity was also guaranteed for them.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Table A1. Background variables and characteristics of the individual data sets in the sample.
Data Set | Subject Matter | Grade Level | Designation of the Data Set [*] | Number of Questions
1 | Space | 4 | Space_4-a | 0
2 | Human senses | 3 | Human senses_3 | 5
3 | Human skeleton | 3 | Human skeleton_3 | 6
4 | Water | 4 | Water_4 | 8
5 | Stick insects | 2 | Stick insects_2 | 11
6 | Hedgehogs | 1 | Hedgehogs_1-b | 13
7 | Christmas | 3 | Christmas_3 | 14
8 | Animals | 2 | Animals_2 | 14
9 | Birds | 2 | Birds_2 | 18
10 | Fire | 3 | Fire_3 | 23
11 | Animals | 3 | Animals_3 | 23
12 | Volcanoes | 4 | Volcanoes_4 | 25
13 | Bats | 3 | Bats_3 | 26
14 | Mobility | 2 | Mobility_2 | 28
15 | Hedgehogs | 1 | Hedgehogs_1-a | 28
16 | Rome | 4 | Rome_4 | 28
17 | Electricity | 4 | Electricity_4-b | 31
18 | Animals | 1 | Animals_1 | 33
19 | Space | 4 | Space_4-b | 33
20 | Electricity | 4 | Electricity_4-a | 43
21 | Space | 3 | Space_3 | 67
[*] The names of the data sets are made up of the subject matter and the grade level. If the subject matter and grade level are identical, letters are added to differentiate between them.
Table A2. Competence level 1: Question types and distinguishing criteria (modified version according to Brinkmann [26]).
Nr. | Question Type (Example) | Prior Knowledge | Focus of Attention | Intention of Conceptual Understanding | Philosophical Horizon
1.1 | Quartet questions to capture the diversity of the world in a certain system of order (e.g., “How big is the earth?”) | not visible | narrow | not visible | not visible
1.2 | Record questions to capture dimensions (superlatives) (e.g., “Which planet is the largest in the entire universe?”) | not visible | narrow | not visible | not visible
1.3 | Questions about the geographical classification or spatial differentiation of one’s personal living environment (e.g., “Where are the airports in North Rhine-Westphalia?”) | not visible | narrow | not visible | not visible
1.4 | Verification questions (e.g., “Is it possible to land on the sun?”) | not visible | narrow | not visible | not visible
1.5 | Questions about names or linguistic derivations to expand knowledge of the world (e.g., “Why is the water called water?”) | not visible | narrow | not visible | not visible
1.6 * | Questions on the reconstruction of foreign or historical living environments based on categories of one’s personal living environment (e.g., “How did the Romans live?”) | not visible | narrow | not visible | not visible
1.7 * | Questions about (historical) events, personalities, facts, or origins (e.g., “When was the war?”) | not visible | narrow | not visible | not visible
1.8 * | Questions with the intention of being able to (visually) imagine a concept or phenomenon (e.g., “What does a volcano look like?”) | not visible | narrow | not visible | not visible
The asterisks (*) mark the added question types.
Table A3. Competence level 2: Question types and distinguishing criteria (modified version according to Brinkmann [26]).
Nr. | Question Type (Example) | Prior Knowledge | Focus of Attention | Intention of Conceptual Understanding | Philosophical Horizon
2.1 | Quartet questions for advanced learners (e.g., “How big are sunspots?”) | visible | narrow | not visible | not visible
2.2 | Expert record questions (e.g., “What is the second most poisonous animal after the poison dart frog?”) | visible | broad | not visible | not visible
2.3 | Verification questions (e.g., “Does Uranus have a ring?”) | visible | narrow | not visible | not visible
2.4 | Comparison questions to differentiate prior knowledge by comparing two elements (e.g., “Is the sun further away from our earth than the moon?”) | visible | narrow | not visible | not visible
2.5 | Decision questions to differentiate prior knowledge against the background of possible cases/scenarios (e.g., “Is the moon light or dark?”) | visible | narrow | not visible | not visible
2.6 | Definition questions to understand terms (e.g., “What exactly is a sickle?”) | visible | narrow | not visible | not visible
2.7 | Time-and-space questions to further develop the ability to orient oneself in time (e.g., “When did the Middle Ages begin?”) | visible | narrow | not visible | not visible
2.8 | Collection questions to gather the most diverse and comprehensive information possible on an aspect (e.g., “What are all the rivers in North Rhine-Westphalia called?”) | not visible | broad | not visible | not visible
2.9 * | Questions about (historical) events, personalities, facts, or origins (e.g., “How was Caesar killed?”) | visible | narrow | not visible | not visible
The asterisks (*) mark the added question types.
Table A4. Competence level 3: Question types and distinguishing criteria (modified version according to Brinkmann [26]).
Nr. | Question Type (Example) | Prior Knowledge | Focus of Attention | Intention of Conceptual Understanding | Philosophical Horizon
3.1 | Why questions that have a generalizing character and are aimed at regularities (e.g., “Why does the moon always look different?”) | not visible | broad | visible | not visible
3.2 | How questions to break down modalities and modes of operation (e.g., “How did the sun come into being and how did the moon and the earth come into being?”) | not visible | broad | visible | not visible
3.3 | Questions about the nature of things (e.g., “What is the moon made of?”) | not visible | broad | visible | not visible
3.4 | Questions about consequences (e.g., “What is the gravitational pull like when you fly over a planet?”) | not visible | broad | visible | not visible
3.5 | Verification questions (e.g., “Did the moon and the sun look different in the past?”) | not visible | broad | visible | not visible
3.6 | Time-and-space questions to expand orientation knowledge (e.g., “What have people traded with in the past?”) | not visible | broad | visible | not visible
Table A5. Competence level 4: Question types and distinguishing criteria (modified version according to Brinkmann [26]).
Nr. | Question Type (Example) | Prior Knowledge | Focus of Attention | Intention of Conceptual Understanding | Philosophical Horizon
4.1 | Why questions that have a generalizing character (e.g., “Why does the earth revolve around itself?”) | visible | broad | visible | not visible
4.2 | Questions to break down modalities and modes of operation (e.g., “How did the urexplosion go?”) | visible | broad | visible | not visible
4.3 | Decision questions (e.g., “Where is the moon? Behind or in front of the earth?”) | visible | broad | visible | not visible
4.4 | Expert verification questions (e.g., “Is one half dark because the sun doesn’t shine on it?”) | visible | broad | visible | not visible
4.5 | Expert definition questions to understand complex terms or phenomena (e.g., “What does light years mean?”) | visible | broad | visible | not visible
4.6 | Time-and-space questions regarding a complex phenomenon in connection with a temporal structure (e.g., “When is there always a new moon?”) | visible | broad | visible | not visible
4.7 | Consequence questions for advanced learners to better understand the course of a particular scenario (e.g., “If the sun ever explodes, how will it explode?”) | visible | broad | visible | not visible
Table A6. Competence level 5: Question types and distinguishing criteria (modified version according to Brinkmann [26]).
Nr. | Question Type (Example) | Prior Knowledge | Focus of Attention | Intention of Conceptual Understanding | Philosophical Horizon
5.1 | Questions based on understood technical terms requiring a complex conclusion to answer (e.g., “Why is oxygen only on Earth?”) | visible | broad | visible | not visible
5.2 | Questions about the meaning of the nature of the living environment that focus on the “why” of a phenomenon (e.g., “Why does a planet exist if you can’t stand on it?”) | visible | broad | visible | visible
5.3 | Questions from a particular perspective (future significance, evaluations, etc.) that seek clarity about connections or patterns of interpretation in order to understand and categorize processes (e.g., “What happens if the rainforest is destroyed?”) | visible | broad | visible | not visible
5.4 | Questions about the whence and whither of humankind or of a philosophical nature (e.g., “Will people live on the other planets in the future?”) | not visible | broad | visible | visible

References

  1. Wu, L.; Liu, Y.; How, M.-L.; He, S. Investigating Student-Generated Questioning in a Technology-Enabled Elementary Science Classroom: A Case Study. Educ. Sci. 2023, 13, 158. [Google Scholar] [CrossRef]
  2. Niegemann, H. Lernen und Fragen: Bilanz und Perspektiven der Forschung. Unterrichtswissenschaft 2004, 32, 345–356. [Google Scholar] [CrossRef]
  3. Neber, H. Fragenstellen. In Handbuch Lernstrategien; Mandl, H., Friedrich, H.F., Eds.; Hogrefe: Göttingen, Germany, 2006; pp. 50–58. ISBN 978-3-8017-1813-8. [Google Scholar]
  4. Aflalo, E. Students generating questions as a way of learning. Act. Learn. High. Educ. 2021, 22, 63–75. [Google Scholar] [CrossRef]
  5. Pallesen, H.; Hörnlein, M. Warum Schüler*innen keine Fragen stellen: Unterricht zwischen Sozialisation zur Fraglosigkeit und Bildungsanspruch. In Kinderperspektiven im Unterricht: Zur Ambivalenz der Anschaulichkeit; Rumpf, D., Winter, S., Eds.; Springer VS: Wiesbaden, Germany, 2019; pp. 3–10. ISBN 978-3-658-22432-5. [Google Scholar]
  6. Kultusministerkonferenz. Empfehlungen zur Arbeit in der Grundschule. Available online: https://www.kmk.org/fileadmin/pdf/PresseUndAktuelles/2015/Empfehlung_350_KMK_Arbeit_Grundschule_01.pdf (accessed on 29 May 2024).
  7. Department for Education. The National Curriculum in England: Key Stages 1 and 2 Framework Document. Available online: https://assets.publishing.service.gov.uk/media/5a81a9abe5274a2e8ab55319/PRIMARY_national_curriculum.pdf (accessed on 29 May 2024).
  8. OECD. The Future of Education and Skills. Education. 2023. Available online: https://www.oecd.org/en/about/projects/future-of-education-and-skills-2030.html (accessed on 29 May 2024).
  9. Lombardi, L.; Mednick, F.J.; Backer, F.D.; Lombaerts, K. Fostering Critical Thinking across the Primary School’s Curriculum in the European Schools System. Educ. Sci. 2021, 11, 505. [Google Scholar] [CrossRef]
  10. Spencer, A.G.; Causey, C.B.; Ernest, J.M.; Barnes, G.F. Using Student Generated Questions to Foster Twenty-First Century Learning: International Collaboration in Uganda. Excell. Educ. J. 2020, 9, 57–84. [Google Scholar]
  11. Chin, C.; Osborne, J. Students’ questions: A potential resource for teaching and learning science. Stud. Sci. Educ. 2008, 44, 1–39. [Google Scholar] [CrossRef]
  12. Miller, S.; Brinkmann, V. SchülerInnenfragen im Mittelpunkt des Sachunterrichts. In Sachunterricht in der Grundschule entwickeln—Gestalten—Reflektieren; Gläser, E., Schönknecht, G., Eds.; Grundschulverband: Frankfurt am Main, Germany, 2013; pp. 226–241. ISBN 9783941649095. [Google Scholar]
  13. Schilling, Y.; Kuckuck, M. Das Anregen und Berücksichtigen von Schüler*innenfragen im Sachunterricht: Impulse für eine vielperspektivische Professionalisierungsgelegenheit im Studium. Widerstreit Sachunterricht 2024, 28, 1–10. [Google Scholar] [CrossRef]
  14. Schmeinck, D.; Kidman, G. The Integrated Nature of Geography Education in German and Australian Primary Schools. In Teaching Primary Geography: Setting the Foundation; Kidman, G., Schmeinck, D., Eds.; Springer Nature AG: Cham, Switzerland, 2022; pp. 15–27. ISBN 978-3-030-99970-4. [Google Scholar]
  15. Schomaker, C.; Tänzer, S. Sachunterrichtsdidaktik: Bestandsaufnahme und Forschungsperspektiven. In Lernen im Fach und über das Fach hinaus: Bestandsaufnahmen und Forschungsperspektiven aus 17 Fachdidaktiken im Vergleich, 2nd ed.; Rothgangel, M., Abraham, U., Bayrhuber, H., Frederking, V., Jank, W., Vollmer, H.J., Eds.; Waxmann: Münster, Germany; New York, NY, USA, 2021; pp. 363–390. ISBN 9783830993070. [Google Scholar]
  16. Meschede, N.; Hartinger, A.; Möller, K. Sachunterricht in der Lehrerinnen- und Lehrerbildung: Rahmenbedingungen, Befunde und Perspektiven. In Handbuch Lehrerinnen- und Lehrerbildung; Cramer, C., König, J., Rothland, M., Blömeke, S., Eds.; Verlag Julius Klinkhardt: Bad Heilbrunn, Germany, 2020; pp. 541–548. ISBN 9783838554730. [Google Scholar]
  17. Gesellschaft für Didaktik des Sachunterrichts. Perspektivrahmen Sachunterricht, 2nd ed.; Verlag Julius Klinkhardt: Bad Heilbrunn, Germany, 2013; ISBN 978-3-7815-1992-3. [Google Scholar]
  18. Schilling, Y.; Beudels, M.; Kuckuck, M.; Preisfeld, A. Sachunterrichtsbezogene Teilstudiengänge aus NRW auf dem Prüfstand: Eine Dokumentenanalyse der Bachelor- und Masterprüfungsordnungen. Herausford. Lehrer*innenbildung 2021, 4, 178–195. [Google Scholar] [CrossRef]
  19. Beudels, M.M.; Damerau, K.; Preisfeld, A. Effects of an Interdisciplinary Course on Pre-Service Primary Teachers’ Content Knowledge and Academic Self-Concepts in Science and Technology–A Quantitative Longitudinal Study. Educ. Sci. 2021, 11, 744. [Google Scholar] [CrossRef]
  20. Kahlert, J.; Fölling-Albers, M.; Götz, M.; Hartinger, A.; Miller, S.; Wittkowske, S. (Eds.) Handbuch Didaktik des Sachunterrichts, 3rd ed.; Verlag Julius Klinkhardt: Bad Heilbrunn, Germany, 2022; ISBN 978-3-8385-8801-8. [Google Scholar]
  21. Peschel, M.; Mammes, I. Der Sachunterricht und die Didaktik des Sachunterrichts als besondere Herausforderung für die Professionalisierung von Grundschullehrkräften. In Professionalisierung von Grundschullehrkräften: Kontext, Bedingungen und Herausforderungen; Mammes, I., Rotter, C., Eds.; Verlag Julius Klinkhardt: Bad Heilbrunn, Germany, 2022; pp. 188–203. ISBN 978-3-7815-5949-3. [Google Scholar]
  22. Schröer, F.; Tenberge, C. Theorien und Konzeptionen inklusiven Sachunterrichts. In Inklusive (Fach-)Didaktik in der Primarstufe: Ein Lehrbuch; Dexel, T., Ed.; Waxmann: Münster, Germany; New York, NY, USA, 2022; pp. 158–185. ISBN 9783838556864. [Google Scholar]
  23. Simon, T. Vielperspektivität und Partizipation als interdependente und konstitutive Merkmale einer inklusionsorientierten Sachunterrichtsdidaktik. In Ich und Welt verknüpfen: Allgemeinbildung, Vielperspektivität, Partizipation und Inklusion im Sachunterricht; Siebach, M., Simon, J., Simon, T., Eds.; Schneider Verlag Hohengehren GmbH: Baltmannsweiler, Germany, 2019; pp. 66–76. ISBN 9783834019516. [Google Scholar]
  24. Praetorius, A.-K.; Martens, M.; Brinkmann, M. Unterrichtsqualität aus Sicht der quantitativen und qualitativen Unterrichtsforschung. In Handbuch Schulforschung; Hascher, T., Idel, T.-S., Helsper, W., Eds.; Springer Fachmedien Wiesbaden: Wiesbaden, Germany, 2020; pp. 1–20. ISBN 978-3-658-24734-8. [Google Scholar]
  25. Wuttke, E. Unterrichtskommunikation und Wissenserwerb: Zum Einfluss von Kommunikation auf den Prozess der Wissensgenerierung; Lang: Frankfurt am Main, Germany, 2005; ISBN 3-631-53832-4. [Google Scholar]
  26. Brinkmann, V. Fragen Stellen an die Welt: Eine Untersuchung zur Kompetenzentwicklung in Einem an den Schülerfragen Orientierten Sachunterricht; Schneider Verlag Hohengehren: Baltmannsweiler, Germany, 2019; ISBN 9783834019233. [Google Scholar]
  27. Chin, C.; Brown, D.E.; Bruce, B.C. Student-generated questions: A meaningful aspect of learning in science. Int. J. Sci. Educ. 2002, 24, 521–549. [Google Scholar] [CrossRef]
  28. van der Meij, H.; Karabenick, S.A. The great divide between teacher and student questioning. In Strategic Help Seeking: Implications for Learning and Teaching; Karabenick, S.A., Ed.; L. Erlbaum Associates: Mahwah, NJ, USA, 1998; pp. 195–218. ISBN 9780805823844. [Google Scholar]
  29. Levin, A. Lernen durch Fragen: Wirkung von strukturierenden Hilfen auf das Generieren von Studierendenfragen als Begleitende Lernstrategie; Waxmann: Münster, Germany, 2005; ISBN 9783830914730. [Google Scholar]
  30. Otero, J.; Graesser, A.C. PREG: Elements of a Model of Question Asking. Cogn. Instr. 2001, 19, 143–175. [Google Scholar] [CrossRef]
  31. Levin, A.; Arnold, K.-H. Aktives Fragenstellen im Hochschulunterricht: Effekte des Vorwissens auf den Lernerfolg. Unterrichtswissenschaft 2004, 32, 295–307. [Google Scholar] [CrossRef]
  32. Aguiar, O.G.; Mortimer, E.F.; Scott, P. Learning from and responding to students’ questions: The authoritative and dialogic tension. J. Res. Sci. Teach. 2010, 47, 174–193. [Google Scholar] [CrossRef]
  33. Ritz-Fröhlich, G. Kinderfragen im Unterricht; Klinkhardt: Bad Heilbrunn, Germany, 1992; ISBN 3781507114. [Google Scholar]
  34. Graesser, A.C.; Person, N.K. Question Asking During Tutoring. Am. Educ. Res. J. 1994, 31, 104–137. [Google Scholar] [CrossRef]
  35. Moore, S.; Nguyen, H.A.; Bier, N.; Domadia, T.; Stamper, J. Assessing the Quality of Student-Generated Short Answer Questions Using GPT-3. In Educating for a New Future: Making Sense of Technology-Enhanced Learning Adoption; Hilliger, I., Muñoz-Merino, P.J., Laet, T.d., Ortega-Arranz, A., Farrell, T., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 243–257. ISBN 978-3-031-16289-3. [Google Scholar]
  36. Niegemann, H.; Stadler, S. Hat noch jemand eine Frage? Systematische Unterrichtsbeobachtung zu Häufigkeit und kognitivem Niveau von Fragen im Unterricht. Unterrichtswissenschaft 2001, 29, 171–192. [Google Scholar] [CrossRef]
  37. Graesser, A.C.; Person, N.K.; Huber, J. Mechanisms that Generate Questions. In Questions and Information Systems; Lauer, T.W., Peacock, E., Graesser, A.C., Eds.; Lawrence Erlbaum Associates: Hillsdale, NJ, USA, 1992; pp. 167–187. ISBN 9780805810189. [Google Scholar]
  38. Scardamalia, M.; Bereiter, C. Text-Based and Knowledge-Based Questioning by Children. Cogn. Instr. 1992, 9, 177–199. [Google Scholar] [CrossRef]
  39. Marton, F.; Booth, S. Learning and Awareness; Routledge: New York, NY, USA, 1997; ISBN 9780805824551. [Google Scholar]
  40. Creswell, J.W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 3rd ed.; SAGE: Los Angeles, CA, USA, 2009; ISBN 9781412965569. [Google Scholar]
  41. Pajo, B. Introduction to Research Methods: A Hands-on Approach, 2nd ed.; SAGE: Los Angeles, CA, USA, 2018; ISBN 9781483386959. [Google Scholar]
  42. Statistisches Bundesamt. Statistischer Bericht. Allgemeinbildende Schulen. Available online: https://www.destatis.de/DE/Themen/Gesellschaft-Umwelt/Bildung-Forschung-Kultur/Schulen/Publikationen/Downloads-Schulen/statistischer-bericht-allgemeinbildende-schulen-2110100237005.xlsx?__blob=publicationFile (accessed on 29 May 2024).
  43. Döring, N. Forschungsmethoden und Evaluation in den Sozial- und Humanwissenschaften, 6th ed.; Springer: Berlin/Heidelberg, Germany, 2023; ISBN 978-3-662-64762-2. [Google Scholar]
  44. Ministerium für Schule und Bildung des Landes Nordrhein-Westfalen. Die Grundschule in Nordrhein-Westfalen. Informationen für Eltern. Available online: https://broschuerenservice.nrw.de/msb-duesseldorf/files?download_page=0&product_id=293&files=3/a/3a2910637f9ff37401346e40aea0aa5b.pdf (accessed on 29 May 2024).
  45. Flick, U. (Ed.) The SAGE Handbook of Qualitative Research Design; SAGE Publications Limited: London, UK, 2022; ISBN 9781529766943. [Google Scholar]
  46. Kuckartz, U.; Rädiker, S. Qualitative Content Analysis: Methods, Practice and Software, 2nd ed.; SAGE: Los Angeles, CA, USA, 2023; ISBN 978-1-5296-0913-4. [Google Scholar]
  47. Mayring, P. Einführung in die Qualitative Sozialforschung: Eine Anleitung zu Qualitativem Denken, 7th ed.; Beltz: Weinheim, Germany; Basel, Switzerland, 2023; ISBN 9783407296016. [Google Scholar]
  48. Wirtz, M.A.; Caspar, F. Beurteilerübereinstimmung und Beurteilerreliabilität: Methoden zur Bestimmung und Verbesserung der Zuverlässigkeit von Einschätzungen mittels Kategoriensystemen und Ratingskalen; Hogrefe: Göttingen, Germany; Bern, Switzerland; Toronto, ON, Canada; Seattle, WA, USA, 2002; ISBN 3801716465. [Google Scholar]
  49. Landis, J.R.; Koch, G.G. The Measurement of Observer Agreement for Categorical Data. Biometrics 1977, 33, 159–174. [Google Scholar] [CrossRef]
  50. Vogl, S. Quantifizierung. Köln. Z. Soziol. Sozialpsychol. 2017, 69, 287–312. [Google Scholar] [CrossRef]
  51. Kuckartz, U. Mixed Methods: Methodologie, Forschungsdesigns und Analyseverfahren; Springer VS: Wiesbaden, Germany, 2014; ISBN 978-3-531-93267-5. [Google Scholar]
  52. Kelle, U.; Erzberger, C. Qualitative und quantitative Methoden: Kein Gegensatz. In Qualitative Forschung: Ein Handbuch, 14th ed.; Flick, U., von Kardorff, E., Steinke, I., Eds.; Rowohlts Enzyklopädie im Rowohlt Taschenbuch Verlag: Reinbek bei Hamburg, Germany, 2022; pp. 299–309. ISBN 9783499556289. [Google Scholar]
  53. Kaptein, M.; van den Heuvel, E. Statistics for Data Scientists: An Introduction to Probability, Statistics, and Data Analysis; Springer International Publishing: Cham, Switzerland, 2022; ISBN 978-3-030-10531-0. [Google Scholar]
  54. Raithel, J. Quantitative Forschung: Ein Praxiskurs, 2nd ed.; VS Verlag für Sozialwissenschaften: Wiesbaden, Germany, 2008; ISBN 978-3-531-91148-9. [Google Scholar]
  55. Cohen, J. Statistical Power Analysis. Curr. Dir. Psychol. Sci. 1992, 1, 98–101. [Google Scholar] [CrossRef]
  56. Schilling, Y.; Molitor, A.-L.; Ritter, R.; Schellenbach-Zell, J. Anregung von Wissensvernetzung bei Lehramtsstudierenden mithilfe von Core Practices. In Vernetzung von Wissen bei Lehramtsstudierenden—Eine Black-Box für die Professionalisierungsforschung? Wehner, A., Masanek, N., Hellmann, K., Grospietsch, F., Heinz, T., Glowinski, I., Eds.; Klinkhardt: Bad Heilbrunn, Germany, 2024. [Google Scholar]
  57. Krauss, S. Expertise-Paradigma in der Lehrerinnen- und Lehrerbildung. In Handbuch Lehrerinnen- und Lehrerbildung; Cramer, C., König, J., Rothland, M., Blömeke, S., Eds.; Verlag Julius Klinkhardt: Bad Heilbrunn, Germany, 2020; pp. 154–162. ISBN 9783838554730. [Google Scholar]
  58. Gruber, H.; Stöger, H. Experten-Novizen-Paradigma. In Unterrichtsgestaltung als Gegenstand der Wissenschaft; Kiel, E., Ed.; Schneider Hohengehren: Baltmannsweiler, Germany, 2011; pp. 247–264. ISBN 9783834008923. [Google Scholar]
  59. Helmke, A. Unterrichtsqualität und Professionalisierung: Diagnostik von Lehr-Lern-Prozessen und evidenzbasierte Unterrichtsentwicklung; Klett Kallmeyer: Hannover, Germany, 2022; ISBN 9783772716850. [Google Scholar]
  60. Prengel, A. Didaktische Diagnostik als Element alltäglicher Lehrarbeit—“Formatives Assessment” im inklusiven Unterricht. In Diagnostik im Kontext inklusiver Bildung: Theorien, Ambivalenzen, Akteure, Konzepte; Amrhein, B., Ed.; Verlag Julius Klinkhardt: Bad Heilbrunn, Germany, 2016; pp. 49–63. ISBN 9783781554610. [Google Scholar]
  61. Ernst, K. Den Fragen der Kinder nachgehen. Die Grundschule 1996, 98, 7–11. [Google Scholar]
  62. Brinkmann, V. “Werden die Pflanzen trotzdem angebaut, auch wenn es der Umwelt schadet, sie zu pflegen?”—Schülerfragen zum Thema Landwirtschaft. In Landwirtschaft im Sachunterricht: Mehr als ein Ausflug auf den Bauernhof?! Schneider, K., Queisser, U., Eds.; wbv Media GmbH & Co. KG: Bielefeld, Germany, 2022; pp. 53–73. ISBN 9783763967209. [Google Scholar]
  63. Mueller, R.G.W. Making Them Fit: Examining Teacher Support for Student Questioning. Soc. Stud. Res. Pract. 2016, 11, 40–55. [Google Scholar] [CrossRef]
  64. Rothstein, D.; Santana, L. Make Just One Change: Teach Students to Ask Their Own Questions; Harvard Education Press: Cambridge, MA, USA, 2011; ISBN 9781612500997. [Google Scholar]
  65. Godinho, S.; Wilson, J. Helping Your Pupils to Ask Questions; Routledge: London, UK, 2016; ISBN 9780415447270. [Google Scholar]
Figure 1. Schematic representation of Brinkmann’s [26] competence level model.
Figure 2. Schematic representation of the modified competence level model.
Figure 3. Relative frequency of competence levels taking into account all questions (n = 477) of the sample.
Figure 4. Absolute frequency of competence levels per data set taking all questions (n = 477) into account.
Figure 5. Relative frequency of the competence levels, presented separately by grade level, taking all questions (n = 477) into account.
Figure 6. Relative frequency of the competence levels, presented separately for the data sets on the subject matter “space”, with Brinkmann’s data set [26] as a reference.
Table 1. Example of analysis 1.

Question: “How Many Spines Do Hedgehogs Get?”

Distinguishing Criteria                 | Characteristic
Prior knowledge                         | ☒ not visible  ☐ visible
Focus of attention                      | ☒ narrow       ☐ broad
Intention of conceptual understanding   | ☒ not visible  ☐ visible
Philosophical horizon                   | ☒ not visible  ☐ visible
Competence level                        | ☐ 0  ☒ 1  ☐ 2  ☐ 3  ☐ 4  ☐ 5
Question type                           | 1.1: Quartet questions to capture the diversity of the world in a certain system of order
Table 2. Example of analysis 2.

Question: “What Is a Faraday Cage?”

Distinguishing Criteria                 | Characteristic
Prior knowledge                         | ☐ not visible  ☒ visible
Focus of attention                      | ☐ narrow       ☒ broad
Intention of conceptual understanding   | ☐ not visible  ☒ visible
Philosophical horizon                   | ☒ not visible  ☐ visible
Competence level                        | ☐ 0  ☐ 1  ☐ 2  ☐ 3  ☒ 4  ☐ 5
Question type                           | 4.5: Expert definition questions to understand complex terms or phenomena
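To illustrate the structure of such an analysis record, the following Python sketch shows one possible way to encode a rated question and to aggregate the relative frequency of competence levels across a sample (as reported for the n = 477 questions). The class and field names (RatedQuestion, competence_level, and so on) are illustrative assumptions, not part of the authors’ instrument, and the competence level is simply recorded as assigned by the human coder rather than derived automatically from the criteria.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class RatedQuestion:
    """One analysed student question (fields mirror Tables 1 and 2).

    Illustrative sketch only: names and types are assumptions, not the
    authors' coding instrument. The competence level is the value assigned
    by the human coder, not computed from the criteria.
    """
    text: str
    prior_knowledge_visible: bool
    focus_of_attention_broad: bool
    conceptual_understanding_visible: bool
    philosophical_horizon_visible: bool
    competence_level: int   # 0-5, assigned by the coder
    question_type: str      # e.g., "1.1" or "4.5"

def relative_level_frequencies(questions: list[RatedQuestion]) -> dict[int, float]:
    """Relative frequency of each competence level in a sample (cf. Figure 3)."""
    counts = Counter(q.competence_level for q in questions)
    total = len(questions)
    return {level: counts.get(level, 0) / total for level in range(6)}

# Example usage with the two questions from Tables 1 and 2:
sample = [
    RatedQuestion("How many spines do hedgehogs get?",
                  False, False, False, False, competence_level=1, question_type="1.1"),
    RatedQuestion("What is a Faraday cage?",
                  True, True, True, False, competence_level=4, question_type="4.5"),
]
print(relative_level_frequencies(sample))
# {0: 0.0, 1: 0.5, 2: 0.0, 3: 0.0, 4: 0.5, 5: 0.0}
```

Keeping the four distinguishing criteria as separate fields, rather than only storing the final level, would also allow later checks of how consistently coders applied the rubric; this is a design choice of the sketch, not a claim about the authors’ procedure.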
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.