1. Introduction
Reading comprehension of a text occurs as a result of a reader’s reading skills being put into practice. Once the mechanical process of decoding and encoding is automated, comprehension proceeds with a degree of fluency that depends on the skill of each reader. It is in this last phase that the variables of memory, attention, and inference become especially relevant.
In this sense, there are numerous tests that evaluate these parameters, among others. In recent decades, memory, as well as the ability to infer and pay attention in the context of reading literacy, have been subjects of study individually. However, the scientific literature that integrates all three dimensions is scarce.
Standardized international tests, such as PISA—Programme for International Student Assessment (for secondary students) and PIRLS—Progress in International Reading Literacy Study (for primary students), encompass memory, inference, and attention as part of the process of understanding what is read, integrating these dimensions into the steps of their processes. PIRLS indicates that there are several steps necessary for achieving reading comprehension: 1. location and retrieval of explicit information (using attention); 2. drawing direct conclusions (using memory); 3. integration and interpretation of ideas and information (inferring); and 4. analysis and evaluation of content and textual elements (inferring). Additionally, PISA simplifies the process into three stages: 1. retrieving information (memory); 2. interpreting the text (attention); and 3. reasoning and evaluation (inference). Furthermore, PISA reorganizes these stages: it defers drawing conclusions until the stage of reasoning and evaluation. This means that PIRLS’s second stage is integrated into PISA’s third stage. However, it could also be seen that PIRLS’s second stage is integrated into PISA’s first stage, as it involves extracting direct conclusions, which could be viewed as the direct literal interpretation of Jiménez Pérez [
1] in the first stage of understanding the text. Developed along the same lines as PISA is PIAAC, essentially the same set of OECD tests of reading competence, similar to PISA for adolescents, but aimed at adults.
In any case, a fusion of these three tests can be carried out to condense their specifications, as is the case with TEECLED (Test de Evaluación y Entrenamiento de la Competencia Lectora en Español y Digital—Test for Evaluation and Training of Reading Competence in Spanish and Digital [
2]), which is based on the experiential premise that reading competence is not a matter of age or educational level (although there may be relationships), but of training. Thus, TEECLED encompasses three dimensions around which the concept of reading competence revolves, and these are reading phases ordered by interconnected steps rather than absolute dimensions understood as isolated compartments.
According to TEECLED, the first dimension is the use of memory, which provides objectivity and accuracy to subsequent processes, facilitating understanding from the root. When executed correctly, it is characterized by an individual’s ability to summarize effectively (although the concept of attention is used, it refers exclusively to memory, not to the capacity for full awareness, attention to processes, or concentration on the event itself). The second dimension involves making optimal use of the level of attention, enabling the information obtained to be used to infer from prior knowledge as objectively as possible. This relates to a level of self-awareness regarding one’s own capabilities: the capacity to recall previously acquired knowledge (not immediate memory, but the ability to recall what has been read in an unbiased manner, not as a summary but as memories, experiences, and sensations, albeit recounted objectively).
In Spain and Latin America, reading competence continues to generate significant interest, perhaps due to the consistently poor scores obtained in standardized tests such as PIRLS (primary education students), PISA (secondary education students), and PIAAC (adults, regardless of their education). Thus, European guidelines, for instance, emphasize the importance of this competence (reading) as a fundamental axis upon which hinges not only a more successful educational process but also optimal citizen development [
3].
2. Memory
In Spain, memory was traditionally (in the 1950s and 1960s) used as an almost exclusive learning tool and taken as a sign of developed cognitive processes. Once democracy was established and teaching evolved in the 1980s and 1990s, the pendulum swung to the other extreme, generating a teaching process in schools that repudiated the use of memory in all areas, including reading. But memory has a significant, though not exclusive, role in reading literacy.
There are tests that measure attention in conjunction with memory. In any case, memory most frequently appears within a broader multifactorial perspective, although interventions to improve memory can yield positive results in reading skills [
4]. At lower levels of comprehension, working memory does not correlate; however, it does correlate in higher-level comprehension processes [
5], although memory, as a cognitive process involved in reading competence, involves “теснoй связи этoй характеристики с рабoчей памятью”—the close connection of this characteristic with working memory [
6] (p. 70). On the other hand, “memory, auditory memory and procedural memory contribute significantly for the explanation of reading process” [
7] (p. 29). Determining to what extent memory affects reading literacy, in relation to attention and the ability to infer, thus becomes an important requirement for measuring reading literacy.
As for tests that evaluate it, TECOLEIN combines memory with inference. More specifically, TOMAL, perhaps the most specific test on this criterion, presents four main memory indices (verbal, non-verbal, composite, and delayed recall), in addition to complementary tests to calculate sequential recall, free recall, and attention/concentration. The VADS test (which assesses auditory and visual memory of digits and, like TOMAL, is used in dyslexia) works mainly on immediate memory; it has been concluded that “a good performance in digit memory supposes a good capacity of concentration”; consequently, the poor performance of many students with learning disabilities on the digit test reveals a poor ability to establish and evoke symbol sequences (see [
8]). Nonetheless, there is a certain tendency not to overvalue this finding because, for some researchers, allowing the user to consult the text during a test may provide an advantage, since decreasing the reliance on memory can increase comprehension capacity [
1]. The TALE 1995, the ECLE questionnaire (see the evaluation by the General Council of Psychology of Spain [
9]), and tests such as ECL or EMLE-TAMLE (see the evaluation of CompLEC by Llorens Tatay et al. [
10]) have all been considered to allow the user to consult the text during the test. In any case, memory, which depends mainly on attention, determines to a certain extent the daily life of the individual, and in a society where sustained reading efforts are demanded of the brain less and less often, it can be a decisive factor in better understanding what is read.
3. Attention
From the point of view of the emerging field of neuroeducation, attention is one of the fundamental pillars of the teaching and learning process at school and of human development in society, beyond the didactic field. The role of attention, as part of executive functioning, shows a significant relationship with the reading process; in addition, attention can predict reading competence [
11]. Regarding the brain areas that are activated in imaging tests, Silva-Pereyra et al. [
12] affirm that “poor reading skills may involve failure to focus attention”.
Although it is not difficult to find tests that consider parameters such as inference and memory, as well as other parameters such as speed (the case of SCALE, in which students read texts silently and then aloud), speed and precision (KEY, TEGRA), and fluency (see, for example, the official reading fluency test of the University of Jaén, developed by Perea de la Casa et al. [
13]), such is not the case with regard to the parameter of attention, which appears above all in the areas corresponding to ADHD or dyslexia, as is the case with the WISC IV. In any case, Leobien [
14] considers it relevant, along with memory, sequencing skills, and reading speed. In this sense, we can also refer to the TOMAL and VADS tests (for the latter, it is assumed that poor concentration results in poor performance on digit memory), as well as attention tests such as the d2, which consider this dimension together with speed and precision variables, and the CEDEtest M10 and RM-2, which include “attention to detail” in their methodology.
During the reading process, paying full attention prevents difficulties in understanding what is read from accumulating; it is common to realize that we are thinking about other issues while reading. As a dimension that strengthens the ability to understand and interpret what has been read, attention is an important aspect to analyze in conjunction with memory and inference.
4. Inference
All of the tests mentioned in the Introduction have long since ceased to be interested in literal comprehension located at the basic linguistic, phonological, syntactic, and lexical levels since this is only the first level of comprehension of a text [
15]. Currently, they are more inclined towards a deeper understanding and aspects such as the ability to infer, that is, the ability of the human being to deduce and reach a conclusion with psychological–cognitive assumptions [
16,
17] (or as defined in studies such as that of the CADAH Foundation [
8] on the TALE, PROLEC-R, PROESC, EMLE, TOMAL and VADS tests). Indeed, reading is not only identifying a repertoire of signs “but already understanding, interpreting, discovering, enjoying” [
18] (p. 70). It is not surprising, then, that the inferential components of TECOLEIN, ECL, MARSI [
19], EGRA and ACL4 are common. In this manner, Allain Arteaga’s interactionist, cognitive, and psycholinguistic perspective [
20] focuses on reading comprehension, particularly emphasizing inferential processes. Therefore, a comprehensive reading would be a (mainly) strategic process [
21]. In the case of PROCULEIN, a well-known tool used for assessing text comprehension in English as a second language, the test distinguishes between the text’s foundation and the situational model. It suggests that various types of inferences come into play within the situational model: (a) those about the intention of the author; (b) those on the meaning of stylistic resources and ambiguous expressions; (c) those about the information implicit in the text; and (d) those about actions involving the use of prior knowledge [
22]. It also reflects the importance of inference in adequately understanding what a certain author has meant by her text [
1], which is the basis of TEECLED, the Test for Evaluation and Training of Reading Competence in Spanish and Digital [
2], not only testing print-based texts but also digital ones [
23]. To these are added tests such as the PIAAC, which measures the cognitive and work-related competences necessary for individuals to participate successfully in society, as a way of inferring levels of adult reading competence, since reading competence is the individual’s ability to use their reading comprehension in a useful way in society [
1]. In this direction, we can mention the following: the M10, RM-2, and Test-Fix tests, intended for adults and within the CEDEtest battery (see [
24]), which attend to inferential reasoning (with broader objectives such as addressing general intelligence or numerical reasoning and, in the case of Test-Fix, non-verbal intelligence); Lectum and its underlying psycholinguistic approach [
25]; inferential understanding, to which PROLEC-R—a test that continues to be, by the way, one of the most applied (see [
26] in Lima, [
27] in Mexico) for both research and in the field of dyslexia—also attends, with its emphasis on information processing; as well as CLIP_v5 [
28], which emphasizes Walter Kintsch’s construction–integration model, as well as bridging and macroproposition inferences, though with poor incorporation of the reader’s knowledge. On the other hand, the results obtained by Guzmán-Simón et al. [
29] regarding TECOLEIN showed a graduation of inferential reading comprehension in students 12, 14, and 16 years old, who developed variable strategies: some starting with knowledge built from the text and others from knowledge based on long-term memory. Thus, inference is confirmed as one of the fundamental pillars of reading comprehension and competence, which is why it is included as an evaluable dimension in numerous tests.
5. CL Test Review
In this new scenario, the tests themselves have undergone a methodological revision due to the feedback received, as several questions require resolution, and some criteria need further refinement. This study has considered pre-existing tests for the formulation of its dimensions.
Subsequently, these tests are juxtaposed with the dimensions developed by the MAI, serving to contextualize the importance of the reading process as conceptualized by the authors of this research, and grounded in TEECLED.
To begin with, tests should avoid unnatural language contexts (like those present, for example, in TECOLEIN; see [
29]). They must use authentic texts, not texts written ad hoc by outsiders, and prioritize the critical and pragmatic dimension over attention to the lexicon (the opposite of what happened, say, in CLP; see [
30]). On the other hand, tests should focus more on reading efficiency (in the manner of TLC2 and TLC2010) and less on reading speed, since attention to understanding is more important than how quickly something is read. Likewise, it must be borne in mind that students do not possess the same skills with respect to the different types of texts—e.g., narrative, discontinuous, expository—as deduced from ECOMPLEC or EVALÚA 2012, a typology into which digital texts are incorporated (Programme for the International Assessment of Adult Competencies (PIAAC), 2011). In this regard, it is essential to consider certain aspects that have to do with reading habits, as reflected in recent studies on adolescents: (a) integrative training of new readers at school [
31]; (b) print-based-reading consumption preferences, choice of reading materials, periods of greatest reading, etc. [
23]; (c) digital reading by young people [
32]; (d) the relationship between reading habits and music (see the study by Parrado Collantes et al. [
33]); (e) reading consumption in international contexts (see [
34] for field research in Poland, Chile and Portugal); and (f) reading mediation, also in contexts of an international nature ([
35], in the aforementioned countries).
Furthermore, the assessment of the errors committed, a feature included in TALE, the Magallanes Scale (refer to [
36]), and the LEE test [
37], appears to be crucial. On the other hand, reading aloud has to be considered (ESCALA, 2013) in order to assess speed and accuracy. Likewise, tests must allow for rereading the text, a possibility that ECL offers [
36,
38]. The tests must be adapted to different time durations since it has been proven that having more time works in favor of people with intellectual disabilities (ECOMPLEC, see [
17]). As for the questions, it is desirable that they not be exclusively of the multiple-choice type, as has been customary, but rather, more in line with PIAAC, a combination of multiple-choice and short-answer questions, or constructed in the manner of PROCULEIN, which also includes questions about opposites, true-or-false questions, and open questions based on evocation or free recall. Alternatively, questionnaires like those from TLC (2010) incorporate multiple-choice exercises alongside exercises with prompts to complete, underline, order, build an outline, or give a short answer. Likewise, oral texts and stimuli (illustrations) should be included to help carry out the test (as is the case, for example, in PROLEC 2007). On the other hand, poor applications of cloze methodology should be avoided; despite the methodology’s solid Gestalt psycholinguistic basis and linguistic–pragmatic approach, such applications have sometimes been nothing more than exercises in pure guesswork (see [
39,
40]). In addition, critical understanding has to be developed even more (even if it was already addressed in NEP, or, in a limited way, in CLP), which, as Serrano de Moreno [
41] states, is an objective of university students, namely, accessing knowledge as the basis of the ability to infer.
However, it is fair to recognize an increasing use of technology at different levels, ranging from design to methodology. In addition to being familiar to the student, the technology allows immediate feedback, even during the test itself. For example, Dialect (2013) offers results immediately, as well as online correction to improve the statistical calculation. In addition, PROLEC-SE-R (with TEAcorrige) and ECOMPLEC and its Read & Answer system stand out, as does the TuinLEC technology at EdiLEC [
42]. The particular case of ECOMPLEC, an updated version of the print-based format, is worth examining in more detail: Read & Answer allows the registration of online indices and provides data on reading and answering processes that are unobservable in print-based reading (as Llorens Tatay et al. point out; [
43]). These authors argue that such a record offers “descriptive” insights (i.e., how users read) as well as “explanatory” indications (the strategies employed for searching and self-regulation). The latter are interesting because they can be trained and can later inform the design of adapted intervention programs [
44]. For its part, the ABCDeti software (designed by Cedeti; see, for this, [
45]), as stated in its approach, is criterion-oriented (achievement levels are based on the performance obtained) and aimed at evaluating reading skills. Regarding the Pista E program [
46], it is notable for its GPS technology, although it does have inherent limitations concerning accuracy and speed. More importantly, there are potential drawbacks related to the fact that students may be more intrigued by the novelty of the application’s format, which does not necessarily translate into an improvement in their reading skills.
In general, it seems that apps are geared more to improving reading comprehension than to its evaluation, although these aspects are not exclusive, as seen in the case of Dytective, which aims to improve cognitive literacy skills from a playful perspective but which also has a “screening” test that advertises the possibility of detecting reading and writing difficulties in 15 min [
47]. As for the first aspect, improvement, the following works stand out: Ludiletras (to work on literacy, see [
48]); Galexia, an app developed by Pambú! Dev in 2016 [
49], which attends to the reading fluency of a child and stimulation of their development; Leobien, for children up to 12 years old, with the playful purpose of working on reading skills to reinforce what they have learned in the classroom (memory, attention, sequencing and reading speed) [
14]; Subtext app, which allows the teacher to interact with students to address reading and comprehension (see, for the latter, [
50]); Yo Leo con Grin, an app aimed at first readers (for example, it presents tasks for learning letters, syllables, strokes, and reading words and phrases) [
51]; Intralíneas, an educational platform for digital reading and support for teachers used for the work of reading comprehension in educational centers for students between 8 and 18 years old [
52]; Lea System, an app which allows you to read short texts with attractive topics relating to general culture and to answer questions, by which you can earn points and awards [
53]; and Ludibuk, which provides statistics on the evolution of each reader, so that, on the one hand, teachers can see progress and act accordingly and, on the other, readers can work at home, and in the case of small children, they can play with different colors, change the textual typography, underline, etc. Books can also be accompanied by music, so that young readers, says Vázquez [
54], will relax and focus on what they read. At the end of the reading, questions are asked about what has been read. There are also apps with more specific purposes, for example, those used to improve reading speed so as to read a large number of words in a short period of time with almost 90% text comprehension, taking advantage of Spritz technology (a rapid-reader function that displays a single word on screen at a given speed). Other apps use the Rapid Serial Visual Presentation (speed-reading) method together with gamification elements, which are additionally useful for monitoring progress (accelerating the reading), for interactive exercises (texts to be read faster), or for personalizing the texts (reading speed test; [
55]). Following the aforementioned gamification line, there are educational video games on the market such as Maximum Consequentia, which is online and designed to be played ubiquitously, from anywhere with Internet access, from
https://manglar.uninorte.edu.co/handle/10584/5884#page=1 (accessed on 13 June 2023) [
56], and is structured in five levels in order to achieve the best possible reading comprehension.
More focused on evaluation, Istation [
57] measures the data obtained by the student, which is recorded as to different skills: phonological, phonetic, fluency, vocabulary, and comprehension. Regarding Reading Comprehension, designed by Alejandro Soto Treviño for Android in 2016 [
58], this app measures the speed of reading and comprehension according to levels by means of a 10-question questionnaire that is answered after reading the text. At the end, it offers a report with the time spent reading, words read per minute, degree of comprehension (percentage), correct answers, words understood per minute, and recommendations based on the analysis of the answers and the reading time.
Based on technology or not, another aspect which has been improved with regard to evaluation and which must also be reaffirmed in future experiences is the combination of several tests, especially those that are more refined and theoretically justified, and that attend even more to the qualitative aspects. See, for this, the critical analysis by Montanero Fernández [
22], who analyzes PROLEC, PCL, CL, EVALÚA, PROLEC-SE, CLT, TD, CL-4, TA, IHE, LASSI, ACRA, and IDEA. Thus, to avoid possible bias in the results, Jiménez-Pérez et al. [
59] decided to use ACL-4, PROLEC-R and ESCOLA together. Research by Calet et al. [
60] has shown that the ECOMPLEC, PROLEC-R and ACL tests vary in terms of the assessment format used and in the components of comprehension skills that they measure; thus, more than one diagnostic instrument is required. The experience of combining the Reading Efficacy Test (TECLE), a general screening test for reading comprehension (narrative text), with the inference area of the Test to Understand (TLC) [
61] also stands out [
62], according to Sampedro [
63], as does the use with second-grade students of the ACL-2, TECLE and PEES tests in the proposal of Cruz Ripoll Salceda et al. [
64]. The researchers had the task of evaluating the level of reading, before and after the intervention, within the framework of the LEO-PAR-D program of peer tutoring, which provided positive results, as its participants, the researchers confirmed, supervised the reading of classmates, asked and answered questions, made predictions, or assigned points for reading or answering correctly. For its part, the TECOLEIN-MARSI combination [
19] yielded additional interesting findings, such as that the sex and socioeconomic and cultural index (ISC) variables were not statistically significant. Even with partial application of technology, experiences were recorded with ABCDeti, CLP, and SEPA-SEP [
45], as well as an experience that jointly used the CLP test with the virtual e-Pels program and achieved reasonable success: although the sample of students was small, the program significantly improved their level of reading comprehension [
65].
6. Self-Test
All the aforementioned advances and improvement points can be implemented in the self-diagnosis tests. Individually acknowledging our strengths and weaknesses through self-reflection leads to a deeper and more comprehensive understanding of ourselves.
We are aware that a greater consideration of social, family, and behavioral factors, the degree of autonomy (not only cognitive) and factors such as taking tests in state or non-state centers is necessary (NEP-SR; see [
18,
66,
67]). The possibility must be considered that there are significant differences between men and women due to parameters such as deeper cognitive problems or unequal opportunities, to name just two. Even so, along the lines of Márquez García et al. [
68] (p. 58), students must improve their self-evaluation of their reading comprehension processes (how they read and how they understand) while reading, including the elements that they themselves detect as problems, and must also go further and become able to solve those problems.
The majority of the tests discussed continue to focus on the primary and secondary education stages, and on the baccalaureate stage, in much higher proportions than on university students and adults. Nonetheless, the experiences of González Moreyra are meritorious [
69], and now, as for improvement programs, Barboza-Palomino and Ventura-León [
70], Guerra and Guevara [
71] and Meza de Vernet [
72] in Venezuela have seen isolated cases of use of platforms such as PIAAC by readers up to 90 years of age (see [
73]). Likewise, the majority of these programs do not allow self-evaluation, a possibility that this test does offer.
In short, these three fundamental axes, namely, memory, attention, and inference, have an intrinsic relationship with the three aspects of the PISA questions, as well as with the phases of critical competence applied to reading, namely, OPA (Observar, Pensar, Actuar; Observe, Think, Act) [
74]. Thus, the use of memory (MAI) is employed for information retrieval (PISA), which corresponds to collecting information or data in the observation phase (OPA). Secondly, attention (MAI) leads to a general understanding that facilitates accurate interpretation (PISA), that is, thinking with focus and concentration (OPA). Finally, inferring (MAI) with one’s own knowledge is the best way to reflect and evaluate (PISA) in order to make a decision, draw a conclusion, or take a position (OPA).
The present tool is proposed as a first step in the comprehensive and informal evaluation of the reading skills of older teenagers (including university students), as well as any adult, for them to become aware of their own strengths and abilities that favor self-regulation. The objective of this design is to verify to what extent the proposed multidimensional model allows reading comprehension to be measured, compared to the generally accepted CompLEC model [
10]. This proposal is based, in a simplified way, on the guidelines of the validated TEECLED (EduLeo) tool, which takes three times as long to complete. Thus, a simple, fast, and effective tool is provided to determine a user’s level of reading competence, something of great importance in a society, like the Spanish one, that does not read as much as it once did.
Both memory and the ability to infer, along with attention as a fundamental axis, are pillars of reading skills at any age. All relate to the linguistic and psychological areas of the individual.
7. Method
7.1. Design of the Investigation
To validate this multidimensional model for measuring reading proficiency, a quasi-experimental, cross-sectional research design was used.
7.2. Participants
Three hundred and sixty university students participated (15.6% men and 84.4% women, aged between 17 and 21 years; 74.4% of the sample was 18 years old, and 96.6% were between 17 and 19 years old). Since the research was carried out with students who were studying to be future teachers in early childhood education and primary education, the sample has a higher percentage of women than men. The sample consists of university students from Malaga, Granada, and Jaén, in Andalucía (Spain), who belong to a middle-level sociocultural framework and attend public universities located in middle-class areas.
In this investigation, the ethical indications necessary for an investigation on human beings have been respected, and the following guidelines were taken into account, following the AECL (Asociación Española de Comprensión Lectora
https://comprensionlectora.es/, accessed on 13 June 2023) Guidelines: confidentiality guarantees were provided; subjects were informed of their right to information and to the protection of their personal data, as required by Spanish law; guarantees of non-discrimination on any grounds were given; and it was made clear that participation was voluntary and could be abandoned at any time, at the student’s request, without any explanation being required.
7.3. Instruments
Two instruments were administered to the same set of students:
(a) Self-Assessment in Reading Competence: Memory, Attention, and Inference (MAI)—An ad hoc questionnaire consisting of a 10-item scale covering the three proposed dimensions (attention, inference, and memory), with the items presented in alternating order and a dichotomous response option (Yes/No) intended to divide respondents into two groups (
Table 1).
(b) Reading proficiency test (CompLEC, [
10])—A test aimed at secondary education students that combines five texts, three of them continuous and the other two discontinuous, with a total of 20 questions, based on the indications published by the OECD in its PISA 2000 report. The texts are short, ranging between 274 and 426 words. The continuous texts are mainly expository and argumentative, while the discontinuous ones rely on diagrams and graphics, with a minimum of 130 words for non-continuous texts. The questions combine open and closed formats, with multiple-choice questions prevailing. The twenty questions are divided into information retrieval, integration, and reflection on the content and form of the text. CompLEC obtains satisfactory internal consistency (α = 0.79).
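As an illustration of the dichotomous (Yes/No) scoring of the MAI described in (a), a minimal sketch follows. This is not the authors’ scoring code: the item-to-dimension mapping below is hypothetical (Table 1 is not reproduced here), although the subscale sizes (three memory, three attention, and four inference items) follow the Results section.

```python
# Hypothetical item indices per dimension (Table 1 is not reproduced here);
# only the subscale sizes (3 + 3 + 4 = 10 items) follow the Results section.
MAI_DIMENSIONS = {
    "memory": [0, 3, 6],
    "attention": [1, 4, 7],
    "inference": [2, 5, 8, 9],
}

def score_mai(responses):
    """responses: list of 10 'Yes'/'No' answers; returns per-dimension counts."""
    binary = [1 if r == "Yes" else 0 for r in responses]
    return {dim: sum(binary[i] for i in items)
            for dim, items in MAI_DIMENSIONS.items()}

answers = ["Yes", "No", "Yes", "Yes", "Yes", "No", "No", "Yes", "Yes", "No"]
print(score_mai(answers))  # → {'memory': 2, 'attention': 2, 'inference': 2}
```

Scoring dichotomous items as 0/1 counts per subscale is the simplest way to divide respondents into the two groups the questionnaire aims at; a cut-off per dimension could then be applied.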
7.4. Process
The development of a reading comprehension measurement tool that is independent of a reference text, and that allows an individual’s level of reading comprehension to be determined objectively and simply, was based on the identification of three theoretical dimensions considered relevant as causal antecedents of reading comprehension, understood as a multidimensional construct. These three causal antecedent dimensions are the ability to infer, attention, and memory.
Following a general principle to use several items instead of just one to represent each construct [
75], for each dimension, several items have been proposed, based on theoretical reflection and analysis of the scientific literature, with special emphasis on attention, memory, and the ability to infer—the three fundamental axes in the measurement of reading competence, as has been verified in the previous section.
Regarding data collection, the research group carried out this work itself; as a result, it was not necessary to give instructions to collaborating teachers. The test was conducted with the students during the last month of class before exams. The CompLEC test was administered first, followed immediately by the MAI. The allotted time was one hour for both tests. The MAI test required an average of 5 min to complete, with the shortest time being 4 min and the longest 8 min.
7.5. Analysis of Data
In order to check the reliability of the joint scale and the subscales representing each dimension, Cronbach’s alpha coefficients were obtained [
76,
77]. In addition, an exploratory factor analysis was carried out to verify both the one-dimensionality of each MAI subscale and the multidimensional structure of the joint scale. A cluster analysis of variable groupings was also carried out, with the intention of confirming the multidimensional structure of the joint scale of items that are used as predictors of reading proficiency. Next, a multiple regression analysis was performed to determine whether the indicators grouped into orthogonal factors which represent the selected dimensions could explain the values of the CompLEC reading skills results [
10]. Finally, the predictive power of each dimension for reading proficiency was checked, and a reduced scale was obtained, consisting of only three items—one per dimension—which allows reading proficiency to be explained in a way similar to that of the full scale.
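The reliability step above can be sketched with a minimal, self-contained implementation of Cronbach's alpha; the response matrix below is simulated for illustration (the actual MAI data are not reproduced here):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated check: four noisy indicators of one latent trait give a high alpha.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
responses = trait + 0.3 * rng.normal(size=(200, 4))
print(round(cronbach_alpha(responses), 3))
```

Subscale alphas are obtained by passing only the columns of that dimension's items; the joint-scale alpha uses all columns at once.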
8. Results
As for the subscales corresponding to each dimension, their adequate reliability was verified: all obtained Cronbach's alpha values greater than 0.9, which shows that the students' responses to the items of each subscale were very similar (inference subscale with four items, Cronbach's α = 0.919; attention subscale with three items, Cronbach's α = 0.949; and memory subscale with three items, Cronbach's α = 0.952). For the joint scale, an adequate reliability value was also obtained (Cronbach's α = 0.958). In all cases, a high item–total correlation was obtained, and there was no improvement in the alpha value if any item was eliminated, either in the subscales or in the joint scale. The high Cronbach's alpha values obtained in the MAI test for the memory, attention, and inference subscales, as well as the joint scale, can be justified despite concerns about potential bias due to overly homogeneous items. These constructs, integral to reading comprehension, are expected to be closely related, particularly in a demographic like older teenagers and adults, who typically exhibit more stabilized cognitive functions and consistent reading skills. This specificity of the demographic likely contributes to less variability in the responses, inherently boosting the internal consistency measures. Furthermore, the MAI test is designed as a rapid self-assessment tool, for which high reliability is crucial to ensure that, despite its brevity, the test accurately assesses these critical reading components. The items within each subscale were carefully designed to be both targeted and specific, minimizing response ambiguity and ensuring robust construct measurement. This design choice is reflected in the high item–total correlations and the observation that removing items does not improve the alpha values, indicating that each item is vital for the scale's integrity.
Additionally, the short length of the subscales necessitates that items significantly contribute to the construct measurement, which supports the use of higher alpha values to maintain reliability in a concise assessment format. Therefore, the high alpha values in this context are indicative not just of item similarity but of a well-calibrated tool that effectively measures complex cognitive abilities in reading without sacrificing depth for brevity. An exploratory factor analysis was performed to check the unidimensionality of each subscale. The indicators showed significant correlations with each other, all greater than 0.5 (Table 2), so factor analysis was appropriate. Both the KMO measure of sampling adequacy (0.843, 0.761, and 0.768, all greater than 0.6, for the inference, attention, and memory subscales, respectively) and the Bartlett sphericity test (p = 0.000 < 0.05 in all three subscales) indicate that factor analysis was appropriate, as do the determinants of the correlation matrices, which are close to 0 (0.052, 0.046, and 0.044, respectively).
For the inference dimension, the communality of all the indicators was sufficient (>0.789), and they can be grouped into a single factor which represents 80.686% of the variability of the indicators. For the attention dimension, adequate communality values for all the indicators were also obtained (>0.878), and they can be grouped into a single factor which represents 90.834% of the variability of the indicators. With respect to the memory dimension, the communality of all the indicators was also adequate (>0.891), and they can be grouped into a single factor that represents 91.270% of the variability of the indicators.
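The factorability diagnostics reported for each subscale (determinant of the correlation matrix and Bartlett's sphericity test) can be reproduced with numpy and scipy; the data below are simulated stand-ins for a one-factor subscale, not the study's responses:

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data: np.ndarray):
    """Bartlett's test of sphericity: H0 is that the correlation matrix is the identity."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    det = np.linalg.det(corr)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(det)
    dof = p * (p - 1) / 2
    return det, chi2, stats.chi2.sf(chi2, dof)

rng = np.random.default_rng(1)
latent = rng.normal(size=(360, 1))                # one shared factor
items = latent + 0.4 * rng.normal(size=(360, 4))  # four correlated indicators
det, chi2, p_value = bartlett_sphericity(items)
print(f"determinant={det:.4f}, chi2={chi2:.1f}, p={p_value:.3g}")
```

A determinant close to 0 and a significant chi-square both indicate that the items are correlated enough for factor analysis to be worthwhile.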
To check whether the joint scale presents a multidimensional structure, a joint factor analysis was carried out on the 10 items that represent the indicators of the three dimensions. Again, the KMO (0.905, >0.6), the Bartlett sphericity test (p = 0.000), and the determinant of the correlation matrix (6.21 × 10⁻⁶, very close to 0) all showed the feasibility of the factor analysis.
The factor analysis obtained adequate communality values for almost all of the indicators, except for two (RC3 and RC7), which narrowly missed the value of 0.7 (0.679 and 0.688, respectively); the indicators could nonetheless be grouped into a single factor representing 72.822% of their variability. However, the eigenvalue of the second factor (0.930) was close to 1, and the greatest homogeneity occurred with the fourth factor, so we specified a three-factor solution in the analysis, as theoretically proposed, in order to observe whether the indicators were grouped in the expected theoretical dimensions.
By specifying three factors, all of the communalities registered values between 0.811 and 0.938, and the explained variance rose from 72.822% to 88.206%. Varimax rotation allowed orthogonal factors to be obtained. In the first factor, the indicators RC1, RC4, RC6, and RC8 were grouped, and 34.080% of the total variance was explained. In the second factor, indicators RC5, RC7, and RC9 were grouped, and 28.911% of the total variance was explained. Finally, the last factor grouped the indicators RC2, RC3, and RC10, and explained 25.215% of the total variance. The indicator RC6 was the only one that was not grouped correctly according to what was theoretically established, loading more on the first factor than on the third, which is the one in which it should have been grouped. In general, all the indicators loaded on their factors with values higher than 0.75, and mostly higher than 0.8.
Disregarding indicator RC6, the joint factor analysis was again performed on the remaining nine items which represented the indicators of the three dimensions. Again, the KMO (0.893, >0.6), the Bartlett sphericity test (p = 0.000), and the determinant of the correlation matrix (2.74 × 10−5, very close to 0) showed the feasibility of factor analysis.
The factor analysis obtained adequate communality values (>0.813) for all the indicators. The three identified factors explain 89.385% of the variability of the indicators. The Varimax rotation guaranteed that the factors obtained were orthogonal, establishing that the dimensions in which the indicators were grouped are independent of each other.
In the first factor, the indicators of the attention dimension were grouped (RC1, RC4, and RC8), and 32.607% of the total variance was explained. In the second factor, the indicators of the memory dimension were grouped (RC5, RC7, and RC9), and 31.095% of the total variance was explained. Finally, in the last factor, the indicators of the inference dimension were grouped (RC2, RC3, and RC10), and 25.684% of the total variance was explained.
Based on these independent factors that represent factor scores, a multiple regression analysis was proposed to determine whether the indicators grouped into factors that represented the antecedent theoretical dimensions of reading literacy could serve to explain the values of literacy for readers that had been obtained in the model of Llorens Tatay et al. [
10].
A multiple linear regression was calculated to predict reading competence (RC) based on the following factors: Factor 1 (attention), Factor 2 (memory), and Factor 3 (inference). A significant regression equation was found: F (3, 356) = 202.893,
p < 0.000, with an R2 of 0.631 (
Table 3). Participants’ predicted RC was equal to 13.867 + 1.427 × Attention + 0.575 × Memory + 0.775 × Inference, all being measured using factor scores. All factors were significant (
p = 0.000) predictors of RC. As orthogonal factors they are independent, so multicollinearity is impossible among the predictor factors. The data met the assumption of independent errors (Durbin–Watson value = 2.093).
Therefore, a variation of one point in the value of the factor that represents attention will produce an increase of 1.427 points in reading skills; an increase of one point in the value of the factor representing inference produces an increase of 0.775 points in reading proficiency; an increase of 1 point in the factor representing memory represents an increase of 0.575 points in reading proficiency.
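A regression of this kind, including the Durbin–Watson check on the residuals, can be sketched with plain least squares; the simulated data below use the reported coefficient pattern as ground truth, so this is an illustration rather than a re-analysis:

```python
import numpy as np

def fit_ols(X: np.ndarray, y: np.ndarray):
    """Ordinary least squares with an intercept; returns coefficients and residuals."""
    Xi = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    return beta, y - Xi @ beta

def durbin_watson(residuals: np.ndarray) -> float:
    """Durbin-Watson statistic; values near 2 suggest independent errors."""
    return np.sum(np.diff(residuals) ** 2) / np.sum(residuals ** 2)

rng = np.random.default_rng(2)
# Orthogonal factor scores: attention, memory, inference (simulated).
factors = rng.normal(size=(360, 3))
rc = 13.867 + factors @ np.array([1.427, 0.575, 0.775]) + rng.normal(scale=1.0, size=360)
beta, resid = fit_ols(factors, rc)
print(np.round(beta, 2), round(durbin_watson(resid), 2))
```

With 360 simulated respondents, the estimated coefficients recover the pattern they were generated from, and the Durbin–Watson value lands near 2.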
In order to confirm the multidimensional structure of the predictors of reading literacy, a cluster analysis of variables was performed. This analysis was similar to the factor analysis but less restrictive in its assumptions (it did not require linearity or symmetry, allowed categorical variables, etc.), and it supported various methods of estimating the distance matrix.
The cluster analysis of the ten proposed indicators revealed a structure in the items. Firstly, it grouped the three items of the memory dimension (RC5, RC7, and RC9) clearly and differently from the other items. Secondly, it grouped the three items of the attention dimension (RC1, RC4, and RC8) with a distance of less than 10. Finally, it also grouped the items RC2 and RC3 of the inference dimension at a reasonable distance. Item RC10, which belongs to the inference dimension, was also included in the previous group, although at a distance close to 20. Finally, item RC6, which corresponded to the inference dimension, was grouped with the items of the attention dimension, although at a distance of 15. We observed, just as in the factor analysis, that item RC6 was not grouped in the proposed theoretical dimension.
By eliminating item RC6, which was classified in the attention dimension while corresponding to the inference dimension, we found that the cluster analysis also provided evidence of convergent and discriminant validity; that is, the items grouped in each cluster represented aspects similar to each other and different between the identified groups, which correspond to the proposed theoretical dimensions (inference, attention, and memory) as antecedents of reading comprehension.
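A variable-clustering step of this kind can be sketched with scipy's hierarchical clustering, using 1 - |r| as the dissimilarity between items; the data below simulate three latent dimensions with three items each, not the actual MAI responses:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
n = 300
traits = rng.normal(size=(n, 3))  # three latent dimensions
# Three observed items per dimension, each a noisy copy of its trait.
items = np.column_stack([traits[:, d] + 0.4 * rng.normal(size=n)
                         for d in (0, 0, 0, 1, 1, 1, 2, 2, 2)])
corr = np.corrcoef(items, rowvar=False)
dist = 1 - np.abs(corr)                            # item dissimilarity
condensed = dist[np.triu_indices_from(dist, k=1)]  # condensed form for linkage()
labels = fcluster(linkage(condensed, method="average"), t=3, criterion="maxclust")
print(labels)
```

Items driven by the same latent dimension end up in the same cluster, mirroring how the MAI items grouped by dimension.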
To check the individual influence of each proposed dimension on reading competence, a regression analysis was proposed for each dimension, with the three items as predictor variables of the reading competence measure described by Llorens Tatay et al. [
10], which was the dependent variable. This method allows for checking whether it is possible to reduce the number of relevant indicators for each dimension, so that a reduced version of the multidimensional scale based on attention, inference and memory can be obtained to explain reading literacy.
8.1. Multiple Regression Models to Predict RC Based on Memory Items
A multiple linear regression was calculated to predict reading competence (RC) based on the items (RC5, RC7, and RC9) that are the indicators for the memory dimension (
Table 2, column A). A significant regression equation was found: F(3, 356) = 59.568,
p < 0.000, with an R2 of 0.334. Participants’ predicted RC was equal to 10.141 + 0.920 × RC5 + 0.073 × RC7 + 0.257 × RC9, where RC5, RC7, and RC9 were measured on a five-point Likert scale. Participants’ RC increased 0.920 for each unit of RC5, 0.073 for each unit of RC7, and 0.257 for each unit of RC9. But only RC5 (
p = 0.000) was a significant predictor of RC. The data met the assumption of independent errors (Durbin–Watson value = 1.662).
According to Hair et al. [
78] (p. 105f.), “high multicollinearity creates problems with multiple regression models … distorting the size of the beta coefficients (…) and/or changing the sign of these same coefficients”. To determine whether multicollinearity is a problem, it is important to determine whether the VIF (variance inflation factor) is 3.0 or lower; if so, then multicollinearity is unlikely to be a problem. Note that in previous publications the acceptable VIF level was thought to be 5.0, but subsequent research indicates this level is too high [
79,
80]. VIF levels were 5.928, 6.380, and 4.293 for RC5, RC7, and RC9, respectively, and all were above the limit of 3.0. Belsley [
81] proposes using both VIF and a condition index for detecting multicollinearity. Thus, if two or more variables have variance proportions (>0.5) in the same dimension, these variables are collinear. RC5 and RC7 had variance proportions of 0.8 and 0.83, respectively. Accordingly, we eliminated RC7 from the model and ran the multiple linear regression again with just RC5 and RC9.
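The VIF screening can be computed directly from the predictor correlation matrix, since the VIFs are the diagonal of its inverse; the predictors below are simulated to show how a shared component inflates the values:

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factors: diagonal of the inverse correlation matrix."""
    corr = np.corrcoef(X, rowvar=False)
    return np.diag(np.linalg.inv(corr))

rng = np.random.default_rng(4)
n = 360
shared = rng.normal(size=n)
x1 = shared + 0.3 * rng.normal(size=n)  # two predictors with a strong
x2 = shared + 0.3 * rng.normal(size=n)  # common component -> high VIFs
x3 = rng.normal(size=n)                 # independent predictor -> VIF near 1
print(np.round(vif(np.column_stack([x1, x2, x3])), 2))
```

Against the 3.0 threshold used here, x1 and x2 would be flagged as collinear, just as RC5 and RC7 were.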
A new multiple linear regression was calculated to predict reading competence (RC), based on the items (RC5 and RC9) for the memory dimension (
Table 2, column B). A significant regression equation was found: F(2, 357) = 89.518,
p < 0.000, with the same R2 of 0.334. Participants’ predicted RC was equal to 10.123 + 0.968 × RC5 + 0.284 × RC9. But only RC5 (
p = 0.000) was again a significant predictor of RC. VIF levels were 3.549 > 3, and RC5 and RC9 had high variance proportions in the condition index (0.92 and 0.93, respectively). Thus, RC9 had to be removed, and a single linear regression was again calculated using just RC5 to predict RC. The data met the assumption of independent errors (Durbin–Watson value = 1.666).
A single linear regression was calculated to predict RC based on item RC5, which represents the memory dimension (
Table 4, column C). A significant regression equation was found: F(1, 358) = 175.543,
p < 0.000, with an R2 of 0.329. Participants’ predicted RC was equal to 10.277 + 1.204 × RC5. RC5 (
p = 0.000) was again a significant predictor of RC. Therefore, an increase of one point in the variable RC5, which represents the memory dimension, produces an increase of 1.2 points in reading proficiency. The data fulfilled the assumption of independent errors (Durbin–Watson value = 1.675).
8.2. Multiple Regression Models to Predict RC Based on Inference Items
A multiple linear regression was calculated to predict reading competence (RC) based on the items (RC2, RC3, and RC10) that are the indicators for the inference dimension (
Table 2, column A). A significant regression equation was found: F(3, 356) = 107.508,
p < 0.000, with an R2 of 0.475. Participants’ predicted RC was equal to 9.497 + 0.079 × RC2 + 0.427 × RC3 + 0.882 × RC10, all measured on a five-point Likert scale. But only RC3 (
p = 0.001) and RC10 (
p = 0.000) were significant predictors of RC. The VIF level was 3.204, >3 for RC2. RC2 and RC3 had variance proportions of 0.91 and 0.60, respectively. Then, we eliminated RC2 from the model and ran the multiple linear regression again with just RC3 and RC10. The data met the assumption of independent errors (Durbin–Watson value = 1.611).
A new multiple linear regression was calculated to predict reading competence (RC) based on the items (RC3 and RC10) for the inference dimension (
Table 2, column B). A significant regression equation was found: F(2, 357) = 161.430,
p < 0.000, with an R2 of 0.475. Participants’ predicted RC was equal to 9.540 + 0.466 × RC3 + 0.906 × RC10. Both RC3 and RC10 were significant (
p = 0.000) predictors of RC. VIF levels were 2.104 (<3); thus, there was no multicollinearity between them. Therefore, an increase of one point in the variable RC3, which represents the inference dimension, produces an increase of 0.466 points in reading proficiency, and an increase of one point in the variable RC10, which also represents inference, generates an increase of 0.906 points in reading literacy. The data met the assumption of independent errors (Durbin–Watson value = 1.615).
8.3. Multiple Regression Models to Predict RC Based on Attention Items
A multiple linear regression was calculated to predict reading competence (RC) based on the items (RC1, RC4, and RC8) that are the indicators for the attention dimension (
Table 2, column A). A significant regression equation was found: F(3, 356) = 205.207,
p < 0.000, with an R2 of 0.634. Participants’ predicted RC was equal to 9.150 + 0.265 × RC1 + 0.968 × RC4 + 0.330 × RC8, all measured on a five-point Likert scale. But only RC4 (
p = 0.000) and RC8 (
p = 0.007) were significant predictors of RC. VIF levels were 6.505 and 6.111 (both >3) for RC1 and RC4, respectively. RC1 and RC4 had variance proportions of 0.84 and 0.83, respectively. Then, we eliminated RC1 from the model and ran the multiple linear regression again with just RC4 and RC8. The data met the assumption of independent errors (Durbin–Watson value = 2.270).
A new multiple linear regression was calculated to predict reading competence (RC) based on the items (RC4 and RC8) for the attention dimension (
Table 2, column B). A significant regression equation was found: F(2, 357) = 304.230,
p < 0.000, with an R2 of 0.630. Participants’ predicted RC was equal to 9.131 + 1.153 × RC4 + 0.415 × RC8. Both RC4 and RC8 were significant (
p = 0.000) predictors of RC. VIF levels were both 3.311, >3, and RC4 and RC8 had high variance proportions in the condition index (0.92 and 0.91, respectively). Thus, RC4 had to be removed, and a single linear regression was calculated again using just RC8 to predict RC. The data met the assumption of independent errors (Durbin–Watson value = 2.265).
A single linear regression was calculated to predict reading competence (RC) based on item RC8 which represents the attention dimension (
Table 2, column C). A significant regression equation was found: F(1, 358) = 386.828,
p < 0.000, with an R2 of 0.519. Participants’ predicted RC was equal to 9.665 + 1.394 × RC8. RC8 (
p = 0.000) was again a significant predictor of RC. Therefore, an increase of one point in the variable RC8, which represents the attention dimension, produces an increase of 1.394 points in reading skills. The data met the assumption of independent errors (Durbin–Watson value = 2.013).
8.4. Multiple Regression Models to Predict RC Based on Memory, Inference and Attention Items
A multiple linear regression was calculated to predict reading competence (RC) based on the following items: RC5 for the memory dimension, RC3 and RC10 for the inference dimension, and RC8 for the attention dimension (
Table 2, column D). A significant regression equation was found: F(4, 355) = 125.499,
p < 0.000, with an R2 of 0.586. Participants’ predicted RC was equal to 8.763 + 0.285 × RC5 + 0.080 × RC3 + 0.458 × RC10 + 0.847 × RC8. But RC3 (
p = 0.483) was not a significant predictor of RC. Then, we eliminated RC3 from the model and ran the multiple linear regression again with just RC5, RC10, and RC8. The data met the assumption of independent errors (Durbin–Watson value = 1.903).
A new multiple linear regression was calculated to predict reading competence (RC) based on the following items: RC5 for the memory dimension, RC10 for the inference dimension, and RC8 for the attention dimension (
Table 2, column E). A significant regression equation was found: F(3, 356) = 167.406,
p < 0.000, with an R2 of 0.585. Participants’ predicted RC was equal to 8.786 + 0.319 × RC5 + 0.856 × RC8 + 0.487 × RC10. All (RC5, RC8, and RC10) were significant (
p = 0.000) predictors of RC. VIF levels were all under the limit (1.708, 2.180, and 2.195, respectively). Thus, there was no problem of multicollinearity and the indicators for each dimension were independent variables. The data met the assumption of independent errors (Durbin–Watson value = 1.895).
Therefore, of the theoretical dimensions identified, attention contributes the most to reading comprehension, followed by the ability to infer, and to a lesser extent by memory. From each point of increase on the Likert scale with respect to the attention indicator, a 0.856-point increase in reading skills occurred. In the case of the inference indicator, for each point of increase, almost half a point of improvement was achieved in reading skills. Finally, for each improvement point in the memory indicator, almost 0.32 points were obtained in the reading competence.
The scale formed by the three indicators shows a Cronbach’s alpha of 0.836, exhibiting adequate reliability. Therefore, it is possible to explain 58.5% of reading literacy using a scale made up of only three items representing the three theoretical dimensions that give rise to reading competence.
Using the full scale with nine indicators grouped into three factors that represent the proposed theoretical dimensions, it is possible to explain 63.1% of the variability of reading skills (
Table 1). With the reduced version of the scale made up of only one item per dimension, 58.5% of the variability of reading skills can be explained (
Table 2, column E).
9. Discussion
The importance of considering the micro-abilities of memory, attention and inference in the complex processes involved in reading comprehension supports the need for an evaluation tool that considers these three dimensions. To this end, the present work has been aimed at validating the MAI test (Brief Autotest of Reading Proficiency for Adults), as the first rapid self-assessment tool for reading comprehension for adults.
Firstly, the results show that the MAI presented adequate levels of reliability. The internal consistency for each of the subscales, as well as for the complete tool, showed very good values, exceeding the Nunnally (1978) criterion of α = 0.70 designating adequate reliability. According to the standards described by George and Mallery [
82], the values obtained are considered excellent. Likewise, the results exceeded the recommended minimum levels for the consistency of a scale, as indicated by Gliem and Gliem [
83], who set an alpha value of 0.8 as a reasonable goal; by Huh et al. [
84], who maintained that said value must be equal to or greater than 0.6; by Kaplan and Saccuzzo [
85], for whom the value should be between 0.7 and 0.8; and the value set by Loo [
86] of greater than 0.8.
Second, construct validity was demonstrated. To check both the unidimensionality of each MAI subscale and the multidimensional structure of the joint scale, an exploratory factor analysis was performed, since the KMO indicator of sampling adequacy, the Bartlett sphericity test, and the determinant of the correlation matrix showed the feasibility of these analyses. The data obtained led us to dispense with RC6, after which the joint factor analysis was performed again on the remaining nine items representing the indicators of the three dimensions. Finally, the indicators of the “attention” dimension (RC1, RC4, and RC8) were grouped in the first factor, the indicators of the “memory” dimension (RC5, RC7, and RC9) were grouped in the second, and the indicators of the “inference” dimension (RC2, RC3, and RC10) were grouped in the third. In addition, Varimax rotation guaranteed that the factors obtained were orthogonal, so the dimensions in which the indicators were grouped are independent of each other. Therefore, the data obtained showed a good fit to the theoretical model, which indicates that it has a valid structure formed by three dimensions (inference, attention, and memory) which integrate the relevant factors from the literature reviewed regarding their incidence in the evaluation of reading competence.
On the other hand, to verify to what extent the proposed multidimensional model allowed reading comprehension to be measured compared to the generally accepted CompLEC model [
10], a multiple regression analysis was performed. A significant regression equation was found, and all of the factors were significant predictors of reading literacy.
In addition, to confirm the multidimensional structure of the predictors of reading literacy, a cluster analysis of variables was performed. The cluster analysis of the ten indicators revealed a clear structure that grouped the items of the three dimensions in the same way as previously indicated: memory dimension (RC5, RC7, and RC9), attention dimension (RC1, RC4, and RC8) and inference dimension (RC2, RC3, and RC10, although with the latter item at a distance of close to 20). Again, as in the factor analysis, item RC6 was not grouped in the theoretical dimension proposed for inference, but in the attention dimension. Thus, by eliminating item RC6, the cluster analysis also provided evidence of convergent and discriminant validity.
Finally, in order to check the individual influence of each dimension on reading proficiency, a regression analysis was carried out for each dimension, with the three items as predictive variables for the reading competence measure described by Llorens Tatay et al. [
10], which was the dependent variable. This method verified how it was possible to reduce the number of relevant indicators for each dimension in order to obtain a reduced version of the multidimensional scale. Thus, in the case of the memory dimension, item RC5 represented the dimension and was a significant predictor of reading proficiency (an increase of one point in the variable RC5 produced an increase of 1.2 points in reading proficiency). Regarding inference, items RC3 and RC10 were significant predictors (an increase of one point in the RC3 and RC10 variables generated an increase of 0.466 and 0.906 points, respectively, in reading proficiency). Regarding the attention dimension, item RC8 was a significant predictor (an increase of one point in the variable RC8 produced an increase of 1.394 points in reading skills). Finally, a multiple linear regression was calculated to predict reading proficiency based on items RC5 for the memory dimension, RC3 and RC10 for the inference dimension, and RC8 for the attention dimension, from which it was determined that the items RC5, RC8, and RC10 were significant predictors of reading proficiency (a 0.856 point increase in reading proficiency was achieved for each point of increase on the Likert scale with respect to attention, a 0.487 point increase for inference, and a 0.319 point increase for the memory indicator).
Therefore, attention is the dimension that contributes the most to reading comprehension, followed by inference and, to a lesser extent, memory. This result allows for the design of strategies to improve reading competence by stimulating the implementation of intervention systems that especially enhance, in line with certain studies, attention [
87,
88] and the inference capacity [
89,
90,
91,
92].
The reliability of the scale made up of the three referred indicators was adequate (α = 0.836), and it was possible to explain 58.5% of the reading competence based on them, representing the three theoretical dimensions defined. Thus, retaining concepts and recalling them from memory, maintaining reading concentration, and being able to synthesize inferences are the indicators that have been significant for reading literacy, as described below:
- RC5 (memory dimension): You have the ability to retain some concepts from memory.
- RC8 (attention dimension): When reading, you have not thought of anything concrete that could interrupt your concentration.
- RC10 (inference dimension): You are able to synthesize the text in a single sentence.
On the other hand, the full scale of nine indicators, three per factor, managed to explain 63.1% of the variability of reading skills. Therefore, both scales offer empirical evidence that attention, inference, and memory represent relevant dimensions in explaining reading competence.
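As a worked example, the reduced-scale regression equation reported above can be applied to a hypothetical respondent's item scores (the coefficients are the ones estimated in Section 8.4; the scores are invented):

```python
def predicted_rc(rc5: float, rc8: float, rc10: float) -> float:
    """Reduced-scale prediction: memory (RC5), attention (RC8), inference (RC10)."""
    return 8.786 + 0.319 * rc5 + 0.856 * rc8 + 0.487 * rc10

# Hypothetical respondent scoring 4, 3, and 5 on the three retained items.
print(round(predicted_rc(4, 3, 5), 3))  # -> 15.065
```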
10. Conclusions
This scale will make it possible to respond to the gap that exists regarding an individual’s self-evaluation or their self-perception of their reading competence level. Furthermore, this will enable the assessment of the scale’s efficacy as a measurement instrument for older teenagers and adults. It will allow for the comparison of its results with other reading evaluation experiences. In this way, this tool can be used in combination with other tests, just as has been carried out in the present study with the CompLEC, in order to obtain a complementary assessment of an individual’s reading comprehension and to analyze the correlations with the dimensions as the objects of the study (i.e., inference, attention and memory).
The MAI is an effective self-assessment measure for becoming aware of an individual’s actual development of their own reading skills, beyond the scores that a conventional reading comprehension test may award or the levels at which other tests can place a reader. In this sense, the tool can be useful in the university-level educational environment, given the reduced interest at this level in evaluating reading comprehension as a skill or competence (one largely considered as already having been achieved), and given the value that self-perception of this reading mechanism can hold for the adult population, implying a greater knowledge of strengths and weaknesses. However, it would be advisable to use this self-assessment test with an adult sample not necessarily composed of students, to verify the effectiveness of the self-assessment among participants not used to reflecting on their own learning or managing the skills required to access texts and, ultimately, to understand the textual information read. It would also be interesting to expand the sample to other age groups or educational levels, such as elementary school students, to verify whether, like TEECLED, it can be beneficial across a wide range of users of standard Spanish; although it might seem so initially, this test has not yet incorporated that target group.
On the other hand, other dimensions related to reading comprehension, such as fluency [
93], speed [
94] or even the type of texts read [
95], should not be forgotten as indicators under study and variables of main interest. However, we consider that, for a rapid self-assessment tool such as the MAI test purports to be, it is necessary to focus attention on the dimensions of inference, attention, and memory: not only do they encompass the categories of questions proposed by PISA, but there are tests that already consider them, in addition to studies that separately support their importance.
Furthermore, as this research has shown, attention, a dimension that until now has mainly been of interest in studies and tests carried out in the field of reading difficulties such as dyslexia and attention deficit hyperactivity disorder (ADHD), is the most important dimension for predicting reading literacy. The results of certain investigations have already added to the growing evidence supporting the importance of attention for reading [
11], so the development of attention in the reading process should not be neglected, but, on the contrary, must receive special consideration in the educational context.
While this test has been developed for the Spanish language, it has been built upon parameters and dimensions used in internationally recognized assessments, such as PISA, PIRLS, and PIAAC, through a scientifically validated tool, TEECLED. As a result, it could potentially be adapted to other languages, such as English, although the reliability of such an extension would need to be validated through testing conducted in that language. Moreover, this tool relies less on linguistic particulars than on cognitive processes, which tend to be more globally consistent than the origins and idiosyncrasies of individual languages. In this context, social factors have been considered, such as the rapid pace of life in Western society and, increasingly, in other societies whose social classes are improving their economic standing (for instance, the growing purchasing power of the middle classes in other cultures). The widespread preference for images over words consequently weakens visual memory in word reading: words require prior phonetic transcription in the brain and do not convey feelings or moods as swiftly as, for instance, an emoticon does. Attention, in turn, is undermined by the constant impact of numerous stimuli that hinder optimal concentration on a task (for instance, abandoning a reading after getting lost among hyperlinks and failing to construct a complete sense of the main idea).
In conclusion, it is possible to affirm that the MAI test is a reliable and valid tool for the self-evaluation of reading competence and stands as the first rapid self-assessment tool for reading comprehension in adults, as demanded by general users: not only individuals who are studying or who belong to an academic community, but all those who use standard Spanish (in Spain as well as in Latin America) and who want to assess their overall reading proficiency. This means understanding their strengths in reading and identifying weaknesses to be improved so that they can navigate their surroundings with a solid and adept comprehension of the reading material around them.
Thus, this tool could help enhance the ability to understand a range of content, from basic legal documents (such as contracts) to instant messages, including maps and recipes. It aims to prevent misunderstandings and counteract the “snowball effect” of fake news. The goal is not solely to possess in-depth linguistic knowledge of the native language (although this certainly contributes significantly), but to make better use of the linguistic resources already at one’s disposal.