Article

dmQAR: Mapping Metacognition in Digital Spaces onto Question–Answer Relationship

by Brittany Adams 1,*, Nance S. Wilson 2 and Gillian E. Mertens 2

1 Curriculum and Instruction, The University of Alabama, Tuscaloosa, AL 35487, USA
2 SUNY Cortland Literacy Department, State University of New York at Cortland, Cortland, NY 10345, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(6), 751; https://doi.org/10.3390/educsci15060751
Submission received: 19 May 2025 / Revised: 10 June 2025 / Accepted: 12 June 2025 / Published: 14 June 2025
(This article belongs to the Special Issue Digital Literacy Environments and Reading Comprehension)

Abstract

This paper proposes the Digital Metacognitive Question–Answer Relationship (dmQAR) Framework, an adaptation of traditional QAR models for the complexities of digital reading environments. In response to the nonlinear, multimodal, and algorithmically curated nature of online texts, the dmQAR Framework scaffolds purposeful metacognitive questioning to support comprehension, evaluation, and critical engagement. Drawing on research in metacognition, critical literacy, and digital reading, the framework reinterprets “Right There,” “Think and Search,” “Author and Me,” and “On My Own” question categories to align with the demands of digital spaces. Practical instructional strategies, including think-alouds, student-generated questioning, digital annotation, and reflection journals, are detailed to support implementation across diverse educational contexts. The paper emphasizes that developing self-regulated questioning is essential for fostering critical literacy and resisting surface-level engagement with digital texts. Implications for instruction highlight the need for explicit metacognitive scaffolding and equitable access to digital literacy tools. Future research directions include empirical validation of the framework’s impact on digital reading comprehension and exploration of developmental differences in metacognitive questioning practices. In an era of widespread misinformation and algorithmic bias, embedding metacognitive questioning into literacy education is vital for preparing students to navigate digital landscapes critically and reflectively.

1. Introduction

The landscape of reading has undergone a profound transformation in the 21st century, shifting from predominantly paper-based formats to an increasingly digital and interconnected information space (Delgado et al., 2018; Turner et al., 2020). Marked by the proliferation of digital texts, multimodal platforms, and algorithmic information curation, this shift presents both unprecedented opportunities and complex challenges for readers (Yen et al., 2018). While traditional models of reading comprehension have provided valuable insights into how readers construct meaning, they often fail to account for the nonlinear, multimodal, and rapidly evolving nature of digital reading environments (Paris & Hamilton, 2009; Raphael et al., 2009). Specifically, these models emphasize recall and text-to-text connections, often overlooking the metacognitive questioning strategies required for deep comprehension and critical engagement in digital spaces (Pressley, 2002b).
Traditional comprehension frameworks tend to conceptualize reading as a linear process of extracting information from a static text (Gavelek & Bresnahan, 2009). They emphasize declarative knowledge (knowing that) and procedural knowledge (knowing how), through practices such as identifying main ideas or deploying repair strategies when comprehension breaks down. However, they frequently neglect conditional knowledge (knowing when and why to apply strategies), which is essential for navigating the nonlinear, multimodal, and algorithmically curated nature of digital information (Pearson & Cervetti, 2017). Moreover, such models often marginalize the reader’s active role, underestimating the importance of metacognitive self-questioning as a tool for higher-order comprehension and critical literacy (Cho & Afflerbach, 2017; Wilson & Smetana, 2011). Even highly proficient readers, including graduate students, frequently rely on surface-level strategies when engaging with digital texts, missing opportunities for deeper critical analysis (Adams et al., 2023). These limitations challenge the assumption that comprehension naturally leads to criticality without deliberate metacognitive support.
As digital environments become the dominant medium for information consumption, readers face evolving cognitive and sociocultural demands that fundamentally reshape how comprehension occurs (Cho et al., 2018; Leu et al., 2011). Digital reading is inherently networked, requiring the construction of meaning across multiple, nonlinear pathways that span sources, platforms, authors, and contexts (Coiro, 2021; Leu et al., 2018; Naumann, 2015). Algorithmic curation further complicates this terrain by privileging some perspectives while obscuring others (Nash, 2024). To navigate such information ecologies, readers must engage in ongoing metacognitive regulation, continually asking themselves: What is my reading goal? Which links are worth following? How does this hyperlinked information relate to the main argument? (Raphael et al., 2009). In this context, self-questioning becomes not just a cognitive support but questioning-as-thinking (Wilson & Smetana, 2011), or the active generation of purposeful questions to direct comprehension, interpretation, and critique.
Metacognitive strategies are increasingly recognized as essential tools for supporting online readers (Cho & Afflerbach, 2015; Cho et al., 2017; Uçak & Kartal, 2023; Zenotz, 2012). These intentional cognitive moves enable readers to integrate multimodal content, navigate nonlinear structures, and critically evaluate sources (Kiili & Leu, 2019). Without such regulation, digital reading is prone to fragmentation, superficiality, and cognitive overload (Schurer et al., 2023; Uçak & Kartal, 2023). While traditional models of metacognitive comprehension often emphasize monitoring at the level of a single text, digital literacy demands flexible, goal-driven strategies that enable readers to move across texts, assess credibility, and synthesize across modalities. In this reconceptualization, the internet becomes not simply a source of information, but a dynamic, deictic environment requiring sustained metacognitive engagement (Leu et al., 2018).
For educators, these shifts demand more than technological integration; they require a foundational rethinking of literacy instruction. Metacognitive strategies must be reimagined for digital contexts in ways that support students’ agency as readers, evaluators, and synthesizers of online content. While an emerging body of research explores the potential of metacognition to support digital reading (Cho & Afflerbach, 2015; Cho et al., 2017; Uçak & Kartal, 2023; Wu, 2014; Zenotz, 2012), much of this scholarship remains research-focused rather than instructionally applicable.
To address this gap, this paper introduces the Digital Metacognitive Question–Answer Relationship (dmQAR) Framework, a theoretical adaptation of Raphael’s (1982, 1984, 1986) widely used Question–Answer Relationship (QAR) model and the questioning-as-thinking framework (Wilson & Smetana, 2011). Grounded in Rosenblatt’s (1978) transactional theory of reading, which conceptualizes meaning-making as relational, dynamic, and context-dependent, dmQAR extends these insights into the digital domain. It maps metacognitive questioning strategies onto the architecture of digital environments, accounting for hypertext, multimodality, and algorithmic curation (Mertens, 2023). Offering explicit instructional strategies, scaffolds, and lesson adaptations, the framework is designed to help teachers embed structured metacognitive questioning into digital literacy curricula. Its goal is to cultivate reflective, independent, and critically engaged digital readers.
Guided by the following research questions, the paper integrates cognitive, sociocultural, and critical traditions in literacy studies:
  • How do digital literacy environments shape the need for metacognitive questioning strategies?
  • What types of self-questioning facilitate deeper comprehension and critical engagement with digital texts?
  • How can educators scaffold students’ metacognitive questioning skills to foster critical digital literacy?
The sections that follow outline the theoretical foundations of metacognitive questioning, present the dmQAR Framework, and explore implications for digital literacy instruction.

2. Digital Environments and Implications for Reading

As many scholars have explored, digital environments significantly reshape how we understand and engage with reading (e.g., Kohnen et al., 2020; Mertens & Kohnen, 2022; Coiro, 2021; Leu et al., 2018; Nash, 2025; Wineburg & McGrew, 2019). As Hartman and colleagues (Hartman et al., 2010) explain, “The nature of each medium shapes the details of comprehension processing down garden paths of different sorts” (p. 131). In print reading, comprehension is influenced by the reader, the task, the text, the context, and the author. In digital contexts, however, comprehension is further complicated by the very architecture of the online environment. To read effectively online, readers must engage both flexibly and purposefully, navigating a nearly unlimited and constantly shifting body of highly accessible information. Thus, to understand the paths a reader might take through this digital “garden,” it is essential to understand the environment itself and how it shapes those paths.

2.1. Digital Environments

As a medium for reading and information processing, the internet introduces unique affordances that fundamentally alter cognitive engagement. Marsh and Rajaram (2019) identify several critical features of the digital landscape (see Table 1), emphasizing how expansive access, networked architectures, and rapid information retrieval affect cognition. Central to these implications is their argument that the boundary between personally known information and instantly accessible online content is becoming increasingly blurred. When information is retrieved rapidly, readers may overestimate its trustworthiness or internalize it into their cognitive schemas without scrutiny. As internet speeds increase, people tend to demonstrate greater confidence in the accuracy of accessed content, often offloading memory and evaluative judgment onto the digital environment (Lynch, 2016).
Moreover, the mutable, expansive, and often inaccurate nature of online content, compounded by obscured authorship, complicates traditional reading practices. Digital texts rarely exist in isolation; readers must interpret individual texts while also navigating the broader networked structures of hyperlinks, embedded media, and social contexts (Mertens & Kohnen, 2022). Unlike traditional print, which unfolds linearly, digital reading frequently requires lateral reading, or the practice of evaluating sources by opening new tabs, verifying authorship, and checking claims across multiple sources (Wineburg & McGrew, 2019).
The distraction-rich nature of digital environments poses additional challenges. Beyond pop-ups and chaotic webpage design, readers encounter hyperlinked and multimodal distractions that fragment attention. Nash (2025), in a study of students’ metacognitive reflections, observed frequent platform switching and a lack of sustained engagement. The study suggests that the stimulation-seeking behaviors cultivated by digital reading environments may undermine deep comprehension. However, Nash also found that structured cognitive reflection on one’s digital reading paths could increase self-awareness and attentiveness.
Proficient digital reading, therefore, entails far more than processing words on a screen. It demands continuous comprehension monitoring (Adams & Wilson, 2022), critical source evaluation (Kiili & Leu, 2019; Turner et al., 2020), and awareness of the sociopolitical forces shaping online information (Adams et al., 2022). Readers must develop strategic thinking skills within environments that often inhibit metacognition (Mertens, 2023). Given the complexity, volatility, and networked nature of online reading, traditional comprehension strategies are insufficient. Effective digital reading requires ongoing self-questioning to validate sources, synthesize across modalities, and resist cognitive overload.
These features fundamentally reshape the cognitive processes required for skilled reading. As such, traditional reading models designed for static, print-based texts are increasingly inadequate for capturing the dynamics of online reading. To prepare students for digital literacy, educators must revisit foundational comprehension models and adapt them to address these evolving demands.

2.2. Popular Reading Models and Online Reading

Conventional reading comprehension models offer foundational insights but lack the nuance to account for the metacognitive demands of online environments. The Simple View of Reading (SVR), for example, frames reading comprehension as the product of decoding (word recognition) and linguistic comprehension (meaning-making). While useful in early reading development, SVR does not address the nonlinear structures, multimodal integrations, or algorithmic filtering that characterize digital texts (Leu et al., 2018). Nor does it attend to metacognitive regulation, a skill essential for navigating the web’s fragmented and context-dependent texts (Coiro, 2011; Kiili & Leu, 2019).
By contrast, Rosenblatt’s (1978) transactional theory of reading offers a more fitting lens for understanding digital literacy. This theory views reading as a dynamic transaction between reader and text, emphasizing that meaning is co-constructed rather than passively absorbed (Damico et al., 2009). Such a perspective aligns with the interpretive demands of digital reading, where hypertextual structures, multimodal elements, and reader-driven navigation paths all shape comprehension (Gavelek & Whittingham, 2017). In digital contexts, readers must formulate questions in response not only to textual content but also to the structural and contextual signals embedded in the reading experience (Fox & Alexander, 2017; Coiro, 2011).
While traditional frameworks like SVR remain foundational, they fall short of addressing the nonlinear, multimodal, and evaluative demands imposed by digital environments. Recent scholarship (Coiro, 2021; Naumann, 2015; Turner et al., 2020) has begun articulating new models that better reflect these conditions. The dmQAR Framework builds on this emerging work by offering practically grounded, classroom-based strategies that scaffold metacognitive questioning throughout the digital reading process.

3. Theoretical Framework

To conceptualize metacognitive questioning for digital readers, we integrate three theoretical perspectives: metacognitive knowledge, experiences, and regulation (Ozturk, 2016); questioning-as-thinking; and critical literacy. Together, these frameworks illuminate how readers monitor cognition, interact dynamically with texts, and critique the sociopolitical forces shaping information in digital spaces.

3.1. Metacognitive Reading Practices in Digital Literacy

Metacognition, or thinking about thinking, is central to self-regulated learning and effective comprehension (Flavell, 1979). The literature most consistently divides metacognition into three concepts. The first, metacognitive skills, includes processes of self-regulation such as monitoring, evaluation, and reflection. These skills help readers recognize breakdowns in understanding and adjust strategies accordingly. The second, metacognitive knowledge, addresses knowledge of the self, the task, and appropriate strategies. Once readers develop regulation skills, they can engage this knowledge to modify their behavior. The third, metacognitive experiences, involves the affective and cognitive responses that accompany learning (Barzilai & Zohar, 2012, 2014, 2016). For example, recognizing when a multimodal element distracts from a reading goal, or when conflicting information across sources induces uncertainty, constitutes a metacognitive experience.
However, metacognition is not limited to knowledge or strategy. It also involves epistemic thinking, or one’s beliefs about knowledge and knowing (Hofer, 2016). Epistemic metacognition includes both the ability to regulate one’s learning and the influence of epistemological beliefs on that regulation (Mason et al., 2010). In digital reading, these beliefs shape how readers assess credibility. For instance, a reader’s view of Wikipedia as a “crowdsourced” resource may influence their trust in its content, just as a generative AI blurb in a search engine might prompt skepticism, or unwarranted acceptance. Engaging epistemic metacognition helps readers navigate the blurred boundaries between reliable and dubious online content (Barzilai & Zohar, 2014, 2016).
Since digital environments intensify metacognitive demands through their architecture and design (Marsh & Rajaram, 2019; Pae, 2020), epistemic awareness alone is not sufficient. Metacognitive abilities are effective only when enacted by a reader who is dispositionally inclined to be metacognitive (Kuhn, 2021). In digital contexts, this means being aware of the following: (1) online environments require intentional metacognitive engagement (Barzilai & Zohar, 2016), and (2) these environments are often structured to inhibit that very engagement (Mertens & Kohnen, 2022; Nash, 2025). Readers who recognize these challenges are more likely to deploy strategies like selective attention, goal-setting, and critical questioning to manage cognitive load and sustain purpose-driven engagement with digital content (Cho & Afflerbach, 2017).
Substantial scholarship has demonstrated that metacognitive interventions enhance digital reading. Studies show benefits for English learners (Uçak & Kartal, 2023), improvements in synthesizing across multiple sources (List & Lin, 2023), and increased strategic decision-making (Wu, 2014). Yet much of this research remains theoretical or exploratory, offering limited guidance for classroom instruction. As digital reading becomes a dominant mode of knowledge construction, educational research must prioritize pragmatic frameworks for explicitly teaching metacognitive regulation during digital engagement.

3.2. Critical Literacy and Digital Environments

Grounded in Freire and Macedo’s (1987) conception of reading as both cognitive and sociopolitical, critical literacy emphasizes the interrogation of power structures embedded in texts. This perspective is especially salient in digital environments where platform design, algorithmic curation, and commercial interests filter what readers see and how they see it (Lewison et al., 2015; Luke, 2012; Mertens & Kohnen, 2022). Digital spaces are often engineered to elicit specific behaviors (e.g., clicks, engagement, emotional reactions), many of which benefit platforms more than users (Nash, 2025). Readers must interrogate how and why certain texts appear, asking: Who benefits from this representation? What perspectives are excluded? How is my understanding being shaped by platform design or algorithmic filtering? (Janks et al., 2013).
Embedding critical literacy into digital reading instruction involves more than identifying bias. It requires cultivating metacognitive questioning habits that empower students to critique, resist, and transform digital discourse. Through the integration of metacognitive, transactional, and critical lenses, the dmQAR Framework supports the structured self-questioning practices necessary for navigating a digital information ecosystem shaped by sociotechnical forces.

3.3. Question–Answer Relationship and Questioning-As-Thinking

Questioning lies at the heart of reading comprehension, transforming readers from passive recipients into active constructors of meaning. Through questioning, readers engage in metacognitive skills (e.g., self-regulation), activate knowledge (e.g., task or strategy awareness), and experience cognitive dissonance or curiosity that can drive deeper engagement. In digital spaces, these dimensions of metacognitive questioning are essential for maintaining purposeful interaction with online content.
Raphael’s (1982, 1984, 1986) original Question–Answer Relationship (QAR) framework categorizes questions according to their relation to the text: “Right There” questions target explicit details, “Think and Search” questions synthesize across text segments, “Author and Me” questions require integration of textual and personal knowledge, and “On My Own” questions rely primarily on prior experience. QAR helps make the cognitive demands of questions visible and has been widely used to support strategic comprehension development (Raphael & Au, 2005; Wilson & Smetana, 2011).
However, while QAR provides a strong developmental scaffold for print environments, digital reading requires a more expansive approach. Readers must now synthesize information across hyperlinked documents, verify source credibility, interpret multimodal elements, and account for algorithmic filtering (Wineburg & McGrew, 2019; Coiro, 2011; Mertens & Kohnen, 2022). These additional layers necessitate dynamic, self-generated questioning rather than teacher-led prompts. In print-based classrooms, teachers scaffold attention and inference through guided questioning (Pressley, 2002a; Ness, 2016). Online, readers must internalize these questioning strategies and apply them independently across fragmented and interactive content. Without these self-questioning habits, readers are vulnerable to shallow engagement and misinformation (Cho & Afflerbach, 2017; Kiili & Leu, 2019).
In response, the dmQAR Framework extends Raphael’s original taxonomy to meet the epistemic, cognitive, and structural demands of digital reading. It retains the developmental progression from comprehension to critique but embeds this sequence within the realities of online engagement. The framework scaffolds questioning about hyperlinks, multimedia, source attribution, and ideological framing, equipping students to navigate today’s networked, multimodal texts with purpose, reflection, and critical awareness.

4. The Digital Metacognitive Question–Answer Relationship Framework

This framework adapts QAR (Raphael, 1982, 1984, 1986) and metacognitive questioning-as-thinking (Wilson & Smetana, 2011) into a flexible, developmentally sensitive model for scaffolding readers through the cognitive complexities of digital reading. The ability to engage in metacognitive questioning does not emerge spontaneously; rather, it develops progressively as readers gain experience in regulating comprehension and critically engaging with texts (Perkins, 1992). In an era where readers encounter a vast array of digital texts with varying formats, multimodal structures, credibility challenges, and algorithmic curation (Cho et al., 2018; Ness, 2016), purposeful questioning becomes essential for navigating and critically engaging with information.
However, traditional QAR frameworks, developed for static print environments and designed to occur after reading, do not fully address the demands of reading in digital spaces characterized by nonlinearity, hypertextuality, and fragmented authorship. The dmQAR Framework directly responds to these challenges by bridging basic comprehension and higher-order metacognitive processes, such as self-regulation, lateral validation, and critical synthesis, during online reading.
Designed to be adaptable across age groups and instructional contexts from primary to post-secondary, the dmQAR provides educators with a structured yet flexible guide to fostering purposeful, reflective, and critically engaged digital readers. Specifically, the dmQAR reinterprets each traditional QAR category through the lens of digital literacy demands. The following sections outline how foundational comprehension (Right There), relational analysis (Think and Search), critical evaluation (Author and Me), and reflexive application (On My Own) are reconfigured for online environments and describe practical instructional strategies to scaffold these metacognitive practices.
The original QAR framework divides question types into two broad categories: In the Text and In My Head. These general categories give readers clues as to where information can be found when answering questions. However, because the digital environment algorithmically adapts to provide different experiences for different users, it can be challenging to clearly delineate whether an answer can be found within a given text or within our minds. Since so much of what we encounter online is controlled by digital environments, the firm boundaries between these question types become nebulous. For this reason, dmQAR focuses on four categories: Right There, Think and Search, Author and Me, and On My Own. Though we present these as separate categories, we recognize that proficient digital readers ask a combination of each type of question as they read online. To support understanding and classroom application, the dmQAR is represented as a circular visual model (see Figure 1).

4.1. “Right There” During Online Reading

“Right There” questions traditionally ask readers to locate highly literal information within a passage. Often, these questions are designed to guide readers directly to the language of the text itself. However, in digital environments, surface-level retrieval is complicated by multimodal layering, hyperlinking, and variable credibility, requiring readers to attend both to textual content and digital affordances. For the purposes of the dmQAR Framework, “Right There” questions refer to information that can be found within a given webpage, without leaving the page, but that may require navigating multimodal features or discerning subtle credibility cues.
To support critical engagement even at this basic level, readers must interrogate digital texts more deliberately. Example questions include:
  • How did I locate this article/webpage?
  • Does this website/article meet my purpose for reading?
  • What can I learn from understanding the organization of this article/webpage?
  • What sources does this article cite, and are they reputable?
  • Does this text present multiple perspectives, or is it biased?
  • Whose perspective is included in this piece?
  • Is this content fact-based, opinion-based, or designed to persuade?
  • What is the main argument of this text?
  • How does this text construct its argument?
These questions move beyond surface retrieval to prompt students to consider how they were directed to the article, as well as its credibility, purpose, and the structure of single-page digital texts, thereby laying a foundation for deeper metacognitive regulation.
Educators can scaffold “Right There” engagement by implementing a variety of active strategies. Digital annotation tools such as Hypothes.is, Perusall, or Kami allow students to highlight key points, define unfamiliar terms, and insert marginal reflections as they read (Leu et al., 2011). Online discussion boards or collaborative Google Docs extend this engagement into social learning environments, where readers can compare main ideas and interpretations with peers, reinforcing active comprehension (Cho et al., 2017). Summarization exercises, in which students restate the main argument of a digital article in their own words, further ensure that comprehension is not passive but generative. Instructors can also model comprehension questioning by generating basic “Right There” prompts and gradually shifting this responsibility to students as they build fluency in self-monitoring comprehension (Behrman, 2006).
Importantly, comprehension strategies in digital reading must also address multimodal features. Videos, infographics, and interactive elements embedded within texts are increasingly common, and their integration can either reinforce or obscure meaning (Wilson & Smetana, 2011; Kiili & Leu, 2019). When engaging with video content, for example, students might be guided to take structured notes summarizing how audiovisual elements support (or conflict with) the central argument. When interacting with infographics, students might evaluate whether the visual data extends or distorts the accompanying text. Explicit instruction in cross-modal comprehension is thus essential for maintaining coherence across modes (Leu et al., 2011).
Ultimately, comprehension at the “Right There” level acts as a cognitive safeguard against fragmentation in digital environments. Without a strong foundation in basic meaning-making, students are less equipped to evaluate credibility, integrate sources, or engage critically with digital texts. Foundational comprehension strategies must therefore be explicitly taught and reinforced as the first step toward robust digital literacy.

4.2. “Think and Search” During Online Reading

In traditional print contexts, “Think and Search” questions required readers to piece together information from different parts of a single text. However, in digital environments, this skill must expand significantly: readers must evaluate fragmented, multimodal, and networked sources while making real-time credibility judgments. Limiting “Think and Search” to a single source is inadequate for the contemporary Internet reader. Online reading demands that readers synthesize information across hyperlinked documents, embedded media, and cross-platform contexts (Leu et al., 2011).
Unlike print texts that offer linear sequencing, digital texts often disperse meaning across links, videos, sidebars, and comment threads. Readers must not only locate explicit information but also assess how various elements (e.g., hyperlinks, visuals, related articles) construct meaning, shape arguments, or obscure key details (Cho et al., 2017; Coiro, 2021; Turner et al., 2020). Readers who cannot make these inferences may misinterpret implicit messages or miss critical relationships between ideas. Thus, “Think and Search” questions in online environments scaffold students’ ability to contextualize and critically synthesize information, bridging basic comprehension and higher-order analysis.
Effective “Think and Search” questioning in digital contexts encourages students to interrogate structural features, such as hyperlinking and multimodal integration. Hyperlinks often suggest authority or relatedness, but readers must ask: Why was this link included? What perspectives does it amplify or omit? Similarly, videos, infographics, and interactive tools embedded within articles can either enrich or manipulate meaning. Readers must determine whether these additions clarify the main argument or distract from it (Kiili & Leu, 2019).
Moreover, online content often contains surface indicators of credibility (e.g., citations, author bios, professional design) that may not reflect actual reliability. As Barzilai and Zohar (2012) caution, readers must learn to interrogate the structure of a digital text, not merely its surface features. Students should therefore be taught to ask the following questions:
  • How does following this hyperlink help me achieve my purpose online?
  • Does this hyperlink lead to a reputable source?
  • Does the linked content reinforce or challenge the main argument of the text?
  • What perspectives are absent in the sources being linked?
  • Who shared this resource, and why? What might they want me to believe?
To develop these skills, educators can engage students in targeted activities that foreground source comparison and credibility analysis. One effective practice is Comparing News Sources, where students examine how different media outlets frame the same event, noting shifts in language, tone, and ideological perspective. This exercise helps students detect bias, agenda-setting, and narrative shaping across sources.
Another critical instructional focus is Recognizing Sponsored Content. Many online texts blend advertising with editorial material, obscuring commercial interests behind apparent neutrality. This architecture of the internet requires that readers consistently approach text with an evaluative perspective while engaging in metacognitive practices. The features of online environments provide opportunities for engaging with complex, diverse, and unfamiliar forms of information; within this environment, metacognitive practices can help ensure that students are moving purposefully towards a learning goal (Barzilai & Ka’adan, 2017). Teaching students to identify native advertising, sponsored posts, and conflict-of-interest disclosures sharpens their ability to question the motivations underlying information presentation.
In sum, effective “Think and Search” questioning enables readers to move from isolated comprehension toward relational, contextualized understanding. However, interrogating links, structure, and source credibility naturally leads readers to deeper critical inquiries: not only how a text was constructed, but why. For this reason, “Think and Search” practices build a necessary bridge to “Author and Me” questioning, where readers evaluate the author’s purpose, perspective, and underlying assumptions.

4.3. “Author and Me” During Online Reading

“Author and Me” questions challenge readers to move beyond information retrieval toward making inferences about a text’s framing, purpose, and ideological positioning. In digital environments, this inferential reading is particularly essential, as persuasive bias, commercial agendas, and emotional framing are often obscured by platform architectures. Unlike traditional print texts, online content frequently lacks transparent sourcing, editorial review, or clear authorship (Leu et al., 2011), making critical questioning indispensable for evaluating credibility.
Digital texts, especially those circulating via social media, blogs, or opinion-based news outlets, are often designed to persuade, evoke emotional responses, or reinforce ideological worldviews. In contrast to journalistic standards historically associated with mainstream media, many digital sources operate without formal accountability structures (Metzger & Flanagin, 2013). Therefore, readers must be taught to interrogate not only what is presented, but how and why it is framed. Educators can cultivate these competencies through explicit instruction in source evaluation, fact-checking, and digital credibility assessment. Students should regularly engage in structured questioning practices such as:
  • How is this text or these texts meeting my reading purpose?
  • Who is the author of this text, and what are their credentials?
  • Who shared this text, and how does that sharing context shape interpretation?
  • How might the hosting platform’s design influence which perspectives are amplified?
Without such critical reading habits, students risk becoming passive consumers of misinformation, confirmation bias, and commercially-motivated narratives (Kiili et al., 2008).
One key instructional strategy is lateral reading (Wineburg & McGrew, 2019). Skilled readers cross-reference claims by consulting multiple independent sources rather than trusting information at face value. The “Author and Me” questions a reader may ask during reading help construct knowledge from multiple information sources, enabling readers to “apply in a coordinated and adaptive fashion, a complex set of epistemic strategies such as evaluating source reliability, corroborating claims, and integrating information” (Barzilai & Ka’adan, 2017, p. 194) across sources. Practical classroom applications include verifying an author’s credentials, checking citations for accuracy, and corroborating claims through reputable fact-checking organizations like Snopes or FactCheck.org. Students should practice the following:
  • Searching for the same claim across diverse sources.
  • Investigating whether cited studies are accurately represented.
  • Identifying financial sponsorships or conflicts of interest that might bias reporting.
Moreover, critical literacy practices must extend to examining how financial structures influence content production. Native advertising, clickbait headlines, and monetization strategies often obscure commercial intent (Lewison et al., 2002). Classroom activities can guide students to analyze who funds media outlets, what advertisements accompany particular articles, and how financial incentives shape journalistic framing.
Yet even when sources appear credible, students must recognize that digital texts are mediated through algorithmic curation and corporate platform structures (Coiro, 2021; Behrman, 2006). In working with the complex context and content of information sources, readers can engage with their own metacognitive knowledge about which epistemic strategy to apply (Barzilai & Ka’adan, 2017). For example, readers can critically reflect on how search engines, social media filters, and ownership consolidation privilege certain perspectives while marginalizing others. “Author and Me” questioning can help students step outside of individual texts to interrogate systemic structures by asking the following questions:
  • What political, economic, or social forces influence this text?
  • How does algorithmic curation affect what I encounter?
  • What ideological assumptions are embedded in this content?
  • Who benefits and who is disadvantaged by the way this issue is framed?
Building on these practices, educators can integrate exercises that analyze social media algorithms. Platforms tailor content based on users’ engagement histories, reinforcing existing beliefs and creating so-called filter bubbles. By comparing how the same social or political issue is presented across Twitter, Facebook, TikTok, and major news outlets, students can explore the following:
  • Who is speaking across platforms?
  • Who is absent from these narratives?
  • How do platform architectures amplify or distort issues differently?
Encouraging students to vary search terms, for example, comparing “climate change hoax” versus “climate change evidence,” also reveals how algorithms prioritize information differently depending on phrasing (Coiro, 2021). Such exercises foster awareness that access to information itself is curated, partial, and politically charged.
In brief, “Author and Me” questioning within the dmQAR framework prepares readers not only to comprehend digital texts but also to interrogate the broader power structures that shape digital discourse. By embedding structured, critical questioning into literacy instruction, educators can equip students to challenge dominant narratives, seek out marginalized voices, and participate ethically and thoughtfully in digital spaces.

4.4. “On My Own” During Online Reading

“On My Own” questions traditionally ask readers to rely on their background knowledge rather than retrieving information directly from a text. In digital reading environments, this type of metacognitive questioning acquires heightened importance. According to schema theory, comprehension is an interactive process in which new information is interpreted through the lens of pre-existing cognitive frameworks (Anderson & Pearson, 1984). Readers who possess rich background knowledge are better equipped to make inferences, recognize implicit assumptions, and synthesize across fragmented digital sources (Bråten et al., 2011).
However, while prior knowledge can enable deeper inquiry, it can also constrain questioning in digital spaces. Algorithmic curation often reinforces confirmation bias by exposing readers predominantly to content that aligns with their existing beliefs (Wineburg & McGrew, 2019). Without broad, critical schemas, students may accept information at face value, misinterpret sources, or fail to recognize missing perspectives (Barzilai & Zohar, 2012). Thus, “On My Own” questioning highlights the critical role of activating and expanding prior knowledge during digital reading.
Consider, for example, a student with a strong foundation in climate science. Encountering a news article that oversimplifies or misrepresents scientific consensus, such a student may immediately ask: What key information is omitted? How does this portrayal compare with established scientific findings? Conversely, a student without this background may not recognize why skeptical sources position climate data in misleading ways (Mertens, 2023). These differences underscore the metacognitive necessity of equipping readers to recognize gaps, contradictions, and biases when engaging with digital texts.
To foster deeper critical engagement, educators must intentionally scaffold background knowledge. Pre-reading activities can introduce foundational concepts before students encounter complex texts. Comparative reading assignments can expose students to diverse perspectives on a single topic, helping them recognize ideological framing and expand their cognitive schemas. Guided questioning models can explicitly demonstrate how prior knowledge shapes inquiry, with teachers modeling how to activate and apply existing knowledge when evaluating online sources.
Beyond content scaffolding, reflective practices help students interrogate how their personal experiences shape interpretation. These reflective practices are important because much of the digital reading people engage in arises from curiosity rather than from formal schooling practices. Critical literacy involves not only identifying external bias but also examining one’s internalized beliefs and habits of attention. Activities that ask students to track their digital habits, such as analyzing the types of sources they engage with over a week, can reveal content patterns and cognitive blind spots. Key reflective questions include the following:
  • What types of voices am I most often exposed to?
  • How do my beliefs influence how I interpret information?
  • What important perspectives might I be missing?
Building on these reflections, students can be encouraged to ask relational questions when reading digitally:
  • What connections can I make between this issue and my own experiences?
  • What additional information would help me better understand this issue?
  • How might different communities or stakeholders interpret this information differently?
In sum, while strong background knowledge enhances relational and critical questioning, gaps in schema and algorithmic curation threaten to narrow students’ digital comprehension. The dmQAR Framework therefore emphasizes explicit instruction not only in digital text analysis but also in the cultivation, activation, and critical expansion of prior knowledge. “On My Own” questioning fosters readers who are not merely passive absorbers of digital content, but active interrogators of meaning-making, bias, and epistemological complexity in online environments.

5. Instructional Applications of dmQAR in Classrooms

To support classroom implementation, we integrate practitioner-based strategies drawn from research on digital comprehension, critical questioning, and metacognitive scaffolding (e.g., Cho et al., 2017; Wineburg & McGrew, 2019; Ness, 2016). The dmQAR Framework provides a structured approach for fostering self-questioning strategies aligned with the cognitive demands of digital reading. Because online spaces present new affordances and constraints compared to traditional print, metacognitive questioning must evolve to help students slow down, reflect, and engage critically with digital texts. When considering the architecture of online spaces, specific question-and-answer relationships shift to accommodate the affordances and challenges of hypertextuality, multimodality, and algorithmic influence. While this framework scaffolds students in reflecting more deliberately during online reading, how it is implemented in classroom practice will significantly influence its impact. For the model to succeed, educators must combine explicit instruction, scaffolded inquiry, and blended use of digital and non-digital tools to support students’ critical engagement with digital texts.
While the framework is designed to support all learners, equitable implementation requires attention to disparities in device access, internet connectivity, and digital reading experience. Teachers may need to scaffold digital navigation and metacognitive questioning differently for students with limited prior exposure to online texts (Warschauer & Matuchniak, 2010).

5.1. Explicit Instruction and Teacher Modeling

One crucial foundation for developing metacognitive questioning is explicit instruction through teacher modeling. Traditional comprehension instruction has often relied on teacher-led questioning, where educators construct questions to guide students’ understanding. In this model, educators act as facilitators, helping students recognize patterns, identify textual gaps, and engage in structured discussion. By modeling these questioning strategies, teachers provide students with a cognitive roadmap for critically engaging with digital texts (Ness, 2016).
Think-alouds remain one of the most effective methods for explicitly teaching self-questioning (McKeown et al., 2009; Wilson & Smetana, 2011). Instructors can verbalize their thought processes while engaging with texts, modeling how to ask questions at different levels of the dmQAR Framework. Through real-time metacognitive modeling, teachers articulate comprehension gaps, evaluate textual reliability, and challenge biases (Wilkinson & Son, 2011). Think-alouds also highlight transferable questioning habits that apply across both digital and traditional literacy environments (Raphael & Au, 2005). For example, while reading a controversial news article, a teacher might include the following questions in their model:
  • Right There: “Does this webpage meet my purpose for learning?”
  • Think and Search: “I see that an expert is cited. Should I look up their credentials to verify credibility?”
  • Author and Me: “The author is making a strong claim. What evidence supports it?”
  • On My Own: “What is this article about? How does it intersect with what I already know or believe about the topic?”
Teachers would pose these questions while actively demonstrating online reading about a topic. By doing so, the teacher illustrates the interactive nature of online reading and the necessity of constant questioning to make progress towards a learning goal. After teacher think-alouds, engaging the class in collaborative think-alouds, where students take turns verbalizing thought processes, further supports shared metacognition and reinforces the epistemic thinking necessary for online reading. These practices enable students to internalize questioning strategies that they can later apply independently during digital reading. Table 2 illustrates classroom-aligned applications of each dmQAR category, highlighting instructional strategies, tools, and learning outcomes.

5.2. Shifting Toward Student-Driven Questioning

While purposeful teacher-led questioning scaffolds student engagement with strategic digital reading, overuse of teacher-led instruction risks promoting passive learning (Pearson & Gallagher, 1983). To foster genuine cognitive autonomy, the dmQAR framework suggests that instruction must gradually shift from teacher-led modeling to student-driven questioning. The goal is not merely for students to answer pre-constructed prompts, but to support students in internalizing a flexible habit of inquiry that persists across digital contexts (Pearson & Cervetti, 2017).
Student-generated questioning deepens comprehension, enhances engagement, and promotes self-regulated learning (Chin, 2006; McNamara & Magliano, 2009). This shift toward student agency aligns with broader calls to reevaluate traditional instructional and assessment structures in favor of practices that prioritize cognitive autonomy and authentic engagement (Crogman et al., 2023). Research suggests that, when students formulate their own questions, they monitor comprehension more effectively and engage more deeply with texts (Ness, 2016). This skill is particularly essential in digital environments, where algorithmic curation demands that readers independently identify gaps, biases, and credibility challenges (Cho & Afflerbach, 2017).
The dmQAR Framework supports this transition by scaffolding the process of independent questioning. Effective instructional strategies include collaborative questioning groups, where students co-construct questions about a digital text, refining their ability to detect bias, synthesize multiple sources, and assess credibility (King, 1994). For example, in a middle school classroom exploring media coverage of a current event, students might use the dmQAR structure to generate “Think and Search” questions that compare how different news outlets report on the same issue. Using graphic organizers, they examine variations in framing, highlight which perspectives are emphasized or excluded, and analyze how images or layout shape reader interpretation. A follow-up journal reflection invites students to connect these insights to their personal digital reading habits, reinforcing both metacognitive transfer and critical digital citizenship (Crogman et al., 2023).
Maintaining questioning logs, either digitally or on paper, further encourages students to reflect on their inquiries and build a personal repertoire of metacognitive questioning strategies.

5.3. Digital and Non-Digital Tools for Metacognition

While digital reading presents challenges, such as nonlinear navigation, multimodal distractions, and information overload, interactive digital tools can help scaffold metacognitive questioning practices. Annotation and collaborative discussion platforms, like Perusall, Hypothes.is, and Kami, enable students to engage in layered, social reading experiences that promote deeper questioning. These tools offer multiple affordances: embedding instructor-posed guiding questions, fostering peer-generated discussion, and supporting multimodal integration (Crogman et al., 2025). This last affordance encourages students to critically analyze hyperlinks, embedded visuals, and video content alongside traditional textual elements.
For example, a high school teacher guiding students through an online article on climate change might use Hypothes.is to embed prompts that ask students to assess the credibility of cited sources, highlight persuasive language, and comment on embedded graphs. In response, students collaboratively annotate the article, debate reliability in the margins, and summarize multimodal features in threaded discussions. These shared annotations offer a trace of evolving comprehension and deepen awareness of how digital features influence interpretation.
However, digital scaffolding alone is insufficient for cultivating enduring metacognitive habits. Non-digital strategies, like reflection journals, provide an important complement to digital tools by offering opportunities for deliberate, extended questioning beyond the immediacy of online interaction. Research suggests that written reflection reinforces self-regulation, deepens inquiry, and enhances memory retention (Zimmerman & Schunk, 2011). After engaging with a complex digital text, students might reflect in journals:
  • Right There: “What claims were made? Were sources cited?”
  • Think and Search: “What perspectives were missing? How did different elements contribute to meaning?”
  • Author and Me: “What ideological assumptions are embedded in this content?”
  • On My Own: “How do my prior experiences shape my interpretation of this issue?”
Reflection journals allow students to track evolving interpretations over time and build metacognitive self-awareness that transcends individual reading sessions.
Similarly, Socratic seminars offer rich opportunities for collective inquiry into digital texts. In these dialogues, students pose and respond to open-ended questions, modeling the kinds of critical engagement and multi-perspective evaluation necessary for digital literacy (Chin, 2006). By facilitating discussions of digital news articles, social media debates, or multimedia texts, educators can nurture habits of flexible, dialogic questioning that transfer to independent digital engagement.
Educators are increasingly integrating immersive technologies to extend this reflective space (Crogman et al., 2025). By combining digital tools for scaffolding with non-digital practices for reflection and synthesis, educators ensure that metacognitive questioning becomes a durable cognitive habit rather than a medium-specific technique. This blended approach strengthens the dmQAR Framework, cultivating students’ ability to flexibly interrogate meaning across screen-based and traditional literacy contexts (Pearson & Cervetti, 2017).
Embedding both digital and non-digital scaffolds for metacognitive questioning, the dmQAR Framework promotes a flexible and transferable critical literacy. Students trained through explicit modeling, gradual autonomy, and multimodal inquiry develop not just comprehension, but the critical habits necessary for engaged, ethical participation in the digital information landscape. As the boundaries between reading, thinking, and communicating continue to evolve, equipping students with structured self-questioning strategies will be essential for fostering thoughtful, discerning, and resilient digital citizens.

6. Future Directions for Practice

This paper proposes and justifies the dmQAR Framework as a critical adaptation of the traditional QAR model for contemporary digital literacy instruction. In an era increasingly dominated by hyperlinked, multimodal, and algorithmically curated texts, the dmQAR Framework reconceptualizes comprehension as an active, inquiry-driven, and metacognitive process. Drawing from metacognitive theory (Flavell, 1979; Ozturk, 2016), transactional reading theory (Rosenblatt, 1978), and critical literacy (Freire & Macedo, 1987; Luke, 2012), the dmQAR systematically integrates self-regulated questioning into online reading practices.
The framework advances digital literacy pedagogy by providing structured scaffolds for moving students beyond surface-level comprehension toward deeper relational analysis, critical evaluation of sources, and reflexive engagement with digital architectures. Unlike traditional models, which assume meaning is constructed linearly from stable texts, the dmQAR reflects the nonlinearity and epistemological complexity of reading in networked digital environments (Coiro, 2021; Turner et al., 2020; Leu et al., 2018).
Practically, the dmQAR offers educators actionable pathways to teach metacognitive questioning across a range of instructional contexts. Strategies such as digital annotation, lateral reading practices, collaborative questioning, and reflection journals ensure that questioning becomes a transferable literacy habit rather than an isolated comprehension exercise. In doing so, this framework positions metacognitive questioning not as ancillary, but as foundational to critical participation in digital information ecosystems.

7. Future Research Directions

While this paper provides a theoretical and pedagogical foundation, the dmQAR Framework opens several critical avenues for future empirical research. First, future studies should investigate the instructional impact of the dmQAR on students’ digital comprehension, evaluation of source credibility, and resistance to misinformation. Experimental and quasi-experimental studies could compare students taught with traditional comprehension instruction to those receiving explicit dmQAR scaffolding, measuring outcomes such as critical questioning behaviors, integrative synthesis across digital sources, and susceptibility to biased or false information (Wineburg & McGrew, 2019; List & Lin, 2023).
Second, longitudinal research could track the development of autonomous metacognitive questioning habits over time. Do students who are initially scaffolded through dmQAR instruction continue to employ self-questioning strategies in independent digital reading? What factors sustain or inhibit the internalization of these metacognitive skills across contexts? Such research must also examine how the framework functions in multilingual classrooms, where students bring diverse linguistic repertoires, cognitive strategies, and culturally situated questioning practices that may shape digital comprehension and metacognitive engagement.
Third, given the proliferation of AI-enhanced reading platforms and adaptive annotation tools, research should examine how such technologies influence self-questioning practices. Tools like Perusall, Newsela, and AI-driven personal tutors increasingly embed questioning prompts within digital texts; however, it remains unclear whether these affordances foster deeper inquiry or promote overreliance on external cues (Ademola, 2024; Kiili & Leu, 2019). Future studies should explore how adaptive scaffolds interact with student-driven metacognitive development.
Fourth, as digital media continue to evolve toward hyper-modal and participatory platforms (e.g., TikTok, Instagram, Discord), research must investigate how self-questioning differs across media environments (Crogman et al., 2025). Does questioning about credibility and ideological framing function differently in video-based versus text-based or infographic-based environments? How can dmQAR strategies be tailored to meet the cognitive demands of emerging digital formats (Greenhow et al., 2020)?
Finally, equity-focused research is needed to ensure that dmQAR practices benefit all students, not just those with privileged access to devices, stable internet, or prior digital literacy experience. While prior work (e.g., Warschauer & Matuchniak, 2010) has illuminated longstanding gaps, post-pandemic data reveal even greater disparities in students’ digital reading skills (García & Weiss, 2020). Future studies should examine how dmQAR interventions can be designed for inclusivity, addressing access gaps and supporting historically marginalized learners’ critical digital participation.

8. Preparing Students for Future-Ready Literacy

As digital information environments become increasingly complex, nonlinear, and politically charged, ensuring that students develop the skills to interrogate, synthesize, and apply knowledge in meaningful ways is no longer optional. Rather, it is a fundamental prerequisite for participatory, democratic, and critical engagement. Literacy education must move beyond static comprehension models to embrace inquiry-based, socially reflexive, and critically evaluative practices.
The dmQAR Framework equips students with the structured tools necessary to actively interrogate the credibility, structure, purpose, and ideological positioning of digital texts. By embedding self-questioning at every stage of reading—across comprehension, synthesis, critique, and reflexive application—educators can foster students’ capacity to navigate information ecologies saturated with commercial agendas, misinformation, and algorithmic biases.
Future-ready literacy means cultivating readers who do not passively consume information but who actively question, contextualize, synthesize, and challenge the narratives that shape their digital worlds. By leveraging explicit instruction, collaborative inquiry, multimodal integration, and reflective metacognitive practices, the dmQAR Framework offers a pragmatic path forward for preparing students to think critically, read deeply, and act ethically in a dynamic digital age.

Author Contributions

Conceptualization, B.A. and N.S.W.; Methodology, B.A., N.S.W. and G.E.M.; Validation, B.A., N.S.W. and G.E.M.; Writing—original draft, B.A.; Writing—review & editing, B.A., N.S.W. and G.E.M.; Visualization, B.A.; Supervision, B.A.; Project administration, B.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

No generative AI tools were used in the preparation of this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Adams, B., Dussling, T., Stevens, E. Y., & Wilson, N. S. (2022). Troubling critical literacy assessment: Criticality-in-process. Journal of Literacy Innovation, 7(2), 119–140.
2. Adams, B., & Wilson, N. S. (2022). Investigating students’ during-reading practices through social annotation. Literacy Research and Instruction, 61(4), 339–360.
3. Adams, B., Wilson, N. S., Dussling, T., Stevens, E. Y., Van Wig, A., Baumann, J., Yang, S., Mertens, G. E., Bean-Folkes, J., & Smetana, L. (2023). Literacy’s Schrödinger’s cat: Capturing reading comprehension with social annotation. Teaching Education, 34(4), 367–383.
4. Ademola, E. O. (2024, July 15–19). Reading strategies in the AI age: Enhancing comprehension and engagement with advanced technologies. 38th iSTEAMS Multidisciplinary Bespoke Conference (pp. 105–124), Accra, Ghana.
5. Anderson, R. C., & Pearson, P. D. (1984). A schema-theoretic view of basic processes in reading. In P. D. Pearson (Ed.), Handbook of reading research (pp. 255–291). Longman.
6. Barzilai, S., & Ka’adan, I. (2017). Learning to integrate divergent information sources: The interplay of epistemic cognition and epistemic metacognition. Metacognition and Learning, 12(2), 193–232.
7. Barzilai, S., & Zohar, A. (2012). Epistemic thinking in action: Evaluating and integrating online sources. Cognition and Instruction, 30(1), 39–85.
8. Barzilai, S., & Zohar, A. (2014). Reconsidering personal epistemology as metacognition: A multifaceted approach to the analysis of epistemic thinking. Educational Psychologist, 49(1), 13–35.
9. Barzilai, S., & Zohar, A. (2016). Epistemic (meta)cognition: Ways of thinking about knowledge and knowing. In J. A. Greene, W. A. Sandoval, & I. Bråten (Eds.), Handbook of epistemic cognition (pp. 409–424). Routledge.
10. Behrman, E. H. (2006). Teaching about language, power, and text: A review of classroom practices that support critical literacy. Journal of Adolescent & Adult Literacy, 49(6), 490–498.
11. Bråten, I., Britt, M. A., Strømsø, H. I., & Rouet, J. F. (2011). The role of epistemic beliefs in the comprehension of multiple expository texts: Toward an integrated model. Educational Psychologist, 46(1), 48–70.
12. Chin, C. (2006). Classroom interaction in science: Teacher questioning and feedback to students’ responses. International Journal of Science Education, 28(11), 1315–1346.
13. Cho, B.-Y., & Afflerbach, P. (2015). Reading on the internet. Journal of Adolescent & Adult Literacy, 58(6), 504–517.
14. Cho, B.-Y., & Afflerbach, P. (2017). An evolving perspective of constructively responsive reading comprehension strategies in multilayered digital text environments. In S. E. Israel (Ed.), Handbook of research on reading comprehension (2nd ed., pp. 109–134). Routledge.
15. Cho, B.-Y., Woodward, L., & Li, D. (2018). Epistemic processing when adolescents read online: A verbal protocol analysis of more and less successful online readers. Reading Research Quarterly, 53(2), 197–221.
16. Cho, B.-Y., Woodward, L., Li, D., & Barlow, W. (2017). Examining adolescents’ strategic processing during online reading with a question-generating task. American Educational Research Journal, 54(4), 691–724.
17. Coiro, J. (2011). Predicting reading comprehension on the internet: Contributions of offline reading skills, online reading skills, and prior knowledge. Journal of Literacy Research, 43(4), 352–392.
18. Coiro, J. (2021). Toward a multifaceted heuristic of digital reading to inform assessment, research, practice, and policy. Reading Research Quarterly, 56(1), 9–31.
19. Crogman, H. T., Cano, V. D., Pacheco, E., Sonawane, R. B., & Ayala, L. G. (2025). Virtual reality, augmented reality, and mixed reality in experiential learning: Transforming educational paradigms. Education Sciences, 15(3), 303.
20. Crogman, H. T., Eshun, K. O., Jackson, M., & Ochieng, S. (2023). Ungrading: The case for abandoning institutionalized assessment protocols and improving pedagogical strategies. Education Sciences, 13(11), 1091.
21. Damico, J. S., Campano, G., & Harste, J. C. (2009). Transactional theory and critical theory in reading comprehension. In S. E. Israel, & G. G. Duffy (Eds.), Handbook of research on reading comprehension (1st ed., pp. 177–188). Routledge.
22. Delgado, P., Vargas, C., Ackerman, R., & Salmerón, L. (2018). Don’t throw away your printed books: A meta-analysis on the effects of reading media on reading comprehension. Educational Research Review, 25, 23–38.
23. Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906–911.
24. Fox, E., & Alexander, P. A. (2017). Text and comprehension: A retrospective, perspective, and prospective. In S. E. Israel (Ed.), Handbook of research on reading comprehension (2nd ed., pp. 335–352). The Guilford Press.
25. Freire, P., & Macedo, D. (1987). Literacy: Reading the word and the world. Bergin & Garvey.
26. García, E., & Weiss, E. (2020). Education inequalities at the start of the COVID-19 pandemic: The case of digital access. Economic Policy Institute. Available online: https://www.scirp.org/reference/referencespapers?referenceid=3288693 (accessed on 1 June 2025).
27. Gavelek, J., & Bresnahan, P. (2009). Ways of meaning making: Sociocultural perspectives on reading comprehension. In S. E. Israel, & G. G. Duffy (Eds.), Handbook of research on reading comprehension (pp. 140–176). Routledge.
28. Gavelek, J., & Whittingham, C. E. (2017). Meaning making in the 21st century: The sociogenesis of reading comprehension. In S. E. Israel (Ed.), Handbook of research on reading comprehension (2nd ed., pp. 166–190). The Guilford Press.
29. Greenhow, C., Galvin, S. M., Brandon, D. L., & Askari, E. (2020). A decade of research on K–12 teaching and teacher learning with social media: Insights on the state of the field. Teachers College Record, 122(6), 1–72.
30. Hartman, D. K., Morsink, P. M., & Zheng, J. (2010). From print to pixels: The evolution of cognitive conceptions of reading comprehension. In E. A. Baker (Ed.), The new literacies: Multiple perspectives on research and practice (pp. 131–164). Guilford Press.
31. Hofer, B. K. (2016). Epistemic cognition as a psychological construct: Advancements and challenges. In J. A. Greene, W. A. Sandoval, & I. Bråten (Eds.), Handbook of epistemic cognition (pp. 19–38). Routledge.
32. Janks, H., Dixon, K., Ferreira, A., Granville, S., & Newfield, D. (2013). Doing critical literacy: Texts and activities for students and teachers. Routledge.
33. Kiili, C., Laurinen, L., & Marttunen, M. (2008). Students evaluating Internet sources: From versatile evaluators to uncritical readers. Journal of Educational Computing Research, 39(1), 75–95.
34. Kiili, C., & Leu, D. J. (2019). Exploring the collaborative synthesis of information during online reading. Computers in Human Behavior, 95, 146–157.
35. King, A. (1994). Comparison of self-questioning, summarizing, and note-taking-review as strategies for learning from lectures. American Educational Research Journal, 29(2), 303–323.
36. Kohnen, A. M., Mertens, G. E., & Boehm, S. M. (2020). Can middle schoolers learn to read the web like experts? Possibilities and limits of a strategy-based intervention. Journal of Media Literacy Education, 12(2), 64–79.
37. Kuhn, D. (2021). Metacognition matters in many ways. Educational Psychologist, 57(2), 73–86.
38. Leu, D. J., Kinzer, C. K., Coiro, J., Castek, J., & Henry, L. A. (2018). New literacies: A dual-level theory of the changing nature of literacy, instruction, and assessment. In Theoretical models and processes of literacy (pp. 319–346). Routledge.
39. Leu, D. J., McVerry, J. G., O’Byrne, W. I., Kiili, C., Zawilinski, L., Everett-Cacopardo, H., Kennedy, C., & Forzani, E. (2011). The new literacies of online reading comprehension: Expanding the literacy and learning curriculum. Journal of Adolescent & Adult Literacy, 55(1), 5–14.
40. Lewison, M., Flint, A. S., & Van Sluys, K. (2002). Taking on critical literacy: The journey of newcomers and novices. Language Arts, 79(5), 382–392.
41. Lewison, M., Leland, C., & Harste, J. (2015). Creating critical classrooms: Reading and writing with an edge (2nd ed.). Routledge.
42. List, A., & Lin, C.-J. (2023). Content and quantity of highlights and annotations predict learning from multiple digital texts. Computers & Education, 199, 104791.
43. Luke, A. (2012). Critical literacy: Foundational notes. Theory into Practice, 51(1), 4–11.
44. Lynch, M. P. (2016). The Internet of us: Knowing more and understanding less in the age of big data. Liveright.
45. Marsh, E. J., & Rajaram, S. (2019). The digital expansion of the mind: Implications of internet usage for memory and cognition. Journal of Applied Research in Memory and Cognition, 8(1), 1–14.
46. Mason, L., Boldrin, A., & Ariasi, N. (2010). Epistemic metacognition in context: Evaluating and learning online information. Metacognition and Learning, 5(1), 67–90.
47. McKeown, M. G., Beck, I. L., & Blake, R. G. K. (2009). Rethinking reading comprehension instruction: A comparison of instruction for strategies and content approaches. Reading Research Quarterly, 44(3), 218–253.
48. McNamara, D. S., & Magliano, J. (2009). Toward a comprehensive model of comprehension. In B. H. Ross (Ed.), The psychology of learning and motivation (pp. 297–384). Elsevier Academic Press.
49. Mertens, G. E. (2023). A rainbow hurricane?: Exploring student evaluations of ambiguously credible tweeted information within crisis contexts. Journal of Research on Technology in Education, 56(1), 40–55.
50. Mertens, G. E., & Kohnen, A. M. (2022). A crisis of medium: Critically reading internet architecture. English Journal, 111(4), 62–70.
51. Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online environments: The use of cognitive heuristics. Journal of Pragmatics, 59, 210–220.
52. Nash, B. L. (2024). Love and learning in the age of algorithms: How intimate relationships with artificial intelligence may shape epistemology, sociality, and linguistic justice. Reading Research Quarterly, 59(4), 624–631.
53. Nash, B. L. (2025). Cutting through “the fog of scrolling”: Understanding students’ entangled digital reading through metacognitive reflections on self-made video-recordings. Reading Research Quarterly, 60(2), e70001.
54. Naumann, J. (2015). A model of online reading engagement: Linking engagement, navigation, and performance in digital reading. Computers in Human Behavior, 53, 263–277.
55. Ness, M. (2016). When readers ask questions: Inquiry-based reading instruction. The Reading Teacher, 70(2), 189–196.
56. Ozturk, N. (2016). An analysis of pre-service elementary teachers’ understanding of metacognition and pedagogies of metacognition. Journal of Teacher Education and Educators, 5(1), 47–68.
57. Pae, H. K. (2020). Script effects as the hidden drive of the mind, cognition, and culture. Springer.
58. Paris, S. G., & Hamilton, E. E. (2009). The development of children’s reading comprehension. In S. E. Israel, & G. G. Duffy (Eds.), Handbook of research on reading comprehension (pp. 32–53). Routledge.
59. Pearson, P. D., & Cervetti, G. N. (2017). The roots of reading comprehension instruction. In S. E. Israel (Ed.), Handbook of research on reading comprehension (2nd ed., pp. 12–56). Routledge.
60. Pearson, P. D., & Gallagher, M. C. (1983). The instruction of reading comprehension. Contemporary Educational Psychology, 8(3), 317–344.
61. Perkins, D. N. (1992). Smart schools: From training memories to educating minds. The Free Press.
62. Pressley, M. (2002a). Comprehension instruction: What makes sense now, what might make sense soon. Reading Online, 5(2), 1–14.
63. Pressley, M. (2002b). Metacognition and self-regulated comprehension. In A. E. Farstrup, & S. J. Samuels (Eds.), What research has to say about reading instruction (3rd ed., pp. 291–309). International Reading Association.
64. Raphael, T. E. (1982). Question–Answering strategies for children. The Reading Teacher, 36(2), 186–190.
65. Raphael, T. E. (1984). Teaching learners about sources of information for answering comprehension questions. Journal of Reading, 27(4), 303–311.
66. Raphael, T. E. (1986). Teaching Question–Answer relationships, revisited. The Reading Teacher, 39(6), 516–522.
67. Raphael, T. E., & Au, K. H. (2005). QAR: Enhancing comprehension and test taking across grades and content areas. The Reading Teacher, 59(3), 206–221.
68. Raphael, T. E., George, M., Weber, C. M., & Nies, A. (2009). Approaches to teaching reading comprehension. In S. E. Israel, & G. G. Duffy (Eds.), Handbook of research on reading comprehension (pp. 449–469). Routledge.
69. Rosenblatt, L. M. (1978). The reader, the text, the poem: The transactional theory of the literary work. Southern Illinois University Press.
70. Schurer, T., Opitz, B., & Schubert, T. (2023). Mind wandering during hypertext reading: The impact of hyperlink structure on reading comprehension and attention. Acta Psychologica, 233, 103836.
71. Turner, K. H., Hicks, T., & Zucker, L. (2020). Connected reading: A framework for understanding how adolescents encounter, evaluate, and engage with texts in the digital age. Reading Research Quarterly, 55(2), 291–309.
72. Uçak, G., & Kartal, G. (2023). Scaffolding design to increase reading comprehension for learners of English through online strategy training. E-Learning and Digital Media, 20(4), 402–423.
73. Warschauer, M., & Matuchniak, T. (2010). New technology and digital worlds: Analyzing evidence of equity in access, use, and outcomes. Review of Research in Education, 34(1), 179–225.
74. Wilkinson, I. A. G., & Son, E. H. (2011). A dialogical turn in research on learning and teaching to comprehend. In M. L. Kamil, P. D. Pearson, E. B. Moje, & P. P. Afflerbach (Eds.), Handbook of reading research (Vol. 4, pp. 359–387). Routledge.
75. Wilson, N. S., & Smetana, L. (2011). Questioning as thinking: A metacognitive framework to improve comprehension of expository text. Literacy, 45(2), 84–90.
76. Wineburg, S., & McGrew, S. (2019). Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information. Teachers College Record, 121(11), 1–40.
77. Wu, J.-Y. (2014). Gender differences in online reading engagement, metacognitive strategies, navigation skills, and reading literacy. Journal of Computer Assisted Learning, 30(3), 252–271.
78. Yen, M. H., Wang, C. Y., Chang, W. H., & Huang, S. H. (2018). Assessing metacognitive components in self-regulated reading of science texts in e-based environments. International Journal of Science and Mathematics Education, 16, 797–816.
79. Zenotz, V. (2012). Awareness development for online reading. Language Awareness, 21(1–2), 85–100.
80. Zimmerman, B. J., & Schunk, D. H. (Eds.). (2011). Handbook of self-regulation of learning and performance. Taylor & Francis.
Figure 1. dmQAR.
Table 1. Ten Properties of the Internet *.
Property | Definition
Unlimited scope | Any topic may be addressed
Inaccurate content | There is no guarantee information is accurate
Rapidly changing content | Content can be added, deleted, or changed quickly
Many distractions and choices | Many pictures, ads, hyperlinks pepper content
Very accessible | Does not require tech-savviness; available to many
Requires search | Search terms required to find information
Fast results | Hits are almost instantaneous
The ability to author | Anyone can be an author
Source information is obscured | Authorship may be hidden
Many connections to others | Easy to share and receive information with others
* Adapted from Marsh and Rajaram (2019).
Table 2. Snapshot Applications of the dmQAR Framework in Classroom Contexts.
dmQAR Category | Sample Scenario | Instructional Strategy | Digital Tool/Modality | Targeted Outcome
Right There | Students analyze a webpage about climate change. | Teacher models how to identify the main argument, headline structure, and explicit claims. | Perusall or Kami for annotation; Google Doc for group notes. | Literal comprehension and digital source vetting (e.g., ad placement, hyperlinks, tone).
Think and Search | Students investigate media bias by comparing how three outlets cover the same event. | Students use graphic organizers to trace similarities/differences in headlines, visuals, and sourcing. | Split-screen reading, HyperDocs, or Padlet. | Source triangulation; synthesis across texts and modes.
Author and Me | Students evaluate an opinion blog about school surveillance. | Students question the author’s motive, sponsorship, and tone using lateral reading techniques. | FactCheck.org, Google Scholar, or Snopes for corroboration. | Critical evaluation of purpose, bias, and ideological stance.
On My Own | Students read about AI in education and reflect on their digital habits. | After reading, students journal about how their own assumptions or experiences shaped interpretation. | Reflection journals (digital or handwritten); class discussion. | Epistemic metacognition; personal schema activation; perspective-taking.