Article

Understanding Multimodal Assessment Practices in Higher Education to Improve Equity

Department of Education Specialties, St. John’s University, Queens, NY 11439, USA
*
Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(11), 1523; https://doi.org/10.3390/educsci15111523
Submission received: 23 September 2025 / Revised: 31 October 2025 / Accepted: 7 November 2025 / Published: 11 November 2025

Abstract

Multimodal assignments can improve student engagement, offer different paths for meaning-making or accessing information, and shift the top-down power structures often inherent in traditional classrooms. However, despite the growing interest in including multimodal assignments in courses, many instructors still struggle with how to assess them and may continue to privilege print-based modes. This qualitative study, framed through social semiotics, aimed to explore why instructors in higher education include multimodal assignments and their corresponding assessments in their teaching and how they design and implement the assessments. Based on the analysis of focus group data, findings indicate that although there is a shared purpose of creating multimodal assignments and assessments to reinvent instructional practices to be more inclusive and equitable, instructors operate in a continuous process of rethinking and refining what counts as “good” multimodal assessment. Furthermore, the implications for professional development programs emphasizing equity-centered pedagogies are discussed.

1. Introduction

As educators continue to move beyond traditional, print-based assignments and assessments that may be misaligned with the funds of knowledge of academically marginalized students, there has been a growing recognition of the necessity to create spaces where learners can show understanding through multimodal design (Boivin & Cohen Miller, 2022; Watts-Taffe, 2022). Multimodal assignments intentionally integrate multiple modes (e.g., text, video, color, image, sound) in ways that are often more culturally relevant and inclusive of students’ varied literacy practices (Stewart, 2023). In the classroom, these texts are often inherently collaborative and can challenge students to critically consider places and mobilities in terms of their content, representation, and audience (Jiang et al., 2020, p. 292).
Multimodal assessments are essential for moving towards more equitable and inclusive teaching and learning in higher education, as they foster epistemic and social justice by inviting diverse meaning-making practices (Cook-Sather et al., 2025; Newfield et al., 2003). Because multimodal assignments carry such strong potential for culturally relevant pedagogies and can engender student engagement (Stewart, 2023; Jiang et al., 2020; Tan et al., 2020), the pedagogically purposeful assessment of them is essential. However, many instructors may not know how to assess their robust authoring potentials, may focus solely on print-based features, or may worry about the technology and tools involved (Curwood, 2012; Nichols & Johnston, 2020; Ross et al., 2020). As a result, some may shy away from including multimodal assignments altogether and limit the ways in which students can express their perspectives within a course. Understanding best practices in creating multimodal assessments can therefore engender opportunities for educators to feel more comfortable with including multimodal assignments and provide clearer expectations for students as they navigate potentially new ways of authoring in the classroom.
The present study is framed through social semiotics, which attends to how meaning is made through texts situated in social and cultural environments (Bezemer & Kress, 2008; Kress, 2010). From this perspective, meaning is inseparable from the historical, cultural, and institutional contexts in which a text is created (Kress, 2010; van Leeuwen, 2004) and cannot be represented or recreated out of context again (Serafini, 2014). Social semiotics foregrounds the process of drawing from culturally constructed inventories of semiotic schemas (i.e., resources for meaning-making) to create meaning (O’Halloran, 2013). Therefore, communication and learning materialize through the choice of semiotic modes (e.g., text, pictures, gestures, music, sounds, layout, typography) that can carry diverse meanings. By discerning these meanings and the authoring potentials for an assignment, instructors can better understand how to assess them.
From a social semiotics perspective, multimodality conceptualizes meaning as designed and socially negotiated through diverse semiotic resources (O’Halloran, 2013; Jewitt, 2008, 2009, 2013; Zammit, 2015) and rests on four interwoven theoretical assumptions: (a) language is part of a multimodal ensemble (i.e., even though language is an integral part of meaning making, meaning is made through more than just language, through the ways in which modes are used, connected, and arranged); (b) each mode within the multimodal ensemble is based on context and situated therein (for learning, this means that different modes might affect the learner differently); (c) people make meaning through the way they use and organize the modes (and one may attune to some modes more than others); and (d) meanings of signs are socially created and particular to that situation (Jewitt, 2009; Bezemer & Cowan, 2020). Within social semiotics, signs are the principal unit of meaning representation (Jewitt, 2013). For example, a sign can be a photograph, written language, a drawing, a symbol, a letter, a street sign, etc.
Social semioticians also argue that modes offer various affordances and limitations, and these are often specific to the mode, audience, and task (Kress, 2010; Serafini, 2014). Readers of a text make meaning not just through individual modes, but through the interplay and orchestration of those modes (Halverson, 2009; Kress, 2010; Kress & van Leeuwen, 2006). Readers interpret these multimodal ensembles by attending to how linguistic, visual, spatial, and aural resources interact to shape meaning potentials. These meanings extend beyond the sum of their parts; rather, each mode’s meaning potential is transformed through its relationship to other modes and the specific social and cultural context in which it appears (Lemke, 1998; Mills, 2010). Thus, assessing these modal ensembles can be much more complicated than traditional print-based assignments (e.g., essays, research papers) to which many students and teachers are accustomed.
Despite many studies emphasizing the unique affordances of meaning-making across modes and genres, particularly in the context of culturally responsive and critical pedagogies, there remains a disconnect between multimodal assignments and assessment design (Tan et al., 2020). A review of the literature on multimodal assessment revealed the need for reframing multimodal assessments to appropriately address the very concept of multimodality (Kress, 2009). Cartner and Hallas (2020) argue that various elements of multimodal compositions, such as selection, placement, and layout of information, the use of cross-referencing through hyperlinks, and the use of colors, to name a few, require a different approach to assessment than can be evaluated with criteria that originated in traditional print forms. However, a comparative analysis of assessment instruments used to evaluate multimodal compositions conducted by Tan et al. (2020) indicated a lack of interdependence and cohesion between linguistic and other modes in the rubrics. Although many of the rubrics included expectations for elements of multimodal design, such as designs for medium (i.e., the ways that elements are chosen and presented to increase comprehension), they could also be print-centric (e.g., spelling and grammar, story format, title page). Moreover, the very notion of assessing meaning-making using media other than traditional print is currently incongruent with unimodal standardized assessments in grade school and conventional notions of academic rigor in higher education (Stewart, 2023).
There is also a disjunction between teachers’ willingness to experiment with multimodal composition assignments in order to increase student engagement and teacher and student metasemiotic knowledge, which involves an understanding of how meaning is created through language and other semiotic systems (Aagaard & Lund, 2013; Unsworth, 2014). The effectiveness of multimodal pedagogies relies on the use of metalanguage that reflects and adds to metasemiotic knowledge in the process of designing and evaluating multimodal compositions (Lim et al., 2022; New London Group, 1996; Svärdemo Åberg & Åkerfeldt, 2017). The metalanguage of multimodality is a shared language that would allow all parties to co-design and engage in collaborative meaning-making (Unsworth, 2014). Kress and van Leeuwen (2006) emphasize that metalanguage is essential for analyzing and discussing how meaning is made across modes in three areas: representational structures in a multimodal text, that is, how semiotic systems function to convey the nature of events and the participants involved in them; interactive structures, which concern the relationship between the maker of the text and its interpreter; and compositional meanings, which refer to the distribution of information in the text through the interplay of modes. Learning and using the metalanguage to build an understanding of the interpretive possibilities of texts requires substantial time, sustained practice, and ongoing engagement from both instructors and students (Unsworth, 2006).
A critical review of empirical and conceptual works on multimodal assessments (Anderson & Kachorsky, 2019) revealed that one of the recommendations shared among the reviewed studies was for the explicit teaching of metalanguage related to multimodal composition analysis and class discussions around the deconstruction of multimodal texts. According to Nielsen et al. (2020), teachers and students negotiate multimodal projects effectively when they share a language to discuss meaning-making practices that surround the creation of multimodal texts. Likewise, Macken-Horarik et al. (2017) found that a metalanguage allowed participants to structure their conversations around multimodal design choices more critically and meaningfully.
Another issue that permeated the reviewed literature relates to the challenge of constructing a shared vision of what an assessment of multimodal compositions is and how it should be designed. In addition to fixed forms of assessment, empirical studies contain examples of assessments that are flexible and adaptable to the context of teacher-situated practice, the availability of digital resources, and the genre and register negotiated by students (Burnett et al., 2014; Hung et al., 2013; Lawrence & Mathis, 2020). Shipka (2009) and other researchers posit that the process of creating a multimodal composition (i.e., choosing and interacting with modes) should be an integral part of the assessment along with the artifact, with many arguing that assessment based on one single artifact is ineffectual (Deng et al., 2023; Hafner & Ho, 2020). Furthermore, there is also a call to involve students in negotiating what criteria should be included in the evaluation of the process and product (Cartner & Hallas, 2020). Deng et al. (2023) additionally claim that student peers, as well as teachers, should be included in the feedback process.
Finally, rubrics vary considerably in how the multimodal nature of assignments is measured. There are examples of rubrics that measure modalities separately (e.g., sound, movement, perspective, gesture, and language in Wessel-Powell et al. (2016)); in dichotomous combinations (e.g., text and image in Callow (2020)); and in the convergence of modes that create intermodal cohesion or complementarity (Fajardo, 2018), where a combination of modes creates greater meaning than any single mode in isolation. In a review of assessments in the context of teaching multimodal literacies, Tan et al. (2020) argue that the awareness and use of intermodal complementarity is the key to advancing multimodal forms of meaning-making in educational settings. Furthermore, there are current frameworks that offer an analysis of multimodal elements (Burnett et al., 2014; Hung et al., 2013), but these may become unwieldy when instructors also include content material.
With such wide-ranging views of practices and approaches to assessment represented in the current research, instructors may feel overwhelmed when attempting to create evidence-based approaches to multimodal assessment for their courses. A lack of agreement on best practices in multimodal assessments hinders the development of professional programs that promote a seamless and effective integration of multimodality into course design. We hope that this study will provide valuable information for researchers and educators to open spaces for conversations that aim to identify the overarching principles that could guide the creation of multimodal assignments and assessments across academic disciplines.
We designed this study to better understand the strategies that guide the creation and implementation of multimodal assessments in college courses to explore why professors include multimodal assignments and their corresponding assessments in their teaching and how they design and implement them. We sought to add authentic insights from faculty representing different disciplines and diverse teaching styles to the discussion of the affordances and challenges of using multimodal assessments in a range of instructional settings.

2. Materials and Methods

We explored professors’ decision-making processes when creating, assessing, and conceptualizing multimodal assignments in their courses, as well as their reflections on their successes and needs around multimodal assessments. The study took place at a large, private American university where the authors and the participants are employed. Using purposeful sampling, we recruited participants through an email invitation sent to faculty by the third author, who, working at the Distance Education Office, utilized his professional network to identify faculty members with substantial experience in online pedagogies, particularly multimodal projects. The participants were five tenured university professors from three different colleges within the university. One of the participants worked in the same college as one of the researchers. We intentionally tried to avoid recruiting colleagues from our own colleges to reduce potential biases and utilized our networks to invite professors teaching in Business, Liberal Arts and Sciences, and Computer and Information Technology fields. See Table 1 for the demographics.
The five faculty members were invited to participate in focus group discussions about their philosophies and practices of multimodal assessment and to share their assessment instruments with the researchers. However, only one participant volunteered to share a rubric and an assignment description for a multimodal project. Due to the limited participation, these materials were not included in the analysis.
We conducted two virtual focus group meetings with the professors to better understand the multimodal assignments and assessments within their courses and their procedures and objectives for creating and implementing those assessments. Focus groups provided us with “the advantage of getting reactions from a relatively wide range of participants in a relatively short time” (Morgan, 1996, p. 134). They also allowed participants to collectively consider and discuss the complex social behaviors, semiotics, and pedagogical motivations behind creating multimodal assessments (Morgan & Krueger, 1993; Morgan, 1996).
We structured focus group meetings around a series of questions that prompted individual reflection and group discussion, opening space for comments and conversations as they arose. We designed the open-ended questions to elicit reflections on assumptions and beliefs related to multimodal assessments (e.g., “What does a ‘good’ multimodal assessment look like?”) and also explored the plethora of pedagogical decisions surrounding the creation and implementation of multimodal assessments (e.g., “Can you walk me through your thinking process when creating multimodal assessments?”). Because not all professors had the same metalanguage around multimodality, we often clarified terms or asked the participants to define their version of terms as they pertained to their disciplines. See Appendix A for the complete list of questions.
After the first focus group meeting, we conducted an additional focus group meeting to gain deeper insight into the participants’ challenges and successes in aligning their multimodal assignments with multimodal assessments. The second meeting gave us the opportunity to see the professors’ experiences in the context of specific assignments and courses that they designed and taught (e.g., “Why do you include a multimodal project in your course and what do you expect the students to gain through a multimodal assessment in this project?”). The focus group questions are included in Appendix B.
The small number of participants in our sample allowed for in-depth observations of their interactions, which were captured through memoing, particularly those exchanges that highlighted both shared and divergent experiences and perspectives. To further probe the professors’ interpretations of experiences with multimodal assessments, the focus group format enabled us to invite participants to compare practices and viewpoints among themselves (Morgan, 1996) (e.g., “Are we on the same page with those definitions [of multimodal assessments], do we feel comfortable?”, “Does anyone else feel that way or differently?”). At the same time, the method also provided a unique venue for the professors to articulate how they exercised teacher autonomy in their classrooms in relation to multimodal assessments by reflecting on their perceived freedom to make instructional decisions (Vangrieken et al., 2017) (e.g., “Do you feel you have academic freedom to develop your own assessment strategies? Are there pedagogical principles that you choose when you create assessments?”). Giving professors the space to reflect on their decision-making processes during focus group interviews enabled the participants to reveal the uniqueness of their pedagogical practices. This method was beneficial in eliciting responses that highlighted the diversity of the participants’ professional backgrounds and offered a rich range of perspectives and insights.

3. Data Analysis

We analyzed focus group data through axial coding (Saldaña, 2021), approaching the data from a social semiotic perspective that attends to how meaning is constructed through language, representation, and context (Bezemer & Kress, 2008; Jewitt et al., 2016). We began the axial coding as a group to ensure consensus. By basing axial codes on what we identified as the most contentious components in the extant literature on multimodal assignments and assessments (see Table 2), we were able to understand how the instructors’ practices reflected current perspectives from the outset of our data analysis. Beginning with axial codes enabled us to relate codes and develop categories in a structured way, thereby enhancing analytic rigor (Scott & Medaugh, 2017).
After ensuring interrater reliability through coding comparisons among all three authors across the data, we began subsequent second and third rounds of coding where we collectively developed additional codes based on thematic analysis for contextualizing the components within the participants’ pedagogical practice. For example, when analyzing the professors’ responses regarding a walkthrough of one of their multimodal assignments/assessments, thematic codes included Challenges [with the production/implementation of the assignment for assessment] and Individual Approaches [to teach and assess multimodal projects]. In comparing these codes to the current literature, we were able to assess how these instructors included (if at all) the recommended best practices from the institutional discourse surrounding multimodal assessment research. These analytic steps ensured that our categories were both grounded in the data and theoretically informed, increasing the trustworthiness of our interpretations.

4. Results

The primary purpose of this study was to perform a comparative analysis of university professors’ use of multimodal assessments in their teaching practices with the current literature on multimodal assessments using a social semiotic lens (Bezemer & Kress, 2008; Kress, 2010) to understand how professors conceptualize and enact multimodal assessment practices in higher education. From this perspective, multimodal assessments are not only pedagogical tools but socially situated meaning-making practices shaped by cultural, institutional, and disciplinary contexts (Kress, 2010; van Leeuwen, 2004). Each instructor’s approach reflects their negotiation of what counts as legitimate modes of representation, what affordances are valued, and how meaning is communicated and recognized within specific learning environments. Findings indicate that although there is a shared purpose of creating multimodal assignments and assessments to reinvent instructional practices to be more inclusive and equitable, professors operate in a continuous process of iteratively rethinking and refining what counts as “good” multimodal assessment based on students’ meaning-making practices and the semiotic resources available to them.
In the following subsections, we discuss our findings in the context of the predominant recommendations in the extant literature, focusing on how each instructor viewed the semiotic resources within their social contexts. We found that the participants considered both the semiotic process and the product when they evaluated multimodal assignments to support inclusion practices. The professors also opened social spaces for collaboration with students to co-create assessments for evaluating multimodal projects, fostering equity by empowering students in the learning process. There was no consensus among the professors on the use of specific forms of assessment (e.g., a rubric) or common design principles underlying multimodal assessments (e.g., focusing on affordances of multimodality, use of metalanguage). The professors also expressed the need for professional development in creating multimodal assessments.

4.1. Process and Product

Echoing the literature (Deng et al., 2023; Hafner & Ho, 2020; Shipka, 2009; Stewart, 2023), some professors discussed focusing on both the process and the final product of the multimodal project in the context of assessment, reflecting that meaning is contextually shaped. For example, in one project, Participant T, a professor in the Liberal Arts Department, explained that students wrote “a process letter… for the project that they can use, if they choose to revise before they put into their final portfolio”. She added, “After the projects are complete, we designed a rubric collaboratively as a class”. These process letters show that the students were attuned to how their semiotic choices evolved as the project did, unique to the context of this particular course.
Participant C, a Liberal Arts professor, emphasized the importance of observing the unveiling of the students’ learning goals in the process of project development to facilitate the assessment of the final product. C explained that she discusses the process with students to interpret how and why they engage multimodality to elicit desired responses from their audience as they develop their projects. That information is then used to “assess their work when it is done”. Observing the students’ rhetorical choices reveals their multimodal design decisions and the interaction between the modes (text, image, sound) and the intended audience.
Participant C also discussed the value of having students start working in groups early in the semester and continue to work in the same groups throughout the semester to fully engage in project development. As several researchers suggest (Miller, 2008; Shanahan, 2013; Stewart, 2023), C focused on the learning goals rather than the digital tools through which they were mediated. Participant S, also in Liberal Arts, who, along with Participant T, uses contract grading (Inoue, 2023), added that reviewing learning goals throughout a multimodal project has particular value in fostering self-regulation in students:
…[I] just ask really simple questions all semester, which is, what are you trying to do with this piece that you made, whether you think it is working, and how could it work better? It is just so simple… and I hope that by repeating it all semester, people might ask themselves that, you know, there might be some transfer of that set of questions and, you know, kind of a writing process sensibility.

This reflective process exemplifies an equity-oriented approach to assessment, grounded in social semiotics, that redistributes authority over the evaluation by inviting students to critically interpret their own semiotic decisions and recognize the value of diverse representational forms.
Another way our participants reflected on assessing multimodal projects was by focusing exclusively on the final product or by choosing a method suited to the unfolding instructional context. For Participant J, a professor within the Business School, modeling expectations was important: “I demonstrate what I want them to do. I create narrated videos and ask them to create something similar. It is important to demonstrate to students my expectations”. This approach parallels modeling the construction of a successful digital artifact using a student planning checklist described in Jones et al.’s (2020) work. J admitted that finding a fitting assessment model for his instructional purposes was still a work in progress. J’s difficulties in identifying generic forms of assessment reflect the complexity of the semiotic negotiation between an author’s intent in creating a multimodal ensemble, such as designing a school project, and the audience’s interpretation, as described by Kress (2015). For instructors, assessing a student’s multimodal demonstration of learning involves interpreting how intended meanings are realized across modes, attending to the interplay and complementarity among these semiotic resources within the specific context of the discipline and classroom community.

4.2. Including Students

When asked about pedagogical principles that the participants applied while creating multimodal assessments, some responses mirrored the literature heralding the importance of collaboration between professors and students to push back on the notion of assessment as a top-down process (Deng et al., 2023). Participant T mentioned nine times that she included students in the process of creating the assessment, signifying both the importance of collaboration to her assessment practice and her view that assessment language is its own social semiotic system built on power, culture, and academic norms. T expressed explicitly that “…one thing that I really value is the ability to collaborate [in the assessment process] with students”. T also shared the value of making the process inclusive: “Designing it together really makes sure that the rubric is inclusive of everyone’s project. No matter what”. In doing so, the students are active designers of assessment norms and language, aligning with the social semiotic emphasis on agency and shared authorship in communication (Bezemer & Kress, 2008).
Participant C also spoke about her approach to involving students in the assessment process. While she does not collaborate with students in creating rubrics, she involves them in an open discussion of grading for the final assessment. C further explained that there was rarely any disagreement around grading because before the final assessment meeting, she holds multiple small group conferences where she and the students collaboratively evaluate and discuss progress towards the final project. C emphasized that instead of focusing on the punitive aspects of the final assessment, she sees it as an opportunity for an ongoing conversation with students throughout the learning process: “It is progress that happens and they’re always aware if the progress is not happening because there are reasons why it is not happening”. Even when instructors, like Participant C, did not cocreate rubrics, they facilitated dialogic assessment conversations, where students’ interpretations of progress and purpose became part of the meaning-making process. This iterative feedback cycle echoes what Serafini (2014) calls the situated interpretation of multimodal texts, where meaning is continually renegotiated through social interaction. Furthermore, as research suggests, the inclusion of peer feedback alongside the instructor’s may continually encourage students to critically reflect on their own multimodal compositions (Burnett et al., 2014; Deng et al., 2023; Hafner & Ho, 2020).
Though Participant J does not collaborate with students in creating assessments, he emphasized peer evaluation as a semiotic act of meaning-making within a multilayered assessment process. Asking students to submit presentations and evaluate their classmates’ presentations using rubrics engages them in interpreting and creating meaning across multiple communicative modes. From a social semiotic perspective, this process not only decentralizes evaluative authority but also fosters semiotic awareness, as students learn to articulate and justify the criteria by which meaning is made and valued in their discipline (Kress, 2010; Bezemer & Kress, 2008). Both the presentation and the peer review are accorded power through multimodal negotiation and through being graded, thus validating the importance of the peer evaluation component in the assessment process (Quesada et al., 2019; Stewart, 2023).

4.3. Modal Affordances and Cohesion

To understand how our participants conceptualized multimodal assessments, we examined the criteria they used to evaluate the students’ meaning-making through the affordances of modal ensembles. From a social semiotic perspective, such assessment involves interpreting how learners purposefully select and orchestrate semiotic resources (e.g., visual, linguistic, spatial, gestural, and aural) to convey ideas within specific social, cultural, and disciplinary contexts (Bezemer & Cowan, 2020; Kress, 2010; van Leeuwen, 2004). Participant E, an instructor of video game design in Computer Science and IT, highlighted the complexity of this work, noting that the “level of competency and sensibility can widely vary” for multimodal projects among students. Therefore, he must assess “…how well the student considered the form of expression in accomplishing the messaging or their stated goal for the project; how the student has… synthesized the materials to come up with something original”. Though there are many predesigned rubrics and frameworks like those discussed in the introduction of this manuscript, these fail to capture the essential tenets of E’s multimodal projects: the interplay of semiotic form, messaging, and originality. Furthermore, as is the case with all interactive multimedia, one must also consider the semiotics for the user, or the multimodal interaction (Bolt, 1980; Zagalo, 2019). Holdren’s (2012) use of multifaceted rubrics to assess both process and product may provide a flexible model, but Participant E’s approach underscores the broader semiotic challenge of assessing originality and coherence across modes within specific disciplines.
Participant C noted that the seemingly endless semiotic choices available to students created certain constraints in multimodal assignment completion and assessment. She observed that some students’ projects lacked depth and critical thinking due to the “hyper-layering” effect created by an excessive, ineffective use of modes. Like E, she admitted that helping students compose effectively across modes to achieve rhetorical goals and build complex, cohesive content was an ongoing struggle, one also reflected in the literature. Hung et al. (2013) provide a rubric that outlines criteria for the meaningful inclusion of the five design modes in a multimodal presentation, using cohesion as the core criterion to evaluate the effectiveness of the interplay of modes. However, like every rubric we reviewed, it would need to be adapted to specific projects. It could also be supplemented with other forms of assessment that reflect the instructor’s pedagogical beliefs, such as involving students in the assessment process by negotiating how their multimodal project meets the assessment criteria during a teacher–student interview (Godhe, 2013).
Another challenge brought about by the complex interplay of modes in design and content was fairly assessing the multimodal work of students who varied in their levels of digital competency. Participant C strove to evaluate the multimodal component of their projects based on the students’ effort, their level of engagement, and the reasoning they employed when working with the various media, rather than on their digital expertise (Fjørtoft, 2020). Because of all these factors, she acknowledged that assessing multimodal projects is a work in progress: “I go through the process of learning how to evaluate and assess their work”. This approach aligns with contemporary social semiotic thinking, which views learners as designers of meaning who mobilize modal resources differentially based on their access, experience, and purpose (Cope & Kalantzis, 2020; Gill & Stewart, 2024). C’s acknowledgment that she continues to learn how to evaluate student work reflects the inherently dynamic and situated nature of multimodal assessment, where meaning-making practices evolve alongside the changing technological and sociocultural landscapes that shape how students compose.

8. Using Metalanguage

Much of the multimodal assessment research calls for including a metalanguage to discuss both the design and assessment process (Anderson & Kachorsky, 2019; Lim et al., 2022; New London Group, 1996; Nielsen et al., 2020; Macken-Horarik et al., 2017; Svärdemo Åberg & Åkerfeldt, 2017). Our participants generally discussed metalanguage in the context of including students in the assessment process. Participant T discussed a type of metalanguage for students when co-creating a rubric for assessment where students meet in groups to determine on what criteria the rubric should be based:
I am hoping that we have a sort of common vocabulary about what we’re expecting. So, what I do is, I break students up into groups of, like, 4 or 5, and have each group come up with some criteria that they think is specific enough and also that, like, we’re all speaking the same language and then they write those on the board. Each group goes up and then we look at them all together and we’re like, well, what do we have questions about? What does this mean? And we do, like, a lot of clarifying and then we vote on what makes sense.
Though T did not directly reference an exclusively multimodal metalanguage, she discussed multimodal elements that would be expected to be included in the assignment with the students, ensuring that they could comprehensively discuss their work throughout the creation and assessment process (Macken-Horarik et al., 2017).
Not engaging with a metalanguage, on the other hand, may impede the creative use of multiple modes in a project. Participant C explained that her class started their projects by researching their ideas without considering the semiotics. When students advanced to the next stage of the project, they struggled to consider how to orchestrate a modal ensemble beyond traditional textual modes. C notes that students “really do not know how to think”, and that breaks the continuity of the project. Engaging with a metalanguage of semiotics and multimodality at the project’s inception may help students better express their ideas in meaningful ways across different modes (Lim et al., 2022). Alternatively, Participant J discussed using rubrics to provide students with his expectations. Though students clearly understood how they were being graded on traditional multiple-choice exams, he thought that students needed more details and instructions with multimodal assessments, a role that a metalanguage could have fulfilled. Without explicit dialogue about modal affordances and purposes, power over assessment remains with the instructor, limiting the students’ opportunities to articulate and negotiate the semiotic value of their own designs.

9. Challenges with Multimodal Assessment

Many participants expressed that perhaps rubrics were fundamentally at odds with multimodal assessment, underscoring a broader institutional tension between standardization and situated meaning-making (Newfield et al., 2003). For example, Participant E expressed that traditional rubrics may not capture the wide range of projects that his students create, echoing Reed’s (2008) argument that the inflexible, narrowly specified learning outcomes typical of print-based assessments cannot account for an author’s intent as manifested through contextualized semiotic choices within a multimodal project. Additionally, Participant T, who frequently emphasized the importance of the process over the final product, lamented the students’ focus on their final grades: “The numbers are not really that important; that is something that they value more than I do because I use contract grading, but I also give detailed comments about each criterion”.
An example of using traditional assessments to mediate the balance between keeping the process of assessment student-centered while setting clear expectations for the product comes from Cheung’s (2023) study of composing multimodal blogs. The researcher used scales for the process and product, leading to significant improvement in equality and collaboration and more effective multimodal writing design. However, the tension between the professors’ perspectives and their students’ needs highlights the importance of developing innovative assessment instruments that are flexible and equitable but can still provide students with clear criteria for negotiating meaning-making with their professors.
Participant T expressed interest in exploring other types of assessments, but students consistently wrote on her evaluations that they enjoyed creating a rubric together. Though she wanted to offer more novel approaches to assessment, the students took comfort in the familiarity of the rubric systems to which they were accustomed (Anderson & Kachorsky, 2019) and enjoyed the autonomy of co-creating the rubric (Smith et al., 2024):
I definitely am interested in exploring other ways [of assessment but I’ve] gotten like, weirdly, on all of my course evaluations designing the rubrics together, which does not seem like that big of a deal is something that I am constantly getting really positive feedback from students on… Um, it is definitely something I would like to think more critically through.
From a social semiotic perspective, rubrics function as semiotic artifacts of institutional power, denoting assumptions about what counts as knowledge and how it should be represented. Participant T’s use of contract grading and co-created rubrics reflects a move toward redistributing semiotic authority, allowing students to participate in defining evaluative norms.
Participant S shared that she discontinued a multimodal assessment practice she referred to as “collective assessment” once she recognized how time-consuming it was. S created the practice with the intent to emphasize student agency by commenting on individual projects and synthesizing common themes across students’ work in class videos. Despite her initial enthusiasm for the strategy, S concluded that its impact did not justify the considerable time required to record new videos each semester, suggesting a tension between pedagogical commitments to more humanizing assessment and institutional expectations for efficiency in online learning environments (Freeman & Dobbins, 2013; Midgette & Stewart, 2025). Participant C believed that assessing multimodal projects in her discipline was very similar to assessing unimodal assignments. In both formats, she used a set of evaluative criteria that she applied holistically (e.g., engagement, effort) rather than relying on a rubric. Another strategy that she used to set expectations was to model all parts of the project for her students, “using myself to an extent as the rubric”. However, C expressed interest in learning how to create assessments designed specifically for multimodal assignments. C stated that such professional development would help faculty to “…move forward to learning a lot about what we can do because there’s so much that we could do. And I think I am just at the beginning”.
Findings also revealed that none of our participants had formal training in creating and evaluating multimodal assignments. They each had different learning experiences with multimodal teaching. Participant T became interested in including multimodal texts and assignments while working on her dissertation. Participant S learned about these practices from colleagues and conferences. Participant C was self-educated by reading publications. Participant E adopted his practices from mentors who excelled in pedagogy and discussed the idea of just “winging it” when it came to creating his multimodal assessments. He stated, “[I was] just trying to figure it out, trying to recalibrate our internal compass, going with what we think works well in the classroom”. His feelings of bewilderment may reflect the lack of consensus in the field about best practices and the seemingly daunting task of assessing not only content but also the intricate design principles and creativity that stand in tension with the institutional focus on measurable outcomes.

10. Discussion

Although standardized, print-based forms of assessment continue to dominate higher education (Ross et al., 2020), perpetuating epistemic and social inequities by limiting opportunities for diverse students to exercise agency and draw upon their cultural and experiential knowledge, university professors are increasingly experimenting with multimodal approaches that foreground design, representation, and original meaning-making. In this study, we examined how professors across disciplines conceptualize, design, and evaluate multimodal assessments as part of their efforts to challenge the institutional norms that privilege traditional, text-centric measures of academic success. Current research envisions multimodal assessments as a framework that redefines what ways of knowing are recognized and valued, and how knowledge can be demonstrated through more inclusive and equitable practices that foster the representation of cultural identities and perspectives and collaborative meaning-making (Anderson & Kachorsky, 2019). Through a social semiotics lens, this study contributes an authentic account of professors’ reflections on their efforts to disrupt the dominance of traditional institutional systems of evaluation by juxtaposing multimodal assessment practices that are contextualized to be equitable and inclusive of all learners in the classroom.
Our findings show that there is a shared purpose among the participating faculty to push back against the privileging of traditional academic epistemologies by implementing assessment practices that encourage students to demonstrate diverse ways of knowing, befitting of the multimodal assignments in their courses. The professors intentionally leveraged multimodality to amplify student voices and prioritize student creative choice by establishing co-created learning environments, a core element of inclusive and equitable classrooms (Midgette & Stewart, 2025). Aligned with the assertions of Wyatt-Smith and Kimber (2009), most participants made an effort to include students in the assessment process by collectively developing learning objectives and descriptors before, during, and after a multimodal production or engaging students in peer evaluation, thus challenging traditional top-down power structures in the classroom. The participants contextualized the collaborative approach in their belief that involving students in the process of assessment gave the students greater freedom to make choices in creating knowledge representations, reflect on how these choices affect the product, and have a say in how artifacts (their own and/or of their peers) should be evaluated.
Other multimodal assessment practices that critically and epistemologically opposed standardized assessment environments included participants’ efforts to create low-stakes but high-agency spaces for the formative evaluation of students’ work, typically structured as group discussions. The professors emphasized the importance of paying close attention to the students’ meaning-making processes, their evolving understanding of their broader authoring potential, and the alignment of works-in-progress with specified learning outcomes. Moving away from traditional assessments toward student-centered approaches to practicing multimodality, as illustrated in the work of our participants, carries significant implications for social justice. By promoting student agency in evaluating multimodal projects from conceptualization to production, instructors are “giving voice to the broader context of meaning-making including community/self/family/culture/nation and beyond” (Newfield et al., 2003, p. 79), thus breaking systemic barriers for students from marginalized backgrounds.
At the same time, the findings showed a lack of deliberate focus on other criteria identified in the literature, namely, on how modes work together within assignments and on the need for a metalanguage to create and discuss evaluative criteria. Furthermore, we were unable to establish a consistent approach to designing and implementing multimodal assessments in our participants’ teaching. We account for the disconnect between our findings and the existing literature by acknowledging that the participants began using multimodal assessments informally and may not be fully familiar with the recommended practices. The participant faculty’s propensity to anchor assessment practices in professional experience rather than in current research highlights the need for professional development programs focused on comprehensive instructional designs for evaluating multimodal projects (Haßler et al., 2016; Sung et al., 2016). Institutes of higher education should provide opportunities for faculty to receive professional development in adopting authentic, meaningful assessment practices based on exploring various modes of representation (Fjørtoft, 2020). Our participant faculty proposed that such professional development programs could serve as a network for ongoing professional support and a platform for interdisciplinary discussions and collaborations.
It is also important to note that there is some ambiguity around how our participants perceive the design and use of multimodal classroom assessments. This uncertainty highlights the need for further research and discussion to determine which design principles should be emphasized in professional development programs. What we learned from our participants is that there is no one way to create and implement multimodal assessments. While a review by Anderson and Kachorsky (2019) suggests that rubrics are commonly used as the primary instrument in multimodal assessments, our study found that some professors also advocated for innovative assessment tools designed to reflect social semiotic practices within the context of their academic disciplines (Bezemer & Cowan, 2020). For instance, professors used progress checks that were aligned with learning objectives instead of rubrics and evaluated both the process and product during regular small-group interviews with the students. Exploring innovative assessment designs is essential to provide faculty with relevant evidence to design assessment practices that effectively capture student learning in multimodal projects. Professors can leverage this knowledge to approach assessment needs on a case-by-case basis.
Another aspect of professional development that would undoubtedly be valued by our participants in this study relates to metalanguage. The professors shared that they felt limited in the way that they could discuss design elements with students because they did not have formal training in metalanguage. Designing, discussing, and re-inventing assessment criteria in multimodal assignments is dependent on the knowledge of the metalanguage of all participating parties (Jones et al., 2020; Unsworth, 2006). Metalanguage empowers professors and students to have the appropriate vocabulary to be able to adequately and appropriately understand and leverage the affordances of various semiotic resources (modes) and the interplay among them in an artifact (Hammond & Gibbons, 2005; Lim et al., 2022). By having that metalanguage, students can practice more agency in designing and implementing multimodal assessments. This agency is essential for disrupting the dominance of print literacies in assessment practices. Furthermore, a shared understanding of the metalanguage can create collaborative spaces where educators exchange professional experiences related to multimodal assignments and assessments more effectively and continue to explore the affordances of multimodal assessment across academic disciplines (Jewitt, 2008; Unsworth, 2006), as was the intent in this study.
While our findings offer robust implications for multimodal assessment practices, it is important to acknowledge the limitations of our study. Notably, we only offer the perspectives of five tenured professors representing three colleges at one university. Though we invited many professors to participate, the responsibilities of the professors meant that few had the time to participate in a voluntary study. However, the small number of participants in this study enabled deep engagement with the data on the instructors’ pedagogical practices and allowed for an in-depth exploration of both convergent and divergent perspectives, as reflected in the participants’ interactions during focus group discussions (Morgan, 1996). Furthermore, we only represented the experiences and ideas of these few people. The shared context within the same university and some of the professors’ long-standing collegial relationships may have led to commonalities in pedagogical approaches that could limit the range of perspectives. Additional studies may wish to explore larger and more diverse subsets of instructors representing a range of regions, institutional types, and academic ranks to provide a broader and more representative perspective.
Another limitation of our study is the potential bias that we, as researchers, may have imbued into the focus groups as fellow professors and university employees. Coming from the field of literacy education and being a tenure-track faculty, the first and the second author of this manuscript had perspectives and strong opinions about multimodal assessments based on their experiences in creating multimodal projects and assessment, which differed from those of the other participants. This positionality could have influenced how we framed the questions and conducted the focus groups. To address this, we made a conscious effort to facilitate rather than direct the dialogue in the focus groups, providing ample time for the participants to share their experiences and thoughts. We encouraged a candid conversation by inviting the professors to honestly discuss both their successes and what they considered grey areas in creating and implementing assessments. The professors were prompted to elaborate on their responses or add information not covered in the interview questions. Notably, the professors provided critical reflections on their practices by sharing their uncertainties about developing and implementing multimodal assessments, highlighting their willingness to adopt a critical perspective during the focus groups.

11. Conclusions

In our study, we found that university professors had some experience using multimodal assessment practices in their classrooms and a desire to further develop this aspect of their teaching. However, their understanding of the relationship between multimodal projects and the corresponding assessments was largely intuitive and/or based on their prior knowledge of print-centric forms of assessment. There is a pressing need for professional development in practicing multiliteracies, with a special emphasis on metalanguage and social semiotics. In future studies, it is particularly important to explore the types of training most suitable for educators in various educational contexts, including different disciplines and instructional delivery formats, such as online, hybrid, or asynchronous courses.
By giving voice to individual professors, our findings illuminate a quintessential issue: there is not yet consensus on what constitutes effective pedagogies for creating, implementing, and evaluating multimodal projects that instructors could use to connect theory and practice (Tour & Barnes, 2022). Although there are numerous assessment techniques discussed in the literature, there is a dearth of theoretical knowledge that explains the relationship between multimodal design in all of its stages and its evaluation. The absence of a shared understanding of effective pedagogies for multimodal assessment has led to inconsistent practices across disciplines. Without clear frameworks, faculty often rely on intuition rather than evidence-based strategies, limiting the pedagogical potential of multimodal work and hindering its broader adoption. Future research should develop empirically grounded frameworks that define effective multimodal pedagogy and guide faculty development, ensuring greater consistency in implementation and evaluation.
Although achieving full consensus may be challenging due to a diversity of semiotic practices across academic disciplines, clarifying core principles remains essential for fostering equitable and coherent teaching practices. Establishing a shared metalanguage and evidence base would strengthen faculty collaboration and advance a more rigorous and inclusive understanding of multimodal assessment practices in higher education.
This study delved into the philosophies and practices used by interdisciplinary university professors for multimodal teaching and evaluation. It specifically focused on what professors consider a multimodal assessment, why and how they created it, and how they incorporated it into their teaching. The participants saw multimodal projects as a collaborative and process-oriented activity that involves both instructors and students (Collier & Kendrick, 2016) who contextually orchestrate modes based on their access, knowledge, and purpose (Cope & Kalantzis, 2020; Gill & Stewart, 2024). They also emphasized the importance of involving students in the evaluation process to collectively determine the semiotic value of their designs for the success of multimodal projects. To further enhance student-centered pedagogies in multimodal contexts, it is crucial for researchers to conduct studies aimed at understanding students’ experiences with assessment processes, particularly in how their lived experiences are represented/underrepresented in the construction of multimodal projects and assessments (Lawrence & Mathis, 2020). Considering the vital role of student perspectives in disrupting instructional practices that privilege dominant cultures and languages, it is essential for students to have an equal voice in discussions about how students and instructors can make multimodal assessment practices more inclusive and equitable.

Author Contributions

Conceptualization, E.M., O.G.S. and I.A.; methodology, O.G.S.; validation, E.M. and O.G.S.; formal analysis, E.M., O.G.S. and I.A.; investigation, E.M., O.G.S. and I.A.; writing–original draft preparation, E.M.; writing–review and editing, E.M., O.G.S. and I.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of St. John’s University (IRB approval number: FY2024-30, 28 September 2024) for studies involving humans.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

To protect participant confidentiality, only de-identified and summarized data will be made available upon reasonable request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. First Focus Group Protocol

Focus groups were conducted with up to four participants per session and lasted approximately one hour. The purpose of these discussions was to explore the faculty members’ use of multimodal assignments in their courses. Specifically, the sessions aimed to gain insight into the types of multimodal assignments being implemented, their design and structure, and the approaches used to assess student work. The focus groups were designed to facilitate an open and conversational environment that encouraged participants to share their experiences and perspectives.
The questions are listed below:
  • Define multimodal assignments.
  • Discuss the types of multimodal assignments you are currently using. What do they look like?
  • What criteria are you using to assess multimodal assignments? Do you focus on all modes equally? Why? Or why not?
  • Do you create multimodal assessments differently from how you create traditional assessments?
  • How do students meet the expectations of multimodal assessment compared to traditional assessment? Are there any differences?
  • Does the multimodal assessment change if it is used in an online setting?
  • How did you learn how to develop a multimodal assessment?

Appendix B. Second Focus Group Protocol

The follow-up interviews were conducted to explore the participants’ focus group responses in greater depth. The interviews lasted approximately 20 min. The questions are listed below:
  • What makes a good multimodal project?
  • Why did you include multimodal projects in your course?
  • What do you expect students to gain through these multimodal projects?
  • Can you walk me through your thinking process when you’re creating multimodal assignments? Are there any sort of pedagogical principles that guide you in that?
  • Do you feel that you have the freedom to develop your own assessment strategy?
  • Proportionally, how many assignments in your courses are multimodal versus more linguistic based, and how do you decide which ones should be multimodal?
  • When students create a multimodal assignment, do they tend to orient more towards a certain mode over other modes?
  • Can you discuss your process and any challenges in creating a comprehensive rubric for grading a range of multimodal assignment submissions?
  • Is there anything else that you would want me to know about how you assess multimodal assignments or how you create multimodal assignments?
  • Can you talk about your use of a rubric to grade multimodal assignments and if you are exploring different ways to provide students with feedback for multimodal assignments?

References

  1. Aagaard, T., & Lund, A. (2013). Mind the gap: Divergent objects of assessment in technology-rich learning environments. Nordic Journal of Digital Literacy, 8(4), 225–243. [Google Scholar] [CrossRef]
  2. Anderson, K. T., & Kachorsky, D. (2019). Assessing students’ multimodal compositions: An analysis of the literature. English Teaching: Practice & Critique, 18(3), 312–334. [Google Scholar] [CrossRef]
  3. Bezemer, J., & Cowan, K. (2020). Exploring reading in social semiotics: Theory and methods. Education 3-13, 49(1), 107–118. [Google Scholar] [CrossRef]
  4. Bezemer, J., & Kress, G. (2008). Writing in multimodal texts: A social semiotic account of designs for learning. Written Communication, 25(2), 166–195. [Google Scholar] [CrossRef]
  5. Boivin, A. C. N., & Cohen Miller, A. (2022). Inclusion and equity with multimodality during COVID-19. In D. Vyortkina, T. Reagan, & N. Collins (Eds.), Keep calm, teach on: Education responding to a pandemic (pp. 87–108). Information Age Publishing. [Google Scholar]
  6. Bolt, R. A. (1980). “Put-that-there”: Voice and gesture at the graphics interface. In Proceedings of the 7th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’80) (pp. 262–270). Association for Computing Machinery. [Google Scholar] [CrossRef]
  7. Burnett, R. E., Frazee, A., Hanggi, K., & Madden, A. (2014). A programmatic ecology of assessment: Using a common rubric to evaluate multimodal processes and artifacts. Computers and Composition, 31, 53–66. [Google Scholar] [CrossRef]
  8. Callow, J. (2020). Visual and verbal intersections in picture books: Multimodal assessment for middle years students. Language and Education, 34(2), 115–134. [Google Scholar] [CrossRef]
  9. Cartner, H., & Hallas, J. (2020). Aligning assessment, technology, and multiliteracies. E-Learning and Digital Media, 17(2), 131–147.
  10. Cheung, A. (2023). Developing and evaluating a set of process and product-oriented classroom assessment rubrics for assessing digital multimodal collaborative writing in L2 classes. Assessing Writing, 56, 100723.
  11. Collier, D. R., & Kendrick, M. (2016). I wish I was a lion, a puppy: A multimodal view of writing process assessment. Pedagogies: An International Journal, 11(2), 167–188.
  12. Cook-Sather, A., Moreira, D., Rolfes, P., & Smith, J. (2025). Multimodality as an equitable approach to summative assessment in higher education. Research & Practice in Assessment, 20(1), 14.
  13. Cope, B., & Kalantzis, M. (2020). Making sense: Reference, agency, and structure in a grammar of multimodal meaning. Cambridge University Press.
  14. Curwood, J. S. (2012). Cultural shifts, multimodal representations, and assessment practices: A case study. E-Learning and Digital Media, 9(2), 232–244.
  15. Deng, Y., Liu, D., & Feng, D. (2023). Students’ perceptions of peer review for assessing digital multimodal composing: The case of a discipline-specific English course. Assessment & Evaluation in Higher Education, 48(8), 1254–1267.
  16. Fajardo, M. F. (2018). Cohesion and tension in tertiary students’ digital compositions: Implications for teaching and assessment of multimodal compositions. In H. de Silva Joyce, & S. Feez (Eds.), Multimodality across classrooms (pp. 178–193). Routledge.
  17. Fjørtoft, H. (2020). Multimodal digital classroom assessments. Computers & Education, 152, 103892.
  18. Freeman, R., & Dobbins, K. (2013). Are we serious about enhancing courses? Using the principles of assessment for learning to enhance course evaluation. Assessment & Evaluation in Higher Education, 38(2), 142–151.
  19. Gill, A., & Stewart, O. G. (2024). The instructional implications of a critical media literacy framework and podcasts in a high school classroom. Journal of Adolescent & Adult Literacy, 68(3), 291–304.
  20. Godhe, A.-L. (2013). Negotiating assessment criteria for multimodal texts. The International Journal of Assessment and Evaluation, 19(3), 31–43.
  21. Hafner, C. A., & Ho, W. Y. J. (2020). Assessing digital multimodal composing in second language writing: Towards a process-based model. Journal of Second Language Writing, 47, 100710.
  22. Halverson, E. R. (2009). Shifting learning goals: From competent tool use to participatory media spaces in the emergent design process. Cultural Studies of Science Education, 4(1), 67–76.
  23. Hammond, J., & Gibbons, P. (2005). Putting scaffolding to work: The contribution of scaffolding in articulating ESL education. Prospect, 20(1), 7–30. Available online: http://hdl.handle.net/10453/6610 (accessed on 6 November 2025).
  24. Haßler, B., Major, L., & Hennessy, S. (2016). Tablet use in schools: A critical review of the evidence for learning outcomes. Journal of Computer Assisted Learning, 32(2), 139–156.
  25. Holdren, T. S. (2012). Using art to assess reading comprehension and critical thinking in adolescents. Journal of Adolescent & Adult Literacy, 55(8), 692–703.
  26. Hung, H. T., Chiu, Y. C. J., & Yeh, H. C. (2013). Multimodal assessment of and for learning: A theory-driven design rubric. British Journal of Educational Technology, 44(3), 400–409.
  27. Inoue, A. B. (2023). Cripping labor-based grading for more equity in literacy courses. The WAC Clearinghouse; University Press of Colorado.
  28. Jewitt, C. (2008). Multimodality and literacy in school classrooms. Review of Research in Education, 32(1), 241–267.
  29. Jewitt, C. (2009). An introduction to multimodality. In C. Jewitt (Ed.), The Routledge handbook of multimodal analysis (pp. 14–27). Routledge.
  30. Jewitt, C. (2013). Multimodal methods for researching digital technologies. In S. Price, C. Jewitt, & B. Brown (Eds.), The SAGE handbook of digital technology research (pp. 250–265). Sage.
  31. Jewitt, C., Bezemer, J., & O’Halloran, K. (2016). Introducing multimodality. Routledge.
  32. Jiang, L., Yang, M., & Yu, S. (2020). Chinese ethnic minority students’ investment in English learning empowered by digital multimodal composing. TESOL Quarterly, 54(4), 954–979.
  33. Jones, P., Turney, A., Georgiou, H., & Nielsen, W. (2020). Assessing multimodal literacies in science: Semiotic and practical insights from pre-service teacher education. Language and Education, 34(2), 153–172.
  34. Kress, G. (2009). Assessment in the perspective of a social semiotic theory of multimodal teaching and learning. In C. Wyatt-Smith, & J. Cumming (Eds.), Educational assessment in the 21st century: Connecting theory and practice (pp. 19–41). Springer.
  35. Kress, G. (2010). Multimodality: A social semiotic approach to contemporary communication. Routledge.
  36. Kress, G. (2015). Semiotic work: Applied linguistics and a social semiotic account of multimodality. AILA Review, 28(1), 49–71.
  37. Kress, G., & van Leeuwen, T. (2006). Reading images: The grammar of visual design. Routledge.
  38. Lawrence, W. J., & Mathis, J. B. (2020). Multimodal assessments: Affording children labeled ‘at-risk’ expressive and receptive opportunities in the area of literacy. Language and Education, 34(2), 135–152.
  39. Lemke, J. (1998). Multiplying meaning: Visual and verbal semiotics in scientific text. In J. R. Martin, & R. Veel (Eds.), Reading science: Critical and functional perspectives on discourses of science (pp. 87–114). Routledge.
  40. Lim, F. V., Cope, B., & Kalantzis, M. (2022). A metalanguage for learning: Rebalancing the cognitive with the socio-material. Frontiers in Communication, 7, 830613.
  41. Macken-Horarik, M., Love, K., Sandiford, C., & Unsworth, L. (2017). Functional grammatics: Re-conceptualizing knowledge about language and image for school English (1st ed.). Routledge.
  42. Midgette, E., & Stewart, O. G. (2025). Designing a critical digital literacies virtual exchange experience to engage and reinvent multimodal texts. Journal of Virtual Exchange, 8, 39–53.
  43. Miller, S. M. (2008). Teacher learning for new times: Repurposing new multimodal literacies and digital video composing for schools. In J. Flood, S. B. Heath, & D. Lapp (Eds.), Handbook of research on teaching literacy through the communicative and visual arts (Vol. 2, pp. 441–460). International Reading Association.
  44. Mills, K. A. (2010). A review of the “digital turn” in the new literacy studies. Review of Educational Research, 80(2), 246–271.
  45. Morgan, D. L. (1996). Focus groups. Annual Review of Sociology, 22(1), 129–152.
  46. Morgan, D. L., & Krueger, R. A. (1993). When to use focus groups and why. In D. L. Morgan (Ed.), Successful focus groups: Advancing the state of the art (pp. 3–19). Sage.
  47. Newfield, D., Andrew, D., Stein, P., & Maungedzo, R. (2003). No number can describe how good it was: Assessment issues in the multimodal classroom. Assessment in Education: Principles, Policy and Practice, 10(1), 61–81.
  48. New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. In B. Cope, & M. Kalantzis (Eds.), Multiliteracies: Literacy learning and the design of social futures (pp. 9–38). Routledge.
  49. Nichols, T. P., & Johnston, K. (2020). Rethinking availability in multimodal composing: Frictions in digital design. Journal of Adolescent & Adult Literacy, 64(3), 259–270.
  50. Nielsen, W., Georgiou, H., Jones, P., & Turney, A. (2020). Digital explanation as assessment in university science. Research in Science Education, 50, 2391–2418.
  51. O’Halloran, K. (2013). Multimodal analysis and digital technology. In E. Montagna (Ed.), Readings in intersemiosis and multimedia (pp. 35–53). IBIS Editions.
  52. Quesada, V., Gomez Ruiz, M. A., Gallego Noche, M. B., & Cubero-Ibáñez, J. (2019). Should I use co-assessment in higher education? Pros and cons from teachers and students’ perspectives. Assessment & Evaluation in Higher Education, 44(7), 987–1002.
  53. Reed, Y. (2008). No rubric can describe the magic: Multimodal designs and assessment challenges in a postgraduate course for English teachers. English Teaching: Practice and Critique, 7(3), 26–41. Available online: https://files.eric.ed.gov/fulltext/EJ832215.pdf (accessed on 6 November 2025).
  54. Ross, J., Curwood, J. S., & Bell, A. (2020). A multimodal assessment framework for higher education. E-Learning and Digital Media, 17(4), 290–306.
  55. Saldaña, J. (2021). The coding manual for qualitative researchers. Sage.
  56. Scott, C., & Medaugh, M. (2017). Axial coding. In J. Matthes, C. S. Davis, & R. F. Potter (Eds.), The international encyclopedia of communication research methods (1st ed., pp. 1–2). Wiley.
  57. Serafini, F. (2014). Reading the visual: An introduction to teaching multimodal literacy. Teachers College Press.
  58. Shanahan, L. E. (2013). Composing “kid-friendly” multimodal text: When conversations, instruction, and signs come together. Written Communication, 30(2), 194–227.
  59. Shipka, J. (2009). Negotiating rhetorical, material, methodological, and technological difference: Evaluating multimodal designs. College Composition and Communication, 61(1), W343–W366.
  60. Smith, A., McConnell, L., Iyer, P., Allman-Farinelli, M., & Chen, J. (2024). Co-designing assessment tasks with students in tertiary education: A scoping review of the literature. Assessment & Evaluation in Higher Education, 50(2), 199–218.
  61. Stewart, O. G. (2023). Understanding what works in humanizing higher education online courses: Connecting through videos, feedback, multimodal assignments, and social media. Issues and Trends in Learning Technologies, 11(2), 2–26.
  62. Sung, Y. T., Chang, K. E., & Liu, T. C. (2016). The effects of integrating mobile devices with teaching and learning on students’ learning performance: A meta-analysis and research synthesis. Computers & Education, 94, 252–275.
  63. Svärdemo Åberg, E., & Åkerfeldt, A. (2017). Design and recognition of multimodal texts: Selection of digital tools and modes on the basis of social and material premises? Journal of Computers in Education, 4, 283–306.
  64. Tan, L., Zammit, K., D’warte, J., & Gearside, A. (2020). Assessing multimodal literacies in practice: A critical review of its implementations in educational settings. Language and Education, 34(2), 97–114.
  65. Tour, E., & Barnes, M. (2022). Engaging English Language Learners in digital multimodal composing: Pre-service teachers’ perspectives and experiences. Language and Education, 36(3), 243–258.
  66. Unsworth, L. (2006). Towards a metalanguage for multiliteracies education: Describing the meaning-making resources of language-image interaction. English Teaching: Practice and Critique, 5(1), 55–76. Available online: https://eric.ed.gov/?id=EJ843820 (accessed on 6 November 2025).
  67. Unsworth, L. (2014). Multiliteracies and metalanguage: Describing image/text relations as a resource for negotiating multimodal texts. In J. Coiro, M. Knobel, C. Lankshear, & D. J. Leu (Eds.), Handbook of research on new literacies (pp. 377–406). Routledge.
  68. Vangrieken, K., Grosemans, I., Dochy, F., & Kyndt, E. (2017). Teacher autonomy and collaboration: A paradox? Conceptualising and measuring teachers’ autonomy and collaborative attitude. Teaching and Teacher Education, 67, 302–315.
  69. van Leeuwen, T. (2004). Introducing social semiotics. Routledge.
  70. Watts-Taffe, S. (2022). Multimodal literacies: Fertile ground for equity, inclusion, and connection. The Reading Teacher, 75(5), 603–609.
  71. Wessel-Powell, C., Kargin, T., & Wohlwend, K. E. (2016). Enriching and assessing young children’s multimodal storytelling. The Reading Teacher, 70(2), 167–178.
  72. Wyatt-Smith, C., & Kimber, K. (2009). Working multimodally: Challenges for assessment. English Teaching: Practice and Critique, 8(3), 70–90. Available online: https://eric.ed.gov/?id=EJ869395 (accessed on 6 November 2025).
  73. Zagalo, N. (2019). Multimodality and expressivity in videogames. Observatorio (OBS*), 13(1), 86–101.
  74. Zammit, K. (2015). Extending students’ semiotic understandings: Learning about and creating multimodal texts. In P. P. Trifonas (Ed.), International handbook of semiotics (pp. 1291–1308). Springer.
Table 1. Demographic Characteristics of Participants.

Participant | S | T | E | C | J
Age | 54 | 42 | 45 | 58 | No Response
Gender | Female | Female | Male | Female | Male
Race | White | White | Asian | White | Two or More Races
Number of Years Teaching in Higher Education | 18 | 20 | 19 | 25 | 17
Program Level(s) (Undergraduate, Graduate, PhD) | UG | UG | UG | UG | UG, GR
Table 2. Axial Codes.

Axial Code | Further Description
Interplay | [of modes]
Journey | [focusing on the journey of the production of the assignment for assessment]
Final | [focusing on the final submitted assignment for assessment]
Linguistic/Other | [focus on Linguistic or other modes]
Metalanguage | [using/creating/discussing metalanguage]
Including Students | [in the process of assessment]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Midgette, E.; Stewart, O.G.; August, I. Understanding Multimodal Assessment Practices in Higher Education to Improve Equity. Educ. Sci. 2025, 15, 1523. https://doi.org/10.3390/educsci15111523