1. Introduction
As educators continue to move beyond traditional, print-based assignments and assessments that may be misaligned with the funds of knowledge of academically marginalized students, there has been a growing recognition of the necessity to create spaces where learners can show understanding through multimodal design (Boivin & Cohen Miller, 2022; Watts-Taffe, 2022). Multimodal assignments intentionally integrate multiple modes (e.g., text, video, color, image, sound) in ways that are often more culturally relevant and inclusive of students’ varied literacy practices (Stewart, 2023). In the classroom, these texts are often inherently collaborative and can challenge students to critically consider places and mobilities in terms of their content, representation, and audience (Jiang et al., 2020, p. 292).
Multimodal assessments are essential for moving towards more equitable and inclusive teaching and learning in higher education, as they foster epistemic and social justice by inviting diverse meaning-making practices (Cook-Sather et al., 2025; Newfield et al., 2003). Because multimodal assignments carry such strong potential for culturally relevant pedagogies and can engender student engagement (Stewart, 2023; Jiang et al., 2020; Tan et al., 2020), the pedagogically purposeful assessment of them is essential. However, many instructors may not know how to assess the robust authoring potentials, focus solely on print-based features, or worry about the technology and tools (Curwood, 2012; Nichols & Johnston, 2020; Ross et al., 2020). As a result, some may shy away from including multimodal assignments altogether and limit the ways in which students can express their perspectives within a course. Understanding best practices in creating multimodal assessments can therefore help educators feel more comfortable including multimodal assignments and provide clearer expectations for students as they navigate potentially new ways of authoring in the classroom.
The present study is framed through social semiotics, which attends to how meaning is made through texts situated in social and cultural environments (Bezemer & Kress, 2008; Kress, 2010). From this perspective, meaning is inseparable from the historical, cultural, and institutional contexts in which a text is created (Kress, 2010; van Leeuwen, 2004) and cannot be represented or recreated out of context again (Serafini, 2014). Social semiotics forwards the process of drawing from culturally constructed inventories of semiotic schemas (i.e., resources for meaning making) to create meaning (O’Halloran, 2013). Therefore, communication and learning materialize through the choice of semiotic modes (e.g., text, pictures, gestures, music, sounds, layout, typography) that can have diverse meanings. By discerning these meanings and the authoring potentials for an assignment, instructors can better understand how to assess them.
From a social semiotics perspective, multimodality conceptualizes meaning as designed and socially negotiated through diverse semiotic resources (O’Halloran, 2013; Jewitt, 2008, 2009, 2013; Zammit, 2015) and rests on four interwoven theoretical assumptions: (a) language is part of a multimodal ensemble (i.e., even though language is an integral part of meaning making, meaning is made through more than just language, through the ways in which modes are used, connected, and arranged); (b) each mode within the multimodal ensemble is based on context and situated therein (for learning, this means that different modes might affect the learner differently); (c) people make meaning through the way they use and organize the modes (and one may attune to some modes more than others); and (d) meanings of signs are socially created and particular to that situation (Jewitt, 2009; Bezemer & Cowan, 2020). Within social semiotics, signs are the principal unit of meaning representation (Jewitt, 2013). For example, a sign can be a photograph, written language, a drawing, a symbol, a letter, a street sign, etc.
Social semioticians also argue that modes offer various affordances and limitations, and these are often specific to the mode, audience, and task (Kress, 2010; Serafini, 2014). Readers of a text make meaning not just through individual modes, but through the interplay and orchestration of those modes (Halverson, 2009; Kress, 2010; Kress & van Leeuwen, 2006). Readers interpret these multimodal ensembles by attending to how linguistic, visual, spatial, and aural resources interact to shape meaning potentials. These meanings extend beyond the sum of their parts; rather, each mode’s meaning potential is transformed through its relationship to other modes and the specific social and cultural context in which it appears (Lemke, 1998; Mills, 2010). Thus, assessing these modal ensembles can be much more complicated than assessing the traditional print-based assignments (e.g., essays, research papers) to which many students and teachers are accustomed.
Despite many studies emphasizing the unique affordances of meaning-making across modes and genres, particularly in the context of culturally responsive and critical pedagogies, there remains a disconnect between multimodal assignments and assessment design (Tan et al., 2020). A review of the literature on multimodal assessment revealed the need for reframing multimodal assessments to appropriately address the very concept of multimodality (Kress, 2009).
Cartner and Hallas (2020) argue that various elements of multimodal compositions, such as the selection, placement, and layout of information, the use of cross-referencing through hyperlinks, and the use of colors, to name a few, cannot be adequately evaluated with criteria that originated in traditional print forms and therefore require a different approach to assessment. However, a comparative analysis of assessment instruments used to evaluate multimodal compositions conducted by Tan et al. (2020) indicated a lack of interdependence and cohesion between linguistic and other modes in the rubrics. Although many of the rubrics included expectations for elements of multimodal design, such as designs for medium (i.e., the ways that elements are chosen and presented to increase comprehension), they could also be print-centric (e.g., spelling and grammar, story format, title page). Moreover, the very notion of assessing meaning-making using media other than traditional print is currently incongruent with unimodal standardized assessments in grade school and conventional notions of academic rigor in higher education (Stewart, 2023).
There is also a disjunction between teachers’ willingness to experiment with multimodal composition assignments in order to increase student engagement and teachers’ and students’ metasemiotic knowledge, that is, their understanding of how meaning is created through language and other semiotic systems (Aagaard & Lund, 2013; Unsworth, 2014). The effectiveness of multimodal pedagogies relies on the use of metalanguage that reflects and adds to metasemiotic knowledge in the process of designing and evaluating multimodal compositions (Lim et al., 2022; New London Group, 1996; Svärdemo Åberg & Åkerfeldt, 2017). The metalanguage of multimodality is a shared language that would allow all parties to co-design and engage in collaborative meaning-making (Unsworth, 2014).
Kress and van Leeuwen (2006) emphasize that metalanguage is essential for analyzing and discussing how meaning is made across modes in a multimodal text: in representational structures, that is, how semiotic systems function to convey the nature of events and the participants involved in them; in interactive structures, which concern the relationship between the maker of the text and its interpreter; and in compositional meanings, which refer to the distribution of information in the text through the interplay of modes. Learning and using the metalanguage to build an understanding of the interpretive possibilities of texts requires substantial time, sustained practice, and ongoing engagement from both instructors and students (Unsworth, 2006).
A critical review of empirical and conceptual works on multimodal assessments (Anderson & Kachorsky, 2019) revealed that one of the recommendations shared among the reviewed studies was for the explicit teaching of metalanguage related to multimodal composition analysis and class discussions around the deconstruction of multimodal texts. According to Nielsen et al. (2020), teachers and students negotiate multimodal projects effectively when they share a language to discuss meaning-making practices that surround the creation of multimodal texts. Likewise, Macken-Horarik et al. (2017) found that a metalanguage allowed participants to structure their conversations around multimodal design choices more critically and meaningfully.
Another issue that permeated the reviewed literature relates to the challenge of constructing a shared vision of what an assessment of multimodal compositions is and how it should be designed. In addition to fixed forms of assessment, empirical studies contain examples of assessments that are flexible and adaptable to the context of teacher-situated practice, availability of digital resources, and genre and register negotiated by students (Burnett et al., 2014; Hung et al., 2013; Lawrence & Mathis, 2020).
Shipka (2009) and other researchers posit that the process of creating a multimodal composition (i.e., choosing and interacting with modes) should be an integral part of the assessment along with the artifact, with many arguing that assessment based on one single artifact is ineffectual (Deng et al., 2023; Hafner & Ho, 2020). Furthermore, there is also a call to involve students in negotiating what criteria should be included in the evaluation of the process and product (Cartner & Hallas, 2020). Deng et al. (2023) additionally claim that student peers, as well as teachers, should be included in the feedback process.
Finally, rubrics vary considerably in how the multimodal nature of assignments is measured. There are examples of rubrics that measure modalities separately (e.g., sound, movement, perspective, gesture, and language in Wessel-Powell et al. (2016)); in dichotomous combinations (e.g., text and image in Callow (2020)); and in the convergence of modes that create intermodal cohesion or complementarity (Fajardo, 2018), where a combination of modes creates greater meaning than any single mode in isolation. In a review of assessments in the context of teaching multimodal literacies, Tan et al. (2020) argue that the awareness and use of intermodal complementarity is the key to advancing multimodal forms of meaning-making in educational settings. Furthermore, there are current frameworks that offer an analysis of multimodal elements (Burnett et al., 2014; Hung et al., 2013), but these may become unwieldy when instructors also include content material.
With such wide-ranging views of practices and approaches to assessment represented in the current research, instructors may feel overwhelmed when attempting to create evidence-based approaches to multimodal assessment for their courses. A lack of agreement on best practices in multimodal assessments hinders the development of professional programs that promote a seamless and effective integration of multimodality into course design. We hope that this study will provide valuable information for researchers and educators to open spaces for conversations that aim to identify the overarching principles that could guide the creation of multimodal assignments and assessments across academic disciplines.
We designed this study to better understand the strategies that guide the creation and implementation of multimodal assessments in college courses: why professors include multimodal assignments and their corresponding assessments in their teaching, and how they design and implement them. We sought to add authentic insights from faculty representing different disciplines and diverse teaching styles to the discussion of the affordances and challenges of using multimodal assessments in a range of instructional settings.
2. Materials and Methods
We explored professors’ decision-making processes when creating, assessing, and conceptualizing multimodal assignments in their courses, as well as their reflections on their successes and needs around multimodal assessments. The study took place at a large, private American university where the authors and the participants are employed. The participants were recruited through purposeful sampling via an email invitation sent to faculty by the third author, who, working in the Distance Education Office, utilized his professional network to identify faculty members with substantial experience in online pedagogies, particularly multimodal projects. The participants were five tenured university professors from three different colleges within the university. One of the participants worked in the same college as one of the researchers. We intentionally tried to avoid recruiting colleagues from our own colleges to reduce potential biases and utilized our networks to invite professors teaching in Business, Liberal Arts and Sciences, and Computer and Information Technology fields. See Table 1 for the demographics.
The five faculty members were invited to participate in focus group discussions about their philosophies and practices of multimodal assessment and to share their assessment instruments with the researchers. However, only one participant volunteered to share a rubric and an assignment description for a multimodal project. Due to the limited participation, these materials were not included in the analysis.
We conducted two virtual focus group meetings with the professors to better understand the multimodal assignments and assessments within their courses and their procedures and objectives for creating and assessing these assignments. Focus groups provided us with “the advantage of getting reactions from a relatively wide range of participants in a relatively short time” (Morgan, 1996, p. 134). They also allowed participants to collectively consider and discuss the complex social behaviors, semiotics, and pedagogical motivations behind creating multimodal assessments (Morgan & Krueger, 1993; Morgan, 1996).
We structured focus group meetings around a series of questions that prompted individual reflection and group discussion, opening space for comments and conversations as they arose. We designed the open-ended questions to elicit reflections on assumptions and beliefs related to multimodal assessments (e.g., “What does a ‘good’ multimodal assessment look like?”) and also to explore the plethora of pedagogical decisions surrounding the creation and implementation of multimodal assessments (e.g., “Can you walk me through your thinking process when creating multimodal assessments?”). Because not all professors had the same metalanguage around multimodality, we often clarified terms or asked the interviewees to define their version of terms as they pertained to their disciplines. See Appendix A for the complete list of questions.
After the first focus group meeting, we conducted an additional focus group meeting to gain deeper insight into the participants’ challenges and successes in aligning their multimodal assignments with multimodal assessments. The second meeting gave us the opportunity to see the professors’ experiences in the context of specific assignments and courses that they designed and taught (e.g., “Why do you include a multimodal project in your course and what do you expect the students to gain through a multimodal assessment in this project?”). The focus group questions are included in Appendix B.
The small number of participants in our sample allowed for in-depth observations of their interactions, which were captured through memoing, particularly those exchanges that highlighted both shared and divergent experiences and perspectives. To further probe the professors’ interpretations of their experiences with multimodal assessments, the focus group format enabled us to invite participants to compare practices and viewpoints among themselves (Morgan, 1996) (e.g., “Are we on the same page with those definitions [of multimodal assessments], do we feel comfortable?”, “Does anyone else feel that way or differently?”). At the same time, the method also provided a unique venue for the professors to articulate how they exercised teacher autonomy in their classrooms in relation to multimodal assessments by reflecting on their perceived freedom to make instructional decisions (Vangrieken et al., 2017) (e.g., “Do you feel you have academic freedom to develop your own assessment strategies? Are there pedagogical principles that you choose when you create assessments?”). Giving professors the space to reflect on their decision-making processes during focus group interviews enabled the participants to reveal the uniqueness of their pedagogical practices. This method was beneficial in eliciting responses that highlighted the diversity of the participants’ professional backgrounds and offered a rich range of perspectives and insights.
5. Process and Product
Echoing the literature (Deng et al., 2023; Hafner & Ho, 2020; Shipka, 2009; Stewart, 2023), some professors discussed focusing on both the process and the final product of the multimodal project in the context of assessment, reflecting that meaning is contextually shaped. For example, in one project, Participant T, a professor in the Liberal Arts Department, explained that students wrote “a process letter… for the project that they can use, if they choose to revise before they put into their final portfolio”. However, she noted, “After the projects are complete, we designed a rubric collaboratively as a class”. The process letters showed that the students were attuned to how their semiotic choices evolved as the project did, unique to the context of this particular course.
Participant C, a Liberal Arts professor, emphasized the importance of observing how the students’ learning goals unfold during project development to facilitate the assessment of the final product. C explained that she discusses the process with students to interpret how and why they engage multimodality to elicit desired responses from their audience as they develop their projects. That information is then used to “assess their work when it is done”. Observing the students’ rhetorical choices reveals the multimodal design decisions and the interaction between the modes (text, image, sound) and the intended audience.
Participant C also discussed the value of having students start working in groups early in the semester and continue to work in the same groups throughout the semester to fully engage in project development. As several researchers suggest (Miller, 2008; Shanahan, 2013; Stewart, 2023), C focused on the learning goals rather than the digital tools through which they were mediated. Participant S, also a Liberal Arts professor, who, along with Participant T, uses contract grading (Inoue, 2023), added that reviewing learning goals throughout a multimodal project has particular value in fostering self-regulation in students:
…[I] just ask really simple questions all semester, which is, what are you trying to do with this piece that you made, whether you think it is working, and how could it work better? It is just so simple… and I hope that by repeating it all semester, people might ask themselves that, you know, there might be some transfer of that set of questions and, you know, kind of a writing process sensibility.
This reflective process exemplifies an equity-oriented approach to assessment, grounded in social semiotics, that redistributes authority over the evaluation by inviting students to critically interpret their own semiotic decisions and recognize the value of diverse representational forms.
Another way our participants reflected on assessing multimodal projects was by focusing exclusively on the final product or by choosing a method relevant to the unfolding instructional context. For Participant J, a professor within the Business School, modeling expectations was important: “I demonstrate what I want them to do. I create narrated videos and ask them to create something similar. It is important to demonstrate to students my expectations”. This approach parallels modeling the construction of a successful digital artifact using a student planning checklist described in Jones et al.’s (2020) work. J admitted that finding a fitting assessment model for his instructional purposes was still a work in progress. J’s difficulties in identifying generic forms of assessment reflect the complexity of the semiotic negotiation between an author’s intent in creating a multimodal ensemble, such as designing a school project, and the audience’s interpretation, as described by Kress (2015). For instructors, assessing a student’s multimodal demonstration of learning involves interpreting how intended meanings are realized across modes, attending to the interplay and complementarity among these semiotic resources within the specific context of the discipline and classroom community.
7. Modal Affordances and Cohesion
To understand how our participants conceptualized multimodal assessments, we examined the criteria they used to evaluate the students’ meaning-making through the affordances of modal ensembles. From a social semiotic perspective, such assessment involves interpreting how learners purposefully select and orchestrate semiotic resources (e.g., visual, linguistic, spatial, gestural, and aural) to convey ideas within specific social, cultural, and disciplinary contexts (Bezemer & Cowan, 2020; Kress, 2010; van Leeuwen, 2004). Participant E, an instructor of video game design in Computer Science and IT, highlighted the complexity of this work, noting that the “level of competency and sensibility can widely vary” for multimodal projects among students. Therefore, he must assess “…how well the student considered the form of expression in accomplishing the messaging or their stated goal for the project; how the student has… synthesized the materials to come up with something original”. Though there are many predesigned rubrics and frameworks, like those discussed in the introduction of this manuscript, they fail to consider these essential tenets of E’s multimodal projects, as he attunes to the interplay of semiotic form, messaging, and originality. Furthermore, as is the case with all interactive multimedia, one must also consider the semiotics for the user, or the multimodal interaction (Bolt, 1980; Zagalo, 2019).
Holdren’s (2012) use of multifaceted rubrics to assess both process and product may provide a flexible model, but Participant E’s approach underscores the broader semiotic challenge of assessing originality and coherence across modes within specific disciplines.
Participant C noted that the seemingly endless semiotic choices available to students created certain constraints in multimodal assignment completion and assessment. She observed that some students’ projects lacked depth and critical thinking due to the “hyper-layering” effect created by an excessive, ineffective use of modes. Like E, she admitted that helping students compose multimodally to achieve rhetorical goals and build complex, cohesive content was an ongoing struggle, one that is also reflected in the literature.
Hung et al. (2013) provide a rubric that outlines criteria for the meaningful inclusion of the five design modes in a multimodal presentation, using cohesion as the core criterion to evaluate the effectiveness of the interplay of modes. However, like any rubric we reviewed, this rubric would need to be adapted to specific projects. It could also be supplemented with other forms of assessment that reflect the instructor’s pedagogical beliefs, such as involving students in the assessment process by negotiating how their multimodal project meets the assessment criteria during a teacher–student interview (Godhe, 2013).
Another challenge brought about by the complex interplay of modes in design and content was fairly assessing multimodal work from students with varying levels of digital competency. Participant C strived to evaluate the multimodal component of their projects based on the students’ effort, their level of engagement, and the reasoning that the students employed when they engaged with the various media rather than their digital expertise (Fjørtoft, 2020). Because of all these factors, she acknowledged that assessing multimodal projects is a work in progress: “I go through the process of learning how to evaluate and assess their work”. This approach aligns with contemporary social semiotic thinking, which views learners as designers of meaning who mobilize modal resources differentially based on their access, experience, and purpose (Cope & Kalantzis, 2020; Gill & Stewart, 2024). C’s acknowledgment that she continues to learn how to evaluate student work reflects the inherently dynamic and situated nature of multimodal assessment, where meaning-making practices evolve alongside the changing technological and sociocultural landscapes that shape how students compose.
9. Challenges with Multimodal Assessment
Many participants expressed that perhaps rubrics were fundamentally at odds with multimodal assessment, underscoring a broader institutional tension between standardization and situated meaning-making (Newfield et al., 2003). For example, Participant E expressed that traditional rubrics may not capture the wide range of projects that his students create, echoing Reed’s (2008) argument that the inflexible, narrowly specified learning outcomes typical of print-based assessments cannot account for an author’s intent as manifested through contextualized semiotic choices within a multimodal project. Additionally, Participant T, who frequently emphasized the importance of the process over the final product, lamented the students’ focus on their final grades: “The numbers are not really that important; that is something that they value more than I do because I use contract grading, but I also give detailed comments about each criterion”.
An example of using traditional assessments to balance keeping the assessment process student-centered with setting clear expectations for the product comes from Cheung’s (2023) study of composing multimodal blogs. The researcher used scales for the process and product, leading to significant improvement in equality and collaboration and more effective multimodal writing design. However, the tension between the professors’ perspectives and their students’ needs highlights the importance of developing innovative assessment instruments that are flexible and equitable but can still provide students with clear criteria for negotiating meaning-making with their professors.
Participant T expressed interest in exploring other types of assessments, but students consistently wrote on her evaluations that they enjoyed creating a rubric together. Though she wanted to offer more novel approaches to assessment, the students took comfort in the familiarity of the rubric systems to which they were accustomed (Anderson & Kachorsky, 2019) and enjoyed the autonomy of cocreating the rubric together (Smith et al., 2024):
I definitely am interested in exploring other ways [of assessment but I’ve] gotten like, weirdly, on all of my course evaluations designing the rubrics together, which does not seem like that big of a deal is something that I am constantly getting really positive feedback from students on… Um, it is definitely something I would like to think more critically through.
From a social semiotic perspective, rubrics function as semiotic artifacts of institutional power, denoting assumptions about what counts as knowledge and how it should be represented. Participant T’s use of contract grading and cocreated rubrics reflects a move toward redistributing semiotic authority, allowing students to participate in defining evaluative norms.
Participant S shared that she discontinued a multimodal assessment she referred to as “collective assessment” once she recognized its time-consuming nature. S created the practice with the intent to emphasize student agency by commenting on individual projects and synthesizing common themes across students’ work in class videos. Despite her initial enthusiasm for the strategy, S concluded that its impact did not justify the considerable time required to record new videos each semester, suggesting a tension between pedagogical commitments to more humanizing assessment and institutional expectations for efficiency in online learning environments (Freeman & Dobbins, 2013; Midgette & Stewart, 2025). Participant C believed that assessing multimodal projects in her discipline was very similar to assessing unimodal assignments. In both formats, she used a set of evaluative criteria that she applied holistically (e.g., engagement, effort), rather than relying on a rubric. Another strategy that she used to set expectations was to model all parts of the project for her students, “using myself to an extent as the rubric”. However, C expressed interest in learning how to create assessments specifically for multimodal assignments. C stated that such professional development would help faculty to “…move forward to learning a lot about what we can do because there’s so much that we could do. And I think I am just at the beginning”.
Findings also revealed that none of our participants had formal training in creating and evaluating multimodal assignments. They each had different learning experiences with multimodal teaching. Participant T became interested in including multimodal texts and assignments while working on her dissertation. Participant S learned about these practices from colleagues and conferences. Participant C was self-educated by reading publications. Participant E adopted his practices from mentors who excelled in pedagogy and discussed the idea of just “winging it” when it came to creating his multimodal assessments. He stated, “[I was] just trying to figure it out, trying to recalibrate our internal compass, going with what we think works well in the classroom”. His feelings of bewilderment may reflect the field’s lack of consensus on best practices and the seemingly daunting task of assessing not only content but also the intricate design principles and creativity that sit in tension with the institutional focus on measurable outcomes.
10. Discussion
Although standardized, print-based forms of assessment continue to dominate higher education (Ross et al., 2020), perpetuating epistemic and social inequities by limiting opportunities for diverse students to exercise agency and draw upon their cultural and experiential knowledge, university professors are increasingly experimenting with multimodal approaches that foreground design, representation, and original meaning-making. In this study, we examined how professors across disciplines conceptualize, design, and evaluate multimodal assessments as part of their efforts to challenge the institutional norms that privilege traditional, text-centric measures of academic success. Current research envisions multimodal assessments as a framework that redefines what ways of knowing are recognized and valued, and how knowledge can be demonstrated through more inclusive and equitable practices that foster the representation of cultural identities and perspectives and collaborative meaning-making (Anderson & Kachorsky, 2019). Through a social semiotics lens, this study contributes an authentic account of professors’ reflections on their efforts to disrupt the dominance of traditional institutional systems of evaluation by juxtaposing multimodal assessment practices that are contextualized to be equitable and inclusive of all learners in the classroom.
Our findings show that there is a shared purpose among the participating faculty to push back against the privileging of traditional academic epistemologies by implementing assessment practices that encourage students to demonstrate diverse ways of knowing, befitting the multimodal assignments in their courses. The professors intentionally leveraged multimodality to amplify student voices and prioritize student creative choice by establishing co-created learning environments, a core element of inclusive and equitable classrooms (Midgette & Stewart, 2025). Aligned with the assertions of Wyatt-Smith and Kimber (2009), most participants made an effort to include students in the assessment process by collectively developing learning objectives and descriptors before, during, and after a multimodal production or by engaging students in peer evaluation, thus challenging traditional top-down power structures in the classroom. The participants contextualized the collaborative approach in their belief that involving students in the process of assessment gave the students greater freedom to make choices in creating knowledge representations, reflect on how these choices affect the product, and have a say in how artifacts (their own and/or those of their peers) should be evaluated.
Other multimodal assessment practices that critically and epistemologically opposed standardized assessment environments included participants’ efforts to create low-stakes but high-agency spaces for the formative evaluation of students’ work, typically structured as group discussions. The professors emphasized the importance of paying close attention to the students’ meaning-making processes, their evolving understanding of their broader authoring potential, and the alignment of works-in-progress with specified learning outcomes. Moving away from traditional assessments toward student-centered approaches to practicing multimodality, as illustrated in the work of our participants, carries significant implications for social justice. By promoting student agency in evaluating multimodal projects from conceptualization to production, instructors are “giving voice to the broader context of meaning-making including community/self/family/culture/nation and beyond” (Newfield et al., 2003, p. 79), thus breaking systemic barriers for students from marginalized backgrounds.
At the same time, the findings showed a lack of deliberate focus on other criteria identified in the literature, namely, on how modes work together within assignments and assessments and on the need for metalanguage to create and discuss evaluative criteria. Furthermore, we were unable to establish a consistent approach to designing and implementing multimodal assessments in our participants’ teaching. We account for the disconnect between our findings and the existing literature by acknowledging that the participants began using multimodal assessments informally and may not be fully familiar with the recommended practices. The participant faculty’s propensity to anchor assessment practices in professional experience rather than in current research highlights the need for professional development programs that focus on comprehensive instructional designs for evaluating multimodal projects (Haßler et al., 2016; Sung et al., 2016). Institutes of higher education should provide opportunities for faculty to receive professional development in adopting authentic, meaningful assessment practices based on exploring various modes of representation (Fjørtoft, 2020). Our participant faculty proposed that such professional development programs could serve as a network for ongoing professional support and a platform for interdisciplinary discussions and collaborations.
It is also important to note that there is some ambiguity around how our participants perceive the design and use of multimodal classroom assessments. This uncertainty highlights the need for further research and discussion to determine which design principles should be emphasized in professional development programs. What we learned from our participants is that there is no one way to create and implement multimodal assessments. While a review by Anderson and Kachorsky (2019) suggests that rubrics are commonly used as the primary instrument in multimodal assessments, our study found that some professors also advocated for innovative assessment tools designed to reflect social semiotic practices within the context of their academic disciplines (Bezemer & Cowan, 2020). For instance, professors used progress checks that were aligned with learning objectives instead of rubrics and evaluated both the process and product during regular small-group interviews with the students. Exploring innovative assessment designs is essential to provide faculty with relevant evidence to design assessment practices that effectively capture student learning in multimodal projects. Professors can leverage this knowledge to approach assessment needs on a case-by-case basis.
Another aspect of professional development that would undoubtedly be valued by the participants in this study relates to metalanguage. The professors shared that they felt limited in the way that they could discuss design elements with students because they did not have formal training in metalanguage. Designing, discussing, and re-inventing assessment criteria in multimodal assignments depends on the knowledge of metalanguage among all participating parties (Jones et al., 2020; Unsworth, 2006). Metalanguage gives professors and students the appropriate vocabulary to adequately and appropriately understand and leverage the affordances of various semiotic resources (modes) and the interplay among them in an artifact (Hammond & Gibbons, 2005; Lim et al., 2022). By having that metalanguage, students can practice more agency in designing and implementing multimodal assessments. This agency is essential for disrupting the dominance of print literacies in assessment practices. Furthermore, a shared understanding of the metalanguage can create collaborative spaces where educators exchange professional experiences related to multimodal assignments and assessments more effectively and continue to explore the affordances of multimodal assessment across academic disciplines (Jewitt, 2008; Unsworth, 2006), as was the intent in this study.
While our findings offer robust implications for multimodal assessment practices, it is important to acknowledge the limitations of our study. Notably, we only offer the perspectives of five tenured professors representing three colleges at one university. Though we invited many professors to participate, their professional responsibilities meant that few had the time to take part in a voluntary study. However, the small number of participants in this study enabled deep engagement with the data on the instructors’ pedagogical practices and allowed for an in-depth exploration of both convergent and divergent perspectives, as reflected in the participants’ interactions during focus group discussions (Morgan, 1996). Furthermore, our findings represent the experiences and ideas of only these few individuals. The shared context within the same university and some of the professors’ long-standing collegial relationships may have led to commonalities in pedagogical approaches that could limit the range of perspectives. Additional studies may wish to explore larger and more diverse groups of instructors representing a range of regions, institutional types, and academic ranks to provide a broader and more representative perspective.
Another limitation of our study is the potential bias that we, as researchers, may have introduced into the focus groups as fellow professors and university employees. Coming from the field of literacy education and being tenure-track faculty members, the first and second authors of this manuscript had perspectives and strong opinions about multimodal assessments based on their experiences in creating multimodal projects and assessments, which differed from those of the other participants. This positionality could have influenced how we framed the questions and conducted the focus groups. To address this, we made a conscious effort to facilitate rather than direct the dialogue in the focus groups, providing ample time for the participants to share their experiences and thoughts. We encouraged candid conversation by inviting the professors to honestly discuss both their successes and what they considered grey areas in creating and implementing assessments. The professors were prompted to elaborate on their responses or add information not covered in the interview questions. Notably, the professors provided critical reflections on their practices by sharing their uncertainties about developing and implementing multimodal assessments, highlighting their willingness to adopt a critical perspective during the focus groups.
11. Conclusions
In our study, we found that university professors had some experience using multimodal assessment practices in their classrooms and a desire to further develop this aspect of their teaching. However, their understanding of the relationship between multimodal projects and the corresponding assessments was largely intuitive and/or based on their prior knowledge of print-centric forms of assessment. There is a pressing need for professional development in practicing multiliteracies, with a special emphasis on metalanguage and social semiotics. In future studies, it is particularly important to explore the types of training most suitable for educators in various educational contexts, including different disciplines and instructional delivery formats, such as online, hybrid, or asynchronous courses.
By giving voice to individual professors, our findings illuminate a central issue: there is not yet consensus on what constitutes effective pedagogies for creating, implementing, and evaluating multimodal projects that instructors could use to connect theory and practice (Tour & Barnes, 2022). Although there are numerous assessment techniques discussed in the literature, there is a dearth of theoretical knowledge that explains the relationship between multimodal design in all of its stages and its evaluation. The absence of a shared understanding of effective pedagogies for multimodal assessment has led to inconsistent practices across disciplines. Without clear frameworks, faculty often rely on intuition rather than evidence-based strategies, limiting the pedagogical potential of multimodal work and hindering its broader adoption. Future research should develop empirically grounded frameworks that define effective multimodal pedagogy and guide faculty development, ensuring greater consistency in implementation and evaluation.
Although achieving full consensus may be challenging due to a diversity of semiotic practices across academic disciplines, clarifying core principles remains essential for fostering equitable and coherent teaching practices. Establishing a shared metalanguage and evidence base would strengthen faculty collaboration and advance a more rigorous and inclusive understanding of multimodal assessment practices in higher education.
This study delved into the philosophies and practices used by interdisciplinary university professors for multimodal teaching and evaluation. It specifically focused on what professors consider a multimodal assessment, why and how they created it, and how they incorporated it into their teaching. The participants saw multimodal projects as a collaborative and process-oriented activity that involves both instructors and students (Collier & Kendrick, 2016), who contextually orchestrate modes based on their access, knowledge, and purpose (Cope & Kalantzis, 2020; Gill & Stewart, 2024). They also emphasized the importance of involving students in the evaluation process to collectively determine the semiotic value of their designs for the success of multimodal projects. To further enhance student-centered pedagogies in multimodal contexts, it is crucial for researchers to conduct studies aimed at understanding students’ experiences with assessment processes, particularly how their lived experiences are represented or underrepresented in the construction of multimodal projects and assessments (Lawrence & Mathis, 2020). Considering the vital role of student perspectives in disrupting instructional practices that privilege dominant cultures and languages, it is essential that students have an equal voice in discussions about how they and their instructors can make multimodal assessment practices more inclusive and equitable.