Article

Evidence-Based Methods of Communicating Science to the Public through Data Visualization

1 Advanced Visualization Laboratory, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
2 Institute for Methods Innovation, D02RX2 Dublin, Ireland
3 School of Information Sciences, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(8), 6845; https://doi.org/10.3390/su15086845
Submission received: 19 February 2023 / Revised: 14 March 2023 / Accepted: 20 March 2023 / Published: 18 April 2023

Abstract

This essay presents a real-world demonstration of the evidence-based science communication process, showing how it can be used to create scientific data visualizations for public audiences. Visualizing research data can be an important science communication tool. Maximizing its effectiveness has the potential to benefit millions of viewers. As with many forms of science communication, creators of such data visualizations typically rely on their own judgments and the views of the scientists providing the data to inform their science communication decision-making. But that leaves out a critical stakeholder in the communications pipeline: the intended audience. Here, we show the practical steps that our team, the Advanced Visualization Lab at the University of Illinois at Urbana-Champaign, has taken to shift toward more evidence-based practice to enhance our science communication impact. We do this by using concrete examples from our work on two scientific documentary films, one on the theme of “solar superstorms” and the other focusing on the black hole at the center of the Milky Way galaxy. We used audience research with each of these films to inform our strategies and designs. Findings revealed specific techniques that were effective in information labels. For example, audiences appreciated the use of an outline of the Earth to demonstrate scale in scientific visualizations relating to the Sun. We describe how such research evidence informed our understanding of “what works and why” with cinematic-style data visualizations for the public. We close the essay with our key take-home messages from this evidence-based science communication process.

1. Introduction

It is increasingly recognized that science communication practice should employ an evidence-based approach to deliver impact. Jensen and Gerber (2020) [1] argued for changes in science communication norms and practices, including adapting engagement approaches to audience needs, proactively ensuring the inclusion of marginalized or under-represented groups and continually refining practices by using insights from robust evaluations and audience research (also see Kennedy et al., 2018 [2]). This manifesto for evidence-based science communication has been widely viewed and cited (approx. 40,000 views and 60 citations as of early 2023). However, we are lacking concrete examples of what evidence-based science communication looks like as a process. It is important to clarify what kind of evidence can be used to inform practice and how it can be used (e.g., Bucchi and Trench 2021 [3]). This paper presents the experience of a multidisciplinary team implementing evidence-based science communication in the visualization of scientific data for public audiences. We show how we identified aspects of our work to prioritize evaluation and then implemented a set of audience research studies to shape our design decision-making to increase our impact.
Taking an evidence-based approach means making visualization design choices not because they are the norm or because that is what the designers personally like or prefer to implement. Instead, audience needs, as revealed through existing studies, evaluations, or audience research, are the driving force behind visualization design. However, understanding what audiences need and using audience insights to inform practice can be challenging. We describe in this paper how we have undertaken this process and its implications for our practices.

2. Science Communication through Data Visualization

With rapid advances in science, data processing, and computer graphics technologies, the line between representations of “real science” and “Hollywood science” is blurring. For example, the film Interstellar made the news for contributing to the scientific understanding of astrophysics (James et al., 2015) [4]. But the blurring of lines goes both ways: science is making visual effects more believable, while visual effects are making science more widely accessible (Borkiewicz et al., 2022) [5]. A cinematic presentation of scientific data, or “cinematic scientific visualization”, can capture public attention and interest in complex science topics in this current age of rapidly growing media consumption (Jensen et al., 2022) [6]. Placing emphasis on aesthetic design, storytelling, and cinematography, these cinematic visualizations reach millions of viewers worldwide through entertainment and informal learning experiences, ranging from museums and documentary films to YouTube and Reddit. At the same time, these scientific visualizations are based on research data and are created in collaboration with scientists. The dataset may be collected from the real world via satellite, telescope, microscope, or some other instrument; or it may be simulated on a computer according to the laws of physics. This distinctive, public-oriented, film-based form of scientific visualization is driven by distinct goals, processes, and outcomes, making it a new and important frontier in science communication.
Cinematic scientific visualizations are situated between the categories of traditional scientific visualization and artistic scientific illustration, integrating a visualization of scientific data and a Hollywood-style artistic impression (see Figure 1). Cinematic scientific visualization is defined by its (1) use of scientific data, (2) aim of achieving intelligibility for a public audience, and (3) visual appeal (Borkiewicz et al., 2022) [5] (see Figure 2).
Traditional visualization, cinematic visualization, and scientific illustration are all types of science communication, but with distinct intended audiences. Traditional visualization is most suitable for scientific experts (e.g., via research publications), and scientific illustration is often targeted at nonexperts (e.g., via nonfiction and fiction films). Cinematic visualization is most often aimed at the general public, but recently, improved computer graphics tools (e.g., Borkiewicz et al., 2019 [7]) and machine learning (e.g., Aleo et al., 2020 [8]) have been starting to make cinematic-style visualization more accessible to a wider range of scientists.
Borkiewicz et al., 2019 [7] (p. 11) argued that science should “make use of visual effects tools [to] allow for the creation of higher-fidelity visualizations that meet the high bar set by modern cinema”. To attain this goal, cinematic scientific visualization designers must balance accuracy and intelligibility. Only when an image is sufficiently understandable and accurate can the focus shift to the final piece of the puzzle, namely aesthetic appeal. There are many challenges encountered in the process of going from raw data to visuals. Decisions on where to make omissions, simplifications, and abstractions or on where to add artistic elements are typically made by data visualizers alone or jointly between visualizers and scientists. Indeed, scientists may drive the process by offering particular narratives they would like the visualization to portray, so that the data visualizers’ task is to highlight those stories, possibly at the expense of other aspects. For example, in showing a system of extrasolar planets that were discovered via the dimming caused as each planet passed in front of its star, we as visualizers chose to exaggerate the planets’ sizes to make that process clear for viewers (see Figure 3).
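The scale exaggeration is needed because a transiting planet produces only a tiny dip in starlight, roughly the squared ratio of planetary to stellar radius, so true-to-scale planets would be nearly imperceptible on screen. The following minimal sketch illustrates this with standard textbook radii (not values from the visualized dataset):
```python
# Minimal sketch: why true-to-scale planets are hard to see in a transit scene.
# The fractional dimming during a transit is roughly (R_planet / R_star)^2.
# Radii below are standard textbook values, not taken from the visualized dataset.

R_SUN_KM = 695_700.0
R_EARTH_KM = 6_371.0
R_JUPITER_KM = 69_911.0

def transit_depth(r_planet_km: float, r_star_km: float = R_SUN_KM) -> float:
    """Fraction of starlight blocked when the planet crosses the stellar disk."""
    return (r_planet_km / r_star_km) ** 2

for name, r in [("Earth-size", R_EARTH_KM), ("Jupiter-size", R_JUPITER_KM)]:
    print(f"{name}: {transit_depth(r):.5%} dimming")
# Earth-size: ~0.008% dimming; Jupiter-size: ~1% -- both visually subtle,
# which is why the planets' sizes were exaggerated on screen.
```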
As with other types of science communication, there are sometimes negotiations between scientists and visualizers on how to proceed or which story to tell (Woodward et al., 2015 [9]). Although rarely implemented in practice at present, an evidence-based, strategic communication process can guide how best to make such design decisions. Later in this article, we will demonstrate this process by using detailed examples from our recent experience.

3. Developing Effective Visual Science Communication

This paper aims to clarify what evidence-based science communication means in practice, using the concrete example of developing scientific data visualizations for public audiences. Here, we present a case study of science communication using research data visualization drawn from our experience as a multidisciplinary team at the Advanced Visualization Lab (AVL) at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. The AVL produces cinematic scientific visualizations for films, documentaries, and museums to make complex scientific concepts understandable and exciting for broad public audiences. We are a “Renaissance team” (Cox 1988) [10] of four to six full-time visualization creators whose backgrounds include art and design, programming, and (more recently) science communication, and we work closely with scientists to bring their datasets to life. Our visualization work is funded by a variety of sources, including government agencies (e.g., the National Science Foundation, NASA), film companies (e.g., IMAX, Disney), philanthropic foundations (e.g., The Brinson Foundation), and contract work.
We take advantage of recent technological advances to bring a cinematic level of scientific visualization to the data to create more-appealing and more-accessible content. As an example, as of 2021, the documentary Solar Superstorms, coproduced by AVL, has been shown on international television in 16 countries, has been played in 86 planetariums, has 4.6 million views on YouTube alone, and is available for streaming on Amazon Prime and MagellanTV.
One of the key barriers to introducing evidence-based practices into science communication work for teams such as ours is having the budget or in-house capacity to conduct audience research and impact evaluations. Science communicators are often left to their own devices to design and conduct empirical evaluations, with limited training and support from their institutions or funders and often without in-house experts to call upon for advice (Jensen and Gerber 2020, p. 3) [1].
Here, we present our recent experiences with audience research, enabled through temporarily adding an in-house audience research/evaluation expert to the team in the form of a civic science fellow role, funded by The Brinson Foundation. In what follows, we summarize what we did and what we have learned to date. We highlight three types of evidence that have extended our efforts to maximize visual communication effectiveness and forestall negative outcomes for our audiences: (1) learning from existing research and theory, (2) quantitative audience research, and (3) qualitative audience research.

4. Informing Practice with Existing Research Evidence

Many important and relevant findings have already been established, so informing science communication does not always require gathering fresh evidence. Accessing insights from existing evidence of this kind takes only a modest investment of time. Motivated by the aim of fostering evidence-based practice, AVL has long consumed relevant research literature and theory to encourage reflexivity and refine our thinking about data visualization design via our regular “journal club”. This regular reading/discussion group extends beyond our team, periodically including other teams who do similar types of science communication with data visualization, such as NASA’s Scientific Visualization Studio.
Discussion topics at these regular meetings span a wide variety of domains, ranging from human perception (Healey and Enns 2012; Kong et al., 2019 [11,12]) to data ethics (Holland et al., 2018; Lee et al., 2021 [13,14]), photorealism and visual aesthetics (Spencer et al., 1995; Akbaba et al., 2021 [15,16]), documentary filmmaking (Takahashi 2017; Kouril et al., 2021 [17,18]), and science communication (Lee-Robbins et al., 2023; Franconeri et al., 2021; Jensen and Laurie 2016 [19,20,21]). In addition to a discussion of each paper and topic, the meetings conclude with a “show and tell” exercise to provide cross-team feedback on work-in-progress visualizations, applying lessons from the readings. This expert critique aims to keep our community abreast of new techniques and in line with established best practices.
Furthermore, the social scientist in our team conducted a systematic literature review aimed at investigating the existing knowledge on a high-priority design issue, that is, whether and to what extent explanatory labels should be used in our data visualizations for public audiences. To date, our team has avoided using such labels in our cinematic visualizations, on the basis of an anecdotal assumption that these would detract from the immersive nature of the visuals. Our social scientist’s review concluded that informational labels “may come with tradeoffs between broad accessibility and precise understanding for non-technical audiences” (Jensen et al., 2022, p. 4 [6]). This article also pointed to key factors that may influence audience reception of cinematic scientific visualizations, namely intelligibility, film content, and immersion. The research literature to date suggests that immersion can boost audience impact, especially emotional impact (Jensen et al., 2022 [6]). This literature review’s conclusions, therefore, left open the possibility that labels could be helpful, but it was not definitive. This suggested the need to undertake research directly with audiences to gain more-specific and more-actionable insights into the benefits of or limitations on introducing labels in AVL’s visualizations.
Informal audience feedback through public and professional demonstrations is one method that we use to obtain validation on completed visualizations and perform a “temperature check” on the effectiveness of our design decisions. This is a kind of evidence-based science communication practice. Through comments and questions raised by our audiences, we noticed a pattern in our more photorealistic visualizations: people were sometimes unsure whether what they were looking at was (a) a real photographic video, (b) an artistic illustration, or (c) a data visualization. Because of this informal feedback, this uncertainty is now an issue at the forefront of our minds when designing visualizations. We often address this concern by incorporating nonphotorealistic elements in our visualizations (Figure 4) and/or making clear in the documentary’s accompanying narration that we are showing a computational model.
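One simple way to signal “computational model, not footage” is the kind of transition shown in Figure 4, where a schematic grid render cross-fades into the photorealistic render over the opening seconds of a shot. The sketch below is a minimal illustration of such a cross-fade, not the AVL production pipeline; the frame rate, fade duration, and placeholder frames are assumptions.
```python
import numpy as np

# Minimal sketch (not the AVL pipeline): cross-fading from a schematic "data grid"
# render into a photorealistic render over the first few seconds of a shot,
# so viewers can see that the imagery is a computational model, not footage.

FPS = 30
FADE_SECONDS = 5.0

def blend_frames(grid_frame: np.ndarray, photoreal_frame: np.ndarray,
                 frame_index: int) -> np.ndarray:
    """Linearly blend two equally sized RGB frames; fully photoreal after the fade."""
    t = min(frame_index / (FADE_SECONDS * FPS), 1.0)  # 0 -> grid, 1 -> photoreal
    return (1.0 - t) * grid_frame + t * photoreal_frame

# Usage with placeholder frames standing in for real renders:
grid = np.zeros((1080, 1920, 3))
photo = np.ones((1080, 1920, 3))
frame_75 = blend_frames(grid, photo, frame_index=75)  # halfway through the fade
print(frame_75.mean())  # ~0.5
```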

5. Informing Practice with Quantitative Audience Research

To inform the science communication practices of AVL and the wider data visualization community, an online survey study was conducted to test the intelligibility of informational labels constructed by the AVL team. These labels were designed for cinematic data visualizations featured in the film Solar Superstorms. This film takes viewers into the tangle of magnetic fields and superhot plasma that vent the Sun’s surface energy in dramatic flares, violent solar tornadoes, and the largest eruptions in the solar system, namely coronal mass ejections.
The show features one of the most intensive efforts ever made to visualize the Sun’s inner workings, including a series of scientific visualizations computed on the Blue Waters supercomputer, based at the National Center for Supercomputing Applications, University of Illinois.

6. Methods

A pool of participants for this survey was recruited, primarily from the University of Illinois at Urbana-Champaign student community (n = 66, including 40 fully completed and 26 partially completed surveys). Prior to commencing the research, formal ethics approval was obtained from the University of Illinois at Urbana-Champaign’s human subjects review board. The survey was designed to measure different facets of audience response to the labels, using the following structure: (1) participants completed the consent and demographic sections of the survey, (2) participants were shown an approximately 10 min video containing a truncated version of the Solar Superstorms narrated documentary film that showed only the cinematic data visualizations (and cut out other aspects of the film), and (3) participants were asked to provide feedback on the overall film experience and the labels, with screenshots used throughout the survey to obtain focused feedback on each information label. The survey was conducted via the Qualia Analytics platform, using metrics evaluating the quality of the experience, visual appeal, and intelligibility. Survey items included semantic differentials, which evaluate audience responses between two opposing adjectives. In this case, the key adjective pairs for the semantic differentials were confusing/clear, interesting/uninteresting, and informative/uninformative.
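To illustrate how responses to such semantic-differential items can be summarized, the following is a minimal sketch with hypothetical data; the column names, the 1–7 scale, and the threshold for a “positive” rating are assumptions, not the actual survey fields or analysis.
```python
import pandas as pd

# Minimal sketch of summarizing semantic-differential items from a survey export.
# Column names and the 1-7 scale are hypothetical, not the actual survey fields.
responses = pd.DataFrame({
    "confusing_clear":           [6, 7, 5, 4, 6],   # 1 = confusing, 7 = clear
    "uninteresting_interesting": [7, 6, 6, 5, 7],
    "uninformative_informative": [6, 6, 7, 5, 6],
})

summary = pd.DataFrame({
    "mean": responses.mean(),
    "pct_positive": (responses >= 5).mean() * 100,  # share rating above the midpoint
})
print(summary.round(1))
```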

7. Results

At the outset, participants were asked to rate the overall quality of the film clip that they viewed, on a scale of 0 to 100. Audience assessments skewed positive (Figure 5), with a mean score of 84.95 out of 100 (n = 44).
When asked whether they remembered seeing explanatory labels during the film (before being shown screenshots), 87% of respondents said yes and 13% said no. Participants’ overall impression of the informational labels was positive, with 27% saying they were very clear, 46% clear, 22% neutral, and 5% very unclear (n = 37). At the overall level, participants were asked, “What, if anything, do you remember about the explanatory labels in the video clip?” Some had positive responses to this question, such as “[The labels] helped explain what was being shown”.
I remember thinking they were very helpful, especially because I do not know a lot about that kind of science. When watching, I would say to myself, what is that? Then a label would pop up explaining it, so I was happy with that.
Some noted that the labels interacted with other explanatory aspects of the film, such as the voiceover narration:
Though the audio was very clear, the explanatory labels give much more context to the video being played. Explanatory labels such as magnetic field and solar space make [the film] more informative and clear.
Others had fairly neutral responses when reporting what they remembered about the labels overall:
Some complementary information about the content, but I don’t remember about the details.

8. Effective Informational Labels

The informational label aimed at showing physical scale by using “Earth to scale” (Figure 6) was particularly well received by audience members.
Responses were overwhelmingly in the direction of viewing this explanatory label as interesting (85%), fascinating (88%), clear (81%), beautiful (75%), and informative (93%).
One respondent commented on this label, “I really like the Earth to scale one [because] it puts the Sun in perspective”. Another person highlighted that the label was the most memorable part of the film:
There had been an illustration of a small Earth at the bottom left-hand corner of the screen that was labeled “Earth (drawn to scale)” in white text, which was placed to show the size of the Sun in relation to Earth.
Indeed, multiple respondents noted this particular label when asked what they remembered from the whole film.
I liked seeing the “to scale” images. Helps give a scope to how massive [the Sun] really is.
For the following scene’s labels (Figure 7), we chose to show a simplified color table, describing “cool gas” as blue and “hot gas” as orange. In reality, the specific variables being visualized are “baryon overdensity”, in a color palette that spans teal, blue, and purple, and “temperature”, which is visualized in yellows, oranges, and reds. A relatively small fraction of the volume comprises high-density filaments (of mostly cool gas), and another relatively small fraction comprises hot gas (heated by radiation from the new stars that formed in those dense filaments). Both of those were important to the planned storyline for that scene, while the gas that was both low density and cool, occupying most of the volume, was less important. We chose transfer functions that made the nondense cool gas completely transparent, making it much easier for the eye to see detail in the places where it mattered. This level of explanation is often included in informational labels on traditional scientific visualizations, but we determined that “cool gas” and “hot gas” explained enough of the story to be approachable for our nonexpert audience.
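As a rough illustration of the simplification described above, the sketch below maps each sample’s overdensity and temperature to a color and opacity, rendering hot gas orange, dense cool gas blue, and leaving diffuse cool gas fully transparent. The thresholds, colors, and opacities are illustrative assumptions, not the transfer functions actually used in the visualization.
```python
import numpy as np

# Minimal sketch of a simplified transfer function: dense, cool gas renders blue;
# hot gas renders orange; diffuse cool gas stays fully transparent.
# Thresholds and colors are illustrative, not the production values.

BLUE   = np.array([0.2, 0.4, 1.0])
ORANGE = np.array([1.0, 0.5, 0.1])

def transfer_function(overdensity: np.ndarray, temperature_k: np.ndarray):
    """Return (rgb, alpha) arrays for each sample in a voxel grid or along a ray."""
    rgb = np.zeros(overdensity.shape + (3,))
    alpha = np.zeros_like(overdensity)

    hot = temperature_k > 1.0e5                  # "hot gas"
    dense_cool = (~hot) & (overdensity > 10.0)   # "cool gas" in dense filaments

    rgb[hot] = ORANGE
    rgb[dense_cool] = BLUE
    alpha[hot] = 0.8
    alpha[dense_cool] = 0.5
    # Everything else (diffuse cool gas) keeps alpha = 0 and stays invisible.
    return rgb, alpha

# Usage with tiny placeholder arrays:
den = np.array([[0.1, 50.0], [0.2, 5.0]])
temp = np.array([[3.0e5, 1.0e4], [2.0e4, 5.0e3]])
rgb, alpha = transfer_function(den, temp)
```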
This approach to informational labeling received a strongly positive quantitative response, where only a few individuals reported that the labels were somewhat confusing. There were also minimal audience concerns registered in the qualitative feedback.
The only negative qualitative comment worth highlighting was, “Sense of scale isn’t conveyed. It doesn’t give that sense of grandeur of just how long 50,000 light-years is”. This comment points to how challenging the process of devising appropriate informational labels can be when trying to balance immersive entertainment and an intelligible learning experience.

9. Ineffective Informational Labels

The primary focus of this audience research was to uncover problematic labels and gain insights into how to improve them. The next label (Figure 8) was rated as somewhat confusing by a small set of respondents.
However, the qualitative feedback did not provide enough of a basis to make specific changes:
I don’t know what I’m looking at, and the description doesn’t describe the entire picture.
For this reason, no change was made to this informational label.
Next, we present findings on a set of information labels that audience feedback identified as needing improvement.
For the “21 days” label (Figure 9), the fact that it appeared as a static image overlaid on the video for a period of a few seconds raised questions for some audience members about its precision:
I found the labels saying time amounts confusing, like what I am watching that occurred over 21 days?
This issue was raised by three participants in the qualitative feedback on the labels represented in this screenshot:
My problem is with the “Time Shown” part. When I was watching, there wasn’t a scale under the “time shown” like there is in this image. I’m guessing it was supposed to be an animation of a scale moving across but I still don’t understand it and why “21” exactly.
Time shown says 21 days, would help to know 21 days after what phenomenon?
In response, this static label was replaced with a progressive time label. The same kind of shift was used to address audience concerns about another label, as can be seen in the next example and in Figure 10. The following label received negative feedback from a minority of respondents for being confusing, uninteresting, and uninformative. This was the most negatively received set of labels, and some respondents questioned whether the label added sufficient value, for example describing it as “not necessary”:
The simulation [scientific visualization] is much more eye grabbing than the label and seems not that important.
A respondent who rated the label in this scene as uninteresting explained their claim as follows:
It’s just a simple label stating the length of the time lapse; It’s very cut and dry, to-the-point (which is a good thing), but objectively it’s just not an interesting label lol.
Another respondent simply said that they gave the information label this rating “because it’s not interesting”.
Those explaining their rating of this label as confusing provided more specifics on the problem and started to point toward possible solutions:
I just do not get the time labels. Like, maybe if they were counting down while the actions were occurring or if the narrator explained what they meant they would be more clear.
The timescale label’s lack of precision raised concerns for some audience members:
It doesn’t specify what time the clip mentioned here starts and what time it ends. So, it is hard to get an idea of how fast it is.
I just wasn’t sure what it meant by 12 h, is it like a time lapse over a period of 12 h?
The qualitative feedback pointed the way forward, suggesting that syncing the timescale label with the unfolding of the scene could provide a more precise and intelligible representation:
Would help to have details of the time for what is being displayed.
To address the concern that “It does not show how it progresses over time”, the team decided to introduce a progressive representation of the passage of time to give a more precise sense of the timescale involved. The new version of the timescale label shows hours progressively being marked as the scene unfolds, with the clock hands animating over a period of a few seconds.
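A minimal sketch of such a progressive timescale label follows: playback time is mapped to simulated elapsed hours so that the label and clock hand advance in sync with the scene. The frame rate, shot length, and 12 h span are placeholder values, not the production settings.
```python
# Minimal sketch (hypothetical numbers) of a progressive timescale label:
# map each playback frame to the simulated time elapsed so far, so the label
# counts up in sync with the scene instead of showing a single static value.

FPS = 30
SCENE_SECONDS = 8.0        # length of the shot on screen
SIM_HOURS_TOTAL = 12.0     # simulated time the shot represents

def time_label(frame_index: int) -> str:
    playback_fraction = min(frame_index / (SCENE_SECONDS * FPS), 1.0)
    hours_elapsed = playback_fraction * SIM_HOURS_TOTAL
    return f"Time shown: {hours_elapsed:4.1f} hours"

def clock_hand_angle_deg(frame_index: int) -> float:
    """Angle of a clock hand that sweeps once around the dial over the shot."""
    playback_fraction = min(frame_index / (SCENE_SECONDS * FPS), 1.0)
    return 360.0 * playback_fraction

print(time_label(120), clock_hand_angle_deg(120))  # halfway through the shot
```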
Another label aimed to depict the physical size being represented in the cinematic data visualization (Figure 11). In addition to the kilometer scale, an outline of Australia was shown to make the label more concrete and easily understood by audiences.
However, a few respondents in this US-based pool rated this label as confusing:
Why Australia for scale? Does not give a clear idea of scale.
What is equal to the length of Australia?
Why is it showing the length of Australia?
Such comments prompted the team to take a different approach for this label, replacing Australia with an outline of the Earth, aligned with a distant 3D feature that is the focal point of the scene (the expanding blue arc), to show the physical scale (Figure 12). The use of a three-dimensional object provides important depth cues: without these perspective cues, 2D markings can be quite misleading as a guide to scale in a deep 3D scene. In addition, the size of the label was more precisely calibrated to the scene in this final stage.
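Calibrating a scale reference of this kind amounts to placing an object of known physical size at the distance of the feature it annotates and computing its projected on-screen extent. The sketch below does this under a simple pinhole-camera model; the field of view, image width, and distance are illustrative assumptions, not the scene’s actual parameters.
```python
import math

# Minimal sketch of calibrating a scale reference in a 3D scene: place an object
# of known physical size (e.g., Earth's diameter) at the same distance as the
# feature it is meant to measure, then compute how many pixels it should span
# under a simple pinhole-camera model. Numbers are illustrative only.

EARTH_DIAMETER_KM = 12_742.0

def projected_size_px(object_size_km: float, distance_km: float,
                      horizontal_fov_deg: float = 60.0,
                      image_width_px: int = 1920) -> float:
    """On-screen width in pixels of an object seen face-on at a given distance."""
    angular_size = 2.0 * math.atan(object_size_km / (2.0 * distance_km))
    fov = math.radians(horizontal_fov_deg)
    focal_px = (image_width_px / 2.0) / math.tan(fov / 2.0)
    return 2.0 * focal_px * math.tan(angular_size / 2.0)

# An Earth outline aligned with a feature roughly 1.5 million km away:
print(projected_size_px(EARTH_DIAMETER_KM, 1_500_000.0))  # ~14 px wide
```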
These examples show how practical decision-making by science communication practitioners can be directly informed by evidence from audience research to improve clarity and intelligibility.

10. Informing Practice with Qualitative Audience Research

As part of the evidence-based approach for AVL’s work, a participatory qualitative study was undertaken to inform design choices for a new cinematic scientific visualization focusing on the black hole at the center of the Milky Way galaxy. This visualization used a dataset provided by the Galactic Center Group at UCLA, led by Professor and Nobel Prize winner Andrea Ghez.

11. Methods

These participatory focus groups were conducted in collaboration with an audience researcher at Science News, another science communication organization based in the United States. The Science News researcher recruited participants from STEM (science, technology, engineering, and math) organizations and through direct appeals on social media, aiming to recruit Black American participants aged 11–17 who were in middle and high school. Full informed consent procedures were followed, with a separate human subjects review approval for this research. In addition, participants and their legal guardians signed media consent/assent forms so that their images and identities could be presented as contributors to the completed data visualization. The overall focus group was shared between AVL and Science News, where separate sections gained input on science communication decision-making for their distinct projects.
The focus group used an A/B testing structure: after an introduction, at least two prerecorded video options were presented, first sequentially and then side by side, for participants to respond to. The focus group proceeded along the steps indicated in Table 1.
The video explaining the design choices on which the team was seeking feedback was prerecorded with visual guidance (Figure 13) and narration integrated.
This prepackaged approach was preferred over a live voiceover by the focus group moderator, to keep the moderator in a neutral position so that participants felt comfortable raising problems about the visualizations.

12. Results

The participants’ perspectives on each of the design decisions presented to them helped to guide AVL’s selection of visualization techniques. The AVL’s development of the visualization was ongoing during the focus group research period so that new decisions could be put in front of focus groups on an unfolding basis. This provided fresh, continuous external input from diverse audiences into the design process, improving its probability of delivering effective communication outcomes.
Here, we provide a flavor of the focus group input that informed AVL’s design decision-making in the preparation of this black hole scientific visualization. A series of design choices was presented to participants, who generally expressed clear preferences, with rationales, for one option over the other(s).

13. Camera Path

Respondents were presented with two choices for the camera path, or visual route, taking the viewer from Earth to the galactic center (Figure 14). The first, direct path went through the thick of the Milky Way, passing stars and gas clouds on the way (Camera Test A). The second path zoomed out first to show an overview of the Milky Way galaxy, before diving in toward the galactic center (Camera Test B). One participant described this second version as follows:
To me it appears as if the video is being captured outside in space, gradually descending maybe towards a particular planet.
(Emmanuel)
There was a clear preference for the option that first showed an overview perspective, before running headlong toward the galactic center.
Emmanuel: The second appears to be the better one for me. Because the first one- it was difficult for me to figure out what was going on. But the second one, I just found it easy for me to figure out what it was [about]. So for me, it was self-explanatory more than the first.
Int: Avis, did you have a view on which one was the better option?
Avis: Yeah, for me, Option B was better for me. Because with Option B, I loved the fact that it started with the galaxy. I could see the Milky Way. […] I loved the way it transitioned from the Milky Way to showing the orbits as they were spiraling around and then also the black hole. I think this was more self-explanatory than the first one.
[…] Bay: I also chose option B because it was self-explanatory.
The preference for Camera Test B was driven by the need to orient the viewer to the scene. The overview perspective in Camera Test B helped to orient the viewer, clarifying what they were looking at and reducing the feeling of being “lost”:
In the first one, I really did not get the idea of it being something that has to do with the galaxy. But with the second option, I could get that easily.
(Avis)
By contrast, the lack of an overview in Camera Test A reduced viewers’ ability to make sense of what they were seeing (traveling toward the center of the Milky Way galaxy from Earth):
For me, the Camera Test A appears like a cloud. It is just moving with different levels of cloud.
(Emmanuel)
There was a pattern in this perception that Camera Test B provided a more intelligible visual journey:
From the beginning, the image on Camera Test A was looking like clouds and little particles of stars, while Camera Test B was quite obvious because it captured the full view of the galaxy. So for me, I would prefer Camera Test B.
(Avis)
This camera path (Camera Test B), first providing an overview perspective, was selected on the basis of this kind of audience feedback.
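For readers interested in how such alternatives can be prototyped, a camera path can be defined as a short list of keyframes and sampled per frame for an A/B render. The following sketch uses placeholder coordinates and simple smoothstep easing; it is not AVL’s actual tooling, and the keyframes do not represent real galactic geometry.
```python
import numpy as np

# Minimal sketch (not AVL's actual tooling) of defining alternative camera paths
# as keyframe lists and sampling them for A/B comparison renders. Coordinates
# are placeholder scene units, not real galactic positions.

def sample_path(keyframes: np.ndarray, n_frames: int) -> np.ndarray:
    """Piecewise-linear camera positions through the keyframes, eased per segment."""
    positions = []
    segments = len(keyframes) - 1
    for f in range(n_frames):
        u = f / (n_frames - 1) * segments
        i = min(int(u), segments - 1)
        t = u - i
        t = t * t * (3 - 2 * t)  # smoothstep easing within each segment
        positions.append((1 - t) * keyframes[i] + t * keyframes[i + 1])
    return np.array(positions)

# Camera Test A: stay within the galactic disk; Camera Test B: rise out of the
# disk for an overview before diving toward the center (placeholder keyframes).
path_a = sample_path(np.array([[8.0, 0.0, 0.0], [4.0, 0.0, 0.1], [0.5, 0.0, 0.0]]), 240)
path_b = sample_path(np.array([[8.0, 0.0, 0.0], [6.0, 0.0, 5.0], [0.5, 0.0, 0.0]]), 240)
print(path_a.shape, path_b.shape)  # (240, 3) each
```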

14. Color Saturation

Another design choice presented to audiences (Figure 15) was whether to represent colors with a more-intense, color-saturated palette (Color Test A) or a paler color palette (Color Test B).
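One simple way to produce a paler variant like Color Test B from a saturated master such as Color Test A is to scale the saturation channel in HSV space, as in the sketch below; the scaling factor and placeholder frame are illustrative, and this is not necessarily how the test versions were generated.
```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

# Minimal sketch of producing a paler palette variant: take a saturated "master"
# frame and scale its saturation down. The scaling factor is illustrative.

def desaturate(frame_rgb: np.ndarray, factor: float = 0.5) -> np.ndarray:
    """Return a copy of an RGB image (values in [0, 1]) with reduced saturation."""
    hsv = rgb_to_hsv(frame_rgb)
    hsv[..., 1] *= factor          # channel 1 is saturation
    return hsv_to_rgb(hsv)

# Usage with a placeholder frame standing in for a real render:
color_test_a = np.random.rand(1080, 1920, 3)
color_test_b = desaturate(color_test_a, factor=0.5)
```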
There was a clear preference for the stronger, more-saturated color option (Color Test A):
Bay: Color Test A is more colorful; the colors are more pronounced than Color Test B
Int: […] Which one is the more interesting or beautiful?
Bay: Test A
The strength of color was understood as signifying originality, which was perceived positively:
Emmanuel: Color Test A appears to be the original because of how intense the color is. […] Color Test B seems to be a copy of Color Test A.
King: For me, I would also go for A. Just as Emmanuel said, B looks like a prototype of A.
Indeed, the theme that the less-intense color palette looked like a copy came up repeatedly (see Figure 15 for a side-by-side view of the color tests).
Definitely Color Test A is clear, and Color Test B seems like a washed-out version of [A]. […] [Color Test B] is like if you have something and it is just so washed out and [it makes you think,] “Is your screen is clear, or is supposed to be like that?”
(Martina)
However, the one exception was a participant’s view that the paler color palette seemed more “natural” or realistic:
I prefer Color Test B because it looks like the video is still in its natural state. If I am looking at a video of the galaxy, I would love to see it in its natural state—not with all the colors added.
(Avis)
Other participants agreed that they would support using the more “natural” color scheme.
Responding to the audience feedback, AVL decided to use a bold (rather than pale) color palette in the scientific visualizations for this project. These are examples of how qualitative evidence from audiences can influence design decision-making in science communication.

15. Conclusions: Toward Evidence-Based Practice

Jensen and Gerber (2020, p. 4) [1] made the case that increasing the “systematic use of evidence in science communication practice [could] maximize effectiveness and forestall negative impacts”. The primary purpose of this article has been to demonstrate how this kind of evidence-based science communication can be implemented in practice. Particularly when breaking new ground in science communication, professional intuition is likely to be insufficient to select the most effective approach from different plausible options (Jensen 2020) [22]. This is where empirical evidence can be used to guide practical decision-making. This article has provided worked examples of how systematic audience research can be used to feed into science communication choices for professionals to boost impact, with a focus on cinematic-style data visualizations.
Cinematic scientific visualization uses filmmaking techniques to present complex scientific datasets in a visually compelling way while prioritizing audience understanding. It currently occupies a sliver of space along the spectrum between traditional data visualization and scientific illustration. However, its broad appeal and ever-expanding technical feasibility will see it continue to grow as a field and gain ground in the overlapping domains of education and entertainment. The question now is, how can this form of science communication more effectively deliver impact? To answer this, we believe in taking an evidence-based communication approach to a project by using audience research to systematically test different design options.
Integrating impact evaluation capacity within our team has enabled a virtuous circle of mutual learning between science communication research and science communication practice. The practical challenges that our team faces when making design decisions to produce data visualizations for the general public are not unique. Many science communicators encounter similar decision points, and yet the science communication research literature does not provide clear guidance on which approach is better and why. The collaborative process that we have undertaken at the Advanced Visualization Lab has shown that such practical challenges can be addressed by using social research methods on a timescale that feeds into our ongoing work. As with every social research project, compromises between quality, depth, and practical constraints must be made to ensure that data can be used in the real world (Jensen and Laurie 2016) [21]. Working on issues that really matter in practice and working at the speed of practice are two of the key challenges that researchers face in making evidence-based science communication a reality.
Jensen (2015, p. 13) [23] argued that “using robust social scientific evidence […] to ensure success should be viewed as a basic necessity across the [science communication] sector”. Moreover, “using evidence in science communication practice, for example, by integrating impact evaluation, requires reflexivity and a willingness to reconsider established practices in light of the best available evidence” (Jensen and Gerber 2020, p. 3) [1]. On a practical level, our team’s aspirations to employ evidence-based approaches have long been limited by our access to audience research and evaluation expertise to deliver key insights into our design decision-making. As Jensen and Gerber (2020, p. 3) [1] point out, “developing understanding of relevant evidence and producing new evidence through evaluation requires know-how that is often inadequately developed in science communication teaching/training for practitioners”. Our experience reinforces the importance of including social science and evaluation skills within the range of capacities needed to create high-impact science communication. If such capacities are integrated into teams such as ours, evidence-based approaches to science communication can become the norm rather than the exception.

Author Contributions

Conceptualization, E.A.J.; Methodology, E.A.J. and K.B.; Validation, E.A.J.; Formal analysis, E.A.J.; Investigation, E.A.J.; Resources, K.B., S.L. and J.C.; Data curation, E.A.J.; Writing–original draft, E.A.J., K.B. and J.P.N.; Writing–review & editing, E.A.J., K.B., J.P.N., S.L. and J.C.; Visualization, E.A.J., K.B., S.L. and J.C.; Supervision, K.B. and J.P.N.; Project administration, E.A.J.; Funding acquisition, K.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The Brinson Foundation as part of the Civic Science Fellows program. The SciWise initiative (sciwise.org) also provided support via the survey instrument.

Institutional Review Board Statement

The studies were conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of the University of Illinois at Urbana-Champaign (protocol code 23080 dated 31 May 2022, and protocol code 23230 dated 25 August 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Acknowledgments

The authors thank all the research participants. We also thank Martina Efeyini and Science News for collaborating on the focus group research. User experience improvements to the survey instrument were made by Aaron M. Jensen.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jensen, E.A.; Gerber, A. Evidence-based science communication. Front. Commun. 2020, 4, 78. [Google Scholar] [CrossRef]
  2. Kennedy, E.B.; Jensen, E.A.; Verbeke, M. Preaching to the scientifically converted: Evaluating inclusivity in science festival audiences. Int. J. Sci. Educ. 2018, 8, 14–21. [Google Scholar] [CrossRef]
  3. Bucchi, M.; Trench, B. Rethinking science communication as the social conversation around science. J. Sci. Commun. 2021, 20, Y01. [Google Scholar] [CrossRef]
  4. James, O.; von Tunzelmann, E.; Franklin, P.; Thorne, K.S. Gravitational lensing by spinning black holes in astrophysics, and in the movie Interstellar. Class. Quantum Gravity 2015, 32, 065001. [Google Scholar] [CrossRef]
  5. Borkiewicz, K.; Jensen, E.A.; Levy, S.; Naiman, J.P.; Carpenter, J. LSE Impact Blog. Introducing Cinematic Scientific Visualization: A new Frontier in Science Communication. Available online: https://blogs.lse.ac.uk/impactofsocialsciences/2022/03/16/introducing-cinematic-scientific-visualization-a-new-frontier-in-science-communication (accessed on 16 March 2022).
  6. Jensen, E.A.; Borkiewicz, K.M.; Naiman, J.P. A new frontier in science communication? What we know about how public audiences respond to cinematic scientific visualization. Front. Commun. 2022, 7, 840631. [Google Scholar] [CrossRef]
  7. Borkiewicz, K.; Naiman, J.P.; Lai, H. Cinematic visualization of multiresolution data: Ytini for adaptive mesh refinement in Houdini. Astron. J. 2019, 158, 1–18. [Google Scholar] [CrossRef]
  8. Aleo, P.D.; Lock, S.J.; Cox, D.J.; Levy, S.A.; Naiman, J.P.; Christensen, A.J.; Borkiewicz, K.; Patterson, R. Clustering-informed cinematic astrophysical data visualization with application to the Moon-forming terrestrial synestia. Astron. Comput. 2020, 33, 10424. [Google Scholar] [CrossRef]
  9. Woodward, K.; Jones, J.P.; Vigdor, L.; Marston, S.A.; Hawkins, H.; Dixon, D.P. One Sinister Hurricane: Simondon and Collaborative Visualization. Ann. Assoc. Am. Geogr. 2015, 105, 496–511. [Google Scholar] [CrossRef]
  10. Cox, D.J. Renaissance Teams and Scientific Visualization: A Convergence of Art and Science. In Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques, Atlanta, GA, USA, 1–5 August 1988; pp. 83–103. [Google Scholar]
  11. Healey, C.G.; Enns, J.T. Attention and Visual Memory in Visualization and Computer Graphics. IEEE Trans. Vis. Comput. Graph. 2012, 18, 1170–1188. [Google Scholar] [CrossRef] [PubMed]
  12. Kong, H.; Liu, Z.; Karahalios, K. Trust and recall of information across varying degrees of title-visualization misalignment. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019. [Google Scholar] [CrossRef]
  13. Holland, S.; Hosny, A.; Newman, S.; Joseph, J.; Chmielinski, K. The Dataset Nutrition Label: A Framework to Drive Higher Data Quality Standards. arXiv 2018, arXiv:1805.03677. [Google Scholar]
  14. Lee, C.; Yang, T.; Inchoco, G.D.; Jones, G.M.; Satyanarayan, A. Viral Visualizations: How Coronavirus Skeptics Use Orthodox Data Practices to Promote Unorthodox Science Online. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021. [Google Scholar] [CrossRef]
  15. Spencer, G.; Shirley, P.; Zimmerman, K.; Greenberg, D.P. Physically-based glare effects for digital images. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 6–11 August 1995. [Google Scholar] [CrossRef]
  16. Akbaba, D.; Wilburn, J.; Nance, M.T.; Meyer, M. Manifesto for Putting “Chartjunk” in the Trash 2021! arXiv 2021, arXiv:2109.10132. [Google Scholar]
  17. Takahashi, T. Data Visualization as Documentary Form: The Murmur of Digital Magnitude. Discourse 2017, 39, 376–396. [Google Scholar] [CrossRef]
  18. Kouril, D.; Strnad, O.; Mindek, P.; Halladjian, S.; Isenberg, T.; Groeller, E.; Viola, I. Molecumentary: Adaptable narrated documentaries using molecular visualization. IEEE Trans. Vis. Comput. Graph. 2021, 29, 1733–1747. [Google Scholar] [CrossRef] [PubMed]
  19. Lee-Robbins, E.; Adar, E. Affective Learning Objectives for Communicative Visualizations. IEEE Trans. Vis. Comput. Graph. 2023, 29, 1–11. [Google Scholar] [CrossRef] [PubMed]
  20. Franconeri, S.L.; Padilla, L.M.; Shah, P.; Zacks, J.M.; Hullman, J. The Science of Visual Data Communication: What Works. Psychol. Sci. Public Interest 2021, 22, 110–161. [Google Scholar] [CrossRef] [PubMed]
  21. Jensen, E.A.; Laurie, C. Doing Real Research: A Practical Guide to Social Research; SAGE: London, UK, 2016. [Google Scholar]
  22. Jensen, E.A. Why impact evaluation matters in science communication: Or, advancing the science of science communication. In Science Communication in South Africa: Reflections on Current Issues; African Minds: Cape Town, South Africa, 2020; pp. 213–228. [Google Scholar]
  23. Jensen, E. Highlighting the value of impact evaluation: Enhancing informal science learning and public engagement theory and practice. J. Sci. Commun. 2015, 14, Y05. [Google Scholar] [CrossRef]
Figure 1. Exemplars of traditional and cinematic scientific visualization compared with scientific illustration.
Figure 2. Hierarchy of cinematic scientific visualization needs.
Figure 3. Example of scale exaggeration, necessary to show the multiple planets in this exoplanetary system.
Figure 4. Example of a visualization showing the data’s underlying grid. Notes: (left)—one frame of the video, showing data grids; (right)—5 s into the video, the grids transition into a photorealistic data visualization.
Figure 5. Audience assessment of video clip quality. Higher scores (redder) are more favorable.
Figure 6. Cinematic data visualization of a coronal mass ejection, with label “Earth to scale”.
Figure 7. Cinematic data visualization showing galaxy formation, with label to indicate scale and gas type.
Figure 8. Cinematic data visualization showing development of a coronal mass ejection.
Figure 9. Cinematic data visualization of the Sun, with label indicating timescale.
Figure 10. Cinematic data visualization showing development of a coronal mass ejection.
Figure 11. Cinematic data visualization of internal workings of the Sun, with label showing scale.
Figure 12. Cinematic data visualization of internal workings of the Sun, with an alternative label showing scale.
Figure 13. Instruction for participatory research focus group to inform design choices.
Figure 14. Example of videos used in focus groups with audiences to obtain feedback on visualization design options for camera path. In this example, Camera Test A kept the camera within the disk of the Milky Way galaxy, while Camera Test B flew out of the disk.
Figure 15. Example of videos used in focus groups. A/B options were shown to audiences to obtain feedback on the visualization color palette.
Table 1. Focus group structure.

Step 1
Focus group moderator instruction: Play a short video of an AVL designer describing the cinematic scientific visualization scene on which we are seeking feedback. This includes (a) a lead-up to the scene/context; (b) the scene’s objective; and (c) clarity on where to focus, an explanation of the work-in-progress nature of the visualization, and the options on the table.
Rationale: We avoided having the researcher who was moderating the focus group also present information about the visualizations or what they were aiming to show. This was intended to allow participants to openly criticize any aspect of the visualization without fear of offending the researcher (who adopted a neutral tone/position).
Follow-up questions: After the introductory content is played, pause to check that everyone is clear on the instructions and encourage people to ask the moderator to pause, comment, or replay something as needed.

Step 2
Focus group moderator instruction: Sequentially show the A/B comparison, asking for comments during the video and after each video has been shown.
Rationale: The aim is to understand how data visualizations A and B are perceived in turn, before making direct comparisons. This is intended to surface details in audience perceptions that may be lost in the direct comparison.
Follow-up questions: While test versions A and B are playing, gently request initial responses to what the audience is seeing so far: “What are you seeing here? Does anything stand out?”

Step 2a
Focus group moderator instruction: After playing test version A, pause to ask about general associations with the design that has just been shown: “What comes to mind when you think about what you just saw?”
Rationale: Here, the aim is to understand the salient features of the visualization. We are also aiming to uncover any limitations on intelligibility or other negative aspects for viewers.
Follow-up questions: “What was that video clip showing us?”; “What did you think about how it played out? Interesting? Clear?”; “Did anything seem weird or bother you at all in that clip?”

Step 2b
Focus group moderator instruction: Seek further detail on the basis of preferences between A and B: “Let’s discuss your views on the video clip (Test A) you just saw”.
Rationale: Here, the goal is to gather further feedback that is based on quality criteria specific to this kind of science communication (see Borkiewicz et al., 2022 [5]).
Follow-up questions: Probe views on key dimensions: visual appeal (“Beautiful”), intelligibility (“Clear and easy to understand”), and scientific realism (“Does this seem scientifically accurate to you?” and “If you were there, do you think this is what it would look like?”).

Step 2c
Focus group moderator instruction: After playing test B, pause to ask about the initial perceptions of the design that has just been shown: “What comes to mind when you think about what you just saw?”
Rationale: Here, the aim is to understand the salient features of the visualization. We are also aiming to uncover any limitations on intelligibility or other negative aspects for viewers.
Follow-up questions: “What was that video clip showing us?”; “What did you think about how it played out? Interesting? Clear?”; “Did anything seem weird or bother you at all in that clip?”

Step 3
Focus group moderator instruction: Play the side-by-side comparison video for test versions A and B, and seek further comments comparing the design options.
Rationale: This part of the focus group is designed to draw out further details on the comparison between A and B. This will help to clarify why one option is superior to the other, given the aims of the visualization.
Follow-up questions: “Do you have any other thoughts on the comparison between A and B?”

Remaining steps: Repeat the above process with the next design decision set. If all feedback has already been collected on all design decisions that have been prepared, close the focus group.