Article

Mobile Selective Exposure: Confirmation Bias and Impact of Social Cues during Mobile News Consumption in the United States

by Morgan Quinn Ross 1,*, Jarod Crum 1, Shengkai Wang 1 and Silvia Knobloch-Westerwick 2
1 School of Communication, The Ohio State University, Columbus, OH 43210, USA
2 Mediated Communication (Web Science), Technische Universität Berlin, 10623 Berlin, Germany
* Author to whom correspondence should be addressed.
Journal. Media 2023, 4(1), 146-161; https://doi.org/10.3390/journalmedia4010011
Submission received: 20 December 2022 / Revised: 12 January 2023 / Accepted: 20 January 2023 / Published: 28 January 2023
(This article belongs to the Special Issue Mobile Politics)

Abstract

Concerns about online news consumption have proliferated, with some evidence suggesting a heightened impact of the confirmation bias and social cues online. This paper argues that mobile media may further shape selective exposure to political content. We conducted two online selective exposure experiments to investigate whether browsing political content on smartphones (vs. computers) facilitates selective exposure to attitude-consistent vs. attitude-discrepant articles (confirmation bias) with high vs. low views (impact of social cues). Notably, these studies leveraged novel random assignment techniques and a custom-designed, mobile-compatible news website. Using a student sample, Study 1 (N = 157) revealed weak evidence that the confirmation bias is stronger on smartphones than computers, and the impact of social cues was similar across devices. Study 2 (N = 156) attempted to replicate these findings in a general population sample. The impact of social cues remained similar across devices, but the confirmation bias was not stronger on smartphones than computers. Overall, the confirmation bias (but not the impact of social cues) manifested on smartphones, and neither outcome was consistently stronger on smartphones than computers.

1. Introduction

Many scholars argue that Americans are becoming increasingly polarized (Abramowitz and Saunders 2008; Iyengar et al. 2019), with consequences for democracy and civic society (Sunstein 2009). It is often argued that polarization is in part shaped by selective exposure (e.g., Stroud 2010) or “any systematic bias in selected messages that diverges from the composition of accessible messages” (Knobloch-Westerwick 2014, p. 3). Notably, scholars have highlighted that people choose sources and articles that align with their political attitudes and ideologies, which in turn strengthen those same ideologies (Levendusky 2013). This confirmation bias in selective exposure reduces exposure to content that is not consistent with pre-existing attitudes, resulting in increased polarization (Sunstein 2009). Crucially, the confirmation bias may be heightened in the online domain, where people can choose from a wider variety of sources (Pearson and Knobloch-Westerwick 2019).
The online domain is also notable for its inclusion of social cues, or signals of the endorsement of others (e.g., views, likes, shares) that accompany most online news articles (Porten-Cheé et al. 2018). Besides attitude consistency, social cues are another commonly studied factor that can guide selective exposure (Haim et al. 2018). Critically, because online news articles entail varying levels of attitude consistency and social cues—and the confirmation bias and impact of social cues may even interact—studies that only consider one or the other may not paint an accurate picture of online news consumption (Messing and Westwood 2014; Mothes and Ohme 2019).
Although several researchers have attributed the rise of polarization to changing news consumption patterns afforded by new media (e.g., Bennett and Iyengar 2008), fewer studies have examined what is now the most common form of news consumption: mobile. Especially prominent since the advent of smartphones (Westlund 2015), mobile news consumption recently eclipsed online news consumption via desktop and laptop computers (Walker 2019). The current studies examine whether this shift in the devices people use to consume the news entails changes in how they engage with it.
To do so, the current studies follow a mix-of-attributes approach to studying new media (Eveland 2003; Ohme 2020), considering particular attributes of mobile media that contribute to selective exposure rather than the novelty of new media writ large. Specifically, they focus on the small screens and personal nature of mobile media. The current studies argue that these attributes of mobile media will facilitate the confirmation bias, but not the impact of social cues, on smartphones (vs. computers).
Therefore, the current studies investigate mobile selective exposure. Study 1 compares the confirmation bias and impact of social cues on computers and mobile devices among college students and Study 2 attempts to replicate Study 1 in a general population sample. Both studies utilize novel approaches to random assignment of browsing device and a custom-designed, mobile-compatible news website to support the causality and ecological validity of the findings.

1.1. Confirmation Bias

The confirmation bias refers to spending more time on or selecting more attitude-consistent than attitude-discrepant information. Early communication research identified the confirmation bias in the context of political campaigns (Lazarsfeld et al. 1944). Subsequent research, mostly based on Festinger’s (1957) cognitive dissonance theory, found moderate empirical evidence for the phenomenon, but the confirmation bias has been particularly robust in the last two decades. In particular, the online domain may amplify the confirmation bias (Bennett and Iyengar 2008), as selectivity is enhanced due to the wider array of available content and non-linear website structure (Pearson and Knobloch-Westerwick 2019). The current studies, which test the confirmation bias online, thus pose the following hypothesis:
H1. 
Participants will exhibit the confirmation bias.
Critically, enacting the confirmation bias is cognitively demanding and tied to identity. To select an attitude-consistent article, users typically read headlines and determine whether they align with their attitudes (cf. Westerwick et al. 2017). This procedure is cognitively taxing, which explains why the confirmation bias fails to materialize when participants complete a concurrent distractor task (Jang 2014). Further, the confirmation bias is related to one’s identities, including personal traits and social group memberships (Oyserman 2001), in that people often select information that aligns with their salient identities. For example, American opponents of immigration select more anti-immigration news when their national identity is primed (Wojcieszak and Garrett 2018). Therefore, to the extent that the small screen and personal nature of mobile media affect cognitive processing and identity, the confirmation bias may differ in the mobile domain.

Confirmation Bias on Smartphones

A wide body of work indicates that people find it challenging to successfully navigate small screens (Ofcom 2018). Users find it more difficult to type (Bao et al. 2011; Kamvar and Baluja 2006), read (Molyneux 2015; Napoli and Obar 2014), and search for (Chae and Kim 2004; Ghose et al. 2013) information on mobile media. Moreover, users display less cognitive arousal and spend less time reading news articles and fixating on news content on smartphones compared with desktop computers (Dunaway et al. 2018; Dunaway and Soroka 2019). This work suggests that the physical constraints of using small screens could deplete cognitive resources that would otherwise be used to enact the confirmation bias. However, in a more direct test of the effect of small screens on cognitive processing, Kim and Sundar (2016) found that smaller screens were associated with greater systematic processing. They argued that the ease of using larger screens may render them distracting and less conducive to more effortful processing.
In addition, the personal nature of mobile media could encourage the selection of attitude-consistent content. To the extent that smartphones are perceived as extensions of the self (Ross and Bayer 2021), it is proposed that mobile users’ identities are more salient when using these devices. In turn, smartphone browsers may be more likely to select content that aligns with these identities, such as attitude-consistent content (see Wojcieszak and Garrett 2018). The personal nature of mobile media may thus facilitate the confirmation bias.
In sum, small screens may promote effortful processing (and in turn the confirmation bias) and the personal nature of mobile media may encourage the selection of identity-consistent content. Taken together, the confirmation bias is expected to persist, and may in fact be strengthened, on mobile devices. The current studies, therefore, pose the following two hypotheses:
H2. 
Participants will exhibit confirmation bias on smartphones.
H3. 
The confirmation bias will be stronger on smartphones than on computers.

1.2. Impact of Social Cues

The impact of social cues refers to spending more time on or selecting more information with higher social cues. With the shift towards online news consumption, social cues have grown in prominence. Legacy media websites, social media, and smartphone apps often use a variety of social cues to sort their content and entice their consumers (Messing and Westwood 2014). In general, people believe that socially endorsed information is likely to be relevant and credible (Chaiken 1987; Sundar and Nass 2001). However, there are myriad types of social cues (Haim et al. 2018). Notable dimensions of social cues include whether the endorsement is explicit (Claypool et al. 2001) and whether the endorsers are known (Kaiser et al. 2021). The current studies opted to operationalize social cues as views, which represent implicit and unknown endorsement. Previous work has demonstrated that the inclusion of views (Ohme and Mothes 2020) and a higher number of views (Knobloch-Westerwick et al. 2005) increases selective exposure. Moreover, as the current studies utilize a news website rather than a social media platform (Knobloch-Westerwick 2014), the choice of views (vs. likes) is more ecologically valid. The current studies thus pose the following hypothesis:
H4. 
Participants will exhibit the impact of social cues.
Crucially, in contrast to the confirmation bias, social cues neither require effortful processing nor clearly relate to identity. Social recommendations are easy to process, often consisting of a single number indicating the popularity of an article (Knobloch-Westerwick et al. 2005). In addition, it is unclear whether content with high social cues would be more consistent with one’s identities than content with low social cues. For example, people who view themselves as “popular” may select content with high social cues, whereas those who view themselves as “unique” may select content with low social cues (Brewer 1991). Therefore, even if the small screen and personal nature of mobile media impact cognitive processing and identity, the impact of social cues may not differ in the mobile domain.

Impact of Social Cues on Smartphones

As detailed above, small screens appear to support more effortful processing, whereas social cues would likely play a larger role in selective exposure if users relied on less effortful processing. Furthermore, although the personal nature of mobile media may encourage the selection of identity-consistent content, it is unclear whether such content would be accompanied by high or low social cues. Taken together, the impact of social cues is expected to persist, but may not be stronger or weaker, on mobile devices. The current studies therefore pose the following hypothesis and research question:
H5. 
Participants on smartphones will exhibit the impact of social cues.
RQ1. 
Will the impact of social cues differ between smartphones and computers?

1.3. Confirmation Bias and Impact of Social Cues

Finally, scholars have argued that social cues can function to “override” the confirmation bias, such that high social cues entice people to select attitude-discrepant content (Messing and Westwood 2014; Mothes and Ohme 2019). This effect seems most likely to occur in the online domain, where attitude consistency and social cues vary. However, to the extent that selective exposure processes differ in the mobile domain as outlined above, it is important to explore how the confirmation bias and impact of social cues interact in the mobile domain. Thus, we pose the following research question:
RQ2. 
How do the confirmation bias and impact of social cues interact on smartphones versus computers?
Study 1 and Study 2 test these hypotheses and research questions in a student sample and a general population sample, respectively. The data that support the findings of this study are openly available, along with preregistrations,1 analysis scripts, and Supplementary Materials, at https://osf.io/2zwdn/.

2. Study 1

2.1. Overview

A 2 × 2 × 2 online selective exposure experiment was conducted with a student sample, with browsing device (computer vs. smartphone) as a between-subjects factor and attitude consistency (attitude consistent vs. discrepant) and social cues (high vs. low views) as within-subjects factors. During the study, participants were randomly assigned to enter a short URL to a custom-designed, mobile-compatible news website on either a smartphone or a computer. Crucially, participants indicated prior to random assignment whether they could access both devices, supporting random assignment.

2.2. Method

2.2.1. Participants

The sample was recruited among students at The Ohio State University in the United States from March to September 2020. Participants were recruited using a participant pool of students in communication courses as well as through instructor announcements in communication courses. They received extra credit for completing the study.
After exclusions, our final sample consisted of 157 participants (see Supplementary Materials for full reporting).2 Participants in the final sample were 22.1 years old on average (SD = 5.2). Fifty-seven participants identified as male, 97 as female, and 1 as other. Ninety-nine participants had a high school diploma, 36 had an associate degree, and 20 had a bachelor’s degree. Participants in the final sample differed from participants who dropped out in terms of age, but not gender or education; younger participants were more likely to drop out (see Supplementary Materials). An additional sample was recruited at the same university to pretest the stimuli. This sample consisted of 57 participants who were on average 22.8 years old (SD = 3.5).

2.2.2. Procedure

This research was approved by the Institutional Review Board at the first author’s institution in 2020. Participants completed a consent form and baseline questionnaire on a device of their choosing, providing information regarding their political attitudes and available devices. They were then randomly assigned to complete the selective exposure task on a computer or a smartphone. If participants did not have a working computer or smartphone at hand, they completed the task on the available device (but were excluded from analyses). Participants (computer: n = 85; smartphone: n = 72) spent three minutes browsing the website, but they were not told how or how long to browse the website. Finally, participants completed the endpoint questionnaire. See Supplementary Materials for more information about the procedure.
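For concreteness, the assignment logic described above can be sketched as follows (Python; hypothetical function and variable names, not the actual study software): participants with access to both devices are randomized, while participants with access to only one device complete the task on that device and are flagged for exclusion.
```python
import random

def assign_device(has_computer, has_smartphone):
    """Illustrative sketch of the device-assignment rule described above.

    Returns the device the participant should use and whether they remain
    eligible for analysis (i.e., whether random assignment was possible).
    """
    if has_computer and has_smartphone:
        # True random assignment is only possible when both devices are at hand.
        return random.choice(["computer", "smartphone"]), True
    if has_computer:
        return "computer", False   # completes the task, but excluded from analyses
    if has_smartphone:
        return "smartphone", False  # completes the task, but excluded from analyses
    raise ValueError("Participant reported no working device.")

# Example: a participant with access to both devices is randomized.
device, eligible = assign_device(has_computer=True, has_smartphone=True)
```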

2.2.3. Stimuli and Stimuli Pretest

The stimuli and website were adapted from prior research (Pearson and Knobloch-Westerwick 2019). The stimuli (i.e., news articles) were displayed on a mobile-compatible, custom-designed news website entitled The Compilation. Participants were shown eight articles on the home page. The articles covered eight political topics: abortion, business regulation, affirmative action, social welfare, the death penalty, gun control, the minimum wage, and gay marriage. Each participant viewed either a liberal or conservative article on each topic (randomized by the software), with four liberal and four conservative articles overall. Thus, each participant saw approximately four attitude-consistent articles (M = 3.94, SD = 1.12). Half of the articles were assigned a high social cue (6.8 K, 7.4 K, 7.7 K, or 8.1 K views) and half were assigned a low social cue (319, 482, 814, or 920 views), such that two liberal and two conservative articles had high social cues. Articles were displayed in a random order and the association of social cues with article issues and stances was randomized.
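To make the stimulus randomization concrete, the sketch below (Python; illustrative only, with hypothetical names rather than the study software) generates one participant's overview page under the constraints described above: four liberal and four conservative articles, two high and two low social cues within each stance, shuffled cue-topic pairings, and a random display order.
```python
import random

TOPICS = ["abortion", "business regulation", "affirmative action", "social welfare",
          "death penalty", "gun control", "minimum wage", "gay marriage"]
HIGH_VIEWS = ["6.8K", "7.4K", "7.7K", "8.1K"]  # high social cues
LOW_VIEWS = ["319", "482", "814", "920"]       # low social cues

def build_overview_page():
    """Return one participant's randomized set of eight articles (illustrative)."""
    # Each topic shows either its liberal or conservative version,
    # with exactly four liberal and four conservative articles overall.
    stances = ["liberal"] * 4 + ["conservative"] * 4
    random.shuffle(stances)

    # Two liberal and two conservative articles receive a high social cue;
    # the specific view counts are shuffled so cue-topic pairings vary.
    cue_levels = {"liberal": ["high", "high", "low", "low"],
                  "conservative": ["high", "high", "low", "low"]}
    for levels in cue_levels.values():
        random.shuffle(levels)
    high_pool = random.sample(HIGH_VIEWS, 4)
    low_pool = random.sample(LOW_VIEWS, 4)

    articles = []
    for topic, stance in zip(TOPICS, stances):
        level = cue_levels[stance].pop()
        views = high_pool.pop() if level == "high" else low_pool.pop()
        articles.append({"topic": topic, "stance": stance, "views": views})

    random.shuffle(articles)  # articles appear in random order on the page
    return articles
```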
The overview page of the website consisted of a header with The Compilation logo and the titles and leads of the eight articles. Font, font size, and font color were identical across conditions. The header in the computer condition included a search bar and the header in the smartphone condition included a hamburger icon. Furthermore, in the computer condition, the articles were displayed in two columns, while in the smartphone condition, the articles were displayed in a single column. Each article on the overview page consisted of a tile, view number cue, and short lead description. Participants could click on a tile to be directed to the article’s content page, which included a headline, summary, and article text. They returned to the overview page by clicking a Back to Overview button. After three minutes, participants were instructed to return to the questionnaire (screenshots of stimuli can be found in Figure 1 and on the OSF page).
The article texts were extracted from news websites (e.g., Washington Post). They were adjusted to be the same length as typical news articles (M = 675; SD = 10.3). The topics were among the most important (Doherty et al. 2016) and ideologically contentious issues in U.S. politics (Newport and Dugan 2017). Article titles and leads were reworded to present a clear conservative or liberal stance for each topic. Titles ranged from 8 to 10 words; leads were between 26 and 28 words. All titles and leads were pretested previously to ensure the political stance could be easily perceived (Pearson and Knobloch-Westerwick 2019). Liberal titles and leads were perceived as significantly more liberal than conservative titles and leads, but not more interesting or typical; see the Supplementary Materials for full reporting. Additionally, in a separate pretest (McKnight 2018), the numbers of views associated with high and low social cues were perceived as distinct. They also align with other work on social cues (Dvir-Gvirsman 2019; Li et al. 2020; Ohme and Mothes 2020).

2.2.4. Measures

The Supplementary Materials contain full scales and descriptive statistics for all measures.
2.2.4.1. Selective Exposure. Selective exposure was tracked unobtrusively by software (e.g., Knobloch-Westerwick 2015). The time spent on each article and the number of times participants clicked on each article were recorded. Across devices, participants read about three articles (M = 3.22; SD = 2.00), spending about 32 seconds on each article they read (M = 31.9; SD = 13.8) and an additional 76 seconds (M = 76.4; SD = 43.9) on the overview page. Selective exposure was operationalized in two ways to ensure robustness. It was operationalized for the four types of articles (e.g., attitude-consistent and high social cues) as (1) the proportion of each article type that was selected and (2) the average time spent reading each article type.
2.2.4.2. Political Attitudes. Participants reported their attitudes on ten political issues (the eight topics from the selective exposure website and two control issues) in the baseline questionnaire. Participants were asked to “rate how [they] feel personally about” 10 positions with a slider scale from 0 to 100 initially set to 50, where only the anchor labels were shown (0: Strongly oppose; 100: Strongly agree). Responses were used to determine attitude consistency. Attitudes at the midpoint of 50 (1.8% of responses) were excluded, as they indicate neither support nor opposition.
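A minimal pandas sketch of these two operationalizations is shown below. It uses a hypothetical per-article log and assumes, for illustration only, that attitudes are scored toward the liberal position of each issue and that reading time is averaged over selected articles; the actual coding in the shared analysis scripts may differ.
```python
import pandas as pd

# Hypothetical per-article log: one row per article shown to a participant.
log = pd.DataFrame({
    "participant": [1, 1, 1, 1],
    "stance":      ["liberal", "conservative", "liberal", "conservative"],
    "cue":         ["high", "low", "low", "high"],
    "attitude":    [80, 80, 20, 35],  # 0-100 toward the liberal position (assumed scoring)
    "selected":    [1, 0, 0, 1],
    "seconds":     [45.0, 0.0, 0.0, 28.0],
})

# Drop midpoint attitudes (50) and code attitude consistency.
log = log[log["attitude"] != 50].copy()
liberal_attitude = log["attitude"] > 50
log["consistency"] = (
    (liberal_attitude & (log["stance"] == "liberal")) |
    (~liberal_attitude & (log["stance"] == "conservative"))
).map({True: "consistent", False: "discrepant"})

# Outcome 1: proportion of each article type that was selected.
prop_selected = (log.groupby(["participant", "consistency", "cue"])["selected"]
                    .mean().rename("prop_selected"))

# Outcome 2: average reading time per article type (here, over selected articles).
time_spent = (log[log["selected"] == 1]
                .groupby(["participant", "consistency", "cue"])["seconds"]
                .mean().rename("mean_seconds"))
```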
2.2.4.3. Control Variables. In all analyses, we controlled for the number of attitude-consistent articles displayed on the selective exposure website. We also examined additional controls, such as need for cognition, political interest, and mobile news preference, which were identified by prior research (Pearson and Knobloch-Westerwick 2019) to influence selective exposure. Results were the same without these additional controls; thus, we report results only controlling for the number of attitude-consistent articles.

2.3. Results

Repeated-measures ANOVAs were performed. The dependent variables were the proportion of each article type selected and the average time spent on each article type (e.g., attitude-consistent and high social cues). Attitude consistency (consistent vs. discrepant) and social cues (high vs. low) served as within-subjects factors, while browsing device (smartphone vs. computer) served as a between-subjects factor. The direct effects of attitude consistency and social cues, in the overall sample and on smartphones specifically, were examined, and we additionally specified interactions between attitude consistency and browsing device; social cues and browsing device; and attitude consistency, social cues, and browsing device. We report p-values for statistical significance and ω² (omega squared) for practical significance.
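As a simplified illustration of this analytic approach (not the authors' actual analysis scripts, which are available on the OSF page), the sketch below runs a mixed ANOVA in Python with the pingouin package, collapsing over the social-cue factor to test the attitude consistency × browsing device interaction on a small set of made-up values. Note that pingouin reports partial eta squared by default; the ω² values reported here would be computed separately.
```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x consistency cell,
# collapsed over social cues for this simplified illustration.
df = pd.DataFrame({
    "participant":   list(range(1, 9)) * 2,
    "device":        (["smartphone"] * 4 + ["computer"] * 4) * 2,
    "consistency":   ["consistent"] * 8 + ["discrepant"] * 8,
    "prop_selected": [0.9, 0.7, 0.8, 0.6, 0.7, 0.8, 0.6, 0.7,
                      0.4, 0.5, 0.3, 0.6, 0.5, 0.4, 0.6, 0.5],
})

# Mixed ANOVA: attitude consistency (within-subjects) x browsing device (between-subjects).
aov = pg.mixed_anova(data=df, dv="prop_selected", within="consistency",
                     subject="participant", between="device")
print(aov[["Source", "DF1", "DF2", "F", "p-unc"]])
```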

2.3.1. Confirmation Bias

Overall, attitude consistency had a significant impact in terms of selection, F(1,597) = 5.24, p = 0.02, ω² = 0.007, and time spent, F(1,597) = 3.87, p = 0.05, ω² = 0.005, supporting H1. On smartphones specifically, attitude consistency had a marginally significant impact in terms of selection, F(1,274) = 2.89, p = 0.09, ω² = 0.007, but a significant impact in terms of time spent, F(1,274) = 9.22, p = 0.003, ω² = 0.03, providing partial support for H2. Finally, the interaction between attitude consistency and browsing device was significant for time spent, F(1,595) = 4.12, p = 0.04, ω² = 0.005, but not selection, F(1,595) = 0.01, p = 0.93, providing partial support for H3 (see Table 1).

2.3.2. Impact of Social Cues

Overall, social cues marginally affected selective exposure in terms of selection, F(1,597) = 3.71, p = 0.05, ω² = 0.005, but not time spent, F(1,597) = 1.59, p = 0.21, providing partial support for H4. On smartphones specifically, though, the impact of social cues did not manifest (selection: F(1,274) = 2.07, p = 0.15; time spent: F(1,274) = 1.15, p = 0.29); H5 was not supported. Moreover, the interaction between social cues and browsing device was not significant (selection: F(1,595) = 0.02, p = 0.90; time spent: F(1,595) = 0.04, p = 0.84). Regarding RQ1, the impact of social cues appeared similar on smartphones and computers (see Table 2).

2.3.3. Confirmation Bias and Impact of Social Cues

The interaction between attitude consistency, social cues, and browsing device was not significant (selection: F(1,591) = 0.25, p = 0.62; time spent: F(1,591) = 2.02, p = 0.16). Regarding RQ2, the interaction of the confirmation bias and impact of social cues appeared similar on smartphones and computers (see Table 3).

2.4. Summary

Study 1 examined selective exposure to attitude-consistent (vs. discrepant) articles with high (vs. low) social cues on computers and smartphones in a student sample. Overall, in support of H1, participants spent more time on and selected more attitude-consistent articles. Lending support to H2, mobile browsers specifically exhibited the confirmation bias as well, spending significantly more time on and selecting marginally more attitude-consistent content. Critically, mobile browsers exhibited a stronger confirmation bias than computer browsers, in terms of time but not selection, partially supporting H3. In terms of social cues, participants selected marginally more articles with high social cues overall, but did not spend more time on these articles, yielding partial support for H4. However, on smartphones specifically, social cues did not affect selective exposure; H5 was not supported. Regarding RQ1, the impact of social cues was similar across browsing device. Finally, regarding RQ2, the interaction between confirmation bias and impact of social cues did not differ across browsing device.

3. Study 2

3.1. Overview

Another 2 × 2 × 2 online selective exposure experiment was conducted with a general population sample, with attitude consistency (attitude consistent vs. discrepant) and social cues (high vs. low views) as within-subjects factors and browsing device (computer vs. smartphone) as a between-subjects factor. Unlike Study 1, participants were randomly assigned to a browsing device during a pre-test and were then emailed a link to the study with instructions to take it on that device. Again, participants indicated prior to random assignment whether they could access both devices, supporting random assignment. The stimuli, measures, and analyses were identical to Study 1.

3.2. Method

3.2.1. Participants

The sample was recruited in the United States from June to August 2021. A general population sample stratified by age, gender, and education was recruited via Dynata. Dynata recruits participants via e-mail invitations, SMS/text messages, telephone alerts, banners, and messaging on websites and online communities. Incentives were distributed by Dynata based on its standard for study compensation.
After exclusions, our final sample consisted of 156 participants (see Supplementary Materials for full reporting).3 Participants in the final sample were 43.4 years old on average (SD = 16.0). Fifty-four participants identified as male and 93 identified as female. Sixty-two participants had a bachelor’s degree, 44 had a high school degree, 22 had an associate degree, 13 had a master’s degree, 3 had no educational degree, 2 had a doctorate, and 1 had a medical or law degree. Participants in the final sample differed from participants who dropped out in terms of age and gender, but not education; older and male participants were more likely to drop out (see Supplementary Materials).

3.2.2. Procedure

This research was approved by the Institutional Review Board at the first author’s institution in 2021. Participants first completed a pre-test, filling out a consent form and evaluating whether they could access a computer or smartphone, both in general and for the subsequent survey. One week later, participants received an email with a link to the survey, with the instructions that they must complete the survey on the assigned device. Device assignment was randomized for participants who indicated that they could complete the survey on either device. Participants who indicated that they could only complete the survey on one device completed the survey on that device (but were excluded from analyses). The rest of the survey (i.e., baseline questionnaire, selective exposure website, and endpoint questionnaire) proceeded similarly to Study 1, with the exception that the study components were automatically loaded in the same window on the same device where participants (computer: n = 78; smartphone: n = 78) completed previous components. See Supplementary Materials for more information about the procedure.

3.3. Results

3.3.1. Confirmation Bias

Attitude consistency had a significant impact, indicating a confirmation bias overall in terms of selection, F(1,1132) = 15.81, p < 0.001, ω² = 0.01, and time spent, F(1,1132) = 28.84, p < 0.001, ω² = 0.02, supporting H1. On smartphones specifically, the confirmation bias was significant as well, in terms of both selection, F(1,573) = 5.97, p = 0.01, ω² = 0.009, and time spent, F(1,573) = 15.72, p < 0.001, ω² = 0.03, supporting H2. However, the interaction between attitude consistency and browsing device was not significant for selection, F(1,1130) = 0.25, p = 0.62, or time spent, F(1,1130) = 0.12, p = 0.73; H3 was not supported (see Table 4).

3.3.2. Impact of Social Cues

Social cues did not affect selective exposure overall, in terms of selection, F(1,1132) = 0.52, p = 0.47, or time spent, F(1,1132) = 1.85, p = 0.17; H4 was not supported. On smartphones specifically, the impact of social cues did not emerge either, in terms of selection, F(1,573) = 2.19, p = 0.14, or time spent, F(1,573) = 0.94, p = 0.33; H5 was not supported. Moreover, the interaction between social cues and browsing device was not significant for selection, F(1,1130) = 1.69, p = 0.19, or time spent, F(1,1130) = 0.00, p = 0.97. Regarding RQ1, the impact of social cues appeared similar on smartphones and computers (see Table 5).

3.3.3. Confirmation Bias and Impact of Social Cues

The interaction between attitude consistency, social cues, and browsing device was not significant (selection: F(1,1126) = 0.01, p = 0.91; time spent: F(1,1126) = 0.97, p = 0.33). Regarding RQ2, the interaction of the confirmation bias and impact of social cues appeared similar on smartphones and computers (see Table 6).

3.4. Summary

Study 2 attempted to replicate Study 1 by examining selective exposure to attitude-consistent (vs. discrepant) articles with high (vs. low) social cues on computers and smartphones in a general population sample. Overall, in support of H1 and H2, participants overall and mobile browsers specifically spent more time on and selected more attitude-consistent articles. However, mobile browsers did not exhibit a stronger confirmation bias than computer browsers in terms of selection or time spent; H3 was not supported. In terms of social cues, neither H4 nor H5 was supported; participants overall and mobile browsers specifically did not select more articles with high social cues and did not spend more time on these articles. Regarding RQ1, the impact of social cues was similar across browsing device. Regarding RQ2, the interaction between confirmation bias and impact of social cues did not differ across browsing device.

4. Discussion

With the rise of polarization, scholars have interrogated the role of changing news consumption patterns (Bennett and Iyengar 2008). The shift from traditional sources such as newspapers to newer sources such as the Internet may engender more selectivity towards attitude-consistent content (Pearson and Knobloch-Westerwick 2019), and also entails a news environment where social cues proliferate (Porten-Cheé et al. 2018). These online news consumers are increasingly accessing the news from their smartphones rather than their desktop or laptop computers. The small screens and personal nature of mobile media are therefore increasingly germane for selective exposure. Thus, the current studies developed a custom-designed, mobile-compatible news website and deployed novel random assignment techniques in two studies to investigate the impacts of mobile news consumption on the confirmation bias and impact of social cues among both college students and adults in the United States.
The current studies found a confirmation bias overall (H1) and on smartphones specifically (H2). The overall confirmation bias was perhaps unsurprising given the wealth of evidence for it. Yet, to the authors’ knowledge, the current studies represent the first causal evidence that the confirmation bias persists in the mobile domain (beyond behavioral intention; Kim and Lu 2020). This finding manifested for both selection and time as well as among college students and adults, lending confidence to its robustness. It runs counter to propositions that news consumers are less attentive in the mobile domain (Dunaway et al. 2018) or that mobile access reduces ideological self-selection (Yang et al. 2020), as participants identified, selected, and read attitude-consistent content on their smartphones. Instead, it aligns with recent work showing that news consumers exhibit similar attention on computers and smartphones (Ohme et al. 2022a). Although smartphones may be more likely to be used in distracting public settings, smartphone users still select attitude-consistent content, perhaps because they immerse themselves in their devices to avoid distraction (see Bayer et al. 2016; Ohme et al. 2022b).
However, H3 only garnered partial support; mobile browsers displayed greater confirmation bias than computer browsers, but only in terms of time spent (vs. selection) and among college students (vs. the general population). As such, this finding should be treated with caution. The current studies suggested that the confirmation bias would be heightened on smartphones (vs. computers) because their small screens are conducive to effortful processing and their personal nature supports the selection of identity-consistent content. To distinguish between these two explanations, Study 2 additionally captured self-report measures of systematic and heuristic processing (Neuwirth et al. 2002). Although cognitive processing is notoriously difficult to self-report (Kim and Sundar 2016), neither mode of processing was significantly affected by browsing device (see Supplementary Materials for full reporting), which aligns with recent work that leveraged mobile eye-tracking to capture attention (Ohme et al. 2022a). If this null finding generalizes to the student population, the partially significant findings in Study 1 may therefore stem from the personal nature of mobile media, which is heightened among younger users (Ross and Bayer 2021). In addition, the fact that the finding emerged in terms of time spent but not selection suggests that the confirmation bias may only be heightened on mobile devices at higher levels of selective exposure (Mothes and Ohme 2019). The personal nature of the content may emerge more starkly when reading the article instead of its headline and summary. Future work should continue to probe whether and how the confirmation bias on mobile devices is heightened for certain (e.g., younger) populations and at certain (e.g., higher) levels of selective exposure.
Conversely, the current studies found limited evidence for the impact of social cues overall (H4) and on smartphones specifically (H5). The overall impact of social cues only emerged as marginally significant in terms of selection among the student sample; the impact of social cues on smartphones specifically never emerged. Participants appeared to rely on the attitude consistency of news articles rather than the number of social cues. Moreover, the (lack of) impact of social cues was similar across devices (RQ1). This null finding supports the expectation that the small screen and personal nature of mobile media would not affect the impact of social cues. Finally, the interaction between the confirmation bias and the impact of social cues did not differ between smartphones and computers (RQ2). These findings appear to go against prior findings that social cues may override the confirmation bias (Messing and Westwood 2014; Mothes and Ohme 2019). However, these studies operationalized social cues in different ways. In light of these inconsistent operationalizations of social cues (Haim et al. 2018), future research should explore a variety of social cues to systematically understand their impacts.
The current studies have important implications. Comparisons of the confirmation bias and impact of social cues on computers and smartphones suggest continuity in these processes across devices. This aligns with a growing literature on the small or null effects of new media (e.g., Orben and Przybylski 2019). The current findings are important not only to demonstrate that processes such as the confirmation bias remain important in the mobile domain, but also to avoid overstating the influence of new media.
These null findings are particularly credible due to the novel random assignment techniques deployed in the current studies. Lab experiments facilitate random assignment; thus, researchers have typically studied mobile selective exposure (e.g., Ohme et al. 2022a) and differences in mobile news consumption more broadly (e.g., Dunaway et al. 2018) in lab settings. Conversely, online participants typically complete surveys on either smartphones or computers, and differences between these survey-takers could confound results. To mitigate this possibility, in the current studies participants indicated whether they could access a computer or a smartphone in the baseline questionnaire (Study 1) or a pre-test (Study 2), enabling random assignment. Thus, survey-takers who prefer using smartphones or computers were, in expectation, evenly distributed across conditions. This lends robustness to the conclusion that browsing device had a limited impact on selective exposure.
Some limitations must be noted. The study design sought to achieve parity between computer and smartphone conditions while still supporting ecological validity. On the one hand, these design choices created differences between the two conditions that could have been responsible for the observed (lack of) device differences. For instance, all of the articles were immediately visible to computer browsers, whereas smartphone browsers had to scroll down to view all of the content. On the other hand, these design choices may have created artificial parity between mobile and computer news experiences at the expense of ecological validity. In the real world, participants would not see the same eight articles on the same eight issues; instead, they would likely browse algorithmically curated content on different issues. The current studies sought to balance these competing concerns. In general, studies comparing computers and smartphones have considered a wide variety of media content (Dunaway et al. 2018; Dunaway and Soroka 2019; Kim and Sundar 2016; Ohme et al. 2022a), which entails meaningfully different user experiences. Future studies should move beyond device comparisons and study more granular differences that reflect the complexities of the media environment.
There were also limitations to the novel random assignment techniques. First, the studies used different techniques. In Study 1, participants could choose which device they used to complete the baseline and endpoint questionnaires. In Study 2, the device was randomly assigned after the pre-test. Further, participants in Study 1 may have changed devices to browse the selective exposure website, whereas it was automatically loaded for participants in Study 2. Completing self-report measures on different devices and differences in continuity across study components may have been responsible for divergent findings between studies. Second, these approaches were complicated. In Study 1, participants had to open a new tab in a browser (potentially on a different device) to enter a short URL. Participants then had to return to the survey to continue the study. Many participants who completed the study noted that it was straightforward to complete, but the resulting sample was likely skewed towards participants who persevered through the study (who appeared to be older college students). Study 2 partially alleviated this concern, but also exhibited more attrition due to the one-week delay between the pre-test and the survey. Indeed, this attrition was starker among older adults, who tend to have lower levels of digital literacy (Moore and Hancock 2020). Third, to fulfill random assignment, participants had to be able to access both a smartphone and a computer. This limits the representativeness of our final samples; participants needed multiple modes of Internet access, which is associated with higher levels of income and education (Medero et al. 2022). Fourth, self-selection could not be completely curtailed. Although all participants could access both a smartphone and a computer, participants who preferred using their smartphones would have still been more likely to successfully complete the experiment in the smartphone condition (and vice-versa). Yet, short of bringing participants into the lab during a global pandemic, requiring participants to be able to access both devices minimized the risk of self-selection.
These findings pave the way towards future research that further disentangles the cognitive and identity processes that underlie mobile news consumption. Mobile news consumers likely rely on both systematic and heuristic processing. The mixture between the two could depend on individual differences (e.g., need for cognition), but also media environments. In the real world, people can choose how they access the news. A person who uses both their computer and smartphone for news may browse the news systematically on their computers in the morning while heuristically “snack” on the news on their smartphones throughout the day (Molyneux 2018), but someone who relies on only their smartphone may engage with mobile news more systematically. The current studies importantly examine the causal impact of mobile selective exposure, but future research must interrogate how this impact interacts with self-selection. Individual differences and media environments also implicate different identity processes. For example, extraverted mobile-only users may be more responsive to social cues. This may be magnified when participants use their own devices. Participants could be assigned to browse their own smartphones or smartphones provided by researchers to better distinguish between the cognitive processing and identity mechanisms presented in the current studies (Melumad and Pham 2020).
These future research directions could be grounded in established theoretical frameworks. One possibility is the Theory of Interactive Media Effects (TIME), which the authors considered between Study 1 and Study 2 but ultimately only incorporated peripherally. This theory emerged in the tradition of studying technology in tandem with user psychology and connects technological affordances with psychological processes (Sundar et al. 2015). It could thus help connect the novel attributes of mobile media, on the one hand, and the confirmation bias and impact of social cues, on the other. Indeed, it specifies two affordances that map onto the novel attributes of mobile media considered in the current studies: the small screen and the self-as-source perception, where the latter refers to a perception of the self as the source of information obtained from media (Sundar 2007).
However, such an application faces two key obstacles. First, the theory offers conflicting explanations for the impact of small screens. The action pathway of the TIME specifies how affordances impact cognitive processing, predicting that small screens reduce perceptual bandwidth and thus user engagement (e.g., systematic processing). However, the cue pathway of the TIME specifies how affordances impact heuristics and thus cognitive processing, with small screens increasing the realism and being-there heuristics, thus reducing systematic processing. As such, it is unclear whether smaller screens will increase or decrease systematic processing (perhaps resulting in the null difference between cognitive processing on computers and smartphones observed in Study 2). Second, the self-as-source perception does not map neatly onto the personal nature of mobile media. The self-as-source perception is typically understood with reference to customized interfaces (e.g., a news aggregator that a user customizes with their favorite sources; Kang and Sundar 2016). The concept would need to be extended to customized devices (Lee and Sundar 2015), with the expectation that people may perceive themselves as the source of information retrieved from interfaces on such devices. However, an exploratory comparison of the self-as-source perception on computers and smartphones did not yield significance (see Supplementary Materials for full reporting).

5. Conclusions

The current studies leveraged a custom-designed, mobile-compatible news website and novel methods of random device assignment to investigate the unique ramifications of selective exposure in the mobile domain with samples from two populations. The findings exhibited the confirmation bias but not the impact of social cues. Crucially, the confirmation bias, but not the impact of social cues, persisted in the mobile domain and may be somewhat heightened on smartphones compared with computers among college students. Further, the interaction between the confirmation bias and impact of social cues did not differ between browsing devices. Taken together, the current studies implicate the mobile domain as a potential continuer—if not enhancer—of confirmation bias, above and beyond the impact of social cues, with negative ramifications for polarization and democracy.

Supplementary Materials

The Supplementary Materials are available at https://osf.io/2zwdn/.

Author Contributions

Conceptualization, M.Q.R. and S.K.-W.; methodology, M.Q.R. and S.K.-W.; software, S.W. and S.K.-W.; validation, M.Q.R. and J.C.; formal analysis, M.Q.R.; investigation, M.Q.R. and J.C.; resources, S.K.-W.; data curation, M.Q.R. and J.C.; writing—original draft preparation, M.Q.R. and J.C.; writing—review and editing, M.Q.R., J.C. and S.K.-W.; visualization, M.Q.R.; supervision, S.K.-W.; project administration, M.Q.R.; funding acquisition, M.Q.R. and S.K.-W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of The Ohio State University (Study ID: 2020E0345; Date of Approval: 15 April 2020).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

The data that support the findings of this study are openly available at https://osf.io/2zwdn/.

Conflicts of Interest

The authors declare no conflict of interest.

Notes

1
Both studies were preregistered. However, the predictions in this manuscript depart from these preregistrations, due to subsequent literature review and the goal of streamlining predictions across the two studies. Due to these departures from the preregistrations, we treat all analyses as exploratory. Yet, we still share our preregistrations for transparency.
2
Notably, to ensure random assignment, we excluded participants who reported that they could not access both a smartphone and a computer.
3
See note 2 above.

References

  1. Abramowitz, Alan I., and Kyle L. Saunders. 2008. Is polarization a myth? The Journal of Politics 70: 542–55. [Google Scholar] [CrossRef]
  2. Bao, Patti, Jeffrey Pierce, Stephen Whittaker, and Shumin Zhai. 2011. Smart phone use by non-mobile business users. Paper presented at 13th International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI’11), Stockholm, Sweden, August 30–September 2; pp. 445–54. [Google Scholar] [CrossRef]
  3. Bayer, Joseph B., Sonya Dal Cin, Scott W. Campbell, and Elliot Panek. 2016. Consciousness and self-regulation in mobile communication. Human Communication Research 42: 71–97. [Google Scholar] [CrossRef]
  4. Bennett, W. Lance, and Shanto Iyengar. 2008. A new era of minimal effects? The changing foundations of political communication. Journal of Communication 58: 707–31. [Google Scholar] [CrossRef]
  5. Brewer, Marilynn B. 1991. The social self: On being the same and different at the same time. Personality and Social Psychology Bulletin 17: 475–82. [Google Scholar] [CrossRef]
  6. Chae, Minhee, and Jinwoo Kim. 2004. Do size and structure matter to mobile users? An empirical study of the effects of screen size, information structure, and task complexity on user activities with standard web phones. Behaviour & Information Technology 23: 165–81. [Google Scholar] [CrossRef]
  7. Chaiken, Shelly. 1987. The heuristic model of persuasion. In Social Influence: The Ontario Symposium. Edited by Mark P. Zanna, James M. Olson and C. Peter Herman. New York: Psychology Press, vol. 5, pp. 3–39. [Google Scholar]
  8. Claypool, Mark, Phong Le, Makoto Wased, and David Brown. 2001. Implicit interest indicators. Paper presented at 6th International Conference on Intelligent User Interfaces, Santa Fe, NM, USA, January 14–17; pp. 33–40. [Google Scholar] [CrossRef] [Green Version]
  9. Doherty, Carroll, Jocelyn Kiley, and Bridget Jameson. 2016. Partisanship and Political Animosity in 2016. Washington, DC: Pew Research Center. Available online: https://www.pewresearch.org/politics/2016/06/22/partisanship-and-political-animosity-in-2016/ (accessed on 12 October 2020).
  10. Dunaway, Johanna, and Stuart Suroka. 2019. Smartphone-size screens constrain cognitive access to video news stories. Information, Communication & Society 24: 69–84. [Google Scholar] [CrossRef] [Green Version]
  11. Dunaway, Johanna, Kathleen Searles, Mingxiao Sui, and Newly Paul. 2018. News attention in a mobile era. Journal of Computer-Mediated Communication 23: 107–24. [Google Scholar] [CrossRef] [Green Version]
  12. Dvir-Gvirsman, Shira. 2019. I like what I see: Studying the influence of popularity cues on attention allocation and news selection. Information, Communication & Society 22: 286–305. [Google Scholar] [CrossRef]
  13. Eveland, William P. 2003. A “mix of attributes” approach to the study of media effects and new communication technologies. Journal of Communication 53: 395–410. [Google Scholar] [CrossRef]
  14. Festinger, Leon. 1957. A Theory of Cognitive Dissonance. Stanford: Stanford University Press, vol. 2. [Google Scholar]
  15. Ghose, Anindya, Avi Goldfarb, and Sang Pil Han. 2013. How is the mobile Internet different? Search costs and local activities. Information Systems Research 24: 613–31. [Google Scholar] [CrossRef] [Green Version]
  16. Haim, Mario, Anna Sophie Kümpel, and Hans-Bernd Brosius. 2018. Popularity cues in online media: A review of conceptualizations, operationalizations, and general effects. SCM Studies in Communication and Media 7: 186–207. [Google Scholar] [CrossRef]
  17. Iyengar, Shanto, Yphtach Lelkes, Matthew Levendusky, Neil Malhotra, and Sean J. Westwood. 2019. The origins and consequences of affective polarization in the United States. Annual Review of Political Science 22: 129–46. [Google Scholar] [CrossRef]
  18. Jang, S. Mo. 2014. Challenges to selective exposure: Selective seeking and avoidance in a multitasking media environment. Mass Communication and Society 17: 665–88. [Google Scholar] [CrossRef]
  19. Kaiser, Johannes, Tobias R. Keller, and Katharina Kleinen-von Königslöw. 2021. Incidental news exposure on Facebook as a social experience: The influence of recommender and media cues on news selection. Communication Research 48: 77–99. [Google Scholar] [CrossRef] [Green Version]
  20. Kamvar, Maryam, and Shumeet Baluja. 2006. A large scale study of wireless search behavior: Google mobile search. Paper presented at SIGCHI Conference on Human Factors in Computing Systems, Montréal, QC, Canada, April 22–27; pp. 701–9. [Google Scholar] [CrossRef]
  21. Kang, Hyunjin, and S. Shyam Sundar. 2016. When self is the source: Effects of media customization on message processing. Media Psychology 19: 561–88. [Google Scholar] [CrossRef]
  22. Kim, Ki Joon, and S. Shyam Sundar. 2016. Mobile persuasion: Can screen size and presentation mode make a difference to trust? Human Communication Research 42: 45–70. [Google Scholar] [CrossRef]
  23. Kim, Minchul, and Yanqin Lu. 2020. Testing partisan selective exposure in a multidimensional choice context: Evidence from a conjoint experiment. Mass Communication and Society 23: 107–27. [Google Scholar] [CrossRef]
  24. Knobloch-Westerwick, Silvia. 2014. Choice and Preference in Media Use: Advances in Selective Exposure Theory and Research. New York: Routledge. [Google Scholar]
  25. Knobloch-Westerwick, Silvia. 2015. The selective exposure self-and affect-management (SESAM) model: Applications in the realms of race, politics, and health. Communication Research 42: 959–85. [Google Scholar] [CrossRef]
  26. Knobloch-Westerwick, Silvia, Nikhil Sharma, Derek L. Hansen, and Scott Alter. 2005. Impact of popularity indications on readers’ selective exposure to online news. Journal of Broadcasting & Electronic Media 49: 296–313. [Google Scholar] [CrossRef]
  27. Lazarsfeld, Paul F., Bernard Berelson, and Hazel Gaudet. 1944. The People’s Choice. New York: Duell, Sloan & Pearce. [Google Scholar]
  28. Lee, Seoyeon, and S. Shyam Sundar. 2015. Cosmetic customization of mobile phones: Cultural antecedents, psychological correlates. Media Psychology 18: 1–23. [Google Scholar] [CrossRef]
  29. Levendusky, Matthew S. 2013. Why do partisan media polarize viewers? American Journal of Political Science 57: 611–23. [Google Scholar] [CrossRef]
  30. Li, Ruobing, Michail Vafeiadis, Anli Xiao, and Guolan Yang. 2020. The role of corporate credibility and bandwagon cues in sponsored social media advertising. Corporate Communications: An International Journal 25: 495–513. [Google Scholar] [CrossRef]
  31. McKnight, Jessica. 2018. The impact of cues on perceptions and selection of content in online environment. Paper presented at University of Münster 4th Annual Summer School, Münster, Germany, May 29–June 3. [Google Scholar]
  32. Medero, Kristina, Kelly Merrill, and Morgan Quinn Ross. 2022. Modeling access across the digital divide for intersectional groups seeking web-based health information: National survey. Journal of Medical Internet Research 24: e32678. [Google Scholar] [CrossRef] [PubMed]
  33. Melumad, Shiri, and Michel Tuan Pham. 2020. The smartphone as a pacifying technology. Journal of Consumer Research 47: 237–50. [Google Scholar] [CrossRef]
  34. Messing, Solomon, and Sean J. Westwood. 2014. Selective exposure in the age of social media: Endorsements trump partisan source affiliation when selecting news online. Communication Research 41: 1042–63. [Google Scholar] [CrossRef]
  35. Molyneux, Logan. 2015. Civic Engagement in a Mobile Landscape: Testing the Roles of Duration and Frequency in Learning from News. Doctoral thesis, The University of Texas at Austin, Austin, TX, USA. [Google Scholar]
  36. Molyneux, Logan. 2018. Mobile news consumption: A habit of snacking. Digital Journalism 6: 634–50. [Google Scholar] [CrossRef]
  37. Moore, Ryan. C., and Jeffrey T. Hancock. 2020. Older adults, social technologies, and the coronavirus pandemic: Challenges, strengths, and strategies for support. Social Media + Society 6. [Google Scholar] [CrossRef]
  38. Mothes, Cornelia, and Jakob Ohme. 2019. Partisan selective exposure in times of political and technological upheaval: A social media field experiment. Media and Communication 7: 42–53. [Google Scholar] [CrossRef]
  39. Napoli, Philip M., and Jonathan A. Obar. 2014. The emerging mobile Internet underclass: A critique of mobile Internet access. The Information Society 30: 323–34. [Google Scholar] [CrossRef]
  40. Neuwirth, Kurt, Edward Frederick, and Charles Mayo. 2002. Person-effects and heuristic-systematic processing. Communication Research 29: 320–59. [Google Scholar] [CrossRef]
  41. Newport, Frank, and Andrew Dugan. 2017. Partisan Differences Growing on a Number of Issues. Washington, DC: Gallup. Available online: https://news.gallup.com/opinion/polling-matters/215210/partisan-differences-growing-number-issues.aspx (accessed on 12 October 2020).
  42. Ofcom. 2018. Scrolling News: The Changing Face of Online News Consumption. Available online: https://www.ofcom.org.uk/__data/assets/pdf_file/0022/115915/Scrolling-News.pdf (accessed on 5 November 2020).
  43. Ohme, Jakob. 2020. Mobile but not mobilized? Differential gains from mobile news consumption for citizens’ political knowledge and campaign participation. Digital Journalism 8: 103–25. [Google Scholar] [CrossRef] [Green Version]
  44. Ohme, Jakob, and Cornelia Mothes. 2020. What affects first-and second-level selective exposure to journalistic news? A social media online experiment. Journalism Studies 21: 1220–42. [Google Scholar] [CrossRef]
  45. Ohme, Jakob, Ewa Masłowska, and Cornelia Mothes. 2022a. Mobile news learning—Investigating political knowledge gains in a social media newsfeed with mobile eye tracking. Political Communication 39: 339–57. [Google Scholar] [CrossRef]
  46. Ohme, Jakob, Kathleen Searles, and Claes H. de Vreese. 2022b. Information processing on smartphones in public versus private. Journal of Computer-Mediated Communication 27: zmac022. [Google Scholar] [CrossRef]
  47. Orben, Amy, and Andrew K. Przybylski. 2019. The association between adolescent well-being and digital technology use. Nature Human Behaviour 3: 173–82. [Google Scholar] [CrossRef] [Green Version]
  48. Oyserman, Daphna. 2001. Self-concept and identity. In The Blackwell Handbook of Social Psychology: Intraindividual Processes. Edited by Abraham Tesser and Norbert Schwarz. Oxford: Blackwell, pp. 499–517. [Google Scholar] [CrossRef] [Green Version]
  49. Pearson, George David Hooke, and Silvia Knobloch-Westerwick. 2019. Is the confirmation bias bubble larger online? Pre-election confirmation bias in selective exposure to online versus print political information. Mass Communication and Society 22: 466–86. [Google Scholar] [CrossRef]
  50. Porten-Cheé, Pablo, Jörg Haßler, Pablo Jost, Christiane Eilders, and Marcus Maurer. 2018. Popularity cues in online media: Theoretical and methodological perspectives. SCM Studies in Communication and Media 7: 208–30. [Google Scholar] [CrossRef]
  51. Ross, Morgan Quinn, and Joseph B. Bayer. 2021. Explicating self-phones: Dimensions and correlates of smartphone self-extension. Mobile Media & Communication 9: 488–512. [Google Scholar] [CrossRef]
  52. Stroud, Natalie Jomini. 2010. Polarization and partisan selective exposure. Journal of Communication 60: 556–76. [Google Scholar] [CrossRef]
  53. Sundar, S. Shyam. 2007. The MAIN model: A heuristic approach to understanding technology effects on credibility. In Digital Media, Youth, and Credibility. Edited by Miriam J. Metzger and Andrew J. Flanagin. Cambridge: MIT Press, pp. 73–100. [Google Scholar] [CrossRef]
  54. Sundar, S. Shyam, and Clifford Nass. 2001. Conceptualizing sources in online news. Journal of Communication 51: 52–72. [Google Scholar] [CrossRef]
  55. Sundar, S. Shyam, Haiyan Jia, T. Franklin Waddell, and Yan Huang. 2015. Toward a theory of interactive media effects (TIME): Four models for explaining how interface features affect user psychology. In The Handbook of the Psychology of Communication Technology. Edited by S. Shyam Sundar. Chichester: Wiley Blackwell, pp. 47–86. [Google Scholar] [CrossRef]
  56. Sunstein, Cass R. 2009. Going to Extremes: How Like Minds Unite and Divide. New York: Oxford University Press. [Google Scholar]
  57. Walker, Mason. 2019. Americans Favor Mobile Devices over Desktops and Laptops for Getting News. Washington, DC: Pew Research Center. Available online: https://www.pewresearch.org/fact-tank/2019/11/19/americans-favor-mobile-devices-over-desktops-and-laptops-for-getting-news/ (accessed on 12 October 2020).
  58. Westerwick, Axel, Benjamin K. Johnson, and Silvia Knobloch-Westerwick. 2017. Confirmation biases in selective exposure to political online information: Source bias vs. content bias. Communication Monographs 84: 343–64. [Google Scholar] [CrossRef]
  59. Westlund, Oscar. 2015. News consumption in an age of mobile media: Patterns, people, place, and participation. Mobile Media & Communication 3: 151–59. [Google Scholar] [CrossRef]
  60. Wojcieszak, Magdalena, and R. Kelly Garrett. 2018. Social identity, selective exposure, and affective polarization: How priming national identity shapes attitudes toward immigrants via news selection. Human Communication Research 44: 247–73. [Google Scholar] [CrossRef]
  61. Yang, Tian, Sílvia Majó-Vázquez, Rasmus K. Nielsen, and Sandra González-Bailón. 2020. Exposure to news grows less fragmented with increase in mobile access. Proceedings of the National Academy of Sciences of the United States of America 117: 28678–83. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Screenshot of Selective Exposure Website (Computer Condition).
Table 1. Confirmation Bias by Browsing Device Condition (Study 1).

              Proportion of Articles Selected       Average Time Spent per Article (s)
              Consistent        Discrepant          Consistent          Discrepant
Overall       0.44 (0.39) a     0.37 (0.38) c       14.98 (21.57) a     11.63 (20.06) c
Smartphone    0.41 (0.38) a     0.34 (0.37) b       16.04 (21.87) a     8.95 (16.36) c
Computer      0.47 (0.40)       0.40 (0.39)         14.06 (21.32)       13.91 (22.54)

Note. Means with a versus b superscripts differ significantly from each other at p < 0.10. Means with a versus c superscripts differ significantly from each other at p < 0.05. Means are presented with standard deviations in parentheses.
Table 2. Impact of Social Cues by Browsing Device Condition (Study 1).

              Proportion of Articles Selected       Average Time Spent per Article (s)
              High              Low                 High                Low
Overall       0.44 (0.39) a     0.37 (0.38) b       14.40 (22.46)       12.25 (19.16)
Smartphone    0.41 (0.39)       0.34 (0.35)         13.80 (21.15)       11.29 (18.00)
Computer      0.46 (0.40)       0.40 (0.40)         14.91 (23.59)       13.07 (20.12)

Note. Means with a versus b superscripts differ significantly from each other at p < 0.10. Means are presented with standard deviations in parentheses.
Table 3. Confirmation Bias Among Articles with High and Low Social Cues (Study 1).

Proportion of Articles Selected
              High Social Cues                      Low Social Cues
              Consistent        Discrepant          Consistent          Discrepant
Overall       0.47 (0.41)       0.40 (0.38)         0.42 (0.37) a       0.33 (0.38) b
Smartphone    0.43 (0.41)       0.38 (0.37)         0.39 (0.34) a       0.29 (0.36) b
Computer      0.50 (0.41)       0.42 (0.38)         0.44 (0.39)         0.37 (0.33)

Average Time Spent per Article (s)
              High Social Cues                      Low Social Cues
              Consistent        Discrepant          Consistent          Discrepant
Overall       16.29 (24.02)     12.45 (20.64)       13.66 (18.77)       10.82 (19.50)
Smartphone    16.27 (21.51)     11.30 (20.62)       15.81 (22.37) a     6.57 (10.01) c
Computer      16.30 (26.10)     13.46 (20.74)       11.76 (14.75)       14.34 (24.27)

Note. Means with a versus b superscripts differ significantly from each other at p < 0.05. Means with a versus c superscripts differ significantly from each other at p < 0.05. Means are presented with standard deviations in parentheses.
Table 4. Confirmation Bias by Browsing Device Condition (Study 2).

              Proportion of Articles Selected       Average Time Spent per Article (s)
              Consistent        Discrepant          Consistent          Discrepant
Overall       0.45 (0.39) a     0.35 (0.41) c       14.30 (23.14) a     8.13 (14.76) c
Smartphone    0.41 (0.42) a     0.33 (0.41) c       14.31 (23.84) a     7.76 (15.00) c
Computer      0.48 (0.41) a     0.37 (0.41) c       14.29 (22.44) a     8.52 (14.52) c

Note. Means with a versus c superscripts differ significantly from each other at p < 0.05. Means are presented with standard deviations in parentheses.
Table 5. Impact of Social Cues by Browsing Device Condition (Study 2).

              Proportion of Articles Selected       Average Time Spent per Article (s)
              High              Low                 High                Low
Overall       0.39 (0.42)       0.41 (0.42)         10.39 (18.72)       11.97 (20.43)
Smartphone    0.35 (0.41)       0.40 (0.42)         10.14 (21.27)       11.77 (18.82)
Computer      0.43 (0.42)       0.42 (0.41)         10.65 (15.71)       12.18 (22.00)

Note. Means are presented with standard deviations in parentheses.
Table 6. Confirmation Bias Among Articles with High and Low Social Cues (Study 2).

Proportion of Articles Selected
              High Social Cues                      Low Social Cues
              Consistent        Discrepant          Consistent          Discrepant
Overall       0.45 (0.42) a     0.33 (0.40) c       0.45 (0.41) a       0.37 (0.42) c
Smartphone    0.40 (0.42) a     0.29 (0.40) c       0.43 (0.42)         0.37 (0.43)
Computer      0.50 (0.43) a     0.37 (0.41) c       0.46 (0.40) a       0.37 (0.42) c

Average Time Spent per Article (s)
              High Social Cues                      Low Social Cues
              Consistent        Discrepant          Consistent          Discrepant
Overall       13.64 (22.92) a   7.30 (12.84) c      14.95 (23.37) a     8.98 (16.48) c
Smartphone    14.17 (27.34) a   6.31 (12.01) c      14.45 (19.90) a     9.19 (17.40) c
Computer      13.10 (17.36) a   8.30 (13.61) c      15.44 (26.43) a     8.75 (15.47) c

Note. Means with a versus c superscripts differ significantly from each other at p < 0.05. Means are presented with standard deviations in parentheses.
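For readers interested in how the within-row contrasts in the tables above (e.g., consistent vs. discrepant selection proportions by device condition) could be computed from per-participant data, the following is a minimal sketch. It is not the authors' analysis code: the data layout, the simulated per-participant proportions, and the use of a paired t-test are illustrative assumptions only.

```python
# Minimal illustrative sketch (not the authors' analysis code).
# Shows how a consistent-vs.-discrepant selection contrast, split by
# device condition, could be computed from per-participant data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n = 80  # hypothetical number of participants per device condition

# Hypothetical data frame: one row per participant, with the proportion
# of attitude-consistent and attitude-discrepant articles selected.
df = pd.DataFrame({
    "device": ["smartphone"] * n + ["computer"] * n,
    "consistent_prop": rng.beta(2, 3, size=2 * n),
    "discrepant_prop": rng.beta(2, 4, size=2 * n),
})

# Paired comparison of consistent vs. discrepant selection within each
# device condition, reported as mean (SD) plus the test statistic.
for device, g in df.groupby("device"):
    res = stats.ttest_rel(g["consistent_prop"], g["discrepant_prop"])
    print(
        f"{device}: consistent M={g['consistent_prop'].mean():.2f} "
        f"(SD={g['consistent_prop'].std():.2f}), "
        f"discrepant M={g['discrepant_prop'].mean():.2f} "
        f"(SD={g['discrepant_prop'].std():.2f}), "
        f"t={res.statistic:.2f}, p={res.pvalue:.3f}"
    )
```

Time-spent contrasts could be examined analogously by substituting per-article reading times for the selection proportions.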