Article

AI-Driven Privacy Trade-Offs in Digital News Content: Consumer Perception of Personalized Advertising and Dynamic Paywall

IT Management, Hanshin University, Osan 18101, Republic of Korea
Journal. Media 2025, 6(4), 170; https://doi.org/10.3390/journalmedia6040170
Submission received: 29 July 2025 / Revised: 24 September 2025 / Accepted: 29 September 2025 / Published: 6 October 2025

Abstract

As digital media companies pursue sustainable revenue, AI-based strategies like personalized advertising and dynamic paywalls have become prevalent. These monetization models involve different forms of consumer data collection, raising distinct privacy concerns. This study investigates how digital news users perceive privacy trade-offs between these two AI-driven models. Based on Communication Privacy Management Theory and Privacy Calculus Theory, we conducted a survey of 336 Korean news consumers. Findings indicate that perceived control and risk significantly affect users’ willingness to disclose data. Moreover, users with different privacy orientations prefer different monetization models. Those favoring dynamic paywalls tend to be more privacy-sensitive and show a higher willingness to pay for personalized, ad-free content. While personalization benefits are broadly acknowledged, the effectiveness of privacy control mechanisms remains limited. These insights highlight the importance of ethical, user-centered AI monetization strategies in journalism and contribute to theoretical discussions around algorithmic personalization and digital news consumption.

1. Introduction

The transition to digital media has radically changed news consumption: digital platforms and algorithmic curation are now central while traditional outlets decline. This transformation raises questions about the sustainability of digital news businesses and the ethics of personal data collection and algorithmic governance. As AI reorganizes the journalism industry's business models, personalized advertising and dynamic paywall strategies are rapidly emerging. Hong et al. (2023) and Davoudi et al. (2018) explained that combining AI-based recommendation systems with subscription systems compensates for the limitations of existing free models. In particular, major media companies in the United States and Europe report that exposure to customized advertisements improves the consumer experience, but excessive data collection provokes user resistance. ZareRavasan (2023) additionally showed that dynamic pricing promotes digital innovation and agility. Notably, news platforms increasingly use AI to collect, process, and monetize user data through various models (Hong et al., 2023). However, the focus on sensationalism and click-driven content often overshadows journalistic transparency, fairness, and accountability, undermining the media’s democratic function. A dual structure has emerged: daily issues and breaking news drive profits, while reporters express fatigue and even exit the industry.
Wang et al. (2023) present strategies for optimizing free-content allowances and subscription pricing in light of differences in content consumption saturation and advertising rates among consumers, revealing the effects of dynamic elements beyond simple fixed subscription models or static paywalls. Chae et al. (2022) emphasize the importance of flexible paywall operations and adaptive strategies by analyzing the positive impact of temporarily suspending the paywall on subsequent subscription conversions, along with the role of moderators tied to content consumption patterns in shaping this effect. Xu et al. (2025) empirically explore how the various paywall strategies used by news sites (promotions, trials, price discounts, etc.) differ in their effects on the subscription initiation and completion process, using behavioral data to show the detailed impact of user experience and price design on subscriber acquisition. In addition, Morel et al. (2025) reveal how users perceive fairness, data usage, and transparency in ‘pay-or-ok’ models based on cookie tracking, suggesting that these factors significantly affect the acceptance of dynamic price discrimination and personalized paywalls. These studies connect directly to the questions of privacy acceptance under AI-based price discrimination and user segmentation addressed here. This paper builds on them by empirically identifying how such strategies are accepted by users, particularly in the Korean media environment, and what the associated privacy trade-offs look like.
When digital media business models are uncertain, paying consumers become crucial in shaping editorial direction (Hong et al., 2023). Online advertising began in 1994, when HotWired sold a banner ad to AT&T based on “impressions”, the number of people who viewed the ad (Medoff, 2001). This model, known as “CPM” (cost per thousand impressions), dominated until Procter & Gamble’s 1996 deal with Yahoo introduced “CPC” (cost per click), where payment occurred only when users clicked ads (Evans, 2009). By 2008, many display ads were still sold on a CPM basis, but CPC had become increasingly important for direct response.
The rise of AI in journalism has transformed not only content personalization but also business models. Personalized advertising and dynamic paywalls use algorithms to monetize user engagement and behavioral data, reflecting a broader commodification of audience attention and data. However, journalism research has rarely systematically examined how these monetization strategies intersect with user perceptions of fairness, agency, and value trade-offs in data collection.
The explosion of web pages in the 1990s led to search engines selling advertising to generate revenue. Initially, search engines used CPM for banner ads, but this created tension between helping users find information quickly and maximizing ad views. The shift to CPC, pioneered by GoTo.com (later acquired by Yahoo), introduced innovations like auction-based ad placement and pricing, which became industry standards.
Traditional paywalls, based on the number of articles read, often fail to boost revenue and may deter potential subscribers (Davoudi et al., 2018). As technological obstacles—such as ad blockers, new cookie policies, and limited tracking—challenge ad-supported models, news providers must develop more effective paywall mechanisms to build sustainable customer relationships. The need for reliable revenue streams is urgent as free content models face growing limitations.
As AI mediates decisions in media, finance, and healthcare, ethical concerns about algorithmic transparency and fairness are mounting (Eslami et al., 2018; Sandvig et al., 2014). The opacity of personalization algorithms raises questions about consumer understanding and trust, fueling debates over algorithmic governance and the ethics of data-driven personalization (Nemitz, 2018; Zuboff, 2019). Mittelstadt et al. (2016) highlight the lack of transparency, accountability, and fairness in AI systems, especially regarding personal data processing. These issues underscore the importance of understanding how consumers perceive AI-driven privacy collection in news environments.
Acquisti et al. (2015) argue that consumer privacy decisions are influenced by cognitive biases and incomplete information rather than rational trade-offs. This suggests that consumer responses to AI-driven data collection may differ from idealized models, emphasizing the need for empirical validation. Lee et al. (2024) propose a taxonomy of AI-driven privacy risks to guide human-centered AI design, aiming to equip practitioners with tools and a shared language for prioritizing user privacy. This study builds on the privacy paradox, where individuals express strong privacy concerns yet disclose data when offered personalization benefits (Awad & Krishnan, 2006). The paradox is especially relevant for news consumers navigating AI-driven personalization: as personalization shapes information environments, understanding user perceptions and ensuring ethical design are critical for building trust and effective governance in the digital news ecosystem.
Existing studies rarely compare AI-based advertising models and paid subscription models directly and simultaneously, and empirical studies are scarce, especially in the context of Korean news consumers. The direct comparison of AI-driven personalized advertising and dynamic paywalls undertaken here, and the exploration of their acceptance, therefore offer originality as well as practical and theoretical contributions.
Table 1 outlines the conceptual basis for comparing two AI-driven revenue models.

2. Research Objective

This study aims to measure consumers’ value perception of AI-driven privacy collection applied to two emerging revenue models in the digital news ecosystem: personalized advertising models and dynamic paywalls. This study aims to empirically verify how media consumers’ perception of privacy value differs according to the two models, help Korean media companies establish a business model using AI-driven privacy technology, and present strategic directions for AI-driven privacy collection. This study is conducted based on the following research questions. Criado and Such (2015) argue that consumers’ privacy expectations are often shaped by contextual integrity rather than binary notions of data exposure. This highlights the importance of examining how personalized news services collect data within socially acceptable boundaries. ZareRavasan (2023) emphasizes that organizational agility is critical when leveraging big data analytics for innovation, which provides a foundation for understanding why media firms are experimenting with AI-driven personalized advertising and paywall models.
Consumers’ willingness to share personal data online is shaped by their evaluation of perceived risk and benefit, a core tenet of Privacy Calculus Theory (Li, 2012). Martin and Murphy (2017) found that consumer trust plays a mediating role in the acceptance of data collection for marketing purposes. Their findings suggest that trust in how personal data is handled may influence consumers’ willingness to disclose information, especially when benefits are expected in return. Individuals with strong propensity to value privacy tend to perceive higher risks and greater concerns when interacting with AI systems. However, the perception of control over their data can mitigate these concerns (Belanche et al., 2021). Moreover, when consumers see clear benefits, they may choose to disclose data despite privacy worries (Sutanto et al., 2013).
Communication Privacy Management Theory explains that individuals create and manage boundaries to protect private information. These boundaries are negotiated based on context, perceived risks, and trust. Consumers adjust these boundaries when interacting with AI systems that request or infer personal information for content delivery.
Recent studies have also highlighted the personalization–privacy paradox, wherein consumers express concern about data misuse but continue to disclose personal information when they expect tangible benefits (Awad & Krishnan, 2006; Taddicken, 2014).
By drawing on Privacy Calculus Theory and Communication Privacy Management Theory, and aligning with recent work on algorithmic trust and fairness (Shin, 2021), this research explores how consumers weigh benefits and risks when deciding whether to disclose personal data in personalized news environments. Given these dynamics, this study seeks to analyze the perceived value and privacy concerns associated with each AI-driven model.

2.1. Research Questions

Based on the theoretical frameworks and prior studies, the following research questions are posed:
  • RQ1. How does propensity to value privacy affect perceived AI privacy risk, control, and concern?
  • RQ2. How do perceived risk and perceived control affect privacy concern?
  • RQ3. How do privacy concerns affect consumers’ intention to disclose personal information?
  • RQ4. How does perceived benefit influence consumers’ intention to disclose personal information?
Recent studies suggest that consumers’ willingness to share personal information is shaped by a perceived trade-off between personalization benefits and privacy risks, often conceptualized through the privacy calculus framework (Dinev & Hart, 2006). This research builds upon such perspectives by comparing two AI-driven data collection mechanisms, personalized advertising and dynamic paywalls, in the context of digital news. It analyzes the difference in consumers’ value perceptions of these two AI-driven revenue models, in which data collection methods are applied differently, and determines their impact on the acceptance of privacy-related technology in the media industry.

2.2. Hypotheses Development

Privacy Calculus Theory suggests that individuals weigh perceived risks against expected benefits when deciding whether to disclose personal information (Li, 2012; Dinev & Hart, 2006). The theory has recently proven particularly useful in the context of news consumption, where empirical studies (Shin, 2021; Sutanto et al., 2013; Awad & Krishnan, 2006) of the intention to provide data when subscribing to news explain behavior that is drawn to customized information and benefits despite the burden on privacy.
Hypotheses H1, H4, and H7 were designed on the basis of this theory. Those with a higher propensity to value privacy are expected to perceive more risk in AI-driven data collection (H1) and less control over it (H2), while also demonstrating stronger privacy concerns (H3).
CPM theory (Petronio, 2002) explains that users dynamically manage personal information boundaries. Beyond person-to-person relationships, it has been actively applied in recent years to algorithm-based media settings such as news platforms and social media (Kim & Kim, 2017). In particular, the absence of a sense of control over one’s data amplifies anxiety and concern (the theoretical basis of H2, H3, and H5), and transparency emerges as an important variable (Belanche et al., 2021). Privacy in digital communication is often governed by CPM principles, which emphasize shared ownership and negotiated boundaries of private data, a perspective critical to understanding user attitudes toward AI-driven personalization (Petronio, 2002).
The ‘personalization-privacy paradox’ proposed by Awad and Krishnan (2006) has been widely demonstrated in mobile and AI environments in the 2020s (Sutanto et al., 2013; Taddicken, 2014). The privacy calculus is further complicated by phenomena such as the privacy paradox and context collapse, in which social media users negotiate privacy across overlapping and unstable contexts (Masur & Ranzini, 2025). This study likewise addresses the tendency to provide data when expecting benefits despite high concerns. Recently, Acquisti et al. (2015) pointed out that cognitive biases and information asymmetry hinder rational choices, and Lee et al. (2024) systematized the various types of AI privacy risk.
Perceived risk is a known driver of privacy concern (Awad & Krishnan, 2006), whereas perceived control mitigates it (Belanche et al., 2021). Privacy practices vary situationally, shaped by platform-specific affordances that mediate both control and disclosure (Möller, 2024; Trepte et al., 2020). In turn, privacy concern is expected to negatively influence disclosure intention (H6), whereas perceived benefit works positively (Awad & Krishnan, 2006).
H1. 
Privacy value propensity positively affects perceived AI privacy risk.
H2. 
Privacy value propensity negatively affects perceived AI privacy control.
H3. 
Privacy value propensity positively affects AI privacy concern.
H4. 
Perceived AI privacy risk positively affects AI privacy concern.
H5. 
Perceived AI privacy control negatively affects AI privacy concern.
H6. 
AI privacy concern negatively affects the intention to disclose personal information.
H7. 
Perceived benefit positively affects the intention to disclose personal information.

3. Methods

3.1. Theoretical Frameworks

To examine user responses to AI-driven personalization in journalism, this study applies Privacy Calculus Theory (Dinev & Hart, 2006; Li, 2012) and Communication Privacy Management Theory (Petronio, 2002). These frameworks, while originally developed for contexts such as e-commerce and interpersonal communication, are increasingly relevant to journalism, where users face involuntary data exchanges under algorithmic systems. In the context of news consumption, privacy boundaries are not merely chosen but often passively crossed, making boundary turbulence (Petronio, 2002) and perceived benefit-risk trade-offs (Awad & Krishnan, 2006) central to understanding disclosure behavior.
Privacy Calculus Theory helps explain this contradictory human behavior: internet users provide personal information despite concerns about privacy infringement when they judge that the benefits of disclosure outweigh possible future losses. This behavior reflects a calculative weighing of privacy risks against benefits (Haynes & Robinson, 2023). Communication Privacy Management (CPM) theory, which grew out of Communication Boundary Management (CBM) theory, explains an individual’s cognitive process of deciding whether to disclose personal information in the course of forming interpersonal relationships (Kim & Kim, 2017). CPM theory as developed by Petronio (2002) explains how open or closed the boundary with another party is set in determining the scope of personal information disclosure.
Privacy Calculus Theory suggests that individuals evaluate the risks and benefits before disclosing personal information. In journalism, this calculus becomes complex as users navigate opaque data practices and implicit personalization algorithms. Communication Privacy Management Theory, on the other hand, emphasizes the dynamic negotiation of privacy boundaries, especially in contexts where information disclosure is indirectly demanded, such as in ad personalization. This study applies these frameworks to understand how readers interpret AI-based personalization in news services—not as binary choices, but as situated, strategic decisions under informational asymmetry.
Recent empirical studies have shown that users often perceive algorithmic recommendation systems as opaque and reductive, potentially undermining their sense of agency and fairness (Binns et al., 2018). Such perceptions may influence not only users’ trust but also their willingness to share personal data with AI systems, which directly informs several constructs of this study such as perceived control and privacy concern.

3.2. Research Model

Table 2 presents the core variables employed in this study, detailing their conceptual and operational definitions alongside the key theoretical and empirical references from which they are derived. Each variable is grounded in established frameworks such as Privacy Calculus Theory and Communication Privacy Management Theory, ensuring both conceptual clarity and analytical relevance. The definitions have been adapted from validated measurement instruments in prior studies to fit the AI-driven journalism context, thereby maintaining both reliability and validity. This structured presentation enables clear linkage between theoretical constructs and hypotheses, facilitating replicability in future research.
Figure 1 conceptually illustrates the research model designed to analyze the influence of AI-driven personal data collection on news consumers’ perceptions and behaviors. Grounded in Privacy Calculus Theory and Communication Privacy Management (CPM) Theory, the model depicts the hypothesized relationships among key variables, forming the basis for empirical hypothesis testing. The variable Privacy Value Propensity reflects users’ general attitudes toward personal data collection and is hypothesized (H1–H3) to affect their Perceived AI Privacy Risk, Perceived AI Privacy Control, and AI Privacy Concern. Perceived AI Privacy Risk, which captures users’ awareness of potential threats posed by AI data collection, is examined (H4) for its effect on AI Privacy Concern. Meanwhile, Perceived AI Privacy Control, or users’ belief in their ability to manage how AI uses their data, is analyzed (H5) for its influence on AI Privacy Concern. The AI Privacy Concern variable encapsulates users’ anxiety about the use of their personal data and is tested (H6) for its impact on their Intention to Disclose AI Privacy, which represents users’ willingness to share data for AI-driven news and advertising services. Additionally, the model explores (H7) whether users’ Perceived Benefits of AI Privacy, such as personalized content or ad advantages, enhance their intention to disclose information.
Table 3 lists the measurement items used to assess factors affecting users’ intention to disclose personal information to AI.

3.3. Sampling

The survey was conducted among Korean online news consumers between 1 December and 24 December 2024. A total of 356 responses were collected through an online questionnaire, and after excluding 20 incomplete cases, 336 valid responses were analyzed. The sampling method was based on convenience sampling, but demographic diversity was ensured by including respondents across different age groups, occupations, and genders. Specifically, the final sample consisted of 153 males and 183 females; 74 respondents in their 20s, 112 in their 30s, 136 in their 40s, and 14 teenagers. In terms of occupation, 264 were office workers, 34 were students, 32 were self-employed, and 6 were unemployed. This distribution reflects a broad coverage of typical Korean digital news users.

3.4. Survey Development

The survey items were adapted from validated scales used in prior research on Privacy Calculus Theory and Communication Privacy Management Theory (e.g., Awad & Krishnan, 2006; Li, 2012; Petronio, 2002). Minor wording modifications were made to contextualize the items for AI-driven journalism and digital news services. The questionnaire was first reviewed by two domain experts in information systems and journalism to ensure clarity and face validity. A pilot test with 20 respondents was conducted to confirm that all questions were easily understood, and no major revisions were required. The final survey consisted of 18 items across six constructs, measured on a five-point Likert scale (1 = strongly disagree, 5 = strongly agree).

3.5. Validation of Survey Responses

The reliability and validity of the measurement instruments were assessed before conducting regression analyses. Cronbach’s alpha coefficients for all constructs exceeded the threshold value of 0.70, indicating acceptable internal consistency (e.g., Perceived Benefits of AI Privacy = 0.826). Exploratory factor analysis (EFA) was conducted, and six factors were extracted corresponding to the theoretical constructs (Privacy Value Propensity, Perceived AI Privacy Risk, Perceived AI Privacy Control, AI Privacy Concern, Perceived Benefits, and Intention to Disclose). The Kaiser-Meyer-Olkin (KMO) measure was 0.671, above the recommended threshold of 0.6, and Bartlett’s Test of Sphericity was significant (χ2 = 2851.898, df = 153, p < 0.001), confirming the adequacy of the data for factor analysis. These results demonstrate that the measurement model exhibits satisfactory reliability and construct validity.
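The internal-consistency check described above can be reproduced with a short script. The following is a minimal sketch using hypothetical Likert responses (the `cronbach_alpha` helper and the simulated data are illustrative, not the study's actual instrument):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses for a 3-item construct:
# each item tracks a shared latent attitude plus small item-level noise.
rng = np.random.default_rng(42)
latent = rng.integers(1, 6, size=336)
items = np.clip(latent[:, None] + rng.integers(-1, 2, size=(336, 3)), 1, 5)

alpha = cronbach_alpha(items.astype(float))  # high here, since items share a latent component
```

Values above the conventional 0.70 threshold, as reported for all constructs in this study, indicate acceptable internal consistency.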

4. Analyses and Results

A total of 356 quantitative surveys were conducted from 1 to 24 December 2024, and the data were analyzed using SPSS 21. Twenty responses with missing values were excluded, leaving 336 for analysis. Of the 336 respondents, 153 were men and 183 were women. By age, 74 respondents were in their 20s, 112 in their 30s, 136 in their 40s, and 14 were teenagers. By occupation, 264 were office workers, 34 were students, 32 were self-employed, and 6 were unemployed. Regarding preferences for online news content, 240 respondents preferred personalized advertising and 96 preferred a customized paid subscription service. In addition, 268 respondents read news through portal services and 68 through social media. This study employed a convenience sampling method to recruit the 336 Korean digital news consumers. While this sample provides meaningful insights into user perceptions, the demographic distribution is skewed toward urban white-collar workers aged 30 to 59, with underrepresentation of adolescents and older adults above 60. Table 4 shows the demographic characteristics of survey respondents. To improve external validity, future research would benefit from quota sampling techniques that ensure balanced representation across key demographics such as age, gender, occupation, and region. Furthermore, a post hoc power analysis based on the main regression models confirmed that the sample size is sufficient to detect medium effect sizes (Cohen’s f2 ≥ 0.15) with power above 0.80 at a significance level of 0.05. Nonetheless, greater sample diversity would improve confidence in extrapolating the findings to the entire Korean digital news consumer population.
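The medium-effect threshold cited above can be made concrete: Cohen's f² is computed from a regression model's R², so f² = 0.15 corresponds to an R² of roughly 0.13. A minimal sketch (the function name is ours):

```python
def cohens_f2(r_squared: float) -> float:
    """Cohen's f^2 effect size for a regression model: f2 = R2 / (1 - R2)."""
    return r_squared / (1.0 - r_squared)

# An R^2 of about 0.13 already clears the medium-effect threshold (f2 >= 0.15).
print(round(cohens_f2(0.13), 3))  # 0.149
```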
This study employed simple regression analysis and multiple regression analysis to empirically validate the proposed research model. These methods were used to statistically examine the causal relationships among independent variables (IV), dependent variables (DV), and mediating variables (MV). First, simple regression analysis, which analyzes the relationship between a single independent variable and a dependent variable, was applied to examine the effect of users’ Privacy Value Propensity on Perceived AI Privacy Risk. The regression model was specified as follows:
Y = β0 + β1X + ϵ
where Y represents the dependent variable (Perceived AI Privacy Risk), X is the independent variable (Privacy Value Propensity), β0 is the intercept, β1 is the regression coefficient, and ϵ is the error term.
Next, multiple regression analysis, which assesses the impact of two or more independent variables on a dependent variable, was employed to test the core causal pathways of the research model. The first multiple regression model examined the effects of Perceived AI Privacy Risk and Perceived AI Privacy Control on AI Privacy Concern, specified as:
Y = β0 + β1X1 + β2X2 + ϵ
where Y denotes AI Privacy Concern, X1 is Perceived AI Privacy Risk, and X2 is Perceived AI Privacy Control.
The second multiple regression model assessed the influence of AI Privacy Concern, Perceived AI Privacy Risk, and Perceived Benefits of AI Privacy on Intention to Disclose AI Privacy, with the regression equation as follows:
Y = β0 + β1X1 + β2X2 + β3X3 + ϵ
Here, Y refers to Intention to Disclose AI Privacy, X1 is AI Privacy Concern, X2 is Perceived AI Privacy Risk, and X3 is Perceived Benefits of AI Privacy.
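As an illustration of how the third model is estimated, the sketch below fits it by ordinary least squares on simulated data with known coefficients; all variable values and coefficients are hypothetical, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 336  # matching the study's sample size

# Hypothetical predictors on a 5-point scale (X1, X2, X3 in the equation above)
concern = rng.normal(3.0, 0.8, n)   # X1: AI Privacy Concern
risk = rng.normal(3.5, 0.7, n)      # X2: Perceived AI Privacy Risk
benefit = rng.normal(3.2, 0.9, n)   # X3: Perceived Benefits of AI Privacy

# Simulate disclosure intention Y with known true coefficients plus noise
y = 1.0 - 0.3 * concern + 0.1 * risk + 0.5 * benefit + rng.normal(0, 0.4, n)

# OLS estimate of [b0, b1, b2, b3] via least squares on the design matrix
X = np.column_stack([np.ones(n), concern, risk, benefit])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta recovers roughly [1.0, -0.3, 0.1, 0.5]
```

With n = 336 and modest noise, the estimates land close to the true coefficients, which is the logic behind interpreting the signs of the fitted betas in the hypothesis tests below.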

4.1. Descriptive Statistics: Preference for Personalized Advertising

Of the 240 respondents who preferred personalized advertising, 136 preferred ‘viewing online news for free’. Among those willing to pay a monthly subscription fee for news content, 56 would pay 1000 won, 16 would pay 5000 won, 16 would pay 10,000 won, and 16 would pay more than 30,000 won. Regarding attention to advertisements, 72 respondents ‘pay a lot of attention to whether advertisements appear’, 88 ‘pay a moderate amount of attention’, 40 ‘pay some attention’, 24 ‘pay little attention’, and 16 answered ‘I don’t care at all whether advertisements appear or not’. A total of 184 respondents ‘consume news content through major news headlines presented on portal services such as Naver’; 40 read news through social media such as Facebook, and 8 through other channels. Of these respondents, 184 consumed online news content almost every day, and 48 consumed news 2–3 times a week.

4.2. Descriptive Statistics: Preference for Dynamic Paywall

Of the 96 respondents who preferred the dynamic paywall, 40 were willing to pay 1000 won per month as a subscription fee for news content, 32 would pay 5000 won, 16 would pay 10,000 won, and 8 would pay more than 30,000 won. Unlike the group preferring personalized advertising, no respondents in this group preferred ‘viewing online news for free’. Regarding attention to advertisements, 40 respondents ‘pay a lot of attention to whether advertisements appear’, 32 ‘pay a moderate amount of attention’, 16 ‘pay some attention’, and 8 ‘pay little attention’. No respondent answered, ‘I don’t care at all whether the advertisement appears or not’. A total of 56 respondents ‘consume news content through major news headlines presented on portal sites such as Naver’; 24 read news through social media such as Facebook, and 16 through other channels. Of these respondents, 72 consumed online news content almost every day, and 24 consumed news 2–3 times a week.
Table 5 presents a univariate normality test for the measured variables using skewness and kurtosis values. These values help assess whether the data follows a normal distribution, which is important for parametric statistical analysis.
Skewness measures the symmetry of a distribution; values close to 0 indicate a symmetric distribution. Negative values (e.g., H2.1 = −1.279) indicate left-skewed data, meaning responses are concentrated on higher values, while positive values indicate right-skewed data, with responses concentrated on lower values.
Kurtosis measures the peakedness of a distribution; excess kurtosis close to 0 indicates a normal distribution. Negative values (e.g., H1.1 = −0.809) indicate a flatter distribution, while positive values (e.g., H2.1 = 1.239) indicate a more peaked distribution, meaning responses are concentrated around the mean.
Most variables have skewness and kurtosis values within an acceptable range (±2), indicating an approximately normal distribution. H2.1 (Perceived AI Privacy Risk) shows a high negative skewness (−1.279) and positive kurtosis (1.239), suggesting that respondents perceive AI privacy risks as generally high. H2.3 and H6.3 show relatively lower kurtosis, meaning responses are more evenly spread out. These results suggest that while most variables follow a normal distribution, a few are slightly skewed, which might require transformation if strict normality is needed.
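These descriptive checks are straightforward to compute. The sketch below reproduces the kind of left-skewed Likert pattern described for H2.1 using hypothetical response counts (not the study's raw data) and `scipy.stats`:

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Hypothetical 336 Likert responses concentrated on agreement (4-5),
# producing negative skewness and positive excess kurtosis.
responses = np.repeat([1, 2, 3, 4, 5], [5, 10, 40, 120, 161])

sk = skew(responses)      # negative -> left-skewed
ku = kurtosis(responses)  # Fisher definition: 0 for a normal distribution

# Rule of thumb used in the text: |skewness| and |kurtosis| within +/-2
acceptable = abs(sk) <= 2 and abs(ku) <= 2
```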
Table 6 presents the results of the Exploratory Factor Analysis (EFA), which is used to identify underlying constructs among multiple variables. A total of six factors were extracted, and the factor loadings indicate the correlation between each variable and the respective factor. Factor loadings above 0.6 suggest a strong association between the variable and the corresponding factor, and items with high loadings on the same factor are considered to measure the same underlying construct.
  • H5 (Perceived Benefits of AI Privacy): items H5.1, H5.2, and H5.3 exhibit strong factor loadings (>0.9), indicating that they represent a single construct (users’ perceived benefits from AI-based privacy mechanisms).
  • H2 (Perceived AI Privacy Risk): items H2.1, H2.2, and H2.3 load onto this factor, confirming that they measure concerns related to AI privacy risks.
  • H6 (Intention to Disclose AI Privacy): items H6.1, H6.2, and H6.3 form a distinct factor, suggesting that users’ willingness to disclose personal information to AI constitutes a separate construct.
  • H3 (Perceived AI Privacy Control): items H3.1, H3.2, and H3.3 cluster together, indicating that users perceive AI privacy control as an independent concept.
  • H1 (Disposition to Value Privacy): items H1.1, H1.2, and H1.3 load onto the same factor, suggesting that individuals who highly value privacy perceive AI privacy risks differently from those who do not.
  • H4 (AI Privacy Concerns): items H4.1, H4.2, and H4.3 load together, indicating that AI privacy concerns are a distinct issue influencing user attitudes.
The results confirm that the six theoretical constructs (Privacy Value Propensity, AI Privacy Risk, AI Privacy Control, AI Privacy Concerns, Perceived Benefits, and Intention to Disclose) are empirically distinct.
The high factor loadings suggest good construct validity: the measured items effectively represent the theoretical constructs.
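To make the loading logic concrete, here is a minimal numpy-only sketch of extracting (unrotated) factor loadings from a correlation matrix via eigendecomposition, an approximation of the EFA summarized above. All data are simulated, not the study's: three items share one latent construct and three are pure noise.

```python
# Principal-axis style loading extraction: eigen-decompose the item
# correlation matrix; loading = eigenvector * sqrt(eigenvalue), so each
# entry is the correlation between an item and a factor.
import numpy as np

rng = np.random.default_rng(0)
n = 336
factor = rng.normal(size=n)                       # one latent construct
items = np.column_stack(
    [factor + 0.4 * rng.normal(size=n) for _ in range(3)]   # construct items
    + [rng.normal(size=n) for _ in range(3)]                # unrelated items
)

R = np.corrcoef(items, rowvar=False)              # 6x6 item correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]                 # sort factors by eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

loadings = eigvecs * np.sqrt(eigvals)             # item-factor correlations
print("first-factor loadings:", np.round(loadings[:, 0], 2))
```

The three construct items load strongly (well above the 0.6 threshold used in the text, up to an arbitrary sign flip) on the first factor, while the noise items do not; a rotated EFA in dedicated software would refine, not change, this picture.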
The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was 0.671. Since values above 0.6 are generally considered adequate, the sample is suitable for factor analysis. Bartlett’s Test of Sphericity yielded a chi-square value of 2851.898 (df = 153, p < 0.001), indicating that the correlation matrix differs significantly from an identity matrix and that the variables are sufficiently correlated for factor analysis. For hypothesis testing, the multivariate analysis was conducted separately for the ‘prefer personalized advertising’ group and the ‘prefer dynamic paywall’ group. A hypothesis was accepted only when the path was significant at p < 0.05 and the standardized coefficient (Beta) matched the hypothesized direction: greater than 0.00 for hypothesized positive paths, and less than 0.00 for hypothesized negative paths. The results are as follows.
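Both adequacy checks can be computed directly from a correlation matrix. The sketch below implements the standard Bartlett sphericity statistic and the overall KMO index with NumPy/SciPy on simulated data; the generated dataset and its dimensions are hypothetical, chosen only to mirror the checks reported above (KMO > 0.6, significant Bartlett chi-square).

```python
# Bartlett's test of sphericity and the KMO measure from a correlation matrix.
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(R, n):
    """Chi-square test that R differs from an identity matrix."""
    p = R.shape[0]
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return stat, df, chi2.sf(stat, df)

def kmo(R):
    """Overall Kaiser-Meyer-Olkin index: correlations vs. partial correlations."""
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                       # partial correlation matrix
    np.fill_diagonal(partial, 0.0)
    r_off = R.copy()
    np.fill_diagonal(r_off, 0.0)
    return (r_off ** 2).sum() / ((r_off ** 2).sum() + (partial ** 2).sum())

rng = np.random.default_rng(1)
f = rng.normal(size=(336, 2))                       # two latent factors
X = f @ rng.normal(size=(2, 6)) + 0.5 * rng.normal(size=(336, 6))
R = np.corrcoef(X, rowvar=False)
stat, df, p = bartlett_sphericity(R, n=336)
print(f"Bartlett chi2={stat:.1f} (df={df}), p={p:.3g}, KMO={kmo(R):.3f}")
```

On a true identity matrix the Bartlett statistic is zero (p = 1), while factor-structured data like the simulation above produces a large, highly significant chi-square, matching the pattern reported for the survey.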
Table 7 shows the correlation coefficients among the key research variables.
Hypotheses were evaluated from the perspective of adopting the alternative hypothesis; any path with a significance level (p-value) above 0.05 was rejected.

4.3. Result: Prefer to Personalized Advertising (Study 1)

First, we analyzed the AI privacy perceptions of respondents who prefer free news content with online advertising over a personalized paid subscription service. Among these respondents, Hypothesis 1 (disposition to value privacy will have a positive effect on perceived AI privacy risk), Hypothesis 4 (perceived AI privacy risk will have a positive effect on AI privacy concerns), and Hypothesis 7 (perceived benefits will have a positive effect on intention to disclose information to AI) were significant.
Table 8 presents the results of the multivariate analysis for respondents who prefer personalized advertising over a dynamic paywall. This analysis evaluates the hypothesized relationships (H1.1–H1.7) using standardized coefficients (β), t-values, and significance levels.
  • H1.1 (Disposition to Value Privacy → Perceived AI Privacy Risk)
The standardized coefficient is β = 0.586 (t = 7.854, p < 0.001), a statistically significant relationship, so the hypothesis was adopted. Respondents who place a higher value on privacy are significantly more likely to perceive AI privacy as risky.
  • H1.2 (Disposition to Value Privacy → Perceived AI Privacy Control)
The estimated coefficient is β = −0.330 (t = 3.795, p < 0.001). Although a negative relationship was expected, the result did not meet the adoption criteria, so the hypothesis was not supported.
  • H1.3 (Disposition to Value Privacy → AI Privacy Concerns)
The estimated coefficient is β = −0.026 (t = −0.284, p = 0.777). There is no statistically significant relationship between valuing privacy and AI privacy concerns, so the hypothesis was rejected.
  • H1.4 (Perceived AI Privacy Risk → AI Privacy Concerns)
A positive relationship was observed (β = 0.211, t = 2.340, p = 0.021). Given the statistical significance, the hypothesis was supported: individuals who perceive AI privacy as risky are more likely to have privacy concerns.
  • H1.5 (Perceived AI Privacy Control → AI Privacy Concerns)
The coefficient is β = −0.022 (t = −0.241, p = 0.810). The relationship was not statistically significant, so the hypothesis was not supported; perceived privacy control does not significantly affect privacy concerns.
  • H1.6 (AI Privacy Concerns → Intention to Disclose Information to AI)
Although increased privacy concerns were expected to reduce the intention to disclose personal information, the results show a significant positive relationship (β = 0.736, t = 11.802, p < 0.001). Because the expected negative relationship did not emerge, the hypothesis was not supported.
  • H1.7 (Perceived Benefits → Intention to Disclose Information to AI)
The standardized coefficient is β = 0.181 (t = 2.002, p = 0.048). This statistically significant finding supports the hypothesis: users who perceive benefits in AI-driven personalized advertising are more willing to disclose their data.
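The standardized coefficients and t-values reported above can be obtained by z-scoring the variables before fitting ordinary least squares and dividing each coefficient by its standard error. The sketch below illustrates this on simulated data; the variable names and effect size are hypothetical, not the study's.

```python
# Standardized OLS: z-score predictor(s) and outcome, fit with an intercept,
# and report beta (standardized coefficient) and t = beta / SE(beta).
import numpy as np

def standardized_ols(y, X):
    """Return standardized betas and t-values for each predictor column."""
    def z(a):
        return (a - a.mean(axis=0)) / a.std(axis=0, ddof=1)
    yz, Xz = z(y), z(X)
    Xd = np.column_stack([np.ones(len(yz)), Xz])          # add intercept
    coef, *_ = np.linalg.lstsq(Xd, yz, rcond=None)
    resid = yz - Xd @ coef
    dof = len(yz) - Xd.shape[1]
    sigma2 = resid @ resid / dof                          # residual variance
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
    return coef[1:], coef[1:] / se[1:]                    # drop the intercept

rng = np.random.default_rng(7)
privacy_value = rng.normal(size=336)                      # hypothetical predictor
perceived_risk = 0.6 * privacy_value + 0.8 * rng.normal(size=336)
beta, t = standardized_ols(perceived_risk, privacy_value.reshape(-1, 1))
print(f"beta={beta[0]:.3f}, t={t[0]:.2f}")
```

With one predictor, the standardized beta equals the Pearson correlation, so the simulated path recovers a beta near 0.6 with a large t-value, the same reading pattern used for H1.1 through H1.7.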
Figure 2 provides a statistical visualization of the research model verification results for the user group that prefers personalized advertising, based on the findings presented in Table 8. Users with a high Privacy Value Propensity perceive greater AI Privacy Risk (β = 0.586, t = 7.854, p < 0.001), indicating that those who highly value privacy are more sensitive to potential risks in AI data collection. Perceived AI Privacy Risk is, in turn, positively associated with AI Privacy Concern (β = 0.211, t = 2.340, p = 0.021), suggesting that risk perception heightens users’ concerns about their personal data. Importantly, Perceived Benefits of AI Privacy positively influence users’ Intention to Disclose AI Privacy (β = 0.181, t = 2.002, p = 0.048): users who recognize the advantages of AI-driven personalized advertising are more inclined to share their personal information. Conversely, AI Privacy Concern shows no significant negative effect on disclosure intention, indicating that heightened concern does not necessarily deter users from providing their data.
Two findings call for deeper exploration. Users who prefer dynamic paywalls accept micro-payments despite privacy concerns, yet the motivations underlying this behavior remain insufficiently understood. Likewise, the finding that elevated privacy concerns do not reduce users’ intention to disclose personal data (the unexpectedly positive coefficient for H1.6, β = 0.736) contradicts conventional expectations. These paradoxical results point to complex trade-offs and contextual factors influencing user behavior that are not fully captured by the current survey methodology. Overall, the personalized advertising group values the benefits of AI personalization and is willing to share data accordingly, although those who value privacy tend to be more aware of the associated risks.
Notably, Perceived AI Privacy Control does not have a significant influence, implying that the belief in one’s ability to control personal information does not substantially affect the decision to disclose data.

4.4. Result: Prefer to Dynamic Paywall (Study 2)

Second, we analyzed the AI privacy perceptions of those who prefer a paid subscription to personalized online news over free news content with advertisements. Among these respondents, Hypothesis 1 (privacy value propensity will have a positive effect on perceived AI privacy risk) was significant.
Figure 3 visually represents the research model validation results for consumers who prefer a dynamic paywall, based on the statistical findings in Table 8. The analysis shows a very strong positive relationship between Privacy Value Propensity and Perceived AI Privacy Risk (β = 0.731, t = 7.271, p < 0.001): individuals who place a high value on privacy are significantly more likely to perceive AI-driven data collection as risky. Interestingly, and contrary to expectations, AI Privacy Concern is positively associated with Intention to Disclose AI Privacy (β = 0.391, t = 2.877, p = 0.006), suggesting that even consumers highly concerned about their privacy are willing to share personal information in exchange for ad-free, personalized news content. However, the relationship between Perceived Benefits of AI Privacy and disclosure intention was not statistically significant, implying that this group does not prioritize the advantages of AI personalization when making disclosure decisions. These findings indicate that dynamic paywall-preferring consumers are highly sensitive to AI privacy risks and concerns, yet are still inclined to provide personal data in pursuit of high-quality news content without advertising. For this group, the perceived benefits of AI play a limited role; content quality and subscription value appear to be the more influential factors in decision-making.

4.5. User Influx from Portal Service & Social Media

Third, we analyzed the AI privacy perceptions of those who access news content via portal services and social media. For each group of respondents, Hypothesis 1 (privacy value propensity will have a positive effect on perceived AI privacy risk) was significant.
Table 9 examines the influence of news consumption sources (portal services vs. social media) on privacy perceptions and disclosure behavior, testing hypotheses H3.1–H4.7 separately for portal service users and social media users to assess their differential effects.

Analysis of Key Variables and Their Effects

  • H3.1 (Disposition to Value Privacy → Perceived AI Privacy Risk)
The standardized coefficient is β = 0.663 (t = 9.612, p < 0.001). This strong positive relationship supports the hypothesis: portal news users who highly value privacy perceive AI privacy risks as significantly greater.
  • H3.2 (Disposition to Value Privacy → Perceived AI Privacy Control)
The results show β = 0.130 (t = 1.423, p = 0.157). The relationship is not statistically significant, so the hypothesis is not adopted; privacy-conscious users do not necessarily feel they have control over AI privacy mechanisms when consuming news via portals.
  • H3.3 (Disposition to Value Privacy → AI Privacy Concerns)
A negative relationship was observed (β = −0.197, t = −2.186, p = 0.031), contrary to expectations: users who highly value privacy exhibit lower levels of AI privacy concern. The hypothesis was therefore rejected.
  • H3.6 (AI Privacy Concerns → Intention to Disclose Information to AI)
Although AI privacy concerns were expected to negatively influence disclosure behavior, the results show β = 0.628 (t = 8.777, p < 0.001). The hypothesis was not adopted: among portal service users, privacy concerns do not decrease the intention to share personal data with AI systems.
  • H4.1 (Disposition to Value Privacy → Perceived AI Privacy Risk)
The analysis reveals a strong positive relationship (β = 0.676, t = 5.021, p < 0.001), so the hypothesis was adopted. Like portal users, social media users who place a high value on privacy perceive AI privacy risks as significantly greater.
  • H4.2 (Disposition to Value Privacy → Perceived AI Privacy Control)
The results show β = 0.539 (t = −1.162, p < 0.001), and the hypothesis was not adopted. Like portal users, social media users who value privacy do not feel they have control over AI privacy mechanisms.
  • H4.6 (AI Privacy Concerns → Intention to Disclose Information to AI)
Although a negative relationship was expected, the analysis finds β = 0.782 (t = 6.878, p < 0.001), leading to the rejection of the hypothesis. Even among social media users, privacy concerns do not act as a significant deterrent to data-sharing behavior.
Figure 4 illustrates the validation results of the research model for users who consume news via portal services, based on the statistical outcomes presented in Table 9. The analysis reveals a strong positive relationship between Privacy Value Propensity and Perceived AI Privacy Risk (β = 0.663, p < 0.001): portal news users who value privacy are more likely to perceive AI-driven data collection as risky. However, the hypothesized negative relationship between AI Privacy Concern and Intention to Disclose AI Privacy was not supported; the observed coefficient was significantly positive (β = 0.628, p < 0.001), suggesting that despite heightened awareness of privacy risks, these users are not significantly deterred from sharing personal information. Risk perception among portal news consumers does not necessarily translate into a lower willingness to disclose data.
Figure 5 visualizes the validation results of the research model for users who consume news via social media platforms, based on the statistical analysis presented in Table 9. The results show a strong positive relationship between Privacy Value Propensity and Perceived AI Privacy Risk (β = 0.676, p < 0.001): social media news users who value privacy are more likely to perceive AI-driven data collection as risky. However, despite exhibiting a higher level of AI Privacy Concern than portal news users, the hypothesized negative relationship between AI Privacy Concern and Intention to Disclose AI Privacy was not supported; the observed coefficient was significantly positive (β = 0.782, p < 0.001). Even with heightened privacy concerns, social media users are not reluctant to disclose their personal information, potentially reflecting a higher tolerance for privacy trade-offs in exchange for social or informational benefits.

5. Discussion and Conclusions

5.1. Conclusions

This study’s findings extend beyond the Korean media sector, offering insights relevant to global media companies grappling with digital transformation and the challenge of balancing sustainable business models with personal data protection. Differences in cultural and institutional contexts should nevertheless be considered when generalizing the results to “global media companies.” For example, news subscribers in Europe and North America operate under different privacy-regulation environments (e.g., the GDPR) and may exhibit different privacy sensitivities, while the comparatively high reliance on mobile-based news consumption and social media in Asian countries may lead to differences in users’ data-provision behavior. Although this study presents results focused on the Korean context, the possibility of generalization at a global level should be examined in future studies through cross-cultural comparisons or verification based on multinational data. These proposed follow-up research paths provide a basis for extending the results of this study to other cultures.
Users’ perceptions of AI-driven data collection, such as through personalized advertising and dynamic paywalls, are shaped by their risk-benefit evaluations within the information society. These insights enrich ongoing discussions on algorithmic governance, platform accountability, and the ethical use of personal data in digital news. This study aimed to understand public perception of media companies’ business models (personalized advertising and dynamic paywalls) and AI-based personal data collection. Despite widespread privacy concerns, Richardson et al. (2019) observe that the economic impacts of privacy breaches are often minimal, indicating a gap between users’ perceived risks and actual outcomes.
In Korean society, characterized by collectivistic culture and strong privacy awareness, there appear to be distinctive influences on privacy concerns between institutional portal news users and social media users, who are more centered on private network sharing. Portal users tend to have relatively stricter expectations for institutional trust and personal data management, whereas social media users may be more tolerant of personal information exposure due to the sharing-oriented cultural nature of these platforms. Understanding these culturally and institutionally rooted differences in privacy attitudes is critical for designing tailored privacy policies and user protection strategies.
For media companies worldwide, these findings highlight the importance of balancing revenue generation with ethical data practices. They also advance academic discourse on algorithmic governance and digital privacy in the evolving news ecosystem, aligning with broader conversations on algorithmic accountability and surveillance capitalism (Zuboff, 2019). The need for ethical design in AI-driven media personalization is underscored. As algorithmic personalization increasingly shapes information environments, understanding user perceptions is vital for trust-building and effective governance (Nemitz, 2018; Sutanto et al., 2013). These considerations are crucial for both industry and policymaking.
In the hypothesis-testing results, not only were a number of hypotheses unsupported, but some results also contradicted existing theory, complicating interpretation. In particular, within the group that prefers personalized advertising, ‘AI privacy concerns’ correlate positively with ‘data disclosure intention’, contradicting the traditionally expected hypothesis that privacy concerns inhibit the intention to disclose. These paradoxical results are in line with the ‘privacy paradox’: while users express high concern about the protection of personal information, they may in practice actively provide it when the potential benefits of using a service and receiving customized offerings outweigh those concerns (Awad & Krishnan, 2006; Acquisti et al., 2015). It also cannot be ruled out that social desirability bias influenced participants’ responses, inflating expressed concern or understating disclosure intention. Finally, the responses may reflect situations in which users have little choice but to accept providing personal information because of limited alternatives or the necessity of the service.

5.1.1. Prefer to Personalized Advertising

The survey and statistical analysis show that the majority of respondents preferred to receive news content for free even if their personal information was extensively collected for online advertising. Those who preferred free news content were nevertheless worried about problems that could arise from providing personal information to AI (e.g., misuse or the risk of leakage), supporting Hypotheses 1 and 4. More than half of the respondents preferred ‘viewing online news for free’. This is interpreted to mean that those who prefer free news content perceive the value of their privacy to be lower than those who prefer paid subscriptions to personalized online news. They were concerned about the risks of providing personal information to AI but, paradoxically, also expected benefits (Hypothesis 7). When a digital media organization uses AI to deliver personalized advertisements, transparently disclosing, in a form users can easily understand, how personal information is collected and processed and how personalized advertising is applied can be expected to alleviate these risk concerns.

5.1.2. Prefer to Dynamic Paywall

Those who prefer paid subscriptions to personalized online news perceive the value of privacy more highly than those who prefer free news content, which is interpreted as reflecting concern about problems that may arise from providing personal information to AI. The most distinctive characteristic of this group is that, unlike the other groups, not a single respondent preferred ‘viewing online news for free’. More than half of the respondents were willing to pay 1000 won per month as a subscription fee for news content, which appears to be an attempt to protect the value of their privacy as much as possible while paying the media in minimal increments and receiving personalized services.
Therefore, when a media company uses AI to build a personalized subscription service, it must thoroughly protect and emphasize each user’s privacy. Additionally, not a single respondent in this group said that they ‘do not care at all whether or not advertisements appear’. Put the other way around, this can be interpreted as ‘I am bothered by advertisements while reading online news’. Considering this, the ‘prefer dynamic paywall’ group, like the ‘prefer personalized advertising’ group, should be offered as low a subscription fee as possible to secure subscribers; however, because these users have paid a subscription fee, we recommend excluding all incidental advertisements other than the news content itself.
Table 10 compares the characteristics and privacy perceptions of users who prefer personalized advertising with those who prefer a dynamic paywall.

5.2. Academic and Theoretical Implications of the Results

This study makes an academic contribution by empirically analyzing the relationship between AI-driven news content revenue models and consumers’ perceptions of privacy protection. In particular, it differentiates itself from previous research by integrating Privacy Calculus Theory and Communication Privacy Management Theory to explain AI-driven personal data collection strategies in the media industry.
Furthermore, this study empirically demonstrates that consumers’ perceptions of AI privacy protection function differently under the personalized advertising model and the dynamic paywall model. This finding provides practical implications for how AI-driven news services can alleviate users’ concerns about privacy protection.

5.3. Managerial Implications of Results

This study highlights several key implications for media companies. First, media firms must adopt differentiated strategies tailored to distinct consumer segments. Consumers who favor personalized advertising are open to ad-supported free news, while those preferring the dynamic paywall model prioritize privacy and an ad-free experience; companies should therefore design segmented privacy policies and service models accordingly. Second, enhancing transparency in privacy protection is essential. Media organizations using AI-driven news and advertising systems must ensure consumers clearly understand how their data is collected and used; greater transparency in data practices can increase user acceptance and trust. Third, media companies should promote micro-subscription models. The many consumers willing to pay a small fee for ad-free news indicate a viable market for micro-payment-based subscriptions, offering an effective strategy for diversifying revenue streams.
Digital news consumers evaluate privacy trade-offs based on perceived control, transparency, and trust in AI systems, reflecting dynamic and stratified privacy attitudes (Taddicken, 2014; Hong et al., 2023). Media organizations must design responsible personalization systems that prioritize consent, user control, and clear value propositions (Martin & Murphy, 2017; Awad & Krishnan, 2006).
The findings align with previous research on the risks of poorly governed personalization, which can undermine user agency (Jaber & Abbad, 2023). Ensuring algorithmic transparency and protecting user data are therefore crucial in AI-driven media personalization. Although this study confirmed that “the effectiveness of privacy control mechanisms is limited,” it also seeks to compensate for the fact that existing studies have generally not discussed the detailed components of these mechanisms or their limitations in actual user experience. On many AI-based news and advertising platforms, users’ ability to independently control whether personal information is provided is very limited, and information on its collection and use is in most cases presented opaquely. For example, there are no clear procedures for users to directly inspect the personal information records collected by AI, delete that data, or stop the service if they wish, so practical ‘controllability’ is low.
Accordingly, the following improvements are proposed to strengthen transparency and control. First, a user-tailored personal information management dashboard should be provided so that users can check in real time what data is collected and used, and when and how. Second, services should be designed so that users can easily exercise their rights to withdraw consent and delete data at the main stages of personal information collection and use. Third, introducing an “explainable AI” function that explains how AI algorithms and data processing work in an easily understandable form would strengthen user trust and support autonomous decision-making.
Media companies should consider the following strategies to balance AI-based personalized advertising with personal information protection. Above all, the process of collecting and using personal information should be disclosed transparently so that consumers can easily understand it, and functions that allow users to directly control the scope of the personal information they provide should be strengthened. In addition, companies should comply with the data minimization principle and reduce unnecessary inconvenience by designing advertisement exposure frequency to be controllable by the user.
Based on the study findings, media firms must rigorously comply with relevant data protection laws and enhance algorithmic transparency while establishing clear user consent processes. Strengthening mechanisms for preventing and responding to personal data misuse and breaches, alongside educational and guidance programs for users to safely manage their data, is necessary. Collaboration with policymakers to develop industry standards and guidelines is recommended to simultaneously pursue sustainable privacy protection and innovative personalized business models.
This study advances journalism research by mapping consumer attitudes toward AI-driven monetization, emphasizing the importance of ethical design—consent, control, and transparency—in personalization systems.

5.4. Limitations and Future Studies

This study has limitations in that the sample is concentrated among urban residents, office workers, the highly educated, and respondents in their 30s and those aged 50 or older. Future studies should therefore increase representativeness through quota sampling or weighted analysis. A further limitation is that the study did not employ other analysis methods, such as user and expert interviews; future research should draw on more diverse methods, including expert interviews. Korean media companies are experiencing the limits of their existing business models, and we hope this study will help media companies use AI privacy insights to build new business models. Future studies should also explore cross-cultural comparisons and emerging news platforms.

Funding

This research was supported by the 2025 Hanshin University Research Grant (grant number: 2025-10247).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that supports the findings of this study are available from the author upon reasonable request.

Conflicts of Interest

The author declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.

References

  1. Acquisti, A., Brandimarte, L., & Loewenstein, G. (2015). Privacy and human behavior in the age of information. Science, 347(6221), 509–514. [Google Scholar] [CrossRef]
  2. Awad, N. F., & Krishnan, M. S. (2006). The personalization privacy paradox: An empirical evaluation of information transparency and the willingness to be profiled online for personalization. MIS Quarterly, 30(1), 13–28. [Google Scholar] [CrossRef]
  3. Belanche, D., Casaló, L. V., Flavián, C., & Pérez-Rueda, A. (2021). The role of customers in the gig economy: How perceptions of working conditions and service quality influence the use and recommendation of food delivery services. Service Business, 15(1), 45–75. [Google Scholar] [CrossRef]
  4. Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018, April 21–26). It’s reducing a human being to a percentage’ perceptions of justice in algorithmic decisions. 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–14), Montreal, QC, Canada. [Google Scholar] [CrossRef]
  5. Chae, I., Ha, J., & Schweidel, D. A. (2022). Paywall suspensions and digital news subscriptions. Marketing Science, 42(4), 729–745. [Google Scholar] [CrossRef]
  6. Criado, N., & Such, J. M. (2015). Implicit contextual integrity in online social networks. Information Sciences, 325, 48–69. [Google Scholar] [CrossRef]
  7. Davoudi, H., An, A., Zihayat, M., & Edall, G. (2018, August 19–23). Adaptive paywall mechanism for digital news media. 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 205–214), London, UK. [Google Scholar] [CrossRef]
  8. Dinev, T., & Hart, P. (2006). An extended privacy calculus model for E-commerce transactions. Information Systems Research, 17(1), 61–80. [Google Scholar] [CrossRef]
  9. Eslami, M., Krishna, K., Sandvig, C., & Karahalios, K. (2018, April 21–26). Communicating algorithmic process in online behavioral advertising. 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–13), Montreal, QC, Canada. [Google Scholar] [CrossRef]
  10. Evans, D. S. (2009). The online advertising industry: Economics, evolution, and privacy. Journal of Economic Perspectives, 23(3), 37–60. [Google Scholar] [CrossRef]
  11. Haynes, D., & Robinson, L. (2023). Delphi study of risk to individuals who disclose personal information online. Journal of Information Science, 49(1), 93–106. [Google Scholar] [CrossRef]
  12. Hong, N., Lee, S., & Kim, S. (2023). Generational conflict in the digital journalism environment: Focusing on journalist interviews. Journal of Communication Science, 23(2), 109–155. [Google Scholar] [CrossRef]
  13. Jaber, F., & Abbad, M. A. (2023). A realistic evaluation of the dark side of data in the digital ecosystem. Journal of Information Science, 51(3), 667–683. [Google Scholar] [CrossRef]
  14. Kim, J., & Kim, J. (2017). A study on the internet user’s economic behavior of provision of personal information: Focused on the privacy calculus, CPM theory. The Journal of Information Systems, 26(1), 93–123. [Google Scholar] [CrossRef]
  15. Lee, H. P., Yang, Y. J., Von Davier, T. S., Forlizzi, J., & Das, S. (2024, May 11–16). Deepfakes, phrenology, surveillance, and more! A taxonomy of AI privacy risks. Chi Conference on Human Factors in Computing Systems (pp. 1–19), Honolulu, HI, USA. [Google Scholar] [CrossRef]
  16. Li, Y. (2012). Theories in online information privacy research: A critical review and an integrated framework. Decision Support Systems, 54(1), 471–481. [Google Scholar] [CrossRef]
  17. Martin, K. D., & Murphy, P. E. (2017). The role of data privacy in marketing. Journal of the Academy of Marketing Science, 45(2), 135–155. [Google Scholar] [CrossRef]
  18. Masur, P. K., & Ranzini, G. (2025). Privacy calculus, privacy paradox, and context collapse: A replication of three key studies in communication privacy research. Journal of Communication, jqaf007. [Google Scholar] [CrossRef]
  19. Medoff, N. (2001). Just a click away: Advertising on the internet. Allyn and Bacon. [Google Scholar]
  20. Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21. [Google Scholar] [CrossRef]
  21. Morel, V., Karegar, F., & Santos, C. (2025). “I will never pay for this” Perception of fairness and factors affecting behaviour on ‘pay-or-ok’ models. arXiv, arXiv:2505.12892. [Google Scholar] [CrossRef]
  22. Möller, J. E. (2024). Situational privacy: Theorizing privacy as communication and media practice. Communication Theory, 34(3), 130–142. [Google Scholar] [CrossRef]
  23. Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180089. [Google Scholar] [CrossRef]
  24. Petronio, S. (2002). Boundaries of privacy: Dialectics of disclosure. State University of New York Press. [Google Scholar]
  25. Richardson, V. J., Smith, R. E., & Watson, M. W. (2019). Much ado about nothing: The (lack of) economic impact of data privacy breaches. Journal of Information Systems, 33(3), 227–265. [Google Scholar] [CrossRef]
  26. Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. Data and Discrimination: Converting Critical Concerns into Productive Inquiry, 22, 4349–4357. [Google Scholar]
  27. Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551. [Google Scholar] [CrossRef]
  28. Sutanto, J., Palme, E., Tan, C. H., & Phang, C. W. (2013). Addressing the personalization-privacy paradox: An empirical assessment from a field experiment on smartphone users. MIS Quarterly, 37(4), 1141–1164. [Google Scholar] [CrossRef]
  29. Taddicken, M. (2014). The ‘privacy paradox’ in the social web: The impact of privacy concerns, individual characteristics, and the perceived social relevance on different forms of self-disclosure. Journal of Computer-Mediated Communication, 19(2), 248–273. [Google Scholar] [CrossRef]
  30. Trepte, S., Scharkow, M., & Dienlin, T. (2020). The privacy calculus contextualized: The influence of affordances. Computers in Human Behavior, 104, 106115. [Google Scholar] [CrossRef]
  31. Wang, C., Zhou, B., & Joshi, Y. V. (2023). Endogenous consumption and metered paywalls. Marketing Science, 43(1), 158–177. [Google Scholar] [CrossRef]
  32. Xu, Z., Thurman, N., Berhami, J., Strasser Ceballos, C., & Fehling, O. (2025). Converting online news visitors to subscribers: Exploring the effectiveness of paywall strategies using behavioural data. Journalism Studies, 26(4), 464–484. [Google Scholar] [CrossRef]
  33. ZareRavasan, A. (2023). Boosting innovation performance through big data analytics: An empirical investigation on the role of firm agility. Journal of Information Science, 49(5), 1293–1308. [Google Scholar] [CrossRef]
  34. Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs. [Google Scholar]
Figure 1. Research model based on Privacy Calculus Theory and Communication Privacy Management Theory.
Figure 2. Statistics for respondents preferring personalized advertising (Study 1-1).
Figure 3. Statistics for respondents preferring the dynamic paywall (Study 1-2).
Figure 4. Statistics for respondents arriving from portal services (Study 2-1).
Figure 5. Statistics for respondents arriving from social media (Study 2-2).
Table 1. Comparison of ‘Personalized advertising vs. Dynamic paywall’.

Common to both models: privacy collection through AI.

Differences:
- Personalized advertising: free news content; decreased readability of news text caused by heavy advertising exposure; collects broad information about consumers’ online activities, including website visit records.
- Dynamic paywall: paid news content; personalized news content; collects information on tendencies, such as which types of news content consumers frequently consume on a specific news homepage.

Purpose:
- Personalized advertising: AI-driven privacy collection to enhance the provision of advertising content preferred by consumers on the news homepage.
- Dynamic paywall: AI-driven privacy collection to enhance the provision of news content preferred by consumers.

AI’s privacy collection types:
- Personalized advertising (broad digital activities; free news content): personal identification data and external digital activities (e.g., shopping website visit records, location data for customized advertisements).
- Dynamic paywall (certain digital activities; news content for paid subscriptions): personal identification data and internal digital activities (e.g., which types of news the consumer frequently clicks on and stays on for a long time).
Table 2. Operational definition of variables.

| Variables | Definition | References |
| --- | --- | --- |
| Disposition to Value of Privacy | A person’s tendency to value privacy and to place strong value on the protection of his or her personal information | Li (2012); Xu et al. (2025) |
| Perception of AI Privacy Risk | The degree to which a person recognizes the risks he or she may experience from providing personal information in the context of using AI | Awad and Krishnan (2006); Lee et al. (2024) |
| Perception of AI Privacy Control | The cognitive conviction that users can control and manage their information in AI-based data use | Belanche et al. (2021); Kim and Kim (2017) |
| AI Privacy Concern | The level of concern about AI systems collecting, analyzing, and utilizing personal information | Taddicken (2014); Acquisti et al. (2015) |
| Perceived Benefits of AI Privacy | The real benefits expected from AI-based customized content and advertising | Shin (2021); Sutanto et al. (2013) |
| Intention to Disclose | The intention to provide one’s personal information to the news platform | Awad and Krishnan (2006); Martin and Murphy (2017) |
Table 3. Measurement instruments for intention to disclose personal information to AI.

Disposition to Value of Privacy (negative)
(H1.1) I tend to be more concerned about AI-related privacy than others.
(H1.2) Compared to others, I am more interested in having my privacy protected by AI.
(H1.3) I think it is most important to me that my privacy is protected from AI.

Perception of AI Privacy Risk (negative)
(H2.1) I think there is a risk that providing data to AI may compromise privacy.
(H2.2) I think my privacy can be used unfairly by AI.
(H2.3) I think providing privacy to AI involves a number of unexpected problems.

Perception of AI Privacy Control (positive)
(H3.1) I think I can control who has access to the privacy collected by AI.
(H3.2) I think I can decide the range of personal information to disclose to AI.
(H3.3) I believe that I can control how AI utilizes my personal information.

AI Privacy Concern (negative)
(H4.1) I am concerned that the privacy provided to AI can be misused or abused.
(H4.2) I am concerned about providing privacy to AI because it can be used for other purposes.
(H4.3) I am concerned about providing personal information to AI because it can be used in unexpected ways.

Perceived Benefits of AI Privacy (positive)
(H5.1) I think I will receive sophisticated services by providing privacy to AI.
(H5.2) I think I will receive various services by providing privacy to AI.
(H5.3) I think I will receive services of interest by providing privacy to AI.

Intention to Disclose Personal Information to AI for Online News Content & Ads Services (positive)
(H6.1) I think that by providing privacy to AI, I will receive sophisticated online news content & ads services.
(H6.2) I think that by providing privacy to AI, I will receive a variety of online news content & ads services.
(H6.3) I believe that by providing privacy to AI, I will receive online news content & ads services of interest.
Table 4. Demographic characteristics of survey respondents.

| Variable | Category | Frequency (N = 336) | Percentage (%) |
| --- | --- | --- | --- |
| Gender | Male | 153 | 45.5% |
| | Female | 183 | 54.5% |
| Age | Teens | 14 | 4.2% |
| | 20s | 74 | 22.0% |
| | 30s | 112 | 33.3% |
| | 40s+ | 136 | 40.5% |
| Occupation | Office worker | 264 | 78.6% |
| | Student | 34 | 10.1% |
| | Self-employed | 32 | 9.5% |
| | Unemployed | 6 | 1.8% |
Table 5. Univariate normality test.

| Item | N | Min. | Max. | Ave. | Standard Deviation | Skewness | Kurtosis |
| --- | --- | --- | --- | --- | --- | --- | --- |
| H1.1 | 168 | 1 | 5 | 3.50 | 1.16 | −0.42 | −0.81 |
| H1.2 | | | | 3.38 | 1.18 | −0.16 | −0.98 |
| H1.3 | | | | 3.71 | 1.20 | −0.51 | −1.03 |
| H2.1 | | | | 4.05 | 1.07 | −1.28 | 1.24 |
| H2.2 | | | | 4.24 | 0.84 | −0.96 | 0.32 |
| H2.3 | | | | 4.24 | 0.78 | −0.75 | −0.07 |
| H3.1 | | | | 3.12 | 1.24 | −0.08 | −1.15 |
| H3.2 | | | | 3.05 | 1.16 | −0.09 | −0.95 |
| H3.3 | | | | 2.95 | 1.26 | −0.06 | −1.22 |
| H4.1 | | | | 3.48 | 0.91 | −0.70 | 0.02 |
| H4.2 | | | | 3.55 | 0.88 | −0.25 | −0.64 |
| H4.3 | | | | 3.50 | 0.83 | −0.39 | −0.52 |
| H5.1 | | | | 4.05 | 0.95 | −0.77 | −0.32 |
| H5.2 | | | | 4.05 | 1.03 | −1.04 | 0.54 |
| H5.3 | | | | 3.98 | 0.99 | −1.00 | 0.68 |
| H6.1 | | | | 3.36 | 0.95 | −0.43 | −0.51 |
| H6.2 | | | | 3.45 | 0.96 | −0.69 | 0.33 |
| H6.3 | | | | 3.57 | 0.85 | −0.93 | 0.84 |
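The kind of univariate-normality screen reported in Table 5 can be sketched in a few lines of SciPy. The Likert responses below are hypothetical (not the study’s data), and the cutoffs (|skewness| < 2, |excess kurtosis| < 7) are common rules of thumb rather than values stated in the paper:

```python
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert responses for a single survey item.
item = np.array([3, 4, 5, 4, 3, 2, 5, 4, 4, 3, 5, 2, 4, 3, 4], dtype=float)

ave = item.mean()
sd = item.std(ddof=1)                    # sample standard deviation, as in Table 5
skew = stats.skew(item, bias=False)      # sample skewness
kurt = stats.kurtosis(item, bias=False)  # excess kurtosis (0 for a normal distribution)

# Common screen for approximate univariate normality in survey research.
acceptable = abs(skew) < 2 and abs(kurt) < 7
```

Each item in Table 5 would pass this screen, since every reported |skewness| is below 2 and every |kurtosis| is below 7.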
Table 6. Exploratory factor analysis.

| Item | H5 | H2 | H6 | H3 | H1 | H4 | Cronbach’s Alpha |
| --- | --- | --- | --- | --- | --- | --- | --- |
| H5.2 | 0.92 | −0.02 | 0.06 | −0.04 | −0.00 | 0.18 | 0.83 |
| H5.1 | 0.91 | −0.00 | −0.14 | 0.10 | 0.01 | 0.06 | |
| H5.3 | 0.90 | 0.00 | −0.07 | 0.06 | 0.12 | −0.15 | |
| H2.2 | −0.04 | 0.89 | 0.07 | 0.02 | 0.18 | −0.05 | |
| H2.1 | −0.13 | 0.80 | 0.14 | −0.08 | 0.28 | −0.10 | |
| H2.3 | 0.29 | 0.76 | −0.06 | −0.08 | 0.39 | −0.01 | |
| H6.2 | 0.22 | 0.02 | 0.89 | 0.13 | −0.23 | 0.14 | |
| H6.1 | −0.12 | 0.06 | 0.88 | −0.02 | 0.13 | 0.29 | |
| H6.3 | −0.12 | 0.06 | 0.86 | 0.12 | 0.12 | 0.14 | |
| H3.2 | −0.07 | −0.09 | 0.04 | 0.94 | 0.02 | 0.11 | |
| H3.3 | 0.19 | −0.09 | 0.02 | 0.92 | 0.06 | 0.05 | |
| H3.1 | −0.04 | 0.00 | 0.20 | 0.86 | 0.17 | −0.11 | |
| H1.2 | −0.12 | 0.17 | −0.05 | 0.31 | 0.83 | −0.12 | |
| H1.1 | 0.09 | 0.55 | 0.08 | −0.04 | 0.70 | −0.04 | |
| H1.3 | 0.49 | 0.48 | −0.01 | 0.15 | 0.62 | −0.08 | |
| H4.1 | −0.28 | 0.07 | 0.32 | −0.17 | 0.08 | 0.85 | |
| H4.2 | 0.27 | −0.05 | 0.37 | 0.25 | −0.29 | 0.71 | |
| H4.3 | 0.20 | −0.08 | 0.56 | 0.16 | −0.29 | 0.61 | |

Kaiser-Meyer-Olkin Measure of Sampling Adequacy = 0.67. Bartlett’s Test of Sphericity: Chi-Square X² = 2851.90 (df = 153, p < 0.001) **. ** p < 0.01.
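Table 6 reports Cronbach’s alpha as the reliability check for each factor. As a minimal sketch, alpha can be computed directly from an item-response matrix with NumPy using the standard tau-equivalent formula; the six-respondent matrix below is hypothetical, not the study’s data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of each respondent's total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to a three-item scale (e.g., the H5 items).
responses = np.array([
    [5, 5, 4],
    [4, 4, 4],
    [3, 4, 3],
    [5, 4, 5],
    [2, 3, 2],
    [4, 5, 4],
])
alpha = cronbach_alpha(responses)
```

Values above roughly 0.7 are conventionally treated as acceptable internal consistency, which the reported 0.83 clears.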
Table 7. Correlation coefficients among variables.

| | H1 | H2 | H3 | H4 | H5 | H6 |
| --- | --- | --- | --- | --- | --- | --- |
| H1 | 1.00 | | | | | |
| H2 | 0.63 | 1.00 | | | | |
| H3 | 0.20 | −0.07 | 1.00 | | | |
| H4 | −0.21 | −0.09 | 0.14 | 1.00 | | |
| H5 | 0.50 | 0.80 | −0.11 | −0.01 | 1.00 | |
| H6 | 0.03 | 0.08 | 0.18 | 0.64 | 0.06 | 1.00 |
Table 8. Multivariate analysis of ‘Study 1’.

Prefer to personalized advertising (n = 120):

| Hypothesis | Path | Standard Error | t-Value | Regression Coefficient (Beta) | p-Value | f² | Result |
| --- | --- | --- | --- | --- | --- | --- | --- |
| H1.1 | Des → Ris | 0.05 | 7.85 | 0.59 (Pos) ** | 0.000 * | 0.22 | Supported |
| H1.2 | Des → Cont | 0.09 | 3.80 | 0.33 (Neg) *** | 0.000 * | 0.10 | Not supported |
| H1.3 | Des → Conc | 0.07 | −0.28 | −0.03 (Pos) ** | 0.777 | 0.01 | Not supported |
| H1.4 | Ris → Conc | 0.09 | 2.34 | 0.21 (Pos) ** | 0.021 * | 0.08 | Supported |
| H1.5 | Cont → Conc | 0.06 | −0.24 | −0.02 (Neg) *** | 0.810 | 0.00 | Not supported |
| H1.6 | Conc → Int | 0.07 | 11.80 | 0.74 (Neg) *** | 0.000 * | 0.40 | Not supported |
| H1.7 | Bne → Int | 0.09 | 2.00 | 0.18 (Pos) ** | 0.048 * | 0.05 | Supported |

Prefer to dynamic paywall (n = 48):

| Hypothesis | Path | Standard Error | t-Value | Regression Coefficient (Beta) | p-Value | f² | Result |
| --- | --- | --- | --- | --- | --- | --- | --- |
| H2.1 | Des → Ris | 0.10 | 7.27 | 0.73 (Pos) ** | 0.000 * | 0.27 | Supported |
| H2.2 | Des → Cont | 0.18 | −0.48 | −0.07 (Neg) *** | 0.632 | 0.01 | Not supported |
| H2.3 | Des → Conc | 0.08 | −6.06 | −0.67 (Pos) ** | 0.000 * | 0.20 | Not supported |
| H2.4 | Ris → Conc | 0.08 | −5.11 | −0.60 (Pos) ** | 0.000 * | 0.18 | Not supported |
| H2.5 | Cont → Conc | 0.08 | 3.79 | 0.49 (Neg) *** | 0.000 * | 0.15 | Not supported |
| H2.6 | Conc → Int | 0.14 | 2.88 | 0.39 (Neg) *** | 0.006 * | 0.12 | Not supported |
| H2.7 | Bne → Int | 0.11 | −1.55 | −0.22 (Pos) ** | 0.129 | 0.02 | Not supported |

Hypothesis adoption criteria: * p < 0.05; ** Pos = hypothesized positive path; *** Neg = hypothesized negative path. Des = disposition to value of privacy; Ris = perceived AI privacy risk; Cont = perceived AI privacy control; Conc = AI privacy concern; Bne = perceived benefits; Int = intention to disclose.
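The f² column above is Cohen’s effect size for a regression path, conventionally computed as f² = R²/(1 − R²) for a lone predictor, with 0.02, 0.15, and 0.35 read as small, medium, and large. A minimal sketch (the R² value below is illustrative, not taken from the study):

```python
def cohens_f2(r_squared: float) -> float:
    """Cohen's f-squared effect size from a model's R-squared."""
    if not 0.0 <= r_squared < 1.0:
        raise ValueError("R-squared must be in [0, 1)")
    return r_squared / (1.0 - r_squared)

# Illustrative: an R-squared of 0.26 maps to an f-squared of about 0.35 ("large").
f2 = cohens_f2(0.26)
```

When the effect of one predictor block over another is of interest, the numerator is instead the difference between the full and reduced models’ R² values.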
Table 9. Multivariate analysis of ‘Study 2’.

Portal (n = 120):

| Hypothesis | Path | Standard Error | t-Value | Regression Coefficient (Beta) | p-Value | f² | Result |
| --- | --- | --- | --- | --- | --- | --- | --- |
| H3.1 | Des → Ris | 0.06 | 9.61 | 0.66 (Pos) ** | 0.000 * | 0.25 | Supported |
| H3.2 | Des → Cont | 0.09 | 1.42 | 0.13 (Neg) *** | 0.157 | 0.02 | Not supported |
| H3.3 | Des → Conc | 0.06 | −2.19 | −0.20 (Pos) ** | 0.031 * | 0.06 | Not supported |
| H3.4 | Ris → Conc | 0.09 | −0.63 | −0.06 (Pos) ** | 0.529 | 0.00 | Not supported |
| H3.5 | Cont → Conc | 0.07 | 1.95 | 0.18 (Neg) *** | 0.053 | 0.03 | Not supported |
| H3.6 | Conc → Int | 0.08 | 8.78 | 0.63 (Neg) *** | 0.000 * | 0.35 | Not supported |
| H3.7 | Bne → Int | 0.08 | 1.56 | 0.14 (Pos) ** | 0.122 | 0.02 | Not supported |

Social media (n = 48):

| Hypothesis | Path | Standard Error | t-Value | Regression Coefficient (Beta) | p-Value | f² | Result |
| --- | --- | --- | --- | --- | --- | --- | --- |
| H4.1 | Des → Ris | 0.12 | 5.02 | 0.68 (Pos) ** | 0.000 * | 0.23 | Supported |
| H4.2 | Des → Cont | 0.35 | −1.16 | 0.54 (Neg) *** | 0.000 * | 0.05 | Not supported |
| H4.3 | Des → Conc | 0.22 | −1.61 | −0.28 (Pos) ** | 0.118 | 0.04 | Not supported |
| H4.4 | Ris → Conc | 0.23 | −1.62 | −0.28 (Pos) ** | 0.115 | 0.04 | Not supported |
| H4.5 | Cont → Conc | 0.10 | −0.03 | −0.027 (Neg) *** | 0.884 | 0.00 | Not supported |
| H4.6 | Conc → Int | 0.14 | 6.88 | 0.78 (Neg) *** | 0.000 * | 0.38 | Not supported |
| H4.7 | Bne → Int | 0.22 | −4.50 | −0.63 (Pos) ** | 0.000 * | 0.22 | Not supported |

Hypothesis adoption criteria: * p < 0.05; ** Pos = hypothesized positive path; *** Neg = hypothesized negative path.
Table 10. ‘Prefer to Personalized advertising’ vs. ‘Prefer to Dynamic Paywall’.

Prefer to personalized advertising:
- Oriented toward completely free news content.
- Expect benefits through customized advertising.
- However, concerned about risks in the privacy collection process; sufficient explanation is therefore needed, from the collection through the use of personal information.

Prefer to dynamic paywall:
- Willing to accept customized news content through small payments.
- User privacy must be thoroughly protected.
- There must be no additional advertisements beyond the news content itself.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Shin, J.W. AI-Driven Privacy Trade-Offs in Digital News Content: Consumer Perception of Personalized Advertising and Dynamic Paywall. Journal. Media 2025, 6, 170. https://doi.org/10.3390/journalmedia6040170
