Article
Peer-Review Record

Directed Consumer-Generated Content (DCGC) for Social Media Marketing: Analyzing Performance Metrics from a Field Experiment in the Publishing Industry

Systems 2025, 13(2), 124; https://doi.org/10.3390/systems13020124
by Eleni Ntousi, Chris Lazaris *, Pavlina Katiaj and Anastasios Koukopoulos
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 10 January 2025 / Revised: 10 February 2025 / Accepted: 14 February 2025 / Published: 17 February 2025
(This article belongs to the Special Issue Complex Systems for E-Commerce and Business Management)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

See attached document

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Dear Authors,

 

The article addresses a relevant topic, leaning more towards practical rather than theoretical aspects. Overall, the general impression after reading the article is positive. However, certain points stand out that undermine the reliability of the testing results. There are many questions and remarks, particularly regarding the research design, that need to be addressed:

 

1) Lines 84-86:

“The main purpose of this study is to investigate the impact of DCGC on social media advertising content and contribute to the field of digital marketing by locating the most effective strategy.”

The research objective could be rephrased as: “To evaluate the effectiveness of DCGC (Directed Consumer-Generated Content) compared to traditional online advertising in a social network, using the example of an online bookstore.”

 

2) Lines 87-90:

H2: DCGC leads to more conversions compared to traditional social media advertising content.

H3: DCGC leads to higher conversion rates compared to traditional social media advertising content.

Essentially, hypotheses H2 and H3 are equivalent, differing only in the metric used. H2 evaluates conversions in absolute terms, while H3 measures them as a ratio (Conversion Rate). Absolute conversion numbers provide limited information, as they depend on additional factors such as budget. In contrast, the Conversion Rate is a more universal and objective indicator of advertising, website quality, and other factors, as it is independent of investment volume. Considering this, H2 can be removed, retaining only H3.

 

3) Analysis of Advantages for Companies

At the end of section ‘2.1 Meta Social Media Platform,’ your discussion only highlights administrative challenges (barriers) associated with CGC:

Despite their advantages, social media platforms pose challenges for CGC campaigns. For example, Meta’s reliance on user networks can limit the organic spread of CGC. Brands must carefully consider these platform dynamics when designing campaigns, aligning their strategies with each platform’s strengths to maximize engagement and impact.

In fact, the problem is much broader in scope, as the spread of negative information can attract audience attention and decrease potential customers’ trust, even in the absence of objective grounds. Additionally, within the framework of Social Influence Theory, elaborated by Herbert Kelman, numerous questions arise regarding the mechanisms and principles that drive individuals, influenced by public opinion or opinion leaders, to become online shoppers.

Thus, despite the abundance of references in the second section (2. Theoretical Background) of the article, the authors only provide general information about how social media platforms operate. I recommend expanding the Theoretical Background section.

 

4) Research Design

Lines 345-348:

To ensure that different types of campaigns were presented to distinct participants, the conventional campaigns were configured to exclude the audience that had previously been exposed to the DCGC campaigns.

Did the same rule apply in reverse? If users were exposed to a traditional advertising campaign, were they then excluded from the audience for DCGC campaigns?

Moreover, CTR and CR depend on several key factors, including:

-Engagement Level: For example, if the user is in the "Awareness" stage, clicks may be frequent but conversions low, whereas at the "Consideration" stage, the probability of conversion is higher.

-Temporal Factors: For instance, educational book campaigns may experience significant behavioral changes during academic periods.

-Market Release Features: The launch of a bestseller may attract heightened interest, leading to increases in CTR and CR.

To ensure effective marketing, it is crucial to create relevant ads tailored to each audience segment and configure targeting appropriately, such as with Facebook Ads. Ad creatives are typically designed to address potential customers’ questions and build their confidence in making a purchase. However, irrelevant ads can have the opposite effect, undermining trust in the product or service and negatively affecting Click-Through Rate (CTR) and Conversion Rate (CR).

The presented article lacks a description of campaign targeting, which is critically important. It is necessary to understand whether all ad campaigns were configured identically in Facebook Ads.

Additionally, the absence of A/A testing for traditional advertising in the article is a notable gap. A/A testing evaluates natural fluctuations in data, helping to identify how differences in user behavior may manifest without changes in ads. This type of test establishes a baseline level of random variation, enabling accurate interpretation of A/B test results. If an A/A test reveals false differences between identical ad groups, it signals methodological issues or the need to increase the sample size.

For example, an A/A test can be conducted using your existing data by randomly splitting ‘traditional advertising’ into two identical groups (Subsample 1 and Subsample 2) and comparing CTR and CR metrics. If significant differences are observed in this case, it indicates the need to refine test conditions or improve the methodology.
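
By way of illustration, a minimal sketch of such an A/A check, assuming hypothetical per-campaign impression and click counts (the numbers and variable names below are placeholders, not the authors' data), could look as follows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical per-campaign data for the traditional-ad group:
# each row is (impressions, clicks) for one campaign.
traditional = np.array([
    [12000, 180], [8500, 110], [15200, 240], [9800, 130],
    [11000, 150], [7600, 95],  [13400, 205], [10100, 140],
])

# Randomly split the campaigns into two "identical" subsamples (A/A).
idx = rng.permutation(len(traditional))
half = len(traditional) // 2
sub1, sub2 = traditional[idx[:half]], traditional[idx[half:]]

def pooled_ctr(group):
    # Aggregate CTR = total clicks / total impressions.
    impressions, clicks = group[:, 0].sum(), group[:, 1].sum()
    return clicks, impressions, clicks / impressions

c1, n1, ctr1 = pooled_ctr(sub1)
c2, n2, ctr2 = pooled_ctr(sub2)

# Two-proportion z-test: under H0 both subsamples share one underlying CTR.
p_pool = (c1 + c2) / (n1 + n2)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (ctr1 - ctr2) / se
p_value = 2 * stats.norm.sf(abs(z))  # two-tailed: any difference is suspicious

print(f"CTR subsample 1: {ctr1:.4f}, CTR subsample 2: {ctr2:.4f}")
print(f"z = {z:.3f}, p = {p_value:.3f}")
# A small p-value here would flag methodological issues or an undersized sample.
```

The same procedure can be repeated for CR by replacing impressions with clicks and clicks with conversions.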

 

5) Lines 361-368:

The products advertised belonged to the same category, allowing for a straight comparison. Some of the campaigns were firm-created “traditional” paid advertisements. Every campaign included a call-to-action to click on the link provided and purchase the book that was promoted. The campaigns were boosted by the company’s Meta accounts.

The advertised product is not clearly defined. For traditional advertising and DCGC, a specific product (Stock Keeping Unit, SKU) was chosen. However, if different SKUs were used for each format, even within the same category, this creates a significant methodological issue for A/B testing. Different products have varying levels of appeal, price points, popularity, and demand, which can significantly impact key metrics (CTR, conversions, ROAS) independently of the quality of the advertising creative. Thus, it becomes impossible to reliably determine what influenced the ad’s performance—the creative itself or the product characteristics.

Moreover, even within a single category, different products can target different audience segments. For instance, books in the same genre might attract fans of different authors with varying conversion rates. In this case, conversion levels may reflect differences in audience responses rather than the effectiveness of the advertising creative.

 

6) The article needs to clarify whether traditional advertising and DCGC impressions were conducted simultaneously (during the same time period). Currently, there is no information regarding the exact dates or periods of the test and advertising campaigns, which raises questions about the validity of the experiment.

Running an A/B test with different time frames for versions A and B may distort the results, as temporal, external, and other factors (e.g., seasonality, events affecting audience engagement) can influence the groups differently.

To obtain reliable conclusions, it is crucial to collect data simultaneously to eliminate the influence of time-related factors. Please provide detailed information on the test and campaign periods in the article, as this will enhance the transparency and scientific validity of the A/B test.

 

7) Lines 414–417:

Conversions show the number of users that not only click on the content but also complete the desired action, which in this case is a purchase. This metric is a good indicator of how effective the content is when it comes to achieving the desired marketing objectives.

Lines 418–422:

Conversion Rate (CR) measures the percentage of users that complete a desired action after clicking on an advertisement. The desired action could be completing a contact form, subscribing to a newsletter, or making a purchase. As a metric, CR directly links an ad and the desired outcome [45]. It is calculated with the following formula: CR = (Conversions/Clicks) x 100.

Please swap these paragraphs to maintain a logical narrative sequence: start with the text from Lines 418–422 (what conversion is and its types), followed by the text from Lines 414–417 (conversion as a book purchase). This will structure the content from general to specific.

 

8) Lines 382-387:

The three variations are then evaluated through a brief advertising A/B test, conducted with minimal expenditure. The video that performs best in the A/B test is the one the creator will be asked to publish on their platform. This video is promoted on Meta through the customer’s account. Videos that demonstrate a strong Return on Advertising Spend (ROAS) are retained for future advertising, while those that underperform are discontinued from advertising efforts.

How was the interim A/B test conducted? Were statistical criteria used for analysis, or was the evaluation carried out in a naive manner, relying solely on the observed results? It should be noted that when comparing three variations, statistical criteria must be taken into account, along with adjustments for multiple comparisons (e.g., in this case, there are three variations, which differs from a classic A/B test involving only two options). If the interim A/B test was conducted in a naive manner, please specify this.
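
To illustrate the point about statistical criteria and multiple comparisons, here is a minimal sketch of pairwise two-proportion tests with a Bonferroni adjustment across the three variations; the impression and click counts are hypothetical placeholders, not figures from the study:

```python
from itertools import combinations
from math import sqrt
from scipy.stats import norm

# Hypothetical interim results for the three video variations.
variations = {
    "video_A": {"impressions": 5000, "clicks": 60},
    "video_B": {"impressions": 5100, "clicks": 82},
    "video_C": {"impressions": 4900, "clicks": 55},
}

alpha = 0.05
pairs = list(combinations(variations, 2))
alpha_adj = alpha / len(pairs)  # Bonferroni: three pairwise comparisons

for a, b in pairs:
    n1, c1 = variations[a]["impressions"], variations[a]["clicks"]
    n2, c2 = variations[b]["impressions"], variations[b]["clicks"]
    p1, p2 = c1 / n1, c2 / n2
    # Pooled two-proportion z-test for CTR difference between the pair.
    p_pool = (c1 + c2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * norm.sf(abs(z))
    verdict = "significant" if p_value < alpha_adj else "not significant"
    print(f"{a} vs {b}: z = {z:.2f}, p = {p_value:.4f} "
          f"({verdict} at adjusted alpha = {alpha_adj:.4f})")
```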

 

9) Lines 388-391:

It is essential to recognize that creators receive compensation for their services; however, the remuneration is relatively modest due to their limited online presence. As a result, the pricing for this type of content is significantly lower, particularly when juxtaposed with similar collaborations involving well-known influencers.

In this context, the key factors are the cost of products provided free of charge to content creators and the cost of their services. Readers of your article need not only to compare these expenses with those for well-known creators but also to understand the share these costs occupy within the overall advertising budget when using the DCGC method. Therefore, we strongly recommend providing a detailed explanation of the costs associated with engaging content creators and accounting for the cost of products provided for promotion to ensure a comprehensive understanding of their impact on the advertising budget and, consequently, on ROAS.

 

10) It only became clear from the Discussion section (Line 507) that N=67 refers to the number of advertising campaigns, as Table 1, 'Descriptive Statistics for Meta Campaigns', lacks sufficient detail. For instance, it is unclear how the CTR value can exceed 1.

I recommend enhancing the data presentation by providing a detailed breakdown for each advertising campaign, including the following metrics: Clicks, Impressions, CTR, Conversions, Conversion Rate, and ROAS. This will make the information more transparent and easier to analyze.

Your dataset follows a binomial distribution; in comment 11 below I argue why this is the case:

 

11) Lines 434–440:

"As the data were not normally distributed, non-parametric independent samples tests, and more specifically the Mann-Whitney U Test, also known as the Wilcoxon Rank-Sum Test, were employed to compare the means of the chosen performance metrics and ROAS between the two groups and consequently help in determining whether the differences observed have statistical significance. This technique was chosen as it provides a simple and effective way for comparing two groups on a single metric when the data are not normally distributed."

Your data aligns with a binomial distribution, especially when analyzing conversions (where 1 represents a successful conversion and 0 indicates its absence). For hypothesis testing, a classical Z-test can be used, eliminating the need for the Mann-Whitney rank test. In your paper, the Mann-Whitney test results were transformed into Z-statistics, which simplifies interpretation and enables the calculation of p-values. However, in A/B testing, relying on the binomial nature of the data allows ranks to be forgone altogether, making the analysis more sensitive to group differences.

Z crit = −1.645 at alpha = 0.05 (in comment 13, Z crit is explained as it pertains to your one-tailed hypotheses).

 

The CTR for each advertising campaign can be considered as the outcome of a binomial distribution. According to the Central Limit Theorem, if N is sufficiently large, the binomial distribution can be approximated by a normal distribution. Here, N, the number of impressions, is clearly greater than 100.

Instead of calculating the CTR for each campaign separately (N=67 as previously), you can aggregate the data for all impressions and clicks for each group (traditional ads and DCGC ads—resulting in two aggregates):

N traditional is total impressions for traditional ads.

N DCGC is total impressions for DCGC ads, not the number (67) of campaigns as previously considered.

The mean value for traditional ads is CTR_traditional = total clicks on all traditional ads / total impressions for traditional ads.

The mean value for DCGC ads is CTR_DCGC = total clicks on DCGC ads / total impressions of DCGC ads.

(A code sketch illustrating this aggregation and the corresponding two-proportion Z-test is given at the end of this comment.)

 

Similarly, calculate CR (Conversion Rate), where:

N represents the number of clicks on the ads:

N traditional is total clicks on traditional ads.

N DCGC is total clicks on DCGC ads.

For more details on binomial distributions, refer to:

https://en.wikipedia.org/wiki/Binomial_distribution

https://en.wikipedia.org/wiki/Z-test#Comparing_the_proportions_of_two_binomials 
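
As a concrete, non-authoritative sketch of the aggregation and two-proportion Z-test described above (all totals below are placeholder values, not the authors' figures):

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z(success_a, n_a, success_b, n_b):
    # Z-test comparing two binomial proportions with a pooled standard error.
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_a - p_b) / se

# Placeholder aggregates: total impressions, clicks and conversions
# per group, summed over all campaigns in that group.
trad = {"impressions": 250_000, "clicks": 3_100, "conversions": 95}
dcgc = {"impressions": 240_000, "clicks": 2_900, "conversions": 160}

# CTR comparison: N is the number of impressions in each group.
ctr_t, ctr_d, z_ctr = two_proportion_z(
    trad["clicks"], trad["impressions"], dcgc["clicks"], dcgc["impressions"])

# CR comparison: N is the number of clicks in each group.
cr_t, cr_d, z_cr = two_proportion_z(
    trad["conversions"], trad["clicks"], dcgc["conversions"], dcgc["clicks"])

# Left-tailed test (traditional < DCGC), Z crit = -1.645 at alpha = 0.05.
print(f"CTR: {ctr_t:.4f} vs {ctr_d:.4f}, z = {z_ctr:.2f}, "
      f"one-tailed p = {norm.cdf(z_ctr):.4f}")
print(f"CR : {cr_t:.4f} vs {cr_d:.4f}, z = {z_cr:.2f}, "
      f"one-tailed p = {norm.cdf(z_cr):.4f}")
```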

 

12) Lines 460-461: The phrase “(sig = 0.00 < p = 0.05)” should instead be written as “(p < 0.001)”, because the p-value is so small that it falls below 0.001; this is the standard way of presenting such values in scientific publications.

 

13) Your hypotheses include formulations emphasizing "higher" and "more," indicating a directional comparison, i.e., the hypotheses suggest that A < B. In this case, the value of Asymp. Sig. (one-tailed, specifically left-tailed) should be used, with Z crit = -1.645 at alpha = 0.05 (meaning the probability that our mean falls below the confidence interval of DCGC). This is because the specific direction of differences is being tested. However, in your Table 3, Asymp. Sig. (2-tailed) is reported, which tests the hypothesis that A ≠ B (i.e., the absence of equality without considering direction). This discrepancy needs to be corrected for accurate interpretation of the results in the context of your hypotheses.
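
To make the relationship between the two significance values explicit, here is a small sketch (the Z value is hypothetical, not one reported in the article):

```python
from scipy.stats import norm

# Hypothetical standardized statistic, e.g. the Z value reported by SPSS
# alongside the Mann-Whitney U result.
z = -2.10

p_two_tailed = 2 * norm.sf(abs(z))  # tests A != B, direction ignored
p_left_tailed = norm.cdf(z)         # tests A < B, matching "higher"/"more" hypotheses

print(f"two-tailed p  = {p_two_tailed:.4f}")
print(f"left-tailed p = {p_left_tailed:.4f}")  # half the two-tailed value when z < 0
print(f"reject H0 (one-tailed): {z < norm.ppf(0.05)}")  # Z crit = -1.645 at alpha = 0.05
```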

 

14) Lines 553-554: “The results of this research show that what works on one platform does not necessarily perform the same way on all of them.”

The authors did not explore different platforms, or, if they did, no conclusions regarding them were drawn. This sentence appears irrelevant and out of context within the article.

 

15) Comment on the section Discussions and Theoretical Implications:

I recommend delving deeper into consumer behavior in the context of trust in online advertising and identifying additional relevant studies. This will enrich the theoretical analysis, strengthen the academic foundation, and lead to improvements in the Discussions section and subsection 6.1 Theoretical Implications. It is worth noting that, in its current form, the article has a predominantly practical focus, which might limit its appeal to the academic community. Incorporating theoretical concepts and models that support your conclusions will make the paper more substantial and engaging for readers, ensuring a balance between theory and practice.

The authors may find it valuable to explore explanations through the lens of Social Influence Theory by Herbert Kelman and his followers. For example, DCGC campaigns incorporating elements of social proof, such as reviews or recommendations from opinion leaders, can potentially drive a high click-through rate (CTR).

Additionally, the Theory of Planned Behavior, developed by Icek Ajzen, might provide insights into the levels of engagement users go through online before becoming customers and the role played by DCGC campaigns in this process.

 

Although this review might appear extensive, dear authors, I assure you that you will be able to implement the necessary changes.

 

Best regards,

I hope the Authors succeed in improving the article before publication,

 

Reviewer

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

I am privileged to review this promising research article. Your methodology and potential research implications are important for the analysis of DCGC application in social media marketing. However, the following sections need to be strengthened for potential publication:

· Introduction: The manuscript addresses an emerging topic—Directed Consumer-Generated Content (DCGC)—and its application in social media marketing. This is a novel and relevant study as it provides empirical evidence regarding DCGC's effectiveness compared to traditional social media advertising. The focus on specific performance metrics, such as click-through rate (CTR), conversion rate, and return on ad spend (ROAS), offers valuable insights for both academia and practitioners. However, while the manuscript claims to investigate an underexplored area, it lacks sufficient engagement with the latest literature on similar topics.

· Suggestions: Addressing these gaps would further contextualize the novelty of the DCGC concept.

· Methodology: The sample size (67 campaigns) is relatively small, which limits the generalizability of results. The study focuses exclusively on the publishing industry, which narrows the applicability of its conclusions. Qualitative elements of DCGC content are not analyzed, leaving questions about what specific characteristics drive its success.

· Suggestions: Acknowledge and discuss the limitations of sample size and industry focus more explicitly. Propose future research avenues to explore DCGC's impact across diverse industries and larger datasets. Incorporate qualitative analysis of DCGC content to identify specific features that enhance its effectiveness.

· Results: The results provide strong evidence supporting three of the four hypotheses, demonstrating that DCGC outperforms traditional campaigns in conversions, conversion rate, and ROAS. However, traditional campaigns achieve higher CTR, which is attributed to platform-specific algorithms favoring polished content.

· Suggestions: Provide a more detailed analysis of CTR results and their implications for DCGC's overall effectiveness. Explore potential trade-offs between CTR and other metrics, such as ROAS, to offer a more nuanced perspective.

· Conclusions: The manuscript highlights the practical utility of DCGC as a cost-effective strategy for achieving high ROAS and conversions. It also contributes to academic discussions on consumer-generated content and social media marketing. The practical recommendations are generic and do not account for industry-specific challenges. Theoretical implications are not sufficiently detailed. The manuscript acknowledges certain limitations, such as its focus on the Greek market and the publishing industry. However, it does not adequately address the implications of these limitations for the study's findings.

· Suggestions: Provide more specific recommendations for marketers, such as how to design effective DCGC campaigns. Discuss how the findings contribute to existing theories of consumer trust, engagement, and digital advertising strategies. Emphasize how cultural and industry-specific factors may influence the results. Propose future research to test the generalizability of findings in different cultural and industrial contexts. Discuss, briefly, how advancements in social media algorithms and consumer behavior trends might impact the relevance of DCGC strategies.

 

I hope my suggestions aid in strengthening your work and helping it resonate.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The article has been corrected in accordance with the recommendations and comments.

 

Lines 775-779 in Revised article: “Another limitation of this study is the absence of an A/A test to establish baseline variance in CTR and CR. Future research should consider conducting an A/A test by randomly dividing traditional ad campaigns into two identical subsamples to measure natural fluctuations in user behavior. This would help identify potential biases and improve the methodological robustness of A/B testing comparisons.”

The Authors may, at their discretion, either keep or remove the highlighted paragraph, as Hypothesis 1 is not confirmed. In this case, CTR does not differ between Simple and DCGC advertising campaigns, making the A/A test unnecessary. An A/A test would be appropriate if Hypothesis 1 (revised article) were confirmed.

Regarding the comparison of conversions, the results clearly show that Simple group campaigns are significantly less effective in terms of both ROI and website conversions.

Nevertheless, an A/A test can still be conducted by comparing two random subgroups within the Simple group and separately comparing two random subgroups within the DCGC group. Random subgroups in this context refer not to individual campaigns but to clusters of several randomly selected campaigns. Thus, several random campaigns from the Simple group can be compared with each other, and a similar comparison can be made for several random campaigns from the DCGC group.
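
A minimal sketch of this within-group A/A check, assuming hypothetical per-campaign CTR values for the Simple group (the numbers are placeholders, not the authors' data):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)

# Hypothetical per-campaign CTR values for the Simple group.
simple_ctr = np.array([0.012, 0.015, 0.011, 0.017, 0.013, 0.014,
                       0.016, 0.010, 0.015, 0.012, 0.013, 0.018])

# Randomly assign campaigns to two clusters and compare them (A/A check).
idx = rng.permutation(len(simple_ctr))
half = len(simple_ctr) // 2
cluster1, cluster2 = simple_ctr[idx[:half]], simple_ctr[idx[half:]]

stat, p_value = mannwhitneyu(cluster1, cluster2, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
# Repeat the same check within the DCGC group; a significant difference
# between two random clusters would point to unstable test conditions.
```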

However, my recommendation to conduct A/A testing is not mandatory.

 

Best Regards,

Reviewer

 

Author Response

We are grateful for the time that you have put into reviewing and considering our manuscript, as well as the insightful comments you have provided. Based on your final recommendations, we removed the related paragraph regarding the A/A test.

Reviewer 3 Report

Comments and Suggestions for Authors

The authors have implemented my proposed revisions. I suggest the acceptance of the paper.

Author Response

We are grateful for the time that you have put into reviewing and considering our manuscript, as well as the insightful comments you have provided. We are pleased that you accepted this publication.
