Affordances of Wartime Collective Action on Facebook
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The narrative and the structure of the theoretical part of the paper are clear and well written. The different pieces of literature are connected well, and it is easy for the reader to follow the argument. However, I would like to see more elaboration on the challenges that collective initiatives face during wartime. In addition, I think that more elaboration on the organization of collective actions in general (whether by individuals or organisations) would add value to the text.
The methodology and the research design also seem suitable, while the interviews conducted offer added value to the paper. However, a broader sample could add more value to the findings of the study (for example, analyzing other social media platforms). Last but not least, more elaboration on how the actualization of social media affordances results in specific behaviors could also add value to the paper.
The results (research findings) are clearly presented, and the relevant correlations are explained in an appropriate manner. However, some typos need to be addressed, such as in lines 367 & 375 ("Error! Reference source not found."). In addition, more elaboration on the empirical conceptualization of engagement and interaction seems to be needed.
In the Discussion-Conclusion section the author claims broad generalizability of the results (for example, to other social media platforms), which is too risky. In addition, more elaboration on the connection of the results with the conceptual framework of affordances and collective action during wartime would add value to the study.
In general, the article seems to be well written and organized regarding the different aspects of the issue concerned. Thus, based on its reading and having assessed its quality, I think that it is suitable for publication with this journal, subject to minor revisions.
Author Response
We would like to thank the reviewer for her/his time and effort. We believe that the reviewer’s comments helped to sharpen and improve the paper. Below we provide our responses to each of the reviewer’s comments.
- However, I would like to see more elaboration on the challenges that collective initiatives face during wartime. In addition, I think that more elaboration on the organization of collective actions in general (whether by individuals or organisations) would add value to the text.
We expanded the section on collective action, adding references with recent research on the challenges of collective initiatives during wartime.
We have also extracted the discussion of individuals and organisations organizing collective action into a separate section (4) and expanded it.
- The methodology and the research design also seem suitable, while the interviews conducted offer added value to the paper. However, a broader sample could add more value to the findings of the study (for example, analyzing other social media platforms).
This is an insightful comment; we acknowledge this as a limitation of the paper and discuss expanding the enquiry to other social platforms in the future-research paragraph.
- Last but not least, more elaboration on how the actualization of social media affordances results in specific behaviors could also add value to the paper.
Based on this suggestion, as well as a similar suggestion from another reviewer, we expanded our argument by structuring it along three more specific hypotheses. We made the corresponding changes to the Results and Discussion sections to elaborate on how the actualization of social media affordances leads to specific behaviors.
- The results (research findings) are clearly presented, and the relevant correlations are explained in an appropriate manner. However, some typos need to be addressed, such as in lines 367 & 375 ("Error! Reference source not found.").
We fixed the problem with reference errors.
- In addition, more elaboration on the empirical conceptualization of engagement and interaction seems to be needed.
We have addressed the issue as suggested by both reviewers. The role of engagement/interaction in the discussion was not clear enough, so we now discuss it explicitly and tie these concepts to our theory and hypotheses (particularly H2).
- In the Discussion-Conclusion section the author claims broad generalizability of the results (for example, to other social media platforms), which is too risky. In addition, more elaboration on the connection of the results with the conceptual framework of affordances and collective action during wartime would add value to the study.
We tempered our claims somewhat to avoid overstatement and added a discussion of the results in terms of the framework proposed at the beginning of the paper. We also ensured that the manuscript avoids inferential or statistical language and instead emphasizes the interpretative nature of the analysis. Given the small number of cases (n = 8) and the limited number of profiles per group, we consider an interpretative approach more appropriate. Accordingly, we revised sections where more inferential claims or generalizations were previously made.
Again, we thank the reviewer for the valuable comments and hope our revision appropriately addresses the criticism.
Reviewer 2 Report
Comments and Suggestions for Authors
To strengthen the scientific contribution, transparency, and reproducibility of the manuscript, several areas would benefit from clearer exposition and additional analysis. First, clarify the role of hypotheses and align the strength of claims with the evidence provided. If you intend to argue that groups “significantly differ,” add small-sample–robust inferential statistics with effect sizes and uncertainty, or revise the language to emphasize descriptive patterns.
The sampling frame should be fully transparent. Please detail the exact search strings and languages, tools, the time window, and explicit inclusion and exclusion rules, ideally accompanied by a concise flow description from profile identification through screening to inclusion. Reframe generalizations as applying to high-visibility profiles unless the sample is broadened.
Your conceptualization of “affordance actualization” would benefit from a sharper separation between latent affordances, platform functions, and observable behavioral indicators. Explain why each indicator is a valid proxy and discuss construct validity risks. Closely related, define the engagement metric precisely and consider better normalization. If impressions are unavailable, normalize by per-post follower counts at the time of posting, and account for confounders such as content and media type, calendar month, and profile heterogeneity with simple controls or fixed effects.
Persistence and editability are difficult to measure when deletions precede data collection. Please acknowledge this limitation and consider feasible alternatives, such as rates of visible edit histories or periodic snapshots that enable simple survival-style summaries. Throughout the empirical sections, figures that anchor the main claims should be repaired and enriched with clear axes, sample sizes, and uncertainty intervals, and it would be helpful to present per-profile panels to avoid ecological fallacies.
The qualitative component is promising and could be integrated more systematically. Describe recruitment, consent, interview duration and language, and the analytic approach, including coding procedures and any reliability checks. A joint display that juxtaposes themes with quantitative patterns would make the triangulation explicit. The manuscript also needs an explicit ethics and risk statement covering IRB or ethics determination (including exemption criteria, if applicable), consent for interviews, handling of sensitive imagery and locations, data security, and measures to mitigate risk to depicted individuals.
For reproducibility, specify what data and code can be shared and under what access conditions. A de-identified post-level dataset or a minimally sufficient subset, profile identifiers, codebooks, and analysis code would substantially enhance transparency; if full sharing is unsafe, provide a clear rationale and alternative access arrangements. In the Discussion, temper capability or causal language - such as claims that organizations are “more apt” - unless supported by additional evidence, and present such points as interpretations or hypotheses for future testing.
Several clarifications would improve clarity and depth. Situate temporal trends against a brief annotated timeline of the conflict to separate exogenous shocks from platform or strategy effects. Add robustness checks, for example excluding early spikes, re-estimating after removing influential profiles, and exploring heterogeneity by media type. Make the “calls” versus “reports” taxonomy explicit with decision rules and treatment of mixed-purpose posts. Report distributions with medians and IQRs where variables are skewed, and document intercoder reliability if any manual coding was performed, or validation steps if automated methods were used.
The literature review would be stronger with additional recent work on wartime digital mobilization, platform governance and moderation, OSINT practices, and parasocial engagement, which will help position your contribution and clarify novelty. Consider translating your findings into a small set of concrete, evidence-based recommendations for organizational and individual initiatives regarding sensitive imagery, reporting cadence, and association practices. A dedicated limitations section should explicitly address selection toward “most successful” profiles, Facebook-specificity, unobserved deletions and edits, and the time-bounded nature of your inference. Finally, maintain consistent terminology separating affordances, platform features, and behaviors, and consider providing supplementary material such as the interview guide, coding codebook, robustness figures and tables, and a concise sampling flow description.
Author Response
We are very grateful to Reviewer 2 for the detailed and thoughtful feedback, which helped to significantly strengthen the manuscript in terms of conceptual clarity, methodological transparency and rigor. We carefully considered every suggestion and implemented extensive revisions throughout the paper. Below we summarize the main changes and rationale.
To begin with, we refined the theoretical structure of the paper by clarifying the role of hypotheses and aligning the claims more closely with the evidence. The manuscript now explicitly presents three hypotheses (H1-H3), addressing differences in affordance use across profile types, the relationship between affordance actualisation and engagement, and the constraining role of platform moderation and algorithmic governance. Corresponding revisions were made across the Results and Discussion sections. Furthermore, we eliminated potentially inferential or causal language, emphasizing instead the interpretative and descriptive nature of the analysis, which, we believe, is most appropriate given the small number of cases (n = 8). We also enhanced methodological transparency by explicitly describing the sampling strategy (our sample covers the complete set of posts published by all selected profiles during the analysed period, rather than a sample derived from search strings), clarified the inclusion criteria, and explained that no posts were excluded within that timeframe. The interview guidelines/questions are included in Appendix 1 to ensure transparency. Data-security considerations, including consent, data deletion, and the sensitive nature of wartime communication, are now discussed explicitly in the Methods section.
Below we respond to each comment in turn.
Comment:
To strengthen the scientific contribution, transparency, and reproducibility of the manuscript, several areas would benefit from clearer exposition and additional analysis. First, clarify the role of hypotheses and align the strength of claims with the evidence provided. If you intend to argue that groups “significantly differ,” add small-sample–robust inferential statistics with effect sizes and uncertainty, or revise the language to emphasize descriptive patterns.
Response:
We adjusted the formulation of the hypotheses, adding H2 and H3 to make the structure of the argument clearer, and updated the Results, Discussion, and Conclusions sections to reflect the change.
We also ensured that the manuscript avoids inferential or statistical language and instead emphasizes the interpretative nature of the analysis. Given the small number of cases (n = 8) and the limited number of profiles per group, we consider an interpretative approach more appropriate. Accordingly, we revised sections where more inferential claims or generalizations were previously made.
Comment:
The sampling frame should be fully transparent. Please detail the exact search strings and languages, tools, the time window, and explicit inclusion and exclusion rules, ideally accompanied by a concise flow description from profile identification through screening to inclusion.
Response:
No search strings were used: the analysed sample comprises the complete set of publications by all the profiles within the listed period. We added this clarification in the corresponding chapter.
Comment:
Your conceptualization of “affordance actualization” would benefit from a sharper separation between latent affordances, platform functions, and observable behavioral indicators. Explain why each indicator is a valid proxy and discuss construct validity risks. Closely related, define the engagement metric precisely and consider better normalization. If impressions are unavailable, normalize by per-post follower counts at the time of posting, and account for confounders such as content and media type, calendar month, and profile heterogeneity with simple controls or fixed effects.
Response:
We added information on the core concepts in the theoretical chapter on affordances, making sure the distinction between them is clear.
We expanded significantly on the engagement and interaction rate so that it fits into the discussion of affordances and their actualisation by users. We added a clarification that the interaction rate is already normalised: it is computed relative to the total number of followers at the given time, so it is not affected by audience size and reflects activity per follower. Furthermore, since we compare relative interaction rates between profiles, time-specific (fixed) effects apply to all profiles simultaneously.
Comment:
Persistence and editability are difficult to measure when deletions precede data collection. Please acknowledge this limitation and consider feasible alternatives, such as rates of visible edit histories or periodic snapshots that enable simple survival-style summaries. Throughout the empirical sections, figures that anchor the main claims should be repaired and enriched with clear axes, sample sizes, and uncertainty intervals, and it would be helpful to present per-profile panels to avoid ecological fallacies.
Response:
You are absolutely right that deletions preceding data collection make persistence and editability trickier to measure; we acknowledge this limitation in the Discussion section. However, the effect of deletions is likely very small, as evidenced by the interviewees' comments on the issue.
We modified and improved the figures to include additional relevant information and make them more readable.
Comment:
The qualitative component is promising and could be integrated more systematically. Describe recruitment, consent, interview duration and language, and the analytic approach, including coding procedures and any reliability checks. A joint display that juxtaposes themes with quantitative patterns would make the triangulation explicit. The manuscript also needs an explicit ethics and risk statement covering IRB or ethics determination (including exemption criteria, if applicable), consent for interviews, handling of sensitive imagery and locations, data security, and measures to mitigate risk to depicted individuals.
Response:
We added more information regarding the qualitative component. The respondents were profile administrators and, in one case, the communications manager of the fund; these were not random respondents (an explanation is added in the Methods section). Information regarding coding and data retention was also added to address the possible concerns.
The key questions of the semi-structured interviews are now provided in Appendix 1 to ensure transparency. However, considering the sensitive topic, we decided against sharing the full texts of the interviews, which were subsequently deleted. No sensitive imagery or other sensitive data were shared with us by the respondents.
Comment:
For reproducibility, specify what data and code can be shared and under what access conditions. A de-identified post-level dataset or a minimally sufficient subset, profile identifiers, codebooks, and analysis code would substantially enhance transparency; if full sharing is unsafe, provide a clear rationale and alternative access arrangements.
Response:
As far as we could ascertain, CrowdTangle does not allow sharing of datasets obtained with the tool without explicit permission from Meta (which owned CrowdTangle and shut it down in 2024). However, using the information found in the paper (profile names, exact period), one can reproduce the sample given access to a similar tool. All publications analysed are public.
Still, we are willing to share the aggregated weekly data to ensure transparency of the research. Alternatively, we can share the dataset with post metadata but the textual content of the publications removed.
Comment:
In the Discussion, temper capability or causal language - such as claims that organizations are “more apt” - unless supported by additional evidence, and present such points as interpretations or hypotheses for future testing.
Response:
We tempered our claims somewhat to avoid overstatement and expanded the discussion of limitations and future research.
Regarding organisations being more apt for wartime CA: this statement is explained earlier in the paragraph, in the discussion of the content of the reports. Official organisations are allowed to purchase equipment (including weapons) that individuals simply cannot buy, which limits individuals' contribution to wartime CA.
Comment:
The literature review would be stronger with additional recent work on wartime digital mobilization, platform governance and moderation, OSINT practices, and parasocial engagement, which will help position your contribution and clarify novelty.
Response:
We expanded some aspects of the literature review as suggested to reflect more recent scholarship, particularly on wartime mobilisation.
Comment:
Consider translating your findings into a small set of concrete, evidence-based recommendations for organizational and individual initiatives regarding sensitive imagery, reporting cadence, and association practices.
Response:
Making recommendations is beyond the scope of this paper; however, we added more explicit "lessons learnt" in the Discussion section and restructured the Discussion and Conclusions along the three more concrete hypotheses.
Comment:
A dedicated limitations section should explicitly address selection toward “most successful” profiles, Facebook-specificity, unobserved deletions and edits, and the time-bounded nature of your inference.
Response:
The discussion of the limitations was added at the end of the Discussion section.
Comment:
Finally, maintain consistent terminology separating affordances, platform features, and behaviors, and consider providing supplementary material such as the interview guide, coding codebook, robustness figures and tables, and a concise sampling flow description.
Response:
We made edits throughout the paper to ensure consistent terminology. We edited the figures, added the interview guide as an appendix, and explained the sampling procedure.
Round 2
Reviewer 2 Report
Comments and Suggestions for Authors
Thank you for the changes - I think there's only some small revisions I can find that could improve your research - I don't mean to be Reviewer 2, but I think they would be helpful:
- Regarding the sampling frame: reconcile the description with the response letter, and document the flow. In the paper you state that “the profiles were found through keyword search and by following links and tags” and that you included all public posts for 8 profiles from 23/02/2022 to 23/11/2022 (n=3,892 posts). This conflicts with the response letter’s statement that there were no search strings and that the sample is the complete set of posts for selected profiles within the period. Please reconcile this: if keyword search was used to identify candidate profiles, list the exact search terms (and languages/locales), the platforms/tools used, and any snowballing rules via tags/links. Add a brief sampling flow (one paragraph or a small figure): identification → screening → inclusion, and justify the “likely identified all profiles fitting criteria” claim. Specify inclusion/exclusion rules (e.g., ≥10k followers at what reference date; public visibility; profile types allowed) and how you treated deactivated profiles (you note one was deactivated and excluded).
- Operationalization of “interaction/engagement rate”: provide the exact formula and normalization choices. The text states that “interaction rate indicates how many users have interacted … over a given month,” with profile means between 0.24% and 5.24%, but the manuscript does not provide a formula (e.g., (reactions+comments+shares)/followers × 100) nor clarify the denominator timing (followers at post time vs. month-end vs. profile mean).
- Figures - repair cross-references and enrich axes/annotation. Several references read “Error! Reference source not found.” around Figures 3–4 and the surrounding text on interaction rates.
- Qualitative component: complete the methods and ethics details. You report 3/8 interviews with administrators/communications staff and include the interview guide in Appendix A. Please add: the recruitment pathway (initial contact mode; inclusion criteria for interviewees); the language used (the guide is translated from Ukrainian; confirm the interview language); duration, and whether interviews were recorded or notes-only; consent procedures (written/oral), any compensation, and whether participants reviewed quotes; and a concise description of coding (deductive/inductive, codebook availability, single-/double-coding, and how disagreements were resolved).
- Insert a short “Ethics Statement” covering IRB/ethics determination (or exemption), consent procedures, data security/retention, and steps taken to mitigate risks to depicted individuals.
If you address the above with brief insertions and figure fixes, the manuscript should be ready for acceptance.
Author Response
Thank you for the comments; please see our responses below.
1. Thank you for this note, we have reworded and clarified this part in the paper.
Indeed, keywords (and snowballing) were used to find profiles with 10,000+ members/followers. After the 8 profiles were identified, the full sample of publications was extracted.
I have moved the sentence about the deleted (and excluded) profile to the paragraph on sampling to make it clearer.
I added a graphical chart showing the selection process (Figure 2).
2. I have clarified this point, providing the exact explanation of how the interaction rate is calculated, including the formula.
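For illustration, the metric can be sketched as follows. This is a minimal sketch assuming the definition the reviewer suggests (total interactions divided by the follower count at posting time, times 100); the exact formula in the paper may differ.

```python
def interaction_rate(reactions: int, comments: int, shares: int,
                     followers: int) -> float:
    """Per-post interaction rate as a percentage of followers.

    Assumed definition (hypothetical, for illustration only):
    (reactions + comments + shares) / followers * 100, with the
    follower count taken at posting time.
    """
    if followers <= 0:
        raise ValueError("follower count must be positive")
    return (reactions + comments + shares) / followers * 100

# Example: 120 reactions, 15 comments, 40 shares, 25,000 followers
rate = interaction_rate(120, 15, 40, 25_000)
print(f"{rate:.2f}%")  # 0.70%
```

Because the denominator is the follower count at posting time, the rate is comparable across profiles of different audience sizes, which is the normalisation point made above.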
3. I replaced the references to figures with plain text, so they should not produce errors anymore. Also updated the charts, as some axis values were not readable in the previous version.
4. Added significantly more information regarding the qualitative component (at the end of the Methodology section).
5. We will provide the IRB note after November 13 as agreed.
I have sent the anonymized version of the informed consent form we used.