Article
Peer-Review Record

Using Social Media to Monitor Conflict-Related Migration: A Review of Implications for A.I. Forecasting

Soc. Sci. 2022, 11(9), 395; https://doi.org/10.3390/socsci11090395
by Hamid Akin Unver
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 19 May 2022 / Revised: 27 July 2022 / Accepted: 22 August 2022 / Published: 1 September 2022
(This article belongs to the Special Issue The Promise and Perils of Big Data and AI for Migration)

Round 1

Reviewer 1 Report

This article is an interesting, easily understandable introduction to the theme of algorithmic prediction of migration trends related to the specific type of migration that has been triggered by organised violence. It seems like a perfect fit for the special issue to which it aims to contribute. At least for a reader with limited knowledge in the field of the use of text as evidence harvested by means of machine learning, this article offers a lot of new knowledge. It does, however, also have some weaknesses that need to be fixed before the article can be published. I have two more general issues that need to be tackled before this can be published. Additionally, I have many smaller issues that I think should be considered, none of which are, however, serious enough to prevent publication.

Main issues:

1.       Review Article, or a discussion of the existing literature and debate, rather than a Research Article: This study is more of a review or a discussion of existing literature than a research article. Its focus is not sufficiently tight to produce new information based on evidence material. I think it should be introduced as a review article, rather than leaving the reader puzzling over the real intentions of the article. As a review article it is very interesting and educational.

2.       Need to define the focus better: The introduction does not really fully explain what this article is about. It is clear that it deals with the use of social-media-based big data in relation to forced migration caused by conflict (I suspect the focus is broader, though, and covers all kinds of organised violence rather than merely conflict violence). But the title of the paper already reveals confusion about whether this paper is about monitoring or predicting. After reading the full article I have a relatively clear idea of what it was about, but some aspects of the purpose of the article still remain unclear. The abstract suggests that algorithm-based predicting is focused not just on migration, but also on violence (“algorithmic systems monitor and predict violence and forced migration”). But the article does not (fortunately) try to go into the prediction of violence, which is another debate altogether. The introduction needs to define the focus. Then the article needs to be cleansed of sentences that deviate from that clearly defined focus (such as sentences that suggest that this article also looks at the algorithmic prediction of violence).

 

Smaller issues:

1.       “triggered by man-made anthropogenic disasters,” (is the word “anthropogenic” needed after “man-made”?)

2.       international or “sub-state conflicts” (sub-state conflict is normally referred to as intrastate conflict in peace and conflict studies literature)

3.       what does “Frequent exchanges of territory as a result of conflict” mean if examples of this are Myanmar, DR Congo, Somalia, Central African Republic, and Afghanistan, where the conflict has been about governance rather than territory? I can understand the expression in Ukraine, where Crimea has exchanged de facto sovereign authority, and perhaps in Iraq, where the Kurdish territory has gained de facto independence, but not in places like Afghanistan where the entire territory has remained under the sovereignty of changing elite groups.

4.       On the first page the main causes of forced mass migration are listed without any references to evidence or studies that substantiate the claims. References need to be added.

5.       On page 2 the paper says: “From the need to establish more detailed and robust mechanisms to manage forced migration and prevent violence, arose event datasets (Lubeck, Geppert, and Nienartowicz 2003; Donnay et al. 2019). These datasets initially logged violence in binary forms, and in a dyadic fashion, but more recently a number of highly granular and multi-directional datasets have begun appearing in scientific space (Weidmann 2013).” It strikes me that “more recently” refers to a publication in 2013, while the earlier research publication is from 2019?

6.       On pages 2–3 the ms focuses on the problems and biases of media-based events data. The reference to (Weidmann 2016) refers to the Herman & Chomsky Propaganda Model, if I remember correctly, but when talking about biases of media-based events data, one should mention that studies based on the Propaganda Model reveal a dominance in the international media of US-based/owned outlets, which tend to focus more on events of violence that result from the actions of US enemies.

 

Author Response

Reviewer#1

 

  • Review Article, or a discussion of the existing literature and debate, rather than a Research Article: This study is more of a review or a discussion of existing literature than a research article. Its focus is not sufficiently tight to produce new information based on evidence material. I think it should be introduced as a review article, rather than leaving the reader puzzling over the real intentions of the article. As a review article it is very interesting and educational.

 

Response: This is a fair point; the title, abstract, and relevant claims in the introduction section of the article have been revised to better portray the review-oriented aims of the article.

 

  • Need to define the focus better: The introduction does not really fully explain what this article is about. It is clear that it deals with the use of social-media-based big data in relation to forced migration caused by conflict (I suspect the focus is broader, though, and covers all kinds of organised violence rather than merely conflict violence).

 

Response: The first part of the article is intended to prepare the reader on several fronts: how researchers, scholars, and policymakers tried to get accurate data from conflict zones in the past, how this information-seeking behavior has changed with the invention of new communication technologies, and how good or limited these approaches were. This is necessary because I try to address the most immediate reader confusion: ‘why would we even try to use social media data as emergency information?’. In the introduction I am trying to set the scene in terms of a) what the relationship between conflict violence and migration is, b) how policymakers and officials developed their information extraction techniques for high-risk areas, c) why conflict event datasets emerged, and d) what the limitations of past event data creation techniques were, such that we ended up becoming interested in ‘social media as event data’ (the topic of this paper). In the first phases of this paper, I had the chance to present it at a number of workshops and really had to defend the case for using social media as a form of ‘crisis information’. The introductory section works to accomplish that purpose.

 

 

  • But the title of the paper already reveals confusion about whether this paper is about monitoring or predicting. After reading the full article I have a relatively clear idea of what it was about, but some aspects of the purpose of the article still remain unclear. The abstract suggests that algorithm-based predicting is focused not just on migration, but also on violence (“algorithmic systems monitor and predict violence and forced migration”). But the article does not (fortunately) try to go into the prediction of violence, which is another debate altogether. The introduction needs to define the focus. Then the article needs to be cleansed of sentences that deviate from that clearly defined focus (such as sentences that suggest that this article also looks at the algorithmic prediction of violence).

 

Response: This is a fair point, although some of this framing is needed since I focus particularly on conflict-induced migration. The paper’s main thought process goes as follows:

 

  1. Some large-scale migration flows happen as a result of organized violence.
  2. In the past, there has been significant work on predicting violence as a proxy for predicting large-scale migration.
  3. Past attempts at this forecasting used various data sources to build models or engage in prediction work.
  4. There has usually been a trade-off between the reliability and the size of information.
  5. Social media data (with all its limitations) sometimes optimizes this trade-off.
  6. Yet there are a number of ethical problems associated with using social media as a data point (which is the main focus of the rest of the paper).

 

It appears that, in trying to build that bridge, I confused both reviewers of the article, so I have revised the introduction, title, and abstract of the paper to build this connection better.

 

Smaller issues:

  1. “triggered by man-made anthropogenic disasters,” (is the word “anthropogenic” needed after “man-made”)

Fixed as just ‘anthropogenic’.

  2. international or “sub-state conflicts” (sub-state conflict is normally referred to as intrastate conflict in peace and conflict studies literature)

Fixed as ‘intrastate’

  3. what does “Frequent exchanges of territory as a result of conflict” mean if examples of this are Myanmar, DR Congo, Somalia, Central African Republic, and Afghanistan, where the conflict has been about governance rather than territory? I can understand the expression in Ukraine, where Crimea has exchanged de facto sovereign authority, and perhaps in Iraq, where the Kurdish territory has gained de facto independence, but not in places like Afghanistan where the entire territory has remained under the sovereignty of changing elite groups.

I think territorial contestation between both states and non-state actors as well as between non-state actors is still an important dynamic of modern organized violence. Even when we consider battle for governance, territorial competition still remains a key theme. Perhaps not as significant as in Russia-Ukraine case, but competition over control of towns, neighbourhoods, tribal areas is quite significant in other cases. It is also important to note that territorial competition may not necessarily be a binary dynamic: there are degrees of control and in most of the cases mentioned in the article, there is generally the case of ‘hybrid’ control – i.e. state still controls municipality services but a militia group may be running the security there.

  4. On the first page the main causes of forced mass migration are listed without any references to evidence or studies that substantiate the claims. References need to be added.

Added relevant references

  5. On page 2 the paper says: “From the need to establish more detailed and robust mechanisms to manage forced migration and prevent violence, arose event datasets (Lubeck, Geppert, and Nienartowicz 2003; Donnay et al. 2019). These datasets initially logged violence in binary forms, and in a dyadic fashion, but more recently a number of highly granular and multi-directional datasets have begun appearing in scientific space (Weidmann 2013).” It strikes me that “more recently” refers to a publication in 2013, while the earlier research publication is from 2019?

Revised by fixing both the chronology and the scope of related articles.

  6. On pages 2–3 the ms focuses on the problems and biases of media-based events data. The reference to (Weidmann 2016) refers to the Herman & Chomsky Propaganda Model, if I remember correctly, but when talking about biases of media-based events data, one should mention that studies based on the Propaganda Model reveal a dominance in the international media of US-based/owned outlets, which tend to focus more on events of violence that result from the actions of US enemies.

This is an important point, which has been added to the relevant section (page 3).

Reviewer 2 Report

 

This is a well-written and informative paper on an important and urgent issue: the use of open-source intelligence in migration forecasting. The paper outlines the uses of social media for purposes of predicting future migration flows, and the flaws in the technologies currently used. The first part of the paper does an excellent job of situating the problem in the literature and in current debates, and sets out the question of how social media monitoring can be used in migration forecasting with respect to forced migration flows.

In the course of the paper, the argument becomes somewhat less clear, as it moves from a migration-specific analysis informed by the literature on AI technologies and data analytics in relation to migration flows, to a discussion that blends into a consideration of established concerns in AI ethics: discrimination, bias and anonymisation. These rapidly become divorced from the migration-specific harms and human rights issues identified by the literature cited in the introduction/problem statement sections, which is problematic because migration is a high-stakes and highly contested activity with regard to human rights and related technology ethics and law, and a deep engagement with the literature is required in order to avoid replicating policy discourses and aims without sufficient scrutiny.

The paper promises a consideration of the ethics of using social media to predict migration, but ends up offering a generic take on AI ethics which does not take into account the specific claims brought up by researchers such as Molnar or Achiume about the ways in which data technologies and AI create new vulnerabilities for migrants as well as reproducing existing ones. 

Specifically, it would be good to give some more attention in the paper to the analysis of the kinds of harm that can occur through the migration prediction methods outlined here. The paper deals with possibilities for discrimination and harm without digging into what these mean in the context of forced (and mixed) migration flows: should legislators and regulators be interested in preventing harm to migrants and refugees per se, in which case they might want to look at whether the use of social media in forecasting reinforces other practices which are discriminatory and harmful? Or should they confine their attention to technologically mediated harms that can be remedied through changes to the technological applications used? The paper goes firmly in the direction of the latter, but based on literature which is highly critical of this approach and which argues the former.

The notion of discrimination is a good one to think through in this context: if a region's migration policy is securitised to the point where it treats all migrants from a particular direction as a potential security threat (as Molnar and others argue), does this lead to a problem of discrimination against migrants, or to a problem of discrimination according to particular characteristics within that larger category? This matters for the logic of the paper. If the former, then the political context of the algorithmic application matters as well as the way in which the algorithm works. If the latter, then the approach still requires a political dimension because 'accuracy' on the part of a model will render discrimination more effective rather than blunting it.

This leads to potentially different concerns about harm from those that align with current AI ethics thinking, as reproduced in the paper. For instance, anonymisation is not a relevant concern if the harm envisaged is on the collective level: anonymised or identifiable, the application is directed toward a particular flow of migrants, and if it is treating them unfairly, it does so on the collective as well as the individual level.

This kind of harm is not in line with a discussion of discrimination and bias as it is conventionally found in the ethics literature. The application of these algorithmic techniques may be seen as discriminating against those deserving of protection as refugees in a more basic way, by creating a generalised judgement that people are heading in a certain direction and therefore certain precautions should be put in place for making sure migrants are kept away from EU shores (as we see historically happening with Frontex, and currently being challenged). This would be less a question of discrimination that distinguishes impermissibly between groups on the basis of presumed origin or nationality, and more one of discrimination against migrants per se, as deserving different rights from EU citizens (or not deserving the right to claim asylum as set out under international law).

For these reasons, we might need a more nuanced notion of ‘flawed outcomes’ and ‘harm against refugees’ that can take in the problem of harm on the collective level as well as harm that can be claimed individually. In an application which becomes part of a larger policy-led assemblage designed to prevent refugees from reaching their destination and claiming asylum, as has been demonstrated in relation to Frontex's activities over the last decade (https://www.theguardian.com/global-development/2021/jan/19/eu-border-force-head-fabrice-leggeri-faces-calls-to-quit-over-allegations-he-misled-meps), is a 'flawed outcome' one which does this in a manner that deviates from the application’s design requirements, or is it one that results from the application however it functions because it is designed to fulfil an illegal policy?

These questions go to the larger problem of what an ‘accurate’ model looks like with regard to migration forecasting, and whether the notion of accuracy is too political to be applied in the form in which it can be articulated in computer science. It would be good to relate this point in section 3 back to the discussion of accuracy and politics in the introduction.

Similarly, the arguments about explainability and transparency require further explanation and analysis. What would it mean for a model to be transparent to its subjects, given that the analysis it offers is based on remotely sourced data where the data subject never knows that this is taking place? (see EDPS source, cited below). To whom should such a model be transparent and explainable in order to be ethical? And if it is only transparent and explainable to the authorities who commissioned it and the researchers who use its outputs, how does this relate to the interests of data subjects, particularly in a context where serious legal challenges are underway against those executing European border policy for violating the right to seek asylum?

The point about fairness is another issue where further analysis would be useful. Is the issue that the outputs of a model designed to protect the EU from migrant flows might be unfair to refugees and asylum-seekers, or that a model is not distributing its computational power evenly across input languages? These seem very different concerns, one relating to a human-rights-based definition of fairness, and the other relating to statistical theory. It would be useful to disaggregate these concerns in the analysis.

In the end the good points about the politics of accuracy made in the introduction to the paper get lost in a discussion that relies on classic interpretations of problems in AI ethics, and which seems insufficiently adapted to the extreme stakes of migrant and refugee flow prediction. The second part of the paper requires more thought in order to integrate it with the concerns named in the first, otherwise it remains an exercise in generic AI ethics divorced from the human rights and justice concerns that surround the use of such systems in relation to migration.

One important issue to take into account is the regulatory response from EDPS on exactly this practice (https://edps.europa.eu/data-protection/our-work/publications/consultations/social-media-monitoring-reports_en) from 2019. What does this response and the issues brought up in it, specifically those based on arguments about group discrimination (see section 2 of the EDPS letter), mean for the analysis in this paper?

The conclusion of the paper seems at odds with the legal evidence: EDPS concluded that social media monitoring in the context of predicting migration flows is not legal, at least in relation to EU law. It would be good to consider this conclusion’s implications for the paper’s final assessment: that ‘social media data can be used ethically and cleanly in future A.I. migration and conflict monitoring tasks’. If it cannot also be used legally, this would seem to be an important addition to this conclusion, and throws doubt, at the very least, on the validity of this conclusion.

 

Author Response

Reviewer #2

 

2.1 These rapidly become divorced from the migration-specific harms and human rights issues identified by the literature cited in the introduction/problem statement sections, which is problematic because migration is a high-stakes and highly contested activity with regard to human rights and related technology ethics and law, and a deep engagement with the literature is required in order to avoid replicating policy discourses and aims without sufficient scrutiny.

RESPONSE: The paper is revised in a way that incorporates a more robust inclusion of the relevant literature in line with the reviewer's comments below.

2.2 The paper promises a consideration of the ethics of using social media to predict migration, but ends up offering a generic take on AI ethics which does not take into account the specific claims brought up by researchers such as Molnar or Achiume about the ways in which data technologies and AI create new vulnerabilities for migrants as well as reproducing existing ones.

RESPONSE: This is a fair point. In a bid to provide an overview of the streams of argument on the more technical aspects of the matter, I realize that some case-specific illustrations were missing. This was because the paper comes from a more technical origin point: i.e., how can we quantify migration and violence-related events and use them to predict migration flows? In line with the reviewer’s suggestions, I have added migration-related discussion and literature. Where I have added them is specified in my responses to the points below, but my apologies for not digging even further into these, as the paper is at its word limit at the moment.

2.3 Specifically, it would be good to give some more attention in the paper to the analysis of the kinds of harm that can occur through the migration prediction methods outlined here. The paper deals with possibilities for discrimination and harm without digging into what these mean in the context of forced (and mixed) migration flows: should legislators and regulators be interested in preventing harm to migrants and refugees per se, in which case they might want to look at whether the use of social media in forecasting reinforces other practices which are discriminatory and harmful? Or should they confine their attention to technologically mediated harms that can be remedied through changes to the technological applications used? The paper goes firmly in the direction of the latter, but based on literature which is highly critical of this approach and which argues the former.

RESPONSE: This point is clarified in the third section by revising and sharpening the main argument. The reviewer correctly points out that the main trajectory of this paper is the latter: ‘Or should they confine their attention to technologically mediated harms that can be remedied through changes to the technological applications used?’. As the paper discusses later on, this, in and of itself, is not a complete solution, but it is still worth discussing and exploring as a pathway, keeping in mind the objections in the literature against it. Hence the paper’s reliance on literature that is critical of this very solution. The origin point of the paper is that states and border agencies will inevitably be drawn into greater technological entanglements, and that it is more realistic to render these technologies less damaging to migrants than to advocate the abolition of those technologies (which is not a realistic position).

2.3 (cont.) The notion of discrimination is a good one to think through in this context: if a region's migration policy is securitised to the point where it treats all migrants from a particular direction as a potential security threat (as Molnar and others argue), does this lead to a problem of discrimination against migrants, or to a problem of discrimination according to particular characteristics within that larger category? This matters for the logic of the paper. If the former, then the political context of the algorithmic application matters as well as the way in which the algorithm works. If the latter, then the approach still requires a political dimension because 'accuracy' on the part of a model will render discrimination more effective rather than blunting it.

RESPONSE: This would stretch the scope of the paper a bit too much, because a country’s securitization motives are driven by mechanisms that are usually independent of new technologies. Discrimination in this context can be ‘non-technological’, and mixing this debate up with the forecasting/tech debate would have to be the topic of another paper.

2.4 This leads to potentially different concerns about harm from those that align with current AI ethics thinking, as reproduced in the paper. For instance, anonymisation is not a relevant concern if the harm envisaged is on the collective level: anonymised or identifiable, the application is directed toward a particular flow of migrants, and if it is treating them unfairly, it does so on the collective as well as the individual level. This kind of harm is not in line with a discussion of discrimination and bias as it is conventionally found in the ethics literature. The application of these algorithmic techniques may be seen as discriminating against those deserving of protection as refugees in a more basic way, by creating a generalised judgement that people are heading in a certain direction and therefore certain precautions should be put in place for making sure migrants are kept away from EU shores (as we see historically happening with Frontex, and currently being challenged). This would be less a question of discrimination that distinguishes impermissibly between groups on the basis of presumed origin or nationality, and more one of discrimination against migrants per se, as deserving different rights from EU citizens (or not deserving the right to claim asylum as set out under international law). For these reasons, we might need a more nuanced notion of ‘flawed outcomes’ and ‘harm against refugees’ that can take in the problem of harm on the collective level as well as harm that can be claimed individually. 
In an application which becomes part of a larger policy-led assemblage designed to prevent refugees from reaching their destination and claiming asylum, as has been demonstrated in relation to Frontex's activities over the last decade (https://www.theguardian.com/global-development/2021/jan/19/eu-border-force-head-fabrice-leggeri-faces-calls-to-quit-over-allegations-he-misled-meps), is a 'flawed outcome' one which does this in a manner that deviates from the application’s design requirements, or is it one that results from the application however it functions because it is designed to fulfil an illegal policy? These questions go to the larger problem of what an ‘accurate’ model looks like with regard to migration forecasting, and whether the notion of accuracy is too political to be applied in the form in which it can be articulated in computer science. It would be good to relate this point in section 3 back to the discussion of accuracy and politics in the introduction.

RESPONSE: These are excellent suggestions, although the fourth section is a better place to incorporate this point, as potential remedies are discussed there. This serves as an important counter-argument to the first proposed solution and has been added right after that first suggestion to moderate its claims.

2.5 Similarly, the arguments about explainability and transparency require further explanation and analysis. What would it mean for a model to be transparent to its subjects, given that the analysis it offers is based on remotely sourced data where the data subject never knows that this is taking place? (see EDPS source, cited below). To whom should such a model be transparent and explainable in order to be ethical? And if it is only transparent and explainable to the authorities who commissioned it and the researchers who use its outputs, how does this relate to the interests of data subjects, particularly in a context where serious legal challenges are underway against those executing European border policy for violating the right to seek asylum? The point about fairness is another issue where further analysis would be useful. Is the issue that the outputs of a model designed to protect the EU from migrant flows might be unfair to refugees and asylum-seekers, or that a model is not distributing its computational power evenly across input languages? These seem very different concerns, one relating to a human-rights-based definition of fairness, and the other relating to statistical theory. It would be useful to disaggregate these concerns in the analysis.

RESPONSE: As the paper is already at its word limit, these suggestions would require a very substantial expansion of the paper and are thus extremely hard to incorporate within its current, technically oriented scope.

2.6 In the end the good points about the politics of accuracy made in the introduction to the paper get lost in a discussion that relies on classic interpretations of problems in AI ethics, and which seems insufficiently adapted to the extreme stakes of migrant and refugee flow prediction. The second part of the paper requires more thought in order to integrate it with the concerns named in the first, otherwise it remains an exercise in generic AI ethics divorced from the human rights and justice concerns that surround the use of such systems in relation to migration. One important issue to take into account is the regulatory response from EDPS on exactly this practice (https://edps.europa.eu/data-protection/our-work/publications/consultations/social-media-monitoring-reports_en) from 2019. What does this response and the issues brought up in it, specifically those based on arguments about group discrimination (see section 2 of the EDPS letter), mean for the analysis in this paper?

RESPONSE: This is a valid point, and I have incorporated this consultation into the paper. However, it is important to keep in mind that the EDPS conclusion is limited to EASO’s operations: it asserts that tracking migration within the context of monitoring migrant smuggling and human trafficking is beyond the scope of EASO, and therefore leaves a gray area for other EU agencies that may be monitoring social media for migration on the pretext that such monitoring is done to ‘prevent crime’. It is not very clear to what extent the EDPS designation of EASO monitoring activities can be seen as a general EU-wide rejection of migration monitoring through social media, or to what extent individual member states will limit their monitoring operations to the EASO framework. This still leaves quite a large space for the issues discussed in the paper. More specifically, the EDPS judgement is very limited in terms of providing technical justification for its conclusion, and most of the technical mechanisms that form the fundamentals of this decision are left unmentioned. To that end, the primary focus of this particular paper, the technical background of flawed decisions, is still relevant even in light of the EDPS judgement.

2.7 The conclusion of the paper seems at odds with the legal evidence: EDPS concluded that social media monitoring in the context of predicting migration flows is not legal, at least in relation to EU law. It would be good to consider this conclusion’s implications for the paper’s final assessment: that ‘social media data can be used ethically and cleanly in future A.I. migration and conflict monitoring tasks’. If it cannot also be used legally, this would seem to be an important addition to this conclusion, and throws doubt, at the very least, on the validity of this conclusion.

RESPONSE: As the paper’s scope is not limited to the EU, European regulations are quite important, but they do not fully link to the broader methodological and technical points the paper is making. Quite a large number of countries, both within the EU and outside it, still employ social media monitoring for migration prediction. It is difficult to measurably observe to what extent (or whether) the EDPS consultation has had a binding effect on the de facto everyday border protection practices of either EU nations or other countries that are still exploring ways of using social media data to predict migration. To that end, the EDPS conclusion is just one legal opinion (which I have included in the paper) among an ocean of different legal perspectives on the matter, and I am not sure it throws the conclusions of the paper into doubt, given the overwhelming momentum across the rest of the world’s border protection agencies to pursue paths that deviate from EU norms. There is still a very large global debate on these matters as countries try to find their own solutions independent of the EU debates, and the EDPS decision forms only a small part of this norm-setting process that is ongoing across the world.

Round 2

Reviewer 1 Report

This article is an interesting, easily understandable introduction to the theme of algorithmic prediction of migration trends related to the specific type of migration that has been triggered by organised violence. It seems like a perfect fit for the special issue to which it aims to contribute. At least for a reader with limited knowledge in the field of the use of text as evidence harvested by means of machine learning, this article offers a lot of new knowledge.

The article is now presented as a review article, which I think is important. It does not present new research with a research question, method, and analysis, but rather elegantly reviews existing research and makes suggestions on the direction of study in the field. As such the article is very useful.

The introduction is also much better. It now describes what the article is all about.

Consequently, I would be happy to recommend publication. 
