4.1. An Issue of Communication about Models?
A lack of trust in, and acceptance of, ecological computer models indicates a potential weakness in communication between modellers and decision makers. Ecological computer modellers tend to focus a considerable part of the communication effort on the model itself [8]. This focus rests on the knowledge-gap assumption that, in order to accept an ecological computer model, decision makers need to understand what the model does and how it does it. This is not surprising, since for many modellers a model is a simplified representation of the real world and its ‘quality’ relates to how effective this representation is. A focus on the model itself can help to address the ‘black box’ perception. However, our interviews suggest that this approach may not address what matters most to decision makers (input data quality, and model assumptions and manipulation) as part of their general trust in, and acceptance of, ecological computer models.
The concern about the trustworthiness of ecological computer models highlights a further potential misunderstanding between modellers and decision makers. Tuning input parameters to generate different model output can be interpreted as (a) a key function of a model, which allows for the exploration of alternative scenarios (as a function of natural variability and human intervention); or (b) an avenue for manipulation, allowing for the generation of any output a modeller or a decision maker desires. In the first case, variability in model output is seen as a feature of the real system which the model reproduces; in the latter, as an idiosyncrasy of the model, which can be used to lend an appearance of objectivity to what may in fact be a pre-determined, value-laden choice [17]. In the first case, a modeller can help the engagement process by communicating, explaining and addressing model uncertainty. In the latter case, transparency can be at least partially addressed in terms of input data and model validation.
The figure summarises the issues raised in this paper by focusing on the perception of where in a modelling process the highest uncertainty resides, according to modellers (dark bars) and to stakeholders and decision makers (light bars). Modellers are deeply aware of uncertainty in input data and model parameterisation (tall dark bar on the left). Because of their professional expertise, and possibly professional bias, they believe that ecological computer models are reasonable representations of real ecological processes, as shown by the terminology they use to describe them (“micro-cosmos”, “virtual ecosystem”, “virtual laboratory”) [18]. As a result, ecological computer models (short dark uncertainty bar in the middle) are believed to constrain the uncertainty arising from input data, by enforcing ecological and physical consistency on the model results. The uncertainty of model output (short dark bar on the right) is mostly constrained by the model. Decision makers, by contrast, are sceptical of ecological computer models, as discussed above (tall light uncertainty bar in the middle). As a result, they believe that only input data can constrain the uncertainty arising from model use, by enforcing consistency between model and observations. The uncertainty of model output (light bar on the right) is mostly constrained by input data. Consequently, in communication about modelling projects, modellers may focus on the ecological computer model while decision makers may expect the focus to be on input data.
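The modellers’ intuition that a model can constrain the uncertainty arising from input data can be illustrated with a deliberately simple, hypothetical example (the logistic model and all numbers below are illustrative, not drawn from any study in this paper): even when a growth-rate parameter is known only very roughly, an ecological constraint built into the model (here, a carrying capacity) can leave the long-run output almost unaffected.

```python
import random

def logistic_step(n, r, k):
    """One time step of discrete logistic growth."""
    return n + r * n * (1.0 - n / k)

def run_to_equilibrium(r, k, n0=10.0, steps=200):
    """Iterate the model until it settles near its attractor."""
    n = n0
    for _ in range(steps):
        n = logistic_step(n, r, k)
    return n

random.seed(42)
K = 1000.0  # carrying capacity: the ecological constraint built into the model

# Input uncertainty: growth rate r known only to within roughly +/- 50%
r_samples = [random.uniform(0.1, 0.3) for _ in range(500)]
outputs = [run_to_equilibrium(r, K) for r in r_samples]

# Relative spread of inputs vs. outputs
r_spread = (max(r_samples) - min(r_samples)) / 0.2  # close to 1 (i.e. ~100%)
n_spread = (max(outputs) - min(outputs)) / K        # close to 0
print(f"input spread  ~{r_spread:.2f}")
print(f"output spread ~{n_spread:.4f}")
```

The sketch captures the dark-bar pattern described above: a wide input-uncertainty bar collapses into a narrow output bar because every trajectory is pulled to the same ecologically consistent state. The light-bar (decision makers’) view is the mirror image: without trust in the model’s constraints, only the quality of the input data is seen to bound the output.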
4.2. Addressing Concerns about Computer Models
As mentioned in the Introduction, scepticism about computer models, and concerns about their use in environmental decision making, are shared not only by decision makers and stakeholders but also by some scientists and modellers [19]. Ecological modellers have long been aware of this challenge and have tried to address it by proposing standards for model description, validation, calibration, parameterisation and data use. One such attempt, TRACE [20], builds on an analogy between ecological computer models and general laboratory equipment: just as both scientists and decision makers expect sophisticated instrumentation to come with a manual, instructions and certification of quality and accuracy, so should they expect similar documentation and quality control for ecological computer models. This would both reassure decision makers and allow them to verify that criteria for scientific rigour are met. In a similar vein, the Open Science movement (see description and references in [22]) calls for transparency in model and data sharing, to facilitate cross-testing among modelling groups. A different approach, also motivated by concerns about the provision of scientific advice for decision making, focusses on information about the source, provenance, currency and reliability of both the ecological computer model and its input data, as well as the extent to which the science behind the results is considered sound by experts [23]. This recommendation has been partly adopted in the notion of ‘pedigree’ used to assess the reliability of different implementations of the Ecopath with Ecosim model [24], widely used in fishery and marine ecosystem studies.
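The general idea behind a pedigree can be sketched as a provenance-weighted score over a model’s inputs; the categories, weights and input names below are hypothetical illustrations, not the actual Ecopath with Ecosim pedigree criteria.

```python
# Hypothetical pedigree-style scoring: each model input is tagged with the
# provenance of its data source, and sources closer to direct local
# observation score higher. Categories and weights are illustrative only.
PROVENANCE_SCORE = {
    "direct_local_sampling": 1.0,
    "regional_survey": 0.7,
    "literature_other_region": 0.4,
    "expert_guess": 0.1,
}

def pedigree_index(inputs):
    """Average provenance score across all model inputs
    (0 = all expert guesses, 1 = all locally sampled)."""
    scores = [PROVENANCE_SCORE[source] for source in inputs.values()]
    return sum(scores) / len(scores)

# Hypothetical inputs to an ecosystem model
model_inputs = {
    "biomass_herring": "direct_local_sampling",
    "diet_composition_cod": "literature_other_region",
    "growth_rate_plankton": "expert_guess",
    "fishing_mortality": "regional_survey",
}
print(f"pedigree index: {pedigree_index(model_inputs):.2f}")
```

A single number of this kind gives decision makers something the model equations themselves do not: an at-a-glance statement of how much of the model rests on local observation rather than borrowed or guessed values.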
It is interesting to consider the extent to which these approaches may help communication between ecological modellers and decision makers in years to come. On the one hand, the core message from our interviews suggests that the approaches described above are unlikely to have a short-term impact on an existing project, since much of their focus is on filling the knowledge gap. Improved and more transparent documentation may address the ‘black box’ challenge only to the extent that environmental modellers are willing to make such documentation easily intelligible to a non-expert audience, and to the extent that the non-expert audience is willing to put the necessary effort into understanding some level of technical content. Lack of trust arising from the perception that models can be manipulated to produce any pre-conceived outcome belongs to human and institutional considerations that lie beyond modelling standards and knowledge-gap concerns [26]. It is, however, possible that in the longer run initiatives like TRACE and Open Science (see also [27], specifically for agent-based models) may lead to a cultural shift, along a path of first strengthening confidence in ecological models within the modelling community, then porting this confidence to the wider scientific community and finally to the decision-making community. Ultimately, for many ecological applications, the goal of ecological modelling is not a specific level of scientific accuracy as much as better decision making [3], and the strong focus on modeller-decision maker interaction (as advocated by the Open Science platform, see https://www.fosteropenscience.eu/taxonomy/term/7) may indeed lead to an attitude shift.
We have left the discussion of input data until last because, in most ecological studies, poor-quality input data is a reality rather than a modelling problem. According to some, the field of marine ecological studies is experiencing a ‘big data’ revolution [29]. It is reasonable to expect that a massive amount of environmental data of policy relevance will become available in real time in many areas of marine ecology and fishery science. This raises an interesting dilemma. On the one hand, the decision makers’ desire to see a large amount of current and local data feed marine ecological computer models could be fulfilled. On the other hand, as is widely recognised, the sheer size and complexity of ‘big data’ will inevitably require its analysis to be automated. The task of pre-processing, filtering and transforming (possibly even interpreting) data for subsequent modelling will likely be delegated to algorithms, that is, to more models. The need to trust models and modellers, which generally refers to the parameterisation, implementation and running of the models, will thus extend to data collection and selection [31]. Depending on which algorithms are used to carry out such tasks, this step may in fact be even less transparent than traditional ecosystem simulation. The trust issue may be exacerbated if neural-network-based approaches, such as deep learning techniques, are at the core of the collection and selection algorithms [33]. Due to their complexity, such approaches may be perceived as inscrutable and untestable not just by decision makers but also by the model developers themselves [32]. To complicate things further, [34] suggests that improvements in computing power and artificial intelligence (AI) have allowed the simulation of physical phenomena to the point of making them virtually indistinguishable from reality. If decision makers already see ecological computer modelling as prone to manipulation, this may become even more problematic once such complex algorithms and hardware become commonplace in scientific modelling.
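The point that automated pre-processing is itself a model can be made concrete with a minimal, hypothetical quality-control filter for streaming sensor data; the thresholds and readings below are invented for illustration and do not describe any operational system.

```python
# Sketch of an automated quality-control filter for streaming sensor data.
# The filter's thresholds are themselves tunable parameters, so "the data"
# a downstream ecosystem model sees already reflects modelling choices.

def qc_filter(readings, valid_range=(0.0, 35.0), max_jump=5.0):
    """Keep readings that fall inside a plausible physical range and do not
    jump implausibly far from the previous accepted value."""
    accepted = []
    for value in readings:
        in_range = valid_range[0] <= value <= valid_range[1]
        smooth = not accepted or abs(value - accepted[-1]) <= max_jump
        if in_range and smooth:
            accepted.append(value)
    return accepted

# Hypothetical sea-temperature stream with two suspect spikes
raw = [12.1, 12.3, 47.9, 12.4, 25.0, 12.6]
print(qc_filter(raw))                 # default thresholds drop both spikes
print(qc_filter(raw, max_jump=15.0))  # a looser threshold keeps 25.0
```

Note that the two calls disagree about whether 25.0 is a genuine reading or noise purely because of a parameter choice: exactly the kind of tuning decision that, in the ecosystem model itself, raised the manipulation concern discussed above.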