Emerging Technologies and Innovation—Hopes for and Obstacles to Inclusive Societal Co-Construction

Abstract: Since the late twentieth century, the concept of emerging technologies, fields designated as such and their governance have received increasing attention in academia, the media and policymaking. This also applies to the strongly interdisciplinary field of technology assessment (TA), sustainability research (SR), and activities and discussions about responsible (research and) innovation (RI/RRI). A crucial question in this context is how these technologies can be developed and governed in an inclusive manner in order to foster societally beneficial and widely accepted innovations. Given the diversity of values and socio-economic interests, such inclusive societal co-construction is not easy to achieve. Discussing various fields of emerging technology (applications) and based on the results of pertinent earlier research and dialogue activities, this article analyses hopes for and obstacles to such co-construction. It concludes with a plea to integrate meta-consensus approaches in governance conceptions for emerging technologies in RI/RRI, SR and TA.


Introduction: Emerging Technologies and the Quest for Governing Their Development
'Emerging technologies' has become a term and topic of increasing relevance in academia, news articles and policymaking during recent decades [1]. This has occurred despite (or maybe partly because of) the apparent lack of a clear, generally accepted definition of such technologies, or of methods to 'detect' or predict them [1,2]. Reflecting this conceptual fuzziness, online search results for the term are awash with technologies and best-of rankings generated by a wide range of different players, all falling under the label of emerging technologies.
Although there appears to be no generally accepted definition of emerging technologies, the term has been associated with 'radically' novel or 'disruptive' and relatively fast-developing, but still nascent, scientific-technological developments that (may) have prominent economic, environmental or social impacts. Furthermore, due to their novelty and their potential or expected impacts, they create considerable hopes or concerns, especially in potentially directly affected societal groups [1,3]. Emerging technologies that are discussed not only in academia but also in the media and the political arena are, for example, ('classical') genetic engineering (in particular for crop production) [4,5] and nanotechnology [6,7]. More recently, these have been joined by, for example, synthetic biology and genome-editing technologies (such as new breeding techniques, human germline engineering or gene drives) [8][9][10][11][12], neurotechnologies [13,14], quantum technologies [15] and artificial intelligence (AI) related to various applications (such as autonomous vehicles, environmental monitoring, financial modelling or biomedicine) [16,17].
Potential contributions to solving pressing societal challenges, such as economic development, public health or environmental sustainability, including climate change, make emerging technologies highly attractive and relevant for research and its funding, business and policymaking [18]. At the same time, uncertainties linked to the nascent stage of these technologies, and ambiguities related to the possible effects on various social groups and their interests or values, make them difficult territory for governance and policymaking. Imponderables and possible backlashes regarding the adoption of innovations can put the desired economic, political and societal benefits at risk.
Given these difficulties and their relevance for politics and societies at large, emerging technologies and their governance have become an important topic for national and supranational government activities, with the goal of shaping science and innovation and better aligning them with societal needs and expectations [19][20][21][22]. For the same reasons, the governance of emerging technologies has evolved into an attractive and increasingly competitive topic for 'accompanying research' and policy advice activities. These include the interdisciplinary fields of science and technology studies (STS) and technology assessment (TA), as well as activities of consultancy firms [1,20,23].
Participatory and co-constructionist concepts have been proposed as key elements for governance settings to cope with the uncertainties and disruptive potentials of emerging technologies and to shape their development according to societal needs (see the following sections). Such concepts appear to have become especially relevant and common to the fields of TA, sustainability research (SR) and responsible (research and) innovation (RI/RRI).
In this perspective piece, we aim to examine and reflect on hopes and challenges for inclusive deliberation and, on this basis, 'co-construction' of science, technology and innovation in these fields. Our analysis aims to provide food for thought and starting points towards policy paths that not only allow diverse voices to be included but thereby also promote effective, differentiated policies. Furthermore, the article aspires to contribute to identifying paths that, beyond shaping policy outcomes, may also shape people's perceptions, in order to reach the shared goal of these fields: sustainable socio-technical developments aligned with societal needs and expectations.

Common Challenges Arising for RI/RRI, SR and TA from Emerging Technologies
Given the supposed importance of emerging technologies for solving societal challenges and the controversies around them, the quest for how to best govern these issues has become a centrepiece of major political and policy-oriented science-based approaches that aim to assess, shape or guide scientific-technical developments and innovations. This is particularly the case for approaches in TA, Responsible Innovation (RI)/Responsible Research and Innovation (RRI) and sustainability research (SR).
The term 'technology assessment' first appeared in the 1960s, mainly when the U.S. Congress called for better information and policy advice to evaluate governmental decisions and to choose or implement research and development (R&D) policy [24]. This demand for policy advice led to the establishment of the Office of Technology Assessment (OTA) of the U.S. Congress in 1972, followed by the founding of TA institutions with similar tasks in various (mainly OECD) countries [25,26]. Beyond such 'parliamentary TA', which has primarily aimed to inform about the potential consequences of technologies, TA has diversified into various strands [25][26][27]. Expanding this primary focus on possible impacts, major strands include 'constructive TA' (CTA) and 'real-time TA'. CTA attempts to broaden the actual design and implementation of technologies through the feedback of TA activities, such as experimentation with new technologies and, in particular, dialogue between innovators and the public [28]. Similarly, real-time TA strives to allow modulation of innovation paths and outcomes in response to ongoing ('real-time') analysis and discourse. It focuses on creating a reflexive capability in the R&D process by integrating social science and policy research with natural science and engineering from the outset [29]. Especially in the context of emerging technologies and 'technosciences' [30] and their associated (future) imponderables, more recent TA developments are related to 'hermeneutic' approaches. These make techno-visionary 'futures' subjects of TA in order to identify contexts or 'societal meaning' (including 'wishes' or expectations, cultural narratives, or values and attitudes) that may be relevant for science and technology governance (e.g., [30][31][32]). All these forms of TA can be based primarily on scientific and other academic experts or can, in addition, be 'participatory', involving stakeholders and publics (as 'participatory TA' approaches).
While RI may be described as having stronger academic origins than RRI, which emerged largely as a policy-driven conception from the European Commission [33,34], both conceptions are similar with respect to their underlying principles and aims. Both go back to discourses and concepts on interrelations between science, technology and society from areas such as STS (including ideas of anticipatory governance), applied ethics and research into ethical, legal and social implications/issues or aspects of technologies (ELSI/ELSA), as well as various TA approaches (such as CTA and real-time TA) [20,35]. Both RI and RRI aim to direct research and innovation towards, and align them with, societal needs and expectations. The common underlying principles may be described as social and ethical reflexivity about motivations, purposes or possible impacts; anticipation (in the sense of exploring possibilities) by describing and analysing potential impacts; inclusive deliberation by engaging wider perspectives from various societal groups and stakeholders; and responsiveness, using the prior principles to set the direction and influence the paths of research and innovation [20,[33][34][35][36]. Furthermore, derived from the more policy-related roots of RRI in the EU in particular, sustainable (economic) development as part of a focus of research and innovation on societal 'grand challenges' (including climate change and securing supplies of energy, water and food) has been defined as a 'normative anchor point' for the directions and impacts of beneficial societal innovations in RI/RRI [20,34]. RI/RRI thus has links to the concepts of social and sustainable innovation, which also aim to address global societal challenges [37]. Likewise, they involve stakeholders and deliberative approaches. Additionally, as regards innovation outcomes, they strive for beneficial social and environmental impacts in addition to economic benefits [37]. 
However, concepts of social and sustainable innovation are driven by practitioners and are practice-oriented. Moreover, sustainability-related innovations typically have a market orientation: they satisfy needs and are competitive on the market within (current) innovation systems [20,37]. In contrast, current RI/RRI concepts have been developed by researchers and policymakers and focus on science and technological development. This focus has been linked to a major strand of critique in RI/RRI (other strands relate to questions of power dynamics, dangers of instrumentalisation and democratic accountability; see, e.g., [35,38]): it often remains unclear how its concepts can be implemented and reconfigure present innovation systems that are based on competitive advantage [20,39]. This also includes the more specific issue of how RI/RRI concepts could be implemented in a business context [37]. The sustainability challenge, and possible roles for novel technologies in solving it, has made SR another area that seeks to align the development of emerging technologies with societal needs and priorities, similarly to RI and RRI [40]. Such alignment includes both branches commonly ascribed to SR: 'descriptive-analytical' assessment of sustainability issues related to environmental and social impacts of technologies, which is largely based on systems analysis and modelling, and 'transformational' research, which strives to develop solutions for these issues in the sense of large-scale and 'real-world' societal change processes. Such processes include 'transition(s)' through social, institutional and technological change in societal sub-systems (such as energy or mobility [41]), brought about by the actions and collaboration of different actors and stakeholders [40][41][42].
While TA, RI/RRI and SR thus share the goal to align emerging technologies with societal needs and expectations, all three conceptions also face common governance challenges posed by these technologies. These relate in particular to (i) uncertainties about opportunities and risks, including long-term safety, environmental and socio-economic effects, and (ii) the experience that both the technologies and the assessments or perceptions of their benefits and risks are value-laden.
Uncertainties with respect to benefits and risks often arise because there are (or can be) no meaningful 'long-term' safety data on health and environmental impacts (including sustainability issues) before such technologies are actually used more broadly, even if it could be agreed what such data should encompass or mean. Examples are genetically modified (GM) crops or microbial strains [43,44], biofuels [45] and new human reproductive technologies such as certain in vitro fertilisation methods, mitochondrial replacement therapy or possible genome editing-based germline interventions [46,47]. As regards the latter in particular, safety data on humans would not be available before these technologies are first applied in people (such as in clinical trials) [46][47][48]. In addition, socio-economic impacts may emerge whose extent is difficult to predict. These include possible effects on the livelihoods of farmers derived from the production of plant components (such as flavours, fragrances or pharmaceutical compounds) by microbial production strains with new metabolic pathways (re-)designed by synthetic biology techniques [49].
The second common governance challenge, which to some extent also affects the first (i.e., what would be 'safe enough' when evidence is scarce and uncertainty prevails), is the role that values and worldviews play in the debates on these technologies and their benefits and risks. For instance, scientific data on the benefits and risks of GM crops or microbial strains metabolically engineered via synthetic biology are interpreted (with respect to relevant biological monitoring parameters and biological effects) from the perspectives of different backgrounds and contexts, including scientists from different disciplines and funding areas [43,50]. Such different backgrounds also include distinct views and stances on factors such as 'naturalness' with respect to food or food ingredients [49,51] and socio-economic effects, including issues of access and benefit sharing or impacts on livelihoods (by cheaper, more easily available or more suitable synthetic biology products such as vanillin, stevia compounds and algal oils). Furthermore, relevant factors encompass shifts of power to globalised companies [44,49] and fundamental ethical issues (e.g., human dignity, self-conception, autonomy and the question of what it means to be human in the light of technological progress) linked to technologies such as human germline engineering [46,47] or neurotechnologies [52][53][54].
Moreover, sustainability has numerous dimensions, and different sustainability goals may clash. Agricultural sustainability, for instance, may be defined in terms of biodiversity or food security (among other aspects), leading to a situation where proponents and opponents of GM crops use the same framework of sustainable development to support their arguments but do not reach a consensus [55]. Furthermore, even with agreement on prioritising food security, it would still be a question of values, or depend on the self-interest and position of the viewer, whether agricultural growth/productivity, sustainability or development is seen as the ultimate aim [56]. The influence of values and interests on interpretations of sustainability indicators [57] makes their development a process of both scientific and political-normative character [58].
These two challenges, of uncertainty in assessing the consequences of the use of emerging technologies (especially those used on a global scale) and individually differing preferences for qualitatively different benefits or indicators, make the idea that there can be an 'objective' or 'strictly scientific' answer when assessing the benefits and risks of such technologies appear unrealistic.

'Inclusive Construction' as a Common Theme in Governance
In the wake of experiences of backlashes to value-laden technologies such as GM crops and nuclear power plants in various countries, governments have strived to better manage societally and ethically problematic areas of research and development. Examples are nanotechnology or, more recently, synthetic biology, genome editing, AI and neurotechnologies [6,46,47,[59][60][61][62]. In the following, we identify two common developments and conceptions-inclusive deliberation and, on that basis, 'co-construction' of science, technology and innovation-that have become central to RI/RRI, SR and TA, in order to deal with the challenges in governing emerging technologies outlined above.
Early inclusion of and active engagement with stakeholders, various social groups and citizens from the general public are now seen as key to governing and aligning emerging technologies with societal needs and expectations in all three approaches or fields [20,40,63].
Such inclusive engagement is intended to help avoid the neglect of potentially negative societal impacts and ethical issues (and thus also possible backlashes) by taking into account the entire range of interests, concerns and perspectives, living up to the idea of deliberative democracy [20,63,64].
Interdisciplinary as well as transdisciplinary knowledge (i.e., knowledge and mutual learning derived from the inclusion of actors from both the scientific community and other societal groups as described above) should ultimately enable and be used to 'co-construct', 'co-create' or 'co-produce' science, technology and innovation. These processes are expected to involve both the co-production or co-creation of 'socially' more 'robust' knowledge and support for legitimate decision-making. Furthermore, they should provide 'constructive input' derived from inclusive deliberation and mutual learning to set priorities on research directions, technology design criteria or products [20,28,34,40]. We refer to this notion as inclusive construction in the following paragraphs.

The Quandary of Inclusive Construction
While such inclusive construction appears to have been embraced as a common concept in shaping and governing the development of emerging technologies and innovations derived from them, we argue that there are two major issues that challenge this concept and need to be addressed by governance schemes and policies on emerging technologies.

Inclusive Disunion
The first issue relates to the paradox that the very diversity and pluralism of stakeholders and people that can help to learn about different interests, perspectives, ethical-moral stances or worldviews can hamper the co-construction, or even the prioritisation, of technologies or innovations based on them. The (desired) diversity and pluralism in inclusive deliberation activities can result in de facto incommensurabilities in stakeholder views, or states of 'inclusive disunion'. These can arise both with respect to basic normative goals and with respect to the ways of realising (even shared) aims such as sustainability. To explain this more specifically, we refer at this point to our experiences in highly inclusive dialogue activities within the framework of SYNENERGENE (www.synenergene.eu, accessed on 24 November 2021), a major EU-funded international project on RRI in synthetic biology. Incommensurabilities emerged with respect to RI/RRI notions such as societal benefits that should go beyond economic gains [34]: some stakeholders perceived broader societal benefits and ethical values linked to synthetic biology applications as something that should and can be effectively separated from macro- or micro-economic aspects, such as economic growth and more employment, or economic decisions and situations of groups of people. Conversely, others appeared not to believe in the possibility of such a separation but felt that economic aspects are integral parts of societal benefits and ethical decisions about them. Further incommensurabilities in stakeholder views arose when it came to ideas of realising shared goals (such as sustainability or food security) by using synthetic biology; they were mainly linked to deep divides on synthetic-biology-driven impacts in a new bioeconomy (https://www.synenergene.eu/resource/workshop-summary-reportcreating-responsible-bioeconomies.html, accessed on 24 November 2021).
For example, while industry representatives, some scientists and policy experts mainly see synthetic biology as offering solutions for economic growth through more efficient biomass use (and thus less land and lower carbon footprints) (see also, e.g., [65][66][67]), many civil society organisations reject any 'green' or 'sustainability' claims made by synthetic biology industry representatives. They deem such claims 'greenwashing' of a highly problematic field of profit-driven technoscience. In their view, technology use will accelerate and increase biomass use, and greenhouse gas emissions will rise from biomass cultivation for biofuels or commodity chemicals. The latter may include carbon emissions from 'land use change', generated either directly, or indirectly if new ways to use the land lead to emission-intensive land use changes elsewhere (e.g., when the replacement of rangeland by sugarcane induces the generation of new rangeland from Amazonian forest areas [68]). Moreover, the use of the technology, such as for the production of biofuels or plant products by redesigned microorganisms, would affect the livelihoods of farmers in developing countries, or even human rights if indigenous people are displaced (see, e.g., [69,70]). Representatives of these groups thus demand limits on those ambitions of companies from industrial societies that have negative impacts on the global South. Similar conflicts and differences, in and about interests, values, notions of nature and worldviews, emerged or became clear with respect to the 'sustainability' and 'naturalness' of plant compounds produced by synthetic biology, such as flavours and fragrances (see also [44,49]). Thus, consensual constructive input from such inclusive processes that may help to shape or prioritise specific solutions on typically value-laden emerging technologies often appears elusive.
However, these findings may not be generalisable. When looking beyond highly contested fields that receive broader media and political attention, there are other emerging technologies that have direct relevance to sustainability, such as in the area of digitalisation and AI in agriculture (precision/smart farming or dairying). These technologies promise economic and ecological benefits but also raise profound ethical issues of data ownership and access, distribution of power, and other impacts on human life and society [71]. However, co-construction projects so far rarely go beyond inclusion of dairy farmers in technology development and adoption, excluding other societal actors and neglecting wider socio-ethical implications [72,73]. Whether co-construction and potentially more inclusive approaches may also run into fundamental problems in these cases appears unclear so far. As with synthetic biology, the important role of technology suppliers may lead to economic dependency and lock-ins, and views about the distribution of market shares or profits between farmers and technology providers will likely differ substantially. Similarly, the 'naturalness' issue is also present in discourse on smart dairying: there is the concern that smart farming may distance farmers (further) from their animals when direct contact is reduced to a minimum due to intelligent systems for animal health surveillance. Incommensurabilities in views on efficiency and economic benefits versus qualitative aspects like the relationships between humans and animals are evident here. However, while there are similarities here with synthetic biology regarding the impacts of worldviews on acceptability, there are also differences. The use and adoption of smart farming technology in restricted areas, i.e., on a small scale, appear easier, for example. 
Spreading of and contamination by genetically modified traits (such as by gene-drive technologies, in particular) and thus the risk of irreversibility in biological terms are not an issue. Moreover, consumers of smart-farming products may not feel as (directly) affected as in the case of food or food ingredients based on genetic modification and synthetic biology.

The Limits of Objective Evidence and Gold Standards
Adding to the issue of inclusive disunion, the divides described above are hard to resolve by persuading opponents to follow a specific path to reach common goals (where these exist) by using 'objective' scientific data or empirical, causal evidence on the impacts of technologies. This is not only because values and worldviews or worldview-based models play a crucial role in how people assess emerging technologies and policy measures to govern them (see previous examples and also [44,74,75]). Pivotal further reasons lie in the nature of the evidence that can be obtained on measures or ways to govern technologies and the issues these may entail.
Thus, even the most prominent or 'gold standard' methods of evidence-based policymaking, randomised controlled trials (RCTs) and systematic reviews of multiple studies (including such trials), frequently raise questions with respect to the conclusions drawn from them. Inspired by evidence-based medicine, RCTs are often considered (solely) capable of determining a causal relationship between an intervention and an outcome [76,77]. In RCT designs to assess policies, a population is randomly divided into two or more groups that receive different interventions ('treatment groups') or no intervention ('control group'). Randomisation is used to control for biases and confounding factors between intervention and control groups that may affect the probability of a given outcome. It thus allows causal claims about the effectiveness of interventions (in a studied population). However, such studies often raise questions when it comes to the reliability and applicability of experimental results and causal claims for different real-world contexts ('external validity') [76,77]. This is because the results of interventions studied in a specific place depend on diverse causal influences (or supporting factors) that are specific to different places with typically complex social contexts. For instance, when studying the benefits and possible harms of agricultural biotechnology in the form of a specific GM crop (see, e.g., [78]), outcomes such as yields, pesticide use or economic benefits to farmers, and, in turn, policy implications, depend on multiple factors. These include the extent to which places are affected by certain pests (which the GM trait should overcome), access to credit and markets for small-scale farmers, or support for farmers through training [9,78]. Therefore (without knowing all the context-dependent specifics), transferring results and predicting the impacts of tested interventions or measures in different contexts is usually not possible.
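The external-validity problem can be illustrated with a minimal, purely hypothetical simulation (all numbers, such as pest prevalence and yield effects, are illustrative assumptions, not data from the cited studies): the same randomised trial of a pest-resistant crop trait gives a large average effect where the pest is common and a small one where it is rare, so the estimate from one context does not transfer to the other.

```python
import random

def run_rct(effect_given_pest, pest_rate, n=10_000, seed=0):
    """Toy two-arm RCT of a hypothetical pest-resistant crop trait.

    The yield gain from the trait only materialises on farms affected
    by the pest -- a context-specific 'supporting factor'.
    Returns the estimated average treatment effect (yield difference).
    """
    rng = random.Random(seed)
    treat_yield, control_yield = [], []
    for _ in range(n):
        pest = rng.random() < pest_rate   # is this farm affected by the pest?
        treated = rng.random() < 0.5      # randomised arm assignment
        y = 100.0 - (30.0 if pest else 0.0)   # pest damage reduces yield
        if treated and pest:
            y += effect_given_pest            # trait offsets pest damage
        (treat_yield if treated else control_yield).append(y)
    return sum(treat_yield) / len(treat_yield) - sum(control_yield) / len(control_yield)

# Identical intervention, two contexts differing only in pest prevalence:
effect_high_pest = run_rct(effect_given_pest=25.0, pest_rate=0.8)
effect_low_pest = run_rct(effect_given_pest=25.0, pest_rate=0.1)
print(effect_high_pest, effect_low_pest)  # large effect vs. small effect
```

Both trials are internally valid, yet naively transferring the high-pest estimate to the low-pest context would badly overstate the benefit, which is precisely the 'de-abstraction' problem discussed below.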
Although practitioners in the field are aware of these problems, study results and the (specific) policy interventions employed are still often generalised and then subjected to abstraction by actors such as policy-advice bodies, industry or civil society organisations in order to warrant general claims. To be reliably transferred and used in different contexts, such abstract knowledge and claims would thus need 'de-abstraction' [76].
Similarly, in sustainability research, for example, it has been argued that more synthesis and mapping of evidence are needed, by pulling together and categorising systematic reviews, impact evaluations and other primary-research studies in particular areas [79]. Such evidence mapping (see, e.g., [80]) may be used to draw very general conclusions, for example that incentive programmes in agriculture linked to short-term economic benefit have a higher adoption rate than those aimed solely at providing an ecological service. The study cited [80] examines the evidence of nearly 18,000 papers with respect to whether three different broad kinds of incentive-based programmes lead to the adoption of sustainable practices, and with respect to their effects on environmental, economic and productivity outcomes. Besides drawing general conclusions, the authors also point out that the success of incentive programmes depends on a diverse set of contextual factors. Here, too, transferring 'evidence-based' conclusions to other cases requires de-abstraction in the sense of collecting detailed knowledge about the contextual factors that led to the adoption of programmes and their outcomes under different conditions.

Implications for Policy Approaches
Given the important role of moral views or values and the divides between stakeholders concerning them (including cultural differences), as well as the difficulties regarding 'objective' and generalisable evidence about the impacts of emerging technologies or policies to govern them, what type of inclusive input may then be possible or necessary to co-construct these technologies in pluralist societies? Obviously, one could argue that one result of mutual learning processes is that different players become aware of divides or states of inclusive disunion related to different interests, values or worldviews and the role they play. Yet, at the same time, the question arises as to whether mutual learning and inclusive deliberation can go beyond that point. What could or should this mean for strategies aiming at 'constructive input' for emerging technology development, for research and innovation policies, or for public policy, in order to align research and innovation processes with contentious or plural societal expectations, needs and values?
Abstractions and notions of objectivity in both science (including the social sciences and psychology) [76] and systematic or unidimensional philosophical or political conceptions (e.g., [81] and refs. therein) appear to be in sharp contrast to the pluralism of the concrete and particular lived experiences or 'realities' (in the sense that what is real is what is basic for an individual, a group or a nation, and thus contains a normative or 'existential' component [82]) associated with different interests, values and worldviews among individuals or groups within societies. This tension, which has been pointed out by various authors, including from political philosophy and philosophy of science (e.g., [81][82][83]), thus poses a severe challenge to the aim of shaping or aligning science and innovation with 'societal' needs and expectations.
Calls for 'broad societal consensus' (as a result of 'broad societal debates'), often even on a global scale, still appear to be widely seen as a solution. This is ironic, given the difficulties that abstraction by science and systematic philosophy [81] faces in dealing with 'causally dense' and value-laden environments (such as those typically related to social policies or economics), as well as the pluralism within and between societies. Prominent recent examples of such calls relate to AI applications, neurotechnologies and human germline editing [61,[84][85][86][87]. In the latter case, these calls are in stark contrast to the previous experience that bioethical consensus has proved largely elusive on various emerging reproductive technologies (e.g., [46,48]).
Despite the divides in worldviews and values described above, could there nevertheless be a type of consensus, or a consensual dimension, regarding constructive input into developing contentious emerging technologies? In traditionally industrialised societies, consensus-based decision-making has become increasingly popular and has been associated with participatory democracy activities since the 1960s. It includes collective decisions and the right of individuals to block decisions they perceive as harmful or immoral. However, this ideal has frequently been compromised for the sake of efficiency (i.e., to reach decisions more rapidly or easily), by restricting the authority to make certain decisions or by adopting a modified consensus process that requires only a super-majority (e.g., 90%) [88].
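The contrast between strict consensus with individual blocking rights and the super-majority variant can be stated precisely. The following toy sketch (illustrative only; the 20-member group and vote split are hypothetical assumptions) shows how a proposal with 90% support fails under strict consensus but passes under the modified rule:

```python
def strict_consensus(votes):
    """Strict consensus: any single participant can block the decision."""
    return all(votes)

def modified_consensus(votes, threshold=0.9):
    """Modified consensus: adopt if at least `threshold` of participants agree."""
    return sum(votes) / len(votes) >= threshold

# Hypothetical group of 20: 18 in favour, 2 blocking
votes = [True] * 18 + [False] * 2
print(strict_consensus(votes))    # False: the two blockers prevail
print(modified_consensus(votes))  # True: the 90% super-majority is reached
```

The trade-off made explicit here is the one noted above: the modified rule gains efficiency exactly by removing the individual's power to block decisions perceived as harmful or immoral.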
Agreement both on the values that should predominate and drive decisions (sometimes called normative consensus) and on the beliefs about why policies would have an impact (epistemic consensus) has proved dependent on moral views and ethical stances, and both are critical for agreement on policy preferences (preference consensus). Deep moral divides appear to prevent a 'universal' consensus and therefore agreement on a specific preferred action or policy [89]. Accordingly, such divides have been shown to make consensus at the simple level of any of these three types (and thus agreement on policies driven by a dominant value or belief) unattainable in inclusive deliberative processes [89,90].
A more realistic aim appears to be the achievement of a meta-consensus, especially a normative meta-consensus, in the sense that the different sides recognise each other's values as legitimate even without sharing them. This type of consensus has been shown to be an important prerequisite for jointly finding mutually acceptable solutions to common issues of concern, even while divides in morals and values remain deep or incompatible. A frequently cited example is the dialogue process on HIV/AIDS policies in Colorado, USA, where fundamentalist Christians and gay activists agreed to use distinct prevention methods appropriate to the distinct moral perspectives of the communities concerned. For example, in a gay bar, sexually explicit material could be used to educate people about HIV/AIDS, while in schools whose pupils' parents would be uncomfortable with sexually explicit material, other methods, aligned with and appropriate across the moral spectrum of their community, could be used [89,90].
With regard to the emerging technology examples outlined above, a relevant meta-consensus could include all sides recognising that their different notions of, and concerns about, sustainability (for example, 'more efficient biomass use by new technologies' vs. 'unwanted land-use change or protection of livelihoods') or 'naturalness' are legitimate. Another element of such a meta-consensus could be that the opposing sides agree to adopt distinct measures, each in accordance with their respective values, in order to jointly foster common goals such as 'sustainable development'. Such measures could, for instance, relate directly to the development and use of the technologies for certain purposes, or consist of non-technological solutions such as creating or redesigning product and labelling standards (e.g., regarding environmental impacts or 'natural' labels). Similarly, policies to support and protect indigenous people (including access and benefit-sharing provisions) or to facilitate market access for traditional products (such as those from countries affected by 'plant' compounds derived from synthetic biology) might contribute to such solutions.
Integrating meta-consensus approaches into governance conceptions for emerging technologies in RI/RRI, SR and TA may not only include and engage a diversity of voices but thereby also support effective, differentiated policies. Moreover, this approach may help to shape not only policy outcomes but also people's perceptions, in line with the shared goal of enabling sustainable socio-technical developments aligned with societal needs and expectations. How far research on RI/RRI, SR and TA, as well as policies for fostering and governing emerging technologies derived from such work, will actually take up principles based on such 'normative meta-consensus' approaches remains to be seen. However, experience and work on polycentric governance schemes in sustainability research (such as on mitigating climate change) [91,92] may help in developing or adapting concepts and policy approaches (also in RI/RRI and TA) for emerging technologies that better reconcile plural and deliberative elements with notions of co-construction. Application areas that may benefit from such approaches include, in particular, technologies that have long been subject to fierce controversies and acceptability issues. Examples include agricultural biotechnology (with new conflicts over genome-editing techniques emerging), synthetic biology for a new biomass-based economy and so-called 'human enhancement' technologies (genetic 'enhancement' technologies in particular). Relevant starting points could be self-organisation, such as in multi-stakeholder initiatives, to govern technologies at different levels (e.g., local, regional or international). This could help to generate trust and avoid domination by single actors. It may also allow differences to be dealt with more effectively at the level most closely related to the specific issues, and thus help to recognise site-specific conditions [92].
The latter may advance the necessary de-abstraction of 'evidence' and claims (as pointed out above) in order to assess and develop policies that are both efficient and differentiated. Further starting points could be facilitating experimentation and learning, for example through experimental policy (including regulatory approaches) and by supporting 'citizen science' settings. Governance schemes with or without the involvement of governments may support this and could draw inspiration from inclusive multi-stakeholder models for developing standards and certification, such as the Forest Stewardship Council (FSC) or the Roundtable on Sustainable Biomaterials (RSB) [93,94]. These democratically governed initiatives between various societal actors, including social and environmental NGOs and profit-making companies, might provide models for the constructive participation of stakeholders with different values and interests in innovations from emerging technologies and the governance of their impacts. In such models, 'societal dynamics' would be incorporated into the design of the R&D process and product development in the context of specific 'real-life' political and economic conditions, or stakeholders' 'realities'. To what extent such models and their governance principles could also shape, or be adapted to shape, more 'upstream' research and research agendas (as part of RRI and SR in basic science) is another interesting possibility to explore.

Conclusions
Inclusive deliberation and the associated co-construction (what we call 'inclusive construction') have been adopted in RI/RRI, SR and TA as a common concept for shaping and governing emerging technologies and the innovations derived from them. We suggest that two major issues challenge this concept and need to be addressed in governance schemes and policies.
First, the very diversity and pluralism of the inclusive deliberation activities required for mutual learning about different interests, perspectives or moral stances can result in deep divides, or de facto incommensurabilities, in stakeholder views. Such states of 'inclusive disunion' can relate both to basic normative goals and to the measures for realising shared goals such as sustainability. Second, such divides are hard to resolve by persuading opponents with 'objective' scientific data or causal evidence. This is not only because values and worldviews play a crucial role in how people judge data about technologies' impacts or the policy measures to govern them; equally important are issues concerning the reliability and applicability of causal claims from methods of evidence-based policymaking in different real-world contexts.
Given these issues, 'universal' consensus on the 'best' or preferred policies for (contested) emerging technologies appears unattainable in inclusive deliberative processes. We therefore propose integrating meta-consensus approaches, especially normative meta-consensus in the sense that the different sides recognise each other's values as legitimate. Such approaches may allow plural and deliberative elements to be included for engaging diverse voices, and thereby help to create effective, differentiated policies.
Relevant starting points towards such models for constructive participation could be polycentric self-organisation schemes and models inspired by existing inclusive multi-stakeholder initiatives for developing standards and certification. Being embedded in specific 'real-life' political and economic conditions, or stakeholders' 'realities', both points of departure may help to find the differentiated policy paths needed to cope with the outlined challenges to inclusive construction.