Sustainability 2017, 9(10), 1719; doi:10.3390/su9101719

Responsible Innovation: A Complementary View from Industry with Proposals for Bridging Different Perspectives
Marc Dreyer 1,*, Luc Chefneux 2, Anne Goldberg 3, Joachim von Heimburg 4, Norberto Patrignani 5, Monica Schofield 6 and Chris Shilling 7
1 FUTOPEDIA, Champsot 28, CH-1822 Sonzier, Switzerland
2 Académie royale de Belgique, rue Ducale 1, 1000 Brussels, Belgium
3 Corporate Research & Innovation, Solvay Campus, 310 rue de Ransbeek, 1120 Bruxelles, Belgium
4 jvhinnovation GmbH, 4125 Riehen, Switzerland
5 Politecnico of Torino, Via S.G. Bosco 4, 10015 Ivrea, Italy
6 TUTECH INNOVATION GMBH, Harburger Schloßstr. 6-12, 21079 Hamburg, Germany
7 The Florin Partnership, Temple Ewell CT16 3DA, UK
Correspondence: Tel.: +41-79-477-2150
The authors are members of a task force on Responsible Innovation (RI) set up under the auspices of the European Industrial Research Management Association (EIRMA). This document represents the views of individual members only and should not be taken as a statement of the views of the organisation as a whole.
Received: 28 June 2017 / Accepted: 18 September 2017 / Published: 25 September 2017


This paper presents a consensus view on Responsible Innovation by a group of industry practitioners, each with high-level management experience in driving innovation from within industrial companies operating in different sectors. The authors argue that, while a substantial body of academic research on Responsible Research & Innovation (RRI) of potential interest now exists, it is failing to have impact in the industrial community, at which it is partly targeted, because many of the concepts, tools and methodologies are not aligned with current industrial practices. This is leading to a misconception as to where industry stands on topics addressed by RRI, and to difficulties in advancing a dialogue that is meaningful to both parties. The need to distinguish between processes relating to research and innovation is argued, together with the view that research into RRI should encompass more of the on-going work being carried out in related fields: the role of companies in society, the debate around Corporate Social Responsibility (CSR) and Creating Shared Value (CSV), responsible digital innovation, the elements of ethical leadership, sustainable investment policies, work on establishing social impact factors, and public concerns about innovation. Directions for an alignment of the terminology and concepts are also proposed. This paper is to be viewed as an attempt at bridging perspectives with the aim of finding common ground to develop the field of RRI further, so that it provides effective concepts, tools and methodologies to guide industrial innovation towards better societal outcomes.
RRI; Responsible Research and Innovation; research integrity; responsible innovation; trust; responsible digital innovation; Corporate Social Responsibility (CSR); Corporate Shared Values (CSV); Sustainable Development Goals (SDG); values; ethical leadership; business ethics; responsible investments

1. Introduction

The task force that originated this document was formed through the initiative of individuals united under the European Industrial Research Management Association (EIRMA) umbrella, who saw the necessity for a framework of best practices to guide the research and innovation process—encompassing the efforts of academia, government and public administration, and industry—towards a positive impact on society. We recognise that there is a growing body of citizens with concerns about societal values, and of consumers with concerns about the risks and benefits they face from continuing innovation. Without clear and credible guidelines, the danger is that malpractices will further fuel an already existing crisis of mistrust that citizens have towards the business community, which could severely hamper the use of science and technology to develop innovative solutions addressing societal needs with true benefits and value.
While literature abounds on the topic of Responsible Research and Innovation (RRI), most of it originates from academia and government bodies, but does not resonate with practitioners. The perception within industry is that this existing research on RRI is not relevant to industry, either because it uses a “taxonomy” that is too different from current practices (with different words or concepts used to mean similar things), or because it does not (or only vaguely) include elements that industry considers as central. A probable consequence is that the efforts by most companies to redirect their activities towards more sustainable and positive societal impact are not properly understood by the RRI research community, and the research on RRI is not adopted by industry. To illustrate this difference of perception in more detail, we start from a literature review and propose some bridges between the published RRI frameworks and present practices in industry. We then define the terms of research and of innovation; how they differ, how they can fail to deliver, and why they must be treated differently. We also propose for discussion some directions for a framework for Responsible Research (or Research Integrity), and for Responsible Innovation. We conclude with proposals for a discussion between academia and business, and propose for discussion a draft of a manifesto on RRI.
We at EIRMA think that, as a member association of companies that has been debating, exchanging on, and drawing lessons from the evolution of industrial research and innovation management across various industrial sectors for over 50 years, we have accumulated valuable experience [1]. We consider this experience a valuable basis for engaging with other experts in the field, primarily from academia and public bodies. We are convinced that a much more impactful implementation of RRI concepts will result from achieving understanding and alignment between these three communities.

2. Background on Responsible Research and Innovation: Views from Industry

2.1. A Critical Review of RRI from a Perspective of Business

Several attributes and frameworks describing Responsible Research and Innovation (RRI), or Responsible Innovation (RI), are available. For Stilgoe et al. [2], RRI should have attributes of democratic governance, responsiveness and framing of responsibility, and should take “care of the future through collective stewardship of science and innovation in the present”. Von Schomberg et al. [3] refer to “societal actors and innovators being mutually responsive to each other”. Other definitions can be found in the literature, but they do not differ significantly in substance. What is not obvious from these definitions is whether they can actually be implemented, and this may be one of the reasons RRI is failing to have meaningful impact on the governance of innovation. Such concerns are also shared in academic circles [4]. For the next part of this discussion, we will follow the latest update available on RRI practices, by Lubberink et al. [5].
A first element that can create issues is that research and innovation are quite often handled as a single concept, while they are actually very different processes: research is about generating knowledge, and innovation is about generating new benefits, or (economic) value. A discussion about the ethics of generating knowledge cannot take place at the same level as one about the ethics of generating societal or consumer benefits from innovation. We return to this later in the text.
A second element that causes concern is the perspective RRI provides on innovation governance. Innovation is a highly individual process, in which leadership plays a vital role. It is about taking risks, swimming against the stream, perceiving opportunities before others, and demonstrating qualities of resilience, consistency, and a degree of stubbornness. Shoehorning all this into a democratic process to share responsibilities is an approach highly likely to kill innovation. To illustrate the point with social innovations, the abolition of slavery and the extension of voting rights to women were two interventions initially met with only limited democratic agreement.
Referring to the RRI framework proposed by Stilgoe, Owen et al. [2] and accepted as standard [6], which defines the four building blocks of anticipation, reflexivity, inclusion and deliberation, and responsiveness, we typically find that these blocks exist under different names in innovation management, and there would certainly be added value in unifying these concepts and definitions under one common roof. For example, Anticipation, which is about addressing possible implications of the innovation to be developed, can be linked to the Business Model Generation methodology of Osterwalder and Pigneur [7] and to risk analysis, a standard approach well defined by the Project Management Institute (PMI) in its project management body of knowledge [8]. Reflexivity, or critically reviewing one’s own activities, can be related to the audit of practices and compliance, such as the ISO standards for quality, environment, safety, and others. Inclusion and deliberation, or the upstream engagement of stakeholders, including the public, to define the various implications of the innovation being developed, can be related to the front-end activities in innovation management, a well-established practice in product development described by R. Cooper [9]. “Design Thinking” [10], a well-established concept, also offers a relevant methodology. Responsiveness, or the capacity to change direction in response to stakeholder value propositions, can be related to the requirements of Agile Project Management [11], and also reflects the recent adjustments in project management, as defined by Kerzner [12].
Revisiting the RRI framework, presently confined to some academic circles, in the light of practices from Innovation Project Management and the Business Innovation toolkits widely applied in business and industry, would certainly accelerate its adoption. Indeed, as an example, EIRMA has been working on topics such as “Project Management”, “Knowledge Management” and “Open Innovation” for decades, as can be seen from EIRMA’s working group reports [13].
Additionally, in our attempt to build bridges between the two different worlds of academia and industry to reach a shared understanding of the various approaches to RRI, we discuss how RRI could be better served by reflecting concepts related to leadership, Corporate Social Responsibility (CSR) or Creating Shared Value (CSV), and alignment to the UN’s Sustainable Development Goals (SDG) in its framework.

2.2. A Purpose for RRI: Society, Resilience and Innovation

Acceptance of an RRI framework will be greatly facilitated if its relevance to societal issues is clearly understood by all stakeholders, and the need for trade-offs is accepted. We are in a time of dramatic change; a time of great accelerations presenting existential challenges. Adapting to this new environment requires the ability to rapidly translate progress in science and technology into innovations that generate benefits and value, not only for shareholders, but also for consumers and society. Doing this effectively and rapidly requires the early involvement of all stakeholders to identify value and benefits, and to negotiate trade-offs that are well accepted and can be implemented smoothly.
Science and technology have delivered unprecedented improvements in recent decades, resulting in a dramatic extension of life expectancy, with unimaginable developments in information and communication technologies (ICT), health and life science, as well as the availability of cheap food and energy. Poverty has been dramatically reduced, in absolute and relative terms. Without doubt, we can claim that our generation is today enjoying a quality of life never seen before, and the innovation driven by science and technology has a role at the heart of this revolution [14].
It is also clear that existing economic and social models are reaching their limits on delivering an ever-improving quality of life: emerging data for developed countries suggests that, while life expectancy is increasing, the quality of life appears to be declining in some segments of the population, if not for whole countries [15,16]. We are facing a dramatic surge in malnutrition such as obesity [17], and in health issues related to air pollution especially in major cities [18]. An increase in the rate of diagnosed depression, especially amongst adolescents and young adults [19] has also been observed. As a positive example, progress in detecting and curing cancers (even as the number of cases increases due to overall longevity) is a demonstration of the beneficial impact of science [20].
At the same time, global Climate Change is setting new and dramatically challenging constraints in decarbonisation of energy. The supply of resources such as energy and water is facing increasing tensions [21], and the need for a 70% increase in food supply by 2050 to meet the projected population growth [22] will be a serious challenge. These issues have been referred to as “The Great Acceleration” [23], with possible dire consequences, such as a “sixth extinction” [24]. Beyond this, rising unemployment and inequalities [25] and the effects of so-called Industry 4.0 call into question some foundations of our liberal democratic societies. The emergence of the digital economy, if not managed responsibly, could fail to deliver its expected benefits and worsen rather than improve society.
Solutions to many of these issues are already available or progressing rapidly, for example precision agriculture, renewable energy supply, genomics, smart data and artificial intelligence. However, science will not be able to address these challenges on its own. History teaches us that societies that fail to demonstrate the necessary agility and fail to innovate will not be able to adjust fast enough to new situations. Not being able to address the question of societal acceptance of new solutions at an early stage is likely to be a major bottleneck in reaping the benefits from new scientific and technological developments. One of the main ingredients in reaping these benefits is the trust that citizens have in their institutions and organisations as drivers of innovation. It is hard for society to function without trust, but it is a fragile ingredient [26]. Examples of Climate Change [27], Obesity [28], Genetically Modified Organisms (GMO) [29] or controversies such as sugar [30] or glyphosate [31] illustrate at a high level the challenge of transforming science and technology into solutions without getting stuck in a quagmire of controversies.
From research on resilience [32] and survival [33], we learn that societies most able to avoid collapse are the ones that are most agile; they are the ones resilient enough to adopt practices favourable to their own survival and discard unfavourable ones [34]. In short: they are innovative!
Innovation lies at the root of successful economies. It is the combined effect of basic and applied research, development and deployment and can only be sustainably achieved (i.e., with long-term benefits) with the joint functioning of an efficient academic system that is pursuing world class research with integrity, of a sound education system at all levels, and of responsible companies that generate innovation with a high societal acceptance. This is Responsible Research and Innovation for the shared benefit of society, citizen and industry. Whenever we indulge shortcuts in delivering benefits that are socially accepted, we erode the necessary trust in our institutions and organisations, further handicapping our ability to use science and technology as agents of change to improve society.

2.3. What Makes Research and Innovation Different

In several articles on RRI, Research and Innovation are subjected to a similar analysis, while they are actually very different processes, with separate issues. “Research and Innovation” therefore denotes two distinct processes: Research, which is about generating new knowledge, and Innovation, which is the process of translating an idea or invention into a product or service that creates value or benefits for which customers will pay [35]. In essence, Research is about using money to generate knowledge, and Innovation is about using knowledge to generate money. However, there is a lot more to the issue than this simple difference, and this is the core of what we want to communicate. Figure 1 illustrates these two levels of Research and Innovation [36].
The Research process of generating new knowledge is an activity that may seem chaotic, because it is the world of trial and error, and of unpredictable outcomes. Progress in basic science has traditionally been driven, to a very large extent, by academia and publicly funded organisations, and to a much lesser extent by the private sector. Major achievements in innovation, such as in medicine, agricultural productivity, ICT, or energy, have their roots in the knowledge generated by the laboratories and research centres of universities or institutions that are most often financed by governments [37], according to an agenda set largely by the researchers themselves [36]. The drivers for excellence in research are centred on competencies in mastering science and technology, and supported by leadership talents such as curiosity, intellectual integrity and resilience.
The Innovation process, on the other hand, is about generating value by translating ideas, inventions or new knowledge into new solutions addressing a need or solving a problem to the benefit either of consumers or societies; hence “Creating Shared Value” [38]. Innovation does not necessarily rely on advances in science and technology as a starting point.
The OECD defines innovation in business as the implementation of a new or significantly improved product (good or service) or process, a new marketing method, or a new organisational method in business practices, workplace organisation or external relations [39]. A successful innovation requires a strong ability to connect the knowledge and discoveries made in science and technology with an understanding of consumer needs, business opportunity and potential societal impact, together with a willingness to take risks. It must also be implemented in compliance with rules and regulations, standards and norms, and must deliver sustainably, transparently and robustly a promised benefit, or value to a target population of consumers, users or patients [40].
An Innovation can be based on an invention (but does not have to be), either protected by intellectual property or made public, and often, but not always (at least not directly), relies on new knowledge from research. It might address one or more of a range of needs, including social, business process, software or financial, and can also be the result of a combination of solutions that are already available separately [41]. In general, the implementation of innovation is the territory of small or large companies, which typically provide a significant proportion of the R&D efforts required to expand or adapt academic, governmental or private fundamental research to deliver a practical business solution (in the USA, this is 70% of the overall effort [42]). Industry therefore requires the competency of clearly understanding the expected value, benefits and impacts of innovations for all stakeholders (consumers, customers, companies and society).
There are several models for the innovation process [36]; the terminology depends on the specific business and industry in question, e.g., chemicals, pharmaceuticals, IT, metallurgy, or consumer goods. In New Product Development, the typical model is the Innovation Funnel [43], with the Front-End Loading, Development, and New Product Introduction stages. In the Drug Development process, this is differentiated in pre-clinical research, clinical development and regulatory review [44]. These models typically assume three stages, often further divided:
Exploration, or the task of identifying the issue and exploring potential solutions. It is also called “Applied Research” in technology development, or “Front-End Loading” in New Product Development. This stage is typically chaotic, with a high level of unknowns and uncertainties that must be clarified, and often leads to a project idea being killed. Timelines for this stage are usually difficult to predict and maintain. At this stage, we typically deal with qualitative statements on markets, consumers and benefits; the “reason why”. This stage typically culminates with a pre-feasibility study, including a mapping of risks and opportunities, and cost estimates for the next stage.
Development, sometimes called “Pilot and Demonstration”, with concepts such as fast prototyping, pilot development, and pre-industrial trials. In sectors relying on sophisticated processes and enabling technologies, such as aeronautics, automotive, advanced materials or advanced manufacturing and processing, this is further differentiated by Technology Readiness Levels (TRL) representing the level of maturity of the technology under development [36]. At the end of this process, the benefits for consumers, the risks and the value both for the company and society must be clearly and quantitatively identified. Risk management is central to this stage. The gap between when an opportunity is identified, the knowledge base developed, and the perceived readiness of the market is often called the “Valley of Death”, owing to the high rate of project failures at this stage. Whatever the name given to the development process, the objective is to quantify the value and benefits, identify and mitigate or remove risks, and quantify the parameters of the business model such as consumer targets, market share, pricing and costing. Furthermore, because uncertainties are never completely removed and decisions must be taken, this is also the stage where entrepreneurship is most important—where the ability to take risks and have the right business intuition comes to the fore [45,46]. This stage typically culminates in a feasibility study where benefits, risks and investments are quantified as accurately as reasonably possible for the decision to launch. At this stage, the risks must be clearly identified and removed or mitigated, because correcting errors of estimation during or even after implementation can result in crippling liabilities, the more so when malpractices underlie the decisions taken.
Implementation (also “practice, market, society”) is about delivering value not only to consumers, but also to society, while ensuring the compliance of the delivered solution to regulations and standards, as well as scaling up through capital investments, and the necessary investments in marketing, sales and distribution [47]. This stage normally culminates with the product launch.
This innovation model assumes a linear process going through gates from one stage to another. This assumption of linearity has been challenged in recent years, especially in the software industry with the “agile manifesto” addressing necessities of speed to market and dealing with uncertainties with a different, much more flexible, spiral-like project management approach [48]. Whilst it is clear that not all projects can be managed this way, this agile approach is an important consideration in looking for ways to realise the benefits of innovations more quickly.
As a result, competencies and leadership skills driving excellence and success in research [49] or in innovation [50] are different. As research and innovation are recognised as potential drivers to generate value, governments and companies have both designed schemes to foster either good quality research or effective innovation. Since these two processes are different, the schemes, tools and criteria defining best practices and governance also need to follow different rules and principles.
The on-going discussion about genome editing based on CRISPR-Cas9 is a good illustration of the necessity to differentiate between basic research and innovation. Presently, based on the Oviedo convention on medicine and human rights [51], germline editing of the human genome is only acceptable for the purpose of generating knowledge about the molecular mechanisms (i.e., for research), but applying it for reproductive purposes (i.e., as an innovation) is deemed irresponsible, and must be banned [52]. This is not yet the end of the debate on the trade-off between the precautionary principle and innovation potential [53].
As a summary, the process of Basic Research (Research) and the process of Applied Research and Development (Innovation) must be clearly dissociated. Basic research generates knowledge with an impact that is difficult to anticipate and quantify, but which can be measured in terms of integrity. Applied Research and Development is concerned with implementing an innovation, and should generate value for society, shareholders and individual customers/consumers, with impacts that should be reasonably anticipated and quantified. However, it is intrinsic to the nature of innovation that this prediction of the future impact of an innovation about to be implemented can be incomplete or even substantially wrong, and that the accuracy and probability of these predictions may not satisfy all stakeholders.

3. Some Issues with RRI as Viewed from Industry

3.1. Overview

Both research and innovation can fail to deliver responsibly, but they fail in different ways: research can fail by delivering inadequate or misleading knowledge due to inappropriate practices or even fraud; innovation can fail to deliver value to consumers or to society through misleading claims based on failed research, through deception about benefits, through inadequate business models, or through failure to consider external influencing factors.
The socio-psychological determinants of acceptance of emerging technologies are not yet fully understood [54]. Why some technologies get accepted while others face public rejection is not clear, but a pattern is emerging: perceived risks, trust, and perceived benefits such as perceived usefulness and ease of use are significant factors [55]. Investigations in the field of GMO acceptance confirm the determinants of risks, trust and perceived benefits [56], with wide variability among countries and consumers. Similar patterns can be observed in nuclear energy [57], where global events such as accidents affect perception. Similar patterns apply to digital technologies as well (see below). Consequently, we can expect that any controversies related to bad practices in research or innovation for emerging technologies will increase resistance to their acceptance, their commercialisation or their ability to solve societal issues. This supports the view that clarity on how to conduct research and innovation responsibly is needed. Technology assessments, including preliminary impact assessment, would be a way to pave the path to policies for implementing new technologies [58]. Unfortunately, technologies at an early stage of development have a high level of ambiguity in their potential impacts [59].
The field of identifying best practices in RRI is emerging today, and is the focus of attention for many communities. It is responding to a need to address several challenges, one of these being that research and innovation could fail to fully anticipate and address with priority the needs of society and its citizens. While scientific expertise enjoys a relatively high level of trust among citizens [60], especially compared to trust in governments (which was 80% in 1958 and is now below 20%) [61] or industry (similar pattern to governments) [62], trust in expertise and institutions has been generally continuously declining [63]. The consequence of this decline in trust is an increasing difficulty in providing adequate governance, and a concern that society will not entrust those doing science or developing technology with the license and resources to work on solutions to emerging challenges. Similarly, society will be more reluctant to provide a license to innovate and operate to industry. Some examples that illustrate this:
Research on Climate Change established the causal role of human activity: over 95% of scientists active in this field relate climate change to human activity. Citizens have also accepted this message to a large degree, but the message was long blurred by campaigns of denial that resulted in procrastination in taking the necessary actions [64]. The debate is not yet completely closed, as we know.
For many years, research on the health effects of smoking tobacco was not communicated transparently by many key actors, and was accompanied by denials of its carcinogenic effects. This also resulted in delaying actions, causing additional deaths that could have been prevented [65].
The deleterious effects of asbestos on lungs were known many years before its use as insulating material was banned [66].
The health effects of high sugar consumption on diabetes and obesity were not communicated transparently, or were blurred by deceptive information on fats and lipids, resulting in an increase of such diseases [67].
The controversy on glyphosate (Roundup herbicides) has resulted in delaying any action, mostly owing to a lack of data transparency that has led to differing interpretations of the analyses [68].
The late withdrawal of Paxil (paroxetine), an antidepressant that was launched on the basis of misleadingly reported and interpreted data, and was later demonstrated to have acute side-effects and questionable efficacy [69].
The recent “diesel-gate” with car engines, resulting from the misreporting of diesel engine emissions through deception and fraud [70], seriously eroding consumer trust in the car industry’s ability to self-regulate.
In the above cases, we have examples of research that was used to deliver innovations that failed to provide value, and were ultimately harmful to society.
Malpractices at the level of research must be prevented for several reasons:
They discredit science, and therefore weaken the robustness of science-led policy making (Brussels Declaration [71]).
They tend to reinforce “science myths”, or beliefs that are based on inaccurate information taken to be “facts” [72], and result in inadequate societal decisions or at least in delaying them, with associated additional cost.
They waste scarce resources by allocating them to the wrong type of research.
Social media and the Internet have made information available to everybody with little restriction, but have also removed many filters for fact checking and scientific validation, often reinforcing prejudices. This is related to cognitive bias [73], a topic of recent investigations demonstrating the limitations of our reasoning and reluctance to shift away from prejudices [74], whereby:
We tend to put far more weight on negative information.
We tend to mentally screen facts and figures that reinforce our set of beliefs.
Of course, the design of the algorithms that underlie social media is a culprit, enclosing all of us in bubbles of shared beliefs, but cognitive bias explains our general laziness in not being more motivated to escape from our mental comfort zones. This means that once an opinion is forged and turned into a belief, it will be very difficult to change, as research on cognitive dissonance has demonstrated [75].
Policymaking has historically been informed by experts translating the state-of-the-art of science and technologies into adequate recommendations. More recently, this role of experts has been challenged, with the consequence that policies and strategies, especially in the democratic debates, increasingly tend to rely on shallow justifications, sometimes distanced from scientific and technology rationale [76]. Each time science and technology is discredited as a result of some form of malpractice, trust in the use of expert insight for policymaking is further diminished.

3.2. A Focus on Issues with Basic Research

Because research is about generating knowledge, it fails when it delivers unreliable knowledge, as happens when malpractices such as inadequate methods, weak statistical power or selective data inclusion are tolerated. Such research must be combated, as it may result in inadequate policies or innovations and will discredit science.
The controversy over the dual use of science (the debate on whether research should be limited to science that does good, avoiding science that could do harm [77]) is thus not adequate here, because it implies a discussion of whether knowledge itself is good or bad. This question is, however, central to innovation, which is about generating benefits, as we shall see later, because benefits can indeed be good or bad, depending on the business model that is applied.
There are several ways to fail in delivering satisfactory research, stretching from unintended shortcomings to outright fraud [78]:
Using inadequate methods, with statistical mistakes, producing results based on weak statistical power;
Using questionable practices such as data selection, inadequate clinical studies, not divulging conflicts of interest, doctoring images, or over-interpreting results;
Jumping to inadequate conclusions, based on false evidence or inadequate logic; and
Committing outright fraud, such as falsifying or fabricating data, or plagiarism (fortunately, these are exceptional situations).
Outright fraud is clearly unacceptable and is highly likely to be ultimately debunked. The problem tends to come with the less obvious malpractice of cutting corners to get results that seem interesting enough to be publishable. Such bias is reportedly becoming increasingly frequent, as shown by the widely reported crisis of reproducibility affecting the life and social sciences in particular [79]. Issues of reproducibility even affect the discipline of bio-computing [80], seemingly owing to the enormous complexity of the data and software, but also to limitations of the computing infrastructure, and this may not be without important consequences for conclusions and decisions based on this field of research.
Additionally, there are too many instances of the real benefits of research achievements being obscured by questionable communication practices [81], such as deliberately distorting and over-interpreting facts and figures [82]. While such practices can achieve specific goals in the short term, they end up discrediting the research community in the eyes of public opinion and undermining the credibility of science and technology as an agent of change for the better. A decrease of trust is the price to be paid.
The recently created METRICS (the Meta-Research Innovation Center at Stanford) [83] is an institute dedicated to improving the quality of science by providing recommendations for more integrity in research, covering:
Methods: The phase of designing and conducting research;
Reporting: The phase of communicating research;
Evaluation: The phase of evaluating research;
Reproducibility: The phase of verifying research; and
Incentives: The phase of rewarding research.
From this perspective, research that is conducted responsibly is research aimed at delivering reliable knowledge. This has become known as Research Integrity, a term that is perhaps more appropriate than Responsible Research. The RRI framework would benefit from clarifying what constitutes research integrity.

3.3. A Focus on Issues with Innovation

Innovation is the engine of change for societies and companies. It is aimed at generating value and benefits for different stakeholders, but it can fail to generate this value when it is based on deception about benefits and side effects, or is based on unethical business models. Such types of innovation must be combated not only because they are not delivering the claimed benefits, but because they will discredit the business community, and erode the societal licence to operate.
While there is no one-size-fits-all for the type of ecosystem that will foster innovation, having lean regulation, low barriers for scaling up, and a system facilitating funding of risky innovation [84] are strong facilitators for success (see also the discussion on precautionary and innovation principles below). This is much easier to achieve when there is trust in the ecosystem between all the stakeholders—it is the core opportunity for establishing responsible practices in innovation. As noted above, trust in institutions (government, business, etc.) has eroded sharply in recent years [62], and regaining it will require us to make the system work for all stakeholders, giving individual citizens and society the opportunity to play a larger role in addressing their needs and fears, not only generating shareholder value but also generating value for people and society.
Any innovation must deliver value to customers and consumers as a top priority [85]. There are several ways to fail in delivering innovations, often related to tunnel vision [86]. A complete discussion would exceed the scope of this document, but innovations can fail to deliver value to customers and consumers for a variety of reasons, including when they:
are based on deceptive research, and will therefore fail to deliver promised benefits;
are based on misleading scientific reporting;
fail to communicate valid benefits honestly;
underestimate or do not communicate side effects;
abuse the trust of users by exploiting information asymmetry, or make claims that users cannot verify;
embed unnecessary planned obsolescence; and
generate unacceptable externalities.
Additionally, innovations may impact negatively on society, because they do not address external considerations and:
rely on unethical practices (e.g., child labour, disregard for work safety, and discrimination);
abuse a monopoly, through pricing or not delivering full value;
fail to comply with rules and regulations (e.g., environment, work safety, and contracts); and
fail to respect CSR best practices that integrate sustainable development issues.
We limit the discussion on innovation here but fully recognise that delivering on the above is only part of the overall need for responsibility, along with meeting other societal expectations such as employment conditions, paying a fair share of taxes, abiding by ethical business practices with vendors and suppliers, and refraining from lobbying for unethical causes [62,87].
Consequently, questionable innovations may be the result of inadequate developments based on failed research, but also of questionable ethical practices, or even questionable business models.
According to the World Business Council for Sustainable Development (WBCSD) [88], trust in business has eroded sharply since the last global financial crisis. The social fabric is wearing thinner, and many see business as reneging on its social contract. A probable consequence of this lack of trust will be a push for more regulation (i.e., shifting from the innovation principle to the precautionary principle), which will not only add costs to doing business but is very likely to act as a straitjacket that will increasingly affect industry’s ability to generate beneficial innovation. This means that responsible innovation should be considered a priority for the business community, if we do not want to jeopardise the ecosystem that fosters innovation, and want to further build a sustainable economy and society.
More research on how innovation can fail to generate true benefits, and what practices and ecosystems can be conducive to such failure, would certainly aid in developing a framework that gains more acceptance from the business community and industry.

3.4. Some Clarifications on Responsibility

We define responsibility as acting while taking accountability for the consequences of our actions, whether desired or not, anticipated or not, specific and measurable or not, and while taking external factors into account. Human bias in behaviour can also affect integrity in research or innovation.
Acting responsibly means acting to prevent or avoid undesired consequences of the actions of individuals or the community, and taking the necessary corrective actions as soon as those consequences become apparent [89].
Generating new knowledge in science and technology—i.e., the act of research—is certainly neutral in terms of ethics and responsibility. It is the way in which it is conducted that might not be responsible, especially when this activity does not comply with rules and regulations. However, the act of generating new knowledge should not be classified as responsible or not. It is the act of translating such knowledge into innovation that is never neutral, and that can either be done in a responsible way or not, according to the resulting impacts and the type of business model.
Rating an action as responsible or not will imply a level of ambiguity. For instance, a breach of a contract might be assessed with little ambiguity when the terms are clearly defined. Whether or not an act is an infringement of the law will be assessed through a corpus of legislation and regulations with ambiguities subsequently clarified by jurisprudence. The assessment of infringements of a responsibility with a moral character is often rooted in cultural practices and can result in high levels of ambiguity: what is moral and legitimate may not be legal, and vice-versa. What is acceptable in a cultural context may not be in another. When assessing responsibility, it is therefore important and useful to understand which type of responsibility we are talking about [89]:
Contractual responsibility, based on clearly defined mutual obligations that are very specific because they are based on an agreement between two or more parties, and are often related to penalties where a breach occurs. However, as innovation is necessarily linked to uncertainties and ambiguities, or asymmetry of information, contracts may still be a source of litigation.
Legal responsibility, which is specific, as it is based on laws and a jurisprudence providing a framework of obligations, but which is dependent on the laws applicable within a specific jurisdiction, e.g., a particular country. More specifically, the laws and jurisprudence related to Product Liability address claims such as negligence, manufacturing defect, design defect, and breach of an expressed or implied warranty. It can extend to strict liability, when the producer is expected to anticipate the negative impacts of its product and take responsibility for them, in view of the asymmetry of information (the producer knowing more about their product than the consumer).
Moral responsibility, which is value and culture-sensitive (see below) and may be open to interpretations that are outside of the competence area of scientists or engineers and must be elevated to the societal level. The link between moral responsibility and values is often illustrated by the thought experiment of the trolley problem (see below).

3.5. A Focus on Moral Responsibility

Contractual and legal responsibility are both fairly unambiguous and relatively clearly defined. They are also supported by jurisprudence. Moral responsibility is, however, much more ambiguous, and too often left undefined in discussions on RRI. For this reason, we shall shed additional light on it.
Innovation implies making choices and trade-offs about the benefit for stakeholders. At the heart of this decision process are the values not only of a company but also of society, and failing to address this at an early stage of the innovation process may result in a disconnect between the two, and in innovations that are technically but not socially sustainable.
Discussions on responsibility extend beyond the scientific and technical community to include the practitioner communities of lawyers, social scientists and philosophers, especially where moral responsibility is concerned. The Trolley Problem [90], well known in the social science community, is the typical starting point for a discussion of moral responsibility. It is a thought experiment illustrating the impact of values on a decision. It concerns the controller of a runaway trolley (tram), who can either throw a switch, diverting the trolley so that it kills one person, or do nothing and let the trolley continue on its course and kill five people. Without further information, about 90% of people surveyed would throw the switch, killing one person and saving five. However, if the one person is the controller himself or a close relative, and the five are criminals, the results will differ. This demonstrates the ambiguity of outcomes even in a simple situation, and the types of moral values that underlie such a decision process. Solutions to this problem can be based on ethics [91] or on potential liabilities [92]. Replacing the trolley with the concept of technology, we shall now describe the framework of moral choice and responsibility.
The Trolley Problem is at the root of an extended debate about the role of values in the design of products and services, which any organisation or person involved in innovation should consider: how we should address trade-offs or conflicts of interest, and whether such conflicts can be accommodated or mitigated through design adjustments [93]. Selecting any solution must be done after including the different stakeholders, such as society, communities, consumers and suppliers, preferably at an early stage of the process, with an adequate governance process capable of arbitrating between the different options and impacts. Classic project management principles impose the same requirement.
The screening process is likely to consider the profitability of the innovation project on one hand, and its adequacy in meeting the range of stakeholder value propositions on the other. The next question is of course how to define value. There is a consensus among philosophers that there is no such thing as truly universal values: values can vary over time (e.g., the perception of slavery throughout history) and geographically or culturally (e.g., the perception of human rights today), with different expressions across geographies and religious alignments [94]. For business and industry, the closest framework with a universal dimension that could guide the assessment of the impacts of innovations is the Sustainable Development Goals, a framework of 17 goals developed by the UN to tackle the world’s most pressing social, economic, and environmental challenges in the lead-up to 2030. This framework has been adopted by several large companies [95] and can therefore provide a set of criteria against which to assess the impact of innovation projects on society.
In a broad sense, this could mean that governments should do more than discontinue ethically unacceptable and unsustainable research, and should also take the lead in defining and realising areas of public value for innovation [89]. The problem with such expectations is that it implies a vision of what future we collectively want science and innovation to bring about [89], obviously something that will necessitate democratic debate, a high level of independence from lobbying and political influences, and a high level of trust in the organisations and the experts leading the process. This is certainly a courageous assumption!
The issue tends to be different within companies, where the prevailing culture largely dictates behaviour in conducting ethical innovation. This may range from timidity about “going the extra mile” in trade-offs to meet social goals and expectations, to a loose, and occasionally even fraudulent, approach to respecting rules and regulations. Root causes for such deviations typically range from a strained financial situation (which may not provide the necessary margin of flexibility), to sheer greed, or even to deviant patterns identified as the dark triad of narcissism, Machiavellianism and psychopathy [96]. These patterns are of course publicly condemned, but are often strong drivers of career development in some organisations, and are highly likely to lead to disrespect of, or cynicism about, concepts such as shared value and ethical innovation.
Additionally, there are tensions between the precautionary principle underlying the regulatory framework, and the innovation principle of trying and failing until achieving success [97] that should be addressed through a structured dialogue between these stakeholders (companies and public bodies) as illustrated with the example of the “Nanodiode” project [98].

4. A Framework for Responsible Research and Innovation for Industry

4.1. Responsible Research and Research Integrity

Several frameworks for conducting research with integrity have been developed. They are centred on transparency, peer review, good governance, fairness in providing references and credits, and clarity on conflict of interest.
As stated earlier, research is about generating new knowledge, and knowledge as such is neutral. This point is worth clarifying, because anticipating the impact of research is highly challenging: it would mean that a researcher would need to know in advance all of the possible consequences of translating their research results into innovation.
Knowledge can be used in various ways: quantum physics, for example, is used for weapons, microprocessors, etc. Taking examples to their logical conclusions, this would imply that the theories of General Relativity or Quantum Mechanics should have been screened for their anticipated impacts in, for example, enabling the atomic bomb or the guidance systems for missiles. Translating new knowledge from fundamental research such as Quantum Mechanics or Molecular Biology into new products or services takes many years, and the impact is near to impossible to predict accurately. We should recall the prediction, made in the late 1940s, that the worldwide market for computers would be no more than five machines [99], or the prediction that the problem of e-mail spam would be solved “two years from now” [100]. Other examples include GMOs, with low societal acceptance in Europe but not in the USA, and the cycle of acceptance of nuclear power, hailed as a magical technology for energy supply in the 1950s and demonised sixty years later [101].
Is knowledge moral, ethical or neutral? If we assume it is neutral, then ethics and morality are linked to innovation, not to research. The discussion on research is then about integrity: about building solid knowledge in a transparent way, open to peer review, that can be trusted.
However, breakthrough innovation is, to a large extent, based on new knowledge, which can be disclosed in a deceptive way, i.e., with information and data that are concealed, manipulated or misreported. As such, it can be used to support unethical innovation. When research fails as a result of minor misconduct such as selective reporting, selective citation and flaws in quality assurance and monitoring, or even from major misconduct such as fraud or manipulation (fortunately an exception), it will generate sloppy science and mistrust. The so-called crisis of reproducibility, especially for medical sciences, is an alarming sign that the call from scientists for better practices should be heard [102].
In the case of research, responsibility is often linked to a commitment to integrity, a topic regularly addressed at the World Conferences on Research Integrity [103], which produced the Singapore Statement on Research Integrity [104]. That statement promotes, among other things, transparency, compliance with regulation, rigorous peer review, a commitment to reporting irresponsible research practices, the promotion of research environments conducive to integrity, and the embedding of societal considerations into research programmes. It was extended by the Montreal Statement on Research Integrity in Cross-Boundary Research Collaborations [105] to reflect the additional requirements attached to research partnerships, which include clearly defining the respective roles of the partner research organisations, acknowledging the partners’ different practices, and anticipating questions of governance and public communication, including clarifying who has the authority to communicate on behalf of the collaboration.
The EU has issued a similar but more comprehensive document, the European Code of Conduct on Research Integrity [106], which deals with the conduct of research, addressing misconduct and Good Research Practices. This document represents a solid framework guiding the principles for conducting research with integrity. It addresses topics such as honesty in communication, reliability in performing research, objectivity, impartiality and independence, openness and accessibility, and fairness in providing references and giving credit. It intentionally excludes the question of the ethics of research, defined as:
“Any ethical questions that arise when science is regarded in a wider ethical/social context. Is the subject worthy of investigation? What are the consequences of such research? Could the research result in harm for people, nature or society, or conflict with basic human values? Is the research sufficiently independent of interested parties? Could a university or laboratory become too dependent on sponsored contract research? Could the researcher guard against the improper or selective use and misinterpretation of their findings, or against objectionable applications of their discoveries?”.
Finally, bias caused by human factors is often the cause of misconduct and malpractice. Narcissism and greed are not absent from the research community either [107].
In any case, improving the reproducibility of research, a major concern in the medical, life and social sciences, will increase the credibility of the knowledge generated. This can be achieved by acting on cognitive and methodological bias through training, promoting further transparency, more diversity in peer review, and reinforcing study preregistration [108].
The discussion on ethics in research certainly extends beyond the boundaries of science and must be multi-disciplinary. This is where disciplines such as philosophy and sociology meet the hard sciences. An example of such investigation is the Center for the Study of Ethics in the Professions, which provides a comprehensive catalogue of codes of ethical conduct for various professions [109] and conducts research on the ethical impact of research, in its specific case with a focus on neuroscience.

4.2. A Framework for Responsible Innovation

Innovation, defined as the implementation of solutions generating value that meet new requirements and needs of customers, has a business dimension. Again, innovation is neither invention nor research, both defined earlier, but their translation into a business proposition. There are many tools, guidelines and frameworks for defining and assessing business practices that are also relevant to innovation, such as the UN Global Compact [110], the OECD Guidelines for Investment [111], Corporate Social Responsibility (CSR) [112], Creating Shared Value (CSV) [113] and the UN Sustainable Development Goals (SDG) [114]. We discuss below how these concepts are linked to RRI, and how they overlap with and are relevant to it.
The UN Global Compact is a set of 10 principles addressing human rights, labour, the environment and anti-corruption. It is a framework to guide companies’ efforts in designing a company value system and a principled approach to doing business. As such, it is applicable to any organisation, whether engaged in research or innovation.
Corporate Social Responsibility (CSR) addresses how companies manage their economic, social, and environmental impact, as well as relationships with stakeholders (workplace, market, supply chain, community [115]). The essence of CSR is captured in the ISO 26000 [116] standard, which defines Social Responsibility (SR) as the responsibility of an organisation for the impacts of its decisions and activities on society and the environment through transparent and ethical behaviour. It is therefore not restricted to environment or philanthropy and it implies complying with laws and regulations, delivering consistent quality of products, addressing human rights and workplace conditions, minimising externalities (i.e., costs such as air pollution that are not accounted for) and environmental impact and operating with transparency and integrity, including in lobbying. More generally, it is a way to address the triple bottom line of people, profit and planet that should drive a sustainable business strategy. As such, it can be considered as a broader extension and adaptation of the UN Compact.
The concept of Creating Shared Value (CSV) was introduced by Michael Porter [117]. It is about making a connection between societal and economic progress by developing corporate policies and practices that enhance the competitiveness of a company while simultaneously advancing the social and economic conditions in which the company operates. It is centred on the value generated for different stakeholders, while CSR is centred on the impact a company has (both negative and positive). Both are important and complementary for innovation, with CSV delineating what should be achieved in terms of common goals for business and societal benefits. Implementing CSV means therefore:
reconceiving products and markets that achieve common benefits;
redefining productivity in the value chain; and
enabling local cluster development, so that economic activity also has a local beneficial impact.
This concept of CSV has been criticised as well: while appealing to practitioners, and elevating social goals to a strategic level, it is also perceived to underestimate the tensions between social and economic goals, and to be naïve regarding the challenges of business compliance [118].
The ethical and societal benefits of both CSR and CSV are therefore still debated [119]. On one side, not everything that is legal is ethical, and vice-versa. There are thus justifications to require companies not to restrict the framework of their innovation activities to just complying with the law. The perception of risks is dynamic, and anticipation beyond regulations based on the precautionary principle can be considered a corporate duty, especially when the state-of-the-art in an area of research is still ambiguous. On the other side, companies have no mandate from society to decide what type of societal issues should be advocated, and how. This should be the job of government, especially when it is democratically elected. This also reflects the view that the first responsibility of a company is to pay its fair amount of taxes, to abide by laws and regulations (especially on labour and environment), and to disclose with transparency information of societal interest. Another view is that a company that wants to lead in applying Responsible Innovation should have the societal impacts of its R&D on its radar, in order to maintain its legitimacy from society to operate and innovate. In any case, there is a clear requirement to clarify the type of reporting and accounting that can reflect the positive or negative impact of a company on its stakeholders [120].
Sustainable Finance and Investment is an emerging concept [121] that is applicable here, because the criteria used to select responsible innovation projects do not differ significantly from those used to select responsible or sustainable investments. Indeed, crossing the Valley of Death (Figure 1) very often requires huge investments: typically, a pilot or demonstration project costs ten times more than the preceding phase of applied research, and launch on the market is ten times more expensive again than the previous stage. In such cases, Responsible Innovation should comply with responsible investment rules. Responsible Investment must ensure that the criteria of Environment, Society and Governance (ESG) are met in selecting a portfolio [122]. Because this is an emerging domain in finance, there is presently no final consensus on the criteria to include in or exclude from an investment portfolio. Additionally, research on the profitability of sustainable (green) investments is still progressing. Finally, there are potential limitations and controversies with this concept: for example, most countries impose a regulatory framework on pension fund investments (the fiduciary duty) stipulating that priority must be given to economic criteria, and that beyond compliance with regulations, environmental and social criteria can only be additional factors rather than trade-offs that would sub-optimise the return on investment (ROI). This is in line with the view of Milton Friedman, suggesting that the trade-off between the expected ROI of a classical portfolio and that of an ESG-focused one has to be considered in the light of shareholder expectations, and perhaps even legal constraints, that may require ESG criteria to be only supplementary and not a justification for compromising on the optimal expected ROI [123].
Consequently, Responsible Innovation should progress hand-in-hand with Responsible Investment, and criteria that would exclude investments in a sustainable or ethical portfolio can certainly apply to innovation as well.
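The order-of-magnitude cost ladder mentioned above can be made concrete with a minimal sketch. The tenfold factors are the rule of thumb quoted in the text, not measured figures, and the cost units are hypothetical:

```python
# Illustrative only: each phase is assumed to cost ~10x the previous one,
# following the rule of thumb above. Units are arbitrary, not real data.
phases = ["applied research", "pilot/demonstration", "market launch"]
cost = 1.0   # cost of the applied-research phase, in arbitrary units
total = 0.0  # cumulative investment across phases
for phase in phases:
    total += cost
    print(f"{phase}: {cost:.0f} units (cumulative: {total:.0f})")
    cost *= 10
```

Under these assumptions, reaching the market requires roughly one hundred times the applied-research budget, and over one hundred units cumulatively, which is why investment criteria dominate the decision at this stage.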
Another important element of Responsible Innovation deals with the inherent nature of innovation: predictions of future impact can be incomplete or even substantially wrong, and their accuracy and probability may not satisfy all stakeholders. This is the dilemma of the precautionary and innovation principles. On the one hand, society and stakeholders want more innovations and their related benefits, but may then be reluctant to accept risks that are difficult and ambiguous to assess. Too much emphasis on the precautionary principle may kill innovation, while too much emphasis on the innovation principle may backfire as well. A society that wants to foster innovation must be ready to accept risks, but must accompany this with an agile governance framework for innovations carrying a large level of ambiguity. This dilemma must be part of any discussion on Responsible Innovation.

4.3. Digital Innovation and Responsibility

Increasingly, a large proportion of innovation is taking place in the digital economy. In 2016, the three companies with the highest stock market value were Apple, Alphabet and Microsoft, digital companies that did not exist 40 years ago. Innovation in the digital economy has transformed our lives, and more is expected to take place in the coming decades. Innovation in business models based on Big Data, deep-learning algorithms, and the Internet of Things (IoT) is expected to dramatically alter many paradigms fundamental to our lives, such as employment, communication, health and productivity. Acting responsibly in this area is therefore of primary importance in shaping the society and values of tomorrow.
Several working groups are actively involved in defining the principles of Responsible Digital Innovation [124]. While no final consensus is yet in place, the main directions for responsibility in digital innovation are emerging.
Digital innovation should not be based on algorithms or databases that lead to manipulation of information, infringement of freedom of expression, shortcomings in data protection and privacy or freedom of association. Neither should it negatively affect the right to education and multilingualism, consumer rights, and capacity building in the context of the right to economic development [125].
When research produces algorithms that could rule our lives (think of a “smart city”), a major issue emerges: the code should be open and auditable, since in this case Lawrence Lessig’s statement that “code is law” [126] holds very true. Given the general complexity and limited transparency of the design and functionality of deep learning and big data, this is becoming even more sensitive, and many opinion leaders such as Stephen Hawking, Elon Musk and Bill Gates [127] have expressed concerns and called for a framework for algorithm design. One such framework is the Asilomar principles for the design of AI, whose goal is to create beneficial rather than undirected intelligence, using principles such as transparency in failure, clarity in responsibility, alignment with human values, robustness with respect to security, and respect for privacy [128].
Another important emerging issue in ICT relates to its environmental impact: the input materials (e.g., the “rare earth” minerals used in manufacturing microprocessors: will they still be available thirty years from now?), the power consumption of the gigantic data centres of “cloud” providers, and the unresolved issue of e-waste (the mountain of electronic devices that very quickly become garbage). A new design paradigm is emerging as a core requirement for RRI in ICT: devices that are repairable, modular and in line with the circular economy approach.
The impact of algorithms and big data on the future of work is also a concern, as more and more jobs are expected to be replaced by systems based on data and artificial intelligence, starting with clerical tasks in offices and administration but extending to supply chains and manufacturing as well [129]. Consequently, unless such developments are accompanied by measures to mitigate the impact of changes in the workplace and the possible widening of inequalities, a popular backlash is highly likely, making progress in this direction more difficult [130]. Discussions have been initiated on topics such as a universal basic income, taxation of robots, and ecological taxation, but these discussions are at an early stage.
The European Union has launched a public consultation on building the European Data Economy, with the intention of developing a framework that balances the need to protect digital human rights against the freedom and openness that are key to the success of this economy [131]. This is work in progress, alongside several other initiatives presently unfolding.
One further element is emerging on social media: science can be easily and rapidly discredited in the eyes of public opinion through the propagation of controversies based on fake or manipulated news. Fake news is by no means new, as historians will readily confirm, and the widespread scientific illiteracy of the general population and a growing disaffection for science among young people do not help to turn this tide [132]. What is new, or at least comes as a surprise, is that this is not only still taking place at a time of almost completely open access to information, but seems even more pronounced now. As reported in Foreign Affairs [133], this is eroding the authority of experts and the trust of citizens in their institutions and organisations, with worrying consequences for our ability to maintain democratic debate on complex issues.
This is reaching such alarming proportions that the perceived integrity of science as an engine of progress may be seriously damaged. In the age of “fake news”, scientists must be aware of the mechanisms by which controversies surge, and must assume a moral duty to combat false information whenever it is identified in their social media communities.
Because digital innovation already has a major impact on our lives and societies, it cannot be ignored in a discussion on Responsible Innovation.

4.4. Responsible Institutions

Several research papers [134] have clearly highlighted that the type of governance in place in an institution, i.e., the values of its leaders and its incentive programmes, can play a major role in fostering malpractice, since it is people, not organisations, who ultimately hold responsibility. Unrealistic goals, resources disconnected from the planned objectives, leadership demonstrating relaxed ethical behaviour, an unhealthy financial situation, casual compliance with safety requirements, deficient quality or health rules: all of these situations may lead to a culture of “corner-cutting” that can induce malpractice [135]. An institution that wants to secure responsible behaviour must therefore equip itself by reinforcing regulatory and ethical compliance, protecting internal whistle-blowers, integrating feedback from customers and society, and applying strict criteria to the behaviour of its leadership and to its incentive structure.

4.5. EU and RRI

In recent years, the EU has developed a comprehensive approach to RRI. The highlights of this approach are [136]:
Early engagement with all societal actors with the goal of inclusiveness;
Gender equality, addressing under-representation of women;
Science education, i.e., enhancing the current education process;
Ethics, addressing both the mandatory legal aspects and the societal relevance and acceptability of research and innovation outcomes;
Open Access, i.e., giving free online access to the results of publicly-funded research (publications and data); and
Governance, i.e., preventing harmful or unethical developments in research and innovation.
The EU’s RRI framework is supported by tools, providing training modules and assessment kits.
The current version of the EU’s RRI framework is mainly designed for the research process, while still aiming to be relevant to innovation. In view of the issues highlighted above, the following points about the EU’s RRI framework are open for discussion:
It addresses public research that is not directly linked to the “value chain of innovation” while providing its indispensable breeding ground.
It does not clearly separate the integrity of research (see the Singapore and Montreal Statements above) from the ethics of research institutions.
On research, it does not include or reflect many concepts that already exist and are well established, such as the Montreal Statement on Research Integrity.
It is rather superficial on what ethics is, and fails to reflect the cultural element of the values that underlie choices.
In large part, it covers elements that are more closely related to the good governance of an institution or organisation (e.g., gender balance, science education), which are well described by the UN Global Compact.
It is not adequate for the innovation process as we have defined it above, which is typically implemented by the private sector, and it does not sufficiently reflect the many practices formalised within the OECD, the UN, or leading business schools and universities that are now well established and accepted as guidelines. This is detailed in the section on responsible innovation above.
Finally, it is not sufficiently relevant for innovation in the Digital Economy, one of the major fields of innovation at present.

5. Conclusions, and Proposals for a Way Forward

Innovation, which is about translating knowledge into wealth opportunities or new benefits, is mostly, but not exclusively, driven by the business community. It is distinct from research, which is the pursuit of new frontiers of knowledge by researchers, mostly, but not exclusively, in academia or research institutions.
RRI is an emerging topic of research within academia, but presently there is no clear agreement or understanding of what it encompasses and how it relates to well established disciplines such as technology assessment and business ethics. For the business community, the perception is that the RRI academic community is taking a very reductionist approach without adequate reference to on-going work in related fields and is therefore failing to have an impact on innovation governance. As a result, it is not yet seen as being relevant. In particular, the business community perceives the current RRI framework to have several shortcomings, among others:
It does not properly reflect established business practices in innovation and product development management, market analysis, consumer research, or compliance; and
It has failed to take note of parallel developments such as the debates on CSV and CSR, sustainable finance, and ethical leadership.
After years of working in silos, industry, academia, and policy makers need to create opportunities for dialogue to clarify the key issues and challenges that are not only a focus for RRI, but are also faced by innovators and wealth creators in responding to the needs of society. This should lead to a shared understanding of the topic, ensuring that it is relevant for industry and enabling the RRI discipline to properly reflect concerns about potential malpractice occurring during both the research and the innovation processes.
Concerns about the present form of the RRI concept are shared by many in academia, such as Lubberink et al. [6], Blok and Lemmens [5] or Crane et al. [118], quoted previously in this paper, and broader dialogue will offer a fruitful opportunity for progress.
In this paper, we have described elements that can complement the existing RRI framework and highlight new directions that RRI research could take, with the aim of encouraging constructive dialogue. Contrary to the apparent perception within the RRI academic community, industry is neither disinterested nor unaware, nor does it lag in addressing these concerns. The EIRMA Task Force was initiated by industrialists representing a broad range of sectors. Addressing the following points through a joint working group would certainly be conducive to a better perception and a higher impact of the idea of RRI within industry:
There is a need to disambiguate research (generating knowledge) from innovation (generating economic or societal value). These are very different processes, but they are often conflated, creating confusion.
We need to better understand how research can fail (e.g., through lack of integrity), how innovation can fail (e.g., by generating undue externalities and deceptive benefits), and what mechanisms can be put in place to minimise the likelihood of these failures occurring.
How cognitive bias can affect research integrity needs to be understood.
The RRI framework must be better aligned with business and industry practices, embedding elements such as Design Thinking, the Business Model Canvas, innovation project management, and risk management.
We need to clarify the purpose of the RRI framework to induce a better acceptance of the idea. Why should it be implemented? What would the mutual benefits for the various stakeholders be? It should therefore encompass the discussions on SDG implementation and the development of the circular economy.
The debate on the role of industry in society linked to CSR and CSV, and the emerging discussions on Sustainable Finance and Investment must be considered in the RRI framework.
Innovation in the digital sector is impacting society in a major way, and the Responsible Digital Innovation debate must also be part of the discussion on RRI.
We need to clarify the concept of responsibility: how decisions are made, how they are affected by situations, what the guiding principles are for acting responsibly, and how people can be distracted from behaving ethically.
Regarding business governance, the elements of ethical leadership must be part of the discussion: how organisations or individuals can fail to behave ethically, and what practices of good governance can prevent this.
For societal governance, we need to clarify the dilemma between the precautionary principle (focused on compliance with regulation) and the innovation principle (focused on risks and opportunities), and better define the role that agile governance can play in facilitating innovation without giving up the precautionary principle.
While not comprehensive, this list is certainly a minimal starting point.
Society faces an accelerating rate of change. Science and technology, with business and industry as enablers, have the potential to contribute in a major way to addressing societal challenges, provided they are trusted by citizens. RRI thus has the potential as a field of research to make a valuable contribution in helping to resolve tensions on what constitutes appropriate innovation directions. However, it needs to have a much stronger, more relevant connection with existing business and industrial approaches to ensure impact. The authors believe dialogue to capture best practices and develop tools and methodologies to support innovation is both possible and necessary, with the opportunity to deliver benefits to both society and business, enabling RRI to be accepted as relevant and thus drive better adoption of the concept. The authors declare themselves to be part of such a dialogue.


Acknowledgments

This study was conducted in full independence at the sole initiative of the authors. No funding was received for the publication.

Author Contributions

All authors were involved in the development of this paper. Marc Dreyer is the lead writer of the text. Luc Chefneux developed the Research and Innovation model. Anne Goldberg, Joachim von Heimburg and Chris Shilling contributed to the content, reviewed the text and provided comments. Norberto Patrignani reviewed and expanded the part on digitalisation. Monica Schofield is the convenor of the working group on RRI, reviewed the text and provided input on the history of RRI and the EU framework. All authors have read the document in full. The content and views expressed are also based on contributions of other members of the EIRMA task force not listed as authors.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. EIRMA Library EIRMA Publications. Available online: (accessed on 15 August 2017).
  2. Stilgoe, J.; Owen, R.; Macnaghten, P. Developing a Framework for Responsible Innovation. Res. Policy 2013, 42, 1568–1580. Available online: (accessed on 15 August 2017). [Google Scholar]
  3. Von Schomberg, R. “A Vision of Responsible Research and Innovation”. In Responsible Innovation; Owen, R., Heintz, M., Bessant, J., Eds.; John Wiley: London, UK, 2013; Available online: (accessed on 15 August 2017).
  4. Blok, V.; Lemmens, P. The Emerging Concept of Responsible Innovation. Three Reasons Why It Is Questionable and Calls for a Radical Transformation of the Concept of Innovation. 2015. Available online: (accessed on 15 August 2017).
  5. Lubberink, B.; Blok, V.; van Ophem, J.; Omta, O. Lessons for Responsible Innovation in the Business Context: A Systematic Literature Review of Responsible, Social and Sustainable Innovation Practices. Sustainability 2017, 9, 721. Available online: (accessed on 15 August 2017). [Google Scholar] [CrossRef]
  6. What Is RRI—RRI Tools? Available online: (accessed on 20 September 2017).
  7. Osterwalder, A.; Pigneur, Y. Business Model Generation. 2010. Available online: (accessed on 15 August 2017).
  8. PMI. PMBOK® Guide—Fifth Edition. Available online: (accessed on 15 August 2017).
  9. Cooper, R.G. New Products—What Separates the Winners from the Losers and What Drives Success. Chapter One, PDMA Handbook. 2013. Available online: (accessed on 15 August 2017).
  10. Platner, H. An Introduction to Design Thinking Process Guide. Available online: (accessed on 15 August 2017).
  11. Agile Project Management: Best Practices and Methodologies. Available online: (accessed on 15 August 2017).
  12. Kerzner, H. PM 2.0: The Future of Project Management. 2015. Available online: (accessed on 15 August 2017).
  13. EIRMA Publications. Working Groups Reports. Available online: (accessed on 15 August 2017).
  14. Lubin, G.; Kasperkevic, J. The 100 Greatest Trends of the Twentieth Century. 2012. Available online: (accessed on 15 August 2017).
  15. Bardi, U.; Perini, V. Declining Trends of Healthy Life Years Expectancy (HLYE) in Europe. Available online: (accessed on 15 August 2017).
  16. Zauli, S.S.; Battista, A.; Frova, L.; Lauriola, P. Healthy Life Years: A very Promising Indicator to be Handled with Caution. Epidemiol. Prev. 2014, 38, 394–397. Available online: (accessed on 15 August 2017). [Google Scholar]
  17. Obesity and Overweight—Fact Sheet. Available online: (accessed on 15 August 2017).
  18. Air Pollution. Crossing Borders. 2016. Available online: (accessed on 15 August 2017).
  19. Mojtabai, R.; Olfson, M.; Han, B. National Trends in the Prevalence and Treatment of Depression in Adolescents and Young Adults. Am. Acad. Pediatr. 2016. Available online: (accessed on 15 August 2017). [Google Scholar]
  20. Jones, G. Why Are Cancer Rates Increasing? Cancer Res. 2015. Available online: (accessed on 15 August 2017). [Google Scholar]
  21. Mekonnen, M.; Hoekstra, A. Four Billion People Facing Severe Water Scarcity. Sci. Adv. 2016. Available online: (accessed on 15 August 2017). [Google Scholar]
  22. Searchinger, T.; Craig, H.C.; Ranganathan, J.; Lipinski, B.; Waite, R.; Winterbottom, R.; Dinshaw, A.; Heimlich, R. The Great Balancing Act, Creating a Sustainable Food Future. 2013. Available online: (accessed on 15 August 2017).
  23. Steffen, W.; Broadgate, W.; Deutsch, L.; Gaffney, O.; Ludwig, C. The Trajectory of the Anthropocene: The Great Acceleration. Anthropocene Rev. 2015, 2, 81–98. Available online: (accessed on 15 August 2017). [Google Scholar] [CrossRef]
  24. Ceballos, G.; Ehrlich, P.; Barnosky, A.; García, A.; Pringle, R.; Palmer, T. Accelerated Modern Human–Induced Species Losses: Entering the Sixth Mass Extinction. Sci. Adv. 2015. Available online: (accessed on 15 August 2017). [Google Scholar]
  25. Cassidy, J. Forces of Divergence; Is Surging Inequality Endemic to Capitalism? 2014. Available online: (accessed on 15 August 2017).
  26. Schneier, B. Liars and Outliers: Enabling the Trust that Society Needs to Thrive. 2012. Available online: (accessed on 15 August 2017).
  27. NASA. Climate Change: How Do We Know? Available online: (accessed on 15 August 2017).
  28. O’Connor, A. How the Sugar Industry Shifted Blame to Fat. 2016. Available online: (accessed on 15 August 2017).
  29. Bawa, A.; Anilakumar, K. Genetically modified foods: safety, risks and public concerns—A review. J. Food Sci. Technol. 2013, 50, 1035–1046. Available online: (accessed on 15 August 2017). [Google Scholar] [CrossRef] [PubMed]
  30. Sifferlin, A. How the Sugar Lobby Skewed Health Research. 2016. Available online: (accessed on 15 August 2017).
  31. Whaley, P. EFSA, IARC and the Glyphosate Controversy. 2016. Available online: (accessed on 15 August 2017).
  32. Ehrlich, P.; Ehrlich, A. Can a Collapse of the Global Civilisation Be Avoided? 2017. Available online: (accessed on 15 August 2017).
  33. Meadows, D.; Randers, J.; Meadows, D. A Synopsis-Limits to Growth, the 30 Year Update. 2004. Available online: (accessed on 15 August 2017).
  34. WIKIPEDIA. Societal Collapse. Available online: (accessed on 15 August 2017).
  35. Saracco, R. The Saga of Research versus Innovation. 2014. Available online: (accessed on 15 August 2017).
  36. Why Innovate? What Are the Challenges for Europe? 2017. Available online: (accessed on 15 August 2017).
  37. The Entrepreneurial State. 2013. Available online: (accessed on 15 August 2017).
  38. About Shared Value. Available online: (accessed on 20 September 2017).
  39. The Measurement of Scientific and Technological Activities. Available online: (accessed on 15 August 2017).
  40. Cooper, R. Formula for Success in New Product Development. 2006. Available online: (accessed on 20 September 2017).
  41. Defining Innovation Goes Far beyond R&D. Available online: (accessed on 15 August 2017).
  42. Batelle. 2014 GLOBAL R&D FUNDING FORECAST. 2013. Available online: (accessed on 15 August 2017).
  43. Innovation Funnel. Available online: (accessed on 15 August 2017).
  44. FDA. The Drug Development Process. Available online: (accessed on 15 August 2017).
  45. Kahneman, D. Strategic Decisions: When Can You Trust Your Gut? 2010. Available online: (accessed on 15 August 2017).
  46. Osterwalder, A.; Pigneur, Y. Business Generation Model. 2009. Available online: (accessed on 15 August 2017).
  47. Kerzner, H. Project Management 2.0: Leveraging Tools, Distributed Collaboration, and Metrics for Project Success. 2015. Available online: (accessed on 15 August 2017).
  48. Sutherland, J. SCRUM: The Management System behind the World’s Top Tech Companies. 2014. Available online: (accessed on 15 August 2017).
  49. Zenger, J.; Folkman, J. Research: 10 Traits of Innovative Leaders. 2014. Available online: (accessed on 15 August 2017).
  50. Horth, D.M.; Vehar, J. Innovation How Leadership Makes the Difference. 2015. Available online: (accessed on 15 August 2017).
  51. Convention of Oviedo on Human Rights and Biomedicine. Available online: (accessed on 15 August 2017).
  52. Kaplan, J.C. CRISPR-Cas9: Un Scalpel Génomique à Double Tranchant. 2017. Available online: (accessed on 15 August 2017).
  53. Hirsch, F.; Lévy, Y.; Chneiweiss, H. CRISPR-Cas9: A European Position on Genome Editing. Nature 2017, 541, 30. Available online: (accessed on 15 August 2017). [Google Scholar]
  54. Gupta, N.; Fischer, A.; Frewer, L. Socio-Psychological Determinants of Public Acceptance of Technologies: A Review. 2011. Available online: (accessed on 15 August 2017).
  55. Technology Acceptance Model. 2014. Available online: (accessed on 15 August 2017).
  56. Lucht, J. Public Acceptance of Plant Biotechnology and GM Crops. Viruses 2015, 7, 4254–4281. Available online: (accessed on 15 August 2017). [Google Scholar] [CrossRef] [PubMed]
  57. Kim, Y.; Kim, M. An International Comparative Analysis of Public Acceptance of Nuclear Energy. 2013. Available online: (accessed on 15 August 2017).
  58. Von Schomberg, R. Prospects for Technology Assessment in a Framework of Responsible Research and Innovation. 2011. Available online: (accessed on 15 August 2017).
  59. Von Schomberg, R. A Vision of Responsible Innovation. 2013. Available online: (accessed on 15 August 2017).
  60. Fiske, S.; Dupree, C. What Do We Know about Public Trust in Science? 2015. Available online: (accessed on 15 August 2017).
  61. Pew Research Center. Public Trust in Government: 1958–2017. 2017. Available online: (accessed on 15 August 2017).
  62. 2017 Edelman Trust Barometer: Executive Summary. Available online: (accessed on 15 August 2017).
  63. Edelman, R. A Crisis of Trust—A Warning to both Business and Government. 2017. Available online: (accessed on 15 August 2017).
  64. Nuticelli, D. The Global Warming Debate Isn’t about Science. 2013. Available online: (accessed on 15 August 2017).
  65. Bates, C.; Rowell, A. The Truth about the Tobacco Industry in its Own Words. Available online: (accessed on 15 August 2017).
  66. Baur, X. Asbestos: Social Legal and Scientific Controversies and Unsound Science in the Context with the Worldwide Asbestos Tragedy—Lessons to be Learned. Pneumologie 2015, 69, 654–661. Available online: (accessed on 15 August 2017). [Google Scholar] [CrossRef] [PubMed]
  67. Kearns, C.; Schmidt, L.; Glantz, S. Sugar Industry and Coronary Heart Disease Research. JAMA Intern. Med. 2016, 176, 1680–1685. Available online: (accessed on 20 September 2017). [Google Scholar] [CrossRef] [PubMed]
  68. Corporate Europe Observatory. Scientist Writes to Juncker: New Tumor Evidence Found in Confidential Glyphosate Data. 2017. Available online: (accessed on 15 August 2017).
  69. The Economist. Clinical Trials, For My Next Trick. 2016. Available online: (accessed on 15 August 2017).
  70. The Economist. Difference Engine, the Dieselgate Dilemma. 2016. Available online: (accessed on 15 August 2017).
  71. Curry, J. Brussel Declaration on Principles for Science & Policy Making. 2017. Available online: (accessed on 15 August 2017).
  72. Harker, D. Creating Scientific Controversies: Uncertainty and Bias in Science and Society. 2015. Available online: (accessed on 15 August 2017).
  73. Lebowitz, S.; Lee, S. 20 Cognitive Biases that Screw up Your Decisions. 2015. Available online: (accessed on 15 August 2017).
  74. Kolbert, E. Why Facts Don’t Change Our Minds. 2017. Available online: (accessed on 15 August 2017).
  75. Cook, J. Inoculation Theory: Using Misinformation to Fight Misinformation. 2017. Available online: (accessed on 15 August 2017).
  76. Hilger, N. Why Don’t People Trust Experts? J. Law Econ. 2016, 59, 2. Available online: (accessed on 20 September 2017). [Google Scholar] [CrossRef]
  77. Brumfiel, G. Controversial Research: Good Science Bad Science. Nature 2012, 484, 432–434. Available online: (accessed on 15 August 2017). [Google Scholar]
  78. A Rough Guide to Spotting Bad Science. 2014. Available online: (accessed on 15 August 2017).
  79. Unreliable Research, Trouble at the Lab. 2013. Available online: (accessed on 15 August 2017).
  80. Center for Genomic Regulation. New Method Addresses Reproducibility in Computational Experiments. 2017. Available online: (accessed on 15 August 2017).
  81. Oreskes, N.; Conway, E. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. 2010. Available online: (accessed on 15 August 2017).
  82. Why Bad Science Persists—Incentive Malus. 2016. Available online: (accessed on 15 August 2017).
  83. METRICS. Stanford. Why MetaResearch Matters. Available online: (accessed on 15 August 2017).
  84. Regional Innovation Ecosystems. 2016. Available online: (accessed on 15 August 2017).
  85. Almquist, E.; Senior, J.; Bloch, N. The Elements of Value. 2016. Available online: (accessed on 15 August 2017).
  86. Hobcraft, P. Top Ten Causes of Innovation Failure. Available online: (accessed on 15 August 2017).
  87. The Economist. Schumpeter Social Saints, Fiscal Fiends. Available online: (accessed on 15 August 2017).
  88. Business and Sustainable Development Commission—Better Business Better World. Available online: (accessed on 15 August 2017).
  89. Iatridis, K.; Schroeder, D. Responsible Research and Innovation in Industry. Available online: (accessed on 15 August 2017).
  90. Panahi, O. Could There Be a Solution to the Trolley Problem? Available online: (accessed on 15 August 2017).
  91. Dean, J. The Trolley Dilemma and How It Relates to Communication. Available online: (accessed on 15 August 2017).
  92. Marshal, A. Lawyers Not Ethicists Will Solve the Robocar Trolley Problem. Available online: (accessed on 15 August 2017).
  93. Richardson, G.; Penn, J. Value-Centric Analysis and Value-Centric Design. Available online: (accessed on 15 August 2017).
  94. Icelandic Human Right Center. Human Rights Definitions and Classification. Available online: (accessed on 15 August 2017).
  95. WBCSD. New CEO Guide to the Sustainable Development. Available online: (accessed on 15 August 2017).
  96. Illimitable Men. Understanding the Dark Triad—A General Overview. Available online: (accessed on 15 August 2017).
  97. Mauron, A. Les Aspects Éthiques du Diagnostic Pré-Implantatoire. Available online: (accessed on 15 August 2017).
  98. The Nanodiode Project. Enabling Dialogue on Nanotechnologies. Available online: (accessed on 15 August 2017).
  99. The Telegraph. Worst Tech Predictions of All Time. Available online: (accessed on 15 August 2017).
  100. Barnett, A. Gates: I’ll Rid the World of Spams. Available online: (accessed on 15 August 2017).
  101. Gupta, N.; Fischer, A.; Frewer, L. Socio-psychological determinants of public acceptance of technologies. Publ. Underst. Sci. 2011, 21, 782–795. Available online: (accessed on 15 August 2017). [Google Scholar] [CrossRef] [PubMed]
  102. Bouter, L.; Tijdink, J.; Axelsen, N.; Riet, G. Ranking Major and Minor Research Misbehaviors. Available online: (accessed on 15 August 2017).
  103. World Conference on Research Integrity. Available online: (accessed on 15 August 2017).
  104. Singapore Statement on Research Integrity. Available online: (accessed on 15 August 2017).
  105. Montreal Statement on Research Integrity. Available online: (accessed on 15 August 2017).
  106. European Code of Conduct for Research Integrity. Available online: (accessed on 15 August 2017).
  107. Lemaitre, B. Science, narcissism and the quest for visibility. FEBS 2017, 284, 875–882. Available online: (accessed on 15 August 2017). [Google Scholar] [CrossRef] [PubMed]
  108. Munafò, M.; Nosek, B.; Bishop, D.; Button, K.; Chambers, C.; du Sert, N.; Simonsohn, U.; Wagenmakers, E.; Ware, J.; Ioannidis, J. A Manifesto for Reproducible Science. Available online: (accessed on 15 August 2017).
  109. Center of Study for the Ethics in Profession. Available online: (accessed on 15 August 2017).
  110. UN. Global Compact 2017 Toolbox. Available online: (accessed on 15 August 2017).
  111. OECD. Declaration and Decisions on International Investment and Multinational Enterprises. Available online: (accessed on 15 August 2017).
  112. Financial Times. Definition of Corporate Responsibility. Available online: (accessed on 15 August 2017).
  113. Financial Times. Definition of Creating Shared Value CSV. Available online: (accessed on 15 August 2017).
  114. UN. Sustainable Development Goals (SDG)—17 Goals to Transform Our World. Available online: (accessed on 15 August 2017).
  115. Investopedia. Corporate Social Responsibility. Available online: (accessed on 15 August 2017).
  116. ISO 26000:2010, Guidance on Social Responsibility. Available online: (accessed on 15 August 2017).
  117. Porter, M.; Kramer, M. Creating Shared Value. Available online: (accessed on 15 August 2017).
  118. Crane, A.; Palazzo, G.; Spence, L.J.; Matten, D. Contesting the Value of “Creating Shared Value”. Available online: (accessed on 15 August 2017).
  119. The Economist. Corporate Social Responsibility, the Ethics of Business. Available online: (accessed on 15 August 2017).
  120. Can a Different Framework of Value Enable Greater Trust in Business? Available online: (accessed on 15 August 2017).
  121. Guide de l’Investissement Durable. Swiss Sustainable Finance. Available online: (accessed on 15 August 2017).
  122. PRI. What Is Responsible Investment? PRI-Principles for Responsible Investments. Available online: (accessed on 15 August 2017).
  123. The Economist. Businesses Can and Will Adapt to the Age of Populism. Available online: (accessed on 15 August 2017).
  124. European Commission. Responsible Innovation in ICT. Available online: (accessed on 15 August 2017).
  125. Wikipedia. Digital Rights. Available online: (accessed on 15 August 2017).
  126. Lessig, L. Code and Other Laws of Cyberspace. Available online: (accessed on 15 August 2017).
  127. Hsu, J. Tech Leaders Are Just Now Getting Serious about the Threats of AI. Backchannel. Available online: (accessed on 15 August 2017).
  128. Future of life Institute. Asilomar AI Principles. Available online: (accessed on 15 August 2017).
  129. WEF. The Future of Jobs. Available online: (accessed on 15 August 2017).
  130. OECD. Key Issues for Digital Transformation in G20. Available online: (accessed on 15 August 2017).
  131. European Commission. Summary Report of the Public Consultation on Building a European Data Economy. Available online: (accessed on 15 August 2017).
  132. UNESCO. Harnessing Science to Society. World Conference on Science. Available online: (accessed on 15 August 2017).
  133. Nichols, T. How America Lost Faith in Expertise and Why That’s a Giant Problem. Available online: (accessed on 15 August 2017).
  134. Hodak, M.; Buchanan, B. Central Ethical Issues of Corporate Governance. Available online: (accessed on 15 August 2017).
  135. Jennings, M. Seven Signs of Ethical Collapse. Available online: (accessed on 15 August 2017).
  136. European Commission. Responsible Research and Innovation Europe’s Ability to Respond to Societal Challenges. 2012. Available online: (accessed on 15 August 2017).
Figure 1. The two disciplines of Research and Innovation.