Tech Giants’ Responsible Innovation and Technology Strategy: An International Policy Review
Abstract
1. Introduction
2. Literature Background
3. Methodology
4. Results
4.1. Quantitative Content Analysis
4.2. Qualitative Content Analysis
4.2.1. Acceptability Goals
Equitability Considerations
“AI systems can perpetuate and even amplify biases present in the data used to train them. This bias can lead to discriminatory outcomes, such as denying certain individuals access to opportunities or services”.
“We understand that special care must be taken to address bias if a product or service will have a significant impact on an individual’s life, such as with employment, housing, credit, and health”.
“Avoiding unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability and political or religious belief”.
“In its utilization of AI, Sony will respect diversity and human rights of its customers and other stakeholders without any discrimination while striving to contribute to the resolution of social problems through its activities in its own and related industries”.
“It is critical to invest stringent effort to identify such bias to avoid unfair and improper behavior from AI systems. Regardless of race, gender, disabilities, income, and any other indicator of diversity, all people should be treated fairly by AI systems”.
“We want our company and our technologies to be open, inclusive, fair and just: to reflect the human-centric values and fundamental human rights that we all share”.
Ethics Considerations
“As AI systems become more advanced, they may be able to operate independently and make decisions on their own. This potential development raises questions about who is responsible for the actions of these systems and how to ensure they align with human values”.
“Applications of AI often use personal data that could impact individual privacy and civil liberties if not managed properly”.
“As designers and developers of AI systems, it is an imperative to understand the ethical considerations of our work. A technology-centric focus that solely revolves around improving the capabilities of an intelligent system doesn’t sufficiently consider human needs. By empowering our designers and developers to make ethical decisions throughout all development stages, we ensure that they never work in a vacuum and always stay in tune with users’ needs and concerns”.
“The number one rule we apply when developing AI and data science is ethics and compliance in line with our Trust Charter. We leverage digital technologies for a sustainable future based on human-centered design with a ‘do no harm’ oversight”.
“Preserve and fortify users’ power over their own data and its uses”. “It’s your team’s responsibility to keep users empowered with control over their interactions and data”. “Organizations have a responsibility to use AI ethically as the technology matures. AI should be used to amplify our privacy, rather than undermine it”.
“AI systems should preserve the autonomy of human beings and warrant freedom from subordination to—or coercion by—AI systems. The conscious act to employ AI and its smart agency, while ceding some of our decision-making power to machines, should always remain under human control, so as to achieve a balance between the decision-making power we retain for ourselves and that which we delegate to artificial agents as well as ensure compliance with privacy principles”.
“We focus on improving and developing people’s capabilities and experiences and leverage a ‘human-in-the-loop’ approach to enable end-user control over ultimate decisions”.
Harmlessness Considerations
“NVIDIA aims to reduce the risk of harm from deployment of AI models or systems”.
“Be made available for uses that accord with these principles: We will work to limit potentially harmful or abusive applications”.
“We want our products to be distinguished not only by their capabilities but also by the care and attention we put into producing them”.
“AI can automate many tasks and processes, which can lead to job displacement. This displacement raises concerns about how to support workers and communities affected by these changes”.
“Xiaomi is firmly dedicated to ensuring security and safety throughout development and application of trustworthy AI technologies, providing users with safe and trustworthy AI products and services and making sure that our trustworthy AI will not do any harm to society”.
“AI systems should not harm human beings. By design, AI building blocks should protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work. Human well-being should be the desired outcome in all system designs”.
4.2.2. Accessibility Goals
Adaptability Considerations
“A reliable system functions consistently and as intended, not only in the lab conditions in which it is trained, but also in the open world or when they are under attack from adversaries”.
“Where appropriate, AI-powered systems should have control mechanisms to allow human operators to deactivate the AI component without affecting business continuity”.
“Our products are programmable and general purpose in nature”.
“Foundry’s interoperable and extensible architecture has enabled data science teams worldwide to readily collaborate with their business and operational teams, enabling all stakeholders to create data-driven impact”.
“We seek technological innovation to allow equal and convenient access to our products and services by all consumers. We apply the 4C Accessibility Design Principles when developing our products and services”.
Affordability Considerations
“We seek technological innovation to allow equal and convenient access to our products and services by all consumers”.
“We work for social and environmental progress in whatever we do, which includes a commitment to respect human rights; to invest in the diversity, equity, and inclusion of our teams and larger ecosystem; and to make Atlassian products and experiences fully accessible and usable for everyone”.
Inclusiveness Considerations
“We believe there is a need for equity, inclusion, and cultural sensitivity in the development and deployment of AI. We strive to ensure that the teams working on these technologies are diverse and inclusive. We believe that the AI technology domain should be developed and informed by diverse populations, perspectives, voices, and experiences”.
“Diverse teams help to represent a wider variation of experiences”.
“Embrace team members of different ages, ethnicities, genders, educational disciplines, and cultural perspectives”.
“Although bias can never be fully eliminated, it is the role of a responsible team to minimize algorithmic bias through ongoing research and responsible data collection representative of a diverse population”.
“AI should improve the human condition and represent the values of all those impacted, not just the creators. We will advance diversity, promote equality, and foster equity through AI”.
“Diversity and inclusiveness in society result in teams that generate better outcomes—including in the practice of AI”.
“We are committed to having diverse teams design and develop our ML models, to ensure a wide variety of perspectives and experience are considered. After all, ML models impact humans, and human experience should inform that impact”.
4.2.3. Alignment Goals
Deliberateness Considerations
“There are a wide variety of use cases that may incorporate ML, with different goals, characteristics, user bases, and potential impacts. Developers should consider the benefits and potential risks of their specific use case. Given the broad nature and applicability of ML, many applications may pose limited or no risk (e.g., movie recommendation systems), while others could involve significant risk, especially if used in a way that impacts human rights or safety”.
“Designed to be appropriately cautious and in accordance with best practices in AI safety research, including testing in constrained environments and monitoring as appropriate”.
“We will approach designing and maintaining our AI technology with thoughtful evaluation and careful consideration of the impact and consequences of its deployment”.
“It is critical to take steps to ensure that AI systems function according to their design purpose”. “Careful forethought is needed to develop AI systems that accurately and consistently operate in accordance with their designers’ expectations”.
“We’re thoughtful and deliberate in our approach at Workday, and we only develop AI solutions that align with our values”.
Meaningfulness Considerations
“Sony will be cognizant of the effects and impact of products and services that utilize AI on society and will proactively work to contribute to developing AI to create a better society and foster human talent capable of shaping our collective bright future through R&D and/or utilization of AI”.
“The value of AI is to empower mankind to learn and grow instead of surpassing and replacing mankind; the ultimate ideal of AI is to bring more freedom and possibilities to humankind”.
“We believe AI is best utilized when paired with human ability, augmenting people, and enabling them to make better decisions. We aspire to create technology that empowers everyone to be more productive and drive greater impact within their organizations”.
“We design AI to help our customers and their employees unlock opportunities and focus on meaningful work. Our solutions support human decision-making, improve experiences, and put users in control to decide whether to accept the recommendations provided by our AI-based solutions”.
“Through advancing its AI-related R&D and promoting the utilization of AI in a manner harmonized with society, Sony aims to support the exploration of the potential for each individual to empower their lives, and to contribute to enrichment of our culture and push our civilization forward by providing novel and creative types of Kando. Sony will engage in sustainable social development and endeavor to utilize the power of AI for contributing to global problem-solving and for the development of a peaceful and sustainable society”.
“We help create new sources of income for Hosts sharing their existing spaces and skills, making it possible to empower them financially while fostering connection with people from around the world and supporting local communities in the process”.
“We know that behind every great human achievement, there is a team. We also believe that new technologies can help empower those teams to achieve even more. If we use these technologies (like AI) responsibly and intentionally, then we can supercharge this vision and contribute to better outcomes across our communities”.
Sustainability Considerations
“Improving performance and energy efficiency is a principal goal in each step of our research, development, and design processes. We aim to make every new generation of GPUs faster and more energy efficient than its predecessor. And our technology is driving some of the most important advances for modelling our climate, reducing carbon emissions, and designing mitigation and adaptation strategies in a changing world”.
“AI systems should be assessed regarding their impact on the environment. The development and consumption of AI technologies should align with and support the company’s ESG goals”.
4.2.4. Trustworthiness Goals
Explainability Considerations
“AI systems can be difficult to understand, which can make it challenging to explain their decisions and assess their performance, for example, a medical-diagnosis AI system that can’t explain its decision-making process, or a criminal-risk-assessment AI system that has a high rate of false positives for certain demographic groups”.
“Intelligibility can uncover potential sources of unfairness, help users decide how much trust to place in a system, and generally lead to more usable products”.
“We encourage explainability and transparency of AI-decision-making processes in order to build and maintain trust in AI systems”. “The goal of interpretability is to describe the internals of the system in a way that is understandable to humans. The system should be capable of producing descriptions that are simple enough for a person to understand. It should also use a vocabulary that is meaningful for the user and will enable the user to understand how a decision is made”.
“During the planning and design stages for its products and services that utilize AI, Sony will strive to introduce methods of capturing the reasoning behind the decisions made by AI utilized in said products and services. Additionally, it will endeavor to provide intelligible explanations and information to customers about the possible impact of using these products and services”.
Security Considerations
“Awareness about cybersecurity and the potential for damages caused by increasingly sophisticated cyberattacks remains at the forefront of our security considerations”.
“AI systems should be safe and reliable, guarding the wellbeing of users and yielding results consistent with our values”.
“AI systems can be used in applications such as self-driving cars, military drones, and medical treatments. Ensuring that these systems are safe for their intended users and the public is crucial”.
“We must adopt the highest appropriate level of security and data protection to all hardware and software, ensuring that it is pre-configured into the design, functionalities, processes, technologies, operations, architectures and business models”.
“In order to ensure the integrity of their AI outcomes, businesses must verify that none of these inputs have been corrupted and put rigorous checks in place to ensure data security and integrity”. “In addition to protecting data integrity, data governance is also essential to providing the context that goes along with your AI outcomes”.
Transparency Considerations
“Recognizing that technology can have a profound impact on people and the world, we’ve set priorities that are rooted in fostering positive change and enabling trust and transparency in AI development”.
“As transparency is one of our Trust Principles and core to this framework, we inform customers when AI is being used to make decisions that affect them in material and consequential ways. Customers and users can then inform us of their concerns or let us know when they disagree with decisions. By keeping communications channels open, we intend to build, maintain, and grow the trust that our customers, users, employees, and other stakeholders place in our AI offerings”.
“Users need to be aware that they are interacting with an AI system, and they need the ability to retrace that AI system’s decisions”.
“Users should be provided appropriate disclosures and control over their interactions with AI and its use of their data”.
“These objectives also transparently communicate state about a particular AI/ML solution—from model development to testing, to deployment and further post-deployment actions like monitoring and upgrades. This enables users to be more intentional, responsible, and effective in how they use AI to address their organization’s operational challenges”.
4.2.5. Well-Governance Goals
Accountability Considerations
“AI solutions must be traceable, auditable, and governable in order to be used effectively and responsibly”.
“Accountability for AI solutions and the teams that develop them is essential to responsible development and operations throughout the AI lifecycle. AI tools often have more than one application, including unintended use cases and uses that might not have been foreseeable at the time of development. Companies that develop, deploy, and use AI solutions must take responsibility for their work in this area by implementing appropriate governance and controls to ensure that their AI solutions operate as intended and to help prevent inappropriate use”.
“Individuals in your organization should be accountable for the ideation, design, implementation, and deployment of each AI-powered system they create and/or use—including the outcomes, results, and consequences of its use”.
“We have implemented audit and risk assessments to test our ML models as the baseline of our oversight methodologies. We continue to actively monitor and improve our models and systems to ensure that changes in the underlying data or model conditions do not inappropriately affect the desired results. And we apply our existing compliance, business ethics, and risk management governance structures to our ML development activities”.
“Consider the need for implementing mechanisms to track and review steps taken during development and operation of the ML system, e.g., to trace root causes for problems or meet governance requirements. Evaluate the need to document relevant design decisions and inputs to assist in such reviews. Establishing a traceable record can help internal or external teams evaluate the development and functioning of the ML system”.
“We take ownership over the outcomes of our AI-assisted tools. We will have processes and resources dedicated to receiving and responding to concerns about our AI and taking corrective action as appropriate”.
Participation Considerations
“… Sony will seriously consider the interests and concerns of various stakeholders including its customers and creators, and proactively advance a dialogue with related industries, organizations, academic communities and more… Sony will construct the appropriate channels for ensuring that the content and results of these discussions are provided to officers and employees, including researchers and developers, who are involved in the corresponding businesses, as well as for ensuring further engagement with its various stakeholders”.
“Co-innovation and partnerships are key to harness the power of AI and accelerate the AI journey”.
“Drawing on rigorous and multidisciplinary scientific approaches, we promote thought leadership in this area in close cooperation with a wide range of stakeholders. We will continue to share what we’ve learned to improve AI technologies and practices. Thus, in order to promote cross-industrial approaches to AI risk mitigation, we foster multi-stakeholder networks to share new insights, best practices and information about incidents”.
“We—not just Facebook but also the tech industry, the AI research community, policymakers, advocacy groups, and others—need to collaborate on figuring out how to make AI impact assessment work at scale, based on clear and reasonable standards, so that we can identify and address potential negative AI-related impacts while still creating new AI-powered products that will benefit us all”.
“We are committed to putting in place processes that help us to obtain feedback from our stakeholders and take guidance from experts, internally and externally. We encourage our customers to tell us if something has gone wrong. In those cases, we will investigate and work to fix it”.
“The development and implementation of AI applications should be periodically reviewed by both internal and external legal, ethics, technical and business professionals to ensure ongoing compliance and transparency”.
Regulatory Considerations
“The number one rule we apply when developing AI and data science is ethics and compliance in line with our Trust Charter”.
“Sony, in compliance with laws and regulations as well as applicable internal rules and policies, seeks to enhance the security and protection of customers’ personal data acquired via products and services utilizing AI, and build an environment where said personal data is processed in ways that respect the intention and trust of customers”.
“Engage with legal advisors to assess requirements for and implications of building your ML system. This may include vetting legal rights to use data and models, and determining applicability of laws around privacy, biometrics, anti-discrimination, and other use-case specific regulations. Be mindful of differing legal requirements across states, provinces, and countries, as well as new AI/ML regulation being considered and proposed around the world. Re-visit legal requirements and considerations through future deployment and operations phases”.
“We engage U.S. federal, state, and local governments, the European Union, and other governments around the world to advocate for workable, risk-based regulatory approaches that build trust in AI technology and enable innovation. As our development process continues to evolve to account for new best practices and emerging regulatory frameworks, we remain committed to supporting the delivery of trustworthy AI solutions that provide value to our customers, the workforce, and society”.
“The implementation and use of AI should comply with the letter and spirit of globally applicable laws, be consistent with corporate codes of conduct and align with an evolving consensus on ethical practices. The development and implementation of AI applications should be periodically reviewed by both internal and external legal, ethics, technical and business professionals to ensure ongoing compliance and transparency”.
5. Discussion
6. Conclusions
- Discrepancies in RIT considerations: Why are some key RIT considerations that the academic community and user groups have emphasised rarely mentioned or discussed by high-tech companies, and what are these considerations?
- RIT and corporate social responsibility: How do high-tech companies view the relationship between responsible innovation and corporate social responsibility?
- Influences on RIT guidance: Do internal and external pressures directly influence how high-tech companies shape their RIT guidance, and if so, how? How do collaborations with various stakeholders and vendors in a business network (e.g., in the IoT and IIoT contexts) affect the focal organisation’s RIT policies?
- Comparative perspectives on RIT: How do users’ expectations for RIT differ in priority from the viewpoints advocated by high-tech companies, academia, and the policy community?
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Acceptability Goals

Equitability Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 1 | Avoid bias | Does not create, reinforce, or propagate harmful or unfair biases in all stages of innovation and technology practice, from design to deployment and beyond. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 2 | Guard against discrimination | Upholds the rights of all individuals and groups, embraces the full spectrum of social diversity, and actively prevents any form of discrimination. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 3 | Strive for fairness | Proactively identifies and eliminates obstacles to ensure fair treatment for all and empower every individual equally through innovation and technology. | ☐ | ☐ | ☐ | ☐ | ☐ |

Ethics Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 4 | Human value-based design | Prioritises human values and morals in the innovation process, ensuring that technological outcomes meet functional requirements and align with broader ethical and societal norms. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 5 | Human-in-the-loop mechanism | Embeds appropriate human oversight and intervention into decision-making processes to balance efficiency with ethical considerations. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 6 | Respect and enforce privacy | Incorporates privacy principles at every stage of the innovation lifecycle, ensuring that innovation and technological outcomes consistently prioritise and protect user privacy. | ☐ | ☐ | ☐ | ☐ | ☐ |

Harmlessness Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 7 | Harmless to environment | Ensures respect and protection of the environment, avoiding practices that lead to environmental degradation or the corruption of natural resources. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 8 | Harmless to individual | Does not lead to any harm against any individuals, including physical and psychological harm. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 9 | Harmless to society | Minimises adverse impacts on society and commits to the harmonious progress of technology and society. | ☐ | ☐ | ☐ | ☐ | ☐ |

Accessibility Goals

Adaptability Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 10 | Flexible in nature | Allows individual modification or replacement and can seamlessly operate with other systems to ensure compatibility and ease of integration. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 11 | Future-proof design | Remains adaptable in the face of evolving socio-technical challenges and paradigms and continues to serve intended purposes throughout their lifecycle. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 12 | Universal access | Ensures accessibility for all individuals, irrespective of differences in physical ability, technological, cognitive, or actual usage context. | ☐ | ☐ | ☐ | ☐ | ☐ |

Affordability Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 13 | Cost-effective solution | Provides cost-effective solutions or alternatives to reduce economic disparities in technology adoption. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 14 | Financial feasibility | Reduces costs to allow broad population segments to access and benefit from innovation and technological advances without negative financial implications. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 15 | Value for money | Offers genuine value, ensuring that users receive meaningful benefits or solutions that justify the costs, promoting long-term adoption and utility. | ☐ | ☐ | ☐ | ☐ | ☐ |

Inclusiveness Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 16 | Cultural and contextual sensitivity | Respects shared values while remaining sensitive and attuned to the nuances of diverse cultural norms and contexts. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 17 | Diverse representation | Ensures marginalised or underrepresented groups are included and have representation in innovation and technology practices. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 18 | Localisation-friendly | Localises to match local customs, languages, and preferences, ensuring relevance and acceptance in different geo-cultural contexts. | ☐ | ☐ | ☐ | ☐ | ☐ |

Alignment Goals

Deliberateness Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 19 | Define the purpose | Before taking any initiative or action, clarifies and articulates the intended purpose to ensure alignment with overall goals and values. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 20 | Proactive risk assessment | Anticipates potential negative outcomes, challenges, or pitfalls, and designs strategies to mitigate or avoid them. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 21 | User relevance | Gains a deep understanding of the users’ needs, preferences, and challenges to ensure the solutions provided are directly relevant to and satisfy the end-user’s needs. | ☐ | ☐ | ☐ | ☐ | ☐ |

Meaningfulness Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 22 | Realise human potential | Unlocks humanity’s potential, empowering individuals to address complex challenges effectively with increased capability through complementary collaboration with technology. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 23 | Social license to operate | Strives for continuous community recognition, emphasising legitimacy, trust, and ethical alignment beyond mere regulatory compliance. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 24 | Socially beneficial | Promotes human welfare for both current and future generations, fostering growth, prosperity, and positive societal outcomes. | ☐ | ☐ | ☐ | ☐ | ☐ |

Sustainability Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 25 | Commitments to climate change | Targets a reduction in greenhouse gas emissions and prevention of waste generation, supporting global efforts to adapt to and mitigate climate change. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 26 | Design for longevity | Ensures resources invested in creating products or solutions provide value over the long term and contribute to more sustainable and resilient societies. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 27 | Eco-friendly design | Incorporates environmental considerations starting from the design phase to ensure products or solutions are eco-friendly throughout their lifecycle. | ☐ | ☐ | ☐ | ☐ | ☐ |

Trustworthiness Goals

Explainability Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 28 | Comprehensive explanation | Consistently provides clear and understandable interpretations across a range of inputs and scenarios, rather than being limited to specific instances or datasets. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 29 | Interrogable justification | Provides valid and clear reasons for decisions, operations, or predictions, aligned with pre-defined objectives and standards. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 30 | Intuitive interpretation | Presents operations and outcomes in a manner that is immediately comprehensible to users, irrespective of their technical expertise. | ☐ | ☐ | ☐ | ☐ | ☐ |

Security Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 31 | Data security governance | Adopts ethical, safe, and regulatory-compliant data management mechanisms, ensuring security of organisational data and stakeholders’ privacy, and upholding trust amidst innovation practices. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 32 | Digitally secure | Establishes systematic measures to protect digital assets, data, and user privacy, ensuring user trust and building a resilient digital ecosystem. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 33 | Physically secure | Implements tangible measures to protect hardware and data storage, ensuring operational continuity and guarding against physical threats to innovation and digital ecosystems. | ☐ | ☐ | ☐ | ☐ | ☐ |

Transparency Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 34 | Appropriate disclosure | Transparently discloses issues and matters that substantially affect stakeholders, equipping those engaging with innovation and new technologies with comprehensive information. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 35 | Aware of interaction | Ensures users are consistently informed about their interactions with intelligent systems to promote user autonomy and prevent misuse or unintended consequences. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 36 | Openness of structure | Existence of architecture, training data, algorithms, and operational works that are open, clear, and available for review. | ☐ | ☐ | ☐ | ☐ | ☐ |

Well-Governance Goals

Accountability Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 37 | Redress and remediation | Establishes procedures to rectify any harm or mistakes, compensating affected parties when necessary. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 38 | Responsibility attribution | Clearly identifies parties responsible for decisions, outcomes, or errors arising from the innovation and technology practices. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 39 | Traceable records | Maintains detailed records of decisions, justifications, processes, and outcomes, and ensures the records are accessible to relevant stakeholders. | ☐ | ☐ | ☐ | ☐ | ☐ |

Participation Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 40 | Collaborative governance | Embraces a collaborative governance model, which invites diverse stakeholders to jointly shape and monitor innovation practices, ensuring that technological outcomes resonate with and serve the wider public interest. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 41 | Cooperative design | Facilitates co-design sessions to allow potential users and other stakeholders to directly contribute to the innovation’s design or refinement. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 42 | Feedback loops | Establishes mechanisms that allow continuous feedback from users and stakeholders to adapt the innovation accordingly. | ☐ | ☐ | ☐ | ☐ | ☐ |

Regulatory Considerations

Objectives | | Statements | Low | Medium-Low | Medium | Medium-High | High |
---|---|---|---|---|---|---|---|
OBJ 43 | Consistent with best practice guidelines | Observes industry-specific best practices, standards, and codes of conduct that might be set by professional bodies or associations. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 44 | Legal compliance | Adheres to both the letter and spirit of global laws and regulations, while ensuring that innovation and technological outcomes remain adaptable to evolving regulatory landscapes. | ☐ | ☐ | ☐ | ☐ | ☐ |
OBJ 45 | Periodic and independent review | Regularly conducts independent reviews of innovation and technology systems and adjusts as needed to ensure they function as intended and to pre-emptively deter misuse. | ☐ | ☐ | ☐ | ☐ | ☐ |
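
To make the structure of the Appendix A checklist concrete, the sketch below shows one possible way to record ratings against its 45 objectives and aggregate them across the five goal areas. This is a minimal, hypothetical sketch: the Rating structure, the numeric mapping of the five-point scale, and the per-goal averaging rule are assumptions introduced here for illustration and are not part of the reviewed company policies or of the checklist as presented.

```python
# Hypothetical sketch only: one way to encode ratings against the Appendix A
# checklist. The Rating structure, the numeric values assigned to the
# five-point scale, and the averaging rule are illustrative assumptions.
from dataclasses import dataclass

SCALE = {"Low": 1, "Medium-Low": 2, "Medium": 3, "Medium-High": 4, "High": 5}

@dataclass
class Rating:
    objective_id: str   # e.g., "OBJ 1"
    objective: str      # e.g., "Avoid bias"
    consideration: str  # e.g., "Equitability"
    goal: str           # e.g., "Acceptability"
    level: str          # one of the five scale labels above

def goal_scores(ratings: list[Rating]) -> dict[str, float]:
    """Average the numeric scale values for each goal area."""
    by_goal: dict[str, list[int]] = {}
    for r in ratings:
        by_goal.setdefault(r.goal, []).append(SCALE[r.level])
    return {goal: round(sum(v) / len(v), 2) for goal, v in by_goal.items()}

# Example with two of the 45 objectives listed in Appendix A.
ratings = [
    Rating("OBJ 1", "Avoid bias", "Equitability", "Acceptability", "Medium-High"),
    Rating("OBJ 28", "Comprehensive explanation", "Explainability",
           "Trustworthiness", "Medium"),
]
print(goal_scores(ratings))  # {'Acceptability': 4.0, 'Trustworthiness': 3.0}
```

Equal weighting of objectives is assumed here; an actual assessment could weight considerations differently or report scores per consideration rather than per goal.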
References
- David, A.; Yigitcanlar, T.; Li, R.; Corchado, J.; Cheong, P.; Mossberger, K.; Mehmood, R. Understanding local government digital technology adoption strategies: A PRISMA review. Sustainability 2023, 15, 9645. [Google Scholar] [CrossRef]
- Son, T.; Weedon, Z.; Yigitcanlar, T.; Sanchez, T.; Corchado, J.; Mehmood, R. Algorithmic urban planning for smart and sustainable development: Systematic review of the literature. Sustain. Cities Soc. 2023, 94, 104562. [Google Scholar] [CrossRef]
- Li, W.; Yigitcanlar, T.; Liu, A.; Erol, I. Mapping two decades of smart home research: A systematic scientometric analysis. Technol. Forecast. Soc. Chang. 2022, 179, 121676. [Google Scholar] [CrossRef]
- Yigitcanlar, T.; Li, R.; Beeramoole, P.; Paz, A. Artificial intelligence in local government services: Public perceptions from Australia and Hong Kong. Gov. Inf. Q. 2023, 40, 101833. [Google Scholar] [CrossRef]
- Marasinghe, R.; Yigitcanlar, T.; Mayere, S.; Washington, T.; Limb, M. Computer vision applications for urban planning: A systematic review of opportunities and constraints. Sustain. Cities Soc. 2024, 100, 105047. [Google Scholar] [CrossRef]
- Lewallen, J. Emerging technologies and problem definition uncertainty: The case of cybersecurity. Regul. Gov. 2021, 15, 1035–1052. [Google Scholar] [CrossRef]
- Nili, A.; Desouza, K.; Yigitcanlar, T. What can the public sector teach us about deploying artificial intelligence technologies? IEEE Softw. 2022, 39, 58–63. [Google Scholar] [CrossRef]
- Regona, M.; Yigitcanlar, T.; Xia, B.; Li, R. Opportunities and adoption challenges of AI in the construction industry: A PRISMA review. J. Open Innov. Technol. Mark. Complex. 2022, 8, 45. [Google Scholar] [CrossRef]
- Moqaddamerad, S.; Tapinos, E. Managing business model innovation uncertainties in 5G technology: A future-oriented sensemaking perspective. RD Manag. 2023, 53, 244–259. [Google Scholar] [CrossRef]
- Lubberink, R.; Blok, V.; Van Ophem, J.; Omta, O. Lessons for responsible innovation in the business context: A systematic literature review of responsible, social and sustainable innovation practices. Sustainability 2017, 9, 721. [Google Scholar] [CrossRef]
- Millar, C.; Lockett, M.; Ladd, T. Disruption: Technology, innovation and society. Technol. Forecast. Soc. Chang. 2018, 129, 254–260. [Google Scholar] [CrossRef]
- Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Ethics Gov. Policies Artif. Intell. 2021, 144, 19–39. [Google Scholar]
- Jakobsen, S.; Fløysand, A.; Overton, J. Expanding the field of responsible research and innovation (RRI): From responsible research to responsible innovation. Eur. Plan. Stud. 2019, 27, 2329–2343. [Google Scholar] [CrossRef]
- Yigitcanlar, T.; Sabatini-Marques, J.; da-Costa, E.; Kamruzzaman, M.; Ioppolo, G. Stimulating technological innovation through incentives: Perceptions of Australian and Brazilian firms. Technol. Forecast. Soc. Chang. 2019, 146, 403–412. [Google Scholar] [CrossRef]
- Boenink, M.; Kudina, O. Values in responsible research and innovation: From entities to practices. J. Responsible Innov. 2020, 7, 450–470. [Google Scholar] [CrossRef]
- Stilgoe, J.; Owen, R.; Macnaghten, P. Developing a framework for responsible innovation. Res. Policy 2013, 42, 1568–1580. [Google Scholar] [CrossRef]
- Ribeiro, B.; Smith, R.; Millar, K. A mobilising concept? Unpacking academic representations of responsible research and innovation. Sci. Eng. Ethics 2017, 23, 81–103. [Google Scholar] [CrossRef]
- Thapa, R.; Iakovleva, T.; Foss, L. Responsible research and innovation: A systematic review of the literature and its applications to regional studies. Eur. Plan. Stud. 2019, 27, 2470–2490. [Google Scholar] [CrossRef]
- Owen, R.; Macnaghten, P.; Stilgoe, J. Responsible research and innovation: From science in society to science for society, with society. Sci. Public Policy 2012, 39, 751–760. [Google Scholar] [CrossRef]
- Von Schomberg, R. A vision of responsible research and innovation. In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society; John Wiley & Sons: Hoboken, NJ, USA, 2013; pp. 51–74. [Google Scholar] [CrossRef]
- Pavie, X.; Carthy, D.; Scholten, V. Responsible Innovation: From Concept to Practice; World Scientific: Singapore, 2014. [Google Scholar]
- Li, W.; Yigitcanlar, T.; Browne, W.; Nili, A. The making of responsible innovation and technology: An overview and framework. Smart Cities 2023, 6, 1996–2034. [Google Scholar] [CrossRef]
- Gurzawska, A. Responsible innovation in business: Perceptions, evaluation practices and lessons learnt. Sustainability 2021, 13, 1826. [Google Scholar] [CrossRef]
- Li, Y.; Jiang, L.; Yang, P. How to drive corporate responsible innovation? A dual perspective from internal and external drivers of environmental protection enterprises. Front. Environ. Sci. 2023, 10, 1091859. [Google Scholar] [CrossRef]
- Hadj, T. Effects of corporate social responsibility towards stakeholders and environmental management on responsible innovation and competitiveness. J. Clean. Prod. 2020, 250, 119490. [Google Scholar] [CrossRef]
- Adomako, S.; Nguyen, N. Green creativity, responsible innovation, and product innovation performance: A study of entrepreneurial firms in an emerging economy. Bus. Strategy Environ. 2023, 32, 4413–4425. [Google Scholar] [CrossRef]
- Xie, X.; Wu, Y.; Tejerob, C. How responsible innovation builds business network resilience to achieve sustainable performance during global outbreaks: An extended resource-based view. IEEE Trans. Eng. Manag. 2022, 1–15. [Google Scholar] [CrossRef]
- Jarmai, K.; Tharani, A.; Nwafor, C. Responsible innovation in business. In Responsible Innovation; SpringerBriefs in Research and Innovation Governance; Jarmai, K., Ed.; Springer: Dordrecht, The Netherlands, 2020. [Google Scholar]
- Barros, A.; Sindhgatta, R.; Nili, A. Scaling up chatbots for corporate service delivery systems. Commun. ACM 2021, 64, 88–97. [Google Scholar] [CrossRef]
- Adomako, S.; Tran, M. Environmental collaboration, responsible innovation, and firm performance: The moderating role of stakeholder pressure. Bus. Strategy Environ. 2022, 31, 1695–1704. [Google Scholar] [CrossRef]
- Makasi, T.; Nili, A.; Desouza, K.; Tate, M. Public service values and chatbots in the public sector: Reconciling designer efforts and user expectations. In Proceedings of the 55th Hawaii International Conference on System Sciences, Honolulu, HI, USA, 4–7 January 2022; University of Hawai’i at Manoa: Honolulu, HI, USA, 2022; pp. 2334–2343. [Google Scholar]
- Lukovics, M.; Nagy, B.; Kwee, Z.; Yaghmaei, E. Facilitating adoption of responsible innovation in business through certification. J. Responsible Innov. 2023, 10, 1–19. [Google Scholar] [CrossRef]
- Tian, H.; Tian, J. The mediating role of responsible innovation in the relationship between stakeholder pressure and corporate sustainability performance in times of crisis: Evidence from selected regions in China. Int. J. Environ. Res. Public Health 2021, 18, 7277. [Google Scholar] [CrossRef]
- Chin, T.; Caputo, F.; Shi, Y.; Calabrese, M.; Aouina-Mejri, C.; Papa, A. Depicting the role of cross-cultural legitimacy for responsible innovation in Asian-Pacific business models: A dialectical systems view of Yin-Yang harmony. Corp. Soc. Responsib. Environ. Manag. 2022, 29, 2083–2093. [Google Scholar] [CrossRef]
- Salzmann, O.; Ionescu-Somers, A.; Steger, U. The business case for corporate sustainability: Literature review and research options. Eur. Manag. J. 2005, 23, 27–36. [Google Scholar] [CrossRef]
- Carroll, A.; Shabana, K. The business case for corporate social responsibility: A review of concepts, research and practice. Int. J. Manag. Rev. 2010, 12, 85–105. [Google Scholar] [CrossRef]
- Kolk, A.; Van Tulder, R. International business, corporate social responsibility and sustainable development. Int. Bus. Rev. 2010, 19, 119–125. [Google Scholar] [CrossRef]
- Searcy, C. Corporate sustainability performance measurement systems: A review and research agenda. J. Bus. Ethics 2012, 107, 239–253. [Google Scholar] [CrossRef]
- Novitzky, P.; Bernstein, M.J.; Blok, V.; Braun, R.; Chan, T.T.; Lamers, W.; Loeber, A.; Meijer, I.; Lindner, R.; Griessler, E. Improve alignment of research policy and societal values. Science 2020, 369, 39–41. [Google Scholar] [CrossRef] [PubMed]
- Völker, T.; Mazzonetto, M.; Slaattelid, R.; Strand, R. Translating tools and indicators in territorial RRI. Front. Res. Metr. Anal. 2023, 7, 1038970. [Google Scholar] [CrossRef] [PubMed]
- Blok, V.; Lemmens, P. The emerging concept of responsible innovation. Three reasons why it is questionable and calls for a radical transformation of the concept of innovation. In Responsible Innovation 2; Koops, B.J., Oosterlaken, I., Romijn, H., Swierstra, T., van den Hoven, J., Eds.; Springer: Cham, Switzerland, 2015. [Google Scholar]
- Loureiro, P.; Conceição, C. Emerging patterns in the academic literature on responsible research and innovation. Technol. Soc. 2019, 58, 101148. [Google Scholar] [CrossRef]
- Hellström, T. Systemic innovation and risk: Technology assessment and the challenge of responsible innovation. Technol. Soc. 2003, 25, 369–384. [Google Scholar] [CrossRef]
- Von Schomberg, R. Towards Responsible Research and Innovation in the Information and Communication Technologies and Security Technologies Fields. Available at SSRN 2436399, 2011. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2436399 (accessed on 1 December 2023).
- Baregheh, A.; Rowley, J.; Sambrook, S. Towards a multidisciplinary definition of innovation. Manag. Decis. 2009, 47, 1323–1339. [Google Scholar] [CrossRef]
- Voegtlin, C.; Scherer, A.; Stahl, G.; Hawn, O. Grand societal challenges and responsible innovation. J. Manag. Stud. 2022, 59, 1–28. [Google Scholar] [CrossRef]
- Lehoux, P.; Silva, H.; Denis, J.; Miller, F.; Pozelli Sabio, R.; Mendell, M. Moving toward responsible value creation: Business model challenges faced by organizations producing responsible health innovations. J. Prod. Innov. Manag. 2021, 38, 548–573. [Google Scholar] [CrossRef]
- Jarmai, K. Learning from sustainability-oriented innovation. In Responsible Innovation; SpringerBriefs in Research and Innovation Governance; Jarmai, K., Ed.; Springer: Dordrecht, The Netherlands, 2020. [Google Scholar]
- Auer, A.; Jarmai, K. Implementing responsible research and innovation practices in SMEs: Insights into drivers and barriers from the Austrian medical device sector. Sustainability 2017, 10, 17. [Google Scholar] [CrossRef]
- Chatfield, K.; Borsella, E.; Mantovani, E.; Porcari, A.; Stahl, B. An investigation into risk perception in the ICT industry as a core component of responsible research and innovation. Sustainability 2017, 9, 1424. [Google Scholar] [CrossRef]
- Gurzawska, A.; Mäkinen, M.; Brey, P. Implementation of Responsible Research and Innovation (RRI) practices in industry: Providing the right incentives. Sustainability 2017, 9, 1759. [Google Scholar] [CrossRef]
- Centers for Disease Control and Prevention (CDC). Policy Analysis. 2023. Available online: https://www.cdc.gov/policy/polaris/policyprocess/policyanalysis/index.html (accessed on 28 November 2023).
- Cook, L.; LaVan, H.; Zilic, I. An exploratory analysis of corporate social responsibility reporting in US pharmaceutical companies. J. Commun. Manag. 2018, 22, 197–211. [Google Scholar] [CrossRef]
- Micozzi, N.; Yigitcanlar, T. Understanding smart city policy: Insights from the strategy documents of 52 local governments. Sustainability 2022, 14, 10164. [Google Scholar] [CrossRef]
- Olivera, J.; Ford, J.; Sowden, S.; Bambra, C. Conceptualisation of health inequalities by local healthcare systems: A document analysis. Health Soc. Care Community 2022, 30, e3977–e3984. [Google Scholar] [CrossRef]
- CompaniesMarketCap. Largest Tech Companies by Market Cap. 2023. Available online: https://companiesmarketcap.com/tech/largest-tech-companies-by-market-cap/ (accessed on 25 June 2023).
- Federal Trade Commission. Non-HSR Reported Acquisitions by Select Technology Platforms, 2010–2019: A Report of the FTC. 2021. Available online: https://www.ftc.gov/system/files/documents/reports/non-hsr-reported-acquisitions-select-technology-platforms-2010-2019-ftc-study/p201201technologyplatformstudy2021.pdf (accessed on 25 June 2023).
- Capgemini; Efma. Unprecedented Access to Capital Investment Fuels InsurTech and BigTech Maturity and Customer Adoption, World Insurtech Report. 2021. Available online: https://www.capgemini.com/in-en/wp-content/uploads/sites/18/2021/09/WORLD-INSURTECH-REPORT-2021.pdf (accessed on 25 June 2023).
- Congressional Research Service. Big Tech in Financial Services. 2022. Available online: https://crsreports.congress.gov/product/pdf/R/R47104 (accessed on 25 June 2023).
- Kerzel, U. Enterprise AI canvas integrating artificial intelligence into business. Appl. Artif. Intell. 2021, 35, 1–12. [Google Scholar] [CrossRef]
- Elliott, K.; Price, R.; Shaw, P.; Spiliotopoulos, T.; Ng, M.; Coopamootoo, K.; van Moorsel, A. Towards an equitable digital society: Artificial intelligence (AI) and corporate digital responsibility (CDR). Society 2021, 58, 179–188. [Google Scholar] [CrossRef]
- Brauner, P.; Hick, A.; Philipsen, R.; Ziefle, M. What does the public think about artificial intelligence? A criticality map to understand bias in the public perception of AI. Front. Comput. Sci. 2023, 5, 1113903. [Google Scholar] [CrossRef]
- Zhdanov, D.; Bhattacharjee, S.; Bragin, M. Incorporating FAT and privacy aware AI modeling approaches into business decision making frameworks. Decis. Support Syst. 2022, 155, 113715. [Google Scholar] [CrossRef]
- Anagnostou, M.; Karvounidou, O.; Katritzidaki, C.; Kechagia, C.; Melidou, K.; Mpeza, E.; Konstantinidis, I.; Kapantai, E.; Berberidis, C.; Magnisalis, I.; et al. Characteristics and challenges in the industries towards responsible AI: A systematic literature review. Ethics Inf. Technol. 2022, 24, 37. [Google Scholar] [CrossRef]
- Kunz, W.; Wirtz, J. Corporate digital responsibility (CDR) in the age of AI: Implications for interactive marketing. J. Res. Interact. Mark. 2023. [Google Scholar] [CrossRef]
- IBM Everyday Ethics for Artificial Intelligence. 2022. Available online: https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf (accessed on 25 June 2023).
- Intel. 2022–2023 Corporate Responsibility Report. 2023. Available online: https://csrreportbuilder.intel.com/pdfbuilder/pdfs/CSR-2022-23-Full-Report.pdf (accessed on 25 June 2023).
- Oracle. Oracle’s Guide to Ethical Considerations in AI Development and Deployment. 2023. Available online: https://www.oracle.com/artificial-intelligence/ai-ethics/ (accessed on 25 June 2023).
- Microsoft. Research Collection: Research Supporting Responsible AI. 2020. Available online: https://www.microsoft.com/en-us/research/blog/research-collection-research-supporting-responsible-ai/ (accessed on 25 June 2023).
- NXP Semiconductors. The Morals of Algorithms. 2020. Available online: https://www.nxp.com/docs/en/white-paper/AI-ETHICAL-FRAMEWORK-WP.pdf (accessed on 25 June 2023).
- Workday. Workday’s Continued Diligence to Ethical AI and ML Trust. 2022. Available online: https://blog.workday.com/en-us/2022/workdays-continued-diligence-ethical-ai-and-ml-trust.html (accessed on 25 June 2023).
- Xiaomi. Xiaomi Trustworthy AI White Paper. 2021. Available online: https://trust.mi.com/pdf/Xiaomi_Trustworthy_AI_White_Paper_EN_May_2021.pdf (accessed on 25 June 2023).
- Cisco. Cisco Principles for Responsible Artificial Intelligence. 2022. Available online: https://www.cisco.com/c/dam/en_us/about/doing_business/trust-center/docs/cisco-responsible-artificial-intelligence-principles.pdf (accessed on 25 June 2023).
- Google. 2022 AI Principles Progress Update. 2022. Available online: https://ai.google/static/documents/ai-principles-2022-progress-update.pdf (accessed on 25 June 2023).
- Nakao, Y.; Stumpf, S.; Ahmed, S.; Naseer, A.; Strappelli, L. Toward involving end-users in interactive human-in-the-loop AI fairness. ACM Trans. Interact. Intell. Syst. 2022, 12, 1–30. [Google Scholar] [CrossRef]
- Balasubramaniam, N.; Kauppinen, M.; Rannisto, A.; Hiekkanen, K.; Kujala, S. Transparency and explainability of AI systems: From ethical guidelines to requirements. Inf. Softw. Technol. 2023, 159, 107197. [Google Scholar] [CrossRef]
- Sanderson, C.; Douglas, D.; Lu, Q.; Schleiger, E.; Whittle, J.; Lacey, J.; Newnham, G.; Hajkowicz, S.; Robinson, C.; Hansen, D. AI ethics principles in practice: Perspectives of designers and developers. IEEE Trans. Technol. Soc. 2023, 4, 171–187. [Google Scholar] [CrossRef]
- Malik, M.; Kanwal, L. Impact of corporate social responsibility disclosure on financial performance: Case study of listed pharmaceutical firms of Pakistan. J. Bus. Ethics 2018, 150, 69–78. [Google Scholar] [CrossRef]
- Akbari, M.; Rezvani, A.; Shahriari, E.; Zúñiga, M.; Pouladian, H. Acceptance of 5G technology: Mediation role of Trust and Concentration. J. Eng. Technol. Manag. 2020, 57, 101585. [Google Scholar] [CrossRef]
- Chouaibi, S.; Rossi, M.; Siggia, D.; Chouaibi, J. Exploring the moderating role of social and ethical practices in the relationship between environmental disclosure and financial performance: Evidence from ESG companies. Sustainability 2021, 14, 209. [Google Scholar] [CrossRef]
- Kelly, S.; Kaye, S.; Oviedo-Trespalacios, O. What factors contribute to acceptance of artificial intelligence? A systematic review. Telemat. Inform. 2022, 77, 101925. [Google Scholar] [CrossRef]
- Webster, P. Tech companies criticise health AI regulations. Lancet 2023, 402, 517–518. [Google Scholar] [CrossRef] [PubMed]
- McStay, A. Emotional AI and EdTech: Serving the public good? Learn. Media Technol. 2020, 45, 270–283. [Google Scholar] [CrossRef]
- Adobe. Adobe’s Commitment to AI Ethics. 2023. Available online: https://www.adobe.com/content/dam/cc/en/ai-ethics/pdfs/Adobe-AI-Ethics-Principles.pdf (accessed on 25 June 2023).
- Sony. Sony Group’s Initiatives for Responsible AI. 2023. Available online: https://www.sony.com/en/SonyInfo/sony_ai/responsible_ai.html (accessed on 25 June 2023).
- VMware. Why Your Organization Needs a Set of Ethical Principles for AI. 2022. Available online: https://octo.vmware.com/why-your-organization-needs-ethical-principles-for-ai/ (accessed on 25 June 2023).
- Atlassian. Atlassian’s Responsible Technology Principles. 2023. Available online: https://www.atlassian.com/trust/responsible-tech-principles (accessed on 25 June 2023).
- Schneider Electric. AI knowledge Base—Responsible and Ethical AI. 2023. Available online: https://www.se.com/ww/en/about-us/artificial-intelligence/knowledge-base.jsp (accessed on 25 June 2023).
- NVIDIA Corporate Responsibility Report 2022. 2022. Available online: https://images.nvidia.com/aem-dam/en-zz/Solutions/csr/FY2022-NVIDIA-Corporate-Responsibility.pdf (accessed on 25 June 2023).
- Qualcomm. Qualcomm Corporate Responsibility Report. 2022. Available online: https://www.qualcomm.com/content/dam/qcomm-martech/dm-assets/documents/2022-qualcomm-corporate-responsibility-report.pdf (accessed on 25 June 2023).
- Palantir. Enabling Responsible AI in Palantir Foundry. 2023. Available online: https://blog.palantir.com/enabling-responsible-ai-in-palantir-foundry-ac23e3ad7500 (accessed on 25 June 2023).
- Samsung. Samsung Electronics Sustainability Report 2022. 2022. Available online: https://images.samsung.com/is/content/samsung/assets/uk/sustainability/overview/Samsung_Electronics_Sustainability_Report_2022.pdf (accessed on 25 June 2023).
- Towse, A.; Mauskopf, J. Affordability of new technologies: The next frontier. Value Health 2018, 21, 249–251. [Google Scholar] [CrossRef] [PubMed]
- Li, W.; Yigitcanlar, T.; Erol, I.; Liu, A. Motivations, barriers and risks of smart home adoption: From systematic literature review to conceptual framework. Energy Res. Soc. Sci. 2021, 80, 102211. [Google Scholar] [CrossRef]
- Tuncer, I. The relationship between IT affordance, flow experience, trust, and social commerce intention: An exploration using the SOR paradigm. Technol. Soc. 2021, 65, 101567. [Google Scholar] [CrossRef]
- Salesforce. Meet Salesforce’s Trusted AI Principles. 2023. Available online: https://blog.salesforceairesearch.com/meet-salesforces-trusted-ai-principles/ (accessed on 25 June 2023).
- Automatic Data Processing. ADP: Ethics in Artificial Intelligence. 2022. Available online: https://www.adp.com/-/media/adp/redesign2018/pdf/data-privacy/ai-ethics-statement.pdf?rev=934d7063975f402889c4ed8610324c36&hash=9FA7B34280D71654740CC51D14F74E79 (accessed on 25 June 2023).
- Amazon. Introducing AWS AI Service Cards: A New Resource to Enhance Transparency and Advance Responsible AI. 2022. Available online: https://aws.amazon.com/blogs/machine-learning/introducing-aws-ai-service-cards-a-new-resource-to-enhance-transparency-and-advance-responsible-ai/ (accessed on 25 June 2023).
- Baidu. Responsible AI. 2023. Available online: https://esg.baidu.com/en/article/Responsible_AI (accessed on 25 June 2023).
- Airbnb. Airbnb’s Work on Human Rights. 2021. Available online: https://news.airbnb.com/airbnbs-work-on-human-rights/ (accessed on 25 June 2023).
- Dell. Dell Technologies Principles for Ethical Artificial Intelligence. 2022. Available online: https://www.delltechnologies.com/asset/en-us/solutions/business-solutions/briefs-summaries/principles-for-ethical-ai.pdf (accessed on 25 June 2023).
- Equinix. 4 Factors That Define Responsible AI. 2023. Available online: https://blog.equinix.com/blog/2023/01/09/4-factors-that-define-responsible-ai/ (accessed on 25 June 2023).
- Meta Platforms. Facebook’s Five Pillars of Responsible AI. 2021. Available online: https://ai.meta.com/blog/facebooks-five-pillars-of-responsible-ai/ (accessed on 25 June 2023).
- Makridakis, S. The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures 2017, 90, 46–60. [Google Scholar] [CrossRef]
- French, A.; Shim, J.P.; Risius, M.; Larsen, K.R.; Jain, H. The 4th Industrial Revolution powered by the integration of AI, blockchain, and 5G. Commun. Assoc. Inf. Syst. 2021, 49, 6. [Google Scholar] [CrossRef]
- Ahmed, T.; Karmaker, C.L.; Nasir, S.B.; Moktadir, M.A.; Paul, S.K. Modeling the artificial intelligence-based imperatives of industry 5.0 towards resilient supply chains: A post-COVID-19 pandemic perspective. Comput. Ind. Eng. 2023, 177, 109055. [Google Scholar] [CrossRef]
- Buhmann, A.; Fieseler, C. Towards a deliberative framework for responsible innovation in artificial intelligence. Technol. Soc. 2021, 64, 101475. [Google Scholar] [CrossRef]
- Tubadji, A.; Huang, H.; Webber, D.J. Cultural proximity bias in AI-acceptability: The importance of being human. Technol. Forecast. Soc. Chang. 2021, 173, 121100. [Google Scholar] [CrossRef]
- Hua, D.; Petrina, N.; Young, N.; Cho, J.G.; Poon, S.K. Understanding the factors influencing acceptability of AI in medical imaging domains among healthcare professionals: A scoping review. Artif. Intell. Med. 2023, 147, 102698. [Google Scholar] [CrossRef]
- Laux, J.; Wachter, S.; Mittelstadt, B. Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk. Regul. Gov. 2023; in press. [Google Scholar] [CrossRef]
- AI, H. High-Level Expert Group on Artificial Intelligence; Ethics guidelines for trustworthy AI, 6; European Commission: Brussels, Belgium, 2019. [Google Scholar]
- Sovacool, B.K.; Kester, J.; Noel, L.; de Rubens, G.Z. Energy injustice and Nordic electric mobility: Inequality, elitism, and externalities in the electrification of vehicle-to-grid (V2G) transport. Ecol. Econ. 2019, 157, 205–217. [Google Scholar] [CrossRef]
- Altinay, F.; Ossiannilsson, E.; Altinay, Z.; Dagli, G. Accessible services for smart societies in learning. Int. J. Inf. Learn. Technol. 2020, 38, 75–89. [Google Scholar] [CrossRef]
- Early, J.; Hernandez, A. Digital disenfranchisement and COVID-19: Broadband internet access as a social determinant of health. Health Promot. Pract. 2021, 22, 605–610. [Google Scholar] [CrossRef] [PubMed]
- Brand, T.; Blok, V. Responsible innovation in business: A critical reflection on deliberative engagement as a central governance mechanism. J. Responsible Innov. 2019, 6, 4–24. [Google Scholar] [CrossRef]
- Padilla-Lozano, C.P.; Collazzo, P. Corporate social responsibility, green innovation and competitiveness–causality in manufacturing. Compet. Rev. Int. Bus. J. 2021, 32, 21–39. [Google Scholar] [CrossRef]
- Wang, L.; Qu, G.; Chen, J. Towards a meaningful innovation paradigm: Conceptual framework and practice of leading world-class enterprise. Chin. Manag. Stud. 2022, 16, 942–964. [Google Scholar] [CrossRef]
- Carayannis, E.G.; Grigoroudis, E.; Stamati, D.; Valvi, T. Social business model innovation: A quadruple/quintuple helix-based social innovation ecosystem. IEEE Trans. Eng. Manag. 2019, 68, 235–248. [Google Scholar] [CrossRef]
- Hagedoorn, J.; Haugh, H.; Robson, P.; Sugar, K. Social innovation, goal orientation, and openness: Insights from social enterprise hybrids. Small Bus. Econ. 2023, 60, 173–198. [Google Scholar] [CrossRef]
- Sáez-Martínez, F.J.; Ferrari, G.; Mondéjar-Jiménez, J. Eco-innovation: Trends and approaches for a field of study. Innovation 2015, 17, 1–5. [Google Scholar] [CrossRef]
- Nickel, P.J.; Franssen, M.; Kroes, P. Can we make sense of the notion of trustworthy technology? Knowl. Technol. Policy 2010, 23, 429–444. [Google Scholar] [CrossRef]
- Liu, H.; Wang, Y.; Fan, W.; Liu, X.; Li, Y.; Jain, S.; Liu, Y.; Jain, A.; Tang, J. Trustworthy ai: A computational perspective. ACM Trans. Intell. Syst. Technol. 2022, 14, 1–59. [Google Scholar] [CrossRef]
- Petkovic, D. It is Not “Accuracy vs. Explainability”—We Need Both for Trustworthy AI Systems. IEEE Trans. Technol. Soc. 2023, 4, 46–53. [Google Scholar] [CrossRef]
- Chi, O.H.; Jia, S.; Li, Y.; Gursoy, D. Developing a formative scale to measure consumers’ trust toward interaction with artificially intelligent (AI) social robots in service delivery. Comput. Hum. Behav. 2021, 118, 106700. [Google Scholar] [CrossRef]
- Jacovi, A.; Marasović, A.; Miller, T.; Goldberg, Y. Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual, 3–10 March 2021; pp. 624–635. [Google Scholar]
- Keding, C.; Meissner, P. Managerial overreliance on AI-augmented decision-making processes: How the use of AI-based advisory systems shapes choice behavior in R&D investment decisions. Technol. Forecast. Soc. Chang. 2021, 171, 120970. [Google Scholar]
- Shneiderman, B. Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Trans. Interact. Intell. Syst. (TiiS) 2020, 10, 1–31. [Google Scholar] [CrossRef]
- Wells, L.; Bednarz, T. Explainable ai and reinforcement learning—A systematic review of current approaches and trends. Front. Artif. Intell. 2021, 4, 550030. [Google Scholar] [CrossRef]
- Hickman, E.; Petrin, M. Trustworthy AI and corporate governance: The EU’s ethics guidelines for trustworthy artificial intelligence from a company law perspective. Eur. Bus. Organ. Law Rev. 2021, 22, 593–625. [Google Scholar] [CrossRef]
- Omrani, N.; Rivieccio, G.; Fiore, U.; Schiavone, F.; Agreda, S.G. To trust or not to trust? An assessment of trust in AI-based systems: Concerns, ethics and contexts. Technol. Forecast. Soc. Chang. 2022, 181, 121763. [Google Scholar] [CrossRef]
- Bacq, S.; Aguilera, R.V. Stakeholder governance for responsible innovation: A theory of value creation, appropriation, and distribution. J. Manag. Stud. 2022, 59, 29–60. [Google Scholar] [CrossRef]
- Hahn, G. Industry 4.0: A supply chain innovation perspective. Int. J. Prod. Res. 2020, 58, 1425–1441. [Google Scholar] [CrossRef]
- Huang, S.; Wang, B.; Li, X.; Zheng, P.; Mourtzis, D.; Wang, L. Industry 5.0 and Society 5.0—Comparison, complementation and co-evolution. J. Manuf. Syst. 2022, 64, 424–428. [Google Scholar] [CrossRef]
- Kovacs, O. Inclusive industry 4.0 in Europe: Japanese lessons on socially responsible industry 4.0. Soc. Sci. 2022, 11, 29. [Google Scholar] [CrossRef]
- Ivanov, D. The industry 5.0 framework: Viability-based integration of the resilience, sustainability, and human-centricity perspectives. Int. J. Prod. Res. 2023, 61, 1683–1695. [Google Scholar] [CrossRef]
- Saihi, A.; Awad, M.; Ben-Daya, M. Quality 4.0: Leveraging Industry 4.0 technologies to improve quality management practices–a systematic review. Int. J. Qual. Reliab. Manag. 2023, 40, 628–650. [Google Scholar] [CrossRef]
- Sai Manohar, S.; Pandit, S.R. Core values and beliefs: A study of leading innovative organizations. J. Bus. Ethics 2014, 125, 667–680. [Google Scholar] [CrossRef]
- Chen, J.; Sun, C.; Liu, J. Corporate social responsibility, consumer sensitivity, and overcapacity. Manag. Decis. Econ. 2022, 43, 544–554. [Google Scholar] [CrossRef]
- Segarra-Oña, M.; Peiró-Signes, Á.; Mondéjar-Jiménez, J. Twisting the twist: How manufacturing & knowledge-intensive firms excel over manufacturing & operational and all service sectors in their eco-innovative orientation. J. Clean. Prod. 2016, 138, 19–27. [Google Scholar]
Company | Region | Profile |
---|---|---|
Microsoft | USA | Microsoft is a technology company specialising in software, hardware, cloud services, and digital solutions, driving innovation in numerous sectors, from computing to business applications. |
Alphabet (Google) | USA | Alphabet is the parent company of Google, focusing on search, advertising, cloud computing, AI, and digital services, with ventures in healthcare, autonomous vehicles, and other technological innovations. |
Amazon | USA | Amazon is an e-commerce company providing cloud services via AWS, streaming with Prime Video, and branching into AI, devices, and retail, driving transformative consumer and business solutions. |
NVIDIA | USA | NVIDIA is a technology company renowned for its graphics processing units (GPUs) for gaming, also venturing into AI, deep learning, automotive AI solutions, and data centre advancements. |
Meta (Facebook) | USA | Meta Platforms, formerly Facebook, focuses on social media services, augmented and virtual reality, advertising, and digital communication tools, aspiring to build a comprehensive metaverse for global users. |
Samsung | Republic of Korea | Samsung is an electronics company specialising in smartphones, TVs, semiconductors, and home appliances, while also venturing into software, digital services, and cutting-edge technology innovations. |
Oracle | USA | Oracle is a technology company specialising in database software, cloud solutions, enterprise software products, and hardware systems, serving businesses with integrated technology stacks. |
Adobe | USA | Adobe is a software company known for creative and multimedia solutions, digital marketing tools, and document management, driving digital content creation and optimisation across platforms. |
Salesforce | USA | Salesforce is a cloud-based software company specialising in customer relationship management (CRM) solutions and offering a suite of enterprise applications for marketing, sales, service, and analytics. |
Cisco | USA | Cisco is a technology company focusing on networking hardware, software, telecommunications equipment, and cybersecurity solutions, enabling seamless connectivity and digital transformation for businesses. |
Intel | USA | Intel is a semiconductor manufacturer specialising in microprocessors, chipsets, and integrated solutions, driving advancements in computing, data centres, AI, and broader technology ecosystems. |
Qualcomm | USA | Qualcomm is a semiconductor manufacturer specialising in wireless technology innovations, designing chips for smartphones, and pioneering advances in 5G, IoT, and AI across various platforms. |
IBM | USA | IBM is a technology company focusing on cloud computing, AI, enterprise software, and hardware, offering integrative business solutions and IT consultancy. |
Sony | Japan | Sony is an electronics company specialising in electronics, entertainment, gaming (PlayStation), music, film production, and professional broadcasting solutions, driving innovation in media and consumer technologies. |
Schneider Electric | France | Schneider Electric is a global specialist in energy management and automation, offering solutions for homes, buildings, data centres, infrastructure, and industries, driving sustainable and integrated efficiency. |
Automatic Data Processing | USA | Automatic Data Processing (ADP) is a global provider specialising in human capital management solutions, offering payroll, tax, HR services, and analytics to businesses of varying sizes. |
Airbnb | USA | Airbnb is a global online platform connecting travellers with hosts, specialising in unique accommodations, experiences, and evolving into travel services, redefining how people experience new destinations. |
Equinix | USA | Equinix is a technology company specialising in data centre services, connecting businesses to their customers and partners inside interconnected data centres, driving digital business performance through platform solutions. |
VMware | USA | VMware is a software company specialising in cloud infrastructure, virtualisation, networking, security, and digital workspace technology, empowering businesses with integrated IT solutions for modern computing. |
Workday | USA | Workday is a cloud-based software provider focusing on human capital management, financial management, and enterprise planning, offering adaptive solutions for business insights and growth. |
Baidu | China | Baidu is a technology company specialising in internet services, AI research, autonomous driving, and digital advertising, often referred to as China’s premier search engine platform. |
NXP Semiconductors | The Netherlands | NXP Semiconductors is a technology company specialising in secure connectivity solutions for embedded applications, driving innovations in automotive, industrial, and IoT markets. |
Atlassian | Australia | Atlassian is a software company providing collaboration and productivity tools for teams, including Jira, Confluence, and Bitbucket, serving developers and businesses to enhance workflow and project management. |
Dell | USA | Dell is a technology company specialising in personal computers, servers, storage solutions, and network devices, also offering software and IT services to drive digital transformation for businesses. |
Xiaomi | China | Xiaomi is an electronics company, known for smartphones, smart home devices, and IoT products, emphasising innovative technology, design, and cost-effective solutions for a connected lifestyle. |
Palantir | USA | Palantir is a software company specialising in big data analytics, offering platforms for data integration, decision-making, and operational intelligence, serving government agencies and private sectors. |
Node | Sub-Node |
---|---|
Acceptability Goals | Equitability considerations, Ethics considerations, Harmlessness considerations |
Accessibility Goals | Adaptability considerations, Affordability considerations, Inclusiveness considerations |
Alignment Goals | Deliberateness considerations, Meaningfulness considerations, Sustainability considerations |
Trustworthiness Goals | Explainability considerations, Security considerations, Transparency considerations |
Well-governance Goals | Accountability considerations, Participation considerations, Regulatory considerations |
Node | Sub-Node | Policy Documents Mentioning Sub-Node | Frequency of Sub-Node | Total Frequency per Node
---|---|---|---|---
Acceptability Goals | Equitability considerations | 20 | 20 | 51
Acceptability Goals | Ethics considerations | 18 | 24 | 51
Acceptability Goals | Harmlessness considerations | 7 | 7 | 51
Accessibility Goals | Adaptability considerations | 11 | 14 | 31
Accessibility Goals | Affordability considerations | 2 | 2 | 31
Accessibility Goals | Inclusiveness considerations | 13 | 15 | 31
Alignment Goals | Deliberateness considerations | 8 | 9 | 33
Alignment Goals | Meaningfulness considerations | 13 | 18 | 33
Alignment Goals | Sustainability considerations | 6 | 6 | 33
Trustworthiness Goals | Explainability considerations | 15 | 15 | 65
Trustworthiness Goals | Security considerations | 22 | 33 | 65
Trustworthiness Goals | Transparency considerations | 15 | 17 | 65
Well-governance Goals | Accountability considerations | 13 | 16 | 33
Well-governance Goals | Participation considerations | 8 | 8 | 33
Well-governance Goals | Regulatory considerations | 9 | 9 | 33
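To make the arithmetic in the table explicit, the per-node totals are simply the sums of the sub-node frequencies within each goal node (e.g., 20 + 24 + 7 = 51 for the acceptability goals). The short Python sketch below re-tallies these figures; it is an illustration added for readers, not code or tooling from the original study, and the dictionary structure and variable names are our own.

```python
# Sub-node frequencies transcribed from the quantitative content analysis table above.
# Illustrative re-tally only; this is not code from the original study.
sub_node_frequencies = {
    "Acceptability Goals": {
        "Equitability considerations": 20,
        "Ethics considerations": 24,
        "Harmlessness considerations": 7,
    },
    "Accessibility Goals": {
        "Adaptability considerations": 14,
        "Affordability considerations": 2,
        "Inclusiveness considerations": 15,
    },
    "Alignment Goals": {
        "Deliberateness considerations": 9,
        "Meaningfulness considerations": 18,
        "Sustainability considerations": 6,
    },
    "Trustworthiness Goals": {
        "Explainability considerations": 15,
        "Security considerations": 33,
        "Transparency considerations": 17,
    },
    "Well-governance Goals": {
        "Accountability considerations": 16,
        "Participation considerations": 8,
        "Regulatory considerations": 9,
    },
}

# Sum sub-node frequencies to reproduce the "Total Frequency per Node" column.
for node, subs in sub_node_frequencies.items():
    print(f"{node}: {sum(subs.values())}")
# Expected output: 51, 31, 33, 65, and 33, respectively.
```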
Equitability Considerations | Objectives | Statements
---|---|---
OBJ 1 | Avoid bias | Does not create, reinforce, or propagate harmful or unfair biases in all stages of innovation and technology practice, from design to deployment and beyond.
OBJ 2 | Guard against discrimination | Upholds the rights of all individuals and groups, embraces the full spectrum of social diversity, and actively prevents any form of discrimination.
OBJ 3 | Strive for fairness | Proactively identifies and eliminates obstacles to ensure fair treatment for all and empower every individual equally through innovation and technology.

Ethics Considerations | Objectives | Statements
---|---|---
OBJ 4 | Human value-based design | Prioritises human values and morals in the innovation process, ensuring that technological outcomes meet functional requirements and align with broader ethical and societal norms.
OBJ 5 | Human-in-the-loop mechanism | Embeds appropriate human oversight and intervention into decision-making processes to balance efficiency with ethical considerations.
OBJ 6 | Respect and enforce privacy | Incorporates privacy principles at every stage of the innovation lifecycle, ensuring that innovation and technological outcomes consistently prioritise and protect user privacy.

Harmlessness Considerations | Objectives | Statements
---|---|---
OBJ 7 | Harmless to environment | Ensures respect and protection of the environment, avoiding practices that lead to environmental degradation or the corruption of natural resources.
OBJ 8 | Harmless to individual | Does not cause harm to any individual, whether physical or psychological.
OBJ 9 | Harmless to society | Minimises adverse impacts on society and commits to the harmonious progress of technology and society.

Adaptability Considerations | Objectives | Statements
---|---|---
OBJ 10 | Flexible in nature | Allows individual modification or replacement and can seamlessly operate with other systems to ensure compatibility and ease of integration.
OBJ 11 | Future-proof design | Remains adaptable in the face of evolving socio-technical challenges and paradigms and continues to serve its intended purposes throughout its lifecycle.
OBJ 12 | Universal access | Ensures accessibility for all individuals, irrespective of differences in physical ability, technological or cognitive capacity, or actual usage context.

Affordability Considerations | Objectives | Statements
---|---|---
OBJ 13 | Cost-effective solution | Provides cost-effective solutions or alternatives to reduce economic disparities in technology adoption.
OBJ 14 | Financial feasibility | Reduces costs to allow broad population segments to access and benefit from innovation and technological advances without negative financial implications.
OBJ 15 | Value for money | Offers genuine value, ensuring that users receive meaningful benefits or solutions that justify the costs, promoting long-term adoption and utility.

Inclusiveness Considerations | Objectives | Statements
---|---|---
OBJ 16 | Cultural and contextual sensitivity | Respects shared values while remaining sensitive and attuned to the nuances of diverse cultural norms and contexts.
OBJ 17 | Diverse representation | Ensures marginalised or underrepresented groups are included and have representation in innovation and technology practices.
OBJ 18 | Localisation-friendly | Adapts to local customs, languages, and preferences, ensuring relevance and acceptance in different geo-cultural contexts.

Deliberateness Considerations | Objectives | Statements
---|---|---
OBJ 19 | Define the purpose | Before taking any initiative or action, clarifies and articulates the intended purpose to ensure alignment with overall goals and values.
OBJ 20 | Proactive risk assessment | Anticipates potential negative outcomes, challenges, or pitfalls, and designs strategies to mitigate or avoid them.
OBJ 21 | User relevance | Gains a deep understanding of users’ needs, preferences, and challenges to ensure the solutions provided are directly relevant to and satisfy end-users’ needs.

Meaningfulness Considerations | Objectives | Statements
---|---|---
OBJ 22 | Realise human potential | Unlocks humanity’s potential, empowering individuals to address complex challenges more effectively through complementary collaboration with technology.
OBJ 23 | Social license to operate | Strives for continuous community recognition, emphasising legitimacy, trust, and ethical alignment beyond mere regulatory compliance.
OBJ 24 | Socially beneficial | Promotes human welfare for both current and future generations, fostering growth, prosperity, and positive societal outcomes.

Sustainability Considerations | Objectives | Statements
---|---|---
OBJ 25 | Climate change commitments | Targets a reduction in greenhouse gas emissions and the prevention of waste generation, supporting global efforts to adapt to and mitigate climate change.
OBJ 26 | Design for longevity | Ensures resources invested in creating products or solutions provide value over the long term and contribute to more sustainable and resilient societies.
OBJ 27 | Eco-friendly design | Incorporates environmental considerations from the design phase onwards to ensure products or solutions are eco-friendly throughout their lifecycle.

Explainability Considerations | Objectives | Statements
---|---|---
OBJ 28 | Comprehensive explanation | Consistently provides clear and understandable interpretations across a range of inputs and scenarios, rather than being limited to specific instances or datasets.
OBJ 29 | Interrogable justification | Provides valid and clear reasons for decisions, operations, or predictions, aligned with pre-defined objectives and standards.
OBJ 30 | Intuitive interpretation | Presents operations and outcomes in a manner that is immediately comprehensible to users, irrespective of their technical expertise.

Security Considerations | Objectives | Statements
---|---|---
OBJ 31 | Data security governance | Adopts ethical, safe, and regulatory-compliant data management mechanisms, ensuring the security of organisational data and stakeholders’ privacy, and upholding trust amidst innovation practices.
OBJ 32 | Digitally secure | Establishes systematic measures to protect digital assets, data, and user privacy, ensuring user trust and building a resilient digital ecosystem.
OBJ 33 | Physically secure | Implements tangible measures to protect hardware and data storage, ensuring operational continuity and guarding against physical threats to innovation and digital ecosystems.

Transparency Considerations | Objectives | Statements
---|---|---
OBJ 34 | Appropriate disclosure | Transparently discloses issues and matters that substantially affect stakeholders, equipping those engaging with innovation and new technologies with comprehensive information.
OBJ 35 | Aware of interaction | Ensures users are consistently informed about their interactions with intelligent systems to promote user autonomy and prevent misuse or unintended consequences.
OBJ 36 | Openness of structure | Makes the architecture, training data, algorithms, and operational workings open, clear, and available for review.

Accountability Considerations | Objectives | Statements
---|---|---
OBJ 37 | Redress and remediation | Establishes procedures to rectify any harm or mistakes, compensating affected parties when necessary.
OBJ 38 | Responsibility attribution | Clearly identifies the parties responsible for decisions, outcomes, or errors arising from innovation and technology practices.
OBJ 39 | Traceable records | Maintains detailed records of decisions, justifications, processes, and outcomes, and ensures these records are accessible to relevant stakeholders.

Participation Considerations | Objectives | Statements
---|---|---
OBJ 40 | Collaborative governance | Embraces a collaborative governance model that invites diverse stakeholders to jointly shape and monitor innovation practices, ensuring that technological outcomes resonate with and serve the wider public interest.
OBJ 41 | Cooperative design | Facilitates co-design sessions that allow potential users and other stakeholders to directly contribute to the innovation’s design or refinement.
OBJ 42 | Feedback loops | Establishes mechanisms for continuous feedback from users and stakeholders so the innovation can be adapted accordingly.

Regulatory Considerations | Objectives | Statements
---|---|---
OBJ 43 | Consistent with best practice guidelines | Observes industry-specific best practices, standards, and codes of conduct that may be set by professional bodies or associations.
OBJ 44 | Legal compliance | Adheres to both the letter and spirit of global laws and regulations, while ensuring that innovation and technological outcomes remain adaptable to evolving regulatory landscapes.
OBJ 45 | Periodic and independent review | Regularly conducts independent reviews of innovation and technology systems and adjusts them as needed to ensure they function as intended and to pre-emptively deter misuse.
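One way a review team might operationalise the 45 objectives above is as a simple scoring checklist applied to each policy document. The sketch below is purely hypothetical: the objective codes and labels are taken from the tables, but the checklist mechanics, class names, and scoring rule are our own illustration rather than an instrument described in this study.

```python
# Hypothetical self-assessment checklist built from a handful of the objectives above.
# Objective codes and labels follow the tables; everything else is illustrative only.
from dataclasses import dataclass

@dataclass
class Objective:
    code: str           # e.g. "OBJ 1"
    consideration: str  # sub-node the objective belongs to
    name: str           # short objective label

CHECKLIST = [
    Objective("OBJ 1", "Equitability", "Avoid bias"),
    Objective("OBJ 5", "Ethics", "Human-in-the-loop mechanism"),
    Objective("OBJ 28", "Explainability", "Comprehensive explanation"),
    Objective("OBJ 38", "Accountability", "Responsibility attribution"),
]

def score_document(addressed_codes: set) -> float:
    """Return the share of checklist objectives a policy document addresses."""
    hits = sum(1 for obj in CHECKLIST if obj.code in addressed_codes)
    return hits / len(CHECKLIST)

# Example: a document that addresses bias avoidance and human oversight only.
print(score_document({"OBJ 1", "OBJ 5"}))  # 0.5
```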
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).