Article

The Impact of Artificial Intelligence on Sustainable Development in Electronic Markets

Department of Electronic Commerce, School of Information Management, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(6), 3568; https://doi.org/10.3390/su14063568
Submission received: 12 January 2022 / Revised: 2 March 2022 / Accepted: 7 March 2022 / Published: 18 March 2022

Abstract

With the emergence of artificial intelligence (AI), the technological revolution has transformed human lives and processes, empowering the products and services in today's marketplaces. AI introduces new ways of doing jobs and business, and of exploring new global market opportunities. On the other hand, however, it presents many challenges to comprehend. Therefore, our study's main objective was to examine the behavioral, cultural, ethical, social, and economic challenges of AI-enabled products and services in consumer markets and discuss how businesses might shape their approaches to address AI-related ethical issues. AI poses numerous social, ethical, and behavioral difficulties for people, organizations, and societies that endanger the sustainable development of economies. These fundamental challenges arising from AI technologies have raised serious questions for the sustainable development of electronic markets. On this basis, the current study presents a framework highlighting these issues. Our research method was a systematic review; we looked for explicit information and systematic methods that strengthen the credibility of research and reduce bias. This paper is of great importance, as it highlights several behavioral, societal, ethical, and cultural aspects of electronic markets that were not presented in previous studies. Some key issues are the security and privacy of consumers, AI biases, individual autonomy, wellbeing, and unemployment. Companies that use AI systems therefore need to be socially responsible and make AI systems as secure as possible to promote the sustainable development of countries. The results suggest that AI has undoubtedly transformed life, with both positive and negative effects. The main aim regarding AI, however, should be to use it for the overall goals of humanity. Moreover, authorities operating in e-business environments need to create proper rules and regulations and make these systems as secure as possible for people.

1. Introduction

Artificial intelligence has evolved dramatically in recent years, while security breaches and privacy invasions have become more frequent in many e-commerce companies, significantly affecting how business operations are conducted. The present study presents the challenges AI has brought for consumers and societies in electronic markets. With the advancements in AI technology, its applicability in different areas, from personal to professional life, has evoked an extensive range of ethical debates. AI decision-making programs, moral values, accountability, and transparency have been topics of discussion in much research [1,2,3]. E-markets have recently evolved into a tool for increasing efficiency and speed in practically every commercial operation [4]. Johnson [5] defines e-markets as “inter-organizational trading systems that seek to smooth out supply chain inefficiencies by facilitating buyer–supplier information exchange on products, services, prices, and transactions in an integrated and synchronous Internet-based environment”. E-markets have changed the nature of business transactions and operations worldwide [6]. Electronic business, unlike traditional trade, avoids physical contact between customers and sellers. This, however, poses a slew of technological, security, privacy, trust, and legal issues, along with other difficulties [7].
The major goal of AI's ethical development is to increase trust in and acceptance of the technology. As AI evolves in every company worldwide, there should be a drive to create secure and reliable systems [8]. AI has transformed the world and captured widespread interest; it can be defined as the ability of a machine to demonstrate human-like actions [9]. On the one hand, AI represents freedom, efficiency, miracles, and promises; on the other, it represents human reliance on technology, obsolescence, and inactivity. AI-enabled systems raise ethical and moral concerns regarding cyber-security, unemployment, decision-making, and other issues [10]. With the advancement of AI systems, these challenges have become much more difficult to address. As a result, there is a pressing need to study AI ethical standards and regulations to keep the technology under human control [11,12]. The objective of this paper is to help create a secure electronic business environment. For a country's continuing growth and sustainable development, ending poverty must work hand-in-hand with plans that build economic growth and fulfill diverse social needs, such as education, health, social protection, and environmental protection.
COVID-19's emergence has wreaked havoc on the global economy and disrupted the lives of billions of people around the world. The outbreak is a unique wake-up call, exposing the severe imbalances and failures addressed in the 2030 Agenda for Sustainable Development and the Paris Climate Agreement. Bold initiatives can help the world return to the Sustainable Development Goals by taking advantage of a crisis in which customary policies and societal norms have been disrupted. The SDGs are critical for developing sustainable, more inclusive, robust, and more resilient societies. Recently, advances in computing technologies, processing speed, Internet technology, and the availability of big data have created a revolution in reproducing, or even surpassing, human intellectual capacities. Consequently, AI-driven forces are currently a disruptive force globally in many sectors, such as communication, transportation, healthcare, manufacturing, and finance. The market value of AI technologies was $16.06 billion in 2017 and is estimated to reach $190.61 billion by 2025 [9]. AI systems such as digital personal assistants (e.g., Apple's Siri and Amazon's Alexa), robots, and other AI devices have become very popular and essential parts of everyday life. At the same time, researchers and practitioners consider AI a valuable tool because of its speed, efficiency, infinite memory, and self-learning capability. Sundar Pichai, the CEO of Google, said that “AI is one of the most important things humanity is working on. It is more profound than electricity or fire” [6].
Studies mainly focus on concerns that have arisen due to AI's adoption, which has had a significant impact on our daily lives. The fundamental issues posed by AI technology have prompted serious concerns about the electronic market's long-term viability. As AI poses a slew of social, ethical, and behavioral difficulties for individuals, businesses, and societies, it jeopardizes the economy's long-term viability. Therefore, this study aims to examine the current behavioral, cultural, ethical, social, and economic constraints of AI-enabled products and services in consumer markets and discusses how enterprises might shape their actions toward socially responsible behaviors that address AI-related ethical issues. Consumers respect AI's superior skills but are concerned about the security and privacy implications of such sophisticated technologies [13,14]. Sustainable development is an essential concept in the e-commerce economy [15]. Sustainable development has become very popular in business operations for consumers and enterprises; therefore, sustainability is considered an effective mode of maintaining competitiveness and appealing to more consumers in virtual markets [16]. In e-commerce businesses, sustainable development models greatly influence economic, societal, and environmental dimensions [17]. Moreover, the use of sustainable development in e-commerce, along with the proper balance of its individual dimensions, may have a positive impact on an enterprise's efficiency and effectiveness [18].
In the next industrial revolution, safety and privacy should be the top priorities for AI systems, with a focus on keeping the systems as ethical as feasible and under control. Previous studies discussed the opportunities, positive aspects, and efficiencies of AI technology in the market but neglected the negative aspects. As previously identified, the security, privacy, and social issues related to AI should also be discussed for the sustainable development of economies [19,20,21,22,23]. To fill this gap, the current study focuses on the following research question: “How does AI affect behavioral, psychological, ethical, social, and cultural issues in electronic markets?” This study uses systematic review as a research approach because it is based on detailed data and systematic methods that improve research credibility and reduce biases. In academic and practitioner research, comprehensive literature reviews are among the most influential research methodologies. The research adds to the body of knowledge on AI at both the individual and market levels. First, this study examines the specific challenges of AI, including behavioral, psychological, and ethical issues, and how they affect human life. Second, this paper discusses the security and privacy problems connected with AI in electronic marketplaces, along with AI's legal and accountability issues. All product and service markets where customers make online purchases and exchange their information are covered in this article. Such a detailed description of the impact of AI on people's and society's behavior should help businesses comprehend the importance of developing ethical norms in, and laws for, electronic markets to protect consumer safety.

2. Literature Review

Advancements in AI challenge human behavior, culture, and values, and disadvantage some segments of the population. AI changes the way humans interact with each other, which in turn creates new challenges for them and for society as a whole [24,25]. Artificial intelligence has changed human behavior and culture, having a significant impact on human psychology. Implementing AI systems in organizations has generated many ethical and social challenges for people and the firms themselves [11]. AI is a concept defined as “a system's ability to correctly interpret external data, to learn from such data and to use that learning to achieve specific goals and tasks through flexible adaptation” [12]. AI brings about new ways of accomplishing tasks and business, and new market prospects worldwide.
AI poses several difficulties. As a result, this study's main goal is to address the essential concerns about AI technologies, which do or could significantly impact our daily lives. The fundamental issues posed by AI technology have prompted serious concerns about the electronic market's long-term viability [26]. Therefore, this paper examines the impacts of AI on human life, human needs, and the economy. These factors are illustrated in Figure 1.
As shown in Figure 1, this section discusses factors highlighting the behavioral, cultural, psychological, ethical, and social issues arising from advancements in AI technologies and electronic markets. The ethical, social, cultural, and legal challenges that result from technological achievements are also discussed in this study. Furthermore, this article examines the impacts of AI on the market and economy, and how AI has transformed traditional business procedures into cutting-edge business behavior. However, in this modern era of technology, a business must face a number of obstacles. Artificial intelligence (AI) has changed human behavior (psychology) and business structures. This advancement, however, has both positive and negative effects on human life, posing challenges for businesses and consumers. All of these topics are discussed in further detail below.

2.1. Behavioral, Cultural, and Psychological Issues

The evolution of technology provides many advantages in terms of work. Still, it raises unrealistic expectations and social challenges related to AI technologies, complicated by inadequate information about the value and benefits of implementing them [27]. Researchers have debated the social implications of AI, particularly the potential job losses due to the emergence of AI machines, a topic that has received much attention in the media and other forums. The human workforce is changing and evolving as a result of AI. With humans losing occupations to machines, the true problem is identifying new responsibilities requiring specialized human skills. This adds to society's pressures, alters human behavior, and stresses people mentally, forcing them to strive even harder to survive [28]. According to PwC, more than seven million current jobs will be replaced by machines in the UK alone between 2017 and 2037. Frey and Osborne [14] also examined around 700 occupations facing the possibility of replacement and found that 47 percent of jobs are at risk of being entirely replaced by machines and algorithms. This workforce substitution will hurt individuals' social standing through unemployment [29,30]. This alarming situation would change people's way of living and could be very challenging [31]. AI is becoming so proficient at certain jobs that it may have a profound impact on society.
Risse [17] argued that AI could disturb working patterns, affecting the status of individuals as members of society [32]. Humans, on the other hand, are concentrating on utilizing human attributes to advance problem-solving and usher in a new era of technology with a combined AI and human-centric workforce [25,33,34]. The current advancements in AI aim to help society by motivating advanced research in various domains, ranging from money and law to scientific concerns such as security, verification, control, and validation [14]. However, AI might create trouble for users, or even much of society, if a device involved in a major system gets hacked or crashes [35]. As AI becomes more involved in our automobiles, planes, and trading, there will be serious concerns; managing lethal autonomous weapons, for that matter, is a significant worry regarding AI technology [13,28]. AI is evolving fast, and systems such as super-intelligence may spark a wave of intellectual discovery that leaves human brains in the dust [14]. On the other hand, super-intelligence systems and similarly innovative technologies might help the world with diseases, scarcity, and warfare, so the advancement of strong AI might be the most notable in history [36]. Apart from that, the main thing to note about AI is that it is a system without any human-like feelings, so there is no reason to expect that any AI might become malicious or benevolent in the future [37]. AI decisions depend entirely on programming, without access to feelings and emotions, but that is not necessarily a good thing: these decisions might have unintended consequences for the humans involved [38]. Bill Gates, Stephen Hawking, Steve Wozniak, and other public figures in science and technology have started to stress the risks associated with AI development and are joined by many AI analysts. They argue that since AI technology could become stronger than any human, we have no idea how it will behave in the future [39]. There is a probability that humans will be constrained by super-intelligence systems of their own making [40,41].
Data power AI algorithms, and as more data about each individual's demographics are collected, our privacy is jeopardized. Interactions with technology are a significant problem for society, as they have already altered life. Using AI for everyday tasks, such as searching for information, navigating, and purchasing goods and services online with the help of virtual assistants such as Siri or Alexa, has become common [13,42]. These positives might help drive acceptance of AI systems, but these changes could also blur the line between humans and robots, to the point that it may become impossible to differentiate between them. Such communication systems (i.e., Siri and Alexa) might also cause harm, as suggested by Nomura et al. [43], who argue that these technologies tend to be highly polarizing and can cause stress and anxiety, resulting in avoidance behaviors towards machines. Negative attitudes and emotions arise because some individuals might struggle to accept novelty in technology [43,44].
Moreover, people who spend more time using these technologies tend to be more compassionate toward them. Some researchers acknowledge the advantages of AI technologies but also articulate their concerns, since AI, intentionally or not, could cause massive destruction if not appropriately managed and checked. Researchers argue that existing research and development in AI would help improve understanding of, and preparation for, potential adverse effects, thereby enhancing the positives of AI technologies while avoiding the risks [14,45].

2.2. Ethical and Social Issues

The term “artificial intelligence ethics” refers to the branch of the ethics of technology specific to AI systems. It divides into concerns about the behavior of the humans who design, make, use, and treat artificially intelligent systems, and concerns about the behavior of the systems themselves [31]. Using AI systems for daily tasks provides new kinds of work opportunities and brings new legal and ethical concerns associated with psychological practices. With the development of AI technology, there have been many ethical and social issues concerning the activities of humans and the control of technologies that function autonomously [46]. Isaac Asimov, a well-known author of science fiction, stated ethical dilemmas regarding the usage of intelligent machines in the early 1940s in his groundbreaking “Three Laws of Robotics” [31]. According to these laws, intelligent machines must not harm any human being, must obey humans' orders, and must be able to defend their own existence. Later, Asimov added that intelligent robots must not harm humanity. Nick Bostrom [30] also argued that an artificial intelligence system must not endanger humankind and its evolution.
For this reason, the authors of [47] suggested the use of intelligent machines in a real and controlled environment to negate any significant crisis or unpredictable behavior. The evolution of society at the technological level with the advancement of AI cannot be stopped. Still, we can implant a code of ethics with preventive measures and manage all activities, anticipating all possible outcomes to minimize the risks associated with this technology [47,48]. Asimov's three laws framed these ethical dilemmas and showed that even when certain instructions are applied to a system, rules tend to fail when confronted with a distinct style of thinking.
AI can satisfy humans at the product level when it is free from biases and promotes fairness [49]. Although most experts disagree on when and whether super AI will arrive, they all believe it should integrate sufficiently with consumer moral norms. Furthermore, several studies stress the need for ethical considerations in socio-technical approaches, at both the customer and societal levels [19,30,34]. It is critical to establish what impacts AI technologies have on society, including cyber-security issues, unemployment, and consumer privacy concerns, all of which must be evaluated and addressed given AI's rapid rise. In the context of AI, the societal issues highlighted include the potential for large-scale unemployment, reduced autonomy, and a decline in wellbeing. Due to the rise of AI technologies, many people are currently losing their jobs as machines replace them, and this situation is worsening day by day with the advancements in technology.
AI could threaten the autonomy of individuals [11]. For instance, AI-enabled systems serve the majority of Web advertisements. These systems use data from various sources, including social media sites, websites, public records, and browser history, to target customers with adverts tailored to their preferences. Individual autonomy may be harmed by such highly focused adverts, since they manipulate people's preferences, deny them the opportunity to reflect on their own decisions, and reduce the space for autonomous decision-making [11,50]. Another problem with such technology-based advertising is that it relies on past behavior and disregards current preferences, attitudes, and emotions, as AI algorithms cannot access this information [50].
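To make this mechanism concrete, the following minimal Python sketch scores an advert purely from a user's past browsing topics. The function names, topics, and scoring rule are invented for illustration and do not come from the paper or any real ad platform; the point is that nothing in the computation can reflect the user's current preferences or emotions.

```python
# Hypothetical sketch of preference-based ad scoring from past behavior.
# Topic names, the scoring rule, and the data are invented for illustration.
from collections import Counter

def score_ad(browsing_history: list[str], ad_keywords: set[str]) -> float:
    """Score an ad by its overlap with topics inferred from past browsing."""
    topic_counts = Counter(browsing_history)      # frequency of each past topic
    total = sum(topic_counts.values()) or 1       # avoid division by zero
    # Relevance = share of past activity that matches the ad's keywords.
    return sum(topic_counts[k] for k in ad_keywords) / total

history = ["running_shoes", "running_shoes", "marathon", "laptops"]
print(score_ad(history, {"running_shoes", "marathon"}))  # 0.75

# The limitation discussed above is visible here: the score is computed
# entirely from historical data; current preferences, attitudes, and
# emotions never enter the calculation.
```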
AI platforms such as Facebook, Instagram, and YouTube are widely used worldwide. They have had detrimental ethical and social effects by engaging people online, resulting in addictive behaviors related to smartphones and social media platforms that distract users from healthier activities [51,52]. Additionally, there is mounting research evidence that a high level of engagement with social media platforms negatively impacts society and mental health, particularly among youth [52,53,54]. Digital addiction is widespread and causes disturbances that negatively influence individual academic or organizational performance, quality of life, and relationships [51,54,55].
Moreover, researchers have also noted that individuals and organizations lack trust in AI systems and have ethical concerns about the online sharing of data [25,56]. The rapid change in AI systems and technologies is intensifying legal and ethical issues, and it is not yet clear how these issues and challenges can be resolved. Adequate policies, ethical guidelines, rules and regulations, and legal frameworks should be developed to avoid the misuse of AI systems [57]. Gupta and Kumari [58] highlight and reinforce the legal and ethical challenges, such as the interoperability of AI systems and data sharing, that come with the greater use of AI technologies. AI systems can display a level of discrimination even though the choices made do not involve human beings, highlighting the importance of transparency in AI algorithms [25,49,58].

2.3. AI Effects on Market and Economy

The market is the place where buyers and sellers exchange goods and services. With the emergence of AI, online markets have changed their operating patterns: online platforms and social media platforms provide products and services to consumers. The economy, on the other hand, is defined as the management of financial matters for a community or business. AI has provided a boost to the world economy, especially for those markets that have adopted it well. Researchers have discussed the impacts of AI on markets, which have changed the traditional ways of buying and selling. These electronic markets have affected price dispersion, information gathering, product search costs, and market efficiency. In a conventional market, consumers incur substantial costs when collecting information about the features of products and services [59]. However, advancements in AI technology in electronic markets, including product demonstration, parametric searching, and various shopping mediators, have made it easy to search for products and services online and have made search costs negligible. Additionally, in electronic markets, buyers have more product offerings and choices, which leads to more competition, ultimately reducing the prices of products and services [59,60].
Information search cost theory likewise states that if there is a cost to acquiring product or service information, sellers may charge different prices according to their expenses, resulting in price dispersion in the market. In electronic markets, the issue of price dispersion is reduced, as the usual search cost largely disappears [61]. Even though online markets reduce information-gathering costs, their impact on market efficiency is inconclusive. Many researchers propose that price dispersion still exists in these online markets because of buyer and seller heterogeneities [60,62,63,64].
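The mechanism from information search cost theory can be illustrated with a small simulation. In this hypothetical sketch, sellers post heterogeneous prices and each buyer inspects k randomly chosen sellers, buying from the cheapest one seen; a larger k stands in for the cheaper search of electronic markets. All prices and parameters are invented.

```python
# Minimal simulation of information search cost theory. All prices and
# parameters are invented. Buyers inspect k random sellers and buy from
# the cheapest seen; larger k models the cheaper search of e-markets.
import random
import statistics

random.seed(0)
posted_prices = [random.uniform(80, 120) for _ in range(50)]  # heterogeneous sellers

def paid_prices(k: int, n_buyers: int = 1000) -> list[float]:
    """Price each buyer actually pays after sampling k sellers."""
    return [min(random.sample(posted_prices, k)) for _ in range(n_buyers)]

for k in (1, 3, 10):  # k=1 ~ costly search; k=10 ~ near-costless online search
    p = paid_prices(k)
    print(f"k={k:2d}  mean={statistics.mean(p):6.2f}  stdev={statistics.pstdev(p):5.2f}")
```

With these illustrative numbers, the mean paid price falls and its dispersion shrinks as k grows, mirroring the argument that negligible search costs compress price dispersion, while residual dispersion persists whenever buyers do not sample every seller.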
Grover, Lim, and Ayyagari [51] believe that higher price transparency and lower prices in online markets can discourage buyers' participation in these markets. They also proposed that online markets could provide additional benefits to consumers to compensate for high or low transparency levels and thereby attract more users. Even though online markets are far more efficient than traditional markets due to easily available information, lower search costs, and reduced price dispersion, these electronic markets face numerous obstacles.
Researchers such as Ackoff [65] and Grover, Lim, and Ayyagari [66] have discussed the negative side of online markets, including the issue of information overload caused by the sheer volume of available information, and studied its effects on consumers' intentions and price dispersion. Successful online markets such as Amazon and eBay can create practical entry barriers for new entrants by building large communities of members. Such online marketplaces provide numerous alternatives to consumers, which can ultimately create a cognitive burden on them and correspondingly reduce the effectiveness of their decisions [66,67,68,69]. Additionally, such large online communities tend toward undesirable network externalities, such as increased congestion, higher consumption, and slower access to business resources [70]. Another issue faced in online markets is a trust deficit: most transactions in online markets occur between people who have never met, so there is risk involved in electronic markets.
In many cases, sellers may not provide accurate and detailed information, resulting in a lack of trust in such marketplaces [60,71]. A current topic of debate is that online market systems are connected to online servers and transfer information about buyers daily. These systems can potentially be breached by illegitimate means, and people's data can be accessed. Moreover, the policy frameworks, rules, and regulations governing how online markets control, manage, store, disseminate, and use clients' information are not yet clear. This is a critical phase for online markets before they reach their full potential: robust security standards and a clear set of data-driven privacy policies need to be established. Data security in traditional computing devices has been given a lot of importance, but the same rigorous security standards are currently not found in electronic markets, leading to trust issues in such markets [60]. Furthermore, a seller's credibility is compromised if buyers do not provide positive feedback about that vendor [72].
Another negative aspect of the AI revolution is that it has harmed the market and economy by causing mass unemployment and unpredictability in the labor market. According to the McKinsey Global Institute (MGI), AI machines might eventually replace 1.1 billion jobs worldwide, costing $15.8 trillion in wages [73]. This revolution is continuing: driverless cars have already replaced many drivers, chatbots are replacing call center agents, and many intellectual and creative jobs are likewise challenged by AI machines [74]. In this modern era, these changes are enormous. Researchers have stated that we need to build devices that support us in doing our work and create financial stability in markets and the overall economy [73,74]. Therefore, we need to be very careful when implementing AI systems in our daily work practices [75]. Governments have an essential role in addressing the societal issue of job replacement due to AI technology and need to develop policies and regulations that would benefit people and society, especially by controlling unemployment [11]. The mass introduction of AI technology can also significantly impact organizations' and institutes' working practices and investments, creating economic challenges. Implementing AI technology in any organization or institute and training its employees in the new technology requires a large financial investment [76]. AI technology can significantly impact the global market and economy. McKinsey's report on the economic challenges of AI [77] suggests how organizations should successfully adopt AI in markets. The report develops a narrative that organizations likely to adopt AI technology could experience profits or losses depending on their countries. This could further widen the gap between developed and developing countries and increase the imbalance between rich and poor [25,77].

2.4. Security and Privacy Risk

Security is defined as protecting sensitive information from online vulnerability and ensuring confidentiality, authenticity, and data integrity. Privacy, in turn, is depicted as the promise that users retain control over sensitive information. To provide a secure environment to users, AI systems must focus on users' data, improvements in privacy technologies, and regulations about managing users' and objects' identities [78]. In recent years there have been only a few attempts to clearly and precisely define a “right to privacy”. Some experts assert that the right to privacy “should not be defined as a separate legal right” at all; by their reasoning, existing laws relating to privacy in general should be sufficient. A working definition of a “right to privacy” is therefore proposed here: the right to privacy refers to our ability to maintain a domain around us that includes all aspects of ourselves, such as our bodies, homes, property, thoughts, feelings, secrets, and identities. The right to privacy allows us to control which portions of our domain can be accessed by others and the extent, methods, and timing of such access. Due to technological advancements, especially social media, displaying one's identity online has several drawbacks in today's environment. These difficulties have various components, such as online discussions, image sharing, location-sharing data, and in-group actions that reveal one's personality and character to others. Therefore, transparency, visibility, and privacy are compromised by sharing on social media, and research emphasizes that users of these social media applications are not in control of their own identities, conversations, information, and images, leading to all sorts of security and privacy risks [79]. Recent research in the UK found that issues related to privacy and security on social media platforms are significant concerns for young people and children [79,80]. The use of AI devices has recently raised issues relating to the information they collect. The first issue concerns the bodies that collect the data, which need to be careful in storing the information people provide; the second concern is keeping that data secure from cyberattacks or other threatening parties [81].
In terms of consumers, AI systems also increase the chances of accessing, collecting, and sharing consumers' personal information, which is morally wrong and can be risky [31]. Privacy is currently one of the most significant issues worldwide due to the data-centric nature of AI systems. With the development of AI technologies, controlling people's information has become challenging, as there are many ways to spread it; AI is neither controlled nor regulated in terms of data sharing [11]. In particular, consumer privacy has various dimensions, including the collection of data, unauthorized use of that data, and improper data access by third parties [3,82]. Suppose a customer provides information at one store; in that case, they should be assured that this personal information will not be shared with any third party, and ensuring this is the ethical responsibility of the firm collecting the data. However, due to the abundance of information available on the Internet and social media platforms, such standards of privacy and security regulation for every individual are not possible, which raises the issue of privacy and cybersecurity for consumers in electronic markets [83,84]. Highly interactive products increase the opportunities for gathering, utilizing, and transmitting information, which challenges consumer privacy protection; they pose greater risks than AI-enabled products with low interactivity. For example, highly interactive AI devices such as the Apple Watch, sensor-equipped smart clocks, and digital assistants (e.g., Siri or Alexa) not only gather a lot of data quantitatively, but also collect a large variety of information (e.g., audio, video, textual, or sensory information). Much of this sensory information is collected by AI devices without consumers' awareness and informed consent.
The information collected by firms is used for purposes that are not necessarily unethical [11]. However, cybersecurity is a concept closely linked with privacy. Recently, there has been a burst of data breaches in various systems, including social media (e.g., Google, Facebook, Instagram, LinkedIn, and Yahoo), software developers (e.g., Adobe, where more than 150 million users' passwords were compromised), retailers (e.g., more than 40 million debit and credit cards were stolen in stores), banks (e.g., the website of the US Federal Reserve Bank was hacked), and many others [85]. These data breaches may expose consumers' sensitive personal data to parties who can use the data in illegal ways. Therefore, with the advancements in AI systems and AI-enabled products, and the constant rise of social media sites, cloud data, and mobile environments, the potential risk of cybercrime is rising, reinforcing the need for cybersecurity. Big data offers optional anonymity; however, even then, firms can still locate people's information based on clearly distinguishing information, such as location data and search history, so there is no way to fully ensure privacy in this digital world [84]. For that reason, firms need to put constant preventive measures in place to protect their data and AI systems.
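The re-identification risk just mentioned can be shown with a toy example. The sketch below, using entirely invented data, links a nominally anonymous log record back to a named person through two quasi-identifiers (a coarse home location and a distinctive search query); it is illustrative only and does not describe any real firm's data.

```python
# Toy illustration of re-identification from quasi-identifiers. All data
# below is invented; no real logs or people are described.
anonymized_logs = [
    {"id": "u1", "home_cell": "40.71,-74.00", "search": "rare stamp auction"},
    {"id": "u2", "home_cell": "40.71,-74.00", "search": "weather"},
    {"id": "u3", "home_cell": "34.05,-118.24", "search": "rare stamp auction"},
]
# Publicly known facts about one person (e.g., from a social media profile).
public_profile = {"name": "Alice", "home_cell": "40.71,-74.00",
                  "known_interest": "rare stamp auction"}

matches = [log for log in anonymized_logs
           if log["home_cell"] == public_profile["home_cell"]
           and log["search"] == public_profile["known_interest"]]

# A unique match means the "anonymous" record is effectively identified.
print(matches)  # [{'id': 'u1', ...}] -> record u1 very likely belongs to Alice
```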
Current research on AI's security and privacy issues offers several suggestions for improving the overall system. Firstly, firms need to communicate their privacy and regulatory policies to consumers and explain how their information is gathered, stored, and protected by their online systems, in order to gain consumers' trust [86,87,88]. These policies help individuals understand a firm's data privacy efforts [88]. Secondly, firms should compensate customers for their data. Such compensation could include free services, personalized offers, or other financial benefits in return for consumers' information, demonstrating a firm's distributive fairness in terms of data privacy [87,89]. Thirdly, firms should give their customers more control over shared information and over management decisions regarding their data. AI users should have options regarding how their data are collected, communicated, or shared with others. If firms give consumers these options and control over their data, it will enhance their trust and confidence in firms and in the overall AI system [11,87]. However, with so many innovative applications, online transactions, social media platforms, and other digital sources available, it has become very challenging and complex to manage and control datasets and to communicate the scope of data gathering and privacy policies to every consumer [90]. Furthermore, advanced AI systems and big data make privacy more vulnerable than ever and violate privacy standards, causing individuals anxiety, humiliation, and financial losses [11].
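As a hypothetical illustration of the third suggestion, giving consumers control over their data, the sketch below models per-user consent settings that gate every use of collected data. The field and function names are invented for this example and do not represent any real system or regulation.

```python
# Hypothetical per-user consent settings that gate each use of collected
# data. Field and function names are invented for this illustration.
from dataclasses import dataclass, field

@dataclass
class ConsentSettings:
    collect_location: bool = False
    collect_audio: bool = False
    share_with_third_parties: bool = False
    allowed_purposes: set = field(default_factory=lambda: {"service_delivery"})

def may_use(settings: ConsentSettings, data_kind: str, purpose: str) -> bool:
    """Allow a data use only if the consumer consented to both the
    data kind and the purpose."""
    kind_ok = getattr(settings, f"collect_{data_kind}", False)
    return kind_ok and purpose in settings.allowed_purposes

alice = ConsentSettings(collect_location=True)
print(may_use(alice, "location", "service_delivery"))  # True: consented
print(may_use(alice, "location", "advertising"))       # False: purpose not allowed
print(may_use(alice, "audio", "service_delivery"))     # False: kind not collected
```

The design point is that the consumer's choices, not the firm's defaults, decide whether a given kind of data may be used for a given purpose.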

2.5. Accountability and Legal Issues

With the evolution of AI technology, there has been a significant rise in legal and accountability issues for companies using AI. Along with the problems related to data protection and privacy, there are further legal implications of using AI technologies in all sectors. Accountability is one of the most significant legal issues in using AI technology. When AI starts making decisions autonomously, its role goes beyond that of a mere support tool, and whether the creator or developer can be held accountable for its decisions becomes an open question [81]. The issue of accountability asks: who will be held responsible if an AI device makes a mistake? AI decision-making is based solely on data, and it works on algorithms embedded in the system from the beginning. AI technology's unpredictability stems from two factors [2].
Firstly, AI devices or networks cannot imitate the human brain's ability to think about different matters and make decisions according to different situations. They are just programs that make programmed, repetitive decisions; their advantage is that they are accurate and fast when making decisions [91,92,93]. Faced with a large amount of data in every situation, humans cannot screen all the data before deciding, because human brains usually consider only the most apparent data and make decisions based on the subset that can be easily retrieved. For AI devices, however, it is easy to process all the data regardless of volume, examine it from every perspective within seconds, and then decide accordingly, which is often impossible for humans [1,92,93,94].
Secondly, AI systems are programmed to learn from their data experiences, making them more unpredictable. Since it is challenging to predict what experiences a procedure will face and to foresee how a system will behave in a specific situation, when a system makes a mistake based on its experience or data, we must ask who will be held responsible for its wrong decisions: the system itself or the AI developer/designer who built it [2]. In terms of the legal implications of AI systems, Gupta and Kumari [58] discussed the legal challenges of using AI technologies, noting that one issue arises when errors occur in AI software. Another major legal issue with AI devices is copyright: there is currently a significant need for legal frameworks to ensure the safety and protection of AI-generated work [95]. Wirtz, Weyerer, and Geyer [81] also focus on the implementation challenges that firms face given government requirements and the impact of AI-based applications. Many scholars have identified legal challenges in applying AI-based systems in government and public sector organizations [25,27].
Given the advancements in and complexity of AI systems, it is expected that very few people will ever understand how they work; to most people, the workings of these networks will appear to be “black boxes” [56,96]. Any system without AI is a machine designed by humans and controlled by its operator, so accountability rests with the operator. Public and criminal laws around the globe unanimously attribute this responsibility to the operator, manufacturer, developer, or owner of the machine, depending on the case and the facts [97,98,99]. However, when machines are equipped with AI and can make self-directed decisions, the question of accountability becomes difficult to answer.
Moreover, the algorithms used for decision-making in these systems are sometimes unknown even to the developers themselves. Therefore, AI machines can reach unpredictable results and discover ideal ways of completing tasks using unintended means. For instance, in a famous incident at Facebook, two bots started conversing with each other in an invented language to complete the task they had been given. These bots were programmed to converse using natural language processing (NLP), but they developed a new and more effective communication language, which shows the unintended consequences of using AI systems [25,100,101]. Though accountability remains a question, more broadly we should differentiate between the outputs of AI and of human-based decision-making. For instance, in the medical area, the best external evidence can be acquired from medical expertise, medical societies, government bodies, and patients' preferences and values, while internal evidence can be acquired from AI software and procedures. As a result, we may predict that in the future, AI systems will handle data management in medical domains to maintain patient records. However, as patients greatly value empathy and human interaction, especially in the medical field, human interaction will need to remain in this field and be integrated with AI systems [102,103]. Therefore, legal and ethical responsibility will remain an open question in AI decision-making. From these perspectives, it is likely that multidisciplinary boards will take accountability in complex situations, treating the information delivered as relevant but not always conclusive [81,104,105].
In this study, then, we identified a number of variables and discussed the challenges and issues they pose for society. Complete information about the variables used in this study and the author details is given below in Table 1. Based on the above discussion, we propose the following research question.
Research Question: How does AI affect behavioral, psychological, ethical, social, and cultural issues in electronic markets?

3. Materials and Methods

A systematic review of the literature summarizes existing information accurately and forms the basis for answering specific research questions. We adopted the systematic review as our research method, as it focuses on explicit information and systematic methods that enhance the credibility of research and reduce biases. Tranfield, Denyer, and Smart [106] described the systematic review as one of the best research methods in academic and practitioner research. The systematic review method is detailed in Figure 2 below.

3.1. Selection of Studies

In March 2021, we conducted our search using Web of Science (WoS) and Harvard HOLLIS, two well-known databases in the social sciences, with Web of Science being the most popular. These databases aided us in conducting extensive research on our subject. Both databases were searched using the term “bad impacts of AI”. We defined the search algorithms in terms of “time”, “document type”, and “language”. We ensured that our selection resulted in high levels of reliability and validity in the systematic literature evaluation. More precisely, two steps were taken in this regard. First, the steps for review and analysis were discussed with researchers both within and outside the field. Second, three researchers were active in the review process to promote a higher degree of inter-rater reliability [107,108]. The identification and selection of papers were completed in five steps: (1) selection of resources, (2) selection of keywords, (3) trial search, (4) refining of keywords, and (5) construction of a list of papers. These steps of the selection process are shown in Figure 3.
The researchers first identified the total set of studies, which were then compared against the selection criteria to determine their eligibility. The researchers scrutinized the articles and categorized them as either “included in the study” or “excluded from the study”. Articles that met the selection criteria were labeled “included in study”, whereas those that did not were labeled “excluded from the study”. Some articles neither clearly met nor clearly failed the selection criteria; the researchers put these in a separate category called “possible inclusions”. Although some of the articles were not directly relevant, they were helpful in understanding the concept of e-commerce; entries in this category were labeled “conceptual studies”. In these publications, the researchers looked for theoretical underpinnings and various consequences of e-markets in organizations. In the second phase, the researchers assessed the quality of the studies included in the review and decided whether or not they should be retained. After the first search, we identified 476 studies about negative AI outcomes. After removing duplicates, 356 articles remained for our study. A datasheet was prepared with each article's title, author names, publication year, and abstract. We then screened the articles by thoroughly reading the abstracts and excluded irrelevant studies. Finally, a total of 137 studies were selected for full-text review. The complete selection criteria for the studies are given in Figure 4.
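For illustration, the deduplication and abstract-screening steps described above can be expressed as a simple pipeline. The sketch below uses placeholder records and a naive keyword rule; in the actual study, screening decisions were made by human reviewers against the full selection criteria.

```python
# Sketch of the deduplication and abstract-screening steps. Records and
# the keyword rule are placeholders; the study's real screening was done
# by human reviewers against the full selection criteria.
def deduplicate(records: list[dict]) -> list[dict]:
    """Drop records that repeat an earlier (title, year) pair."""
    seen, unique = set(), []
    for r in records:
        key = (r["title"].lower(), r["year"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

def screen_abstracts(records: list[dict], keywords: set[str]) -> list[dict]:
    """Keep records whose abstract mentions any inclusion keyword."""
    return [r for r in records
            if any(k in r["abstract"].lower() for k in keywords)]

records = [
    {"title": "AI risks in e-markets", "year": 2020, "abstract": "Privacy and security of AI."},
    {"title": "AI risks in e-markets", "year": 2020, "abstract": "Privacy and security of AI."},
    {"title": "Retail logistics", "year": 2019, "abstract": "Warehouse routing methods."},
]
unique = deduplicate(records)        # analogous to 476 -> 356 in the study
included = screen_abstracts(unique, {"privacy", "security", "ethical"})
print(len(unique), len(included))    # 2 1
```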

4. Results and Discussion

This paper discusses the harmful effects of advancements in AI technologies on individual behavior and society as a whole. In this study, we examine the ethical, social, cultural, and legal challenges stemming from the evolution of technology. Additionally, this paper debates the impacts of AI on the market and economy, how the ways of doing business are changing with the help of AI, and what challenges organizations have to face in this modern era of technology. The rise of AI technologies has transformed human life and ways of doing business. AI is a programmed structure and is not going to express any human-like feelings any time soon, so there is no reason to think that AI could be hateful or benevolent [37]. AI decisions depend on programming; hence, decisions might have unintentional consequences for the people involved [38]. AI technologies can be highly polarizing, resulting in stress and anxiety. These negative attitudes and emotions arise because some individuals might struggle to accept such novelty in technology [43]. Beyond the individual level, AI is undoubtedly causing the human workforce to change and evolve, and has dramatically impacted ethical and social life. AI has created an additional burden on society: it can stress people mentally, in part by eventually making their work even more challenging [28].
Another dark effect of the AI revolution is that it has dented the market and economy, causing historic mass unemployment and an unpredictable transformation of the job market. This change is decidedly massive; researchers have stated that we need to build machines that support us in our work and provide financial stability in markets and the overall economy [11]. A major problem with AI advancement is determining the social and ethical issues it raises for society, such as cyber-security, consumer privacy, and data protection. The evolution of social media platforms and the display of one's identity on them have produced many adverse outcomes. Using these sites involves many activities, such as sharing images, location-sharing data, and in-group behaviors, that adversely affect people's personal information. On these sites, transparency, visibility, and privacy are compromised, and researchers emphasize that users are not in control of their own identities, conversations, information, and images, leading to all sorts of security and privacy threats.
The latest research in the UK found that issues related to privacy and security on social media platforms are the main worries for young people and children [79,80]. Current research on AI's security and privacy problems offers recommendations to improve the whole system. Researchers propose that companies communicate their privacy and regulatory guidelines to consumers and explain how their information is gathered, stored, and protected [86,87,88]. Such policies would help individuals understand the attempt to maintain fairness and would build consumer trust in AI systems [88]. Additionally, firms should offer consumers more control over their information. Alongside the data protection and privacy issues, there are further legal implications of using AI technologies. Accountability is the primary legal issue in using AI technology: when AI starts making decisions independently, its role goes beyond that of a support tool, and whether the creator or developer can be held accountable for its decisions is a dilemma. Gupta and Kumari [58] discussed the legal issues of AI systems, mentioning that one issue arises when errors occur. Another significant legal issue for AI systems is copyright; at present, specific legal frameworks need to be developed to safeguard the protection of AI-generated work. Wirtz, Weyerer, and Geyer [81] also focused on the implementation challenges that a firm faces due to government requirements and the impacts of AI-based applications. Many scholars have recognized legal challenges in executing AI-based systems in government and public sector organizations [25,27].
The public debate on AI's behavioral, ethical, social, and cultural issues is still in its early stages. There is no consensus on whether AI will evolve positively or negatively in society, or on what its actual impacts will be in the near future. Nevertheless, it is recommended that this public debate receive more attention and stakeholder participation so that more of its aspects can be uncovered [109]. Discussion of AI technology currently centers on industry regulation and even government regulation of this domain. Some media articles voice unrealistic concerns about the expansion of AI technology, but the real problem will be avoiding the plausible negative effects of AI systems, such as unemployment, loss of privacy protection, and loss of human lives. Cases such as the 2016 Tesla Autopilot accident, in which 40-year-old Joshua Brown died because of the AI system, show how critical the technology can be [9]. Ethics, morality, and values vary across cultures at the societal level, and these issues continue to evolve according to new trends, technological advancements, and tendencies.
Though AI has made these issues more complex, there is not yet a solution for managing these ethical standards in the online world. The main objective is to align the system's objectives with individuals' moral values and the ethical procedures of society. Another resolution is to assign responsibility and accountability for the irreversible effects that AI can create if it is misused or falls into the wrong hands. Moreover, AI systems should be given a proper code of conduct and ethical standards upon which development activities are based. The AI system's top priority and main focus in the new industrial revolution should be safety. People should focus on making the system as ethical as possible and keeping it within human control. The EU charter illustrates the ethical standards humans and robots will have to respect and follow: privacy, safety, and dignity [47]. Humanity should not put its future in the hands of machines, since it will be tough to take power back from AI technology: a world run by machines would have unpredictable consequences for human culture, lifestyle, and the overall possibility of humanity's survival [105]. However, we cannot ignore the necessity of technology in this modern world. Thus, interaction between humans and AI is necessary to maintain a symbiotic relationship between both parties and to evolve with each other's help.

5. Theoretical and Practical Implications

This paper discusses the dark effects of AI arising from advancements in information technology. It focuses on the following research question: “How does AI affect behavioral, psychological, ethical, social, and cultural issues in electronic markets?” The paper details a broad range of behavioral, cultural, ethical, social, and economic challenges associated with the applications of AI. We further aimed to address the privacy and security risks of AI-based applications. The paper addresses a highly relevant and topical issue: as a technology with extraordinarily high transformative potential in many areas, AI needs to be scrutinized in terms of its positive and adverse effects on society in multiple ways. From a theoretical perspective, we discussed a broad range of issues arising from technological advancements, from inter-related human (behavioral, cultural, ethical, and social) and market (accountability, legal, security, and privacy) perspectives.
In terms of its practical implications, the primary concerns of AI systems should be safety and privacy. People should focus on making the system as ethical as possible and keeping it under our own control. In this vein, the current study contributes to AI research at the individual and market levels. This paper discusses individual challenges, such as the behavioral, psychological, and ethical issues of AI and how it affects human life; and it identifies the security and privacy risks associated with AI in electronic markets. It also discusses the legal and accountability issues of AI in online markets and identifies the role of government in making these systems ethical and secure for the public.
We clarified the roles of people and societies in adopting these innovative technologies to make better use of advanced technology. Such detailed elucidation debating the impact of AI concerning people and society’s behavior might help companies understand the need for developing ethical rules and regulations in electronic markets to ensure the safety of consumers. The companies and marketing firms should make adequate policies, ethical guidelines, rules, and regulations to make AI systems more secure to gain the trust of people and firms.

6. Conclusions

Artificial intelligence systems have both positive and negative impacts on sustainable development in electronic markets. This paper contributes to theoretical and practical knowledge by discussing both sides of AI, including behavioral, psychological, ethical, social, and cultural issues, and by offering solutions to improve the existing situation. The most important component of using AI technology, however, is to take social and ethical considerations into account. The AI revolution has harmed the market and economy with historic mass unemployment and an uncertain shift in the job market. Furthermore, advances in AI are raising security and privacy concerns, particularly with the growth of social media platforms. There will be no solution to AI security unless all humans who are capable of breaching AI security are ethically sound. Modeling a people-friendly AI system and an AI-friendly environment for individuals could be a viable strategy for establishing a common context for robots and humans. Making robots more human-like will align machines with human psychology, positively impacting society.
AI has unquestionably changed lives, with both beneficial and harmful consequences. However, the primary purpose of AI should be to use it for humanity’s overarching interests. While AI is fast growing in all areas of people’s personal and professional lives, we must be cautious because history has shown that no matter how powerful a tool is, it can be sabotaged. We should strengthen AI while ensuring that we maintain control over it. AI should be developed in a controlled environment with precise and aligned data collection to fulfill our goals.

Author Contributions

Conceptualization, H.T. and J.W.; Data curation, H.T.; Formal analysis, H.T. and J.W.; Funding acquisition, H.T. and J.W.; Investigation, H.T. and J.W.; Methodology, H.T. and J.W.; Project administration, H.T. and J.W.; Resources, H.T.; Software, H.T.; Supervision, J.W.; Validation, H.T. and J.W.; Visualization, J.W.; Writing—original draft preparation, H.T.; Writing—review & editing, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533.
2. Scherer, M.U. Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. SSRN Electron. J. 2015, 29, 353.
3. Malhotra, N.K.; Kim, S.S.; Agarwal, J. Internet users' information privacy concerns (IUIPC): The construct, the scale, and a causal model. Inf. Syst. Res. 2004, 15, 336–355.
4. Anandalingam, G.; Day, R.W.; Raghavan, S. The landscape of electronic market design. Manag. Sci. 2005, 51, 316–327.
5. Johnson, M. Barriers to innovation adoption: A study of e-markets. Ind. Manag. Data Syst. 2010, 110, 157–174.
6. Oreku, G. Rethinking E-commerce Security. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), Vienna, Austria, 28 November 2005.
7. Yazdanifard, R.; Edres, N.A.-H.; Seyedi, A.P. Security and Privacy Issues as a Potential Risk for Further E-commerce Development. In Proceedings of the International Conference on Information Communication and Management-IPCSIT; 2011; Volume 16, pp. 23–27. Available online: http://www.ipcsit.com/vol16/5-ICICM2011M008.pdf (accessed on 14 August 2021).
8. Bauer, W.A.; Dubljević, V. AI Assistants and the Paradox of Internal Automaticity. Neuroethics 2020, 13, 303–310.
9. Russell, S. Rationality and Intelligence: A Brief Update. In Fundamental Issues of Artificial Intelligence; Springer: Cham, Switzerland, 2016; pp. 7–28.
10. Goodell, J.W.; Kumar, S.; Lim, W.M.; Pattnaik, D. Artificial intelligence and machine learning in finance: Identifying foundations, themes, and research clusters from bibliometric analysis. J. Behav. Exp. Financ. 2021, 32, 100577.
11. Du, S.; Xie, C. Paradoxes of artificial intelligence in consumer markets: Ethical challenges and opportunities. J. Bus. Res. 2020, 129, 961–974.
12. Kaplan, A.; Haenlein, M. Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus. Horiz. 2019, 62, 15–25.
13. Wirtz, B.W.; Weyerer, J.C.; Sturm, B.J. The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration. Int. J. Public Adm. 2020, 43, 818–829.
14. Kumar, G.; Singh, G.; Bhatanagar, V.; Jyoti, K. Scary dark side of artificial intelligence: A perilous contrivance to mankind. Humanit. Soc. Sci. Rev. 2019, 7, 1097–1103.
15. Yang, Z.; Shi, Y.; Yan, H. Scale, congestion, efficiency and effectiveness in e-commerce firms. Electron. Commer. Res. Appl. 2016, 20, 171–182.
16. Faust, M.E. Cashmere: A lux-story supply chain told by retailers to build a competitive sustainable advantage. Int. J. Retail. Distrib. Manag. 2013, 41, 973–985.
17. Ingaldi, M.; Ulewicz, R. How to make e-commerce more successful by use of Kano's model to assess customer satisfaction in terms of sustainable development. Sustainability 2019, 11, 4830.
18. Lim, W.M. The Sustainability Pyramid: A Hierarchical Approach to Greater Sustainability and the United Nations Sustainable Development Goals with Implications for Marketing Theory, Practice, and Public Policy. Aust. Mark. J. 2022, 4, 1–21.
19. Lv, Z.; Qiao, L.; Singh, A.K.; Wang, Q. AI-empowered IoT Security for Smart Cities. ACM Trans. Internet Technol. 2021, 21, 1–21.
20. Rao, B.T.; Patibandla, R.S.M.L.; Narayana, V.L. Comparative Study on Security and Privacy Issues in VANETs. In Proceedings of the Cloud and IoT-Based Vehicular Ad Hoc Networks, Guntur, India, 22 April 2021; pp. 145–162.
21. Holzinger, A.; Weippl, E.; Tjoa, A.M.; Kieseberg, P. Digital Transformation for Sustainable Development Goals (SDGs)—A Security, Safety and Privacy Perspective on AI. Lect. Notes Comput. Sci. 2021, 12844, 1–20.
22. Nguyen, V.L.; Lin, P.C.; Cheng, B.C.; Hwang, R.H.; Lin, Y.D. Security and Privacy for 6G: A Survey on Prospective Technologies and Challenges. IEEE Commun. Surv. Tutor. 2021, 23, 2384–2428.
23. Oseni, A.; Moustafa, N.; Janicke, H.; Liu, P.; Tari, Z.; Vasilakos, A. Security and privacy for artificial intelligence: Opportunities and challenges. arXiv 2021, arXiv:2102.04661.
24. Xu, J.; Yang, P.; Xue, S.; Sharma, B.; Sanchez-Martin, M.; Wang, F.; Beaty, K.A.; Dehan, E.; Parikh, B. Translating cancer genomics into precision medicine with artificial intelligence: Applications, challenges and future perspectives. Hum. Genet. 2019, 138, 109–124.
25. Dwivedi, Y.K.; Hughes, L.; Ismagilova, E.; Aarts, G.; Coombs, C.; Crick, T.; Duan, Y.; Dwivedi, R.; Edwards, J.; Eirug, A.; et al. Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 2021, 57, 101994.
26. Kumar, S.; Lim, W.M.; Pandey, N.; Westland, J.C. 20 years of Electronic Commerce Research. Electron. Commer. Res. 2021, 21, 1–40.
27. Sun, T.Q.; Medaglia, R. Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare. Gov. Inf. Q. 2019, 36, 368–383.
28. Złotowski, J.; Proudfoot, D.; Yogeeswaran, K.; Bartneck, C. Anthropomorphism: Opportunities and Challenges in Human–Robot Interaction. Int. J. Soc. Robot. 2015, 7, 347–360.
29. Frey, C.B.; Osborne, M.A. The future of employment: How susceptible are jobs to computerisation? Technol. Forecast. Soc. Chang. 2017, 114, 254–280.
30. Horvitz, E. Artificial Intelligence and Life in 2030. In One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel; Stanford University: Stanford, CA, USA, 2016.
31. Kaplan, A.; Haenlein, M. Rulers of the world, unite! The challenges and opportunities of artificial intelligence. Bus. Horiz. 2019, 63, 37–50.
32. Risse, M. Human rights and artificial intelligence: An urgently needed agenda. Hum. Rights Q. 2019, 41, 1–16.
33. Jonsson, A.; Svensson, V. Systematic Lead Time Analysis. Master's Thesis, Chalmers University of Technology, Göteborg, Sweden, 2016.
34. Wang, L.; Törngren, M.; Onori, M. Current status and advancement of cyber-physical systems in manufacturing. J. Manuf. Syst. 2015, 37, 517–527.
35. Furnell, S.M.; Warren, M.J. Computer hacking and cyber terrorism: The real threats in the new millennium? Comput. Secur. 1999, 18, 28–34.
36. Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to sequence learning with neural networks. Adv. Neural Inf. Process. Syst. 2014, 4, 3104–3112.
37. Arulkumaran, K.; Deisenroth, M.P.; Brundage, M.; Bharath, A.A. A Brief Survey of Deep Reinforcement Learning. IEEE Signal Process. Mag. 2017, 34, 26–38.
38. Banerjee, S.; Singh, P.K.; Bajpai, J. A comparative study on decision-making capability between human and artificial intelligence. In Nature Inspired Computing; Springer: Singapore, 2018; pp. 203–210.
39. Lacey, G.; Taylor, G. Deep Learning on FPGAs: Past, present, and future. arXiv 2016, arXiv:1602.04283.
40. Norman, D.A. Approaches to the study of intelligence. Artif. Intell. 1991, 47, 327–346.
41. Lin, W.; Lin, S.; Yang, T. Integrated Business Prestige and Artificial Intelligence for Corporate Decision Making in Dynamic Environments. Cybern. Syst. 2017, 48, 303–324.
42. Thierer, A.; O'Sullivan, A.C.; Russell, R. Artificial Intelligence and Public Policy; Mercatus Research Centre at George Mason University: Arlington, VA, USA, 2017.
43. Nomura, T.; Kanda, T.; Suzuki, T.; Kato, K. Prediction of Human Behavior in Human–Robot Interaction Using Psychological Scales for Anxiety and Negative Attitudes Toward Robots. IEEE Trans. Robot. 2008, 24, 442–451.
44. Dautenhahn, K.; Bond, A.H.; Canamero, L.; Edmonds, B. Socially Intelligent Agents: Creating Relationships with Computers and Robots; Kluwer Academic Publishers: Munich, Germany, 2008.
45. Raina, R.; Madhavan, A.; Ng, A.Y. Large-scale deep unsupervised learning using graphics processors. In Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada, 14–18 June 2009; pp. 873–880.
46. Luxton, D.D. Artificial intelligence in psychological practice: Current and future applications and implications. Prof. Psychol. Res. Pract. 2014, 45, 332–339.
47. Pavaloiu, A.; Kose, U. Ethical Artificial Intelligence—An Open Question. J. Multidiscip. Dev. 2017, 2, 15–27.
48. Wang, P. On Defining Artificial Intelligence. J. Artif. Gen. Intell. 2019, 10, 1–37.
49. Bostrom, N.; Yudkowsky, E. The ethics of artificial intelligence. In The Cambridge Handbook of Artificial Intelligence; Cambridge University Press: Cambridge, UK, 2014; pp. 316–334.
50. André, Q.; Carmon, Z.; Wertenbroch, K.; Crum, A.; Frank, D.; Goldstein, W.; Huber, J.; Van Boven, L.; Weber, B.; Yang, H. Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big Data. Cust. Needs Solut. 2018, 5, 28–37.
51. Samaha, M.; Hawi, N.S. Relationships among smartphone addiction, stress, academic performance, and satisfaction with life. Comput. Hum. Behav. 2016, 57, 321–325.
52. Van den Eijnden, R.J.J.M.; Lemmens, J.S.; Valkenburg, P.M. The Social Media Disorder Scale: Validity and psychometric properties. Comput. Hum. Behav. 2016, 61, 478–487.
53. Lee, J.; Kim, S.; Ham, C. A Double-Edged Sword? Predicting Consumers' Attitudes Toward and Sharing Intention of Native Advertising on Social Media. Am. Behav. Sci. 2016, 60, 1425–1441.
54. Valenzuela, S.; Piña, M.; Ramírez, J. Behavioral Effects of Framing on Social Media Users: How Conflict, Economic, Human Interest, and Morality Frames Drive News Sharing. J. Commun. 2016, 67, 803–826.
55. Roberts, J.A.; David, M.E. My life has become a major distraction from my cell phone: Partner phubbing and relationship satisfaction among romantic partners. Comput. Hum. Behav. 2016, 54, 134–141.
56. Wirtz, B.W.; Weyerer, J.C.; Geyer, C. Artificial intelligence and the public sector—Applications and challenges. Int. J. Public Adm. 2019, 42, 596–615.
57. Duan, Y.; Edwards, J.S.; Dwivedi, Y.K. Artificial intelligence for decision making in the era of Big Data—Evolution, challenges and research agenda. Int. J. Inf. Manag. 2019, 48, 63–71.
58. Gupta, R.K.; Kumari, R. Artificial Intelligence in Public Health: Opportunities and Challenges. JK Sci. 2017, 19, 191–192.
59. Bakos, J.Y. Reducing buyer search costs: Implications for electronic marketplaces. Manag. Sci. 1997, 43, 1676–1692.
60. Pathak, B.K.; Bend, S. Internet of Things Enabled Electronic Markets: Transparent. Issues Inf. Syst. 2020, 21, 306–316.
61. Piccardi, C.; Tajoli, L. Complexity, centralization, and fragility in economic networks. PLoS ONE 2018, 13, 1–13.
62. Baye, M.R.; Morgan, J.; Scholten, P. Information, search, and price dispersion. In Handbook on Economics and Information Systems; Elsevier: Amsterdam, The Netherlands, 2006; Chapter 6, pp. 323–375.
63. Smith, M.D.; Brynjolfsson, E. Consumer decision-making at an Internet shopbot: Brand still matters. J. Ind. Econ. 2001, 49, 541–558.
64. Clay, K.; Krishnan, R.; Wolff, E.; Fernandes, D. Retail strategies on the web: Price and non-price competition in the online book industry. J. Ind. Econ. 2002, 50, 351–367.
65. Ackoff, R.L. Management misinformation systems. Manag. Sci. 1967, 14, B147–B156.
66. Grover, V.; Lim, J.; Ayyagari, R. The dark side of information and market efficiency in e-markets. Decis. Sci. 2006, 37, 297–324.
67. Keller, K.L.; Staelin, R. Effects of Quality and Quantity of Information on Decision Effectiveness. J. Consum. Res. 1987, 14, 200.
68. Pontiggia, A.; Virili, F. Network effects in technology acceptance: Laboratory experimental evidence. Int. J. Inf. Manag. 2010, 30, 68–77.
69. Bantas, K.; Aryastuti, N.; Gayatri, D. The relationship between antenatal care with childbirth complication in Indonesian's mothers (data analysis of the Indonesia Demographic and Health Survey 2012). J. Epidemiol. Kesehat. Indones. 2019, 2, 2.
70. Lee, I.H.; Mason, R. Market structure in congestible markets. Eur. Econ. Rev. 2001, 45, 809–818.
71. Swan, J.E.; Nolan, J.J. Gaining customer trust: A conceptual guide for the salesperson. J. Pers. Sell. Sales Manag. 1985, 5, 39–48.
72. Bolton, G.E.; Kusterer, D.J.; Mans, J. Inflated reputations: Uncertainty, leniency, and moral wiggle room in trader feedback systems. Manag. Sci. 2019, 65, 5371–5391.
73. Manyika, J.; Chui, M.; Miremadi, M.; Bughin, J.; George, K.; Willmott, P.; Dewhurst, M. Harnessing Automation for a Future that Works. McKinsey Glob. Inst. 2017, 8, 1–14.
74. Briot, J.P. Deep learning techniques for music generation—A survey. arXiv 2017, arXiv:1709.01620.
75. Zanzotto, F.M. Viewpoint: Human-in-the-loop Artificial Intelligence. J. Artif. Intell. Res. 2019, 64, 243–252.
76. Tizhoosh, H.R.; Pantanowitz, L. Artificial Intelligence and Digital Pathology: Challenges and Opportunities. J. Pathol. Inform. 2018, 9, 38.
77. Bughin, J.; Seong, J.; Manyika, J.; Chui, M.; Joshi, R. Notes from the AI Frontier: Modeling the Global Economic Impact of AI; McKinsey Global Institute. Available online: https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy (accessed on 14 August 2021).
78. Sahmim, S.; Gharsellaoui, H. Privacy and Security in Internet-based Computing: Cloud Computing, Internet of Things, Cloud of Things: A review. Procedia Comput. Sci. 2017, 112, 1516–1522.
79. Baccarella, C.V.; Wagner, T.F.; Kietzmann, J.H.; McCarthy, I.P. Social media? It's serious! Understanding the dark side of social media. Eur. Manag. J. 2018, 36, 431–438.
80. Cowie, H. Cyberbullying and its impact on young people's emotional health and well-being. Psychiatrist 2013, 37, 167–170.
81. Pesapane, F.; Volonté, C.; Codari, M.; Sardanelli, F. Artificial intelligence as a medical device in radiology: Ethical and regulatory issues in Europe and the United States. Insights Imaging 2018, 9, 745–753.
82. Smith, H.J.; Milberg, S.J.; Burke, S.J. Information privacy: Measuring individuals' concerns about organizational practices. MIS Q. 1996, 20, 167–196.
83. Gwebu, K.L.; Wang, J.; Wang, L. The Role of Corporate Reputation and Crisis Response Strategies in Data Breach Management. J. Manag. Inf. Syst. 2018, 35, 683–714.
84. Barocas, S.; Nissenbaum, H. Big data's end run around anonymity and consent. In Privacy, Big Data, and the Public Good: Frameworks for Engagement; Cambridge University Press: New York, NY, USA, 2013.
85. Sujitparapitaya, S.; Shirani, A.; Roldan, M. Issues in Information Systems. Issues Inf. Syst. 2012, 13, 112–122.
86. Wirtz, J.; Lwin, M.O. Regulatory focus theory, trust, and privacy concern. J. Serv. Res. 2009, 12, 190–207.
87. Palmatier, R.W.; Martin, K.D. The Intelligent Marketer's Guide to Data Privacy: The Impact of Big Data on Customer Trust; Springer International Publishing: Cham, Switzerland, 2019.
88. Vail, M.W.; Earp, J.B.; Antón, A.I. An empirical study of consumer perceptions and comprehension of web site privacy policies. IEEE Trans. Eng. Manag. 2008, 55, 442–454.
89. Ashworth, L.; Free, C. Marketing dataveillance and digital privacy: Using theories of justice to understand consumers' online privacy concerns. J. Bus. Ethics 2006, 67, 107–123.
90. Strandburg, K.J. Monitoring, Datafication, and Consent: Legal Approaches to Privacy in the Big Data Context. In Privacy, Big Data and the Public Good; Cambridge University Press: New York, NY, USA, 2013.
91. Kohli, M.; Prevedello, L.M.; Filice, R.W.; Geis, J.R. Implementing machine learning in radiology practice and research. Am. J. Roentgenol. 2017, 208, 754–760.
92. Krittanawong, C. The rise of artificial intelligence and the uncertain future for physicians. Eur. J. Intern. Med. 2018, 48, e13–e14.
93. Ravì, D.; Wong, C.; Deligianni, F.; Berthelot, M.; Andreu-Perez, J.; Lo, B.; Yang, G.Z. Deep Learning for Health Informatics. IEEE J. Biomed. Health Inform. 2017, 21, 4–21.
94. Mitchell, T.; Brynjolfsson, E. Track how technology is transforming work. Nature 2017, 544, 290–292.
95. Zatarain, J.M.N. The role of automated technology in the creation of copyright works: The challenges of artificial intelligence. Int. Rev. Law Comput. Technol. 2017, 31, 91–104.
96. Castelvecchi, D. Can we open the black box of AI? Nature 2016, 538, 20–23.
97. Nambu, T. Legal regulations and public policies for next-generation robots in Japan. AI Soc. 2016, 31, 483–500.
98. Recht, M.; Bryan, R.N. Artificial Intelligence: Threat or Boon to Radiologists? J. Am. Coll. Radiol. 2017, 14, 1476–1480.
99. Staples, M.; Niazi, M.; Jeffery, R.; Abrahams, A.; Byatt, P.; Murphy, R. An exploratory study of why organizations do not adopt CMMI. J. Syst. Softw. 2007, 80, 883–895.
100. Howard, A.; Borenstein, J. The Ugly Truth about Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity. Sci. Eng. Ethics 2018, 24, 1521–1536.
101. Lauscher, A. Life 3.0: Being human in the age of artificial intelligence. Internet Hist. 2019, 3, 101–103.
102. Gaggioli, A.; Riva, G.; Peters, D.; Calvo, R.A. Emotions and Affect in Human Factors and Human-Computer Interaction; Jeon, M., Ed.; Elsevier: Cambridge, MA, USA, 2017.
103. Soh, C.; Markus, M.L.; Goh, K.H. Electronic Marketplaces and Price Transparency: Strategy, Information Technology, and Success. MIS Q. 2006, 30, 705–723.
104. Etzioni, A.; Etzioni, O. Incorporating Ethics into Artificial Intelligence. J. Ethics 2017, 21, 403–418.
105. Yampolskiy, R.V. Artificial intelligence safety engineering: Why machine ethics is a wrong approach. Stud. Appl. Philos. Epistemol. Ration. Ethics 2013, 5, 389–396.
106. Murata, K.; Wakabayashi, K.; Watanabe, A. Study on and instrument to assess knowledge supply chain systems using advanced kaizen activity in SMEs. Supply Chain Forum 2014, 15, 20–32.
107. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report EBSE-2007-01; Keele University and Durham University: Keele, UK, 2007.
108. Perez-Staples, D.; Prabhu, V.; Taylor, P.W. Post-teneral protein feeding enhances sexual performance of Queensland fruit flies. Physiol. Entomol. 2007, 32, 225–232.
109. Bostrom, N.; Yudkowsky, E. The Ethics of Artificial Intelligence. IFIP Adv. Inf. Commun. Technol. 2021, 555, 55–69.
Figure 1. Conceptual framework highlighting risks and issues associated with AI.
Figure 2. Systematic review process.
Figure 3. Flowchart of the study selection process.
Figure 4. Flowchart of selection criteria for the studies.
Table 1. Contributions of key studies concerning behavioral, social, and cultural factors due to rising AI technologies.
Identified Variables | Main Challenges/Issues Discussed in Literature | Authors Discussing These Variables
Behavioral, psychological, and cultural factors
  • AI is causing the human workforce to change and evolve. With humans losing jobs to machines, the real challenge is to find new responsibilities that may require unique human abilities
  • AI creates extra pressure on society, changes human behavior, and stresses people psychologically, making it even more challenging for them to earn a living
  • AI systems do not show any human-like feelings, so AI would be neither malicious nor benevolent by intent; nevertheless, it could drag the entire world into an AI war that could cause significant setbacks
  • Data power AI algorithms, and as more and more data are collected about every individual's demographics, our privacy gets compromised. Interaction with machines is a huge challenge for society, as it has already changed behaviors
  • People who waste more time using these technologies tend to be more compassionate. Some researchers acknowledge the advantages of AI technologies but have also articulated their concerns, since AI, intentionally or not, could cause massive destruction if not managed and checked properly
  • Ackoff, 1967
  • Barocas & Nissenbaum, 2013
  • Cowie, 2013
  • Gaggioli et al., 2017
  • Gwebu et al., 2018
  • Howard & Borenstein, 2018
  • Kaplan & Haenlein, 2019
  • Lauscher, 2019
  • Nambu, 2016
  • Nomura et al., 2008
  • Roberts & David, 2016
  • Russell, 2016
  • Samaha & Hawi, 2016
  • Scherer, 2015
  • Staples et al., 2007
  • Valenzuela et al., 2016
Ethical and social issues
  • With the development of AI technology, there have been many ethical and social issues concerning the activities of humans and the control of technologies that function autonomously
  • In the context of AI, societal issues are highlighted; these include the potential for large-scale unemployment, reduced autonomy, and a decline in wellbeing. Due to the rise in AI technologies, many people are currently losing jobs; machines are replacing them. This situation is getting worse day by day with the advancement of technology
  • AI platforms such as Facebook, Instagram, and YouTube are widely used all over the world and have had detrimental ethical and social effects by engaging people online, resulting in addictive behaviors related to smartphones and social media platforms that distract users from healthier activities
  • Digital addiction is widespread and causes disturbances that negatively influence individual academic or organizational performance, quality of life, and relationships
  • André et al., 2018
  • Baccarella et al., 2018
  • Baye et al., 2006
  • Castelvecchi, 2016
  • Soh et al., 2006
  • Cowie, 2013
  • Kohli et al., 2017
  • Mnih et al., 2015
  • Nomura et al., 2008
  • Sun & Medaglia, 2019
  • Raina et al., 2009
  • Recht & Bryan, 2017
  • Strandburg, 2013
Security and privacy issues
  • AI systems increase the chances of accessing, collecting, and sharing consumers' personal information, which is morally wrong and risky.
  • Privacy is currently one of the most significant issues worldwide due to the data-centric nature of AI systems. With the development of AI technologies, it has become challenging to control people's information, as there are many ways to spread it.
  • AI is not under an individual's control, and its data-sharing practices are often unspecified, so others can access one's information
  • Therefore, with the advancement of AI systems and AI-enabled products, the constant rise of social media sites, cloud data, and mobile environments increases the potential risk of cybercrime, reinforcing the need for cybersecurity
  • Recently, there has been a burst of data breaches in various systems, including social media (e.g., Google, Facebook, Instagram, LinkedIn, Yahoo), software developers (e.g., Adobe, where more than 150 million users' passwords were compromised), retailers (e.g., more than 40 million debit and credit cards stolen in stores), banks (e.g., the US Federal Reserve Bank website was hacked), and many others
  • André et al., 2018
  • Baccarella et al., 2018
  • Bakos, 1997
  • Banerjee et al., 2018
  • Barocas & Nissenbaum, 2013
  • Bauer & Dubljević, 2020
  • Cowie, 2013
  • Du & Xie, 2020
  • Etzioni & Etzioni, 2017
  • Furnell & Warren, 1999
  • Gaggioli et al., 2017
  • Gwebu et al., 2018
  • Horvitz, 2016
  • Kumar et al., 2019
  • Xu et al., 2019
  • Yampolskiy, 2013
  • Zatarain, 2017
  • Złotowski et al., 2015
Accountability and legal issues
  • When AI starts making decisions autonomously, its role goes beyond that of a mere support tool, and the question arises of whether the creator or developer can be held accountable for its decisions
  • The issue of accountability asks who will be held responsible if an AI device makes a mistake. AI decision-making is based solely on data, and it works on algorithms that are programmed into the system from the beginning.
  • AI devices or networks cannot imitate the human brain by reasoning about different matters and making decisions according to different situations. They are just programs that make programmed, repetitive decisions; their advantage is that they are more accurate and quicker in decision-making
  • Therefore, legal liability and accountability will remain questionable factors in AI decision-making. From this perspective, it is likely that multidisciplinary boards will take responsibility in complex situations by treating the information delivered as relevant but not always conclusive
  • Ashworth & Free, 2006
  • Baccarella et al., 2018
  • Barocas & Nissenbaum, 2013
  • Kohli et al., 2017
  • Mitchell & Brynjolfsson, 2017
  • Nambu, 2016
  • Pavaloiu & Kose, 2017
  • Pesapane et al., 2018
  • Pontiggia & Virili, 2010
  • Staples et al., 2007
  • Strandburg, 2013
  • Udo & Bagchi, 2012
  • Vail et al., 2008
  • Yampolskiy, 2013
  • Zanzotto, 2019
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
