Search Results (8)

Search Parameters:
Keywords = AI arms race

18 pages, 454 KiB  
Article
Artificial Intelligence and the Sustainability of the Signaling and Human Capital Roles of Higher Education
by W. Robert J. Alexander and Raffaella Belloni
Sustainability 2024, 16(20), 8802; https://doi.org/10.3390/su16208802 - 11 Oct 2024
Cited by 1 | Viewed by 2293
Abstract
Over the last several decades, there has been an arms race to acquire credentials as higher education has shifted from an elitist system to mass education. From an individual perspective, given the higher education system and labor market conditions, it is rational to pursue advanced qualifications. However, whether the education system delivers improvements in human capital or is principally a signaling mechanism is questionable. Estimates of the proportion of labor market rewards due to signaling range as high as 80%, suggesting that education is not only expensive but also inefficient. In an increasingly transactional environment in which education providers are highly motivated by financial considerations, this situation is only likely to be exacerbated by rapid developments in artificial intelligence (AI). The use of AI has the potential to make learning more effective, but given that many students see credential acquisition as transactional, it may reduce both human capital and the value of the signaling effect. If the credibility of the credentials offered is further damaged, the higher education sector in its present form and scale may well be unsustainable. We examine the evidence on credential inflation, returns to education, and the mismatch of graduates to jobs before analyzing how AI is likely to affect these trends. We then suggest possible responses of prospective students, education providers, and employers to the growing adoption of AI in both education and the workplace. We conclude that the current offerings of generalist degrees, as opposed to vocational qualifications, are not sustainable and that to survive, even in a downsized form, the sector must respond to this disruptive technology by changing both the nature of its offerings and its methods of ensuring that the credentials it offers reflect genuine student learning.
(This article belongs to the Section Sustainable Education and Approaches)

9 pages, 471 KiB  
Review
Generative Artificial Intelligence in Tertiary Education: Assessment Redesign Principles and Considerations
by Che Yee Lye and Lyndon Lim
Educ. Sci. 2024, 14(6), 569; https://doi.org/10.3390/educsci14060569 - 26 May 2024
Cited by 11 | Viewed by 5643
Abstract
The emergence of generative artificial intelligence (AI) such as ChatGPT has sparked significant assessment concerns within tertiary education. These concerns have largely revolved around academic integrity issues among students, such as plagiarism and cheating. Nonetheless, it is also critical to consider that generative AI models trained on information retrieved from the Internet can produce biased and discriminatory outputs, and that hallucination in the large language models underlying generative AI can yield made-up and untruthful outputs. This article considers the affordances and challenges of generative AI specific to assessments within tertiary education. It illustrates considerations for assessment redesign in the presence of generative AI and proposes the Against, Avoid and Adopt (AAA) principle to rethink and redesign assessments. It argues that more generative AI tools will emerge at an exponential pace, and hence that engaging in an arms race against generative AI and policing the use of these technologies may not address the fundamental issues in assessments.
(This article belongs to the Special Issue Teaching and Learning with Generative AI)

23 pages, 1105 KiB  
Article
An Ontology-Based Cybersecurity Framework for AI-Enabled Systems and Applications
by Davy Preuveneers and Wouter Joosen
Future Internet 2024, 16(3), 69; https://doi.org/10.3390/fi16030069 - 22 Feb 2024
Cited by 7 | Viewed by 4845
Abstract
Ontologies have the potential to play an important role in the cybersecurity landscape as they are able to provide a structured and standardized way to semantically represent and organize knowledge about a domain of interest. They help in unambiguously modeling the complex relationships between various cybersecurity concepts and properties. Leveraging this knowledge, they provide a foundation for designing more intelligent and adaptive cybersecurity systems. In this work, we propose an ontology-based cybersecurity framework that extends well-known cybersecurity ontologies to specifically model and manage threats imposed on applications, systems, and services that rely on artificial intelligence (AI). More specifically, our efforts focus on documenting prevalent machine learning (ML) threats and countermeasures, including the mechanisms by which emerging attacks circumvent existing defenses as well as the arms race between them. In the ever-expanding AI threat landscape, the goal of this work is to systematically formalize a body of knowledge intended to complement existing taxonomies and threat-modeling approaches of applications empowered by AI and to facilitate their automated assessment by leveraging enhanced reasoning capabilities.
(This article belongs to the Section Cybersecurity)

24 pages, 9062 KiB  
Review
Unveil the Secret of the Bacteria and Phage Arms Race
by Yuer Wang, Huahao Fan and Yigang Tong
Int. J. Mol. Sci. 2023, 24(5), 4363; https://doi.org/10.3390/ijms24054363 - 22 Feb 2023
Cited by 24 | Viewed by 8459
Abstract
Bacteria have developed different mechanisms to defend against phages, such as preventing phage adsorption to the host cell surface; blocking phage nucleic acid injection through superinfection exclusion (Sie); interfering with the replication of phage genes in the host via restriction-modification (R-M) systems, CRISPR-Cas, abortive infection (Abi), and other defense systems; and enhancing phage resistance through quorum sensing (QS). At the same time, phages have evolved a variety of counter-defense strategies, such as degrading the extracellular polymeric substances (EPS) that mask receptors, or recognizing new receptors, thereby regaining the ability to adsorb to host cells; modifying their own genes, or evolving proteins that inhibit the R-M complex, to prevent R-M systems from recognizing phage genes; resisting CRISPR-Cas systems through gene mutation, by building nucleus-like compartments, or by evolving anti-CRISPR (Acr) proteins; and suppressing QS by producing antirepressors or by blocking the binding of autoinducers (AIs) to their receptors. This arms race between bacteria and phages drives their coevolution. This review details the anti-phage strategies of bacteria and the anti-defense strategies of phages, providing basic theoretical support for phage therapy and a deeper understanding of the interaction mechanisms between bacteria and phages.
(This article belongs to the Special Issue Bacteriophage Biology: From Genomics to Therapy)

20 pages, 881 KiB  
Article
Deterring Deepfake Attacks with an Electrical Network Frequency Fingerprints Approach
by Deeraj Nagothu, Ronghua Xu, Yu Chen, Erik Blasch and Alexander Aved
Future Internet 2022, 14(5), 125; https://doi.org/10.3390/fi14050125 - 21 Apr 2022
Cited by 13 | Viewed by 4579
Abstract
With the fast development of Fifth-/Sixth-Generation (5G/6G) communications and the Internet of Video Things (IoVT), a broad range of mega-scale data applications has emerged (e.g., all-weather, all-time video). These network-based applications depend heavily on reliable, secure, and real-time audio and/or video streams (AVSs), which consequently become a target for attackers. While modern Artificial Intelligence (AI) technology is integrated with many multimedia applications to enhance them, the development of Generative Adversarial Networks (GANs) has also led to deepfake attacks that enable the manipulation of audio or video streams to mimic any targeted person. Deepfake attacks are highly disturbing and can mislead the public, raising further policy, technological, social, and legal challenges. Instead of engaging in an endless AI arms race of “fighting fire with fire”, in which new Deep Learning (DL) algorithms keep making fake AVSs more realistic, this paper proposes a novel approach that tackles the challenging problem of detecting deepfaked AVS data by leveraging the Electrical Network Frequency (ENF) signal embedded in the AVS data as a fingerprint. Under low Signal-to-Noise Ratio (SNR) conditions, Short-Time Fourier Transform (STFT) and Multiple Signal Classification (MUSIC) spectrum estimation techniques are investigated to detect the Instantaneous Frequency (IF) of interest. For reliable authentication, the ENF signal embedded through an artificial power source in a noisy environment is enhanced using a spectral combination technique and a Robust Filtering Algorithm (RFA). The proposed signal estimation workflow is deployed on a continuous audio/video input for resilience against frame manipulation attacks, and a Singular Spectrum Analysis (SSA) approach is selected to minimize the false positive rate of signal correlations. Extensive experimental analysis of reliable edge-based ENF estimation in deepfaked multimedia recordings is provided to address the need to distinguish artificially altered media content.
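The core idea behind the abstract's ENF fingerprint can be illustrated with a toy sketch: mains hum leaks into recordings at a nominal 50/60 Hz, and tracking its instantaneous frequency per analysis frame yields a trajectory that manipulated segments disrupt. This is not the authors' implementation; the function name and parameters are illustrative, and a plain FFT peak-pick stands in for the paper's STFT/MUSIC estimators.

```python
import numpy as np

def estimate_enf(signal, fs, nominal=60.0, band=1.0, frame_len=4096, hop=2048):
    """Track the instantaneous ENF of a recording (illustrative sketch).

    For each analysis frame, take the FFT magnitude spectrum and pick the
    peak frequency inside a narrow band around the nominal mains frequency
    (60 Hz in North America, 50 Hz in most other grids).
    """
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
    mask = (freqs >= nominal - band) & (freqs <= nominal + band)
    trajectory = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        # Window the frame to reduce spectral leakage before the FFT.
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        mag = np.abs(np.fft.rfft(frame))
        trajectory.append(freqs[mask][np.argmax(mag[mask])])
    return np.array(trajectory)

# Synthetic check: a weak 59.98 Hz hum buried in broadband noise.
rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 10, 1.0 / fs)
sig = 0.1 * np.sin(2 * np.pi * 59.98 * t) + 0.05 * rng.standard_normal(t.size)
traj = estimate_enf(sig, fs)
```

In a real pipeline, a claimed recording's trajectory would be correlated against the reference grid-frequency log for the alleged time and place; frame splices or GAN-resynthesized segments break that correlation.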
(This article belongs to the Special Issue 6G Wireless Channel Measurements and Models: Trends and Challenges)

44 pages, 2868 KiB  
Review
Comprehensive Survey of Using Machine Learning in the COVID-19 Pandemic
by Nora El-Rashidy, Samir Abdelrazik, Tamer Abuhmed, Eslam Amer, Farman Ali, Jong-Wan Hu and Shaker El-Sappagh
Diagnostics 2021, 11(7), 1155; https://doi.org/10.3390/diagnostics11071155 - 24 Jun 2021
Cited by 56 | Viewed by 7873
Abstract
Since December 2019, the global population has faced the rapid spread of coronavirus disease (COVID-19). With the accelerating number of infected cases, the World Health Organization (WHO) declared COVID-19 a pandemic that puts a heavy burden on healthcare sectors in almost every country. The potential of artificial intelligence (AI) in this context is difficult to ignore. AI companies have been racing to develop innovative tools that help arm the world against this pandemic and minimize the disruption it may cause. The main objective of this study is to survey the decisive role of AI as a technology used to fight the COVID-19 pandemic. Five significant applications of AI for COVID-19 were found, including (1) COVID-19 diagnosis using various data types (e.g., images, sound, and text); (2) estimation of the possible future spread of the disease based on currently confirmed cases; (3) association between COVID-19 infection and patient characteristics; (4) vaccine development and drug interaction; and (5) development of supporting applications. This study also introduces a comparison of current COVID-19 datasets. Based on the limitations of the current literature, this review highlights open research challenges that could inspire the future application of AI to COVID-19.

24 pages, 2714 KiB  
Article
Risk Capital and Emerging Technologies: Innovation and Investment Patterns Based on Artificial Intelligence Patent Data Analysis
by Roberto S. Santos and Lingling Qin
J. Risk Financial Manag. 2019, 12(4), 189; https://doi.org/10.3390/jrfm12040189 - 14 Dec 2019
Cited by 20 | Viewed by 9590
Abstract
The promise of artificial intelligence (AI) to drive economic growth and improve quality of life has ushered in a new AI arms race. Investments of risk capital fuel this emerging technology. We examine the role that venture capital (VC) and corporate investments of risk capital play in the emergence of AI-related technologies. Drawing upon a dataset of 29,954 U.S. patents from 1970 to 2018, including 1484 U.S. patents granted to 224 VC-backed start-ups, we identify AI-related innovation and investment characteristics. Furthermore, we develop a new measure of knowledge coupling at the firm level and use this to explore how knowledge coupling influences VC risk capital decisions in emerging AI technologies. Our findings show that knowledge coupling is a better predictor of VC investment in emerging technologies than the breadth of a patent’s technological domains. Furthermore, our results show that there are differences in knowledge coupling between private start-ups and public corporations. These findings enhance our understanding of what types of AI innovations are more likely to be selected by VCs and have important implications for our understanding of how risk capital induces the emergence of new technologies.
(This article belongs to the Special Issue Venture Capital and Private Equity)

23 pages, 284 KiB  
Article
Global Solutions vs. Local Solutions for the AI Safety Problem
by Alexey Turchin, David Denkenberger and Brian Patrick Green
Big Data Cogn. Comput. 2019, 3(1), 16; https://doi.org/10.3390/bdcc3010016 - 20 Feb 2019
Cited by 12 | Viewed by 7814
Abstract
There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided into four groups: 1. No AI: AGI technology is banned or its use is otherwise prevented; 2. One AI: the first superintelligent AI is used to prevent the creation of any others; 3. Net of AIs as AI police: a balance is created between many AIs, so they evolve as a net and can prevent any rogue AI from taking over the world; 4. Humans inside AI: humans are augmented or part of AI. We explore many ideas, both old and new, regarding global solutions for AI safety. They include changing the number of AI teams, different forms of an “AI Nanny” (a non-self-improving global control AI system able to prevent the creation of dangerous AIs), selling AI safety solutions, and sending messages to future AI. Not every local solution scales to a global solution, or does so ethically and safely. The choice of the best local solution should include an understanding of the ways in which it will be scaled up. Human-AI teams or a superintelligent AI Service as suggested by Drexler may be examples of such ethically scalable local solutions, but the final choice depends on some unknown variables such as the speed of AI progress.
(This article belongs to the Special Issue Artificial Superintelligence: Coordination & Strategy)