Search Results (21)

Search Parameters:
Keywords = AI-assisted cybersecurity

25 pages, 3597 KB  
Article
Social Engineering Attacks Using Technical Job Interviews: Real-Life Case Analysis and AI-Assisted Mitigation Proposals
by Tomás de J. Mateo Sanguino
Information 2026, 17(1), 98; https://doi.org/10.3390/info17010098 - 18 Jan 2026
Abstract
Technical job interviews have become a vulnerable environment for social engineering attacks, particularly when they involve direct interaction with malicious code. In this context, the present manuscript investigates an exploratory case study, aiming to provide an in-depth analysis of a single incident rather than seeking to generalize statistical evidence. The study examines a real-world covert attack conducted through a simulated interview, identifying the technical and psychological elements that contribute to its effectiveness, assessing the performance of artificial intelligence (AI) assistants in early detection and proposing mitigation strategies. To this end, a methodology was implemented that combines discursive reconstruction of the attack, code exploitation and forensic analysis. The experimental phase, primarily focused on evaluating 10 large language models (LLMs) against a fragment of obfuscated code, reveals that the malware initially evaded detection by 62 antivirus engines, while assistants such as GPT 5.1, Grok 4.1 and Claude Sonnet 4.5 successfully identified malicious patterns and suggested operational countermeasures. The discussion highlights how the apparent legitimacy of platforms like LinkedIn, Calendly and Bitbucket, along with time pressure and technical familiarity, act as catalysts for deception. Based on these findings, the study suggests that LLMs may play a role in the early detection of threats, offering a potentially valuable avenue to enhance security in technical recruitment processes by enabling the timely identification of malicious behavior. To the best of the authors' knowledge, this represents the first academically documented case of its kind analyzed from an interdisciplinary perspective. Full article
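The obfuscated-code screening that the LLM assistants performed can be loosely illustrated with a rule-based sketch. This is a hypothetical toy scanner, not the paper's methodology (the study prompted LLM assistants rather than matching patterns), and the indicator list is invented for illustration:

```python
import base64
import re

# Hypothetical static indicators for spotting obfuscated payloads in a
# code sample shared during a "technical interview". Invented for
# illustration; real malware uses far subtler tricks.
SUSPICIOUS_PATTERNS = [
    (r"\beval\s*\(", "dynamic code execution"),
    (r"\bexec\s*\(", "dynamic code execution"),
    (r"[A-Za-z0-9+/]{120,}={0,2}", "long base64-like blob"),
    (r"fromCharCode", "character-code decoding"),
    (r"\\x[0-9a-fA-F]{2}(\\x[0-9a-fA-F]{2}){10,}", "long hex-escape run"),
]

def obfuscation_report(source: str) -> list[str]:
    """Return human-readable reasons a snippet looks obfuscated."""
    return [reason for pattern, reason in SUSPICIOUS_PATTERNS
            if re.search(pattern, source)]

# A toy payload: eval of a long base64 blob, a classic obfuscation combo.
payload = "eval(atob('" + base64.b64encode(b"A" * 120).decode() + "'))"
print(obfuscation_report(payload))
```

Fixed pattern lists like this are easy to evade, which is one reason the study explores LLM assistants that can reason about unfamiliar obfuscation instead.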

26 pages, 911 KB  
Article
Pedagogical Transformation Using Large Language Models in a Cybersecurity Course
by Rodolfo Ostos, Vanessa G. Félix, Luis J. Mena, Homero Toral-Cruz, Alberto Ochoa-Brust, Apolinar González-Potes, Ramón A. Félix, Julio C. Ramírez Pacheco, Víctor Flores and Rafael Martínez-Peláez
AI 2026, 7(1), 25; https://doi.org/10.3390/ai7010025 - 13 Jan 2026
Viewed by 263
Abstract
Large Language Models (LLMs) are increasingly used in higher education, but their pedagogical role in fields like cybersecurity remains under-investigated. This research explores integrating LLMs into a university cybersecurity course using a designed pedagogical approach based on active learning, problem-based learning (PBL), and computational thinking (CT). Instead of viewing LLMs as definitive sources of knowledge, the framework sees them as cognitive tools that support reasoning, clarify ideas, and assist technical problem-solving while maintaining human judgment and verification. The study uses a qualitative, practice-based case study over three semesters. It features four activities focusing on understanding concepts, installing and configuring tools, automating procedures, and clarifying terminology, all incorporating LLM use in individual and group work. Data collection involved classroom observations, team reflections, and iterative improvements guided by action research. Results show that LLMs can provide valuable, customized support when students actively engage in refining, validating, and solving problems through iteration. LLMs are especially helpful for clarifying concepts and explaining procedures during moments of doubt or failure. Still, common issues like incomplete instructions, mismatched context, and occasional errors highlight the importance of verifying LLM outputs with trusted sources. Interestingly, these limitations often act as teaching opportunities, encouraging critical thinking crucial in cybersecurity. Ultimately, this study offers empirical evidence of human–AI collaboration in education, demonstrating how LLMs can enrich active learning. Full article
(This article belongs to the Special Issue How Is AI Transforming Education?)

23 pages, 1356 KB  
Article
Digital Transformation in Accounting: An Assessment of Automation and AI Integration
by Carlos Sampaio and Rui Silva
Int. J. Financial Stud. 2025, 13(4), 206; https://doi.org/10.3390/ijfs13040206 - 5 Nov 2025
Viewed by 6078
Abstract
This study conducts a bibliometric analysis of the scientific literature on digital, automated, and AI-assisted accounting systems. The data include documents listed in the Web of Science and Scopus databases. The analysis identifies the main authors, countries/territories, sources, and thematic trends. The results reveal that the scientific output within this research field has increased since 2018, emphasising the integration of artificial intelligence (AI), robotic process automation, and blockchain technologies in accounting. The findings also suggest that automation enhances efficiency, accuracy, and reliability while also raising concerns about ethics, cybersecurity, and job displacement. This study traces accounting research from early discussions on information systems and automation to current topics such as digital transformation, sustainability, and intelligent decision-making. Furthermore, it contributes to the understanding of the scientific development of digital accounting and addresses future research directions involving AI and machine learning for predictive analytics and fraud detection, blockchain for secure and transparent accounting systems, sustainability through the integration of ESG reporting, and interdisciplinary collaboration between accounting, computer science, and business management to develop intelligent financial systems. The findings provide insights for academics and practitioners aiming to understand the ongoing digital transformation of accounting systems. Full article
(This article belongs to the Special Issue Technologies and Financial Innovation)

30 pages, 1774 KB  
Review
A Systematic Literature Review on AI-Based Cybersecurity in Nuclear Power Plants
by Marianna Lezzi, Luigi Martino, Ernesto Damiani and Chan Yeob Yeun
J. Cybersecur. Priv. 2025, 5(4), 79; https://doi.org/10.3390/jcp5040079 - 1 Oct 2025
Viewed by 2566
Abstract
Cybersecurity management plays a key role in preserving the operational security of nuclear power plants (NPPs), ensuring service continuity and system resilience. The growing number of sophisticated cyber-attacks against NPPs requires cybersecurity experts to detect, analyze, and defend systems and data from cyber threats in near real time. However, managing a large number of attacks in a timely manner is impossible without the support of Artificial Intelligence (AI). This study recognizes the need for a structured and in-depth analysis of the literature in the context of NPPs, referring to the role of AI technology in supporting cyber risk assessment processes. For this reason, a systematic literature review (SLR) is adopted to address the following areas of analysis: (i) critical assets to be preserved from cyber-attacks through AI, (ii) security vulnerabilities and cyber threats managed using AI, (iii) cyber risks and business impacts that can be assessed by AI, and (iv) AI-based security countermeasures to mitigate cyber risks. The SLR procedure follows a macro-step approach that includes review planning, search execution and document selection, and document analysis and results reporting, with the aim of providing an overview of the key dimensions of AI-based cybersecurity in NPPs. The structured analysis of the literature allows for the creation of an original tabular outline of emerging evidence (in the fields of critical assets, security vulnerabilities and cyber threats, cyber risks and business impacts, and AI-based security countermeasures) that can help guide AI-based cybersecurity management in NPPs and future research directions. From an academic perspective, this study lays the foundation for understanding and consciously addressing cybersecurity challenges through the support of AI; from a practical perspective, it aims to assist managers, practitioners, and policymakers in making more informed decisions to improve the resilience of digital infrastructure. Full article
(This article belongs to the Section Security Engineering & Applications)

32 pages, 2361 KB  
Article
Exploring the Use and Misuse of Large Language Models
by Hezekiah Paul D. Valdez, Faranak Abri, Jade Webb and Thomas H. Austin
Information 2025, 16(9), 758; https://doi.org/10.3390/info16090758 - 1 Sep 2025
Viewed by 1833
Abstract
Language modeling has evolved from simple rule-based systems into complex assistants capable of tackling a multitude of tasks. State-of-the-art large language models (LLMs) are capable of scoring highly on proficiency benchmarks, and as a result have been deployed across industries to increase productivity and convenience. However, the prolific nature of such tools has provided threat actors with the ability to leverage them for attack development. Our paper describes the current state of LLMs, their availability, and their role in benevolent and malicious applications. In addition, we propose how an LLM can be combined with text-to-speech (TTS) voice cloning to create a framework capable of carrying out social engineering attacks. Our case study analyzes the realism of two different open-source TTS models, Tortoise TTS and Coqui XTTS-v2, by calculating similarity scores between generated and real audio samples from four participants. Our results demonstrate that Tortoise is able to generate realistic voice-clone audio for native English-speaking males, which indicates that easily accessible resources can be leveraged to create deceptive social engineering attacks. As such tools become more advanced, defenses such as awareness, detection, and red teaming may not be able to keep up with dangerously equipped adversaries. Full article
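The similarity scoring in this case study reduces, at its core, to comparing fixed-length speaker embeddings. A minimal sketch, assuming invented toy vectors in place of a real speaker encoder's output:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy speaker embeddings standing in for real encoder output
# (a real pipeline would extract these from audio with a speaker encoder).
real_voice   = [0.9, 0.1, 0.3, 0.7]
cloned_voice = [0.8, 0.2, 0.3, 0.6]  # a convincing clone scores near 1.0
other_voice  = [0.1, 0.9, 0.8, 0.1]  # a different speaker scores much lower

print(round(cosine_similarity(real_voice, cloned_voice), 3))
print(round(cosine_similarity(real_voice, other_voice), 3))
```

A high clone-to-original score relative to unrelated speakers is exactly what makes the attack framework described above deceptive.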

34 pages, 2219 KB  
Review
The Role of the Industrial IoT in Advancing Electric Vehicle Technology: A Review
by Obaida AlHousrya, Aseel Bennagi, Petru A. Cotfas and Daniel T. Cotfas
Appl. Sci. 2025, 15(17), 9290; https://doi.org/10.3390/app15179290 - 24 Aug 2025
Cited by 1 | Viewed by 2670
Abstract
The use of the Industrial Internet of Things (IIoT) within the domain of electric vehicles signifies a paradigm shift toward advanced, integrated, and optimized transport systems. This study thoroughly investigates the pivotal role of the Industrial Internet of Things in elevating various features of electric vehicle technology, comprising predictive maintenance, vehicle connectivity, personalized user management, energy and fleet optimization, and independent functionalities. Key IIoT applications, such as Vehicle-to-Grid integration and advanced driver-assistance systems, are examined alongside case studies highlighting real-world implementations. The findings demonstrate that IIoT-enabled advanced charging stations lower charging time, while grid stabilization lowers electricity demand, boosting functional sustainability. Battery Management Systems (BMSs) prolong battery lifespan and minimize maintenance intervals. The integration of the IIoT with artificial intelligence (AI) optimizes route planning, driving behavior, and energy consumption, resulting in safer and more efficient autonomous EV operations. Various issues, such as cybersecurity, connectivity, and integration with outdated systems, are also tackled in this study, along with emerging trends powered by artificial intelligence, machine learning, and new IIoT technologies. This study emphasizes the capacity of the IIoT to accelerate the worldwide shift to eco-friendly and smart transportation solutions by evaluating the overlap of IIoT and EVs. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

13 pages, 733 KB  
Proceeding Paper
AI-Based Assistant for SORA: Approach, Interaction Logic, and Perspectives for Cybersecurity Integration
by Anton Puliyski and Vladimir Serbezov
Eng. Proc. 2025, 100(1), 65; https://doi.org/10.3390/engproc2025100065 - 1 Aug 2025
Viewed by 1286
Abstract
This article presents the design, development, and evaluation of an AI-based assistant tailored to support users in the application of the Specific Operations Risk Assessment (SORA) methodology for unmanned aircraft systems. Built on a customized language model, the assistant was trained using system-level instructions with the goal of translating complex regulatory concepts into clear and actionable guidance. The approach combines structured definitions, contextualized examples, constrained response behavior, and references to authoritative sources such as JARUS and EASA. Rather than substituting expert or regulatory roles, the assistant provides process-oriented support, helping users understand and complete the various stages of risk assessment. The model’s effectiveness is illustrated through practical interaction scenarios, demonstrating its value across educational, operational, and advisory use cases, and its potential to contribute to the digital transformation of safety and compliance processes in the drone ecosystem. Full article

29 pages, 1626 KB  
Article
Cybersecurity for Analyzing Artificial Intelligence (AI)-Based Assistive Technology and Systems in Digital Health
by Abdullah M. Algarni and Vijey Thayananthan
Systems 2025, 13(6), 439; https://doi.org/10.3390/systems13060439 - 5 Jun 2025
Cited by 2 | Viewed by 3159
Abstract
Assistive technology (AT) is increasingly utilized across various sectors, including digital healthcare and sports education. E-learning plays a vital role in enabling students with special needs, particularly those in remote areas, to access education. However, as the adoption of AI-based AT systems expands, the associated cybersecurity challenges also grow. This study aims to examine the impact of AI-driven assistive technologies on cybersecurity in digital healthcare applications, with a focus on the potential vulnerabilities these technologies present. Methods: The proposed model focuses on enhancing AI-based AT through the implementation of emerging technologies used for security, risk management strategies, and a robust assessment framework. With these improvements, the AI-based Internet of Things (IoT) plays major roles within the AT. This model addresses the identification and mitigation of cybersecurity risks in AI-based systems, specifically in the context of digital healthcare applications. Results: The findings indicate that the application of the AI-based risk and resilience assessment framework significantly improves the security of AT systems, specifically those supporting e-learning for blind users. The model demonstrated measurable improvements in the robustness of cybersecurity in digital health, particularly in reducing cyber risks for AT users involved in e-learning environments. Conclusions: The proposed model provides a comprehensive approach to securing AI-based AT in digital healthcare applications. By improving the resilience of assistive systems, it minimizes cybersecurity risks for users, specifically blind individuals, and enhances the effectiveness of e-learning in sports education. Full article
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)

32 pages, 2549 KB  
Review
A Narrative Review of Systematic Reviews on the Applications of Social and Assistive Support Robots in the Health Domain
by Daniele Giansanti, Andrea Lastrucci, Antonio Iannone and Antonia Pirrera
Appl. Sci. 2025, 15(7), 3793; https://doi.org/10.3390/app15073793 - 30 Mar 2025
Cited by 4 | Viewed by 4448
Abstract
As the interest in social and assistive support robots (SASRs) grows, a review of 17 systematic reviews was conducted to assess their use in healthcare, emotional well-being, and therapy for diverse populations, including older adults, children, and individuals with autism and dementia. SASRs have demonstrated potential in alleviating depression, loneliness, anxiety, and stress, while also improving sleep and cognitive function. Despite these promising outcomes, challenges remain in identifying the most effective interventions, refining robot designs, and evaluating long-term impacts. User acceptance hinges on trustworthiness and empathy-driven design. Compared to earlier review studies, recent research emphasizes the ongoing significance of emotional engagement, the refinement of robot functionalities, and the need to address ethical issues such as privacy and autonomy through robust cybersecurity and data privacy measures. The field is gradually shifting towards a user-centered design approach, focusing on robots as tools to augment, rather than replace, human care. While SASRs offer substantial benefits for emotional well-being and therapeutic support, further research is crucial to enhance their effectiveness and address concerns about replacing human care. Algorethics (AI ethics), interdisciplinary collaboration, and standardization and training emerge as key priorities to ensure the responsible and sustainable deployment of SASRs in healthcare settings, reinforcing the importance of rigorous methodologies and ethical safeguards. Full article

23 pages, 1361 KB  
Article
Using Fuzzy Multi-Criteria Decision-Making as a Human-Centered AI Approach to Adopting New Technologies in Maritime Education in Greece
by Stefanos I. Karnavas, Ilias Peteinatos, Athanasios Kyriazis and Stavroula G. Barbounaki
Information 2025, 16(4), 283; https://doi.org/10.3390/info16040283 - 30 Mar 2025
Cited by 2 | Viewed by 2426
Abstract
The need to review maritime education has been highlighted in the relevant literature. Maritime curricula should incorporate recent technological advances, as well as address the needs of the maritime sector. In this paper, the Fuzzy Delphi Method (FDM) and the Fuzzy Analytic Hierarchy Process (FAHP) are utilized in order to propose a fuzzy multicriteria decision-making (MCDM) methodology that can be used to assess the importance of new technologies in maritime education and design a fuzzy evaluation model that can assist in maritime education policy-making. This study integrates the perspectives of the main maritime education stakeholders, namely, lecturers and maritime sector management. We collected data from a group of 19 experienced maritime professors and maritime business managers. The results indicate that new technologies such as artificial intelligence (AI), augmented and virtual reality (AR/VR), the Internet of Things (IoT), digital twins (DTs), and cybersecurity, as well as eLearning platforms, constitute a set of requirements that maritime education policies should meet by designing their curricula appropriately. This study suggests that fuzzy logic MCDM methods can be used as a human-centered AI approach for developing explainable education policy-making models that integrate stakeholder requirements and capture the subjectivity that is often inherent in their perspectives. Full article
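As an illustration of the FAHP machinery this abstract mentions, the sketch below applies Buckley's geometric-mean method to an invented two-criterion pairwise comparison of triangular fuzzy numbers. The matrix values and criterion names are assumptions for illustration, not the study's data:

```python
import math

# Triangular fuzzy numbers (l, m, u) for a 2x2 pairwise comparison of two
# hypothetical criteria, e.g. "AI tools" vs "cybersecurity training".
# Entry [i][j] encodes how much criterion i is preferred over criterion j.
comparisons = [
    [(1, 1, 1), (2, 3, 4)],          # criterion 1 vs criteria 1, 2
    [(1/4, 1/3, 1/2), (1, 1, 1)],    # criterion 2 vs criteria 1, 2
]

def fuzzy_geometric_means(matrix):
    """Buckley's method: component-wise fuzzy geometric mean of each row."""
    n = len(matrix)
    return [(math.prod(t[0] for t in row) ** (1 / n),
             math.prod(t[1] for t in row) ** (1 / n),
             math.prod(t[2] for t in row) ** (1 / n))
            for row in matrix]

def crisp_weights(matrix):
    """Defuzzify each fuzzy weight with the centroid (l+m+u)/3, then normalize."""
    gm = fuzzy_geometric_means(matrix)
    centroids = [(l + m + u) / 3 for l, m, u in gm]
    total = sum(centroids)
    return [c / total for c in centroids]

print([round(w, 3) for w in crisp_weights(comparisons)])
```

The fuzzy triples let each expert's "criterion 1 is about 3 times as important" carry its own uncertainty, which is the subjectivity-capturing property the abstract refers to.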

20 pages, 523 KB  
Article
Navigating the CISO’s Mind by Integrating GenAI for Strategic Cyber Resilience
by Šarūnas Grigaliūnas, Rasa Brūzgienė, Kęstutis Driaunys, Renata Danielienė, Ilona Veitaitė, Paulius Astromskis, Živilė Nemickienė, Dovilė Vengalienė, Audrius Lopata, Ieva Andrijauskaitė and Neringa Gaubienė
Electronics 2025, 14(7), 1342; https://doi.org/10.3390/electronics14071342 - 27 Mar 2025
Viewed by 1534
Abstract
AI-driven cyber threats are evolving faster than current defense mechanisms, complicating forensic investigations. As attacks grow more sophisticated, forensic methods struggle to analyze vast wearable device data, highlighting the need for an advanced framework to improve threat detection and responses. This paper presents a generative artificial intelligence (GenAI)-assisted framework that enhances cyberforensics and strengthens strategic cyber resilience, particularly for chief information security officers (CISOs). It addresses three key challenges: inefficient incident reconstruction, open-source intelligence (OSINT) limitations, and real-time decision-making difficulties. The framework integrates GenAI to automate routine tasks and cross-layers digital attributes from wearable devices with OSINT to provide a comprehensive understanding of malicious incidents. By synthesizing digital attributes and applying the 5W approach, the framework facilitates accurate incident reconstruction, enabling CISOs to respond to threats with improved precision. The proposed framework is validated through experimental testing involving publicly available wearable device datasets (e.g., GPS data, pairing and activity logs). The results show that GenAI enhances incident detection and reconstruction, increasing the accuracy and speed of CISOs’ responses to threats. The experimental evaluation demonstrates that our framework improves cyberforensics efficiency by streamlining the integration of digital attributes, reducing the incident reconstruction time and enhancing decision-making precision. The framework enhances cybersecurity resilience in critical infrastructures, although challenges remain regarding data privacy, accuracy and scalability. Full article

27 pages, 2467 KB  
Article
Enhancing Security Operations Center: Wazuh Security Event Response with Retrieval-Augmented-Generation-Driven Copilot
by Ismail, Rahmat Kurnia, Farid Widyatama, Ilham Mirwansyah Wibawa, Zilmas Arjuna Brata, Ukasyah, Ghitha Afina Nelistiani and Howon Kim
Sensors 2025, 25(3), 870; https://doi.org/10.3390/s25030870 - 31 Jan 2025
Cited by 4 | Viewed by 8151
Abstract
The sophistication of cyberthreats demands more efficient and intelligent tools to support Security Operations Centers (SOCs) in managing and mitigating incidents. To address this, we developed the Security Event Response Copilot (SERC), a system designed to assist analysts in responding to and mitigating security breaches more effectively. SERC integrates two core components: (1) security event data extraction using Retrieval-Augmented Generation (RAG) methods, and (2) LLM-based incident response guidance. This paper specifically utilizes Wazuh, an open-source Security Information and Event Management (SIEM) platform, as the foundation for capturing, analyzing, and correlating security events from endpoints. SERC leverages Wazuh’s capabilities to collect real-time event data and applies a RAG approach to retrieve context-specific insights from three vectorized data collections: incident response knowledge, the MITRE ATT&CK framework, and the NIST Cybersecurity Framework (CSF) 2.0. This integration bridges strategic risk management and tactical intelligence, enabling precise identification of adversarial tactics and techniques while adhering to best practices in cybersecurity. The results demonstrate the potential of combining structured threat intelligence frameworks with AI-driven models, empowered by Wazuh’s robust SIEM capabilities, to address the dynamic challenges faced by SOCs in today’s complex cybersecurity environment. Full article
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
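The retrieval step at the heart of a copilot like SERC can be sketched with a toy retriever. The knowledge snippets, alert text, and bag-of-words "embedding" below are hypothetical stand-ins; a real deployment would index the vectorized collections with a learned encoder and pull live events from Wazuh:

```python
import math
from collections import Counter

# Toy "vectorized collection": a few invented response-playbook snippets.
knowledge = [
    "brute force ssh login attempts: block source ip and rotate credentials",
    "ransomware file encryption detected: isolate host and restore backups",
    "phishing email reported: quarantine message and reset user password",
]

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' standing in for a learned encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing terms
    return dot / (math.sqrt(sum(v * v for v in a.values())) *
                  math.sqrt(sum(v * v for v in b.values())))

def retrieve(alert: str, k: int = 1) -> list[str]:
    """Return the k knowledge snippets most similar to a security alert."""
    q = embed(alert)
    ranked = sorted(knowledge, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Augment the generation prompt with the retrieved context.
alert = "multiple ssh login attempts from a single source ip"
prompt = f"Alert: {alert}\nContext: {retrieve(alert)[0]}\nSuggest a response."
print(prompt)
```

Grounding the LLM's answer in retrieved playbook text, rather than asking it cold, is what lets a RAG copilot stay aligned with frameworks such as MITRE ATT&CK and NIST CSF.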

30 pages, 1914 KB  
Review
Securing the Future of Railway Systems: A Comprehensive Cybersecurity Strategy for Critical On-Board and Track-Side Infrastructure
by Nisrine Ibadah, César Benavente-Peces and Marc-Oliver Pahl
Sensors 2024, 24(24), 8218; https://doi.org/10.3390/s24248218 - 23 Dec 2024
Cited by 10 | Viewed by 5486
Abstract
The growing prevalence of cybersecurity threats is a significant concern for railway systems, which rely on an extensive network of onboard and trackside sensors. These threats have the potential to compromise the safety of railway operations and the integrity of the railway infrastructure itself. This paper aims to examine the current cybersecurity measures in use, identify the key vulnerabilities that they address, and propose solutions for enhancing the security of railway infrastructures. The paper evaluates the effectiveness of existing security protocols by reviewing current standards, including IEC 62443 and NIST, as well as case histories of recent rail cyberattacks. Significant gaps have been identified, especially where modern and legacy systems need to be integrated. Weaknesses in communication protocols such as MVB, CAN and TCP/IP are identified. To address these challenges, the paper proposes a layered security framework specific to railways that incorporates continuous monitoring, risk-based cybersecurity modeling, AI-assisted threat detection, and stronger authentication methodologies. The aim of these recommendations is to improve the resilience of railway networks and ensure a safer, more secure infrastructure for future operations. Full article
(This article belongs to the Section Internet of Things)

38 pages, 11831 KB  
Article
CIPHER: Cybersecurity Intelligent Penetration-Testing Helper for Ethical Researcher
by Derry Pratama, Naufal Suryanto, Andro Aprila Adiputra, Thi-Thu-Huong Le, Ahmada Yusril Kadiptya, Muhammad Iqbal and Howon Kim
Sensors 2024, 24(21), 6878; https://doi.org/10.3390/s24216878 - 26 Oct 2024
Cited by 8 | Viewed by 7254
Abstract
Penetration testing, a critical component of cybersecurity, typically requires extensive time and effort to find vulnerabilities. Beginners in this field often benefit from collaborative approaches with the community or experts. To address this, we develop Cybersecurity Intelligent Penetration-testing Helper for Ethical Researchers (CIPHER), a large language model (LLM) specifically trained to assist in penetration testing tasks as a chatbot. Unlike software development, penetration testing involves domain-specific knowledge that is not widely documented or easily accessible, necessitating a specialized training approach for AI language models. CIPHER was trained using over 300 high-quality write-ups of vulnerable machines, hacking techniques, and documentation of open-source penetration testing tools augmented in an expert response structure. Additionally, we introduced the Findings, Action, Reasoning, and Results (FARR) Flow augmentation, a novel method to augment penetration testing write-ups to establish a fully automated pentesting simulation benchmark tailored for large language models. This approach fills a significant gap in traditional cybersecurity Q&A benchmarks and provides a realistic and rigorous standard for evaluating an LLM's technical knowledge, reasoning capabilities, and practical utility in dynamic penetration testing scenarios. In our assessments, CIPHER achieved the best overall performance in providing accurate suggestion responses compared to other open-source penetration testing models of similar size and even larger state-of-the-art models like Llama 3 70B and Qwen1.5 72B Chat, particularly on insane-difficulty machine setups. This demonstrates that the current capabilities of general LLMs are insufficient for effectively guiding users through the penetration testing process. We also discuss the potential for improvement through scaling and the development of better benchmarks using FARR Flow augmentation results. Full article
(This article belongs to the Section Internet of Things)

89 pages, 16650 KB  
Review
Video and Audio Deepfake Datasets and Open Issues in Deepfake Technology: Being Ahead of the Curve
by Zahid Akhtar, Thanvi Lahari Pendyala and Virinchi Sai Athmakuri
Forensic Sci. 2024, 4(3), 289-377; https://doi.org/10.3390/forensicsci4030021 - 13 Jul 2024
Cited by 19 | Viewed by 15482
Abstract
The revolutionary breakthroughs in Machine Learning (ML) and Artificial Intelligence (AI) are extensively being harnessed across a diverse range of domains, e.g., forensic science, healthcare, virtual assistants, cybersecurity, and robotics. On the flip side, they can also be exploited for negative purposes, like producing authentic-looking fake news that propagates misinformation and diminishes public trust. Deepfakes pertain to audio or visual multimedia contents that have been artificially synthesized or digitally modified through the application of deep neural networks. Deepfakes can be employed for benign purposes (e.g., refinement of face pictures for optimal magazine cover quality) or malicious intentions (e.g., superimposing faces onto explicit images/videos to harm individuals, or producing fake audio recordings of public figures making inflammatory statements to damage their reputation). With mobile devices and user-friendly audio and visual editing tools at hand, even non-experts can effortlessly craft intricate deepfakes and digitally altered audio and facial features. This presents challenges to contemporary computer forensic tools and human examiners, including common individuals and digital forensic investigators. There is a perpetual battle between attackers armed with deepfake generators and defenders utilizing deepfake detectors. This paper first comprehensively reviews existing image, video, and audio deepfake databases with the aim of propelling next-generation deepfake detectors for enhanced accuracy, generalization, robustness, and explainability. Then, the paper delves deeply into open challenges and potential avenues for research in the audio and video deepfake generation and mitigation field. The aspiration for this article is to complement prior studies and assist newcomers, researchers, engineers, and practitioners in gaining a deeper understanding and in the development of innovative deepfake technologies. Full article
(This article belongs to the Special Issue Human and Technical Drivers of Cybercrime)
