Search Results (69)

Search Parameters:
Authors = Luis Javier García Villalba ORCID = 0000-0001-7573-6272

28 pages, 7083 KiB  
Article
A Microsimulation Model for Sustainability and Detailed Adequacy Analysis of the Retirement Pension System
by Jaime Villanueva-García, Ignacio Moral-Arce and Luis Javier García Villalba
Mathematics 2025, 13(3), 443; https://doi.org/10.3390/math13030443 - 28 Jan 2025
Viewed by 1120
Abstract
The sustainability and adequacy of pension systems are central to public policy debates in aging societies. This paper introduces a novel microsimulation model with probabilistic behavior to assess these dual challenges in the Spanish pension system. The model employs a mixed-projection method, integrating a macro approach—using economic and demographic aggregates from official sources such as the Spanish Statistics Office (INE) and Eurostat—with a micro approach based on the Continuous Sample of Working Lives (MCVL) dataset from Spanish Social Security. This framework enables individual-level projections of key labor market variables, including work time, salary, and initial pensions, under diverse reform scenarios. The results demonstrate the model’s ability to predict initial pensions with high accuracy, providing detailed insights into adequacy by age, gender, and income levels, as well as distributional measures such as density functions and quantiles. Sustainability findings indicate that pension expenditures are projected to stabilize at 13.9% of Gross Domestic Product (GDP) by 2050. The proposed model provides a robust and versatile tool for policymakers, offering a comprehensive evaluation of the long-term impacts of pension reforms on both system sustainability and individual adequacy.
(This article belongs to the Special Issue Computational Economics and Mathematical Modeling)
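The mixed macro-micro projection idea can be illustrated with a minimal sketch (all names, rates, and rules here are hypothetical illustrations, not the paper's actual model): an aggregate growth rate drives each simulated salary path, employment spells are drawn probabilistically at the micro level, and the initial pension is an accrual rate applied to a late-career salary average.

```python
import random

def simulate_career(start_salary, years, growth_macro, p_employed=0.9, seed=None):
    """Project one worker's salary path: a macro growth aggregate applied to a
    micro-level path with probabilistic employment spells (0 salary if unemployed)."""
    rng = random.Random(seed)
    salaries = []
    salary = start_salary
    for _ in range(years):
        employed = rng.random() < p_employed
        salaries.append(salary if employed else 0.0)
        salary *= 1 + growth_macro
    return salaries

def initial_pension(salaries, accrual_rate=0.5, averaging_years=25):
    """Initial pension as an accrual rate applied to the average of the
    last `averaging_years` of (possibly zero) contributions."""
    base = salaries[-averaging_years:]
    return accrual_rate * sum(base) / len(base)

path = simulate_career(30000, 40, 0.02, seed=42)
print(round(initial_pension(path), 2))
```

In a full microsimulation, many such individual paths drawn from administrative data would be aggregated to obtain system-level expenditure projections.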

39 pages, 1833 KiB  
Article
Question–Answer Methodology for Vulnerable Source Code Review via Prototype-Based Model-Agnostic Meta-Learning
by Pablo Corona-Fraga, Aldo Hernandez-Suarez, Gabriel Sanchez-Perez, Linda Karina Toscano-Medina, Hector Perez-Meana, Jose Portillo-Portillo, Jesus Olivares-Mercado and Luis Javier García Villalba
Future Internet 2025, 17(1), 33; https://doi.org/10.3390/fi17010033 - 14 Jan 2025
Viewed by 1611
Abstract
In cybersecurity, identifying and addressing vulnerabilities in source code is essential for maintaining secure IT environments. Traditional static and dynamic analysis techniques, although widely used, often exhibit high false-positive rates, elevated costs, and limited interpretability. Machine Learning (ML)-based approaches aim to overcome these limitations but encounter challenges related to scalability and adaptability due to their reliance on large labeled datasets and their limited alignment with the requirements of secure development teams. These factors hinder their ability to adapt to rapidly evolving software environments. This study proposes an approach that integrates Prototype-Based Model-Agnostic Meta-Learning (Proto-MAML) with a Question-Answer (QA) framework that leverages the Bidirectional Encoder Representations from Transformers (BERT) model. By employing Few-Shot Learning (FSL), Proto-MAML identifies and mitigates vulnerabilities with minimal data requirements, aligning with the principles of the Secure Development Lifecycle (SDLC) and Development, Security, and Operations (DevSecOps). The QA framework allows developers to query vulnerabilities and receive precise, actionable insights, enhancing its applicability in dynamic environments that require frequent updates and real-time analysis. The model outputs are interpretable, promoting greater transparency in code review processes and enabling efficient resolution of emerging vulnerabilities. Proto-MAML demonstrates strong performance across multiple programming languages, achieving an average precision of 98.49%, recall of 98.54%, F1-score of 98.78%, and exact match rate of 98.78% in PHP, Java, C, and C++.
(This article belongs to the Collection Information Systems Security)
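The prototype step behind Proto-MAML can be sketched in a few lines (toy 2-D embeddings stand in for BERT representations; names and data are illustrative): each class prototype is the mean embedding of its few support examples, and queries are assigned to the nearest prototype.

```python
import numpy as np

def prototypes(support_emb, support_labels):
    """Class prototype = mean embedding of that class's support examples."""
    classes = sorted(set(support_labels))
    protos = np.stack([
        support_emb[np.array(support_labels) == c].mean(axis=0) for c in classes
    ])
    return classes, protos

def classify(query_emb, protos, classes):
    """Assign each query to the nearest prototype (Euclidean distance)."""
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=2)
    return [classes[i] for i in d.argmin(axis=1)]

# Toy 2-D "embeddings": two code classes, a few shots each (few-shot regime).
support = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
labels = ["safe", "safe", "vulnerable", "vulnerable"]
classes, protos = prototypes(support, labels)
print(classify(np.array([[0.05, 0.1], [1.0, 0.9]]), protos, classes))
```

The MAML part would additionally adapt the embedding network itself from a few gradient steps per task; the nearest-prototype rule above is the classification head.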

40 pages, 2488 KiB  
Article
Analysis of Autonomous Penetration Testing Through Reinforcement Learning and Recommender Systems
by Ariadna Claudia Moreno, Aldo Hernandez-Suarez, Gabriel Sanchez-Perez, Linda Karina Toscano-Medina, Hector Perez-Meana, Jose Portillo-Portillo, Jesus Olivares-Mercado and Luis Javier García Villalba
Sensors 2025, 25(1), 211; https://doi.org/10.3390/s25010211 - 2 Jan 2025
Cited by 2 | Viewed by 4131
Abstract
Conducting penetration testing (pentesting) in cybersecurity is a crucial turning point for identifying vulnerabilities within the framework of Information Technology (IT), where real malicious offensive behavior is simulated to identify potential weaknesses and strengthen preventive controls. Given the complexity of the tests, time constraints, and the specialized level of expertise required for pentesting, analysis and exploitation tools are commonly used. Although useful, these tools often introduce uncertainty in findings, resulting in high rates of false positives. To enhance the effectiveness of these tests, Machine Learning (ML) has been integrated, showing significant potential for identifying anomalies across various security areas through detailed detection of underlying malicious patterns. However, pentesting environments are unpredictable and intricate, requiring analysts to make extensive efforts to understand, explore, and exploit them. This study considers these challenges, proposing a recommendation system based on a context-rich, vocabulary-aware transformer capable of processing questions related to the target environment and offering responses based on necessary pentest batteries evaluated by a Reinforcement Learning (RL) estimator. This RL component assesses optimal attack strategies based on previously learned data and dynamically explores additional attack vectors. The system achieved an F1 score and an Exact Match rate over 97.0%, demonstrating its accuracy and effectiveness in selecting relevant pentesting strategies.
(This article belongs to the Special Issue Sensing and Machine Learning Control: Progress and Applications)

23 pages, 500 KiB  
Article
Threading Statistical Disclosure Attack with EM: An Algorithm for Revealing Identity in Anonymous Communication Networks
by Alejandra Guadalupe Silva-Trujillo, Luis Yozil Zamarrón Briceño, Juan Carlos Cuevas-Tello, Pedro David Arjona-Villicaña and Luis Javier García Villalba
Appl. Sci. 2024, 14(23), 11237; https://doi.org/10.3390/app142311237 - 2 Dec 2024
Viewed by 1053
Abstract
Messages sent across multiple platforms can be correlated to infer users’ attitudes, behaviors, preferences, lifestyles, and more. Therefore, research on anonymous communication systems has intensified in the last few years. This research introduces a new algorithm, Threading Statistical Disclosure Attack with EM (TSDA-EM), that employs real-world data to reveal communication’s behavior in an anonymous social network. In this study, we utilize a network constructed from email exchanges to represent interactions between individuals within an institution. The proposed algorithm is capable of identifying communication patterns within a mixed network, even under the observation of a global passive attacker. By employing multi-threading, this implementation reduced the average execution time by a factor of five when using a dataset with a large number of participants. Additionally, it has markedly improved classification accuracy, detecting more than 79% of users’ communications in large networks and more than 95% in small ones.
(This article belongs to the Section Computing and Artificial Intelligence)

30 pages, 1096 KiB  
Article
A Secure Approach Out-of-Band for e-Bank with Visual Two-Factor Authorization Protocol
by Laerte Peotta de Melo, Dino Macedo Amaral, Robson de Oliveira Albuquerque, Rafael Timóteo de Sousa Júnior, Ana Lucila Sandoval Orozco and Luis Javier García Villalba
Cryptography 2024, 8(4), 51; https://doi.org/10.3390/cryptography8040051 - 11 Nov 2024
Cited by 1 | Viewed by 2186
Abstract
The article presents an innovative approach for secure authentication in internet banking transactions, utilizing an Out-of-Band visual two-factor authorization protocol. With the increasing rise of cyber attacks and fraud, new security models are needed that ensure the integrity, authenticity, and confidentiality of financial transactions. The identified gap lies in the inability of traditional authentication methods, such as TANs and tokens, to provide security in untrusted terminals. The proposed solution is the Dynamic Authorization Protocol (DAP), which uses mobile devices to validate transactions through visual codes, such as QR codes. Each transaction is assigned a unique associated code, and the challenge must be responded to within 120 s. The customer initiates the transaction on a computer and independently validates it on their mobile device using an out-of-band channel to prevent attacks such as phishing and man-in-the-middle. The methodology involves implementing a prototype in Java ME for Android devices and a Java application server, creating a practical, low-computational-cost system, accessible for use across different operating systems and devices. The protocol was tested in real-world scenarios, focusing on ensuring transaction integrity and authenticity. The results show a successful implementation at Banco do Brasil, with 3.6 million active users, demonstrating the efficiency of the model over 12 years of use without significant vulnerabilities. The DAP protocol provides a robust and effective solution for securing banking transactions and can be extended to other authentication environments, such as payment terminals and point-of-sale devices.
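A minimal sketch of the out-of-band challenge idea (hypothetical key handling and code format, not the DAP implementation described in the paper): the server binds a short keyed code to the transaction, and the mobile side accepts it only if it matches and falls within the 120 s window.

```python
import hmac, hashlib, time

WINDOW_SECONDS = 120  # per the protocol, each challenge expires after 120 s

def make_challenge(key: bytes, txn_id: str, issued_at: float) -> str:
    """Server side: derive a one-time code bound to this transaction
    (in DAP it would be shown to the user as a visual code such as a QR)."""
    msg = f"{txn_id}|{int(issued_at)}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()[:8]

def verify(key: bytes, txn_id: str, issued_at: float, code: str, now: float) -> bool:
    """Out-of-band side: accept only a matching, unexpired code."""
    if now - issued_at > WINDOW_SECONDS:
        return False
    expected = make_challenge(key, txn_id, issued_at)
    return hmac.compare_digest(expected, code)

key = b"shared-secret"
t0 = time.time()
code = make_challenge(key, "TXN-001", t0)
print(verify(key, "TXN-001", t0, code, t0 + 60))   # within the window
print(verify(key, "TXN-001", t0, code, t0 + 200))  # expired
```

Validating the code on a second device over a separate channel is what defeats a compromised terminal: the attacker would need both the session and the mobile secret within the time window.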

44 pages, 2748 KiB  
Article
Ontology Development for Asset Concealment Investigation: A Methodological Approach and Case Study in Asset Recovery
by José Alberto Sousa Torres, Daniel Alves da Silva, Robson de Oliveira Albuquerque, Georges Daniel Amvame Nze, Ana Lucila Sandoval Orozco and Luis Javier García Villalba
Appl. Sci. 2024, 14(21), 9654; https://doi.org/10.3390/app14219654 - 22 Oct 2024
Viewed by 1510
Abstract
The concealment of assets is a critical challenge in financial fraud and asset recovery investigations, posing significant obstacles for creditors and regulatory authorities. National governments commonly possess the necessary data for detecting and combating this type of fraud, typically related to personal data and asset ownership. However, this information is often dispersed across different departments within the same government and sometimes in databases shared by other countries. This leads to difficulty semantically integrating this large amount of data in various formats and correlating entities through identifying hidden relationships, which are essential in this type of analysis. In this regard, this work proposes an ontology to support the data integration process in the domain of asset concealment and recovery and fill the gap in the existence of a public ontology for this domain. The applicability of this ontology in the context of integration between data from different departments and countries was validated. The use of the ontology in a pilot project in the context of a tool for investigating this type of fraud was conducted with a Brazilian government agency, and the users validated its applicability. Finally, a new method for constructing ontologies is proposed. The proposed process was evaluated during the asset concealment ontology building and proved to be more suitable than the similar processes analyzed concerning the partial reuse of existing ontologies and the construction of ontologies for data with a transnational scope.

32 pages, 1681 KiB  
Review
Trust Evaluation Techniques for 6G Networks: A Comprehensive Survey with Fuzzy Algorithm Approach
by Elmira Saeedi Taleghani, Ronald Iván Maldonado Valencia, Ana Lucila Sandoval Orozco and Luis Javier García Villalba
Electronics 2024, 13(15), 3013; https://doi.org/10.3390/electronics13153013 - 31 Jul 2024
Cited by 5 | Viewed by 2492
Abstract
Sixth-generation (6G) networks are poised to support an array of advanced technologies and to deliver high-quality and secure services. However, ensuring robust security, privacy protection, operational efficiency, and superior service delivery poses significant challenges. In this context, trust emerges as a foundational element that is critical for addressing the multifaceted challenges inherent in 6G networks. This review article comprehensively examines trust concepts, methodologies, and techniques that are vital for establishing and maintaining a secure and reliable 6G ecosystem. Beginning with an overview of the trust problem in 6G networks, this study underscores its pivotal role in navigating the network’s complexities. It proceeds to explore the conceptual frameworks underpinning trust and discuss various trust models tailored to the unique demands of 6G networks. Moreover, this article surveys a range of scholarly works presenting diverse techniques for evaluating trust by using the fuzzy logic algorithm, which is essential for ensuring the integrity and resilience of 6G networks. Through a meticulous analysis of these techniques, this study elucidates their technical nuances, advantages, and limitations. By offering a comprehensive assessment of trust evaluation methodologies, this review facilitates informed decision making in the design and implementation of secure and trustworthy 6G networks.
(This article belongs to the Special Issue Smart Communication and Networking in the 6G Era)
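A toy example of fuzzy trust evaluation in this spirit (the membership functions, rules, and inputs are invented for illustration, not taken from the surveyed works): two rules map normalized link measurements to a trust score via weighted defuzzification.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def low(x):
    return tri(x, -0.5, 0.0, 0.5)

def high(x):
    return tri(x, 0.5, 1.0, 1.5)

def trust_score(loss, latency):
    """Two-rule Sugeno-style inference on inputs normalized to [0, 1]:
       R1: loss is low AND latency is low  -> trust 0.9
       R2: loss is high OR latency is high -> trust 0.1"""
    w1 = min(low(loss), low(latency))          # AND = min
    w2 = max(high(loss), high(latency))        # OR  = max
    if w1 + w2 == 0:
        return 0.5  # no rule fires: neutral trust
    return (w1 * 0.9 + w2 * 0.1) / (w1 + w2)   # weighted defuzzification

print(round(trust_score(0.1, 0.2), 3))  # clean link: high trust
print(round(trust_score(0.9, 0.9), 3))  # degraded link: low trust
```

Real 6G trust models combine many more indicators (identity, behavior history, context) and richer rule bases, but the inference pattern is the same.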

17 pages, 1875 KiB  
Article
Bootstrap Method of Eco-Efficiency in the Brazilian Agricultural Industry
by André Luiz Marques Serrano, Gabriela Mayumi Saiki, Carlos Rosano-Peña, Gabriel Arquelau Pimenta Rodrigues, Robson de Oliveira Albuquerque and Luis Javier García Villalba
Systems 2024, 12(4), 136; https://doi.org/10.3390/systems12040136 - 17 Apr 2024
Cited by 3 | Viewed by 2294
Abstract
With the economic growth of the Brazilian agroindustry, it is necessary to evaluate the efficiency of this activity in relation to environmental demands for the country’s economic, social, and sustainable development. Within this perspective, the present research aims to examine the eco-efficiency of agricultural production in Brazilian regions, covering 5563 municipalities in the north, northeast, center-west, southeast, and south regions, using data from 2016–2017. In this sense, this study uses the DEA methods (classical and stochastic) and the computational bootstrap method to remove outliers and measure eco-efficiency. The findings lead to two fundamental conclusions: first, by emulating the benchmarks, it is feasible to increase annual revenue and preserved areas to an aggregated regional level by 20.84% while maintaining the same inputs. Given that no municipality has reached an eco-efficiency value equal to 1, there is room for optimization and improvement of production and greater sustainable development of the municipalities. Secondly, climatic factors notably influence eco-efficiency scores, suggesting that increasing temperatures and decreasing precipitation can positively impact eco-efficiency in the region. These conclusions, dependent on regional characteristics, offer valuable information for policymakers to design strategies that balance economic growth and environmental preservation. Furthermore, adaptive policies and measures can be implemented to increase the resilience of local producers and reduce vulnerability to changing climate conditions.
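The percentile bootstrap underlying such analyses can be sketched as follows (the scores and parameters are illustrative, not the study's data): resample the observed efficiency scores with replacement and read a confidence interval off the empirical distribution of resampled means.

```python
import random

def bootstrap_ci(scores, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for the mean efficiency score."""
    rng = random.Random(seed)
    n = len(scores)
    means = sorted(
        sum(rng.choice(scores) for _ in range(n)) / n for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical DEA eco-efficiency scores for a handful of municipalities.
scores = [0.62, 0.71, 0.55, 0.80, 0.67, 0.74, 0.59, 0.69]
lo, hi = bootstrap_ci(scores)
print(round(lo, 3), round(hi, 3))
```

In the DEA setting, the same resampling idea is applied to the efficiency frontier itself, which also makes it possible to flag observations that sit implausibly far from the resampled frontiers as outliers.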

24 pages, 1171 KiB  
Article
Understanding Data Breach from a Global Perspective: Incident Visualization and Data Protection Law Review
by Gabriel Arquelau Pimenta Rodrigues, André Luiz Marques Serrano, Amanda Nunes Lopes Espiñeira Lemos, Edna Dias Canedo, Fábio Lúcio Lopes de Mendonça, Robson de Oliveira Albuquerque, Ana Lucila Sandoval Orozco and Luis Javier García Villalba
Data 2024, 9(2), 27; https://doi.org/10.3390/data9020027 - 31 Jan 2024
Cited by 14 | Viewed by 15142
Abstract
Data breaches result in data loss, including personal, health, and financial information that are crucial, sensitive, and private. The breach is a security incident in which personal and sensitive data are exposed to unauthorized individuals, with the potential to incur several privacy concerns. As an example, a breach at the French newspaper Le Figaro exposed approximately 7.4 billion records that included full names, passwords, and e-mail and physical addresses. To reduce the likelihood and impact of such breaches, it is fundamental to strengthen the security efforts against this type of incident and, for that, it is first necessary to identify patterns of its occurrence, primarily related to the number of data records leaked, the affected geographical region, and its regulatory aspects. To advance the discussion in this regard, we study a dataset comprising 428 worldwide data breaches between 2018 and 2019, providing a visualization of the related statistics, such as the most affected countries, the predominant economic sector targeted in different countries, and the median number of records leaked per incident in different countries, regions, and sectors. We then discuss the data protection regulation in effect in each country comprised in the dataset, correlating key elements of the legislation with the statistical findings. As a result, we have identified an extensive disclosure of medical records in India and government data in Brazil in the time range. Based on the analysis and visualization, we identify insights that prior research has seldom examined, and it is apparent that the real dangers of data leaks are greater than commonly imagined. Finally, this paper contributes to the discussion regarding data protection laws and compliance regarding data breaches, supporting, for example, the decision process of data storage location in the cloud.

29 pages, 1877 KiB  
Article
Exploration of Metrics and Datasets to Assess the Fidelity of Images Generated by Generative Adversarial Networks
by Claudio Navar Valdebenito Maturana, Ana Lucila Sandoval Orozco and Luis Javier García Villalba
Appl. Sci. 2023, 13(19), 10637; https://doi.org/10.3390/app131910637 - 24 Sep 2023
Cited by 10 | Viewed by 4378
Abstract
Advancements in technology have improved human well-being but also enabled new avenues for criminal activities, including digital exploits like deep fakes, online fraud, and cyberbullying. Detecting and preventing such activities, especially for law enforcement agencies needing photo profiles for covert operations, is imperative. Yet, conventional methods relying on authentic images are hindered by data protection laws. To address this, alternatives like generative adversarial networks, stable diffusion, and pixel recurrent neural networks can generate synthetic images. However, evaluating synthetic image quality is complex due to the varied techniques. Metrics are crucial, offering objective measures to compare techniques and identify areas for enhancement. This article underscores metrics’ significance in evaluating synthetic images produced by generative adversarial networks. By analyzing metrics and datasets used, researchers can comprehend the strengths, weaknesses, and areas for further research on generative adversarial networks. The article ultimately enhances image generation precision and control by detailing dataset preprocessing and quality metrics for synthetic images.
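As a minimal example of an objective fidelity metric (far simpler than the distribution-level metrics such as FID that surveys in this area typically cover), pixel-level MSE and PSNR compare a generated image against a reference:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-sized grayscale images (0-255 values)."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    return sum((x - y) ** 2 for x, y in zip(flat_a, flat_b)) / len(flat_a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10 * math.log10(max_val ** 2 / err)

# Tiny 2x2 toy images standing in for a real/generated pair.
real = [[100, 110], [120, 130]]
fake = [[102, 108], [121, 129]]
print(round(psnr(real, fake), 2))
```

Pixel metrics only measure closeness to one reference image; judging whether a generator's output *distribution* matches real data is what motivates the learned-feature metrics the article analyzes.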

16 pages, 360 KiB  
Article
Tariff Analysis in Automobile Insurance: Is It Time to Switch from Generalized Linear Models to Generalized Additive Models?
by Zuleyka Díaz Martínez, José Fernández Menéndez and Luis Javier García Villalba
Mathematics 2023, 11(18), 3906; https://doi.org/10.3390/math11183906 - 14 Sep 2023
Cited by 4 | Viewed by 2256
Abstract
Generalized Linear Models (GLMs) are the standard tool used for pricing in the field of automobile insurance. Generalized Additive Models (GAMs) are more complex and computationally intensive but allow taking into account nonlinear effects without the need to discretize the explanatory variables. In addition, they fit perfectly into the mental framework shared by actuaries and are easier to use and interpret than machine learning models, such as trees or neural networks. This work compares both the GLM and GAM approaches, using a wide sample of policies to assess their differences in terms of quality of predictions, complexity of use, and time of execution. The results show that GAMs are a powerful alternative to GLMs, particularly when “big data” implementations of GAMs are used.
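A compact sketch of the GLM side of this comparison (toy data and illustrative coefficients, not the paper's policy sample): a Poisson GLM with log link fitted by iteratively reweighted least squares (IRLS), the standard estimation routine for claim frequencies; a GAM would replace the raw rating factor with a smooth basis expansion instead of discretizing it.

```python
import numpy as np

def poisson_glm_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by IRLS.
    Each step solves a weighted least-squares problem on the working response."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu   # working response
        W = mu                    # IRLS weights for Poisson / log link
        XtW = X.T * W
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Toy claim-frequency data: intercept plus one rating factor,
# true coefficients (0.2, 0.8) chosen for illustration.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
X = np.column_stack([np.ones_like(x), x])
y = rng.poisson(np.exp(0.2 + 0.8 * x))
coef = poisson_glm_irls(X, y)
print(np.round(coef, 2))
```

With 500 simulated policies the recovered coefficients land near the true values; the GAM comparison in the paper is about whether a smooth f(x) fits better than the linear term in the predictor.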

18 pages, 899 KiB  
Article
Cybersecurity Analysis of Wearable Devices: Smartwatches Passive Attack
by Alejandra Guadalupe Silva-Trujillo, Mauricio Jacobo González González, Luis Pablo Rocha Pérez and Luis Javier García Villalba
Sensors 2023, 23(12), 5438; https://doi.org/10.3390/s23125438 - 8 Jun 2023
Cited by 17 | Viewed by 6650
Abstract
Wearable devices are starting to gain popularity, which means that a large portion of the population is starting to acquire these products. This kind of technology comes with a lot of advantages, as it simplifies different tasks people do daily. However, as they collect sensitive data, they are starting to be targets for cybercriminals. The number of attacks on wearable devices forces manufacturers to improve the security of these devices to protect them. Many vulnerabilities have appeared in communication protocols, specifically Bluetooth. We focus on understanding the Bluetooth protocol and what countermeasures have been applied during their updated versions to solve the most common security problems. We have performed a passive attack on six different smartwatches to discover their vulnerabilities during the pairing process. Furthermore, we have developed a proposal of requirements needed for maximum security of wearable devices, as well as the minimum requirements needed to have a secure pairing process between two devices via Bluetooth.
(This article belongs to the Special Issue Advances in E-health Networking and Its Applications)

34 pages, 3239 KiB  
Review
Learning Strategies for Sensitive Content Detection
by Daniel Povedano Álvarez, Ana Lucila Sandoval Orozco, Javier Portela García-Miguel and Luis Javier García Villalba
Electronics 2023, 12(11), 2496; https://doi.org/10.3390/electronics12112496 - 1 Jun 2023
Cited by 6 | Viewed by 5575
Abstract
Currently, the volume of sensitive content on the Internet, such as pornography and child pornography, and the amount of time that people spend online (especially children) have led to an increase in the distribution of such content (e.g., images of children being sexually abused, real-time videos of such abuse, grooming activities, etc.). It is therefore essential to have effective IT tools that automate the detection and blocking of this type of material, as manual filtering of huge volumes of data is practically impossible. The goal of this study is to carry out a comprehensive review of different learning strategies for the detection of sensitive content available in the literature, from the most conventional techniques to the most cutting-edge deep learning algorithms, highlighting the strengths and weaknesses of each, as well as the datasets used. The performance and scalability of the different strategies proposed in this work depend on the heterogeneity of the dataset, the feature extraction techniques (hashes, visual, audio, etc.) and the learning algorithms. Finally, new lines of research in sensitive-content detection are presented.

19 pages, 792 KiB  
Review
Analysis of Machine Learning Techniques for Information Classification in Mobile Applications
by Sandra Pérez Arteaga, Ana Lucila Sandoval Orozco and Luis Javier García Villalba
Appl. Sci. 2023, 13(9), 5438; https://doi.org/10.3390/app13095438 - 27 Apr 2023
Cited by 8 | Viewed by 4162
Abstract
Due to the daily use of mobile technologies, we live in constant connection with the world through the Internet. Technological innovations in smart devices have allowed us to carry out everyday activities such as communicating, working, studying or using them as a means of entertainment, which has led to smartphones displacing computers as the most important device connected to the Internet today, causing users to demand smarter applications or functionalities that allow them to meet their needs. Artificial intelligence has been a major innovation in information technology that is transforming the way users use smart devices. Using applications that make use of artificial intelligence has revolutionised our lives, from making predictions of possible words based on typing in a text box, to being able to unlock devices through pattern recognition. However, these technologies face problems such as overheating and battery drain due to high resource consumption, low computational capacity, memory limitations, etc. This paper reviews the most important artificial intelligence algorithms for mobile devices, emphasising the challenges and problems that can arise when implementing these technologies in low-resource devices.

23 pages, 500 KiB  
Article
Analysis of Digital Information in Storage Devices Using Supervised and Unsupervised Natural Language Processing Techniques
by Luis Alberto Martínez Hernández, Ana Lucila Sandoval Orozco and Luis Javier García Villalba
Future Internet 2023, 15(5), 155; https://doi.org/10.3390/fi15050155 - 23 Apr 2023
Cited by 8 | Viewed by 2864
Abstract
Due to the advancement of technology, cybercrime has increased considerably, making digital forensics essential for any organisation. One of the most critical challenges is to analyse and classify the information on devices, identifying the relevant and valuable data for a specific purpose. This phase of the forensic process is one of the most complex and time-consuming, and requires expert analysts to avoid overlooking data relevant to the investigation. Although tools exist today that can automate this process, they will depend on how tightly their parameters are tuned to the case study, and many lack support for complex scenarios where language barriers play an important role. Recent advances in machine learning allow the creation of new architectures to significantly increase the performance of information analysis and perform the intelligent search process automatically, reducing analysis time and identifying relationships between files based on initial parameters. In this paper, we present a bibliographic review of artificial intelligence algorithms that allow an exhaustive analysis of multimedia information contained in removable devices in a forensic process, using natural language processing and natural language understanding techniques for the automatic classification of documents in seized devices. Finally, some of the open challenges technology developers face when generating tools that use artificial intelligence techniques to analyse the information contained in documents on seized devices are reviewed.
(This article belongs to the Collection Information Systems Security)
