Search Results (33)

Search Parameters:
Keywords = social media cybersecurity

27 pages, 7617 KiB  
Article
Emoji-Driven Sentiment Analysis for Social Bot Detection with Relational Graph Convolutional Networks
by Kaqian Zeng, Zhao Li and Xiujuan Wang
Sensors 2025, 25(13), 4179; https://doi.org/10.3390/s25134179 - 4 Jul 2025
Viewed by 473
Abstract
The proliferation of malicious social bots poses severe threats to cybersecurity and social media information ecosystems. Existing detection methods often overlook the semantic value and emotional cues conveyed by emojis in user-generated tweets. To address this gap, we propose ESA-BotRGCN, an emoji-driven multi-modal detection framework that integrates semantic enhancement, sentiment analysis, and multi-dimensional feature modeling. Specifically, we first establish emoji–text mapping relationships using the Emoji Library, leverage GPT-4 to improve textual coherence, and generate tweet embeddings via RoBERTa. Subsequently, seven sentiment-based features are extracted to quantify statistical disparities in emotional expression patterns between bot and human accounts. An attention gating mechanism is further designed to dynamically fuse these sentiment features with user description, tweet content, numerical attributes, and categorical features. Finally, a Relational Graph Convolutional Network (RGCN) is employed to model heterogeneous social topology for robust bot detection. Experimental results on the TwiBot-20 benchmark dataset demonstrate that our method achieves a superior accuracy of 87.46%, significantly outperforming baseline models and validating the effectiveness of emoji-driven semantic and sentiment enhancement strategies.
(This article belongs to the Section Sensor Networks)
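The attention-gating fusion step can be illustrated with a minimal sketch, assuming one scalar gate score per modality that is softmax-normalised into a fusion weight (the paper learns these scores end-to-end; the modality names and values below are illustrative):

```python
import math

def attention_gate(modalities, scores):
    """Fuse per-modality feature vectors into one embedding.

    modalities: name -> feature vector (all vectors the same length)
    scores: name -> scalar gate score (learned in the paper; fixed here)
    A softmax over the scores yields the fusion weights.
    """
    names = list(modalities)
    exps = [math.exp(scores[n]) for n in names]
    total = sum(exps)
    weights = {n: e / total for n, e in zip(names, exps)}
    dim = len(next(iter(modalities.values())))
    fused = [sum(weights[n] * modalities[n][i] for n in names)
             for i in range(dim)]
    return fused, weights
```

With equal gate scores the modality vectors are simply averaged; training would skew the weights toward the most discriminative modality (e.g., the sentiment features).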

24 pages, 429 KiB  
Systematic Review
Advances in NLP Techniques for Detection of Message-Based Threats in Digital Platforms: A Systematic Review
by José Saias
Electronics 2025, 14(13), 2551; https://doi.org/10.3390/electronics14132551 - 24 Jun 2025
Viewed by 1130
Abstract
Users of all ages face risks on social media and messaging platforms. When encountering suspicious messages, legitimate concerns arise about a sender’s malicious intent. This study examines recent advances in Natural Language Processing for detecting message-based threats in digital communication. We conducted a systematic review following PRISMA guidelines to address four research questions. After applying a rigorous search and screening pipeline, 30 publications were selected for analysis. Our work assessed the NLP techniques and evaluation methods employed in recent threat detection research, revealing that large language models appear in only 20% of the reviewed works. We further categorized detection input scopes and discussed ethical and privacy implications. The results show that AI ethical aspects are not systematically addressed in the reviewed scientific literature.

33 pages, 11250 KiB  
Article
RADAR#: An Ensemble Approach for Radicalization Detection in Arabic Social Media Using Hybrid Deep Learning and Transformer Models
by Emad M. Al-Shawakfa, Anas M. R. Alsobeh, Sahar Omari and Amani Shatnawi
Information 2025, 16(7), 522; https://doi.org/10.3390/info16070522 - 22 Jun 2025
Cited by 1 | Viewed by 526
Abstract
The recent increase in extremist material on social media platforms complicates international cybersecurity and national security countermeasures. RADAR#, a deep ensemble approach for the detection of radicalization in Arabic tweets, is introduced in this paper. Our model combines a hybrid CNN-Bi-LSTM framework with a leading Arabic transformer model (AraBERT) through a weighted ensemble strategy. We employ domain-specific Arabic tweet pre-processing techniques and a custom attention layer to better focus on radicalization indicators. Experiments on a dataset of 89,816 Arabic tweets indicate that RADAR# reaches 98% accuracy and a 97% F1-score, surpassing advanced approaches. The ensemble strategy is particularly beneficial in handling dialectal variations and context-sensitive words common in Arabic social media posts. We provide a full performance analysis of the model, including ablation studies and attention visualization for better interpretability. Our contribution offers the cybersecurity community an effective early detection mechanism for online radicalization in Arabic-language content, with potential applications in counter-terrorism and online content moderation.
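The weighted ensemble strategy amounts to soft voting over the two branches' class probabilities. A minimal sketch, assuming a fixed branch weight (the paper tunes its weights; 0.4 below is illustrative):

```python
def ensemble_predict(p_cnn_bilstm, p_arabert, w=0.4):
    """Weighted soft-voting over two branches' class probabilities.

    w weights the CNN-Bi-LSTM branch; (1 - w) weights AraBERT.
    Returns the combined distribution and the argmax class index.
    """
    combined = [w * a + (1 - w) * b for a, b in zip(p_cnn_bilstm, p_arabert)]
    return combined, max(range(len(combined)), key=combined.__getitem__)
```

Even this simple scheme lets a stronger branch override a weaker one on dialect-heavy inputs, which is the intuition behind weighting rather than plain averaging.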

37 pages, 7444 KiB  
Review
Recent Trends in the Public Acceptance of Autonomous Vehicles: A Review
by Thaar Alqahtani
Vehicles 2025, 7(2), 45; https://doi.org/10.3390/vehicles7020045 - 11 May 2025
Cited by 3 | Viewed by 3888
Abstract
The rapid evolution of autonomous vehicles (AVs) has ignited widespread interest in their potential to transform mobility and transportation ecosystems. However, despite significant technological advances, the acceptance of AVs by the public remains a complex and multifaceted challenge. This state-of-the-art review explores the key factors influencing AV acceptance, focusing on the intersection of artificial intelligence (AI) services, user experience, social dynamics, and regulatory landscapes across diverse global regions. By analyzing trust, perceived safety (PS), cybersecurity, and user interface design, this paper delves into the psychological and behavioral drivers that shape public perception of AVs. It also highlights the role of demographic segmentation and media influence in accelerating or hindering adoption. A comparative analysis of AV acceptance across North America, Europe, Asia, and emerging markets reveals significant regional variations, influenced by regulatory frameworks, economic conditions, and social trends. This review also reveals critical insights into safety perceptions associated with AV technology, including legal uncertainties and cybersecurity concerns, while emphasizing the future potential of AVs in urban environments, public transit, and autonomous logistics fleets. It concludes by proposing strategic roadmaps and policy implications to accelerate AV adoption, offering a forward-looking perspective on how advances in technology, coupled with targeted industry and government initiatives, can shape the future of autonomous mobility. Through a comprehensive examination of current trends and challenges, this paper provides a foundation for future research and innovation aimed at enhancing public acceptance and trust in AVs.

19 pages, 1664 KiB  
Article
Large Language Models for Synthetic Dataset Generation of Cybersecurity Indicators of Compromise
by Ashwaq Almorjan, Mohammed Basheri and Miada Almasre
Sensors 2025, 25(9), 2825; https://doi.org/10.3390/s25092825 - 30 Apr 2025
Viewed by 1691
Abstract
In the field of Cyber Threat Intelligence (CTI), the scarcity of high-quality, labelled datasets that include Indicators of Compromise (IoCs) impacts the design and implementation of robust predictive models capable of classifying IoCs in online communication, specifically in social media contexts where users are potentially highly exposed to cyber threats. The generation of high-quality synthetic datasets can fill this gap and support the development of effective CTI systems. Therefore, this study aims to fine-tune OpenAI’s Large Language Model (LLM) GPT-3.5 to generate a synthetic dataset that replicates the style of a real, curated social media dataset and incorporates select IoCs as domain knowledge. Four machine-learning (ML) and deep-learning (DL) models were evaluated on two generated datasets (one with 4000 instances and the other with 12,000). The results indicated that, on the 4000-instance dataset, the Dense Neural Network (DenseNN) achieved the highest accuracy (77%), while on the 12,000-instance dataset, Logistic Regression (LR) achieved the highest accuracy of 82%. This study highlights the potential of integrating fine-tuned LLMs with domain-specific knowledge to create high-quality synthetic data. The main contribution of this research is the fine-tuning of an LLM, GPT-3.5, using real social media datasets and curated IoC domain knowledge, which is expected to improve synthetic dataset generation and, later, IoC extraction and classification, offering a realistic and novel resource for cybersecurity applications.
(This article belongs to the Section Intelligent Sensors)
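Fine-tuning data for a chat model of this kind is supplied as JSONL, one example per line. A hedged sketch of what a single training example might look like — the prompt wording and the IoC value are invented for illustration, not taken from the authors' curated data:

```python
import json

# One training example in the JSONL chat format used for OpenAI fine-tuning.
# The instruction text and the domain-style IoC are illustrative assumptions.
example = {
    "messages": [
        {"role": "system",
         "content": "Generate a realistic social media post that contains "
                    "the given Indicator of Compromise (IoC)."},
        {"role": "user", "content": "IoC type: malicious domain"},
        {"role": "assistant",
         "content": "Heads up - seeing weird redirects to evil-login[.]example "
                    "after clicking that giveaway link. Don't open it!"},
    ]
}
line = json.dumps(example)  # one line of the fine-tuning JSONL file
```

Defanged notation (`[.]`) is a common CTI convention that keeps synthetic IoCs from being clickable while preserving their lexical shape for later extraction.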

21 pages, 5123 KiB  
Article
Neural Network Ensemble Method for Deepfake Classification Using Golden Frame Selection
by Khrystyna Lipianina-Honcharenko, Nazar Melnyk, Andriy Ivasechko, Mykola Telka and Oleg Illiashenko
Big Data Cogn. Comput. 2025, 9(4), 109; https://doi.org/10.3390/bdcc9040109 - 21 Apr 2025
Viewed by 1037
Abstract
Deepfake technology poses significant threats in various domains, including politics, cybersecurity, and social media. This study presents a neural network ensemble method for deepfake classification built on a golden frame selection technique. The proposed approach optimizes computational resources by extracting the most informative video frames, improving detection accuracy. We integrate multiple deep learning models, including ResNet50, EfficientNetB0, Xception, InceptionV3, and Facenet, with an XGBoost meta-model for enhanced classification performance. Experimental results demonstrate a 91% accuracy rate, outperforming traditional deepfake detection models. Additionally, feature importance analysis using Grad-CAM highlights how different architectures focus on distinct facial regions, enhancing overall model interpretability. The findings contribute to the development of robust and efficient deepfake detection techniques, with potential applications in digital forensics, media verification, and cybersecurity.
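The two stages — selecting informative frames, then stacking base-model outputs for a meta-model — can be sketched as follows. The per-frame quality score used to rank frames is a stand-in; the paper's golden frame criterion and the XGBoost meta-model itself are not reproduced here:

```python
def golden_frames(quality_scores, k=3):
    """Pick the indices of the k most informative frames, ranked by a
    per-frame quality score (a proxy for the golden frame criterion)."""
    ranked = sorted(range(len(quality_scores)),
                    key=lambda i: quality_scores[i], reverse=True)
    return sorted(ranked[:k])

def meta_features(base_probs):
    """base_probs[m][f]: fake-probability from base model m on golden frame f.
    Averaging over frames yields one score per base model - the feature
    vector a meta-model (XGBoost in the paper) would consume."""
    return [sum(row) / len(row) for row in base_probs]
```

Restricting the base models to a handful of golden frames is what keeps the ensemble's per-video cost bounded, since each CNN then runs k times instead of once per frame.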

45 pages, 5583 KiB  
Review
From Tweets to Threats: A Survey of Cybersecurity Threat Detection Challenges, AI-Based Solutions and Potential Opportunities in X
by Omar Alsodi, Xujuan Zhou, Raj Gururajan, Anup Shrestha and Eyad Btoush
Appl. Sci. 2025, 15(7), 3898; https://doi.org/10.3390/app15073898 - 2 Apr 2025
Viewed by 2467
Abstract
The pervasive use of social media platforms, such as X (formerly Twitter), has become a part of our daily lives, simultaneously increasing the threat of cyber attacks. To address this risk, numerous studies have explored methods to detect and predict cyber attacks by analyzing X data. This study specifically examines the application of AI techniques for predicting potential cyber threats on X. DeepNN consistently outperforms competing methods in terms of overall and average figure of merit. While character-level feature extraction methods are abundant, we contend that a semantic focus is more beneficial for this stage of the process. The findings indicate that current studies often lack comprehensive evaluations of critical aspects such as prediction scope, types of cybersecurity threats, feature extraction techniques, algorithm complexity, information summarization levels, scalability over time, and performance measurements. This review primarily focuses on identifying AI methods used to detect cyber threats on X and investigates existing gaps and trends in this area. Notably, few review articles on detecting cyber threats on X have been published in recent years, especially ones concentrating on recent journal articles rather than conference papers.
(This article belongs to the Special Issue Data and Text Mining: New Approaches, Achievements and Applications)

42 pages, 10351 KiB  
Article
Deepfake Media Forensics: Status and Future Challenges
by Irene Amerini, Mauro Barni, Sebastiano Battiato, Paolo Bestagini, Giulia Boato, Vittoria Bruni, Roberto Caldelli, Francesco De Natale, Rocco De Nicola, Luca Guarnera, Sara Mandelli, Taiba Majid, Gian Luca Marcialis, Marco Micheletto, Andrea Montibeller, Giulia Orrù, Alessandro Ortis, Pericle Perazzo, Giovanni Puglisi, Nischay Purnekar, Davide Salvi, Stefano Tubaro, Massimo Villari and Domenico Vitulano
J. Imaging 2025, 11(3), 73; https://doi.org/10.3390/jimaging11030073 - 28 Feb 2025
Cited by 5 | Viewed by 9497
Abstract
The rise of AI-generated synthetic media, or deepfakes, has introduced unprecedented opportunities and challenges across various fields, including entertainment, cybersecurity, and digital communication. Using advanced frameworks such as Generative Adversarial Networks (GANs) and Diffusion Models (DMs), deepfakes are capable of producing highly realistic yet fabricated content. While these advancements enable creative and innovative applications, they also pose severe ethical, social, and security risks due to their potential misuse. The proliferation of deepfakes has triggered phenomena like “Impostor Bias”, a growing skepticism toward the authenticity of multimedia content, further complicating trust in digital interactions. This paper is mainly based on the description of a research project called FF4ALL (FF4ALL-Detection of Deep Fake Media and Life-Long Media Authentication) for the detection and authentication of deepfakes, focusing on areas such as forensic attribution, passive and active authentication, and detection in real-world scenarios. By exploring both the strengths and limitations of current methodologies, we highlight critical research gaps and propose directions for future advancements to ensure media integrity and trustworthiness in an era increasingly dominated by synthetic media.

26 pages, 1192 KiB  
Article
Vulnerability and Attack Repository for IoT: Addressing Challenges and Opportunities in Internet of Things Vulnerability Databases
by Anna Felkner, Jan Adamski, Jakub Koman, Marcin Rytel, Marek Janiszewski, Piotr Lewandowski, Rafał Pachnia and Wojciech Nowakowski
Appl. Sci. 2024, 14(22), 10513; https://doi.org/10.3390/app142210513 - 14 Nov 2024
Cited by 1 | Viewed by 2516
Abstract
The article’s primary purpose is to highlight the importance of cybersecurity for Internet of Things (IoT) devices. Due to the widespread use of such devices in everyone’s daily and professional lives, taking care of their security is essential. This security can be strengthened by raising awareness about the vulnerabilities and risks of these devices among their manufacturers and users. Therefore, this paper shows the results of several years of work on building vulnerability and exploit databases, with a particular focus on IoT devices. We highlight multiple unique features of our solution, such as aggregation, correlation, and enrichment of known vulnerabilities and exploits collected from 12 different sources, presentation of a timeline of threats, and combining vulnerability information with exploits. Our databases have more than 300,000 entries, the result of aggregating and correlating more than 1,300,000 entries from 12 different databases, enriched with information from unstructured sources. We cover the innovative utilization of Artificial Intelligence (AI) to support data enrichment, examining the usage of the Light Gradient-Boosting Machine (LGBM) model to automatically predict vulnerability severity and Mistral7B to categorize vulnerable products, which, especially in the case of IoT devices, is critical due to the diversity of nomenclature. Social media and various unstructured sources are prominent places for gathering information. Retrieving data from them is much more complex than from structured databases, but the information typically appears there sooner. Thus, we added Mastodon monitoring to enhance our threat timelines.
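The aggregation-and-correlation step can be illustrated with a small sketch that merges records from several databases keyed by CVE ID; the field names and record shape are assumptions for illustration, not the project's actual schema:

```python
def correlate(sources):
    """Merge vulnerability records from several databases, keyed by CVE ID.

    sources: list of record lists; each record is a dict with a "cve" key.
    Fields are unioned (first source wins on conflicts) and exploit
    references are appended, linking vulnerabilities with exploits.
    """
    merged = {}
    for records in sources:
        for rec in records:
            entry = merged.setdefault(rec["cve"], {"cve": rec["cve"],
                                                   "exploits": []})
            for key, value in rec.items():
                if key == "exploits":
                    entry["exploits"].extend(value)
                elif key != "cve" and key not in entry:
                    entry[key] = value
    return merged
```

Correlating on a shared identifier is the easy case; in practice IoT product naming is inconsistent across databases, which is why the article resorts to an LLM for product categorization.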

17 pages, 3202 KiB  
Article
Arabic Spam Tweets Classification: A Comprehensive Machine Learning Approach
by Wafa Hussain Hantom and Atta Rahman
AI 2024, 5(3), 1049-1065; https://doi.org/10.3390/ai5030052 - 2 Jul 2024
Cited by 3 | Viewed by 2399
Abstract
Nowadays, one of the most common problems faced by Twitter (also known as X) users, including individuals as well as organizations, is dealing with spam tweets. The problem continues to proliferate due to the increasing popularity and number of users of social media platforms. Due to this overwhelming interest, spammers can post texts, images, and videos containing suspicious links that can be used to spread viruses, rumors, negative marketing, and sarcasm, and potentially hack the user’s information. Spam detection is among the hottest research areas in natural language processing (NLP) and cybersecurity. Several studies have been conducted in this regard, but they mainly focus on the English language. However, Arabic tweet spam detection still has a long way to go, especially regarding the diverse dialects other than modern standard Arabic (MSA), since the standard dialect is seldom used in tweets. The situation demands an automated, robust, and efficient Arabic spam tweet detection approach. To address the issue, in this research, various machine learning and deep learning models have been investigated to detect spam tweets in Arabic, including Random Forest (RF), Support Vector Machine (SVM), Naive Bayes (NB), and Long Short-Term Memory (LSTM). In this regard, we have focused on the words as well as the meaning of the tweet text. Across several experiments, the proposed models produced promising results in contrast to previous approaches on the same and diverse datasets. The results showed that the RF classifier achieved 96.78% accuracy and the LSTM classifier achieved 94.56%, while the SVM classifier achieved 82% accuracy. Further, in terms of F1-score, there is an improvement of 21.38%, 19.16%, and 5.2% using the RF, LSTM, and SVM classifiers, respectively, compared to prior schemes on the same dataset.
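Of the models listed, Naive Bayes is compact enough to sketch in full. A minimal multinomial NB with add-one smoothing over token counts — the tokenisation and features here are deliberately simplistic compared to the paper's word- and meaning-level features:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Train a multinomial Naive Bayes classifier.

    docs: list of token lists; labels: class name per doc ('spam'/'ham').
    Returns per-class token counts, class frequencies, and the vocabulary.
    """
    counts = defaultdict(Counter)
    class_totals = Counter(labels)
    for toks, y in zip(docs, labels):
        counts[y].update(toks)
    vocab = {t for c in counts.values() for t in c}
    return counts, class_totals, vocab

def predict_nb(model, toks):
    """Pick the class maximising log prior + smoothed log likelihoods."""
    counts, class_totals, vocab = model
    n = sum(class_totals.values())
    best, best_lp = None, -math.inf
    for y, cnt in counts.items():
        lp = math.log(class_totals[y] / n)
        total = sum(cnt.values())
        for t in toks:
            lp += math.log((cnt[t] + 1) / (total + len(vocab)))  # add-one
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```

Dialect handling is the hard part the paper focuses on; a bag-of-words model like this treats every dialectal spelling as a distinct token, which is exactly why embedding-based models (LSTM) help on Arabic tweets.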

10 pages, 1271 KiB  
Proceeding Paper
Social Media in the Digital Age: A Comprehensive Review of Impacts, Challenges and Cybercrime
by Gagandeep Kaur, Utkarsha Bonde, Kunjal Lalit Pise, Shruti Yewale, Poorva Agrawal, Purushottam Shobhane, Shruti Maheshwari, Latika Pinjarkar and Rupali Gangarde
Eng. Proc. 2024, 62(1), 6; https://doi.org/10.3390/engproc2024062006 - 1 Mar 2024
Cited by 5 | Viewed by 10094
Abstract
Renowned social media platforms such as Instagram, Twitter, and Facebook are used by different stakeholders across the world to communicate with each other. Social media is a pool of online communication platforms that are based on community input, content sharing, and collaborations. The way we communicate, share information, and connect with other people has been revolutionized by social media. This has led to a series of benefits but also posed many challenges, especially in cybersecurity. This paper investigates the varied influences of social media, examining both its positive and negative consequences across a variety of industries. It focuses specifically on the cybersecurity concerns posed by the growing usage of social media, shedding light on the vulnerabilities encountered by individuals and organizations. This investigation includes a study of common cybercrimes like phishing, social engineering, burglary via social networking, virus attacks, cyberstalking, identity theft, and cybercasing. This study emphasizes the importance of a complete and targeted cybersecurity approach that includes preventive measures such as privacy enhancements, user training, sophisticated email filtering, robust authentication, and encryption technologies. Individuals and organizations can traverse the evolving social media ecosystem with greater cyber resilience by addressing these challenges and using proactive tactics.
(This article belongs to the Proceedings of The 2nd Computing Congress 2023)

12 pages, 1867 KiB  
Article
Experimental Evaluation: Can Humans Recognise Social Media Bots?
by Maxim Kolomeets, Olga Tushkanova, Vasily Desnitsky, Lidia Vitkova and Andrey Chechulin
Big Data Cogn. Comput. 2024, 8(3), 24; https://doi.org/10.3390/bdcc8030024 - 26 Feb 2024
Cited by 5 | Viewed by 4556
Abstract
This paper aims to test the hypothesis that the quality of social media bot detection systems based on supervised machine learning may not be as accurate as researchers claim, given that bots have become increasingly sophisticated, making it difficult for human annotators to detect them better than random selection. As a result, obtaining a ground-truth dataset with human annotation is not possible, which leads to supervised machine-learning models inheriting annotation errors. To test this hypothesis, we conducted an experiment where humans were tasked with recognizing malicious bots on the VKontakte social network. We then compared the “human” answers with the “ground-truth” bot labels (‘a bot’/‘not a bot’). Based on the experiment, we evaluated the bot detection efficiency of annotators in three scenarios typical for cybersecurity but differing in their detection difficulty: (1) detection among random accounts, (2) detection among accounts of a social network ‘community’, and (3) detection among verified accounts. The study showed that humans could only detect simple bots in all three scenarios but could not detect more sophisticated ones (p-value = 0.05). The study also evaluates the limits of hypothetical and existing bot detection systems that leverage non-expert-labelled datasets: the balanced accuracy of such systems can drop to 0.5 or lower, depending on bot complexity and detection scenario. The paper also describes the experiment design, collected datasets, statistical evaluation, and machine learning accuracy measures applied to support the results. In the discussion, we raise the question of using human labelling in bot detection systems and its potential cybersecurity issues. We also provide open access on GitHub to the datasets used, the experiment results, and the software code for evaluating the statistical and machine learning accuracy metrics used in this paper.
(This article belongs to the Special Issue Security, Privacy, and Trust in Artificial Intelligence Applications)
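Balanced accuracy — the metric whose 0.5 floor the abstract cites — is the mean of per-class recalls, so an annotator who always answers one class scores exactly 0.5 on two classes regardless of class imbalance. A minimal sketch:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls. Robust to the heavy class imbalance of
    bot-detection datasets; 0.5 is chance level for two classes."""
    recalls = []
    for c in set(y_true):
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)
```

This is why the paper reports balanced accuracy rather than plain accuracy: with, say, 95% human accounts, always answering "not a bot" would look 95% accurate but is useless for detection.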

12 pages, 515 KiB  
Article
Time Aware F-Score for Cybersecurity Early Detection Evaluation
by Manuel López-Vizcaíno, Francisco J. Nóvoa, Diego Fernández and Fidel Cacheda
Appl. Sci. 2024, 14(2), 574; https://doi.org/10.3390/app14020574 - 9 Jan 2024
Cited by 2 | Viewed by 1708
Abstract
With the increase in the use of Internet-interconnected systems, security has become of utmost importance. One key element in guaranteeing an adequate level of security is detecting a threat as soon as possible, decreasing the risk of consequences derived from those actions. In this paper, a new metric for early detection system evaluation that takes the delay in detection into account is defined. The time-aware F-score (TaF) accounts for the number of items or individual elements processed before an element is determined to be an anomaly or not relevant for detection. These results are validated by means of a dual approach to cybersecurity: Operating System (OS) scan attacks, as part of systems and network security, and the detection of depression in social media networks, as part of the protection of users. Different approaches, oriented towards studying the impact of single item selection, are also applied to final decisions. This study establishes that the n-items selection method is usually the best option for early detection systems. The TaF metric also provides an adequate alternative for time-sensitive detection evaluation.
(This article belongs to the Special Issue Advances in Cybersecurity: Challenges and Solutions)
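The abstract does not give TaF's formula, but the underlying idea — discounting true positives by how many items were processed before detection — can be sketched in one plausible, hedged form. The penalty shape and parameter below are assumptions, not the paper's actual definition:

```python
def time_aware_f1(delays, fp, fn, penalty=0.05):
    """Delay-penalised F-score in the spirit of TaF (sketch only).

    delays: items processed before each true-positive detection fired.
    Each true positive contributes a weight that decays with its delay,
    so late detections count less than immediate ones.
    """
    wtp = sum(1.0 / (1.0 + penalty * d) for d in delays)
    tp = len(delays)
    if tp == 0:
        return 0.0
    precision = wtp / (tp + fp)
    recall = wtp / (tp + fn)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With zero delays this reduces to the ordinary F1-score; as detections arrive later, the score degrades even when the final classification is correct, which is the property a plain F-score lacks.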

18 pages, 3777 KiB  
Article
Political Optimization Algorithm with a Hybrid Deep Learning Assisted Malicious URL Detection Model
by Mohammed Aljebreen, Fatma S. Alrayes, Sumayh S. Aljameel and Muhammad Kashif Saeed
Sustainability 2023, 15(24), 16811; https://doi.org/10.3390/su152416811 - 13 Dec 2023
Cited by 5 | Viewed by 1591
Abstract
With the advancement of the Internet of Things (IoT), smart cities have extended the idea of conventional urbanization. IoT networks permit distributed smart devices to collect and process data in smart city structures utilizing an open channel, the Internet. Accordingly, challenges like security, centralization, privacy (i.e., execution of data poisoning and inference attacks), scalability, transparency, and verifiability restrict the faster evolution of smart cities. Detecting malicious URLs in an IoT environment is crucial to protect devices and the network from potential security threats, and malicious URL detection is an essential element of cybersecurity. Malicious URL attacks pose large risks in smart cities, including financial damage, theft of personal identities and online banking credentials, data loss, and loss of user confidentiality in online businesses such as e-commerce and social media. Therefore, this paper proposes a Political Optimization Algorithm with a Hybrid Deep Learning Assisted Malicious URL Detection and Classification for Cybersecurity (POAHDL-MDC) technique. The presented POAHDL-MDC technique identifies whether malicious URLs occur. To accomplish this, the POAHDL-MDC technique performs pre-processing to transform the data into a compatible format, and a FastText word embedding process is involved. For malicious URL recognition, a Hybrid Deep Learning (HDL) model integrates the features of a stacked autoencoder (SAE) and bi-directional long short-term memory (Bi-LSTM). Finally, POA is exploited for optimum hyperparameter tuning of the HDL technique. The POAHDL-MDC approach was tested on a malicious URL database, and the outcome exhibits an improvement, with a maximal accuracy of 99.31%.
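Before any embedding or deep model, malicious-URL pipelines typically start from lexical features of the raw string. A small sketch of such a baseline — these particular features are a common starting point, not the paper's FastText/SAE/Bi-LSTM pipeline:

```python
import math
from collections import Counter
from urllib.parse import urlparse

def url_features(url):
    """Lexical features often fed to malicious-URL classifiers:
    length, digit count, subdomain depth, raw-IP host, and character
    entropy (high entropy hints at algorithmically generated domains)."""
    host = urlparse(url).netloc
    counts = Counter(url)
    entropy = -sum((c / len(url)) * math.log2(c / len(url))
                   for c in counts.values())
    return {
        "length": len(url),
        "num_digits": sum(ch.isdigit() for ch in url),
        "num_subdomains": max(host.count(".") - 1, 0),
        "has_ip_host": host.replace(".", "").isdigit(),
        "entropy": round(entropy, 3),
    }
```

A feature vector like this could feed a classical classifier directly, whereas the paper instead learns representations from the URL text via FastText before the SAE/Bi-LSTM stages.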

28 pages, 7338 KiB  
Review
A Survey of Digital Government: Science Mapping Approach, Application Areas, and Future Directions
by Merve Güler and Gülçin Büyüközkan
Systems 2023, 11(12), 563; https://doi.org/10.3390/systems11120563 - 30 Nov 2023
Cited by 6 | Viewed by 4386
Abstract
With the rapid development of digital technologies, digital transformation is reshaping the functioning of governments. Digital government (DG) aims to leverage technology to enhance the delivery of public services, improve efficiency, and foster transparency. Embracing DG is a strategic imperative for governments looking to provide effective, transparent, and citizen-centric services in the 21st century, and many government organizations have intensified their DG efforts in response to its necessity. However, there is little clarity in the previous literature and a lack of uniform understanding among government employees, policymakers, and citizens regarding the concept of DG. Therefore, this study aims to analyze current DG research with science mapping, classify the research areas, and propose future directions for upcoming studies. A search was conducted on the Web of Science and Scopus databases, covering publications since 2000. VOSviewer software was used to visualize and explore bibliometric networks. This study is one of the first attempts to examine the DG area using the science mapping approach. Selected publications were categorized into research areas, and future directions were presented to bridge the identified research gaps. According to our results, the five main research areas are DG transformation, cybersecurity, public participation and social media, open government data and transparency, and e-Government adoption models. This study guides practitioners, academics, policymakers, and public employees in planning their future studies.
