Search Results (95)

Search Parameters:
Keywords = technology-agnostic

13 pages, 1053 KiB  
Opinion
A Framework for the Design of Privacy-Preserving Record Linkage Systems
by Zixin Nie, Benjamin Tyndall, Daniel Brannock, Emily Gentles, Elizabeth Parish and Alison Banger
J. Cybersecur. Priv. 2025, 5(3), 44; https://doi.org/10.3390/jcp5030044 - 9 Jul 2025
Viewed by 371
Abstract
Record linkage can enhance the utility of data by bringing data together from different sources, increasing the available information about data subjects and providing more holistic views. Doing so, however, can increase privacy risks. To mitigate these risks, a family of methods known as privacy-preserving record linkage (PPRL) was developed, using techniques such as cryptography, de-identification, and the strict separation of roles to ensure data subjects’ privacy remains protected throughout the linkage process, and the resulting linked data poses no additional privacy risks. Building privacy protections into the architecture of the system (for instance, ensuring that data flows between different parties in the system do not allow for transmission of private information) is just as important as the technology used to obfuscate private information. In this paper, we present a technology-agnostic framework for designing PPRL systems that is focused on privacy protection, defining key roles, providing a system architecture with data flows, detailing system controls, and discussing privacy evaluations that ensure the system protects privacy. We hope that the framework presented in this paper can both help elucidate how currently deployed PPRL systems protect privacy and help developers design future PPRL systems. Full article
(This article belongs to the Section Privacy)
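For readers unfamiliar with PPRL primitives, one common building block, keyed hashing of normalized quasi-identifiers, can be sketched briefly. The snippet below is illustrative only, not the framework proposed in the paper; the field names, normalization rule, and key handling are assumptions.

```python
import hashlib
import hmac

def encode_identifier(name: str, dob: str, secret_key: bytes) -> str:
    """Keyed hash of normalized quasi-identifiers, so a linkage unit can
    match records on equal digests without ever seeing the plaintext."""
    normalized = f"{name.strip().lower()}|{dob.strip()}"
    return hmac.new(secret_key, normalized.encode("utf-8"), hashlib.sha256).hexdigest()

# Two data holders encode with the same shared key; the linkage unit only compares digests.
key = b"shared-secret-provisioned-out-of-band"   # assumption: key exchange handled elsewhere
rec_a = encode_identifier("Alice Smith", "1990-02-17", key)
rec_b = encode_identifier("alice smith ", "1990-02-17", key)
print(rec_a == rec_b)   # True: the same person links despite formatting differences
```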

29 pages, 1812 KiB  
Article
Innovative Guardrails for Generative AI: Designing an Intelligent Filter for Safe and Responsible LLM Deployment
by Olga Shvetsova, Danila Katalshov and Sang-Kon Lee
Appl. Sci. 2025, 15(13), 7298; https://doi.org/10.3390/app15137298 - 28 Jun 2025
Viewed by 934
Abstract
This paper proposes a technological framework designed to mitigate the inherent risks associated with the deployment of artificial intelligence (AI) in decision-making and task execution within management processes. The Agreement Validation Interface (AVI) functions as a modular Application Programming Interface (API) Gateway positioned between user applications and LLMs. This gateway architecture is designed to be LLM-agnostic, meaning it can operate with various underlying LLMs without requiring specific modifications for each model. This universality is achieved by standardizing the interface for requests and responses and applying a consistent set of validation and enhancement processes irrespective of the chosen LLM provider, thus offering a consistent governance layer across a diverse LLM ecosystem. AVI facilitates the orchestration of multiple AI subcomponents for input–output validation, response evaluation, and contextual reasoning, thereby enabling real-time, bidirectional filtering of user interactions. A proof-of-concept (PoC) implementation of AVI was developed and rigorously evaluated using industry-standard benchmarks. The system was tested for its effectiveness in mitigating adversarial prompts, reducing toxic outputs, detecting personally identifiable information (PII), and enhancing factual consistency. The results demonstrated that AVI reduced successful prompt injection attacks by 82%, decreased toxic content generation by 75%, and achieved high PII detection performance (F1-score ≈ 0.95). Furthermore, the contextual reasoning module significantly improved the neutrality and factual validity of model outputs. Although the integration of AVI introduced a moderate increase in latency, the overall framework effectively enhanced the reliability, safety, and interpretability of LLM-driven applications. AVI provides a scalable and adaptable architectural template for the responsible deployment of generative AI in high-stakes domains such as finance, healthcare, and education, promoting safer and more ethical use of AI technologies. Full article
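The gateway pattern described above can be sketched in a few lines. The snippet below is a simplified illustration of an LLM-agnostic filter, not the AVI implementation; the regex patterns, injection markers, and `call_model` callable are placeholders.

```python
import re
from typing import Callable

PII_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b",           # SSN-like identifiers
                r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"]      # email addresses
INJECTION_MARKERS = ["ignore previous instructions", "disregard the system prompt"]

def guarded_call(prompt: str, call_model: Callable[[str], str]) -> str:
    """Validate the prompt, forward it to any LLM callable, then screen the response."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        return "[blocked: possible prompt injection]"
    response = call_model(prompt)                    # provider-agnostic: any LLM client fits here
    for pattern in PII_PATTERNS:
        response = re.sub(pattern, "[REDACTED]", response)
    return response

# Usage with a stand-in model:
print(guarded_call("Summarize our refund policy.", lambda p: "Refunds are issued within 14 days."))
```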

25 pages, 5480 KiB  
Systematic Review
Integrating Circular Economy Principles in the New Product Development Process: A Systematic Literature Review and Classification of Available Circular Design Tools
by Benedetta Rotondo, Conny Bakker, Ruud Balkenende and Venanzio Arquilla
Sustainability 2025, 17(9), 4155; https://doi.org/10.3390/su17094155 - 4 May 2025
Cited by 1 | Viewed by 1369
Abstract
Nowadays, the circular economy represents a promising strategy for achieving sustainable development through optimising resource efficiency, extending product lifespans, and reducing environmental impacts. Despite the growing interest in circular design practices, companies often face difficulties integrating these principles into their established New Product Development (NPD) processes. This is mainly due to the overwhelming number of available design tools and methods, which are fragmented, challenging to navigate, overlap in functionality, and lack standardisation. This study provides a comprehensive mapping, classification, and analysis of 77 existing circular design tools identified through a systematic literature review and supplementary online searches. The tools were systematically categorised according to format, data type, industry sector, circular strategies, innovation focus, aims, and applicability across the NPD stages. The results indicate a predominance of physical, qualitative, and sector-agnostic tools, emphasising circularity integration within the Discover, Define, and Develop phases of the design process. This structured classification facilitates stakeholder navigation of existing resources, highlighting opportunities for more targeted, industry-specific tool development, consumer-oriented approaches, and the importance of considering Industry 4.0 technologies in circular design practice. Future research could address these gaps by developing customised frameworks, validating tool effectiveness through real industrial applications, and promoting deeper integration of circular design tools within NPD practices and business objectives. Full article
(This article belongs to the Special Issue Sustainable Circular Economy in Industry 4.0)
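As a rough illustration of how such a classification could be made machine-queryable, the sketch below encodes a tool record along the review's dimensions; the field values are invented examples, not entries from the authors' catalogue.

```python
from dataclasses import dataclass

@dataclass
class CircularDesignTool:
    name: str
    fmt: str                # e.g. "physical" or "digital"
    data_type: str          # e.g. "qualitative" or "quantitative"
    sector: str             # e.g. "sector-agnostic", "electronics"
    strategies: list[str]   # e.g. ["lifetime extension", "design for disassembly"]
    npd_stages: list[str]   # e.g. ["Discover", "Define", "Develop", "Deliver"]

catalogue = [
    CircularDesignTool("Hypothetical Checklist A", "physical", "qualitative",
                       "sector-agnostic", ["lifetime extension"], ["Discover", "Define"]),
]
# Filter the catalogue for tools applicable in the Develop phase:
develop_tools = [t.name for t in catalogue if "Develop" in t.npd_stages]
print(develop_tools)
```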

28 pages, 6222 KiB  
Article
IoTBystander: A Non-Intrusive Dual-Channel-Based Smart Home Security Monitoring Framework
by Haotian Chi, Qi Ma, Yuwei Wang, Jing Yang and Haijun Geng
Appl. Sci. 2025, 15(9), 4795; https://doi.org/10.3390/app15094795 - 25 Apr 2025
Viewed by 690
Abstract
The increasing prevalence of IoT technology in smart homes has significantly enhanced convenience but also introduced new security and safety challenges. Traditional security solutions, reliant on sequences of IoT-generated event data (e.g., notifications of device status changes and sensor readings), are vulnerable to cyberattacks, such as message forgery and interception and delaying attacks, and fail to monitor non-smart devices. Moreover, fragmented smart home ecosystems require vendor cooperation or system modifications for comprehensive monitoring, limiting the practicality of the existing approaches. To address these issues, we propose IoTBystander, a non-intrusive dual-channel smart home security monitoring framework that utilizes two ubiquitous platform-agnostic signals, i.e., audio and network, to monitor user and device activities. We introduce a novel dual-channel aggregation mechanism that integrates insights from both channels and cross-verifies the integrity of monitoring results. This approach expands the monitoring scope to include non-smart devices and provides richer context for anomaly detection, failure diagnosis, and configuration debugging. Empirical evaluations on a real-world testbed with nine smart and eleven non-smart devices demonstrate the high accuracy of IoTBystander in event recognition: 92.86% for recognizing events of smart devices, 95.09% for non-smart devices, and 94.27% for all devices. A case study on five anomaly scenarios further shows significant improvements in anomaly detection performance by combining the strengths of both channels. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
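The dual-channel aggregation idea can be illustrated with a small sketch: events inferred from the audio channel are matched against events inferred from the network channel within a time window, and one-sided events are flagged. The event schema and window size are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float   # seconds since start of observation
    device: str
    action: str

def cross_verify(audio_events, network_events, window_s=3.0):
    """Match audio- and network-inferred events; return confirmed and one-sided events."""
    confirmed, audio_only = [], []
    remaining = list(network_events)
    for a in audio_events:
        match = next((n for n in remaining
                      if n.device == a.device and n.action == a.action
                      and abs(n.timestamp - a.timestamp) <= window_s), None)
        if match:
            confirmed.append(a)
            remaining.remove(match)
        else:
            audio_only.append(a)     # heard but not reported: non-smart device or suppressed event
    return confirmed, audio_only, remaining   # remaining = reported but not heard (e.g., forged event)

audio = [Event(10.2, "vacuum", "start"), Event(55.0, "doorbell", "ring")]
network = [Event(11.0, "vacuum", "start")]
print(cross_verify(audio, network))
```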

22 pages, 1544 KiB  
Article
Protocol for Evaluating Explainability in Actuarial Models
by Catalina Lozano-Murcia, Francisco P. Romero and Mᵃ Concepción Gonzalez-Ramos
Electronics 2025, 14(8), 1561; https://doi.org/10.3390/electronics14081561 - 11 Apr 2025
Viewed by 485
Abstract
This paper explores the use of explainable artificial intelligence (XAI) techniques in actuarial science to address the opacity of advanced machine learning models in financial contexts. While technological advancements have enhanced actuarial models, their black-box nature poses challenges in highly regulated environments. This study proposes a protocol for selecting and applying XAI techniques to improve interpretability, transparency, and regulatory compliance. It categorizes techniques based on origin, target, and interpretative capacity, and introduces a protocol to identify the most suitable method for actuarial models. The proposed protocol is tested in a case study involving two classification algorithms, gradient boosting and random forest, with accuracies of 0.80 and 0.79, respectively, focusing on two explainability objectives. Several XAI techniques are analyzed, with results highlighting partial dependency variance (PDV) and local interpretable model-agnostic explanations (LIME) as effective tools for identifying key variables. The findings demonstrate that the protocol aids in model selection, internal audits, regulatory compliance, and enhanced decision-making transparency. These advantages make it particularly valuable for improving model governance in the financial sector. Full article
(This article belongs to the Special Issue Advances in Information, Intelligence, Systems and Applications)
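As an illustration of the kind of local explanation the protocol evaluates, the sketch below applies LIME to a gradient boosting classifier trained on synthetic data; the class names and features are placeholders rather than actuarial variables.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in data; in practice this would be the actuarial feature matrix.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["no_claim", "claim"],  # hypothetical labels
                                 mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())   # top features driving this single prediction
```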

21 pages, 9957 KiB  
Article
GaussianEnhancer++: A General GS-Agnostic Rendering Enhancer
by Chen Zou, Qingsen Ma, Jia Wang, Ming Lu, Shanghang Zhang, Zhaowei Qu and Zhaofeng He
Symmetry 2025, 17(3), 442; https://doi.org/10.3390/sym17030442 - 15 Mar 2025
Cited by 1 | Viewed by 573
Abstract
Gaussian Splatting (GS) methods, including 3DGS and 2DGS, have demonstrated significant effectiveness in real-time novel view synthesis (NVS), establishing themselves as a key technology in the field of computer graphics. However, GS-based methods still face challenges in rendering high-quality image details. Even when utilizing advanced frameworks, their outputs may display significant rendering artifacts, such as noise and blurriness, when relying solely on a few input views. A reasonable approach is to conduct post-processing in order to restore clear details. Therefore, we propose GaussianEnhancer, a general GS-agnostic post-processor that employs a degradation-driven view blending method to improve the rendering quality of GS models. Specifically, we design a degradation modeling method tailored to the GS style and construct a large-scale training dataset to effectively simulate the native rendering artifacts of GS, enabling efficient training. In addition, we present a spatial information fusion framework that includes view fusion and depth modulation modules. This framework successfully integrates related images and leverages depth information from the target image to enhance rendering details. Through our GaussianEnhancer, we effectively eliminate the rendering artifacts of GS models and generate highly realistic image details. Based on GaussianEnhancer, we introduce GaussianEnhancer++, which features an enhanced GS-style degradation simulator, leading to improved enhancement quality. Furthermore, GaussianEnhancer++ can generate ultra-high-resolution outputs from noisy low-resolution GS-rendered images by leveraging the symmetry between high-resolution and low-resolution images. Extensive experiments demonstrate the excellent restoration ability of GaussianEnhancer++ on various novel view synthesis benchmarks. Full article
(This article belongs to the Section Computer)
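A toy version of degradation-driven training-pair construction might look like the following; the blur and noise parameters are arbitrary placeholders, and the paper's degradation simulator is considerably more elaborate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_gs_artifacts(image: np.ndarray, noise_std=0.02, blur_sigma=1.2, seed=0) -> np.ndarray:
    """image: float array in [0, 1], shape (H, W, 3). Returns a degraded copy
    that crudely mimics blur and noise seen in sparse-view GS renders."""
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(image, sigma=(blur_sigma, blur_sigma, 0))   # lost high-frequency detail
    noisy = blurred + rng.normal(0.0, noise_std, size=image.shape)        # reconstruction noise
    return np.clip(noisy, 0.0, 1.0)

clean = np.random.default_rng(1).random((64, 64, 3))   # stand-in for a clean reference render
degraded = simulate_gs_artifacts(clean)
print(degraded.shape)   # training pair: (clean, degraded)
```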

20 pages, 5450 KiB  
Article
Exploring Pre-Trained Models for Skin Cancer Classification
by Abdelkader Alrabai, Amira Echtioui and Fathi Kallel
Appl. Syst. Innov. 2025, 8(2), 35; https://doi.org/10.3390/asi8020035 - 13 Mar 2025
Cited by 1 | Viewed by 2345
Abstract
Accurate skin cancer classification is essential for early diagnosis and effective treatment planning, enabling timely interventions and improved patient outcomes. In this paper, the performance of four pre-trained models—two convolutional neural networks (ResNet50 and VGG19) and two vision transformers (ViT-b16 and ViT-b32)—is evaluated in distinguishing malignant from benign skin cancers using a publicly available dermoscopic dataset. Among these models, ResNet50 achieved the highest performance across all the evaluation metrics, with accuracy, precision, and recall of 89.09% and an F1 score of 89.08%, demonstrating its ability to effectively capture complex patterns in skin lesion images. While the other models produced competitive results, ResNet50 exhibited superior robustness and consistency. To enhance model interpretability, two eXplainable Artificial Intelligence (XAI) techniques, Local Interpretable Model-Agnostic Explanations (LIME) and integrated gradients, were employed to provide insights into the decision-making process, fostering trust in automated diagnostic systems. These findings underscore the potential of deep learning for automated skin cancer classification and highlight the importance of model transparency for clinical adoption. As AI technology continues to evolve, its integration into clinical workflows could improve diagnostic accuracy, reduce the workload of healthcare professionals, and enhance patient outcomes. Full article
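A minimal transfer-learning sketch along these lines, assuming a recent torchvision and omitting data loading and augmentation, is shown below; it illustrates the standard head-replacement step rather than the authors' exact training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained ResNet50 and replace the 1000-class head with a 2-class head
# (benign vs. malignant). Requires torchvision >= 0.13 for the weights enum.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch standing in for dermoscopic images:
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```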

32 pages, 2960 KiB  
Article
Comparing Application-Level Hardening Techniques for Neural Networks on GPUs
by Giuseppe Esposito, Juan-David Guerrero-Balaguera, Josie E. Rodriguez Condia and Matteo Sonza Reorda
Electronics 2025, 14(5), 1042; https://doi.org/10.3390/electronics14051042 - 6 Mar 2025
Viewed by 936
Abstract
Neural networks (NNs) are essential in advancing modern safety-critical systems. Lightweight NN architectures are deployed on resource-constrained devices using hardware accelerators like Graphics Processing Units (GPUs) for fast responses. However, the latest semiconductor technologies may be affected by physical faults that can jeopardize the NN computations, making fault mitigation crucial for safety-critical domains. Recent studies propose software-based Hardening Techniques (HTs) to address these faults. However, the proposed fault countermeasures are evaluated with different hardware-agnostic error models and different test benches, neglecting the effort required for their implementation. Comparing application-level HTs across different studies is therefore challenging, leaving unclear (i) how effective they are against hardware-aware error models on arbitrary NNs and (ii) which HTs provide the best trade-off between reliability enhancement and implementation cost. In this study, application-level HTs are evaluated homogeneously and independently by performing a study on the feasibility of implementation and a reliability assessment under two hardware-aware error models: (i) weight single bit-flips and (ii) neuron bit error rate. Our results indicate that not all HTs suit every NN architecture, and their effectiveness varies depending on the evaluated error model. Techniques based on range restriction of the activation function consistently outperform others, achieving up to 58.23% greater mitigation effectiveness while keeping the inference-time overhead low and requiring only a contained implementation effort. Full article
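The range-restriction idea singled out as most effective can be sketched as a clamped activation; the bound below is an arbitrary placeholder, whereas in practice it would be profiled from fault-free runs.

```python
import torch
import torch.nn as nn

class RangeRestrictedReLU(nn.Module):
    """ReLU clamped to an upper bound so that a bit-flip-corrupted value
    cannot propagate through the network as an extreme outlier."""
    def __init__(self, upper_bound: float):
        super().__init__()
        self.upper_bound = upper_bound

    def forward(self, x):
        return torch.clamp(x, min=0.0, max=self.upper_bound)

layer = nn.Sequential(nn.Linear(16, 16), RangeRestrictedReLU(upper_bound=6.0))
x = torch.randn(4, 16)
x[0, 0] = 1e9                      # emulate a corruption producing a huge activation upstream
print(layer(x).max().item() <= 6.0)   # True: the fault effect is bounded
```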

16 pages, 587 KiB  
Concept Paper
Exploring AI Amid the Hype: A Critical Reflection Around the Applications and Implications of AI in Journalism
by Paschalia (Lia) Spyridou and Maria Ioannou
Societies 2025, 15(2), 23; https://doi.org/10.3390/soc15020023 - 28 Jan 2025
Cited by 1 | Viewed by 4356
Abstract
Over the last decade, AI has increasingly been adopted by newsrooms in the form of different tools aiming to support journalists and augment the capabilities of the profession. The main idea behind the adoption of AI is that it can make journalists’ work more efficient, freeing them up from some repetitive or routine tasks while enhancing their research and storytelling techniques. Against this idea, and drawing on the concept of “hype”, we employ a critical reflection on the lens often used to talk about journalism and AI. We suggest that the severe sustainability crisis of journalism, rooted in growing pressure from platforms and major corporate competitors, changing news consumption habits and rituals and the growing technologization of news media, leads to the obsessive pursuit of technology in the absence of clear and research-informed strategies which cater to journalism’s civic role. As AI is changing and (re)shaping norms and practices associated with news making, many questions and debates are raised pertaining to the quality and plurality of outputs created by AI. Given the disproportionate attention paid to technological innovation with little interpretation, the present article explores how AI is impacting journalism. Additionally, using the political economy framework, we analyze the fundamental issues and challenges journalism is faced with in terms of both practices and professional sustainability. In the process, we untangle the AI hype and attempt to shed light on how AI can help journalism regain its civic role. We argue that despite the advantages AI provides to journalism, we should avoid the “shiny things perspective”, which tends to emphasize productivity and profitability, and rather focus on the constructive synergy of humans and machines to achieve the six or seven things journalism can do for democracy. Otherwise, we are heading toward “alien intelligence” which is agnostic to the core normative values of journalism. Full article

34 pages, 25702 KiB  
Article
Software-Defined Radio-Based Internet of Things Communication Systems: An Application for the DASH7 Alliance Protocol
by Dennis Joosens, Noori BniLam, Rafael Berkvens and Maarten Weyn
Appl. Sci. 2025, 15(1), 333; https://doi.org/10.3390/app15010333 - 31 Dec 2024
Cited by 1 | Viewed by 1889
Abstract
Software-Defined Radio (SDR) technology has been a popular and powerful prototyping platform for decades. It finds applications in both fundamental research and application-oriented tasks. Additionally, the continuing rise of the Internet of Things (IoT) necessitates the validation, processing, and decoding of a large number of received signals. This is where SDRs can be a valuable instrument. In this work, we present an open-source software system using GNU Radio and SDRs, which improves the comprehension of the physical layer aspects of Internet of Things communication systems. Our implementation is generic and application-agnostic. Therefore, it can serve as a learning and investigation instrument for any IoT communication system. Within this work, we implement the open-source DASH7 Alliance Protocol (D7AP). The developed software tool can simulate synthetic DASH7 signals, process recorded data sets, and decode the received DASH7 packets in real time using an SDR front-end. The software is accompanied by three data sets collected in controlled, indoor, and suburban environments. The experimental results revealed that the total packet losses of the data sets were 0%, 2.33%, and 16.67%, respectively. Simultaneously, the three data sets were received by a dedicated DASH7 gateway with total packet losses of 0%, 3.83%, and 7.92%, respectively. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
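For context, total packet loss for such data sets is typically computed from the decoded packet identifiers versus the number of transmitted packets, as in the toy sketch below (not the authors' tooling).

```python
def packet_loss_percent(decoded_ids, num_transmitted: int) -> float:
    """Loss rate from the set of successfully decoded packet IDs."""
    received = len(set(decoded_ids))          # deduplicate in case a packet is decoded twice
    return 100.0 * (num_transmitted - received) / num_transmitted

# Toy example: 600 packets transmitted, every 6th packet missed.
decoded = [i for i in range(600) if i % 6 != 0]
print(f"{packet_loss_percent(decoded, 600):.2f}%")   # 16.67%
```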

27 pages, 2436 KiB  
Article
Seeing the Sound: Multilingual Lip Sync for Real-Time Face-to-Face Translation
by Amirkia Rafiei Oskooei, Mehmet S. Aktaş and Mustafa Keleş
Computers 2025, 14(1), 7; https://doi.org/10.3390/computers14010007 - 28 Dec 2024
Cited by 3 | Viewed by 4272
Abstract
Imagine a future where language is no longer a barrier to real-time conversations, enabling instant and lifelike communication across the globe. As cultural boundaries blur, the demand for seamless multilingual communication has become a critical technological challenge. This paper addresses the lack of robust solutions for real-time face-to-face translation, particularly for low-resource languages, by introducing a comprehensive framework that not only translates language but also replicates voice nuances and synchronized facial expressions. Our research tackles the primary challenge of achieving accurate lip synchronization across culturally diverse languages, filling a significant gap in the literature by evaluating the generalizability of lip sync models beyond English. Specifically, we develop a novel evaluation framework combining quantitative lip sync error metrics and qualitative assessments by human observers. This framework is applied to assess two state-of-the-art lip sync models with different architectures for Turkish, Persian, and Arabic languages, using a newly collected dataset. Based on these findings, we propose and implement a modular system that integrates language-agnostic lip sync models with neural networks to deliver a fully functional face-to-face translation experience. Inference time analysis shows this system achieves highly realistic, face-translated talking heads in real time, with latencies as low as 0.381 s. This transformative framework is primed for deployment in immersive environments such as VR/AR, Metaverse ecosystems, and advanced video conferencing platforms. It offers substantial benefits to developers and businesses aiming to build next-generation multilingual communication systems for diverse applications. While this work focuses on three languages, its modular design allows scalability to additional languages. However, further testing in broader linguistic and cultural contexts is required to confirm its universal applicability, paving the way for a more interconnected and inclusive world where language ceases to hinder human connection. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2024 (ICCSA 2024))
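The modular design can be sketched as a chain of swappable stages; every callable below is a hypothetical placeholder for the ASR, translation, voice synthesis, and lip sync components the abstract describes, and no real model APIs are invoked.

```python
from typing import Callable

def translate_face_to_face(video_frames, audio,
                           transcribe: Callable, translate: Callable,
                           synthesize_voice: Callable, sync_lips: Callable,
                           target_lang: str = "tr"):
    """Hypothetical pipeline: each stage is injected, so any lip sync model
    (or any ASR/MT/TTS component) can be swapped without touching the rest."""
    text = transcribe(audio)                      # source-language transcript
    translated = translate(text, target_lang)     # e.g. into Turkish, Persian, or Arabic
    dubbed_audio = synthesize_voice(translated)   # ideally preserving the speaker's voice
    return sync_lips(video_frames, dubbed_audio)  # re-render the mouth region to match new audio
```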

25 pages, 1115 KiB  
Article
Explainable Pre-Trained Language Models for Sentiment Analysis in Low-Resourced Languages
by Koena Ronny Mabokela, Mpho Primus and Turgay Celik
Big Data Cogn. Comput. 2024, 8(11), 160; https://doi.org/10.3390/bdcc8110160 - 15 Nov 2024
Viewed by 2556
Abstract
Sentiment analysis is a crucial tool for measuring public opinion and understanding human communication across digital social media platforms. However, due to linguistic complexities and limited data or computational resources, it is under-represented in many African languages. While state-of-the-art Afrocentric pre-trained language models (PLMs) have been developed for various natural language processing (NLP) tasks, their applications in eXplainable Artificial Intelligence (XAI) remain largely unexplored. In this study, we propose a novel approach that combines Afrocentric PLMs with XAI techniques for sentiment analysis. We demonstrate the effectiveness of incorporating attention mechanisms and visualization techniques in improving the transparency, trustworthiness, and decision-making capabilities of transformer-based models when making sentiment predictions. To validate our approach, we employ the SAfriSenti corpus, a multilingual sentiment dataset for South African under-resourced languages, and perform a series of sentiment analysis experiments. These experiments enable comprehensive evaluations, comparing the performance of Afrocentric models against mainstream PLMs. Our results show that the Afro-XLMR model outperforms all other models, achieving an average F1-score of 71.04% across five tested languages, and the lowest error rate among the evaluated models. Additionally, we enhance the interpretability and explainability of the Afro-XLMR model using Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). These XAI techniques ensure that sentiment predictions are not only accurate and interpretable but also understandable, fostering trust and reliability in AI-driven NLP technologies, particularly in the context of African languages. Full article
(This article belongs to the Special Issue Artificial Intelligence and Natural Language Processing)
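A minimal sketch of pairing a fine-tuned multilingual PLM with SHAP, following SHAP's standard usage with Hugging Face text-classification pipelines, is shown below; the checkpoint name is a hypothetical placeholder and would need to be replaced with an actual fine-tuned sentiment model.

```python
import shap
from transformers import pipeline

MODEL_NAME = "your-org/afro-xlmr-sentiment"   # hypothetical fine-tuned checkpoint

# return_all_scores=True exposes probabilities for every class, which SHAP expects.
classifier = pipeline("text-classification", model=MODEL_NAME, return_all_scores=True)
explainer = shap.Explainer(classifier)        # SHAP wraps text-classification pipelines directly
shap_values = explainer(["An example sentence in one of the target languages."])

# Token-level contributions toward each sentiment class:
print(shap_values)
```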

9 pages, 210 KiB  
Article
Mitigating Bias Due to Race and Gender in Machine Learning Predictions of Traffic Stop Outcomes
by Kevin Saville, Derek Berger and Jacob Levman
Information 2024, 15(11), 687; https://doi.org/10.3390/info15110687 - 1 Nov 2024
Cited by 1 | Viewed by 1232
Abstract
Traffic stops represent a crucial point of interaction between citizens and law enforcement, with potential implications for bias and discrimination. This study performs a rigorously validated comparative machine learning model analysis, creating artificial intelligence (AI) technologies to predict the results of traffic stops using a dataset sourced from the Montgomery County Maryland Data Centre, focusing on variables such as driver demographics, violation types, and stop outcomes. We repeated this rigorous validation to create models that predict outcomes with and without race, and with and without gender, as model inputs. The feature selection employed regularly selects gender and race as predictor variables. We also observed correlations between model performance and both race and gender. While these findings imply the existence of discrimination based on race and gender, our large-scale analysis (>600,000 samples) demonstrates the ability to produce top-performing models that are gender- and race-agnostic, implying the potential to create technology that can help mitigate bias in traffic stops. The findings underscore the need for unbiased data and robust algorithms to address biases in law enforcement practices and enhance public trust in AI technologies deployed in this domain. Full article
(This article belongs to the Section Artificial Intelligence)
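The with/without-sensitive-features comparison can be sketched as follows; the column names, file path, and model choice are placeholders rather than the study's pipeline.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def evaluate(df: pd.DataFrame, target: str, drop_cols: list[str]) -> float:
    """Train the same classifier with a chosen subset of columns removed and
    return held-out accuracy, so runs with/without sensitive features compare directly."""
    X = pd.get_dummies(df.drop(columns=[target] + drop_cols))
    y = df[target]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te))

# df = pd.read_csv("traffic_stops.csv")                                  # placeholder path
# with_sensitive    = evaluate(df, "outcome", drop_cols=[])
# without_sensitive = evaluate(df, "outcome", drop_cols=["race", "gender"])
# print(with_sensitive, without_sensitive)
```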
26 pages, 8632 KiB  
Article
An Innovative Honeypot Architecture for Detecting and Mitigating Hardware Trojans in IoT Devices
by Amira Hossam Eldin Omar, Hassan Soubra, Donatien Koulla Moulla and Alain Abran
IoT 2024, 5(4), 730-755; https://doi.org/10.3390/iot5040033 - 31 Oct 2024
Cited by 2 | Viewed by 3331
Abstract
The exponential growth and widespread adoption of Internet of Things (IoT) devices have introduced many vulnerabilities. Attackers frequently exploit these flaws, necessitating advanced technological approaches to protect against emerging cyber threats. This paper introduces a novel approach utilizing hardware honeypots as an additional defensive layer against hardware vulnerabilities, particularly hardware Trojans (HTs). HTs pose significant risks to the security of modern integrated circuits (ICs), potentially causing operational failures, denial of service, or data leakage through intentional modifications. The proposed system was implemented on a Raspberry Pi and tested on an emulated HT circuit using a Field-Programmable Gate Array (FPGA). This approach leverages hardware honeypots to detect and mitigate HTs in the IoT devices. The results demonstrate that the system effectively detects and mitigates HTs without imposing additional complexity on the IoT devices. The Trojan-agnostic solution offers full customization to meet specific security needs, providing a flexible and robust layer of security. These findings provide valuable insights into enhancing the security of IoT devices against hardware-based cyber threats, thereby contributing to the overall resilience of IoT networks. This innovative approach offers a promising solution to address the growing security challenges in IoT environments. Full article

25 pages, 4024 KiB  
Article
A Novel Hybrid XAI Solution for Autonomous Vehicles: Real-Time Interpretability Through LIME–SHAP Integration
by H. Ahmed Tahir, Walaa Alayed, Waqar Ul Hassan and Amir Haider
Sensors 2024, 24(21), 6776; https://doi.org/10.3390/s24216776 - 22 Oct 2024
Cited by 4 | Viewed by 4313
Abstract
The rapid advancement in self-driving and autonomous vehicles (AVs) integrated with artificial intelligence (AI) technology demands not only precision but also output transparency. In this paper, we propose a novel hybrid explainable AI (XAI) framework that combines local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP). Our framework combines the precision and global scope of SHAP with the low computational requirements of LIME, creating a balanced approach for onboard deployment with enhanced transparency. We evaluate the proposed framework on three different state-of-the-art models: ResNet-18, ResNet-50, and SegNet-50 on the KITTI dataset. The results demonstrate that our hybrid approach consistently outperforms traditional approaches by achieving a fidelity rate of more than 85%, an interpretability factor of more than 80%, and consistency of more than 70%. Furthermore, the inference time of our proposed framework with ResNet-18 was 0.28 s; for ResNet-50, it was 0.571 s; and that for SegNet was 3.889 s with XAI layers. This is optimal for onboard computations and deployment. This research establishes a strong foundation for the deployment of XAI in safety-critical AVs with balanced tradeoffs for real-time decision-making. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensors Technology in Smart Cities)
