Search Results (67)

Search Parameters:
Keywords = media forensics

12 pages, 4368 KiB  
Article
A Dual-Branch Fusion Model for Deepfake Detection Using Video Frames and Microexpression Features
by Georgios Petmezas, Vazgken Vanian, Manuel Pastor Rufete, Eleana E. I. Almaloglou and Dimitris Zarpalas
J. Imaging 2025, 11(7), 231; https://doi.org/10.3390/jimaging11070231 - 11 Jul 2025
Viewed by 448
Abstract
Deepfake detection has become a critical issue due to the rise of synthetic media and its potential for misuse. In this paper, we propose a novel approach to deepfake detection by combining video frame analysis with facial microexpression features. The dual-branch fusion model utilizes a 3D ResNet18 for spatiotemporal feature extraction and a transformer model to capture microexpression patterns, which are difficult to replicate in manipulated content. We evaluate the model on the widely used FaceForensics++ (FF++) dataset and demonstrate that our approach outperforms existing state-of-the-art methods, achieving 99.81% accuracy and a perfect ROC-AUC score of 100%. The proposed method highlights the importance of integrating diverse data sources for deepfake detection, addressing some of the current limitations of existing systems.
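The dual-branch fusion idea in this abstract can be sketched minimally: two toy branch functions stand in for the 3D ResNet18 and the microexpression transformer, and a linear head fuses their embeddings. This is an editorial illustration, not the authors' model; all shapes, functions, and weights below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def spatiotemporal_branch(clip):
    # Stand-in for a 3D CNN (e.g. a 3D ResNet18): collapse a
    # (frames, H, W) clip into a fixed-length embedding.
    return clip.mean(axis=(1, 2))  # one value per frame -> (frames,)

def microexpression_branch(landmarks):
    # Stand-in for a transformer over per-frame landmark motion:
    # summarize frame-to-frame landmark displacement statistics.
    deltas = np.diff(landmarks, axis=0)                 # (frames-1, points, 2)
    return deltas.reshape(len(deltas), -1).std(axis=0)  # (points*2,)

def fused_score(clip, landmarks, w, b):
    # Late fusion: concatenate both embeddings, apply a linear head,
    # squash to a fake-probability with a sigmoid.
    z = np.concatenate([spatiotemporal_branch(clip),
                        microexpression_branch(landmarks)])
    return 1.0 / (1.0 + np.exp(-(z @ w + b)))

clip = rng.random((16, 8, 8))       # 16 frames of toy 8x8 "video"
landmarks = rng.random((16, 5, 2))  # 5 tracked facial points per frame
w = rng.normal(size=16 + 10)        # 16 + 5*2 fused features
score = fused_score(clip, landmarks, w, 0.0)
print(0.0 < score < 1.0)            # a probability-like output
```

In a trained system the two branches would be learned jointly; the sketch only shows the fusion topology.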

42 pages, 3407 KiB  
Review
Interframe Forgery Video Detection: Datasets, Methods, Challenges, and Search Directions
by Mona M. Ali, Neveen I. Ghali, Hanaa M. Hamza, Khalid M. Hosny, Eleni Vrochidou and George A. Papakostas
Electronics 2025, 14(13), 2680; https://doi.org/10.3390/electronics14132680 - 2 Jul 2025
Viewed by 545
Abstract
The authenticity of digital video content has become a critical issue in multimedia security due to the significant rise in video editing and manipulation in recent years. The detection of interframe forgeries is essential for identifying manipulations, including frame duplication, deletion, and insertion. These are popular techniques for altering video footage without leaving visible visual evidence. This study provides a detailed review of various methods for detecting video forgery, with a primary focus on interframe forgery techniques. The article evaluates approaches by assessing key performance measures. According to a statistical overview, machine learning has traditionally been used more frequently, but deep learning techniques are gaining popularity due to their outstanding performance in handling complex tasks and robust post-processing capabilities. The study highlights the significance of interframe forgery detection for forensic analysis, surveillance, and content moderation, as demonstrated through both evaluation and case studies. It aims to summarize existing studies and identify limitations to guide future research towards more robust, scalable, and generalizable methods, such as the development of benchmark datasets that reflect real-world video manipulation diversity. This emphasizes the necessity of creating large public datasets of manipulated high-resolution videos to support reliable integrity evaluations in dealing with widespread media manipulation.
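To make interframe forgery concrete, here is a toy detector (an editorial sketch, not from the survey) that flags frame duplication as a near-zero difference between consecutive frames and frame deletion as an abnormally large jump; the thresholds are arbitrary assumptions.

```python
import numpy as np

def frame_signatures(frames):
    # Reduce each frame to a flat intensity signature.
    return frames.reshape(len(frames), -1).astype(float)

def flag_interframe_anomalies(frames, dup_tol=1e-6, del_thresh=0.5):
    # Duplication: consecutive frames that are (near-)identical.
    # Deletion: an unusually large jump between consecutive frames.
    sigs = frame_signatures(frames)
    dists = np.linalg.norm(np.diff(sigs, axis=0), axis=1)
    dists /= sigs.shape[1] ** 0.5   # per-pixel RMS difference
    duplicated = np.where(dists < dup_tol)[0]
    deleted = np.where(dists > del_thresh)[0]
    return duplicated, deleted

# A toy 6-frame sequence: frame 3 duplicates frame 2, and a hard cut
# (simulating deleted frames) occurs between frames 4 and 5.
frames = np.zeros((6, 4, 4))
for i in range(5):
    frames[i] = i * 0.05
frames[3] = frames[2]
frames[5] = 1.0
dup, cut = flag_interframe_anomalies(frames)
print(dup, cut)  # transition indices flagged as duplication / cut
```

Real detectors reviewed in such surveys use richer cues (optical flow, velocity fields, deep features), but the duplication/deletion signatures follow this same consecutive-frame logic.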
(This article belongs to the Section Computer Science & Engineering)

20 pages, 760 KiB  
Article
Detecting AI-Generated Images Using a Hybrid ResNet-SE Attention Model
by Abhilash Reddy Gunukula, Himel Das Gupta and Victor S. Sheng
Appl. Sci. 2025, 15(13), 7421; https://doi.org/10.3390/app15137421 - 2 Jul 2025
Viewed by 410
Abstract
The rapid advancements in generative artificial intelligence (AI), particularly through models like Generative Adversarial Networks (GANs) and diffusion-based architectures, have made it increasingly difficult to distinguish between real and synthetically generated images. While these technologies offer benefits in creative domains, they also pose serious risks in terms of misinformation, digital forgery, and identity manipulation. This paper presents a novel hybrid deep learning model for detecting AI-generated images by integrating the ResNet-50 architecture with Squeeze-and-Excitation (SE) attention blocks. The proposed SE-ResNet50 model enhances channel-wise feature recalibration and interpretability, enabling dynamic emphasis on subtle generative artifacts such as unnatural textures and semantic inconsistencies and thereby improving classification fidelity. Experimental evaluation on the CIFAKE dataset demonstrates the model’s effectiveness, achieving a test accuracy of 96.12%, precision of 97.04%, recall of 88.94%, F1-score of 92.82%, and an AUC score of 0.9862. The model shows strong generalization, minimal overfitting, and superior performance compared with transformer-based models and standard architectures like ResNet-50, VGGNet, and DenseNet. These results confirm the hybrid model’s suitability for real-time and resource-constrained applications in media forensics, content authentication, and ethical AI governance.
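The SE recalibration step this abstract relies on is compact enough to show directly. Below is a minimal NumPy version for illustration only; in the paper the block sits inside a trained ResNet-50, whereas the weights here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def squeeze_excite(x, w1, w2):
    # Channel recalibration as in an SE block.
    # x: feature map (C, H, W); w1: (C, C//r); w2: (C//r, C).
    s = x.mean(axis=(1, 2))                  # squeeze: global avg pool -> (C,)
    h = np.maximum(s @ w1, 0.0)              # excitation: FC + ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(h @ w2)))  # FC + sigmoid -> per-channel gate
    return x * gates[:, None, None]          # rescale each channel

C, r = 8, 4                                  # channels and reduction ratio
x = rng.random((C, 6, 6))
w1 = rng.normal(size=(C, C // r))
w2 = rng.normal(size=(C // r, C))
y = squeeze_excite(x, w1, w2)
print(y.shape)                               # same shape, channels reweighted
```

The gate vector lies in (0, 1) per channel, which is what lets the network learn to amplify channels that respond to generative artifacts and suppress the rest.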
(This article belongs to the Special Issue Advanced Signal and Image Processing for Applied Engineering)

25 pages, 10875 KiB  
Article
Novel Deepfake Image Detection with PV-ISM: Patch-Based Vision Transformer for Identifying Synthetic Media
by Orkun Çınar and Yunus Doğan
Appl. Sci. 2025, 15(12), 6429; https://doi.org/10.3390/app15126429 - 7 Jun 2025
Viewed by 687
Abstract
This study presents a novel approach to the increasingly important task of distinguishing AI-generated images from authentic photographs. The detection of such synthetic content is critical for combating deepfake misinformation and ensuring the authenticity of digital media in journalism, forensics, and online platforms. A custom-designed Vision Transformer (ViT) model, termed Patch-Based Vision Transformer for Identifying Synthetic Media (PV-ISM), is introduced. Its performance is benchmarked against innovative transfer learning methods using 60,000 authentic images from the CIFAKE dataset, which is derived from CIFAR-10, along with a corresponding collection of images generated using Stable Diffusion 1.4. PV-ISM incorporates patch extraction, positional encoding, and multiple transformer blocks with attention mechanisms to identify subtle artifacts in synthetic images. Following extensive hyperparameter tuning, an accuracy of 96.60% was achieved, surpassing the performance of ResNet50 transfer learning approaches (93.32%) and other comparable methods reported in the literature. The experimental results demonstrate the model’s balanced classification capabilities, exhibiting excellent recall and precision across both image categories. The patch-based architecture of Vision Transformers, combined with appropriate data augmentation techniques, proves particularly effective for synthetic image detection while requiring less training time than traditional transfer learning approaches.
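The ViT front end this abstract describes (patch extraction plus positional encoding) can be sketched in a few lines of NumPy. The patch size and image shape below are arbitrary illustrations, not PV-ISM's actual configuration.

```python
import numpy as np

def extract_patches(img, p):
    # Split an (H, W, C) image into non-overlapping p x p patches and
    # flatten each into a token, as in the ViT front end.
    H, W, C = img.shape
    img = img[: H - H % p, : W - W % p]  # drop any ragged edges
    rows, cols = img.shape[0] // p, img.shape[1] // p
    patches = img.reshape(rows, p, cols, p, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(rows * cols, p * p * C)

def add_positional_encoding(tokens):
    # Fixed sinusoidal position codes so the transformer knows patch order.
    n, d = tokens.shape
    pos = np.arange(n)[:, None]
    i = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d)
    pe = np.where(i % 2 == 0, np.sin(angles), np.cos(angles))
    return tokens + pe

img = np.random.default_rng(2).random((32, 32, 3))
tokens = extract_patches(img, 8)       # 16 tokens, each of length 8*8*3
tokens = add_positional_encoding(tokens)
print(tokens.shape)                    # (16, 192)
```

The transformer blocks that follow this step attend across the patch tokens, which is how patch-level artifacts in one image region can inform the global real/fake decision.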
(This article belongs to the Special Issue Application of Artificial Intelligence in Image Processing)

21 pages, 5123 KiB  
Article
Neural Network Ensemble Method for Deepfake Classification Using Golden Frame Selection
by Khrystyna Lipianina-Honcharenko, Nazar Melnyk, Andriy Ivasechko, Mykola Telka and Oleg Illiashenko
Big Data Cogn. Comput. 2025, 9(4), 109; https://doi.org/10.3390/bdcc9040109 - 21 Apr 2025
Viewed by 1014
Abstract
Deepfake technology poses significant threats in various domains, including politics, cybersecurity, and social media. This study uses the golden frame selection technique to present a neural network ensemble method for deepfake classification. The proposed approach optimizes computational resources by extracting the most informative video frames, improving detection accuracy. We integrate multiple deep learning models, including ResNet50, EfficientNetB0, Xception, InceptionV3, and Facenet, with an XGBoost meta-model for enhanced classification performance. Experimental results demonstrate a 91% accuracy rate, outperforming traditional deepfake detection models. Additionally, feature importance analysis using Grad-CAM highlights how different architectures focus on distinct facial regions, enhancing overall model interpretability. The findings contribute to the development of robust and efficient deepfake detection techniques, with potential applications in digital forensics, media verification, and cybersecurity.
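The stacking scheme the abstract outlines, with base networks feeding a meta-model, looks roughly like this. It is a hedged toy: the lambdas stand in for the CNN base models, and a fixed linear rule replaces the trained XGBoost meta-model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-ins for base networks (ResNet50, EfficientNetB0, ...):
# each maps a frame-feature vector to a fake-probability.
base_models = [
    lambda X: 1 / (1 + np.exp(-X[:, 0])),
    lambda X: 1 / (1 + np.exp(-X[:, 1])),
    lambda X: 1 / (1 + np.exp(-(X[:, 0] + X[:, 1]) / 2)),
]

def stack_features(X):
    # Each base model's probability becomes one meta-feature.
    return np.column_stack([m(X) for m in base_models])

def meta_predict(X, w, b):
    # Meta-model (XGBoost in the paper; a linear stand-in here)
    # combines base predictions into the final 0/1 decision.
    z = stack_features(X) @ w + b
    return (1 / (1 + np.exp(-z)) > 0.5).astype(int)

X = rng.normal(size=(4, 2))            # 4 toy "golden frames", 2 features each
preds = meta_predict(X, w=np.array([1.0, 1.0, 1.0]), b=-1.5)
print(preds.shape)                     # one label per frame
```

In the paper the meta-model is fit on held-out base-model predictions; the sketch only shows the data flow from base outputs to the final label.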

30 pages, 1749 KiB  
Article
Deepfake Image Forensics for Privacy Protection and Authenticity Using Deep Learning
by Saud Sohail, Syed Muhammad Sajjad, Adeel Zafar, Zafar Iqbal, Zia Muhammad and Muhammad Kazim
Information 2025, 16(4), 270; https://doi.org/10.3390/info16040270 - 27 Mar 2025
Viewed by 3353
Abstract
This research focuses on the detection of deepfake images and videos for forensic analysis using deep learning techniques. It highlights the importance of preserving privacy and authenticity in digital media. The background of the study emphasizes the growing threat of deepfakes, which pose significant challenges in various domains, including social media, politics, and entertainment. Current methodologies primarily rely on visual features that are specific to the dataset and fail to generalize well across varying manipulation techniques. Furthermore, these techniques focus on either spatial or temporal features individually and lack robustness in handling complex deepfake artifacts that involve fused facial regions such as eyes, nose, and mouth. Key approaches include the use of CNNs, RNNs, and hybrid models like CNN-LSTM, CNN-GRU, and temporal convolutional networks (TCNs) to capture both spatial and temporal features during the detection of deepfake videos and images. The research incorporates data augmentation with GANs to enhance model performance and proposes an innovative fusion of artifact inspection and facial landmark detection for improved accuracy. The experimental results show near-perfect detection accuracy across diverse datasets, demonstrating the effectiveness of these models. However, challenges remain, such as detecting deepfakes in compressed video formats, handling noise, and addressing dataset imbalances. The research presents an enhanced hybrid model that improves detection accuracy while maintaining performance across various datasets. Future work includes improving model generalization to better detect emerging deepfake techniques. The experimental results reveal a near-perfect accuracy of over 99% across different architectures, highlighting their effectiveness in forensic investigations.
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)

15 pages, 538 KiB  
Article
Use of Multiple Inputs and a Hybrid Deep Learning Model for Verifying the Authenticity of Social Media Posts
by Bandar Alotaibi
Electronics 2025, 14(6), 1184; https://doi.org/10.3390/electronics14061184 - 18 Mar 2025
Viewed by 768
Abstract
With the rise of social media platforms and the vast amount of text content generated on these platforms, text data forensics has emerged as a new area of research that aims to verify posts’ authenticity by analyzing textual content. This study proposes an innovative hybrid framework for detecting fake content on social media by examining both the text and metadata of Twitter posts. The metadata are fed into a feature selection method to select the most beneficial features. Using multiple inputs, a hybrid deep learning framework is proposed to classify Twitter posts as real or fake, where fake content is defined as posts containing misleading information. This research significantly contributes to the field of text data forensics by enhancing the detection of such fake texts. A recent comprehensive dataset for text data forensics, the CIC Truth Seeker Dataset 2023, was used to assess the effectiveness of the proposed framework, which uses long short-term memory (LSTM) layers to process textual data and hybrid residual neural network (ResNet) and deep neural network (DNN) layers for metadata. The framework has shown promising results during its preliminary evaluations. The paper examines the proposed model’s architecture and performance while highlighting potential improvements in privacy, ethics, real-time deployment, and implementation limitations to emphasize its broader impact.

42 pages, 10351 KiB  
Article
Deepfake Media Forensics: Status and Future Challenges
by Irene Amerini, Mauro Barni, Sebastiano Battiato, Paolo Bestagini, Giulia Boato, Vittoria Bruni, Roberto Caldelli, Francesco De Natale, Rocco De Nicola, Luca Guarnera, Sara Mandelli, Taiba Majid, Gian Luca Marcialis, Marco Micheletto, Andrea Montibeller, Giulia Orrù, Alessandro Ortis, Pericle Perazzo, Giovanni Puglisi, Nischay Purnekar, Davide Salvi, Stefano Tubaro, Massimo Villari and Domenico Vitulano
J. Imaging 2025, 11(3), 73; https://doi.org/10.3390/jimaging11030073 - 28 Feb 2025
Cited by 5 | Viewed by 9369
Abstract
The rise of AI-generated synthetic media, or deepfakes, has introduced unprecedented opportunities and challenges across various fields, including entertainment, cybersecurity, and digital communication. Using advanced frameworks such as Generative Adversarial Networks (GANs) and Diffusion Models (DMs), deepfakes are capable of producing highly realistic yet fabricated content. While these advancements enable creative and innovative applications, they also pose severe ethical, social, and security risks due to their potential misuse. The proliferation of deepfakes has triggered phenomena like “Impostor Bias”, a growing skepticism toward the authenticity of multimedia content, further complicating trust in digital interactions. This paper is mainly based on the description of a research project called FF4ALL (FF4ALL-Detection of Deep Fake Media and Life-Long Media Authentication) for the detection and authentication of deepfakes, focusing on areas such as forensic attribution, passive and active authentication, and detection in real-world scenarios. By exploring both the strengths and limitations of current methodologies, we highlight critical research gaps and propose directions for future advancements to ensure media integrity and trustworthiness in an era increasingly dominated by synthetic media.

17 pages, 1978 KiB  
Article
Lightweight Deepfake Detection Based on Multi-Feature Fusion
by Siddiqui Muhammad Yasir and Hyun Kim
Appl. Sci. 2025, 15(4), 1954; https://doi.org/10.3390/app15041954 - 13 Feb 2025
Cited by 3 | Viewed by 2793
Abstract
Deepfake technology utilizes deep learning (DL)-based face manipulation techniques to seamlessly replace faces in videos, creating highly realistic but artificially generated content. Although this technology has beneficial applications in media and entertainment, misuse of its capabilities may lead to serious risks, including identity theft, cyberbullying, and false information. The integration of DL with visual cognition has resulted in important technological improvements, particularly in addressing privacy risks caused by artificially generated “deepfake” images on digital media platforms. In this study, we propose an efficient and lightweight method for detecting deepfake images and videos, making it suitable for devices with limited computational resources. In order to reduce the computational burden usually associated with DL models, our method integrates machine learning classifiers in combination with keyframing approaches and texture analysis. Moreover, features extracted with a histogram of oriented gradients (HOG), local binary pattern (LBP), and KAZE bands were fused and evaluated using random forest, extreme gradient boosting, extra trees, and support vector classifier algorithms. Our findings show that a feature-level fusion of HOG, LBP, and KAZE features improves accuracy to 92% and 96% on FaceForensics++ and Celeb-DF(v2), respectively.
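Of the fused descriptors, LBP is the simplest to illustrate. Here is a minimal 8-neighbour LBP histogram in NumPy; this is an editorial sketch, and the paper's exact LBP variant and parameters are not specified in the abstract.

```python
import numpy as np

def lbp_histogram(gray, bins=256):
    # Basic 8-neighbour local binary pattern: threshold each pixel's
    # ring of neighbours against the centre pixel, pack the results
    # into an 8-bit code, and histogram the codes as a texture feature.
    c = gray[1:-1, 1:-1]
    neighbours = [gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
                  gray[1:-1, 2:], gray[2:, 2:], gray[2:, 1:-1],
                  gray[2:, :-2], gray[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()               # normalized 256-bin descriptor

gray = np.random.default_rng(4).random((32, 32))
lbp = lbp_histogram(gray)
# Concatenated with HOG and KAZE descriptors, vectors like this would
# feed the random forest / gradient boosting classifiers.
print(lbp.shape)
```

Because the descriptor is a fixed-length histogram rather than a deep embedding, the downstream classifiers stay cheap, which is the point of the lightweight design.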
(This article belongs to the Collection Trends and Prospects in Multimedia)

16 pages, 890 KiB  
Article
DeepGuard: Identification and Attribution of AI-Generated Synthetic Images
by Yasmine Namani, Ikram Reghioua, Gueltoum Bendiab, Mohamed Aymen Labiod and Stavros Shiaeles
Electronics 2025, 14(4), 665; https://doi.org/10.3390/electronics14040665 - 8 Feb 2025
Cited by 1 | Viewed by 2571
Abstract
Text-to-image (T2I) synthesis, driven by advancements in deep learning and generative models, has seen significant improvements, enabling the creation of highly realistic images from textual descriptions. However, this rapid development brings challenges in distinguishing synthetic images from genuine ones, raising concerns in critical areas such as security, privacy, and digital forensics. To address these concerns and ensure the reliability and authenticity of data, this paper conducts a systematic study on detecting fake images generated by text-to-image synthesis models. Specifically, it evaluates the effectiveness of deep learning methods that leverage ensemble learning for detecting fake images. Additionally, it introduces a multi-classification technique to attribute fake images to their source models, thereby enabling accountability for model misuse. The effectiveness of these methods is assessed through extensive simulations and proof-of-concept experiments. The results reveal that these methods can effectively detect fake images and associate them with their respective generation models, achieving impressive accuracy rates ranging from 98.00% to 99.87% on our custom dataset, “DeepGuardDB”. These findings highlight the potential of the proposed techniques to mitigate synthetic media risks, ensuring a safer digital space with preserved authenticity across various domains, including journalism, legal forensics, and public safety.

38 pages, 2985 KiB  
Systematic Review
Generative Artificial Intelligence and the Evolving Challenge of Deepfake Detection: A Systematic Analysis
by Reza Babaei, Samuel Cheng, Rui Duan and Shangqing Zhao
J. Sens. Actuator Netw. 2025, 14(1), 17; https://doi.org/10.3390/jsan14010017 - 6 Feb 2025
Cited by 5 | Viewed by 12164
Abstract
Deepfake technology, which employs advanced generative artificial intelligence to create hyper-realistic synthetic media, poses significant challenges across various sectors, including security, entertainment, and education. This literature review explores the evolution of deepfake generation methods, ranging from traditional techniques to state-of-the-art models such as generative adversarial networks and diffusion models. We navigate through the effectiveness and limitations of various detection approaches, including machine learning, forensic analysis, and hybrid techniques, while highlighting the critical importance of interpretability and real-time performance in detection systems. Furthermore, we discuss the ethical implications and regulatory considerations surrounding deepfake technology, emphasizing the need for comprehensive frameworks to mitigate risks associated with misinformation and manipulation. Through a systematic review of the existing literature, our aim is to identify research gaps and future directions for the development of robust, adaptable detection systems that can keep pace with rapid advancements in deepfake generation.

16 pages, 603 KiB  
Article
Comprehensive Evaluation of Deepfake Detection Models: Accuracy, Generalization, and Resilience to Adversarial Attacks
by Maryam Abbasi, Paulo Váz, José Silva and Pedro Martins
Appl. Sci. 2025, 15(3), 1225; https://doi.org/10.3390/app15031225 - 25 Jan 2025
Cited by 1 | Viewed by 7384
Abstract
The rise of deepfakes—synthetic media generated using artificial intelligence—threatens digital content authenticity, facilitating misinformation and manipulation. Deepfakes can depict real or entirely fictitious individuals, leveraging state-of-the-art techniques such as generative adversarial networks (GANs) and emerging diffusion-based models. Existing detection methods face challenges with generalization across datasets and vulnerability to adversarial attacks. This study focuses on subsets of frames extracted from the DeepFake Detection Challenge (DFDC) and FaceForensics++ videos to evaluate three convolutional neural network architectures—XCeption, ResNet, and VGG16—for deepfake detection. Performance metrics include accuracy, precision, F1-score, AUC-ROC, and Matthews Correlation Coefficient (MCC), combined with an assessment of resilience to adversarial perturbations via the Fast Gradient Sign Method (FGSM). Among the tested models, XCeption achieves the highest accuracy (89.2% on DFDC), strong generalization, and real-time suitability, while VGG16 excels in precision and ResNet provides faster inference. However, all models exhibit reduced performance under adversarial conditions, underscoring the need for enhanced resilience. These findings indicate that robust detection systems must consider advanced generative approaches, adversarial defenses, and cross-dataset adaptation to effectively counter evolving deepfake threats.
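The FGSM perturbation used in the robustness assessment is a single gradient-sign step. Below is a self-contained sketch against a toy logistic classifier rather than the paper's CNNs; the weights and epsilon are illustrative assumptions.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    # Fast Gradient Sign Method against a logistic classifier:
    # step each input feature by eps in the direction that
    # increases the loss, i.e. the sign of the input gradient.
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w   # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])          # toy model weights
x = np.array([0.5, 0.2])           # clean input with true label y = 1
x_adv = fgsm_perturb(x, 1.0, w, b=0.0, eps=0.1)

def prob(v):
    # Model's confidence in the true class before/after the attack.
    return 1.0 / (1.0 + np.exp(-(v @ w)))

print(prob(x) > prob(x_adv))       # the attack lowers the true-class score
```

Against deep models the same recipe applies, with the input gradient obtained by backpropagation through the network instead of the closed-form logistic gradient.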

20 pages, 17747 KiB  
Article
A Secure Learned Image Codec for Authenticity Verification via Self-Destructive Compression
by Chen-Hsiu Huang and Ja-Ling Wu
Big Data Cogn. Comput. 2025, 9(1), 14; https://doi.org/10.3390/bdcc9010014 - 15 Jan 2025
Viewed by 1075
Abstract
In the era of deepfakes and AI-generated content, digital image manipulation poses significant challenges to image authenticity, creating doubts about the credibility of images. Traditional image forensics techniques often struggle to detect sophisticated tampering, and passive detection approaches are reactive, verifying authenticity only after counterfeiting occurs. In this paper, we propose a novel full-resolution secure learned image codec (SLIC) designed to proactively prevent image manipulation by creating self-destructive artifacts upon re-compression. Once a sensitive image is encoded using SLIC, any subsequent re-compression or editing attempts will result in visually severe distortions, making the image’s tampering immediately evident. Because the content of an SLIC image is either original or visually damaged after tampering, images encoded with this secure codec hold greater credibility. SLIC leverages adversarial training to fine-tune a learned image codec that introduces out-of-distribution perturbations, ensuring that the first compressed image retains high quality while subsequent re-compressions degrade drastically. We analyze and compare the adversarial effects of various perceptual quality metrics combined with different learned codecs. Our experiments demonstrate that SLIC holds significant promise as a proactive defense strategy against image manipulation, offering a new approach to enhancing image credibility and authenticity in a media landscape increasingly dominated by AI-driven forgeries.

14 pages, 15003 KiB  
Article
Is Lung Disease a Risk Factor for Sudden Cardiac Death? A Comparative Case–Control Histopathological Study
by Ioana Radu, Anca Otilia Farcas, Septimiu Voidazan, Carmen Corina Radu and Klara Brinzaniuc
Diseases 2025, 13(1), 8; https://doi.org/10.3390/diseases13010008 - 6 Jan 2025
Cited by 1 | Viewed by 1123
Abstract
Background/Objectives: Sudden cardiac death (SCD) constitutes approximately 50% of cardiovascular mortality. Numerous studies have established an interrelation and a strong association between SCD and pulmonary diseases, such as chronic obstructive pulmonary disease (COPD). The aim of this study is to examine the presence of more pronounced cardiopulmonary histopathological changes in individuals who died from SCD compared to the histopathological changes in those who died from violent deaths, in two groups with comparable demographic characteristics, age and sex. Methods: This retrospective case–control study investigated the histopathological changes in cardiac and pulmonary tissues in two cohorts, each comprising 40 cases of SCD and 40 cases of violent death (self-inflicted hanging). Forensic autopsies were conducted at the Maramureș County Forensic Medicine Service, Romania, between 2019 and 2020. Results: The mean ages recorded were 43.88 years (SD 5.49) for the SCD cohort and 41.98 years (SD 8.55) for the control cohort. In the SCD cases, pulmonary parenchyma exhibited inflammatory infiltrate in 57.5% (23), fibrosis in 62.5% (25), blood extravasation in 45% (18), and vascular media thickening in 37.5% (15), compared to the control cohort, where these parameters were extremely low. In myocardial tissue, fibrosis was identified in 47.5% (19) and subendocardial adipose tissue in 22.5% (9) of the control cohort. Conclusions: A close association exists between SCD and the histopathological alterations observed in the pulmonary parenchyma, including inflammation, fibrosis, emphysema, blood extravasation, stasis, intimal lesions, and vascular media thickening in intraparenchymal vessels. Both the histopathological modifications in the pulmonary parenchyma and vessels, as well as those in myocardial tissue, were associated with an increased risk of SCD, ranging from 2.17 times (presence of intimal lesions) to 58.50 times (presence of interstitial and perivascular inflammatory infiltrate in myocardial tissue).
(This article belongs to the Section Cardiology)

14 pages, 10530 KiB  
Article
Tesla Log Data Analysis Approach from a Digital Forensics Perspective
by Jung-Hwan Lee, Seong Ho Lim, Bumsu Hyeon, Oc-Yeub Jeon, Jong Jin Park and Nam In Park
World Electr. Veh. J. 2024, 15(12), 590; https://doi.org/10.3390/wevj15120590 - 21 Dec 2024
Viewed by 2766
Abstract
Modern vehicles are equipped with various electronic control units (ECUs) for safety, entertainment, and autonomous driving. These ECUs operate independently according to their respective roles and generate considerable data. However, owing to capacity and security concerns, most of these data are not stored. In contrast, Tesla vehicles, equipped with multiple sensors and designed under the software-defined vehicle (SDV) concept, collect, store, and periodically transmit data to dedicated servers. The data stored inside and outside the vehicle by the manufacturer can be used for various purposes and can provide numerous insights to digital forensics researchers investigating incidents/accidents. In this study, various data stored inside and outside of Tesla vehicles are described sequentially from a digital forensics perspective. First, we identify the location and range of the obtainable storage media. Second, we explain how the data are acquired. Third, we describe how the acquired data are analyzed. Fourth, we verify the analyzed data by comparing them with one another. Finally, the cross-analysis of various data obtained from the actual accident vehicles and the data provided by the manufacturer revealed consistent trends across the datasets. Although the number of data points recorded during the same timeframe differed, the overall patterns remained consistent. This process enhanced the reliability of the vehicle data and improved the accuracy of the accident investigation.
