Search Results (302)

Search Parameters:
Keywords = media automations

31 pages, 18320 KiB  
Article
Penetrating Radar on Unmanned Aerial Vehicle for the Inspection of Civilian Infrastructure: System Design, Modeling, and Analysis
by Jorge Luis Alva Alarcon, Yan Rockee Zhang, Hernan Suarez, Anas Amaireh and Kegan Reynolds
Aerospace 2025, 12(8), 686; https://doi.org/10.3390/aerospace12080686 (registering DOI) - 31 Jul 2025
Abstract
The increasing demand for noninvasive inspection (NII) of complex civil infrastructures requires overcoming the limitations of traditional ground-penetrating radar (GPR) systems in addressing diverse and large-scale applications. The solution proposed in this study focuses on an initial design that integrates a low-SWaP (Size, Weight, and Power) ultra-wideband (UWB) impulse radar with realistic electromagnetic modeling for deployment on unmanned aerial vehicles (UAVs). The system incorporates ultra-realistic antenna and propagation models, utilizing Finite Difference Time Domain (FDTD) solvers and multilayered media, to replicate realistic airborne sensing geometries. Verification and calibration are performed by comparing simulation outputs with laboratory measurements using varied material samples and target models. Custom signal processing algorithms are developed to extract meaningful features from complex electromagnetic environments and support anomaly detection. Additionally, machine learning (ML) techniques are trained on synthetic data to automate the identification of structural characteristics. The results demonstrate accurate agreement between simulations and measurements, as well as the potential for deploying this design in flight tests within realistic environments featuring complex electromagnetic interference.
(This article belongs to the Section Aeronautics)

16 pages, 502 KiB  
Article
Artificial Intelligence in Digital Marketing: Enhancing Consumer Engagement and Supporting Sustainable Behavior Through Social and Mobile Networks
by Carmen Acatrinei, Ingrid Georgeta Apostol, Lucia Nicoleta Barbu, Raluca-Giorgiana Chivu (Popa) and Mihai-Cristian Orzan
Sustainability 2025, 17(14), 6638; https://doi.org/10.3390/su17146638 - 21 Jul 2025
Viewed by 618
Abstract
This article explores the integration of artificial intelligence (AI) in digital marketing through social and mobile networks and its role in fostering sustainable consumer behavior. AI enhances personalization, sentiment analysis, and campaign automation, reshaping marketing dynamics and enabling brands to engage interactively with users. A quantitative study conducted on 501 social media users evaluates how perceived benefits, risks, trust, transparency, satisfaction, and social norms influence the acceptance of AI-driven marketing tools. Using structural equation modeling (SEM), the findings show that social norms and perceived transparency significantly enhance trust in AI, while perceived benefits and satisfaction drive user acceptance; conversely, perceived risks and negative emotions undermine trust. From a sustainability perspective, AI supports the efficient targeting and personalization of eco-conscious content, aligning marketing with environmentally responsible practices. This study contributes to ethical AI and sustainable digital strategies by offering empirical evidence and practical insights for responsible AI integration in marketing.

35 pages, 954 KiB  
Article
Beyond Manual Media Coding: Evaluating Large Language Models and Agents for News Content Analysis
by Stavros Doropoulos, Elisavet Karapalidou, Polychronis Charitidis, Sophia Karakeva and Stavros Vologiannidis
Appl. Sci. 2025, 15(14), 8059; https://doi.org/10.3390/app15148059 - 20 Jul 2025
Viewed by 463
Abstract
The vast volume of media content, combined with the costs of manual annotation, challenges scalable codebook analysis and risks reducing decision-making accuracy. This study evaluates the effectiveness of large language models (LLMs) and multi-agent teams in structured media content analysis based on codebook-driven annotation. We construct a dataset of 200 news articles on U.S. tariff policies, manually annotated using a 26-question codebook encompassing 122 distinct codes, to establish a rigorous ground truth. Seven state-of-the-art LLMs, spanning low- to high-capacity tiers, are assessed under a unified zero-shot prompting framework incorporating role-based instructions and schema-constrained outputs. Experimental results show weighted global F1-scores between 0.636 and 0.822, with Claude-3-7-Sonnet achieving the highest direct-prompt performance. To examine the potential of agentic orchestration, we propose and develop a multi-agent system using Meta’s Llama 4 Maverick, incorporating expert role profiling, shared memory, and coordinated planning. This architecture improves the overall F1-score over the direct prompting baseline from 0.757 to 0.805 and demonstrates consistent gains across binary, categorical, and multi-label tasks, approaching commercial-level accuracy while maintaining a favorable cost–performance profile. These findings highlight the viability of LLMs, both in direct and agentic configurations, for automating structured content analysis.
(This article belongs to the Special Issue Natural Language Processing in the Era of Artificial Intelligence)
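The schema-constrained, role-based prompting setup this abstract describes might be sketched as follows. The codebook question, codes, and few-shot example below are invented for illustration; the actual LLM call is omitted, and only prompt construction and output validation are shown:

```python
import json

# Hypothetical codebook item; the real study uses a 26-question codebook
# with 122 codes, none of which are reproduced here.
CODEBOOK_QUESTION = "Does the article mention retaliatory tariffs?"
CODES = ["yes", "no", "unclear"]

def build_prompt(article_text, examples):
    """Compose a role-based prompt with optional few-shot examples and a JSON schema hint."""
    lines = [
        "You are an expert media-content annotator.",
        f"Question: {CODEBOOK_QUESTION}",
        f'Answer with JSON: {{"code": one of {CODES}}}',
    ]
    for ex_text, ex_code in examples:  # few-shot guidance, if any
        lines.append(f"Example: {ex_text}\nAnswer: {json.dumps({'code': ex_code})}")
    lines.append(f"Article: {article_text}\nAnswer:")
    return "\n".join(lines)

def parse_answer(raw):
    """Validate a schema-constrained reply; fall back to 'unclear' on bad output."""
    try:
        code = json.loads(raw).get("code")
    except json.JSONDecodeError:
        return "unclear"
    return code if code in CODES else "unclear"

prompt = build_prompt(
    "China imposed new duties on U.S. goods.",
    [("The EU answered U.S. tariffs with its own.", "yes")],
)
```

A zero-shot variant would simply pass an empty example list, which is the contrast the study evaluates.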

26 pages, 9183 KiB  
Review
Application of Image Computing in Non-Destructive Detection of Chinese Cuisine
by Xiaowei Huang, Zexiang Li, Zhihua Li, Jiyong Shi, Ning Zhang, Zhou Qin, Liuzi Du, Tingting Shen and Roujia Zhang
Foods 2025, 14(14), 2488; https://doi.org/10.3390/foods14142488 - 16 Jul 2025
Viewed by 456
Abstract
Food quality and safety are paramount in preserving the culinary authenticity and cultural integrity of Chinese cuisine, characterized by intricate ingredient combinations, diverse cooking techniques (e.g., stir-frying, steaming, and braising), and region-specific flavor profiles. Traditional non-destructive detection methods often struggle with the unique challenges posed by Chinese dishes, including complex textural variations in staple foods (e.g., noodles, dumplings), layered seasoning compositions (e.g., soy sauce, Sichuan peppercorns), and oil-rich cooking media. This study pioneers a hyperspectral imaging framework enhanced with domain-specific deep learning algorithms (spatial–spectral convolutional networks with attention mechanisms) to address these challenges. Our approach effectively deciphers the subtle spectral fingerprints of Chinese-specific ingredients (e.g., fermented black beans, lotus root) and quantifies critical quality indicators, achieving an average classification accuracy of 97.8% across 15 major Chinese dish categories. Specifically, the model demonstrates high precision in quantifying chili oil content in Mapo Tofu with a Mean Absolute Error (MAE) of 0.43% w/w and assessing freshness gradients in Cantonese dim sum (Shrimp Har Gow) with a classification accuracy of 95.2% for three distinct freshness levels. This approach leverages the detailed spectral information provided by hyperspectral imaging to automate the classification and detection of Chinese dishes, significantly improving the accuracy of image-based food classification by >15 percentage points compared to traditional RGB methods and enhancing food quality and safety assessment.
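As a loose illustration of attention over spectral bands (not the authors' architecture; the cube and band scores below are synthetic):

```python
import numpy as np

def spectral_attention(cube, band_scores):
    """Weight each band of an (H, W, B) hyperspectral cube by softmax attention scores."""
    w = np.exp(band_scores - band_scores.max())
    w = w / w.sum()                      # softmax weights, one per spectral band
    return (cube * w).sum(axis=-1)       # attention-pooled (H, W) response map

cube = np.ones((2, 2, 3))                # toy 2x2 image with 3 spectral bands
pooled = spectral_attention(cube, np.array([0.0, 0.0, 0.0]))
```

With uniform scores every band receives equal weight, so the pooled map equals the per-pixel band average; a trained network would learn scores that emphasize the bands carrying the diagnostic spectral fingerprints.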

24 pages, 1605 KiB  
Article
Quantum-Secure Coherent Optical Networking for Advanced Infrastructures in Industry 4.0
by Ofir Joseph and Itzhak Aviv
Information 2025, 16(7), 609; https://doi.org/10.3390/info16070609 - 15 Jul 2025
Viewed by 416
Abstract
Modern industrial ecosystems, particularly those embracing Industry 4.0, increasingly depend on coherent optical networks operating at 400 Gbps and beyond. These high-capacity infrastructures, coupled with advanced digital signal processing and phase-sensitive detection, enable real-time data exchange for automated manufacturing, robotics, and interconnected factory systems. However, they introduce multilayer security challenges—ranging from hardware synchronization gaps to protocol overhead manipulation. Moreover, the rise of large-scale quantum computing intensifies these threats by potentially breaking classical key exchange protocols and enabling the future decryption of stored ciphertext. In this paper, we present a systematic vulnerability analysis of coherent optical networks that use OTU4 framing, Media Access Control Security (MACsec), and 400G ZR+ transceivers. Guided by established risk assessment methodologies, we uncover critical weaknesses affecting management plane interfaces (e.g., MDIO and I2C) and overhead fields (e.g., Trail Trace Identifier, Bit Interleaved Parity). To mitigate these risks while preserving the robust data throughput and low-latency demands of industrial automation, we propose a post-quantum security framework that merges spectral phase masking with multi-homodyne coherent detection, strengthened by quantum key distribution for key management. This layered approach maintains backward compatibility with existing infrastructure and ensures forward secrecy against quantum-enabled adversaries. The evaluation results show a substantial reduction in exposure to timing-based exploits, overhead field abuses, and cryptographic compromise. By integrating quantum-safe measures at the optical layer, our solution provides a future-proof roadmap for network operators, hardware vendors, and Industry 4.0 stakeholders tasked with safeguarding next-generation manufacturing and engineering processes.

26 pages, 4255 KiB  
Article
Moving Toward Automated Construction Management: An Automated Construction Worker Efficiency Evaluation System
by Chaojun Zhang, Chao Mao, Huan Liu, Yunlong Liao and Jiayi Zhou
Buildings 2025, 15(14), 2479; https://doi.org/10.3390/buildings15142479 - 15 Jul 2025
Viewed by 291
Abstract
In the Architecture, Engineering, and Construction (AEC) industry, traditional labor efficiency evaluation methods have limitations, while computer vision technology shows great potential. This study aims to develop a potential automated construction efficiency evaluation framework. We propose a method that integrates keypoint processing and extraction using the BlazePose model from MediaPipe, action classification with a Long Short-Term Memory (LSTM) network, and construction object recognition with the YOLO algorithm. A new model framework for action recognition and work hour statistics is introduced, and a specific construction scene dataset is developed under controlled experimental conditions. The experimental results on this dataset show that the worker action recognition accuracy can reach 82.23%, and the average accuracy of the classification model based on the confusion matrix is 81.67%. This research makes contributions in terms of innovative methodology, a new model framework, and a comprehensive dataset, which may have potential implications for enhancing construction efficiency, supporting cost-saving strategies, and providing decision support in the future. However, this study represents an initial validation under limited conditions, and it also has limitations such as its dependence on well-lit environments and high computational requirements. Future research should focus on addressing these limitations and further validating the approach in diverse and practical construction scenarios.
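The work-hour statistics step might look like the following sketch, assuming the action classifier emits one label per video frame; the action names and frame rate are hypothetical, not from the paper's dataset:

```python
from collections import Counter

def work_seconds(frame_labels, fps):
    """Tally seconds spent per action from per-frame action labels."""
    counts = Counter(frame_labels)
    return {action: n / fps for action, n in counts.items()}

# 60 frames at 15 fps: 45 frames of (hypothetical) rebar tying, 15 idle.
labels = ["rebar_tying"] * 30 + ["idle"] * 15 + ["rebar_tying"] * 15
stats = work_seconds(labels, fps=15)
```

Aggregating such per-action durations across a shift is what turns frame-level recognition into the efficiency figures the framework reports.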

19 pages, 1779 KiB  
Article
Through the Eyes of the Viewer: The Cognitive Load of LLM-Generated vs. Professional Arabic Subtitles
by Hussein Abu-Rayyash and Isabel Lacruz
J. Eye Mov. Res. 2025, 18(4), 29; https://doi.org/10.3390/jemr18040029 - 14 Jul 2025
Viewed by 429
Abstract
As streaming platforms adopt artificial intelligence (AI)-powered subtitle systems to satisfy global demand for instant localization, the cognitive impact of these automated translations on viewers remains largely unexplored. This study used a web-based eye-tracking protocol to compare the cognitive load that GPT-4o-generated Arabic subtitles impose with that of professional human translations among 82 native Arabic speakers who viewed a 10 min episode (“Syria”) from the BBC comedy drama series State of the Union. Participants were randomly assigned to view the same episode with either professionally produced Arabic subtitles (Amazon Prime’s human translations) or machine-generated GPT-4o Arabic subtitles. In a between-subjects design, with English proficiency entered as a moderator, we collected fixation count, mean fixation duration, gaze distribution, and attention concentration (K-coefficient) as indices of cognitive processing. GPT-4o subtitles raised cognitive load on every metric; viewers produced 48% more fixations in the subtitle area, recorded 56% longer fixation durations, and spent 81.5% more time reading the automated subtitles than the professional subtitles. The subtitle area K-coefficient tripled (0.10 to 0.30), a shift from ambient scanning to focal processing. Viewers with advanced English proficiency showed the largest disruptions, which indicates that higher linguistic competence increases sensitivity to subtle translation shortcomings. These results challenge claims that large language models (LLMs) lighten viewer burden; despite fluent surface quality, GPT-4o subtitles demand far more cognitive resources than expert human subtitles and therefore reinforce the need for human oversight in audiovisual translation (AVT) and media accessibility.

22 pages, 1013 KiB  
Article
Leveraging Artificial Intelligence in Social Media Analysis: Enhancing Public Communication Through Data Science
by Sawsan Taha and Rania Abdel-Qader Abdallah
Journal. Media 2025, 6(3), 102; https://doi.org/10.3390/journalmedia6030102 - 12 Jul 2025
Viewed by 563
Abstract
This study examines the role of AI tools in improving public communication via social media analysis. It reviews five of the top platforms—Google Cloud Natural Language, IBM Watson NLU, Hootsuite Insights, Talkwalker Analytics, and Sprout Social—to determine their accuracy in detecting sentiment, predicting trends, optimally timing content, and enhancing messaging engagement. Adopting a structured model approach and Partial Least Squares Structural Equation Modeling (PLS-SEM) via SMART PLS, this research uses 500 influencer posts from five Arab countries. The results demonstrate the impactful relationships between AI tool functions and communication outcomes: the utilization of text analysis tools significantly improved public engagement (β = 0.62, p = 0.001), trend forecasting tools improved strategic planning decisions (β = 0.74, p < 0.001), and timing optimization tools enhanced message efficacy (β = 0.59, p = 0.004). Beyond the technical dimensions, the study addresses urgent ethical considerations by outlining a five-principle ethical governance model that encourages transparency, fairness, privacy, human oversight of technologies, and institutional accountability, addressing data bias, algorithmic opacity, and over-reliance on automated solutions. The research adds a multidimensional framework for integrating AI into digital public communication in culturally sensitive and linguistically diverse environments and provides a blueprint for improving AI integration.

19 pages, 1400 KiB  
Article
Identifying Themes in Social Media Discussions of Eating Disorders: A Quantitative Analysis of How Meaningful Guidance and Examples Improve LLM Classification
by Apoorv Prasad, Setayesh Abiazi Shalmani, Lu He, Yang Wang and Susan McRoy
BioMedInformatics 2025, 5(3), 40; https://doi.org/10.3390/biomedinformatics5030040 - 11 Jul 2025
Viewed by 440
Abstract
Background: Social media represents a unique opportunity to investigate the perspectives of people with eating disorders at scale. One forum alone, r/EatingDisorders, now has 113,000 members worldwide. Where a manual analysis might sample a few dozen items, automatic classification using large language models (LLMs) can analyze thousands of posts in less than a day. Methods: Here, we compare multiple strategies for invoking an LLM, including ones that provide examples (few-shot) and annotation guidelines, to classify eating disorder content across 14 predefined themes using Llama3.1:8b on 6850 social media posts. In addition to standard metrics, we calculate four novel dimensions of classification quality: a Category Divergence Index, confidence scores (overall model certainty), focus scores (a measure of decisiveness for selected subsets of themes), and dominance scores (primary theme identification strength). Results: By every measure, invoking an LLM without extensive guidance and examples (zero-shot) is insufficient. Zero-shot had worse mean category divergence (7.17 versus 3.17), whereas few-shot yielded higher mean confidence (0.42 versus 0.27) and higher mean dominance (0.81 versus 0.46). Overall, a few-shot approach improved quality measures across nearly 90% of predictions. Conclusions: These findings suggest that LLMs, if invoked with expert instructions and helpful examples, can provide instantaneous high-quality annotation, enabling automated mental health content moderation systems or future clinical research.

22 pages, 818 KiB  
Article
Towards Reliable Fake News Detection: Enhanced Attention-Based Transformer Model
by Jayanti Rout, Minati Mishra and Manob Jyoti Saikia
J. Cybersecur. Priv. 2025, 5(3), 43; https://doi.org/10.3390/jcp5030043 - 9 Jul 2025
Viewed by 647
Abstract
The widespread rise of misinformation across digital platforms has increased the demand for accurate and efficient Fake News Detection (FND) systems. This study introduces an enhanced transformer-based architecture for FND, developed through comprehensive ablation studies and empirical evaluations on multiple benchmark datasets. The proposed model combines improved multi-head attention, dynamic positional encoding, and a lightweight classification head to effectively capture nuanced linguistic patterns, while maintaining computational efficiency. To ensure robust training, techniques such as label smoothing, learning rate warm-up, and reproducibility protocols were incorporated. The model demonstrates strong generalization across three diverse datasets (FakeNewsNet, ISOT, and LIAR), achieving an average accuracy of 79.85%. Specifically, it attains 80% accuracy on FakeNewsNet, 100% on ISOT, and 59.56% on LIAR. With just 3.1 to 4.3 million parameters, the model achieves an 85% reduction in size compared to full-sized BERT architectures. These results highlight the model’s effectiveness in balancing high accuracy with resource efficiency, making it suitable for real-world applications such as social media monitoring and automated fact-checking. Future work will explore multilingual extensions, cross-domain generalization, and integration with multimodal misinformation detection systems.
(This article belongs to the Special Issue Cyber Security and Digital Forensics—2nd Edition)
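The label-smoothing technique listed among the training aids can be sketched as follows; the smoothing factor and class count below are illustrative choices, not the paper's hyperparameters:

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """Turn integer class labels into smoothed target distributions:
    the true class gets 1 - eps + eps/K, every other class gets eps/K,
    so the model is never pushed toward full certainty."""
    targets = np.full((len(y), num_classes), eps / num_classes)
    targets[np.arange(len(y)), y] += 1.0 - eps
    return targets

t = smooth_labels(np.array([0, 1]), num_classes=2, eps=0.1)
```

Training against these soft targets (instead of hard one-hot vectors) is a standard regularizer for transformer classifiers like the one described.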

20 pages, 1496 KiB  
Article
Utilizing LLMs and ML Algorithms in Disaster-Related Social Media Content
by Vasileios Linardos, Maria Drakaki and Panagiotis Tzionas
GeoHazards 2025, 6(3), 33; https://doi.org/10.3390/geohazards6030033 - 2 Jul 2025
Viewed by 505
Abstract
In this research, we explore the use of Large Language Models (LLMs) and clustering techniques to automate the structuring and labeling of disaster-related social media content. With a gathered dataset comprising millions of tweets related to various disasters, our approach aims to transform unstructured and unlabeled data into a structured and labeled format that can be readily used for training machine learning algorithms and enhancing disaster response efforts. We leverage LLMs to preprocess and understand the semantic content of the tweets, applying several semantic properties to the data. Subsequently, we apply clustering techniques to identify emerging themes and patterns that may not be captured by predefined categories, with these patterns surfaced through topic extraction of the clusters. We proceed with manual labeling and evaluation of 10,000 examples to evaluate the LLMs’ ability to understand tweet features. Our methodology is applied to real-world data for disaster events, with results directly applicable to actual crisis situations.
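The cluster-then-extract-topics step could be sketched crudely with term frequencies; the tweets and cluster assignments below are toy data, whereas the paper works from LLM-derived semantic features rather than raw counts:

```python
from collections import Counter

def cluster_topics(clustered_tweets, top_n=2):
    """For each cluster of tweets, surface the most frequent terms
    as a rough topic label."""
    topics = {}
    for cluster_id, tweets in clustered_tweets.items():
        counts = Counter(word for t in tweets for word in t.lower().split())
        topics[cluster_id] = [w for w, _ in counts.most_common(top_n)]
    return topics

topics = cluster_topics({
    0: ["flood water rising", "flood rescue water"],
    1: ["earthquake damage", "earthquake aftershock damage"],
})
```

In practice one would strip stop words and weight terms (e.g., by TF-IDF) before ranking, but the shape of the step, from clusters to emergent theme labels, is the same.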

25 pages, 2892 KiB  
Article
Focal Correlation and Event-Based Focal Visual Content Text Attention for Past Event Search
by Pranita P. Deshmukh and S. Poonkuntran
Computers 2025, 14(7), 255; https://doi.org/10.3390/computers14070255 - 28 Jun 2025
Viewed by 302
Abstract
Every minute, vast amounts of video and image data are uploaded worldwide to the internet and social media platforms, creating a rich visual archive of human experiences—from weddings and family gatherings to significant historical events such as war crimes and humanitarian crises. When properly analyzed, this multimodal data holds immense potential for reconstructing important events and verifying information. However, challenges arise when images and videos lack complete annotations, making manual examination inefficient and time-consuming. To address this, we propose a novel event-based focal visual content text attention (EFVCTA) framework for automated past event retrieval using visual question answering (VQA) techniques. Our approach integrates a Long Short-Term Memory (LSTM) model with convolutional non-linearity and an adaptive attention mechanism to efficiently identify and retrieve relevant visual evidence alongside precise answers. The model is designed with robust weight initialization, regularization, and optimization strategies and is evaluated on the Common Objects in Context (COCO) dataset. The results demonstrate that EFVCTA achieves the highest performance across all metrics (88.7% accuracy, 86.5% F1-score, 84.9% mAP), outperforming state-of-the-art baselines. The EFVCTA framework demonstrates promising results for retrieving information about past events captured in images and videos and can be effectively applied to scenarios such as documenting training programs, workshops, conferences, and social gatherings in academic institutions.

23 pages, 1333 KiB  
Article
Disaster in the Headlines: Quantifying Narrative Variation in Global News Using Topic Modeling and Statistical Inference
by Fahim Sufi and Musleh Alsulami
Mathematics 2025, 13(13), 2049; https://doi.org/10.3390/math13132049 - 20 Jun 2025
Viewed by 313
Abstract
Understanding how disasters are framed in news media is critical to unpacking the socio-political dynamics of crisis communication. However, empirical research on narrative variation across disaster types and geographies remains limited. This study addresses that gap by examining whether media outlets adopt distinct narrative structures based on disaster type and country. We curated a large-scale dataset of 20,756 disaster-related news articles, spanning from September 2023 to May 2025, aggregated from 471 distinct global news portals using automated web scraping, RSS feeds, and public APIs. The unstructured news titles were transformed into structured representations using GPT-3.5 Turbo and subjected to unsupervised topic modeling using Latent Dirichlet Allocation (LDA). Five dominant latent narrative topics were extracted, each characterized by semantically coherent keyword clusters (e.g., “wildfire”, “earthquake”, “flood”, “hurricane”). To empirically evaluate our hypotheses, we conducted chi-square tests of independence. Results demonstrated a statistically significant association between disaster type and narrative frame (χ² = 25,280.78, p < 0.001), as well as between country and narrative frame (χ² = 23,564.62, p < 0.001). Visualizations confirmed consistent topic–disaster and topic–country pairings, such as “earthquake” narratives dominating in Japan and Myanmar and “hurricane” narratives in the USA. The findings reveal that disaster narratives vary by event type and geopolitical context, supported by a mathematically robust, scalable, data-driven method for analyzing media framing of global crises.
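The chi-square test of independence used above can be sketched on a toy narrative-frame-by-disaster-type contingency table (the counts are invented for illustration; the study's tables are far larger, hence its five-digit statistics):

```python
import numpy as np

def chi_square(table):
    """Pearson chi-square statistic for independence on a contingency table:
    sum over cells of (observed - expected)^2 / expected, where expected
    counts come from the row/column marginals."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()
    return ((table - expected) ** 2 / expected).sum()

# Toy 2x2 table: two narrative frames (rows) vs. two disaster types (columns).
stat = chi_square([[10, 20], [20, 10]])
```

The statistic is then compared against the χ² distribution with (rows − 1) × (columns − 1) degrees of freedom to obtain the p-value (e.g., via `scipy.stats.chi2_contingency` in a real analysis).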

24 pages, 2410 KiB  
Article
UA-HSD-2025: Multi-Lingual Hate Speech Detection from Tweets Using Pre-Trained Transformers
by Muhammad Ahmad, Muhammad Waqas, Ameer Hamza, Sardar Usman, Ildar Batyrshin and Grigori Sidorov
Computers 2025, 14(6), 239; https://doi.org/10.3390/computers14060239 - 18 Jun 2025
Cited by 1 | Viewed by 673
Abstract
The rise in social media has improved communication but also amplified the spread of hate speech, creating serious societal risks. Automated detection remains difficult due to subjectivity, linguistic diversity, and implicit language. While prior research focuses on high-resource languages, this study addresses the underexplored multilingual challenges of Arabic and Urdu hate speech through a comprehensive approach. To achieve this objective, this study makes four key contributions. First, we created a unique multi-lingual, manually annotated binary and multi-class dataset (UA-HSD-2025) sourced from X, which contains the five most important multi-class categories of hate speech. Second, we created detailed annotation guidelines to ensure a robust, high-quality dataset. Third, we explore two strategies to address the challenges of multilingual data: a joint multilingual approach and a translation-based approach. The translation-based approach involves converting all input text into a single target language before applying a classifier. In contrast, the joint multilingual approach employs a unified model trained to handle multiple languages simultaneously, enabling it to classify text across different languages without translation. Finally, we conducted 54 experiments spanning classical machine learning with TF-IDF features, deep learning with advanced pre-trained word embeddings such as FastText and GloVe, and pre-trained language models with advanced contextual embeddings. Based on the analysis of the results, our language-based model (XLM-R) outperformed traditional supervised learning approaches, achieving 0.99 accuracy in binary classification for Arabic, Urdu, and joint-multilingual datasets, and 0.95, 0.94, and 0.94 accuracy in multi-class classification for joint-multilingual, Arabic, and Urdu datasets, respectively.
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
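The TF-IDF feature step listed among the experiments might be sketched as follows; the example tweets are invented, and a real pipeline would feed these sparse vectors to a downstream classifier (e.g., via `sklearn.feature_extraction.text.TfidfVectorizer`):

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document term weights: raw term frequency times log inverse
    document frequency. Terms appearing in every document get weight 0."""
    df = Counter(word for d in docs for word in set(d.split()))
    n = len(docs)
    vectors = []
    for d in docs:
        tf = Counter(d.split())
        vectors.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vectors

vecs = tfidf(["hate speech online", "kind speech online"])
```

Note how the shared terms ("speech", "online") are zeroed out while the discriminative term in each toy document retains weight, which is what makes the representation useful for classification.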

10 pages, 980 KiB  
Brief Report
Large-Scale Expansion of Suspension Cells in an Automated Hollow-Fiber Perfusion Bioreactor
by Eric Bräuchle, Maria Knaub, Laura Weigand, Elisabeth Ehrend, Patricia Manns, Antje Kremer, Hugo Fabre and Halvard Bonig
Bioengineering 2025, 12(6), 644; https://doi.org/10.3390/bioengineering12060644 - 12 Jun 2025
Viewed by 693
Abstract
Bioreactors enable scalable cell cultivation by providing controlled environments for temperature, oxygen, and nutrient regulation, maintaining viability and enhancing expansion efficiency. Automated systems improve reproducibility and minimize contamination risks, making them ideal for high-density cultures. While fed-batch bioreactors dominate biologics production, continuous systems like perfusion cultures offer superior resource efficiency and productivity. The Quantum hollow-fiber perfusion bioreactor supports cell expansion via semi-permeable capillary membranes and a closed modular design, allowing continuous media exchange while retaining key molecules. We developed a multiple-harvest protocol for suspension cells in the Quantum system, yielding 2.5 × 10^10 MEL-745A cells within 29 days, with peak densities of 4 × 10^7 cells/mL—a 15-fold increase over static cultures. Viability averaged 91.3%, with biweekly harvests yielding 3.1 × 10^9 viable cells per harvest. Continuous media exchange required more basal media to maintain glucose and lactate levels but meaningfully less growth supplement than the 2D culture. Stable transgene expression suggested phenotypic stability. Automated processing reduced hands-on time by one-third, achieving target cell numbers 12 days earlier than 2D culture. Despite higher media use, total costs for the automated process were lower than for the manual one. Quantum enables high-density suspension cell expansion with cost advantages over conventional methods.
(This article belongs to the Section Cellular and Molecular Bioengineering)
