Search Results (698)

Search Parameters:
Keywords = offline evaluation

9 pages, 1359 KB  
Proceeding Paper
Evaluation of SLAM Methods for Small-Scale Autonomous Racing Vehicles
by Rudolf Krecht, Abdelrahman Mutaz A. Alabdallah and Barham Jeries B. Farraj
Eng. Proc. 2025, 113(1), 9; https://doi.org/10.3390/engproc2025113009 - 28 Oct 2025
Abstract
Simultaneous Localization and Mapping (SLAM) is a critical component of autonomous navigation, enabling mobile robots to construct maps while estimating their location. In this study, we compare the performance of SLAM Toolbox and Cartographer, two widely used 2D SLAM methods, by evaluating their ability to generate accurate maps for autonomous racing applications. The evaluation was conducted using real-world data collected from a RoboRacer vehicle equipped with a 2D laser scanner and capable of providing odometry, operating on a small test track. Both SLAM methods were tested offline. The resulting occupancy grid maps were analyzed using quantitative metrics and visualization tools to assess their quality and consistency. The evaluation was performed against ground truth data derived from an undistorted photograph of the racetrack.
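The abstract does not name the map-quality metrics used. As an illustration only, the sketch below computes an intersection-over-union score between an estimated occupancy grid and a ground-truth raster, assuming both grids are already aligned and resampled to the same resolution; the 0.65 occupancy threshold is an arbitrary assumption, not a value from the paper.

```python
import numpy as np

def occupancy_iou(estimated: np.ndarray, ground_truth: np.ndarray,
                  occupied_threshold: float = 0.65) -> float:
    """Intersection-over-union of occupied cells between two aligned
    occupancy grids whose cell values lie in [0, 1] (1 = certainly occupied)."""
    est_occ = estimated >= occupied_threshold
    gt_occ = ground_truth >= occupied_threshold
    intersection = np.logical_and(est_occ, gt_occ).sum()
    union = np.logical_or(est_occ, gt_occ).sum()
    return float(intersection) / float(union) if union > 0 else 1.0

# Toy check: two 4x4 grids that disagree on a single occupied cell.
slam_map = np.array([[1, 1, 0, 0],
                     [0, 0, 0, 0],
                     [0, 0, 1, 1],
                     [0, 0, 0, 1]], dtype=float)
truth_map = np.array([[1, 1, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 1, 1],
                      [0, 0, 0, 0]], dtype=float)
print(occupancy_iou(slam_map, truth_map))  # 4 shared occupied cells / 5 total -> 0.8
```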

19 pages, 1950 KB  
Article
Thermo-Mechanical Fault Diagnosis for Marine Steam Turbines: A Hybrid DLinear–Transformer Anomaly Detection Framework
by Ziyi Zou, Guobing Chen, Luotao Xie, Jintao Wang and Zichun Yang
J. Mar. Sci. Eng. 2025, 13(11), 2050; https://doi.org/10.3390/jmse13112050 - 27 Oct 2025
Viewed by 82
Abstract
Thermodynamic fault diagnosis of marine steam turbines remains challenging due to non-stationary multivariate sensor data under stochastic loads and transient conditions. While conventional threshold-based methods lack the sophistication for such dynamics, existing data-driven Transformers struggle with inherent non-stationarity. To address this, we propose a hybrid DLinear–Transformer framework that synergistically integrates localized trend decomposition with global feature extraction. The model employs a dual-branch architecture with adaptive positional encoding and a gated fusion mechanism to enhance robustness. Extensive evaluations demonstrate the framework’s superiority: on public benchmarks (SMD, SWaT), it achieves statistically significant F1-score improvements of 2.7% and 0.3% over the state-of-the-art TranAD model under a controlled, reproducible setup. Most importantly, validation on a real-world marine steam turbine dataset confirms a leading fault detection accuracy of 94.6% under variable conditions. By providing a reliable foundation for identifying precursor anomalies, this work establishes a robust offline benchmark that paves the way for practical predictive maintenance in marine engineering.
(This article belongs to the Section Ocean Engineering)
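The paper's layer configuration is not reproduced here. The sketch below only illustrates the DLinear-style series decomposition that such a hybrid's local-trend branch rests on: a moving-average trend and the residual component, each mapped to the forecast horizon by its own linear layer. The window length, horizon, kernel size, and untrained weights are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def moving_average(x: np.ndarray, kernel: int) -> np.ndarray:
    """Centered moving average with edge padding, as used in DLinear-style decomposition."""
    pad_front = (kernel - 1) // 2
    pad_back = kernel - 1 - pad_front
    padded = np.concatenate([np.repeat(x[:1], pad_front), x, np.repeat(x[-1:], pad_back)])
    return np.convolve(padded, np.ones(kernel) / kernel, mode="valid")

def dlinear_forecast(x: np.ndarray, w_trend: np.ndarray, w_seasonal: np.ndarray,
                     kernel: int = 25) -> np.ndarray:
    """Split the input window into trend + remainder, apply one linear map
    per component, and sum the two forecasts."""
    trend = moving_average(x, kernel)
    remainder = x - trend
    return w_trend @ trend + w_seasonal @ remainder

lookback, horizon = 96, 24        # assumed window sizes
x = np.sin(np.linspace(0, 8 * np.pi, lookback)) + 0.1 * rng.standard_normal(lookback)
w_t = rng.standard_normal((horizon, lookback)) * 0.01   # untrained weights, illustration only
w_s = rng.standard_normal((horizon, lookback)) * 0.01
print(dlinear_forecast(x, w_t, w_s).shape)              # (24,)
```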

20 pages, 4600 KB  
Article
Study on the Coupling and Coordination Degree of Virtual and Real Space Heat in Coastal Internet Celebrity Streets
by Yilu Gong, Sijia Han and Jun Yang
ISPRS Int. J. Geo-Inf. 2025, 14(10), 407; https://doi.org/10.3390/ijgi14100407 - 21 Oct 2025
Viewed by 227
Abstract
This study investigates the coupling and coordination mechanisms between virtual and physical spatial heat in coastal internet-famous streets under the influence of social media. Taking Dalian’s coastal internet-famous street as a case study, user interaction data (likes, favorites, shares, and comments) from the Xiaohongshu platform were integrated with multi-source spatio-temporal big data, including Baidu Heat Maps, to construct an “online–offline” heat coupling and coordination evaluation framework. The entropy-weight method was employed to quantify online heat, while nonlinear regression analysis and a coupling coordination degree model were applied to examine interaction mechanisms and spatio-temporal differentiation patterns. The results show that online heat demonstrates significant polarization with strong agglomeration in the Donggang area, while offline heat fluctuates periodically, rising during the day, stabilizing at night, and peaking on holidays at up to 3.5 times weekday levels with marginal diminishing effects. Forwarding behavior is confirmed as the core driver of online popularity, highlighting the central role of cross-circle communication. The coupling coordination model identifies states ranging from high-quality coordination during holidays to discoordination in daily under-conversion or overload scenarios. These findings verify the leading role of algorithmic recommendation in redistributing spatial power and demonstrate that the sustainability of coastal check-in destinations depends on balancing short-term traffic surges with long-term spatial quality, providing practical insights for governance and sustainable urban planning.
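The coupling coordination degree model is not spelled out in the abstract. A minimal sketch of the two-subsystem formulation commonly used in this literature follows; the paper may use a variant, and the equal subsystem weights and example scores below are assumptions.

```python
import math

def coupling_coordination(online_heat: float, offline_heat: float,
                          alpha: float = 0.5, beta: float = 0.5) -> float:
    """Coupling coordination degree D for two normalized subsystem scores in (0, 1].

    C captures how closely the two subsystems develop together,
    T is their weighted comprehensive level, and D = sqrt(C * T).
    """
    u1, u2 = online_heat, offline_heat
    coupling = 2.0 * math.sqrt(u1 * u2) / (u1 + u2)   # C
    comprehensive = alpha * u1 + beta * u2            # T
    return math.sqrt(coupling * comprehensive)        # D

# Illustrative scores: balanced high heat vs. online-dominated heat.
print(round(coupling_coordination(0.90, 0.85), 3))  # ~0.935: high-quality coordination
print(round(coupling_coordination(0.90, 0.15), 3))  # ~0.606: online heat under-converted offline
```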

20 pages, 7704 KB  
Article
Seamless User-Generated Content Processing for Smart Media: Delivering QoE-Aware Live Media with YOLO-Based Bib Number Recognition
by Alberto del Rio, Álvaro Llorente, Sofia Ortiz-Arce, Maria Belesioti, George Pappas, Alejandro Muñiz, Luis M. Contreras and Dimitris Christopoulos
Electronics 2025, 14(20), 4115; https://doi.org/10.3390/electronics14204115 - 21 Oct 2025
Viewed by 272
Abstract
The increasing availability of User-Generated Content during large-scale events is transforming spectators into active co-creators of live narratives while simultaneously introducing challenges in managing heterogeneous sources, ensuring content quality, and orchestrating distributed infrastructures. A trial was conducted to evaluate automated orchestration, media enrichment, and real-time quality assessment in a live sporting scenario. A key innovation of this work is the use of a cloud-native architecture based on Kubernetes, enabling dynamic and scalable integration of smartphone streams and remote production tools into a unified workflow. The system also included advanced cognitive services, such as a Video Quality Probe for estimating perceived visual quality and an AI Engine based on YOLO models for detection and recognition of runners and bib numbers. Together, these components enable a fully automated workflow for live production, combining real-time analysis and quality monitoring, capabilities that previously required manual or offline processing. The results demonstrated consistently high Mean Opinion Score (MOS) values above 3 for 72.92% of the time, confirming acceptable perceived quality under real network conditions, while the AI Engine achieved strong performance with a Precision of 93.6% and Recall of 80.4%.
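As a small illustration of how the reported figures relate to raw counts, the snippet below computes precision, recall, and the share of intervals with MOS above 3. The counts and MOS values are invented to mirror the reported ratios; they are not the trial's data.

```python
# Hypothetical counts chosen only to mirror the reported ratios.
true_positives, false_positives, false_negatives = 936, 64, 228

precision = true_positives / (true_positives + false_positives)   # 0.936
recall = true_positives / (true_positives + false_negatives)      # ~0.804

# Hypothetical per-interval MOS estimates from a quality probe.
mos_samples = [3.4, 2.8, 3.1, 3.7, 2.9, 3.2, 3.5, 2.6]
share_above_3 = sum(m > 3 for m in mos_samples) / len(mos_samples)

print(f"precision={precision:.3f}, recall={recall:.3f}, MOS>3 share={share_above_3:.2%}")
```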

31 pages, 3570 KB  
Article
Optimization of the Human–Robot Collaborative Disassembly Process Using a Genetic Algorithm: Application to the Reconditioning of Electric Vehicle Batteries
by Salma Nabli, Gilde Vanel Tchane Djogdom and Martin J.-D. Otis
Designs 2025, 9(5), 122; https://doi.org/10.3390/designs9050122 - 17 Oct 2025
Viewed by 1487
Abstract
To achieve a complete circular economy for used electric vehicle batteries, it is essential to implement a disassembly step. Given the significant diversity of battery geometries and designs, a high degree of flexibility is required for automated disassembly processes. The incorporation of human–robot interaction provides a valuable degree of flexibility in the process workflow. However, human behavior is characterized by unpredictable timing and variable task durations, which add considerable complexity to process planning. Therefore, it is crucial to develop a robust strategy for coordinating human and robotic tasks to manage the scheduling of production activities efficiently. This study proposes a global optimization approach to the scheduling of production activities, which employs a genetic algorithm with the objective of minimizing the total production time while simultaneously reducing the idle time of both the human operator and robot. The proposed approach optimizes the sequencing of disassembly tasks under both temporal and exclusion constraints, guaranteeing that tasks deemed hazardous are not executed in the presence of a human. This approach is based on a two-level adaptation framework developed in RoboDK (Robot Development Kit, v5.4.3.22231, 2022, RoboDK Inc., Montréal, QC, Canada). At the first level, offline optimization is performed using a genetic algorithm to determine the optimal task sequencing strategy. This stage anticipates human behavior by proposing disassembly sequences aligned with expected human availability. At the second level, an online reactive adjustment refines the plan in real time, adapting it to actual human interventions and compensating for deviations from initial forecasts. The effectiveness of this global optimization strategy is evaluated against a non-global approach, in which the problem is partitioned into independent subproblems solved separately and then integrated. The results demonstrate the efficacy of the proposed approach in comparison with a non-global approach, particularly in scenarios where humans arrive earlier than anticipated.
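The genetic algorithm's encoding and operators are described only at a high level. As a hedged sketch, a fitness function of the kind the abstract describes, makespan plus weighted idle time for a human–robot pair with a penalty for precedence violations, might look like the following; the task names, durations, assignments, and precedence relations are invented for illustration.

```python
# Hypothetical task data for illustration: durations (s), a fixed agent
# assignment, and precedence constraints (task -> set of prerequisites).
DURATION = {"unscrew_lid": 20, "lift_lid": 8, "cut_cables": 15,
            "remove_bms": 12, "extract_module": 30}
AGENT = {"unscrew_lid": "robot", "lift_lid": "human", "cut_cables": "robot",
         "remove_bms": "human", "extract_module": "robot"}
PRECEDES = {"lift_lid": {"unscrew_lid"}, "cut_cables": {"lift_lid"},
            "remove_bms": {"cut_cables"}, "extract_module": {"remove_bms"}}

def fitness(sequence, idle_weight=0.5, violation_penalty=1_000.0):
    """Lower is better: makespan plus weighted idle time of both agents,
    plus a large penalty for each precedence violation in the chromosome."""
    finish = {}                               # task -> completion time
    free = {"human": 0.0, "robot": 0.0}       # agent -> earliest availability
    busy = {"human": 0.0, "robot": 0.0}       # agent -> accumulated working time
    penalty = 0.0
    for task in sequence:
        prereqs = PRECEDES.get(task, set())
        penalty += violation_penalty * sum(p not in finish for p in prereqs)
        ready = max((finish[p] for p in prereqs if p in finish), default=0.0)
        agent = AGENT[task]
        start = max(free[agent], ready)
        finish[task] = free[agent] = start + DURATION[task]
        busy[agent] += DURATION[task]
    makespan = max(finish.values())
    idle = sum(makespan - b for b in busy.values())
    return makespan + idle_weight * idle + penalty

# A feasible chromosome; a GA would evolve permutations to minimize this value.
print(fitness(["unscrew_lid", "lift_lid", "cut_cables", "remove_bms", "extract_module"]))
```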

26 pages, 1351 KB  
Review
Trends and Limitations in Transformer-Based BCI Research
by Maximilian Achim Pfeffer, Johnny Kwok Wai Wong and Sai Ho Ling
Appl. Sci. 2025, 15(20), 11150; https://doi.org/10.3390/app152011150 - 17 Oct 2025
Viewed by 449
Abstract
Transformer-based models have accelerated EEG motor imagery (MI) decoding by using self-attention to capture long-range temporal structures while complementing spatial inductive biases. This systematic survey of Scopus-indexed works from 2020 to 2025 indicates that reported advances are concentrated in offline, protocol-heterogeneous settings; inconsistent preprocessing, non-standard data splits, and sparse efficiency reporting frequently cloud claims of generalization and real-time suitability. Under session- and subject-aware evaluation on the BCIC IV 2a/2b dataset, typical performance clusters are in the high-80% range for binary MI and the mid-70% range for multi-class tasks, with gains of roughly 5–10 percentage points achieved by strong hybrids (CNN/TCN–Transformer; hierarchical attention) rather than by extreme figures often driven by leakage-prone protocols. In parallel, transformer-driven denoising—particularly diffusion–transformer hybrids—yields strong signal-level metrics but remains weakly linked to task benefit; denoise → decode validation is rarely standardized despite being the most relevant proxy when artifact-free ground truth is unavailable. Three priorities emerge for translation: protocol discipline (fixed train/test partitions, transparent preprocessing, mandatory reporting of parameters, FLOPs, per-trial latency, and acquisition-to-feedback delay); task relevance (shared denoise → decode benchmarks for MI and related paradigms); and adaptivity at scale (self-supervised pretraining on heterogeneous EEG corpora and resource-aware co-optimization of preprocessing and hybrid transformer topologies). Evidence from subject-adjusting evolutionary pipelines that jointly tune preprocessing, attention depth, and CNN–Transformer fusion demonstrates reproducible inter-subject gains over established baselines under controlled protocols. Implementing these practices positions transformer-driven BCIs to move beyond inflated offline estimates toward reliable, real-time neurointerfaces with concrete clinical and assistive relevance.
(This article belongs to the Special Issue Brain-Computer Interfaces: Development, Applications, and Challenges)
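One of the review's recommendations, fixed subject-aware partitions that prevent leakage, is straightforward to make concrete. A minimal sketch using scikit-learn's GroupShuffleSplit follows; the array shapes, class count, and subject labels are placeholders loosely modeled on BCIC IV 2a, not data from any surveyed study.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(42)
n_trials, n_channels, n_samples = 576, 22, 1000        # assumed BCIC IV 2a-like shapes
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 4, size=n_trials)                   # four MI classes
subjects = np.repeat(np.arange(9), n_trials // 9)       # trial -> subject ID

# Hold out whole subjects so no subject appears in both partitions.
splitter = GroupShuffleSplit(n_splits=1, test_size=2 / 9, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=subjects))
assert set(subjects[train_idx]).isdisjoint(subjects[test_idx])
print(sorted(set(subjects[test_idx].tolist())))         # the held-out subjects
```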

25 pages, 1360 KB  
Article
Source Robust Non-Parametric Reconstruction of Epidemic-like Event-Based Network Diffusion Processes Under Online Data
by Jiajia Xie, Chen Lin, Xinyu Guo and Cassie S. Mitchell
Big Data Cogn. Comput. 2025, 9(10), 262; https://doi.org/10.3390/bdcc9100262 - 16 Oct 2025
Viewed by 286
Abstract
Temporal network diffusion models play a crucial role in healthcare, information technology, and machine learning, enabling the analysis of dynamic event-based processes such as disease spread, information propagation, and behavioral diffusion. This study addresses the challenge of reconstructing temporal network diffusion events in real time under conditions of missing and evolving data. A novel non-parametric reconstruction method based on simple weights differentiation is proposed to enhance source detection robustness with provable improved error bounds. The approach introduces adaptive cost adjustments, dynamically reducing high-risk source penalties and enabling bounded detours to mitigate errors introduced by missing edges. Theoretical analysis establishes enhanced upper bounds on false positives caused by detouring, while a stepwise evaluation of dynamic costs minimizes redundant solutions, resulting in robust Steiner tree reconstructions. Empirical validation on three real-world datasets demonstrates a 5% improvement in Matthews correlation coefficient (MCC), a twofold reduction in redundant sources, and a 50% decrease in source variance. These results confirm the effectiveness of the proposed method in accurately reconstructing temporal network diffusion while improving stability and reliability in both offline and online settings.

28 pages, 3456 KB  
Article
Learning to Partition: Dynamic Deep Neural Network Model Partitioning for Edge-Assisted Low-Latency Video Analytics
by Yan Lyu, Likai Liu, Xuezhi Wang, Zhiyu Fan, Jinchen Wang and Guanyu Gao
Mach. Learn. Knowl. Extr. 2025, 7(4), 117; https://doi.org/10.3390/make7040117 - 13 Oct 2025
Viewed by 666
Abstract
In edge-assisted low-latency video analytics, a critical challenge is balancing on-device inference latency against the high bandwidth costs and network delays of offloading. Ineffectively managing this trade-off degrades performance and hinders critical applications like autonomous systems. Existing solutions often rely on static partitioning or greedy algorithms that optimize for a single frame. These myopic approaches adapt poorly to dynamic network and workload conditions, leading to high long-term costs and significant frame drops. This paper introduces a novel partitioning technique driven by a Deep Reinforcement Learning (DRL) agent on a local device that learns to dynamically partition a video analytics Deep Neural Network (DNN). The agent learns a farsighted policy to dynamically select the optimal DNN split point for each frame by observing the holistic system state. By optimizing for a cumulative long-term reward, our method significantly outperforms competitor methods, demonstrably reducing overall system cost and latency while nearly eliminating frame drops in our real-world testbed evaluation. The primary limitation is the initial offline training phase required by the DRL agent. Future work will focus on extending this dynamic partitioning framework to multi-device and multi-edge environments.
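The paper's cost model and reward are not reproduced here. Purely as a hedged illustration of what "selecting a DNN split point per frame" trades off, the sketch below scores every candidate split of a profiled network under an assumed device/edge throughput ratio and network state; all profile numbers are hypothetical. A DRL agent would learn to optimize the long-run sum of such costs rather than the greedy per-frame minimum printed here.

```python
# Hypothetical per-layer profile of a small detection backbone:
# compute cost (GFLOPs) and activation size after each layer (MB).
layer_flops = [0.8, 1.6, 2.4, 2.4, 1.2, 0.6]
layer_out_mb = [6.0, 3.0, 1.5, 0.8, 0.4, 0.1]
frame_mb = 1.2                                              # compressed input frame size

def frame_latency(split: int, device_gflops: float, edge_gflops: float,
                  bandwidth_mbps: float, rtt_s: float) -> float:
    """End-to-end latency if layers [0, split) run on-device and the rest on the edge.
    split == 0 offloads the raw frame; split == len(layer_flops) runs fully local."""
    local = sum(layer_flops[:split]) / device_gflops        # seconds, throughput in GFLOP/s
    remote = sum(layer_flops[split:]) / edge_gflops
    if split == len(layer_flops):
        return local                                        # nothing is offloaded
    payload = frame_mb if split == 0 else layer_out_mb[split - 1]
    network = rtt_s + payload * 8 / bandwidth_mbps          # MB -> Mb over the uplink
    return local + network + remote

costs = {s: frame_latency(s, device_gflops=15, edge_gflops=120,
                          bandwidth_mbps=40, rtt_s=0.02)
         for s in range(len(layer_flops) + 1)}
best = min(costs, key=costs.get)                            # greedy per-frame choice
print(best, round(costs[best], 3))
```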

18 pages, 2542 KB  
Article
A Two-Stage MLP-LSTM Network-Based Task Planning Method for Human–Robot Collaborative Assembly Scenarios
by Zhenyu Pan and Weiming Wang
Appl. Sci. 2025, 15(20), 10922; https://doi.org/10.3390/app152010922 - 11 Oct 2025
Viewed by 226
Abstract
In many current assembly scenarios, efficient collaboration between humans and robots can improve collaborative efficiency and quality. However, the efficient arrangement of human–robot collaborative (HRC) tasks constitutes a significant challenge. In a collaborative workspace where humans and robots collaborate on assembling a shared product, the determination of task allocation between them is of crucial importance. To address this issue, offline feasible HRC paths are established based on assembly task constraint information. Subsequently, the HRC process is simulated within a virtual environment leveraging these feasible paths. Human assembly intentions are explicitly expressed through human assembly trajectories, and implicitly expressed through simulation results such as assembly time and human–robot resource allocation. Furthermore, a two-stage MLP-LSTM network is employed to train and optimize the assembly simulation database. In the first stage, a sequence generation model is trained using high-quality HRC processes. Then, the network learns human evaluation patterns to score the generated sequences. Ultimately, task allocation for HRC is performed based on the high-scoring generated sequences. The effectiveness of the proposed method is demonstrated through assembly scenarios of two products. Compared with traditional optimization methods such as DFS and Greedy, the human collaboration ratio improves by 10% and the collaborative quality evaluation by 3%.
(This article belongs to the Section Mechanical Engineering)

22 pages, 17900 KB  
Article
Custom Material Scanning System for PBR Texture Acquisition: Hardware Design and Digitisation Workflow
by Lunan Wu, Federico Morosi and Giandomenico Caruso
Appl. Sci. 2025, 15(20), 10911; https://doi.org/10.3390/app152010911 - 11 Oct 2025
Viewed by 328
Abstract
Real-time rendering is increasingly used in augmented and virtual reality (AR/VR), interactive design, and product visualisation, where materials must prioritise efficiency and consistency rather than the extreme accuracy required in offline rendering. In parallel, the growing demand for personalised and customised products has created a need for digital materials that can be generated in-house without relying on expensive commercial systems. To address these requirements, this paper presents a low-cost digitisation workflow based on photometric stereo. The system integrates a custom-built scanner with cross-polarised illumination, automated multi-light image acquisition, a dual-stage colour calibration process, and a node-based reconstruction pipeline that produces albedo and normal maps. A reproducible evaluation methodology is also introduced, combining perceptual colour-difference analysis using the CIEDE2000 (ΔE00) metric with angular-error assessment of normal maps on known-geometry samples. By openly providing the workflow, bill of materials, and implementation details, this work delivers a practical and replicable solution for reliable material capture in real-time rendering and product customisation scenarios.
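The angular-error assessment mentioned in the abstract reduces to a per-pixel angle between unit normals. A minimal sketch follows, assuming both normal maps are already decoded to floating-point XYZ vectors on the same pixel grid; the 5° tilt in the toy check is arbitrary.

```python
import numpy as np

def normal_angular_error_deg(estimated: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Per-pixel angle (degrees) between two HxWx3 normal maps."""
    e = estimated / np.linalg.norm(estimated, axis=-1, keepdims=True)
    r = reference / np.linalg.norm(reference, axis=-1, keepdims=True)
    cos_angle = np.clip(np.sum(e * r, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# Toy check: a flat reference plane vs. normals tilted 5 degrees about the x-axis.
h, w = 64, 64
reference = np.zeros((h, w, 3)); reference[..., 2] = 1.0
tilt = np.radians(5.0)
estimated = np.zeros((h, w, 3))
estimated[..., 1] = np.sin(tilt)
estimated[..., 2] = np.cos(tilt)
print(normal_angular_error_deg(estimated, reference).mean())   # ~5.0 degrees
```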

51 pages, 1512 KB  
Article
CoCoChain: A Concept-Aware Consensus Protocol for Secure Sensor Data Exchange in Vehicular Ad Hoc Networks
by Rubén Juárez, Ruben Nicolas-Sans and José Fernández Tamames
Sensors 2025, 25(19), 6226; https://doi.org/10.3390/s25196226 - 8 Oct 2025
Viewed by 432
Abstract
Vehicular Ad Hoc Networks (VANETs) support safety-critical and traffic-optimization applications through low-latency, reliable V2X communication. However, securing integrity and auditability with blockchain is challenging because conventional BFT-style consensus incurs high message overhead and latency. We introduce CoCoChain, a concept-aware consensus mechanism tailored to VANETs. Instead of exchanging full payloads, CoCoChain trains a sparse autoencoder (SAE) offline on raw message payloads and encodes each message into a low-dimensional concept vector; only the top-k activations are broadcast during consensus. These compact semantic digests are integrated into a practical BFT workflow with per-phase semantic checks using a cosine-similarity threshold θ=0.85 (calibrated on validation data to balance detection and false positives). We evaluate CoCoChain in OMNeT++/SUMO across urban, highway, and multi-hop broadcast under congestion scenarios, measuring latency, throughput, packet delivery ratio, and Age of Information (AoI), and including adversaries that inject semantically corrupted concepts as well as cross-layer stress (RF jamming and timing jitter). Results show CoCoChain reduces consensus message overhead by up to 25% and confirmation latency by 20% while maintaining integrity with up to 20% Byzantine participants and improving information freshness (AoI) under high channel load. This work focuses on OBU/RSU semantic-aware consensus (not 6G joint sensing or multi-base-station fusion). The code, configs, and an anonymized synthetic replica of the dataset will be released upon acceptance.
(This article belongs to the Special Issue Joint Communication and Sensing in Vehicular Networks)
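The SAE itself is trained offline and is not reproduced here. A hedged sketch of the two pieces the consensus path relies on follows: keeping only the top-k activations of a concept vector, and accepting a received digest when its cosine similarity to the locally recomputed one clears the θ = 0.85 threshold. The vector dimension, k, and noise level are assumptions.

```python
import numpy as np

def top_k_sparse(concept: np.ndarray, k: int = 8) -> np.ndarray:
    """Zero out all but the k largest-magnitude activations of a concept vector."""
    sparse = np.zeros_like(concept)
    keep = np.argsort(np.abs(concept))[-k:]
    sparse[keep] = concept[keep]
    return sparse

def semantically_consistent(received: np.ndarray, local: np.ndarray,
                            threshold: float = 0.85) -> bool:
    """Cosine-similarity check used to flag semantically corrupted digests."""
    denom = np.linalg.norm(received) * np.linalg.norm(local)
    if denom == 0.0:
        return False
    return float(received @ local / denom) >= threshold

rng = np.random.default_rng(1)
concept = rng.standard_normal(64)                   # assumed 64-dim SAE code
digest = top_k_sparse(concept)
tampered = digest + rng.standard_normal(64) * 1.5   # simulated semantic corruption
print(semantically_consistent(digest, top_k_sparse(concept)))    # True
print(semantically_consistent(tampered, top_k_sparse(concept)))  # likely False
```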

22 pages, 5020 KB  
Article
Machine Learning on Low-Cost Edge Devices for Real-Time Water Quality Prediction in Tilapia Aquaculture
by Pinit Nuangpirom, Siwasit Pitjamit, Veerachai Jaikampan, Chanotnon Peerakam, Wasawat Nakkiew and Parida Jewpanya
Sensors 2025, 25(19), 6159; https://doi.org/10.3390/s25196159 - 4 Oct 2025
Viewed by 813
Abstract
This study presents the deployment of Machine Learning (ML) models on low-cost edge devices (ESP32) for real-time water quality prediction in tilapia aquaculture. A compact monitoring and control system was developed with low-cost sensors to capture key environmental parameters under field conditions in Northern Thailand. Three ML models—Multiple Linear Regression (MLR), Decision Tree Regression (DTR), and Random Forest Regression (RFR)—were evaluated. RFR achieved the highest accuracy (R² > 0.80), while MLR, with moderate performance (R² ≈ 0.65–0.72), was identified as the most practical choice for ESP32 deployment due to its computational efficiency and offline operability. The system integrates sensing, prediction, and actuation, enabling autonomous regulation of dissolved oxygen and pH without constant cloud connectivity. Field validation demonstrated the system’s ability to maintain DO within biologically safe ranges and stabilize pH within an hour, supporting fish health and reducing production risks. These findings underline the potential of Edge AIoT as a scalable solution for small-scale aquaculture in resource-limited contexts. Future work will expand seasonal data coverage, explore federated learning approaches, and include economic assessments to ensure long-term robustness and sustainability.
(This article belongs to the Section Smart Agriculture)
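A minimal sketch of the three-model comparison the abstract reports, using the scikit-learn equivalents of MLR, DTR, and RFR on synthetic sensor-like data, is shown below. The features, target, and resulting R² values are placeholders; on this synthetic (nearly linear) target the ranking will not match the paper's field results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
n = 2000
temp = rng.uniform(24, 33, n)          # water temperature (°C), synthetic
ph = rng.uniform(6.5, 8.5, n)
ambient = rng.uniform(20, 38, n)       # air temperature (°C), synthetic
dissolved_oxygen = (12 - 0.18 * temp - 0.6 * np.abs(ph - 7.5)
                    - 0.05 * ambient + rng.normal(0, 0.4, n))   # synthetic target (mg/L)

X = np.column_stack([temp, ph, ambient])
X_tr, X_te, y_tr, y_te = train_test_split(X, dissolved_oxygen, test_size=0.25, random_state=0)

models = {
    "MLR": LinearRegression(),
    "DTR": DecisionTreeRegressor(max_depth=6, random_state=0),
    "RFR": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    score = model.fit(X_tr, y_tr).score(X_te, y_te)   # R^2 on held-out data
    print(f"{name}: R^2 = {score:.3f}")
```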

18 pages, 14342 KB  
Article
A Multi-LiDAR Self-Calibration System Based on Natural Environments and Motion Constraints
by Yuxuan Tang, Jie Hu, Zhiyong Yang, Wencai Xu, Shuaidi He and Bolun Hu
Mathematics 2025, 13(19), 3181; https://doi.org/10.3390/math13193181 - 4 Oct 2025
Viewed by 444
Abstract
Autonomous commercial vehicles often mount multiple LiDARs to enlarge their field of view, but conventional calibration is labor-intensive and prone to drift during long-term operation. We present an online self-calibration method that combines a ground plane motion constraint with a virtual RGB–D projection, mapping 3D point clouds to 2D feature/depth images to reduce feature extraction cost while preserving 3D structure. Motion consistency across consecutive frames enables a reduced-dimension hand–eye formulation. Within this formulation, the estimation integrates geometric constraints on SE(3) using Lagrange multiplier aggregation and quasi-Newton refinement. This approach highlights key aspects of identifiability, conditioning, and convergence. An online monitor evaluates plane alignment and LiDAR–INS odometry consistency to detect degradation and trigger recalibration. Tests on a commercial vehicle with six LiDARs and on nuScenes demonstrate accuracy comparable to offline, target-based methods while supporting practical online use. On the vehicle, maximum errors are 6.058 cm (translation) and 4.768° (rotation); on nuScenes, 2.916 cm and 5.386°. The approach streamlines calibration, enables online monitoring, and remains robust in real-world settings.
(This article belongs to the Section A: Algebra and Logic)

26 pages, 1137 KB  
Article
“One Face, Many Roles”: The Role of Cognitive Load and Authenticity in Driving Short-Form Video Ads
by Yadi Feng, Bin Li, Yixuan Niu and Baolong Ma
J. Theor. Appl. Electron. Commer. Res. 2025, 20(4), 272; https://doi.org/10.3390/jtaer20040272 - 3 Oct 2025
Viewed by 775
Abstract
Short-form video platforms have shifted advertising from standalone, time-bounded spots to feed-embedded, swipeable stimuli, creating a high-velocity processing context that can penalize casting complexity. We ask whether a “one face, many roles” casting strategy (a single actor playing multiple characters) outperforms multi-actor executions, and why. A two-phase pretest (N = 3500) calibrated a realistic ceiling for “multi-actor” casts, then four experiments (total N = 4513) tested mechanisms, boundary conditions, and alternatives. Study 1 (online and offline replications) shows that single-actor ads lower cognitive load and boost account evaluations and purchase intention. Study 2, a field experiment, demonstrates that Need for Closure amplifies these gains via reduced cognitive load. Study 3 documents brand-type congruence: one actor performs better for entertaining/exciting brands, whereas multi-actor suits professional/competence-oriented brands. Study 4 rules out cost-frugality and sympathy using a budget cue and a sequential alternative path (perceived cost constraint → sympathy). Across studies, a chain mediation holds: single-actor casting reduces cognitive load, which elevates brand authenticity and increases purchase intention; a simple mediation links cognitive load to account evaluations. Effects are robust across settings and participant gender. We theorize short-form advertising as a context-embedded persuasion episode that connects information-processing efficiency to authenticity inferences, and we derive practical guidance for talent selection and script design in short-form campaigns.

17 pages, 3487 KB  
Article
Vehicle Connectivity and Dynamic Traffic Response to Unplanned Urban Events
by Javad Sadeghi, Cristiana Botta, Brunella Caroleo and Maurizio Arnone
Urban Sci. 2025, 9(10), 409; https://doi.org/10.3390/urbansci9100409 - 2 Oct 2025
Viewed by 381
Abstract
Integrating advanced technologies, such as Connected Autonomous Vehicles (CAVs) and Connected Vehicles (CVs), represents new strategies and solutions in urban mobility, particularly during unexpected urban events. Vehicle connectivity facilitates real-time communication between vehicles and infrastructure, enhancing traffic management by enabling dynamic rerouting to minimize delays and prevent bottlenecks. This study employs the SUMO (Simulation of Urban Mobility) microsimulation to analyze the impact of dynamic rerouting strategies during urban disruptions within the IN2CCAM project’s Turin Living Lab. The Living Lab integrates simulation with real-world testing, including autonomous shuttle operations, to evaluate new mobility solutions. In the initial phase, offline simulations examine street, lane, and intersection closures along shuttle routes to assess how penetration levels of CVs and CAVs influence mobility. The results indicate that higher connectivity penetration improves traffic flow, with the greatest benefits observed at increased levels of autonomous vehicles. These findings highlight the potential of dynamic routing strategies, supported by vehicle connectivity and autonomous driving technologies, to enhance urban mobility and effectively respond to real-time traffic conditions. Additionally, this work demonstrates the capabilities and flexibility of SUMO for simulating complex urban traffic scenarios.
(This article belongs to the Special Issue Advances in Urban Planning and the Digitalization of City Management)
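The Living Lab's SUMO configuration is not included here. The sketch below is a hedged TraCI illustration of the dynamic-rerouting idea the abstract describes: closing a lane mid-simulation and letting only "connected" vehicles reroute on current travel times. The config file name, lane ID, penetration rate, and timing are placeholders.

```python
import random
import traci   # SUMO's TraCI Python client; requires a local SUMO installation

CONNECTED_PENETRATION = 0.4        # assumed share of CV/CAV vehicles
CLOSED_LANE = "edge42_0"           # placeholder: single lane of an edge on the shuttle route
CLOSURE_STEP = 300                 # simulated second at which the disruption starts

traci.start(["sumo", "-c", "turin_living_lab.sumocfg"])   # placeholder config name
connected = set()
step = 0
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    # Tag newly departed vehicles as connected with the assumed penetration rate.
    for veh_id in traci.simulation.getDepartedIDList():
        if random.random() < CONNECTED_PENETRATION:
            connected.add(veh_id)
    if step == CLOSURE_STEP:
        # Restrict the lane to the "authority" class, effectively closing it to traffic.
        traci.lane.setAllowed(CLOSED_LANE, ["authority"])
    if step >= CLOSURE_STEP and step % 60 == 0:
        # Only connected vehicles receive updated travel times and reroute.
        for veh_id in connected & set(traci.vehicle.getIDList()):
            traci.vehicle.rerouteTraveltime(veh_id)
    step += 1
traci.close()
```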
