Article

Electrodermal Response Patterns and Emotional Engagement Under Continuous Algorithmic Video Stimulation: A Multimodal Biometric Analysis

by Carolina Del-Valle-Soto 1,*, Violeta Corona 2, Jesus Gomez Romero-Borquez 1, David Contreras-Tiscareno 1, Diego Sebastian Montoya-Rodriguez 1, Jesus Abel Gutierrez-Calvillo 1, Bernardo Sandoval 1 and José Varela-Aldás 3

1 Facultad de Ingeniería, Universidad Panamericana, Álvaro del Portillo 49, Zapopan 45010, Jalisco, Mexico
2 Facultad de Ciencias Económicas y Empresariales, Universidad Panamericana, Álvaro del Portillo 49, Zapopan 45010, Jalisco, Mexico
3 Centro de Investigación MIST, Facultad de Ingenierías, Universidad Tecnológica Indoamérica, Ambato 180103, Ecuador
* Author to whom correspondence should be addressed.
Technologies 2026, 14(1), 70; https://doi.org/10.3390/technologies14010070
Submission received: 15 December 2025 / Revised: 14 January 2026 / Accepted: 15 January 2026 / Published: 18 January 2026

Abstract

Excessive use of short-form video platforms such as TikTok has raised growing concerns about digital addiction and its impact on young users’ emotional well-being. This study examines the relationship between continuous TikTok exposure and emotional engagement in young adults aged 20–23 through a multimodal experimental design. The purpose of this research is to determine whether emotional engagement increases, remains stable, or declines during prolonged exposure and to assess the degree of correspondence between facially inferred engagement and physiological arousal. To achieve this, multimodal biometric data were collected using the iMotions platform, integrating galvanic skin response (GSR) sensors and facial expression analysis via Affectiva’s AFFDEX SDK 5.1. Engagement levels were binarized using a logistic transformation, and a binomial test was conducted. GSR analysis, merged with a 50 ms tolerance, revealed no significant differences in skin conductance between engaged and non-engaged states. Findings indicate that although TikTok elicits strong initial emotional engagement, engagement levels significantly decline over time, suggesting habituation and emotional fatigue. The results refine our understanding of how algorithm-driven, short-form content affects users’ affective responses and highlight the limitations of facial metrics as sole indicators of physiological arousal. Implications for theory include advancing multimodal models of emotional engagement that account for divergences between expressivity and autonomic activation. Implications for practice emphasize the need for ethical platform design and improved digital well-being interventions. 
The originality and value of this study lie in its controlled experimental approach that synchronizes facial and physiological signals, offering objective evidence of the temporal decay of emotional engagement during continuous TikTok use and underscoring the complexity of measuring affect in highly stimulating digital environments.

1. Introduction

Recent scholarship has increasingly examined how algorithmic platforms are deliberately engineered to capture and sustain user attention through affective and cognitive manipulation. Rather than serving as neutral conduits of communication, social media ecosystems strategically exploit psychological heuristics, attentional biases, and reward-based feedback mechanisms to optimize engagement and retention [1]. This deliberate coupling of persuasive design and algorithmic personalization has raised critical concerns regarding the erosion of self-regulation and the emergence of digital environments that condition users through intermittent reinforcement schedules [2]. In particular, the rapid proliferation of short-form video platforms such as TikTok exemplifies the convergence of emotional engineering, behavioral economics, and data-driven personalization, producing immersive systems where habitual use increasingly mirrors behavioral addiction [3].
Social media platforms are purposefully designed to capture and retain users’ attention, often encouraging repetitive consumption patterns that resemble behavioral addiction. Among these platforms [4], TikTok—a short-form video application—has rapidly emerged as one of the most influential and widely used, particularly among young adults. Its endless-scroll design, algorithmic personalization, and immediate feedback mechanisms have prompted growing concern regarding their potential effects on users’ emotional regulation and cognitive engagement [5].
Digital addiction is characterized by compulsive use and impaired self-regulation. Empirical evidence indicates that increased time spent on social media is associated with negative outcomes for mental well-being [6,7]. Eyre found that heavier TikTok use among Spanish adolescents correlated with reduced ability to establish self-imposed limits on app usage, reflecting the platform’s inherently persuasive architecture. Similar studies link problematic social media consumption to symptoms such as anxiety, social isolation, and diminished emotional awareness [8]. However, the majority of this research relies on self-reported measures, which are susceptible to bias and often fail to capture unconscious or nonverbal emotional responses.
To address these limitations, recent advances in affective computing and psychophysiological sensing offer objective methods to quantify emotional engagement, that is, the intensity of emotional involvement elicited by external stimuli. Multimodal biometric systems allow researchers to assess engagement using both facial expression analysis and physiological indicators. For example, Affectiva’s AFFDEX SDK continuously estimates an engagement index derived from facial action unit activations, representing “emotional responsiveness triggered by stimuli” [9,10]. In parallel, GSR measures fluctuations in skin conductance, which serve as a reliable proxy for sympathetic nervous system arousal under emotional stimulation [11]. When combined, these modalities enable a more comprehensive and unobtrusive assessment of users’ subconscious reactions during media consumption [12].
Within this framework, the present study aims to examine how sustained TikTok exposure influences emotional engagement among young adults. Specifically, we investigate whether emotional engagement intensifies or declines during continuous platform use and whether physiological arousal aligns with facially inferred engagement levels [13]. By employing synchronized facial and physiological measures through the iMotions platform, this work provides an experimental perspective that complements previous self-report–based findings on digital media behavior.
From an electrochemical perspective, electrodermal activity represents a biosensing modality grounded in ionic transport and conductance changes at the skin–electrode interface. Variations in sweat gland activity modulate local electrolyte concentration and skin impedance, producing measurable conductance fluctuations that can be interpreted as electrochemical responses to autonomic activation. Accordingly, GSR constitutes a non-invasive electrochemical biosignal, widely used in wearable sensing systems to characterize dynamic ionic and conductivity-based processes at the epidermal surface.
We propose the following hypotheses: (1) the proportion of “engaged” moments will exceed the empirical baseline p0 = 0.250875; (2) emotional engagement will increase over sustained exposure time, reflecting a cumulative reinforcement effect; and (3) GSR arousal will be significantly higher during engaged than during non-engaged intervals.
The remainder of this paper is organized as follows. Section 2 details the methodological framework, including participant recruitment, apparatus configuration, and multimodal data processing procedures. Section 3 presents the empirical findings and integrated discussion, emphasizing the temporal evolution of engagement and the interplay between facial and physiological metrics. Section 4 concludes by synthesizing the theoretical and applied implications of the results for affective computing, digital well-being, and ethical media design. Together, these sections establish a coherent analytical progression from conceptual rationale to empirical validation and critical interpretation.

2. Related Work

The existing body of research provides a convergent yet critical foundation for understanding how TikTok’s algorithmic design, persuasive interface features, and affective dynamics shape users’ engagement patterns. Scholars consistently highlight the platform’s attention-capturing architecture and its alignment with broader behavioral addiction frameworks [7,14]. At the same time, empirical work on verified TikTok content demonstrates that observable engagement—such as likes, shares, and comments—is strongly influenced by structural and presentational attributes of the videos themselves [15]. Complementing these findings, biometric research has shown that multimodal indicators such as facial expression metrics and electrodermal activity capture emotional responsiveness beyond what self-report measures or algorithmic engagement data can reveal [16,17]. Together, this literature suggests that while TikTok reliably elicits high levels of affective engagement, the underlying emotional, cognitive, and physiological mechanisms may diverge in important ways, warranting a deeper multimodal investigation.
From this critical synthesis emerge methodological gaps that directly inform the development of the present study’s hypotheses. Although prior research acknowledges TikTok’s capacity to elicit strong emotional reactions, few studies have objectively tested whether such reactions surpass empirically established thresholds, such as the engagement baseline identified in verified content (p0 = 0.250875) [15]. Additionally, while facial-coding indicators have been linked to patterns of virality and viewer responsiveness [16], physiological activation requires more rigorous analytic techniques—such as convex-optimization-based EDA decomposition—to distinguish between tonic and phasic components of arousal [17].
The studies summarized in Table 1 collectively illustrate the growing convergence between social media research and affective computing. Prior work has predominantly focused on self-reported assessments of engagement or algorithmic analyses of platform metrics, such as likes and shares [15]. Although these approaches have advanced our understanding of content virality and behavioral addiction on TikTok and similar platforms [7,14], they often overlook the unconscious, physiological dimensions of user experience. Recent biometric and multimodal frameworks, including facial coding and galvanic skin response modeling [16,18,19], provide a methodological foundation for more objective measurement of emotional engagement. Building upon these developments, the present study contributes a controlled experimental framework integrating facial-affective analysis and GSR to quantify how sustained TikTok exposure shapes users’ emotional engagement over time.

3. Materials and Methods

A total of N = 27 young adults (15 female, 12 male) participated in this study, aged between 20 and 23 years (M = 21.4, SD = 1.1). All participants were regular TikTok users who reported using the platform at least several times per week. To minimize carry-over effects such as saturation or withdrawal, participants abstained from TikTok use on the day of the experiment. Informed consent was obtained from all participants prior to data collection, and the research protocol was approved by the institutional review board of Universidad Panamericana. Participants were informed of the procedures, their right to withdraw at any time, and the handling of their anonymized data.
Beyond methodological considerations, the ethical implications of predictive affective modeling warrant attention. The ability to infer users’ emotional states raises critical issues of privacy, informed consent, and responsible application in clinical, educational, and entertainment contexts. Future developments must integrate safeguards and transparent protocols to ensure ethical deployment.
Our experimental approach was intentionally designed to prioritize ecological validity by allowing participants to interact with TikTok in a naturalistic manner, similar to everyday use, rather than constraining exposure to predefined content categories. As a result, the video stream predominantly reflected standard algorithmic advertising and promotional content, which typically combines short-form entertainment, influencer marketing, product showcases, and highly optimized audiovisual stimuli aimed at maximizing immediate attention and emotional responsiveness. Such content is characterized by rapid pacing, high visual salience, emotionally charged cues, and repetitive structural patterns—features that are largely consistent across advertising-driven TikTok feeds regardless of nominal content category. This relative homogeneity in persuasive design is central to the research question addressed in this paper. Our objective was not to compare the emotional effects of different semantic content types (e.g., education versus news), but rather to examine how continuous exposure to algorithmically curated short-form content, as it is commonly encountered on the platform, shapes emotional engagement over time. The observed decline in engagement and affective complexity suggests that even highly optimized and emotionally stimulating advertising-oriented content may induce habituation and emotional fatigue when consumed continuously.
Figure 1 presents a functional block diagram of the experimental system and analysis pipeline, providing a high-level overview of the multimodal data acquisition and analysis workflow employed in this study. Participants interact with the TikTok desktop interface while facial video and electrodermal activity (GSR) are acquired concurrently through a webcam and a GSR sensor. These heterogeneous data streams are synchronized at the acquisition level using the iMotions platform to ensure precise temporal alignment. The synchronized signals are then processed through a structured pipeline that extracts facial engagement metrics, preprocesses the electrodermal signal, and aligns both modalities at the millisecond scale. Subsequent analysis modules quantify engagement dynamics through complementary statistical, temporal, and complexity-based methods, including binarization, temporal correlation, growth-curve modeling, and nonlinear complexity analysis. This system-level representation clarifies how multimodal biosignals are transformed into interpretable engagement metrics within a unified technological framework.

3.1. Apparatus and Materials

The experimental setup combined facial expression analysis and GSR measurements using the iMotions research platform. During the data collection phase, participants were seated in front of a standard monitor and interacted with TikTok using the desktop application. All experimental sessions were conducted on a Dell Precision 3660 (Dell Technologies, Round Rock, TX, USA) workstation running Windows 11 Pro, equipped with a 13th-generation Intel® Core™ i9-13900 processor, 32 GB of RAM, and sufficient computational capacity to support synchronized facial and physiological data acquisition in real time. Facial expressions were recorded continuously using a webcam positioned in front of the participant, while GSR data were collected simultaneously. No head-mounted display was worn during facial data acquisition. The following instruments and software were employed:
  • TikTok Desktop Application: Each participant interacted with the TikTok desktop version on an isolated computer, providing a naturalistic scrolling experience comparable to mobile use but under controlled conditions.
  • Facial Expression Analysis: A webcam recorded participants’ facial expressions at 30 fps. Affectiva’s AFFDEX SDK 5.1 computed an Engagement Index on a continuous scale ranging from 0 to 100, reflecting the intensity of observable emotional responsiveness derived from weighted facial action units. For subsequent analyses, these raw engagement values were normalized to a 0–1 range using Min–Max scaling, as described in Section 3.4 [3]. Although other affective metrics such as joy, surprise, and valence were captured, only engagement was analyzed in this study.
  • GSR: A Shimmer3 GSR+ device (Shimmer Sensing, Dublin, Ireland) equipped with Ag/AgCl electrodes attached to the index and middle fingers of the non-dominant hand recorded electrodermal activity (EDA) at 128 Hz. GSR readings (in μS) indexed sympathetic arousal, with a neutral baseline collected prior to the session.
  • Stimulus Control: All system notifications were disabled, and each participant was instructed to freely browse TikTok videos for exactly 15 min while their facial and physiological signals were continuously recorded.
All data streams—AFFDEX engagement scores, GSR conductance, and timestamps—were synchronized and logged using iMotions’ multimodal integration framework. The overall experimental flow included participant consent, sensor calibration, a 15-min viewing session, and post-session data export and analysis.

3.2. Procedure

Each session was conducted individually in a quiet, well-lit room to minimize distractions and ensure optimal facial tracking. Participants were seated at a comfortable distance from the screen and instructed to use TikTok as they typically would. The procedure was structured as follows:
  • Setup: Participants were briefed about the experiment, sensors were calibrated, and baseline GSR readings were recorded.
  • Viewing Session: The experimenter signaled the start of the 15-min TikTok browsing session. (1) The AFFDEX SDK computed frame-by-frame engagement probabilities in real time. (2) GSR signals were recorded continuously at 128 Hz. (3) Timestamps were synchronized across both modalities.
  • Post-Session: After 15 min, the experimenter terminated the session. Participants then completed a short self-report questionnaire (User Engagement Scale) to provide subjective feedback, which was recorded but not included in the statistical analyses. The equipment was reset between participants to ensure signal integrity.
The post-session questionnaire was administered to capture participants’ reflective and subjective impressions of their engagement experience; however, these self-report measures were not included in the primary statistical analyses, which focused on objective biometric signals.

3.3. Measures

Table 2 summarizes the primary variables and units of measurement recorded in the experiment.
The GSR signal analyzed in this study reflects electrochemical conductance variations arising from sweat-mediated ionic transport at the skin interface. Changes in sympathetic activation alter sweat composition and ion mobility, directly affecting electrode–skin conductance. Thus, the recorded GSR signal captures electrochemical response dynamics rather than purely behavioral or phenomenological correlates.
From these measures, the following derived metrics were computed:
  • Mean engagement across the 15-min session.
  • Proportion of “engaged” frames, defined as those where engagement ≥ 50.
  • Frequency of GSR peaks per minute, defined as conductance increases exceeding 0.05 μS above the participant’s baseline.

3.4. Data Processing

Data preprocessing and synchronization were conducted using Python (v3.10) and the Pandas, NumPy, and SciPy libraries. For each participant, the corresponding CSV file was cleaned to retain only the variables Timestamp, Engagement, and GSR Conductance CAL. Missing values were removed to ensure signal integrity.
The raw AFFDEX engagement values (0–100) were normalized using a Min–Max scaler to constrain the range between 0 and 1. This normalized engagement signal was used for all temporal, correlational, and visualization analyses, ensuring comparability across participants and numerical stability. When reported in binarized form, thresholds were applied to the normalized scale. The scaled engagement values were then transformed into a binary engagement label using a logistic-sigmoid function with parameters a = 10, b = 0.05, and a decision threshold c = 0.50, as shown in Equation (1).
E_binary = 1, if 1 / (1 + e^(−a(E_scaled − b))) ≥ c; 0, otherwise. (1)
The logistic sigmoid function was used as a deterministic binarization mechanism rather than as a trainable classifier. Its parameters were selected through empirical verification to balance sensitivity and specificity in engagement detection while ensuring numerical stability and interpretability of the resulting labels.
The slope parameter (a = 10) was chosen to enforce a sharp transition around the decision boundary, minimizing ambiguous classifications near the threshold. The offset parameter (b = 0.05) aligns the inflection point of the sigmoid with the mid-range of normalized engagement values produced by the AFFDEX SDK, while the decision threshold (c = 0.50) corresponds to a conservative criterion for classifying frames as engaged.
To assess robustness, we verified that moderate variations in these parameters did not qualitatively alter the proportion of engaged frames or the direction of the reported statistical effects. This sensitivity verification confirmed that the observed engagement dynamics are not driven by arbitrary parameter choices but reflect stable patterns in the underlying facial engagement signal.
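Under the stated parameters, the binarization rule of Equation (1) can be sketched in a few lines of Python; the function name and vectorized form are illustrative, not the authors’ code:

```python
import numpy as np

def binarize_engagement(e_scaled, a=10.0, b=0.05, c=0.50):
    """Deterministic logistic-sigmoid binarization of normalized engagement.

    e_scaled : Min-Max-normalized engagement values in [0, 1].
    a, b, c  : slope, offset, and decision threshold from the paper.
    """
    e_scaled = np.asarray(e_scaled, dtype=float)
    prob = 1.0 / (1.0 + np.exp(-a * (e_scaled - b)))  # logistic sigmoid
    return (prob >= c).astype(int)                    # 1 = engaged, 0 = not
```

Note that with this parameterization the effective cut on the normalized signal sits at E_scaled = b, since the sigmoid crosses 0.5 exactly at its inflection point.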
Frames classified as E_binary = 1 were considered “engaged,” while others were “not engaged.” These binarized engagement data were then merged with corresponding GSR readings using a nearest-timestamp join with a ±50 ms tolerance. Rows without matching timestamps were excluded to preserve temporal precision.
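A nearest-timestamp join with a ±50 ms tolerance maps naturally onto pandas’ `merge_asof`; the two frames below are a small hypothetical example (not the study’s data), using the column names retained from the cleaned CSVs:

```python
import pandas as pd

# Hypothetical facial-frame labels and GSR samples on a shared clock.
face = pd.DataFrame({
    "Timestamp": pd.to_timedelta([0, 33, 66, 100], unit="ms"),
    "E_binary": [0, 1, 1, 0],
})
gsr = pd.DataFrame({
    "Timestamp": pd.to_timedelta([5, 40, 120], unit="ms"),
    "GSR Conductance CAL": [3.71, 3.80, 3.75],
})

merged = pd.merge_asof(
    face.sort_values("Timestamp"),
    gsr.sort_values("Timestamp"),
    on="Timestamp",
    direction="nearest",              # nearest-timestamp join
    tolerance=pd.Timedelta("50ms"),   # +/- 50 ms window from the paper
).dropna()                            # rows without a match are excluded
```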
For each participant, the following analyses were performed:
1. Computation of Pearson’s correlation coefficient (r) between elapsed time (in seconds) and normalized engagement, assessing whether engagement increased or decreased over time.
2. Separation of GSR data into “engaged” and “not engaged” subsets based on the binary engagement label.
3. Calculation of mean GSR values and detection of GSR peaks, defined as local maxima at least 0.05 μS above each participant’s mean conductance.
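The three per-participant computations above can be sketched as follows, using SciPy’s `pearsonr` and `find_peaks` as stand-ins for the authors’ implementation (a sketch under those assumptions, not the original code):

```python
import numpy as np
from scipy.stats import pearsonr
from scipy.signal import find_peaks

def participant_metrics(t_sec, engagement, gsr, e_binary, peak_height=0.05):
    """Per-participant metrics: time-engagement correlation, state-wise mean
    GSR, and count of peaks at least `peak_height` uS above the mean."""
    r, _ = pearsonr(t_sec, engagement)        # (1) temporal trend
    gsr = np.asarray(gsr, dtype=float)
    e_binary = np.asarray(e_binary)
    mean_eng = gsr[e_binary == 1].mean()      # (2) engaged subset
    mean_not = gsr[e_binary == 0].mean()      #     non-engaged subset
    peaks, _ = find_peaks(gsr, height=gsr.mean() + peak_height)  # (3) peaks
    return r, mean_eng, mean_not, len(peaks)
```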
The resulting participant-level metrics (correlations, mean GSR values, and peak counts) were aggregated for group-level statistical testing. The binomial test for engagement proportions was conducted with a baseline p0 = 0.250875 derived from García-Marín and Salvat-Martinrey (2022) [15]. The logistic parameters (a, b, c) = (10, 0.05, 0.50) were empirically selected to minimize false positives while maintaining sensitivity to facial activation changes.
This multimodal methodological framework ensured temporal synchronization between affective (facial) and physiological (GSR) signals, enabling the objective assessment of emotional engagement dynamics during sustained TikTok use.
Although engagement binarization was required for categorical analyses, the continuous normalized engagement signal was retained for correlation and temporal analyses. To evaluate the influence of threshold selection, we conducted a sensitivity verification by varying the sigmoid parameters within reasonable ranges (±20% for slope and offset, and decision thresholds between 0.45 and 0.55). These variations did not qualitatively alter the observed engagement proportions, temporal trends, or statistical significance of the main results, indicating robustness to moderate threshold changes.
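A sensitivity sweep of this kind can be sketched as below; the beta-distributed synthetic signal and the exact grid values are illustrative assumptions standing in for the real engagement recordings:

```python
import itertools
import numpy as np

def engaged_proportion(e_scaled, a, b, c):
    """Proportion of frames labeled engaged under sigmoid parameters (a, b, c)."""
    prob = 1.0 / (1.0 + np.exp(-a * (np.asarray(e_scaled, dtype=float) - b)))
    return float((prob >= c).mean())

# Synthetic stand-in for one participant's normalized engagement signal.
rng = np.random.default_rng(0)
e = rng.beta(2.0, 5.0, size=10_000)

# Sweep roughly +/-20% around a = 10 and b = 0.05, thresholds 0.45-0.55.
grid = itertools.product([8.0, 10.0, 12.0], [0.04, 0.05, 0.06], [0.45, 0.50, 0.55])
props = [engaged_proportion(e, a, b, c) for a, b, c in grid]
```

Robustness here means the spread of `props` across the grid stays small relative to the effect of interest.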

4. Results and Discussion

The experimental environment consisted of a high-performance computing workstation integrated with accurate physiological monitoring tools and configured to ensure reliable, low-latency multimodal data acquisition during continuous TikTok exposure.
All twenty-seven participants completed the 15-min TikTok viewing session without any adverse events. On average, each participant generated approximately 27,000 facial engagement frames and 115,000 GSR samples, recorded in tight temporal alignment. Visual inspection of the time-series data revealed that engagement levels frequently decreased over time, suggesting potential habituation effects (Figure 2). In this and subsequent figures, facial engagement is shown on the normalized 0–1 scale, derived from the original AFFDEX engagement values (0–100).

4.1. Engagement Binarization and Frequency

To determine whether the proportion of “engaged” moments exceeded the baseline derived from [21], engagement values were binarized using the logistic-sigmoid function with parameters a = 10, b = 0.05, and threshold c = 0.50, as shown in Equation (2).
1 / (1 + e^(−10(E_scaled − 0.05))) ≥ 0.50. (2)
To evaluate whether TikTok exposure elicited facial engagement above an empirically established baseline, a one-sided binomial test was conducted against the null hypothesis H0: p = 0.250875. Across all participants, a total of N = 707,029 frames were analyzed, of which k = 234,508 (33.2%) were classified as “engaged,” yielding a highly significant deviation from the baseline (p < 0.001). As shown in Figure 3, the observed proportion of engaged frames clearly exceeds the empirical baseline probability, confirming that TikTok stimuli elicited above-average facial engagement at the frame level during the experimental sessions.
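The reported test can be reproduced from the frame counts alone using SciPy; this is a sketch of the computation, not the authors’ analysis script:

```python
from scipy.stats import binomtest

# Frame counts reported above; baseline p0 from verified-content engagement [15].
k, n, p0 = 234_508, 707_029, 0.250875

# One-sided test of H0: p = p0 against H1: p > p0.
result = binomtest(k, n, p0, alternative="greater")
observed = k / n  # observed proportion of engaged frames, approximately 0.332
```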

4.1.1. Temporal Dynamics of Engagement

To test the hypothesis that engagement would increase over time, Pearson’s correlation coefficient (r) was computed between elapsed time (in seconds) and normalized engagement for each participant. Out of 27 participants, 23 exhibited negative correlations, indicating that engagement tended to decrease as viewing time progressed. After removing invalid or missing values, the mean correlation was r̄ = −0.134. A one-sample t-test on Fisher z-transformed correlations confirmed that this downward trend was statistically significant (t(26) = −4.366, p = 0.0002).
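The Fisher z-transformed one-sample t-test can be sketched compactly; the helper name is illustrative, and the test below feeds it hypothetical per-participant correlations rather than the study’s values:

```python
import numpy as np
from scipy.stats import ttest_1samp

def fisher_z_ttest(correlations):
    """One-sample t-test on Fisher z-transformed per-participant correlations,
    testing whether the mean correlation differs from zero."""
    z = np.arctanh(np.asarray(correlations, dtype=float))  # Fisher z-transform
    result = ttest_1samp(z, popmean=0.0)
    return result.statistic, result.pvalue
```

The z-transform stabilizes the sampling distribution of r, so averaging and t-testing are performed on z rather than on raw correlations.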
To further illustrate this temporal decline, Figure 4 provides a participant-level visualization of the temporal evolution of facial engagement across the viewing session by directly comparing average normalized engagement during the early (minutes 1–5) and late (minutes 11–15) intervals. Each point represents one participant, with the horizontal axis indicating early-session engagement and the vertical axis indicating late-session engagement. The dashed diagonal corresponds to the identity line (y = x), which denotes equal engagement levels across both intervals. Points located below the diagonal indicate participants whose average engagement decreased during the later portion of the session, whereas points above the diagonal would indicate increased engagement over time. As illustrated in the figure, the majority of participants fall below the identity line, revealing a systematic reduction in facial engagement during the late viewing interval. This pattern provides a clear visual complement to the correlation-based analysis, demonstrating that the observed negative association between engagement and elapsed time reflects a consistent within-subject temporal decline rather than being driven by a small subset of participants.
These findings contradict the initial hypothesis of sustained or increasing engagement, suggesting instead that continuous exposure to short-form videos may induce emotional habituation or cognitive fatigue. Notably, this analysis was performed using the continuous normalized engagement signal rather than binarized labels, ensuring that the observed temporal decay is not an artifact of threshold-based classification.
Figure 5 illustrates the growth-curve analysis of facial engagement across the viewing session using a linear mixed-effects modeling framework. The gray lines represent the minute-level engagement trajectories of all participants included in the study (N = 27), visualizing inter-individual variability in baseline engagement and temporal response patterns. Some trajectories appear partially truncated due to missing values associated with shorter or incomplete recordings; these were intentionally retained as missing data to avoid selective visualization or participant exclusion. The black line depicts the fixed-effect component of the mixed-effects model, capturing the average temporal evolution of engagement while accounting for individual differences through random intercepts. Despite substantial variability across participants, this population-level trajectory exhibits a consistent downward trend, indicating a systematic decline in facial engagement during sustained exposure to algorithmically curated short-form video content. Importantly, the mixed-effects growth-curve approach allows the temporal decay in engagement to be modeled independently of individual baseline differences, demonstrating that the observed decay reflects a general temporal pattern across the full sample rather than being driven by a small subset of participants.
While some participants begin with high engagement levels and others with lower initial responsiveness, the overall downward trajectory remains consistent, suggesting that the observed engagement decay is a robust temporal phenomenon rather than an artifact driven by a subset of participants. This finding aligns with theories of emotional habituation and attentional fatigue, whereby repeated exposure to high-arousal stimuli leads to a gradual attenuation of emotional responsiveness.
From a methodological perspective, the use of a mixed-effects model strengthens the interpretation of the results by simultaneously capturing within-subject temporal dynamics and between-subject variability. Rather than relying on simple averaged time series or correlation-based analyses, this approach provides a more comprehensive representation of engagement dynamics under continuous algorithmic stimulation. The figure therefore offers converging evidence that short-form video platforms can elicit strong initial emotional engagement but struggle to sustain it over time, reinforcing the study’s central claim regarding habituation and emotional fatigue in algorithm-driven media environments.
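A random-intercept growth-curve model of this kind can be sketched with statsmodels’ `mixedlm`; the synthetic data below (27 participants, 15 min, per-participant intercepts plus a shared negative slope) are an illustrative stand-in for the real engagement trajectories, not the study’s data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic minute-level engagement: random intercepts, shared downward slope.
rng = np.random.default_rng(42)
rows = []
for pid in range(27):
    intercept = 0.5 + rng.normal(0.0, 0.1)       # participant-level baseline
    for minute in range(1, 16):
        rows.append({"participant": pid, "minute": minute,
                     "engagement": intercept - 0.01 * minute
                                   + rng.normal(0.0, 0.02)})
df = pd.DataFrame(rows)

# Random-intercept model: fixed effect of time, grouping by participant.
fit = smf.mixedlm("engagement ~ minute", df, groups=df["participant"]).fit()
slope = fit.params["minute"]   # fixed-effect slope; negative = engagement decay
```

The fixed-effect slope recovers the population-level decline while the random intercepts absorb between-subject baseline differences, mirroring the interpretation given above.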

4.1.2. GSR Analysis

For each participant, the GSR values were aligned with the engagement labels using a ±50 ms timestamp tolerance. Let $E = \{\, i : \text{timestamp } i \text{ labeled as engaged} \,\}$ and $N = \{\, i : \text{timestamp } i \text{ labeled as non-engaged} \,\}$. The mean GSR conductance was then computed separately for engaged and non-engaged moments as shown in Equation (3):
$$
\bar{G}_{\mathrm{eng}} = \frac{1}{|E|} \sum_{i \in E} \mathrm{GSR}_i, \qquad
\bar{G}_{\mathrm{not}} = \frac{1}{|N|} \sum_{i \in N} \mathrm{GSR}_i. \tag{3}
$$
Table 3 summarizes the individual mean conductance levels. Across participants, a paired-sample t-test revealed no statistically significant difference in mean GSR between engaged (M = 3.81 μS) and non-engaged moments (M = 3.77 μS), t(26) = 1.063, p = 0.2975. Thus, physiological arousal, as captured by GSR, did not systematically increase during facially classified “engaged” intervals.
To facilitate visual inspection of the participant-level GSR values reported in Table 3, Figure 6 presents a per-participant comparison of mean electrodermal conductance during facially classified engaged and non-engaged moments. Substantial inter-individual variability in absolute GSR levels is observed; however, no consistent pattern of higher conductance during engaged states emerges across participants.
The absence of a significant GSR difference suggests that facially detected engagement may not directly correspond to sympathetic arousal, emphasizing the complexity of emotional responses during digital media consumption.

4.2. Fast Nonlinear Complexity Analysis of Physiological and Expressive Signals

To complement the linear and distribution-based analyses presented earlier, we introduce a lightweight yet theoretically grounded nonlinear complexity assessment of the physiological (GSR) and expressive (facial engagement) signals recorded during continuous TikTok exposure. Traditional nonlinear methods such as Sample Entropy and full Lempel–Ziv Complexity are computationally demanding for high-frequency, long-duration time series and are therefore impractical for rapid experimental pipelines or real-time affective computing systems. To address this limitation while preserving interpretability, we implement two fast-complexity estimators widely adopted in physiological informatics: Approximate Entropy Light (ApEn-L), which captures local unpredictability in autonomic fluctuations, and Binary Pattern Complexity (BPC), which quantifies the structural richness of state transitions in the engagement sequence.
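The text does not give closed-form definitions for ApEn-L and BPC, so the sketch below implements one plausible reading of each: ApEn-L as plain approximate entropy computed on a decimated copy of the signal (the decimation being the "light" part, an assumption here), and BPC as the fraction of all possible length-k binary patterns that actually occur in the binarized engagement sequence:

```python
import numpy as np

def apen_light(x, m=2, r_frac=0.2, max_n=500):
    """Approximate entropy on a decimated copy of x.
    Assumption: the 'light' variant is ordinary ApEn after
    downsampling the series to at most max_n samples."""
    x = np.asarray(x, dtype=float)
    if len(x) > max_n:
        x = x[:: len(x) // max_n]
    r = r_frac * x.std()                      # similarity tolerance

    def phi(m):
        # Embed the series into overlapping m-length templates.
        emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        # Chebyshev distance between every pair of templates.
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        c = (d <= r).mean(axis=1)             # fraction of matching templates
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

def binary_pattern_complexity(bits, k=3):
    """Fraction of the 2**k possible length-k patterns observed in the
    binarized engagement sequence: a richness-of-transitions proxy."""
    bits = np.asarray(bits, dtype=int)
    patterns = {tuple(bits[i:i + k]) for i in range(len(bits) - k + 1)}
    return len(patterns) / 2 ** k

# Sanity check on synthetic signals: noise is less predictable than a sine,
# and a rigidly alternating binary sequence has low pattern diversity.
rng = np.random.default_rng(2)
noise = rng.standard_normal(400)
sine = np.sin(np.linspace(0, 8 * np.pi, 400))
print(apen_light(noise), apen_light(sine))
print(binary_pattern_complexity(np.tile([0, 1], 200)))
```

Under this reading, a drop in BPC between early and late segments indicates that the engaged/non-engaged sequence cycles through fewer distinct transition patterns, which is the stereotypy effect discussed below.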
Figure 7 summarizes the results of the nonlinear complexity analyses by comparing early (minutes 1–5) and late (minutes 11–15) segments of the session. While the Binary Pattern Complexity (BPC) metric shows a clear reduction in the late interval, indicating a loss of structural richness in facial engagement patterns, the Approximate Entropy Light (ApEn-L) values remain relatively stable across time, with no pronounced decrease between early and late segments.
In contrast to the stability of ApEn-L, the BPC metric shows a marked reduction between the early and late segments. Because BPC reflects the diversity of transitions in the binarized engagement sequence, a lower value indicates that facial engagement responses become more stereotyped and less behaviorally flexible over time. This result suggests that even when participants exhibit facial markers of engagement, these expressions follow increasingly constrained patterns during extended exposure. The phenomenon mirrors reductions in expressive bandwidth observed in sustained-attention tasks and reinforces the conceptualization of algorithmically curated short-form video streams as high-intensity but low-durability affective stimuli.
The introduction of these fast nonlinear complexity metrics represents a novel methodological contribution to the study of short-form media consumption. Whereas prior TikTok research has primarily focused on amplitude-based measures, self-reports, or algorithmic engagement signals, our complexity-based analysis demonstrates that continuous platform exposure leads not only to decreasing engagement levels but also to a collapse in the intrinsic dynamical richness of expressive behavior, even as physiological arousal retains its local variability. This selective degradation of affective complexity highlights a previously undocumented dimension of emotional fatigue in algorithmically driven media environments, offering new implications for digital well-being, attentional sustainability, and the design of ethical personalized content delivery systems.
Figure 7 illustrates the nonlinear complexity patterns of both physiological and expressive responses during continuous TikTok exposure. The first metric, Approximate Entropy Light (ApEn-L), provides a computationally efficient proxy for the unpredictability of the GSR signal. Higher ApEn-L values indicate richer autonomic fluctuations, whereas lower values reflect increased predictability and reduced adaptability of the sympathetic nervous system. As shown in Figure 7, the relative stability of ApEn-L suggests that autonomic signal unpredictability does not systematically decline over the course of exposure. This indicates that, although facial engagement becomes more stereotyped over time, the underlying physiological arousal dynamics retain a comparable level of local variability.
The second metric, Binary Pattern Complexity (BPC), quantifies the diversity of state transitions in the binarized engagement sequence. A higher BPC value reflects a more flexible and varied pattern of facial engagement responses, while a lower value indicates that behavioral expressions follow increasingly repetitive or stereotyped trajectories. The marked reduction in BPC observed in the late segment demonstrates that participants’ expressive behavior narrows over time, even when engagement is still detected. This finding suggests that the nature of engagement shifts from a dynamically rich pattern to a more uniform response profile as the session progresses.
Taken together, the metrics depicted in Figure 7 reveal a differentiated pattern of emotional dynamics. Whereas expressive behavior, as captured by BPC, exhibits a marked reduction in structural complexity over time, physiological arousal dynamics, as indexed by ApEn-L, remain comparatively stable. This dissociation suggests that expressive engagement may undergo faster habituation than autonomic processes, reinforcing the interpretation of emotional engagement as a multi-component construct with partially independent temporal trajectories.

4.3. Discussion

The present study provides empirical evidence on the temporal dynamics of emotional engagement during continuous TikTok use through a multimodal experimental approach. The results demonstrate that, although the proportion of engaged frames significantly exceeded the theoretical baseline, engagement declined as exposure continued. This finding challenges the assumption that algorithmically tailored content produces sustained affective involvement, instead suggesting a habituation effect consistent with cognitive saturation and emotional fatigue. The negative time–engagement correlation (r = −0.134) reflects the attenuation of responsiveness to repeated high-arousal stimuli, aligning with established models of hedonic adaptation and attentional decay in digital media contexts. From a psychophysiological standpoint, these results imply that while TikTok initially captures attention through novelty and audiovisual stimulation, its ability to maintain emotional arousal may be inherently self-limiting over prolonged sessions.
The observed decline in emotional engagement over time can be coherently interpreted through established theories of emotional habituation. Habituation theory posits that repeated exposure to stimuli with similar affective characteristics leads to a progressive reduction in emotional responsiveness, even when stimulus intensity remains high. In the context of short-form video platforms, algorithmic personalization ensures a continuous stream of emotionally optimized content; however, this very optimization may accelerate habituation by reducing novelty and increasing perceptual redundancy. Our findings empirically support this mechanism by demonstrating that initial engagement peaks rapidly but subsequently attenuates during continuous exposure, suggesting that emotional intensity alone is insufficient to sustain long-term engagement without novelty or meaningful variation.
From an attentional perspective, the temporal decay of engagement aligns with models of attentional resource depletion and cognitive fatigue. According to attentional decay frameworks, sustained exposure to high-frequency, high-arousal stimuli taxes limited cognitive resources, leading to reduced responsiveness over time. Short-form video platforms intensify this process by minimizing recovery intervals and continuously demanding orienting responses through rapid audiovisual transitions. The negative correlation between engagement and elapsed time observed in this study provides biometric evidence for attentional fatigue in algorithmically curated media environments, reinforcing the notion that continuous stimulation may paradoxically undermine sustained attention.
Importantly, the dissociation between facially inferred engagement and physiological arousal observed in this study challenges simplified models of emotional engagement that assume a direct correspondence between expressive behavior and autonomic activation. While facial expression analysis captures overt, socially legible markers of engagement, electrodermal activity reflects underlying sympathetic nervous system dynamics that may habituate more rapidly. This divergence suggests that emotional engagement is a multi-layered construct in which expressive and physiological components follow distinct temporal trajectories. The findings therefore support hierarchical and component-process models of emotion, emphasizing the need for multimodal measurement frameworks in affective computing and media psychology.
Taken together, these findings motivate a testable theoretical framework in which emotional engagement during algorithmically curated media exposure follows a three-phase trajectory: (1) rapid affective activation driven by novelty and reward anticipation, (2) progressive habituation characterized by declining physiological arousal and attentional resources, and (3) expressive persistence with reduced autonomic support. This framework generates clear, testable predictions—for example, that physiological markers of arousal will decay faster than facial indicators under continuous exposure, and that introducing meaningful content variation or recovery intervals may partially restore engagement dynamics. By articulating engagement as a dynamic, multi-component process rather than a static outcome, the present study advances theoretical coherence in the study of digital media engagement.
The dissociation observed between facially inferred engagement and GSR further underscores the complexity of measuring affective responses in interactive digital environments. While Affectiva’s AFFDEX algorithm provides a valid proxy for observable expressivity, it may not fully reflect sympathetic activation at the autonomic level. The absence of significant GSR differences between engaged and non-engaged moments ( p = 0.2975 ) suggests that facial cues of engagement are partially decoupled from underlying physiological arousal. This divergence is not necessarily contradictory but indicative of multi-layered emotional processing, where overt expressivity and physiological excitation follow distinct temporal and intensity trajectories. Methodologically, this emphasizes the importance of integrating complementary modalities—such as heart-rate variability, pupil dilation, or EEG—to capture a more holistic spectrum of user engagement.
Beyond its empirical contributions, this study advances the methodological framework for examining emotional engagement in algorithmic media systems. By synchronizing biometric and behavioral data in real time, the experiment bridges the gap between subjective self-reports and objective physiological markers, offering a replicable pipeline for future affective computing research. However, the laboratory setting, constrained exposure time, and logistic parameterization of the engagement function may limit generalizability to naturalistic mobile contexts. Future work should expand temporal scope, incorporate adaptive thresholds for engagement detection, and examine cross-modal synchrony under ecologically valid conditions. Such refinements would enhance understanding of how platform design, content variability, and individual traits jointly modulate emotional regulation and digital well-being in emerging media ecosystems.
In summary, three findings anchor the empirical contribution of this study:
  • The proportion of engaged frames (33.2%) significantly exceeded the theoretical baseline (p < 0.001), confirming TikTok’s high initial emotional appeal.
  • Engagement levels exhibited a significant negative correlation with time (r̄ = −0.134, t(26) = −4.366, p = 0.0002), indicating a consistent decline across participants.
  • GSR conductance did not differ significantly between engaged and non-engaged frames (t(26) = 1.063, p = 0.2975), suggesting that facial indicators of engagement were not consistently accompanied by measurable physiological arousal.
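The first two group-level tests can be sketched as follows. The frame counts and the spread of per-participant correlations are illustrative assumptions; only the reported proportions and means are taken from the study:

```python
import numpy as np
from scipy import stats

# Binomial test of the pooled engaged-frame proportion against the
# empirical baseline p0 = 0.250875. The pooled frame count is an
# assumed placeholder; the 33.2% engaged proportion is as reported.
n_frames = 100_000
n_engaged = int(0.332 * n_frames)
binom = stats.binomtest(n_engaged, n_frames, p=0.250875,
                        alternative="greater")
print(f"binomial p-value: {binom.pvalue:.3g}")

# One-sample t-test of per-participant time-engagement correlations
# against zero, with simulated r values centred on the reported
# mean of -0.134 (the spread 0.16 is an assumption).
rng = np.random.default_rng(3)
r_vals = rng.normal(-0.134, 0.16, size=27)
t_stat, p_val = stats.ttest_1samp(r_vals, 0.0)
print(f"t = {t_stat:.3f}, p = {p_val:.4f}")
```

With the actual per-participant correlation coefficients in place of the simulated `r_vals`, this reproduces the testing structure behind the first two bullets.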
It is important to consider that electrodermal activity reflects autonomic responses with inherent physiological latencies, typically occurring several seconds after stimulus processing. In experimental paradigms with discrete and well-defined emotional events, time-lagged or event-aligned analyses can provide valuable insight into causal affective dynamics. However, the present study intentionally employed a naturalistic, continuous browsing paradigm in which participants were exposed to an uninterrupted stream of algorithmically curated short-form videos. In such contexts, emotional stimulation is sustained and overlapping rather than event-based, making it difficult to identify discrete engagement onsets or stimulus boundaries suitable for event alignment. Consequently, the analyses focused on distribution-level and temporal trends in engagement and physiological arousal rather than fine-grained event-locked coupling. This design choice aligns with the study’s objective of characterizing global engagement dynamics and habituation effects under continuous algorithmic stimulation, rather than modeling moment-to-moment causal responses. Future studies employing controlled stimulus timing, explicit event markers, or experimentally induced engagement episodes may extend the present framework by incorporating time-lagged or event-aligned analyses to further disentangle expressive and autonomic response dynamics.
An additional methodological extension involves the use of individualized engagement thresholds, calibrated to each participant’s baseline or distributional properties. While such personalization may further reduce inter-individual variability, it requires reliable ground-truth labels or extended calibration phases, which were beyond the scope of the present study. Future work may integrate adaptive or participant-specific thresholds to enhance personalization and cross-study comparability.
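One lightweight form of the participant-specific thresholding discussed above is a per-participant quantile cut-off. The sketch below is a minimal illustration under assumed column names (`participant`, `engagement`), not the study's implementation:

```python
import numpy as np
import pandas as pd

def personalized_binarize(df, q=0.75):
    """Label a sample 'engaged' when its engagement value exceeds that
    participant's own q-quantile, rather than one global cut-off."""
    thr = df.groupby("participant")["engagement"].transform(
        lambda s: s.quantile(q))
    return df["engagement"] > thr

# Two simulated participants with very different expressive baselines:
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "participant": [0] * 200 + [1] * 200,
    "engagement": np.concatenate([rng.normal(0.3, 0.05, 200),
                                  rng.normal(0.7, 0.05, 200)]),
})
engaged = personalized_binarize(df)
# Each participant ends up with roughly the same engaged fraction (~25%),
# whereas a single global threshold would label participant 1 as almost
# always engaged and participant 0 as almost never engaged.
print(engaged.groupby(df["participant"]).mean())
```

The trade-off noted in the text applies: the quantile `q` either needs validation against ground-truth labels or a calibration phase per participant.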
An important consideration concerns the role of video content characteristics in shaping emotional engagement. While incorporating explicit content labels or emotional attributes may appear desirable for disentangling semantic effects from temporal dynamics, such an approach assumes that content operates as an independent experimental variable. In algorithm-driven platforms such as TikTok, however, content selection is itself an endogenous outcome of continuous personalization and affective optimization.
The present study intentionally prioritizes ecological validity by examining emotional engagement under uninterrupted, algorithmically curated stimulation, rather than isolating predefined content categories. From this perspective, short-form video content functions as a manifestation of the algorithmic system rather than an external factor. Artificially controlling or labeling content may therefore obscure the very mechanisms through which algorithmic personalization shapes affective dynamics over time.
By focusing on temporal patterns of engagement and physiological response during continuous exposure, the study isolates system-level effects such as habituation, emotional fatigue, and multimodal dissociation. Future research may extend this framework by combining controlled content paradigms with naturalistic feeds to disentangle semantic attributes from algorithmic delivery, thereby complementing the system-level insights provided here.
Taken together, these results indicate that TikTok content elicits strong but transient emotional engagement, with diminishing affective responsiveness over continuous exposure. The dissociation between facial and physiological metrics highlights the need for multimodal models of engagement that account for both observable and latent affective processes.

4.3.1. Rationale and Value of the Multimodal Approach

A central contribution of this study lies in the combined analysis of facial expression metrics and electrodermal activity, which enables a more nuanced characterization of emotional engagement than either modality alone. Facial engagement indices derived from the AFFDEX SDK capture observable and socially legible expressions of responsiveness, reflecting how users outwardly react to stimuli. In contrast, GSR provides a direct measure of sympathetic nervous system activation, indexing underlying physiological arousal that may not be consciously expressed.
The integration of both modalities revealed a systematic dissociation between expressive and physiological components of engagement. While facial engagement exhibited a clear temporal decline and a reduction in structural complexity over sustained exposure, physiological arousal did not show corresponding differences between engaged and non-engaged states and maintained relatively stable nonlinear complexity. This divergence indicates that outward expressions of engagement may habituate more rapidly than autonomic processes, a pattern that would remain undetected under a unimodal design.
By jointly examining expressive and physiological signals, the multimodal framework allowed us to move beyond simple confirmation of engagement levels and instead uncover the layered and partially independent dynamics of emotional engagement under continuous algorithmic stimulation. This approach refines existing models of digital engagement by demonstrating that reliance on a single modality—either facial metrics or physiological arousal alone—can lead to incomplete or potentially misleading interpretations. The findings thus underscore the necessity of multimodal measurement frameworks for capturing the full complexity of affective responses in highly stimulating digital media environments.

4.3.2. Role of Subjective Feedback in Interpreting Biometric Engagement

Although the primary analyses of this study focus on objective biometric measures, a post-session self-report questionnaire was administered to capture participants’ subjective impressions of their engagement experience. This questionnaire was intentionally not included in the main statistical analyses, as self-reported engagement reflects reflective, post hoc evaluation rather than moment-to-moment affective dynamics. In contrast, the biometric measures employed in this study were designed to capture rapid, non-conscious emotional and physiological responses during continuous exposure.
The distinction between subjective and biometric measures is particularly relevant in light of the observed dissociation between facial engagement and physiological arousal. Participants’ self-reports often reflect perceived enjoyment, interest, or fatigue after the session, whereas facial expressions and GSR index real-time expressive and autonomic processes that may evolve independently of conscious appraisal. Rather than serving as redundant indicators, these modalities provide complementary perspectives on engagement operating at different levels of awareness.
From this perspective, the subjective feedback collected post-session offers contextual support for interpreting the biometric findings, helping to situate objective engagement decay and multimodal dissociation within participants’ conscious experience. Future work may integrate synchronized self-report probes or experience sampling methods with biometric measures to further bridge subjective and physiological dimensions of engagement in algorithmically curated media environments.

5. Conclusions

This study provides a comprehensive multimodal assessment of emotional engagement during continuous TikTok exposure, integrating synchronized facial-affective analytics and physiological measures to quantify real-time user responses. The evidence reveals a paradox inherent in algorithmic media ecosystems: TikTok’s design successfully captures immediate emotional engagement but fails to sustain it, as users exhibit rapid cognitive habituation and emotional attenuation. This temporal decay challenges long-standing assumptions about persistent digital reinforcement and underscores the ephemeral nature of affective involvement in short-form content consumption.

From an applied scientific perspective, these findings contribute to both affective computing and human–computer interaction research. The observed dissociation between facial expressivity and physiological arousal demonstrates the limitations of single-modality engagement measures and highlights the necessity of multimodal, psychophysiologically grounded models for emotion detection. The methodological framework presented here, combining GSR, facial metrics, and behavioral data within a synchronized pipeline, offers a replicable foundation for evaluating user experience in algorithm-driven environments.

Beyond empirical results, this work raises broader implications for digital well-being, persuasive interface design, and the ethics of algorithmic media. The fleeting yet intense engagement loops identified may underpin compulsive usage patterns, emphasizing the need for transparent, user-centric design policies. Future applied research should explore longitudinal, ecologically valid settings to examine how micro-level affective fluctuations aggregate into macro-behavioral dependencies.
Such insights can inform the development of adaptive digital systems that balance engagement with emotional sustainability, advancing both psychological health and responsible innovation in the attention economy. The findings of this study hold meaningful implications for both theoretical models of emotional engagement and practical applications in digital media design. Theoretically, they challenge prevailing assumptions of sustained affective reinforcement by demonstrating that algorithmic platforms such as TikTok generate only transient engagement, suggesting the need to refine existing models of digital attention, hedonic adaptation, and affective habituation. Practically, these insights underscore the importance of developing user-centered interface architectures that promote emotional balance rather than reliance on intermittent reinforcement. From a research perspective, future studies should extend these results beyond controlled laboratory conditions to longitudinal, ecologically valid contexts, integrating additional physiological and neurocognitive measures, such as heart-rate variability, EEG, or eye tracking, to deepen understanding of multimodal affective responses. Nonetheless, the study’s limited exposure duration, participant homogeneity, and reliance on laboratory-based simulations constrain its generalizability, warranting replication with diverse populations and real-world usage scenarios to enhance external validity and applied impact.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/technologies1010000/s1.

Author Contributions

Conceptualization, C.D.-V.-S. and V.C.; methodology, C.D.-V.-S.; software, D.C.-T., D.S.M.-R., J.A.G.-C. and B.S.; validation, C.D.-V.-S., V.C. and J.G.R.-B.; formal analysis, C.D.-V.-S.; investigation, V.C., J.G.R.-B. and D.S.M.-R.; resources, J.V.-A.; data curation, D.C.-T. and B.S.; writing—original draft preparation, C.D.-V.-S. and V.C.; writing—review and editing, C.D.-V.-S., V.C. and J.G.R.-B.; visualization, B.S.; supervision, C.D.-V.-S.; project administration, C.D.-V.-S.; funding acquisition, V.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was approved by the Integrity Code of the Universidad Panamericana, validated by the Social Affairs Committee and approved by the Governing Council through resolution CR 98-22, on 15 November 2022.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data availability requests can be sent to the journal through the Supplementary Materials.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Functional block diagram of the experimental system and multimodal data analysis pipeline. Facial video and electrodermal activity signals are synchronously acquired during continuous TikTok exposure, processed, and analyzed through complementary temporal and complexity-based methods.
Figure 2. Example time series for a single participant, showing normalized facial engagement (0–1; blue) derived from the AFFDEX Engagement Index and GSR conductance (µS; red) over a 60-s window.
Figure 3. Binomial test of facial engagement frequency. The observed proportion of frames classified as engaged across all participants is compared against the empirical baseline probability (p₀ = 0.250875) derived from prior work, indicated by the red dashed line.
Figure 3. Binomial test of facial engagement frequency. The observed proportion of frames classified as engaged across all participants is compared against the empirical baseline probability ( p 0 = 0.250875 ). The red dashed line indicates the empirical baseline probability ( p 0 = 0.250875 ) derived from prior work, against which the observed proportion of engaged frames is compared in the binomial test.
Technologies 14 00070 g003
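As a rough illustration of the test in Figure 3, an exact one-sided binomial test against the baseline p0 = 0.250875 can be sketched as follows. Only the baseline probability comes from the paper; the frame counts below are hypothetical placeholders, not the study's data.

```python
from math import lgamma, log, exp

def binom_sf(k, n, p):
    """One-sided exact binomial tail P(X >= k) for X ~ Bin(n, p),
    computed in log space to avoid overflow for large n."""
    total = 0.0
    for i in range(k, n + 1):
        log_pmf = (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                   + i * log(p) + (n - i) * log(1.0 - p))
        total += exp(log_pmf)
    return min(total, 1.0)

P0 = 0.250875          # empirical engagement baseline from prior work [15]
n_frames = 1000        # hypothetical total number of analyzed frames
k_engaged = 300        # hypothetical number of frames classified as engaged

p_value = binom_sf(k_engaged, n_frames, P0)
print(f"observed proportion = {k_engaged / n_frames:.3f}, "
      f"one-sided p = {p_value:.4g}")
```

With these placeholder counts the observed proportion (0.300) lies well above the baseline, so the tail probability is small; the study's actual counts would of course give a different value.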
Figure 4. Per-participant comparison of normalized facial engagement between early (minutes 1–5) and late (minutes 11–15) viewing intervals. Each point represents one participant. Values below the diagonal indicate reduced engagement during the late interval.
Figure 5. Growth-curve model of emotional engagement over time. Each gray line represents the engagement trajectory of an individual participant (all N = 27 participants are shown); the black line corresponds to the population-level trend estimated by the linear mixed-effects model, capturing the average temporal evolution of engagement while accounting for inter-individual variability.
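The intuition behind the growth-curve model in Figure 5 can be sketched without a full mixed-effects fit: a mixed model essentially pools per-participant regression slopes toward a population trend. The minimal sketch below fits an ordinary least-squares slope per participant on synthetic declining trajectories (the trajectories, noise levels, and decline rates are invented for illustration; only N = 27 and the 15-minute session length come from the text).

```python
import random

def ols_slope(t, y):
    """Least-squares slope of y against t: cov(t, y) / var(t)."""
    n = len(t)
    mt = sum(t) / n
    my = sum(y) / n
    cov = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
    var = sum((ti - mt) ** 2 for ti in t)
    return cov / var

random.seed(1)
minutes = list(range(1, 16))  # 15-minute session, 1-minute bins

# Synthetic engagement trajectories: a gentle decline plus noise,
# mimicking the habituation pattern described for Figure 5.
slopes = []
for _ in range(27):  # N = 27 participants, as in the study
    base = random.uniform(0.5, 0.9)
    decline = random.uniform(0.005, 0.03)
    y = [base - decline * t + random.gauss(0.0, 0.02) for t in minutes]
    slopes.append(ols_slope(minutes, y))

mean_slope = sum(slopes) / len(slopes)
print(f"mean per-participant slope: {mean_slope:.4f} per minute")
```

A real analysis would instead fit a linear mixed-effects model (e.g., with random intercepts and slopes per participant) so that individual trajectories inform, and are shrunk toward, the population estimate.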
Figure 6. Per-participant comparison of mean galvanic skin response (GSR, μS) during facially classified engaged and non-engaged moments. Values correspond to the participant-level means reported in Table 3. Bars represent mean electrodermal conductance for each participant under both conditions.
Figure 7. Fast nonlinear complexity metrics for physiological (ApEn-L) and expressive (BPC) signals during early (min 1–5) and late (min 11–15) TikTok exposure intervals. BPC shows a clear reduction over time, while ApEn-L remains relatively stable, highlighting a divergence between expressive and physiological complexity dynamics.
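For readers unfamiliar with the complexity metrics in Figure 7, the sketch below implements textbook approximate entropy, not the paper's fast ApEn-L variant, with the conventional defaults m = 2 and r = 0.2 × SD. Higher values indicate a less regular signal, which is why a drop in expressive complexity (BPC) alongside stable physiological complexity signals a divergence between the two channels. The example signals are synthetic.

```python
from math import log
import random

def approx_entropy(x, m=2, r=None):
    """Standard approximate entropy ApEn(m, r) of a 1-D sequence.
    Higher values mean a less regular, less predictable signal.
    (Generic textbook ApEn, not the 'fast' ApEn-L variant.)"""
    n = len(x)
    if r is None:
        mean = sum(x) / n
        std = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
        r = 0.2 * std  # conventional tolerance: 20% of the signal's SD

    def phi(m):
        templates = [x[i:i + m] for i in range(n - m + 1)]
        count = len(templates)
        total = 0.0
        for a in templates:
            matches = sum(
                1 for b in templates
                if max(abs(u - v) for u, v in zip(a, b)) <= r
            )
            total += log(matches / count)
        return total / count

    return phi(m) - phi(m + 1)

random.seed(0)
periodic = [float(i % 2) for i in range(200)]   # highly regular signal
noisy = [random.random() for _ in range(200)]   # irregular signal
print(approx_entropy(periodic), approx_entropy(noisy))
```

As expected, the irregular signal yields a clearly higher ApEn than the strictly alternating one.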
Table 1. Representative related work on TikTok, affective computing, and biometric engagement analysis.

| Domain | Study | Data/Population | Methods/Tools | Key Findings and Relevance |
|---|---|---|---|---|
| TikTok psychology | [7] | Literature review of general users | Review of early empirical TikTok studies | Highlights attention-capture mechanisms and digital addiction risks motivating biometric evaluation. |
| Engagement prediction (TikTok) | [15] | Verified TikTok posts (Spanish sample) | Regression and logistic models of post features | Identifies content and presentational factors increasing engagement; establishes empirical engagement baseline (p0 = 0.250875). |
| Biometric engagement and virality | [16] | Video viewers (ads) | Affectiva facial metrics, eye-tracking, physiological sensors | Demonstrates predictive power of multimodal signals for virality; supports using facial and GSR features jointly. |
| Facial expression analysis (AFFDEX) | [20] | Real-world faces dataset | AFFDEX 2.0 toolkit (Affectiva/Smart Eye) | Describes modern SDK accuracy and emotional metrics; justifies software choice for facial engagement detection. |
| GSR signal modeling | [17] | Psychophysiological datasets | Convex optimization for EDA decomposition (cvxEDA) | Provides robust method to separate tonic and phasic arousal; relevant to refining physiological engagement analysis. |
| Social media addiction theory | [14] | Meta-review of multiple SNS platforms | Behavioral addiction framework | Synthesizes theoretical model of compulsive use; contextualizes TikTok as attention-driven digital environment. |
Table 2. Recorded variables and their corresponding units.

| Variable | Description (Unit/Range) |
|---|---|
| Timestamp | Elapsed time of recording (ms) |
| GSR (EDA) | Skin conductance (microsiemens, μS) |
| Engagement | AFFDEX-derived emotional engagement probability (0–100) |
Table 3. Mean GSR (μS) during engaged vs. non-engaged moments per participant.

| ID | GSR engaged | GSR non-engaged | ID | GSR engaged | GSR non-engaged |
|---|---|---|---|---|---|
| P01 | 3.63 | 3.69 | P15 | 0.67 | 0.46 |
| P02 | 2.53 | 2.75 | P16 | 2.18 | 2.40 |
| P03 | 8.70 | 8.31 | P17 | 15.96 | 16.07 |
| P04 | 3.23 | 2.47 | P18 | 8.29 | 9.27 |
| P05 | 0.93 | 0.95 | P19 | 1.31 | 1.22 |
| P06 | 0.45 | 0.45 | P20 | 1.07 | 0.58 |
| P07 | 3.80 | 3.99 | P21 | 6.08 | 5.76 |
| P08 | 1.70 | 1.61 | P22 | 0.68 | 0.72 |
| P09 | 2.36 | 2.33 | P23 | 0.90 | 0.76 |
| P10 | 0.85 | 0.72 | P24 | 2.48 | 2.03 |
| P11 | 4.67 | 4.45 | P25 | 1.32 | 1.20 |
| P12 | 2.44 | 2.33 | P26 | 2.05 | 2.09 |
| P13 | 0.37 | 0.43 | P27 | 2.83 | 2.78 |
| P14 | 0.21 | 0.21 | | | |
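The engaged vs. non-engaged means in Table 3 lend themselves to a simple nonparametric paired check. The paper does not specify this exact procedure, so the following two-sided exact sign test is an illustrative re-analysis of the tabulated values only.

```python
from math import comb

# (GSR_engaged, GSR_non_engaged) per participant P01-P27, from Table 3 (μS).
pairs = [
    (3.63, 3.69), (2.53, 2.75), (8.70, 8.31), (3.23, 2.47), (0.93, 0.95),
    (0.45, 0.45), (3.80, 3.99), (1.70, 1.61), (2.36, 2.33), (0.85, 0.72),
    (4.67, 4.45), (2.44, 2.33), (0.37, 0.43), (0.21, 0.21), (0.67, 0.46),
    (2.18, 2.40), (15.96, 16.07), (8.29, 9.27), (1.31, 1.22), (1.07, 0.58),
    (6.08, 5.76), (0.68, 0.72), (0.90, 0.76), (2.48, 2.03), (1.32, 1.20),
    (2.05, 2.09), (2.83, 2.78),
]

diffs = [eng - non for eng, non in pairs]
pos = sum(d > 0 for d in diffs)
neg = sum(d < 0 for d in diffs)
n = pos + neg                     # ties are dropped in a sign test

# Two-sided exact sign test under H0: P(diff > 0) = 0.5.
k = max(pos, neg)
p_value = min(1.0, 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n)
print(f"{pos} positive vs {neg} negative differences, p = {p_value:.3f}")
```

The test is far from significance on these values, which is consistent with the paper's finding of no significant skin-conductance difference between engaged and non-engaged states.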
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.