We created three video stimuli, each 198 s long, with the same narrative and content but different editing styles. The first stimulus was a one-shot movie consisting of a single open shot with no cuts. The second was a movie with 33 shots and a continuous, classic style of editing, with an average shot length of 5.9 s. This stimulus presented classic shots with smooth transitions in accordance with the 180° rule, by which the camera stays on one side of the axis of action so that successive shots of the same action remain spatially consistent. The third was a movie with 79 shots and a discontinuous, chaotic style of editing, with an average shot length of 2.4 s. This stimulus broke the classic 180° rule and presented sudden movements within the frame, discontinuities in time and space between shots, constant camera movement, and a large number of different kinds of shots.
The narrative featured a man who enters a room containing a desk, goes out, enters again, and sits at the desk. On the desk there are three colored balls, three books, and an apple. He juggles the colored balls, puts them back on the desk, and goes out again. The man enters once more with a laptop in a case, sits, opens the case, and takes out the device. He opens it and works on it, then picks up, one by one, each of the three books on the desk, situated on the right side of the screen, reads something in each, and puts it down on the left side of the screen. He works for a while on the laptop, then closes it and moves it to the left. The man then puts his hand into his pocket and takes out a small torch, which he points towards the viewer. He turns it on for a few seconds, turns it off, and puts it back into his pocket. He takes the apple from the desk, rubs it on his shoulder, and bites it. He chews and bites the apple repeatedly for a while, then leaves the apple core behind the laptop, to the left of the screen. The man swallows the rest of the apple and wipes his mouth with his hand. He then makes a happy face, a sad face, and a disgusted face, runs his hand over his face, and makes a happy face again. Finally, the man stands up and leaves the room.
Stimuli were presented on a high-definition (HD) 42-inch light-emitting diode (LED) display (TH42PZ70EA, Panasonic Corporation, Osaka, Japan) using Paradigm Stimulus Presentation software (Perception Research Systems Incorporated, Lawrence, KS, USA).
2.3. Data Acquisition
Subjects participated in sessions (~15 min) of active viewing. All participants watched all the stimuli, and the order of stimulus presentation was randomized across all possible combinations. The stimuli were presented on a stage designed to make participants feel comfortable while watching the media content. We asked participants to watch the stimuli without further requirements, having told them that they would be asked some questions after viewing. At the end of the session, participants filled out a distractor questionnaire.
Observers’ eye blinks were detected following a dual protocol: electroencephalographic/electromyographic (EEG/EMG) recordings and an HD video recording system. Participants’ EEG/EMG was recorded using a wireless device (Enobio®, Neuroelectrics, Barcelona, Spain) with 20 electrodes placed according to the 10–20 International System. Eye blinks were detected with the prefrontal Fp1 and Fp2 electrodes and the electrooculographic electrodes. For comparison, participants’ faces were also recorded in a close-up shot with an HD camera (HDR-GW55VE, Sony Corporation, Tokyo, Japan) at 25 frames/s.
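In outline, blink detection from a prefrontal EEG channel amounts to band-pass filtering the signal (blinks concentrate below a few hertz) and marking large-amplitude threshold crossings. The sketch below is illustrative only, not the study's pipeline (the study used Brainstorm's detectors plus manual video checks); the sampling rate, filter order, and threshold are assumptions, not values from the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_blinks(fp_signal, fs=500.0, low=0.5, high=3.0, thresh_sd=3.0):
    """Band-pass a prefrontal (Fp1/Fp2) channel and mark blink onsets.

    fs (sampling rate), the 4th-order Butterworth filter, and the
    3-SD amplitude threshold are illustrative assumptions.
    Returns blink onset times in seconds from recording start.
    """
    # Zero-phase band-pass filter (0.5-3 Hz by default)
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, fp_signal)
    # Mark samples exceeding thresh_sd standard deviations
    above = filtered > thresh_sd * filtered.std()
    # Rising edges of the suprathreshold mask = blink onsets
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)
    return onsets / fs
```

A detector like this yields per-participant blink timestamps that can then be cross-checked against the video recording, as in the dual protocol described above.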
2.4. Data Analysis
We analyzed eye blinks in two steps. First, we band-pass filtered the EEG/EMG data to 0.5–3 Hz and applied Brainstorm’s eye-blink detectors (Brainstorm, University of Southern California, Los Angeles, CA, USA) to the electrooculographic (EOG), Fp1, and Fp2 channels, running on MATLAB R2013a (The MathWorks Inc., Natick, MA, USA). Second, we manually checked eye blinks in the HD camera recordings of participants’ faces. Combining the two methods, we obtained a matrix with a final list of each participant’s eye blinks. To assess changes in blink rate over time, each video was divided into 40 blocks of 4.95 s, and the blink count in each block was converted into blinks/min. Blink rate was analyzed with a repeated-measures analysis of variance (ANOVA) with two factors: time and style of editing, where the time factor corresponds to the blocks. Using the blinks within each block, we then computed a two-way ANOVA comparing blocks that showed increases or decreases in blink rate against the remaining blocks, with type of block and style of editing as factors. We used SigmaPlot 11.0 (Systat Software Inc., San Jose, CA, USA) for the statistical analysis.
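The conversion from blink timestamps to per-block rates can be sketched as follows: the 198 s video is split into 40 blocks of 4.95 s, blinks are counted per block, and counts are scaled to blinks/min. The function name and the example timestamps are hypothetical, for illustration only.

```python
import numpy as np

VIDEO_DURATION_S = 198.0                    # duration of each stimulus
N_BLOCKS = 40                               # blocks per video
BLOCK_LEN_S = VIDEO_DURATION_S / N_BLOCKS   # 4.95 s per block

def blink_rate_per_block(blink_times_s):
    """Bin blink timestamps (seconds from video onset) into the 40
    blocks and convert each block's count into blinks/min."""
    edges = np.linspace(0.0, VIDEO_DURATION_S, N_BLOCKS + 1)
    counts, _ = np.histogram(blink_times_s, bins=edges)
    return counts * (60.0 / BLOCK_LEN_S)

# Hypothetical blink times for one participant (illustration only)
rates = blink_rate_per_block([1.0, 6.0, 12.0, 100.0, 197.0])
```

Each participant then contributes one 40-element blinks/min vector per editing style, which is the layout the repeated-measures ANOVA operates on.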