Event-Related Potentials during Verbal Recognition of Naturalistic Neutral-to-Emotional Dynamic Facial Expressions

Abstract: Event-related potentials during facial emotion recognition have been studied for more than twenty years, and interest in naturalistic stimuli has grown in recent years. This research therefore aimed to study event-related potentials (ERP) during the recognition of dynamic neutral-to-emotional facial expressions, which are more ecologically valid than static faces. We recorded the ERP of 112 participants who watched 144 dynamic morphs depicting a gradual change from a neutral expression to a basic emotional expression (anger, disgust, fear, happiness, sadness or surprise) and labelled those emotions verbally. We observed some typical ERP waves, such as the N170, P2, EPN and LPP. Participants with lower accuracy exhibited a larger posterior P2. Participants with faster correct responses exhibited larger P2 and LPP amplitudes. We also conducted a classification analysis that predicted, with an accuracy of 76%, which participants recognise emotions quickly, on the basis of the amplitudes of the posterior P2 and LPP. These results extend previous findings on the electroencephalographic correlates of facial emotion recognition.


Introduction
Darwin [1] was perhaps the first to scientifically describe facial expressions and their evolutionary functions in a variety of animals. Emotional facial expressions of conspecifics are crucial stimuli that provoke different adaptive reactions. In human culture, adequate recognition of emotional facial expressions is required in interpersonal, professional and civic situations. Previous research has provided a wealth of knowledge about how emotion is recognised from facial expressions [2][3][4][5]. Furthermore, many investigators have studied why individuals with psychiatric and neurological diseases are often impaired in their ability to recognise facial emotional expressions (see a recent review by Collin et al. [6]).
Event-related potentials (ERPs) of the electroencephalogram (EEG), among other physiological methods, make it possible to study the brain correlates of facial emotion recognition, and numerous findings have accumulated. Thus, the P1 wave is an early positive deflection, typically peaking around 100 ms at posterior sites [7,8]. It reflects early sensory processing of stimuli presented at an expected location [9]. Numerous works suggest that the extrastriate cortex is the brain source of the P1 [10,11]. Some studies have observed the P1 during facial emotion recognition. Holmes et al. [12] showed that P1 amplitude is greater in response to fearful faces than to neutral ones. Rellecke et al. [13] also found a greater P1 in response to fearful and angry compared to neutral expressions. In a study by Pourtois et al. [14], fearful faces likewise induced a larger P1 than neutral faces.
The N170 is a wave typically registered between 130 and 200 ms over posterior lateral scalp areas in response to facial compared to non-facial stimuli [15,16]. A meta-analysis by Hinojosa et al. [17] showed that, in general, the N170 is larger in response to emotional faces than to neutral ones. As to its neural underpinnings, the occipitotemporal cortex and the posterior fusiform gyrus are thought to be the sources of the N170 [18,19].
The early posterior negativity (EPN) is a wave associated with sensory encoding and perceptual analysis in the visual cortex [20]. In a study by Schupp et al. [21], threatening faces elicited a greater EPN compared with neutral or friendly expressions. Hartigan and Richards [22] found that the EPN was enhanced for disgusted over neutral expressions. In a study by Mavratzakis et al. [23], the EPN was enhanced in response to fearful as compared to neutral or happy faces. Langeslag and Van Strien [24] showed that angry faces provoked a greater EPN than neutral faces. There is evidence that the EPN may originate in the occipital and parietal cortices [25]. This research prepares the ground for practical implications in the field of diagnostics. For example, Sarraf-Razavi et al. [26] found a significant reduction in the early posterior negativity for happy and angry faces in hyperactive children compared to controls. In this vein, Frenkel and Bar-Haim [27] revealed an enhanced EPN to fearful faces in patients with anxiety disorders.
The P2 is another wave sometimes found during face recognition. It is a positive component around 200 ms that has been related to attention and categorisation [28,29]. As for facial recognition, happy expressions have been shown to provoke a larger P2 amplitude than angry and neutral ones [30]. The P300 has also been related to facial emotion recognition [31] and has even been shown to be a marker of impairment in patients with schizophrenia [32].
The parietocentral late positive complex (LPC) has also been registered in response to facial stimuli. Thus, Werheid et al. [33] found that non-attractive faces elicited relatively lower LPC amplitudes. Later, Lu et al. [34] revealed that this was the case only for male participants, but not for female ones.
However, the majority of studies on the EEG correlates of facial emotion recognition have used static photographs of faces, an approach that has received criticism. Krumhuber et al. [35] argue that dynamic stimuli improve coherence in the identification of emotion and make it possible to differentiate between genuine and fake emotional expressions. They warn that researchers cannot understand what faces do as long as they use static snapshots of faces as a paradigm for studying facial expressions. Ambadar et al. [36] found that dynamic facial expressions were recognised better than static photographs, and Lander and Bruce [37] showed the same effect for the recognition of identity. In addition, neurological investigations demonstrate that there are different pathways for the recognition of static and dynamic facial expressions [38]. A meta-analysis by Zinchenko et al. [39] showed that brain regions typically related to facial emotion recognition, such as the amygdala and fusiform gyrus, respond more strongly to dynamic expressions than to static ones.
Nowadays, dynamic facial stimuli have become more widespread. For example, Amaral et al. [40] presented complex ecological animations with social content at the single-trial level: 900 ms animated avatars, alone or in groups, which alternated their gaze direction to the left and to the right. Qu et al. [41] proposed a database of facial emotion video clips. Some studies have applied fMRI to study the recognition of dynamic facial expressions [42][43][44].
Only a few researchers have used dynamic facial expressions to investigate ERP. Fichtenholtz et al. [45] found a wave analogous to the N170 and a second positive wave, which they called P325. In this vein, Recio et al. [46] asked participants to categorise happy, angry or neutral faces presented either as a static image or as a dynamic change evolving within 150 ms. Dynamic happy facial expressions were recognised faster and more accurately than static photographs. They observed the EPN and the late positive potential (LPP), which were larger and longer during the recognition of dynamic, as compared to static, facial expressions. Trautmann-Lengsfeld et al. [47] presented videos for 1500 ms and also revealed the N170, EPN and LPP. Stefanou and colleagues [48] presented 500 ms video clips constructed from static face photographs and found the N170 wave at posterior sites.
As Recio et al. [49] noted, in face-to-face communication, facial emotional expressions do not consist of static images showing the apex of maximal expression intensity. Moreover, we believe one should take into account that, before watching a dynamic facial emotional expression, people usually explore the neutral expression of the other person. We therefore supposed that presenting our participants with dynamic facial expressions preceded by the actors' neutral faces would be more ecologically valid. The aim of our work was thus to study ERP during facial emotion recognition and to explore the feasibility of using dynamic stimulation for emotion recognition in ERP research. We expected to register the waves typical of such studies: P1, N170, P2, EPN and LPP, and we wanted to compare these waves between participants with different levels of accuracy and different reaction times. For this purpose, we applied a paradigm of verbal responses to dynamic facial expressions, which has proven effective in differentiating all basic facial emotional expressions [50]. In other words, this more ecological paradigm was able to detect differences in accuracy and speed between every possible pair of basic facial emotional expressions: happiness was the best recognised facial emotion and fear the worst, with surprise, disgust, anger and sadness arranged in between. Such complete differentiation has not been achieved in previous studies, where participants responded with keys. Given the widespread and fruitful application of artificial intelligence in the field of facial recognition [51][52][53], we also applied a classification analysis to predict which participants would recognise facial emotional expressions quickly on the basis of their ERP amplitudes.

Participants
The sample consisted of 112 right-handed participants (59% of females, mean age = 22.5, SD = 3.4). All had normal or corrected-to-normal vision. No one had a psychiatric or neurological diagnosis, nor took medication. Procedures were conducted according to the Declaration of Helsinki and approved by the local ethics committee. The participants took part in the study without any reward.

Material and Design
Following a previous study [50], we used 144 dynamic morphs made from 56 coloured, full-face photographs of four men and four women from the Karolinska Directed Emotional Faces collection (KDEF; [54]). They depicted seven facial expressions: anger, disgust, fear, happiness, sadness, surprise and neutral. With the help of the software Sqirlz Morph 2.1 by Xiberpix, we made 48 "neutral-to-emotion" morph pairs (six emotional expressions × eight actors). In the photographs, we marked key points of the face so that the software could prepare a gradual change of 12 frames, from the initial neutral expression to the final emotional one. Thus, each dynamic morph began with a neutral expression (500 ms), followed by 11 frames (42 ms each), and lasted 962 ms in total (Figure 1). The idea of first presenting a static neutral face was adopted from the work of Fichtenholtz et al. [45]. Each dynamic morph was presented three times. The trials were quasi-randomised: each quarter of trials (1-36, 37-72, 73-108 and 109-144) contained the same number (six) of each facial emotional expression, and no facial emotional expression or actor was presented more than twice in a row. All participants viewed the same sequence of 144 dynamic morphs.

The participants were seated in a dark soundproof cubicle in front of a 19-inch computer screen at a distance of 60 cm, so that the visual angle equalled 19 degrees. A microphone (frequency: 20-16,000 Hz; sensitivity: 54 dB; impedance: 2.2 kΩ) was attached to their auricles so that its sensor was 2 cm from their lips and could not be displaced by involuntary movements. The experiment was run in OpenSesame [55]. Each trial began with a fixation cross (1 s); then participants viewed a face for 962 ms, and the last frame remained on the screen until a response was made. They were asked to announce the emotion label aloud as quickly as possible (verbal reaction times were measured). A previous study found no differences in the pronunciation of the emotion labels; hence, pronunciation should not have influenced the verbal reaction times to the video clips with emotional faces [50]. A technician logged the responses so that accuracy could be computed later. During the intertrial period (3 s) before each trial, the six emotion labels appeared in alphabetical order as a reminder of the possible responses. Before the experiment, participants completed six training trials.
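The quasi-randomisation constraints described above can be sketched as follows. This is an illustrative reconstruction, not the authors' original script: the function names are ours, actors are drawn at random per trial for simplicity (the study balanced eight actors across 48 morphs presented three times), and the run constraints are checked within each quarter.

```python
import random

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
ACTORS = list(range(8))  # four men and four women

def quarter_ok(seq):
    """No emotion and no actor may appear more than twice in a row."""
    for key in ("emotion", "actor"):
        for i in range(len(seq) - 2):
            if seq[i][key] == seq[i + 1][key] == seq[i + 2][key]:
                return False
    return True

def make_sequence(seed=0, max_tries=10000):
    """Build 144 trials: four quarters of 36 trials, six per emotion each."""
    rng = random.Random(seed)
    trials = []
    for _ in range(4):
        # every emotion appears six times per quarter; actors are illustrative
        quarter = [{"emotion": e, "actor": rng.choice(ACTORS)}
                   for e in EMOTIONS for _ in range(6)]
        for _ in range(max_tries):
            rng.shuffle(quarter)
            if quarter_ok(quarter):
                break
        else:
            raise RuntimeError("no valid shuffle found")
        trials.extend(quarter)
    return trials
```

Rejection sampling is sufficient here because, with only six repetitions of each emotion among 36 trials, most random shuffles already satisfy the no-three-in-a-row constraint.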

Data Collection and Reduction
The EEG was recorded from 32 electrodes placed according to the international 10-20 system, using an NVX-136 amplifier (MCS, Russia; filters = 0.05-40 Hz; sampling rate = 1000 Hz; impedance ≤ 10 kΩ; ground: AFz). Ocular movements were recorded with one electrode placed over the orbicular muscle below the right eye. Earlobe electrodes served as an online reference.

Data Analysis
To analyse electroencephalographic differences between participants with higher and lower accuracy of facial emotion recognition, we split the whole sample in half at the median of total accuracy and verified the difference between the halves with the Mann-Whitney test. In other words, we built two subsamples: a high-accuracy subsample (56 participants) and a low-accuracy subsample (56 participants). Then, we compared (using t-tests) the ERP amplitudes of participants with lower and higher accuracy at each millisecond. As we had only two groups, we could afford comparisons at each millisecond, which identified the time windows of the waves more precisely. Pairwise comparisons employed the false discovery rate correction for multiple comparisons [59]. The same procedure was applied after dividing the whole sample in half at the median reaction time, yielding a fast-reaction subsample (56 participants) and a slow-reaction subsample (56 participants). Finally, we conducted a classification analysis of recognition speed on the basis of the brain waves, using K-nearest neighbours, a naïve Bayes classifier and automated neural networks. We predicted whether a participant belonged to the fast or the slow group from his or her amplitude of each wave, averaged across trials. The overall flow of the study is depicted in Figure 2.
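The per-millisecond group comparison with false discovery rate correction can be sketched as follows. The Benjamini-Hochberg adjustment is written out in NumPy, and the function names and simulated group arrays are ours, standing in for the real ERP amplitudes (participants × time):

```python
import numpy as np
from scipy.stats import ttest_ind

def fdr_bh(pvals):
    """Benjamini-Hochberg adjusted p-values."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    adjusted = np.empty(m)
    # p_(k) * m / k, then enforce monotonicity from the largest p downwards
    adjusted[order] = np.minimum.accumulate(
        (p[order] * m / np.arange(1, m + 1))[::-1])[::-1]
    return np.clip(adjusted, 0, 1)

def compare_groups(erp_a, erp_b):
    """Independent-samples t-test at every time sample, FDR-corrected."""
    t, p = ttest_ind(erp_a, erp_b, axis=0)
    return t, fdr_bh(p)
```

Consecutive samples whose adjusted p-values stay below the threshold then form the reported time windows (e.g., 387-428 ms).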
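As an illustration of the classification step, a minimal leave-one-out K-nearest-neighbours classifier (one of the three methods the study compared) can be written in plain NumPy. The toy feature matrix stands in for the trial-averaged wave amplitudes, and the function names are ours:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Predict a label by majority vote among the k nearest training points."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()

def loo_accuracy(features, labels, k=3):
    """Leave-one-out accuracy: classify each participant from all the others."""
    n = len(labels)
    hits = 0
    for i in range(n):
        mask = np.arange(n) != i
        hits += knn_predict(features[mask], labels[mask], features[i], k) == labels[i]
    return hits / n
```

Here `features` would hold one row per participant (e.g., posterior P2 and LPP amplitudes) and `labels` the fast/slow group coded as 1/0.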

Results
We found a negative correlation between the accuracy and the time of facial emotion recognition (rs = −0.82, p < 0.001): the later the recognition of a stimulus was, the less accurate it was. The division of the whole sample in half at the medians of accuracy and reaction time was successful. Participants who were more accurate in total turned out to be more accurate in the recognition of all emotional expressions (only the difference in the accuracy of surprise recognition was marginal, Table 1). Participants who responded faster in total were faster at the recognition of each emotional expression (Table 2).
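The reported rank correlation can be computed as shown below. The per-participant accuracy and reaction-time vectors here are hypothetical toy values, not the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

# hypothetical per-participant scores (not the study's data): higher total
# accuracy tends to go with shorter verbal reaction times
accuracy = np.array([0.55, 0.61, 0.68, 0.72, 0.80, 0.86])
reaction_time = np.array([2.9, 2.6, 2.1, 2.2, 1.7, 1.4])  # seconds

rho, p = spearmanr(accuracy, reaction_time)
print(f"rs = {rho:.2f}")  # prints "rs = -0.94" for these toy values
```

Spearman's rs is appropriate here because it captures any monotone (not necessarily linear) relation between accuracy and speed.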

The grand-average ERP at temporal sites (FT9, FT10, FC5, FC6, T3, T4, T5, T6, CP5, CP6, TP9 and TP10) and at posterior sites (P3, Pz, P4, O1, Oz and O2) in response to dynamic facial expressions (all subjects, all trials, means ± SDs) revealed some typical waves, like the N170, P2, EPN and LPP, in the temporal and posterior regions.
Topographic maps (Figure 4) show activity of opposite polarity over the anterior and posterior regions. The topoplot of ERP while watching neutral faces reflects the distribution of the N170 at 170 ms, the posterior P2 at 200 ms, and the EPN at 500 ms. The topoplot of ERP in response to the gradual appearance of an emotional expression depicts another posterior P2 at 300 ms, the EPN at 800 ms, and the LPP at 1300 ms.

ERP Related to Individual Differences in Facial Emotion Recognition
We found an effect of accuracy on the posterior ERP: participants with a lower level of accuracy displayed a larger P2 amplitude (387-428 ms), ps < 0.048, ds = 0.36-0.38 (Figure 5). We did not observe any significant effect of accuracy on the frontal, central or temporal ERP (ps > 0.055). We also found an effect of reaction time on the posterior ERP: participants with faster correct answers displayed larger amplitudes of the P2 (235-342 ms, ds = 0.36-0.42) and the LPP (1031-1199 ms, ds = 0.33-0.46), all ps < 0.041 (Figure 6). Finally, we found an effect of reaction time on the temporal ERP: participants with faster correct answers displayed a larger amplitude of the LPP (1064-1199 ms), all ps < 0.044. We did not observe any significant effect of reaction time or accuracy on the frontal and central ERP (ps > 0.055).



Classification Analysis
The classification analysis revealed that automated neural networks displayed the best result among the methods we applied (Table 3), with a prediction accuracy of 75.86%. K-nearest neighbours showed the worst result.

Table 3. The classification accuracy of the recognition speed on the basis of the posterior brain waves.



Discussion
The aim of our study was to investigate event-related potentials of EEG during the emotion recognition of dynamic facial expressions. To make our paradigm more naturalistic, we first demonstrated a neutral face for 500 ms and then presented a gradual change to a basic facial emotional expression during 462 ms. This permitted us to obtain diverse ERP, typically found in studies of facial emotion recognition.
We recorded verbal reactions and reproduced differences in recognition accuracy between all basic facial emotional expressions, found by Kosonogov and Titova [50]. The correlation between accuracy and reaction time was negative. Stimuli that were recognised more accurately, were recognised faster. This finding is in accordance with previous studies on facial emotion recognition [60,61].
We registered some ERP typically found during facial emotion observation and recognition. After visual inspection, we identified them as the N170, P2, EPN and LPP; they were prominent at temporal and posterior sites. Our naturalistic paradigm consisted of two parts. During the presentation of neutral faces for 500 ms, we found the P1, N170, P2 and EPN. Then, the gradual change to a basic facial emotional expression (11 frames, 462 ms in total) provoked the N170, P2, EPN and LPP. We admit that the first part (the perception of the neutral face) may have contaminated the ERP of the second part (the perception of a facial emotional expression). However, we wanted to build naturalistic stimuli, which is why we always presented a neutral face first. During the presentation of the neutral faces, these waves appeared at their typical latencies.
As for ERP during the gradual change of basic facial emotional expressions, the latencies were longer. Thus, we regarded the wave between 201 and 500 ms as P2 and the wave between 501 and 1000 ms as the EPN. This discrepancy could be explained by the prolonged presentation of dynamic stimuli. In other words, the presentation of faces during 462 ms could make ERP appear later and last longer.
Similar to previous studies [15,48,62], we observed the N170 at temporal and posterior sites, both during the perception of neutral faces and in response to the gradual change to a basic facial emotional expression, although it was more salient for the neutral faces. Our paradigm did not permit registering a clear N170 in response to the gradual appearance of a basic facial emotional expression, and we did not find any between-subject differences in the N170. A possible explanation is that this time window was already contaminated by the initial neutral faces.
We also revealed a positive wave after the N170 (both during the perception of neutral faces and during dynamic emotional expressions), which we considered to be the P2 [63]. Rossignol et al. [64] also found a P2 in response to facial expressions, although those faces served as primes and were not explicitly recognised. Balconi and Carrera [65] demonstrated that the P2, during the comparison of voices and facial expressions, was larger when the two types of stimulation were congruent. The P2 has been referred to as a correlate of attentional processes [29,66] and has been related to the early discrimination of stimuli [67]. In particular, potentials between 200 and 300 ms may reflect attentional capture by emotional stimuli [68]. Generally speaking, stimuli displaying people are more salient and attract attention (e.g., [69]). Notably, Carretié et al. [70] found that faces capture attention to a greater extent than scenes. A recent study showed that any emotional social content (not only faces, but also scenes) modulates the anterior P2 [71]. However, it is important to note that our paradigm generated only a salient posterior, but not anterior, P2.
Between 501 and 1100 ms, we registered a negative wave during the perception of dynamic emotional expressions, which we considered to be the early posterior negativity. It reflects natural selective attention to emotional stimuli [25,72]. Bayer and Schacht [73] found a similar EPN in response to emotional faces, scenes and words. Eimer et al. [74] demonstrated an enhanced early negativity at lateral temporal and occipital electrodes for emotional relative to neutral faces.
Finally, our paradigm permitted us to observe the LPP 1100 ms after the onset of the gradual appearance of a basic facial emotional expression. LPP amplitude can be interpreted as the allocation of attentional resources towards emotional stimuli of different characteristics [75]. Thus, Foti et al. [76] showed a larger amplitude to emotional expressions relative to neutral ones. Vico et al. [77] found larger amplitudes of the late ERP waves, P3 and LPP, when participants viewed their loved ones' faces in comparison with unknown or famous faces. In a study by Choi et al. [78], highly empathic people exhibited a larger LPP amplitude in response to facial expressions.
Similar to Recio et al. [46], who presented dynamic facial expressions, we identified the EPN and LPP, although in our case their latencies were longer. While those researchers found them at 200-250 and 400-600 ms, the latencies in our study were 501-1100 and 1101-1500 ms. We believe that this discrepancy can be explained by the duration of the gradual change to a basic facial emotional expression. In their study, dynamic morphs consisted of three frames, the final of which (100% of the emotion) was presented at 100 ms, whereas we consecutively presented 11 frames for 42 ms each, so the final frame appeared only at 450 ms.
We also compared the event-related potentials of participants with different performance in facial emotion recognition. Participants with low accuracy exhibited larger amplitudes of the posterior P2 (387-428 ms), which may reflect the attentional resources required for recognition. Of note, our EEG analysis embraced only trials with correct responses; hence, participants with lower accuracy allocated their attention in such a way as to recognise the emotional expressions correctly. This is in accordance with the attentional allocation framework, which relates successful behavioural performance to an increased amplitude of attentional ERP waves [79,80]. For participants with high accuracy, such a task is, supposedly, easier and more habitual. In addition, participants with faster correct responses displayed larger amplitudes of the P2 and LPP after the beginning of the gradual appearance of a facial emotional expression. Therefore, faster, but still correct, recognition of emotional expressions required a greater allocation of processing resources. In other words, dynamic facial stimuli provoked a greater allocation of attention, which helped to recognise emotions faster.
Further research could be directed at constructing different designs to capture subtle physiological differences between all basic facial emotional expressions. We admit, as a limitation, that our design did not allow the examination of ERP differences between facial emotional expressions (only 144 trials for six emotional expressions). For this purpose, one could present many more stimuli (30-50 per emotion, which would make the study much longer) or limit the set to, for instance, three basic facial emotional expressions. In our study, the basic emotions differed considerably in the accuracy of their recognition. This reflects typical differences in facial emotion recognition, but it entailed different numbers of correct-response trials for the different basic emotions. Hence, another future direction could lie in presenting a stimulus database ranked by accuracy, so that each basic facial emotional expression would be recognised in a certain number of cases. This would also improve the signal-to-noise ratio, because in our study we had to delete 64.6% of the trials. We also suppose that the forced choice among six facial emotional expressions could have increased the cognitive load of our participants [81]. At the same time, however, we opted to build a more ecological paradigm that would embrace all six basic facial emotional expressions that can appear on people's faces in everyday life. Likewise, a design without an initial neutral face could be used to avoid the contamination effect of neutral faces; once again, however, this would not fit our intention to present naturalistic stimuli.
In general, the application of such naturalistic paradigms seems to be promising for psychiatric and neurological studies. Facial emotion recognition is influenced by many variables, like age [82], sex [83], the age of actors [84], ethnicity [85] and so on. Therefore, ERPs during the recognition of dynamic facial expressions could be modulated by these variables.
To conclude, we applied a more naturalistic and ecological paradigm to study facial emotion recognition. The participants watched 144 dynamic morphs depicting a gradual change from a neutral expression to a basic facial emotional expression and labelled those emotions verbally. We revealed some typical ERPs, like the N170, P2, EPN and LPP. Participants with lower accuracy exhibited a larger P2 during correct recognition. Participants with faster correct responses exhibited larger amplitudes of the P2 and LPP during correct recognition. The classification analysis achieved an accuracy of 76% in predicting which participants recognise emotional expressions quickly, on the basis of the amplitudes of the posterior P2 and LPP. These differences between the subsamples are supposed to reflect attentional allocation, that is, successful behavioural performance is related to an increased amplitude of attentional ERPs.
Author Contributions: Conceptualization and methodology, all authors; software, V.K.; data curation, V.K. and E.K.; writing-original draft preparation, V.K.; writing-review and editing, E.K. and E.V.; visualization, V.K.; supervision and project administration, E.V.; funding acquisition, V.K. All authors have read and agreed to the published version of the manuscript.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author.