Languages 2018, 3(4), 38; doi:10.3390/languages3040038

Auditory–Visual Speech Integration in Bipolar Disorder: A Preliminary Study
Psychology Program, Northern Cyprus Campus, Middle East Technical University (METU), KKTC, via Mersin 10, 99738 Kalkanlı, Güzelyurt, Turkey
Author to whom correspondence should be addressed.
Received: 4 July 2018 / Accepted: 15 October 2018 / Published: 17 October 2018


Abstract

This study aimed to investigate how individuals with bipolar disorder integrate auditory and visual speech information compared to healthy individuals. Furthermore, we wanted to see whether there were any differences between patients in the manic and the depressive episodes of bipolar disorder with respect to auditory–visual speech integration. It was hypothesized that the bipolar group's auditory–visual speech integration would be weaker than that of the control group. Further, it was predicted that those in the manic phase of bipolar disorder would integrate visual speech information more robustly than their depressive-phase counterparts. To examine these predictions, a McGurk effect paradigm with an identification task was used with typical auditory–visual (AV) speech stimuli. Additionally, auditory-only (AO) and visual-only (VO, lip-reading) speech perception were tested. The dependent variable for the AV stimuli was the amount of visual speech influence. The dependent variables for the AO and VO stimuli were accurate modality-based responses. Results showed that the disordered and control groups did not differ in AV speech integration or AO speech perception. However, there was a striking difference in favour of the healthy group with respect to the VO stimuli. The results suggest the need for further research in which behavioural and physiological data are collected simultaneously. This will help us understand the full dynamics of how auditory and visual speech information are integrated in people with bipolar disorder.
Keywords: auditory–visual speech perception; auditory–visual speech integration; bipolar disorder; speech perception

1. Introduction

Speech perception is not solely an auditory phenomenon but an auditory–visual (AV) process. This was first empirically demonstrated in noisy listening conditions by Sumby and Pollack (1954), and later in clear listening conditions by what is called the McGurk effect (McGurk and MacDonald 1976). In a typical demonstration of the McGurk effect, the auditory syllable /ba/ dubbed onto the lip movements for /ga/ is perceived as /da/ or /tha/ by most native English speakers. This illusory effect unequivocally shows that speech perception involves the processing of visual speech information in the form of orofacial (lip and mouth) movements. Not only did McGurk and MacDonald (1976) demonstrate the role of visual speech information in clear listening conditions but, more importantly, their paradigm has come to be used as a widespread research tool that measures the degree to which visual speech information influences the resultant percept, that is, the degree of auditory–visual speech integration. The effect is stronger in some language contexts than others (e.g., Sekiyama and Tohkura 1993), showing considerable inter-language variation: in some languages, such as English, Italian (Bovo et al. 2009), and Turkish (Erdener 2015), the effect is observed robustly, whereas in others, such as Japanese and Mandarin (Sekiyama and Tohkura 1993; Sekiyama 1997; also Magnotti et al. 2015), it is not observed so readily. However, most, if not all, cross-language research in auditory–visual speech perception shows that the effect is stronger when we attend to a foreign or unfamiliar language rather than our native language, demonstrating that the perceptual system draws on more visual speech information to decipher unfamiliar speech input than it does for native speech. These cross-language differences in the strength of the McGurk effect (e.g., Sekiyama and Burnham 2008; Erdener and Burnham 2013) are coupled with developmental factors such as language-specific speech perception (see Burnham 2003).
The degree to which visual speech information is integrated into the auditory information also appears to be a function of age. While the (in)coherences between the auditory and visual speech components are detectable in infancy (Kuhl and Meltzoff 1982), the McGurk effect itself is also evident in infants (Burnham and Dodd 2004; Desjardins and Werker 2004). Furthermore, the influence of visual speech increases with age (McGurk and MacDonald 1976; Desjardins et al. 1997; Sekiyama and Burnham 2008). This age-based increase appears to be a result of a number of factors such as language-specific speech perception—the relative influence of native over non-native speech perception (Erdener and Burnham 2013).
Unfortunately, there is a paucity of research on how AV speech perception occurs in the context of psychopathology. In the domain of speech pathology and hearing, we know that children and adults with hearing problems tend to utilize visual speech information more than their hearing counterparts in order to enhance the incoming speech input (Arnold and Köpsel 1996). Using McGurk stimuli, Dodd et al. (2008) tested three groups of children: those with delayed phonological acquisition, those with phonological disorder, and those with normal speech development. The results showed that children with phonological disorder had greater difficulty in integrating auditory and visual speech information. These results show that the extent to which visual speech information is used has the potential to serve as an additional diagnostic and prognostic metric in the treatment of speech disorders. However, how AV speech integration occurs in those with mental disorders is almost completely uncharted territory. Only a few scattered studies with no clear common focus have been reported recently (presented below). The studies that recruited participants with mental disorders or developmental disabilities consistently demonstrated a lack of integration of auditory and visual speech information. Uncovering the mechanism of this integration in these special populations is of particular importance for both pure and applied science, not least for its potential to provide additional behavioural criteria and diagnostic and prognostic tools.
In one of the very few AV speech perception studies with clinical cases, schizophrenic patients showed difficulty in integrating visual and auditory speech information, and the amount of illusory experience was inversely related to age (Pearl et al. 2009; White et al. 2014). These AV speech integration differences between healthy and schizophrenic perceivers were shown to be salient at the cortical level as well. It was, for instance, demonstrated that while a silent speech (i.e., VO speech or lip-reading) condition activated the superior and inferior posterior temporal areas of the brain in healthy controls, the activation in these areas in their schizophrenic counterparts was significantly weaker (Surguladze et al. 2001; also see Calvert et al. 1997). In other words, while silent speech was perceived as speech by healthy individuals, for schizophrenic perceivers seeing orofacial movements in silent speech was no different from seeing any other object or event. The available, albeit limited, evidence also suggests that the problem in AV speech integration in schizophrenics was due to a dysfunction in the motor areas (Szycik et al. 2009). Such AV speech perception discrepancies were also found in other mental disorders. For instance, Delbeuck et al. (2007) reported deficits in AV speech integration in Alzheimer's disease patients, and Schelinski et al. (2014) found a similar result with a sample of individuals with Asperger's syndrome. In addition, Stevenson et al. (2014) found that the magnitude of the deficiency in AV speech integration was relatively negligible at earlier ages, with the difference becoming much greater with increasing age. A comparable developmental pattern was also found with a group of children with developmental language disorder (Meronen et al. 2013). In this investigation we attempted to study the process of AV speech integration in the context of bipolar disorder, a disorder characterized by alternating and contrastive episodes of mania and depression.
While bipolar individuals in the manic stage can focus on tasks at hand in a rather excessive and extremely goal-directed way, those in the depressive episode display almost completely opposite behavioural patterns (Goodwin and Sachs 2010). The paucity of data from clinical populations prevents us from advancing literature-based, clear-cut hypotheses, so we adopted the following approach: (1) to preliminarily investigate the status of AV speech perception in bipolar disorder; and (2) to determine whether any differences exist between bipolar-disordered individuals in the manic versus the depressive episode. Arising from this approach, we hypothesized that: (1) based on previous research with other clinical groups, the control group should give more visually-based/integrated responses to the AV McGurk stimuli than their bipolar-disordered counterparts; and (2) if auditory and visual speech information are fused at the behavioural level as a function of attentional focus and excessive goal-directed behaviour (Goodwin and Sachs 2010), then bipolar participants in the manic episode should give more integrated responses than the depressive subgroup. We based this latter hypothesis also on anecdotal evidence (in the absence of empirical observations about attentional processes in bipolar disorder), as reported by several participants, that patients are almost always focused on tasks of interest during a manic episode, whereas they report relatively impoverished attention to tasks during the depressive phase of the disorder.

2. Method

2.1. Participants

A total of 44 participants (14 females, 30 males; Mage = 28.8 years, SD = 10.2) were retained after excluding cases with missing data and outliers. The bipolar disorder sample consisted of 22 in-patients at Manisa Mental Health Hospital in Turkey. Of these, 12 were in a manic episode (Mage = 30.9 years, SD = 6.78) and 10 were in a depressive episode (Mage = 41.5 years, SD = 11.3) at the time of testing. All were adult-onset patients. As in-patients, all bipolar-disordered participants were on valproate- and/or lithium-based medications. A further 22 healthy participants (Mage = 21.8 years, SD = 1.26) were recruited from amongst volunteers at the Middle East Technical University, Northern Cyprus Campus. All participants were native speakers of Turkish with normal hearing and normal or corrected-to-normal vision. Furthermore, no current or previous psychiatric disorders were reported by the members of the control group. Written informed consent was obtained from each participant.

2.2. Materials and Procedure

The McGurk stimuli were created using words and non-words spoken by two native speakers of Turkish, one male and one female. This material was created for an earlier study (Erdener 2015). The raw stimuli were then edited into AV, auditory-only (AO), and visual-only (VO) stimuli. The AV stimuli were created by dubbing incongruent auditory components onto video components such that a fusion of the auditory and visual components would yield a real word (e.g., auditory /soba/ + visual /soga/, typically perceived as the fusion /soda/). This allowed us to recognize whether or not a perceiver's judgement was visually based.
The AO and VO stimuli were created by deleting the visual or auditory portions, respectively. All AO stimuli were real words. In total, there were 24 AV, 12 AO, and 12 VO stimuli. In the AO and VO conditions, each stimulus was presented twice to match the number of trials in the AV condition. Participants were instructed to “watch and listen” to the video file presented in each trial. The sound level was set at around 65 dB Sound Pressure Level (SPL). Testing sessions for the disordered participants were completed in a quiet, comfortable room provided by the hospital administration; the control participants, who were undergraduate students, were tested in a quiet testing room at the psychology laboratory at Middle East Technical University, Northern Cyprus Campus. The sound levels in both testing locations were measured using the Sound Meter application (Smart Tools Co.), which runs on mobile devices with the Android operating system (Google Inc., Mountain View, CA, USA). Responses were recorded manually by the experimenter; a computer-based response collection method did not appear feasible given the delicate nature of the experimental group, and manual recording avoided any potential task-related burden. To keep intrusion to a minimum, external loudspeakers were used to present the stimuli. The test phase was preceded by a familiarization trial in each experimental condition. None of the participants had any difficulty in comprehending or completing the task.

3. Results

Two sets of statistical analyses were conducted: a comparison of the disordered and control groups by means of t-tests, and a comparison of the three groups (manic-episode bipolar, depressive-episode bipolar, and control) using the non-parametric Kruskal–Wallis test, owing to the small subgroup sizes. Statistical analyses were conducted using IBM SPSS 24 (IBM, Armonk, NY, USA).

3.1. The t-Test Analyses

A series of independent t-tests were conducted on the AV, AO, and VO scores, comparing the bipolar-disordered and control groups. The homogeneity-of-variance assumption, as per Levene's test for equality of variances, was met for all measures except the AO scores (p = 0.029). Thus, for the AO variable, the statistics for unequal variances are reported. The results revealed no significant differences between the disordered and control groups in terms of the AV (t(42) = 0.227, p = 0.82) and AO scores (t(35.602) = −0.593, p = 0.56). However, in the VO condition, the control group performed better than their disordered counterparts (t(42) = −2.882, p < 0.005). The mean scores for these measures are presented in Figure 1.
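As an illustration of the correction applied to the AO comparison, Welch's t-statistic and its Welch–Satterthwaite degrees of freedom can be computed in a few lines of pure Python; the score lists below are hypothetical placeholders, not the study's data:

```python
import math
from statistics import mean, variance  # variance() is the sample (n - 1) variance

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom,
    used when Levene's test indicates unequal group variances."""
    se1 = variance(a) / len(a)  # squared standard error of group a's mean
    se2 = variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1 ** 2 / (len(a) - 1) + se2 ** 2 / (len(b) - 1))
    return t, df

# Hypothetical score lists (not the study's data):
t, df = welch_t([2, 4, 6], [1, 3, 5, 7, 9])
```

Note the fractional degrees of freedom, as in the reported t(35.602): the Welch–Satterthwaite approximation generally yields a non-integer df between min(n1, n2) − 1 and n1 + n2 − 2.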

3.2. The Kruskal–Wallis Tests

Given the small group sizes (n < 30), we used three Kruskal–Wallis non-parametric tests to analyse the AV, AO, and VO scores. The analyses of the AV (χ2 (2, n = 44) = 0.804, p = 0.669) and AO (χ2 (2, n = 44) = 0.584, p = 0.747) scores revealed no significant differences.
The Kruskal–Wallis analysis of the VO scores produced significant group differences (χ2 (2, n = 44) = 7.665, p = 0.022). As there is no standard post-hoc procedure for the Kruskal–Wallis test, we ran follow-up Mann–Whitney U tests on the VO scores. The comparisons between the manic and depressive bipolar groups (z (n = 22) = −0.863, p = 0.418) and between the manic bipolar and control groups (z (n = 34) = −1.773, p = 0.080) failed to reach significance. The third comparison, between the depressive bipolar and control groups, showed a significant difference (z (n = 32) = −2.569, p = 0.009). Figure 2 presents the mean data for each group on all measures.
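As a sketch of this follow-up procedure, the Mann–Whitney U statistic and its normal-approximation z can be computed by ranking the pooled scores (without a tie correction); the sample lists below are hypothetical, not the study's VO scores:

```python
import math

def mann_whitney_u(a, b):
    """Mann-Whitney U and a normal-approximation z (no tie correction),
    as used for pairwise follow-ups to a significant Kruskal-Wallis test."""
    pooled = sorted((x, grp) for grp, xs in enumerate((a, b)) for x in xs)
    # Assign average ranks to tied values.
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + 1 + j) / 2  # mean of the 1-based ranks i+1 .. j
        i = j
    r1 = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    n1, n2 = len(a), len(b)
    u1 = r1 - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)  # conventional U is the smaller of the two
    z = (u - n1 * n2 / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, z

# Hypothetical groups (not the study's data):
u, z = mann_whitney_u([1, 2, 3], [4, 5, 6])
```

With samples this small, an exact permutation p-value would normally be preferred over the normal approximation.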

4. Discussion

It was hypothesized that the bipolar disorder group would be less susceptible to the McGurk effect and would therefore exhibit weaker integration of auditory and visual speech information than the healthy control group. Furthermore, the bipolar manic group was predicted to integrate auditory and visual speech information to a greater extent than their depressive-phase counterparts if AV speech integration is a process occurring at the behavioural level that requires attentional resource allocation. The findings did not support the main hypothesis: there was no significant difference between the control group and the bipolar group in the degree of AV speech integration. Group-based differences were not observed with the AO stimuli either. On the other hand, lending partial support to the overall hypotheses, in the VO condition (virtually a lip-reading task) the control group performed overwhelmingly better than the bipolar group in the depressive episode, while the manic group did not differ significantly from the control group, as demonstrated in the analyses of both the combined and the separated bipolar groups' data. Further, the disordered subgroups did not differ from each other significantly on any of the measures. To sum up, the only significant result was that the control group performed significantly better than the depressive-episode bipolar group in the VO condition, but not in the AO or AV conditions as had been predicted. Although this seems to present an ambiguous picture, surely warranting further scrutiny, these results present us with a number of intriguing possibilities for interpretation. These are presented below.
In a normal integration process, the stimuli coming from the auditory and visual modalities are integrated on a number of levels, such as the behavioural (i.e., phonetic vs. phonological levels of integration; see Bernstein et al. 2002) or cortical (Campbell 2007) levels. The finding that the control group did not differ from the disordered groups with respect to AV speech integration or AO perception, but did differ on the VO measure, leaves us with the question of why visual-only (or lip-reading) information acts differently in the disordered group.
Given that there was no difference between the groups, disordered or not, with respect to AV speech integration, it seems that visual information is somehow integrated with the auditory information into a resultant percept whether the perceiver is disordered or not. This may suggest that the source of the integration is not limited to the behavioural level but extends to cortical and/or other levels, thus calling for scrutiny on multiple levels. That is, we need to look at both behavioural and cortical responses to the AV stimuli to understand how integration works in bipolar individuals. There are suggestions and models in the literature that explain how AV speech integration may occur in the non-disordered population. Some studies claim that integration occurs at the cortical level, independent of other levels (Campbell 2007), while others argue in favour of a behavioural locus for the integration process, whether phonetic (e.g., Burnham and Dodd 2004) or phonological. An example of the latter is Massaro's Fuzzy Logical Model of Perception, in which both auditory and visual signals are evaluated on the basis of their probabilistic values and then integrated based on those values, leading to a percept (e.g., Massaro 1998). However, given that no difference between the disordered and control groups' responses to the AV and AO stimuli was present here, no known model seems to explain the differences in VO performance between (and amongst) these groups.
Thus, how do bipolar disordered perceivers, unlike people with other disorders such as schizophrenia (White et al. 2014) or Alzheimer’s disease (Delbeuck et al. 2007) still integrate auditory and visual speech information whilst having difficulty perceiving VO speech information (or lipreading)? On the basis of the data we have here, one can think of three broad possibilities as to how AV speech integration occurs, yet the visual speech information alone fails to be processed. One possibility is that, given that speech perception is primarily an auditory phenomenon, the visual speech information is carried somehow alongside the auditory information without any integrative process, as some data suggest in auditory-dominant models (Altieri and Yang 2016). One question that remains with respect to the perception of AV speech in the bipolar disorder context is how the visual speech information is integrated. It appears that, given the relative failure of visual-only speech processing in bipolar disordered individuals, further research is warranted to unearth the mechanism through which the visual speech information is eventually and evidently integrated.
Such mechanisms are, as we have so far seen in AV speech perception research, multi-faceted, thus calling for multi-level scrutiny, e.g., investigations at both the behavioural and cortical levels. The finding that non-disordered individuals treat VO speech information as speech input whilst bipolar individuals do not seem to (at least in relative terms) raises the question of whether bipolar individuals in fact have issues with the visual speech input per se. Given that there was no difference with respect to the AV stimuli, the auditory-dominance models (e.g., Altieri and Yang 2016) may account for the possibility that visual speech information arriving with auditory speech information is somehow undermined while the resultant percept is still intact, as is the case with non-disordered individuals. In fact, VO speech input is treated as speech in the healthy population. Using fMRI, Calvert and colleagues demonstrated that VO (lip-read) speech without any auditory input activated several areas of the cortex whose activation is normally associated with auditory speech input, particularly the superior temporal gyrus. Even more intriguingly, they also found that no speech-related areas were active in response to faces performing non-speech movements (Calvert et al. 1997). Thus, VO information is converted to an auditory code in healthy individuals, and, as far as our data suggest, at a behavioural level this does not occur in bipolar individuals. On the basis of our data and Calvert et al.'s findings, we may suggest (or speculate) that the way visual speech information is integrated in bipolar individuals is most likely different from the way it occurs in healthy individuals, owing to the different behavioural and/or cortical processes engaged. To understand how visual information is processed, we need behavioural and cortical data obtained in real time, simultaneously, and in response to the same AV stimuli.
An understanding of how visual speech information is processed both with and without auditory input will provide us with a better understanding of how speech perception processes occur in this special population and thus will pave the way to developing finer criteria for both diagnostic and prognostic processes.
Unfortunately, for reasons beyond our control, time and resources were limited, restricting us to a small sample size. Still, the data allow us to advance the following hypothesis for subsequent studies: given the impoverished status of VO speech perception in the bipolar groups alongside eventual AV speech integration, a compensatory mechanism at both the behavioural and cortical levels may be at work. To understand this mechanism, as suggested above, responses to McGurk stimuli at both the behavioural and cortical levels must be obtained using known as well as new types of auditory–visual speech stimuli (Jerger et al. 2014).

Author Contributions

Formal analysis, D.E.; Investigation, A.Y.; Methodology, A.Y. and D.E.; Project administration, A.Y.; Supervision, D.E.; Writing—original draft, D.E.; Writing—review & editing, A.Y. and D.E.


Funding

This research received no external funding.


Acknowledgments

We would like to thank the Manisa Mental Health Hospital administration and all participants for their support. The constructive comments and suggestions by the anonymous reviewers are also greatly appreciated.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. Altieri, Nicholas, and Cheng-Ta Yang. 2016. Parallel linear dynamic models can mimic the McGurk effect in clinical populations. Journal of Computational Neuroscience 41: 143–55.
  2. Arnold, Paul, and Andrea Köpsel. 1996. Lip-reading, reading and memory of hearing and hearing-impaired children. Scandinavian Audiology 25: 13–20.
  3. Bernstein, Lynne E., Denis Burnham, and Jean-Luc Schwartz. 2002. Special session: Issues in audiovisual spoken language processing (when, where, and how?). In Proceedings of the International Conference on Spoken Language Processing, Denver, CO, USA, September 16–20. Edited by John H. L. Hansen and Bryan L. Pellom. Baixas: ISCA Archive, Volume 3, pp. 1445–48.
  4. Bovo, Roberto, Andrea Ciorba, Silvano Prosser, and Alessandro Martini. 2009. The McGurk phenomenon in Italian listeners. Acta Otorhinolaryngologica Italica 29: 203–8.
  5. Burnham, Denis. 2003. Language specific speech perception and the onset of reading. Reading and Writing 16: 573–609.
  6. Burnham, Denis, and Barbara Dodd. 2004. Auditory-visual speech integration by prelinguistic infants: Perception of an emergent consonant in the McGurk effect. Developmental Psychobiology 45: 204–20.
  7. Calvert, Gemma A., Edward T. Bullmore, Michael J. Brammer, Ruth Campbell, Steven C. R. Williams, Philip K. McGuire, Peter W. R. Woodruff, Susan D. Iversen, and Anthony S. David. 1997. Activation of auditory cortex during silent lipreading. Science 276: 593–96.
  8. Campbell, Ruth. 2007. The processing of audio-visual speech: Empirical and neural bases. Philosophical Transactions of the Royal Society B 363: 1001–10.
  9. Delbeuck, Xavier, Fabienne Collette, and Martial Van der Linden. 2007. Is Alzheimer's disease a disconnection syndrome? Evidence from a cross-modal audio-visual illusory experiment. Neuropsychologia 45: 3315–23.
  10. Desjardins, Renée N., and Janet F. Werker. 2004. Is the integration of heard and seen speech mandatory for infants? Developmental Psychobiology 45: 187–203.
  11. Desjardins, Renée N., John Rogers, and Janet F. Werker. 1997. An exploration of why preschoolers perform differently than do adults in audiovisual speech perception tasks. Journal of Experimental Child Psychology 66: 85–110.
  12. Dodd, Barbara, Beth McIntosh, Doğu Erdener, and Denis Burnham. 2008. Perception of the auditory-visual illusion in speech perception by children with phonological disorders. Clinical Linguistics & Phonetics 22: 69–82.
  13. Erdener, Doğu. 2015. Türkçede McGurk İllüzyonu/The McGurk Illusion in Turkish. Turkish Journal of Psychology 30: 19–27.
  14. Erdener, Doğu, and Denis Burnham. 2013. The relationship between auditory–visual speech perception and language-specific speech perception at the onset of reading instruction in English-speaking children. Journal of Experimental Child Psychology 116: 120–38.
  15. Goodwin, Guy, and Gary Sachs. 2010. Fast Facts: Bipolar Disorder, 2nd ed. Abingdon: Health Press.
  16. Jerger, Susan, Markus F. Damian, Nancy Tye-Murray, and Hervé Abdi. 2014. Children use visual speech to compensate for non-intact auditory speech. Journal of Experimental Child Psychology 126: 295–312.
  17. Kuhl, Patricia K., and Andrew N. Meltzoff. 1982. The bimodal perception of speech in infancy. Science 218: 1138–41.
  18. Magnotti, John F., Debshila Basu Mallick, Guo Feng, Bin Zhou, Wen Zhou, and Michael S. Beauchamp. 2015. Similar frequency of the McGurk effect in large samples of native Mandarin Chinese and American English speakers. Experimental Brain Research 233: 2581–86.
  19. Massaro, Dominic W. 1998. Perceiving Talking Faces: From Speech Perception to a Behavioral Principle. Cambridge: The MIT Press.
  20. McGurk, Harry, and John MacDonald. 1976. Hearing lips and seeing voices. Nature 264: 746–48.
  21. Meronen, Auli, Kaisa Tiippana, Jari Westerholm, and Timo Ahonen. 2013. Audiovisual speech perception in children with developmental language disorder in degraded listening conditions. Journal of Speech, Language, and Hearing Research 56: 211–21.
  22. Pearl, Doron, Dorit Yodashkin-Porat, Nachum Katz, Avi Valevski, Dov Aizenberg, Mayanit Sigler, Abraham Weizman, and Leonid Kikinzon. 2009. Differences in audiovisual integration, as measured by McGurk phenomenon, among adult and adolescent patients with schizophrenia and age-matched healthy control groups. Comprehensive Psychiatry 50: 186–92.
  23. Schelinski, Stefanie, Philipp Riedel, and Katharina von Kriegstein. 2014. Visual abilities are important for auditory-only speech recognition: Evidence from autism spectrum disorder. Neuropsychologia 65: 1–11.
  24. Sekiyama, Kaoru. 1997. Audiovisual speech perception and its cross-language differences. Japanese Journal of Psychonomic Science 15: 122–27.
  25. Sekiyama, Kaoru, and Denis Burnham. 2008. Impact of language on development of auditory–visual speech perception. Developmental Science 11: 306–20.
  26. Sekiyama, Kaoru, and Yoh'ichi Tohkura. 1993. Cross-language differences in the influence of visual cues in speech perception. Journal of Phonetics 21: 427–44.
  27. Stevenson, Ryan A., Justin Siemann, Tiffany G. Woynaroski, Brittany C. Schneider, Haley E. Eberly, Stephen Camarata, and Mark T. Wallace. 2014. Brief report: Arrested development of audiovisual speech perception in autism spectrum disorders. Journal of Autism and Developmental Disorders 44: 1470–77.
  28. Sumby, W. H., and Irwin Pollack. 1954. Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26: 212–15.
  29. Surguladze, Simon A., Gemma A. Calvert, Michael J. Brammer, Ruth Campbell, Edward T. Bullmore, Vincent Giampietro, and Anthony S. David. 2001. Audio-visual speech perception in schizophrenia: An fMRI study. Psychiatry Research: Neuroimaging 106: 1–14.
  30. Szycik, Gregor R., Thomas F. Münte, Wolfgang Dillo, Bahram Mohammadi, Amir Samii, and Detlef E. Dietrich. 2009. Audiovisual integration of speech is disturbed in schizophrenia: An fMRI study. Schizophrenia Research 110: 111–18.
  31. White, Thomas, Rebekah L. Wigton, Dan W. Joyce, Tracy Bobin, Christian Ferragamo, Nisha Wasim, Stephen Lisk, and Sukhi Shergill. 2014. Eluding the illusion? Schizophrenia, dopamine and the McGurk effect. Frontiers in Human Neuroscience 8: 565.
Figure 1. Mean auditory–visual (AV; (a)), auditory-only (AO; (b)), and visual-only (VO; (c)) scores for the combined bipolar group and the control group. Error bars represent the standard error of the mean.
Figure 2. Mean auditory–visual (AV; (a)), auditory-only (AO; (b)), and visual-only (VO; (c)) scores for the manic- and depressive-episode bipolar subgroups and the control group. Error bars represent the standard error of the mean.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.