
Open Access Article
Entropy 2018, 20(7), 526;

Universal Features in Phonological Neighbor Networks

Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
Department of Physics, University of Connecticut, Storrs, CT 06269, USA
Institute for Systems Genomics, University of Connecticut, Storrs, CT 06269, USA
Connecticut Institute for the Brain & Cognitive Sciences, Storrs, CT 06269, USA
Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
Department of Physical Therapy and Athletic Training, Boston University, Boston, MA 02215, USA
Department of Psychology, University of Western Ontario, London, ON N6A 5C2, Canada
Brain & Mind Institute, University of Western Ontario, London, ON N6A 5C2, Canada
Author to whom correspondence should be addressed.
Received: 22 May 2018 / Revised: 29 June 2018 / Accepted: 10 July 2018 / Published: 12 July 2018
(This article belongs to the Section Complexity)


Human speech perception involves transforming a continuous acoustic signal into discrete linguistically meaningful units (phonemes) while simultaneously causing a listener to activate words that are similar to the spoken utterance and to each other. The Neighborhood Activation Model posits that phonological neighbors (two forms [words] that differ by one phoneme) compete significantly for recognition as a spoken word is heard. This definition of phonological similarity can be extended to an entire corpus of forms to produce a phonological neighbor network (PNN). We study PNNs for five languages: English, Spanish, French, Dutch, and German. Consistent with previous work, we find that the PNNs share a consistent set of topological features. Using an approach that generates random lexicons with increasing levels of phonological realism, we show that even random forms with minimal relationship to any real language, combined with only the empirical distribution of language-specific phonological form lengths, are sufficient to produce the topological properties observed in the real language PNNs. The resulting pseudo-PNNs are insensitive to the level of linguistic realism in the random lexicons but quite sensitive to the shape of the form length distribution. We therefore conclude that “universal” features seen across multiple languages are really string universals, not language universals, and arise primarily due to limitations in the kinds of networks generated by the one-step neighbor definition. Taken together, our results indicate that caution is warranted when linking the dynamics of human spoken word recognition to the topological properties of PNNs, and that the investigation of alternative similarity metrics for phonological forms should be a priority.
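The one-step neighbor definition described above (two forms are neighbors if they differ by a single phoneme substitution, insertion, or deletion) can be sketched in a few lines. This is a minimal illustration, not the authors' code; the toy lexicon and ARPABET-style transcriptions are invented for the example.

```python
from itertools import combinations

def one_phoneme_apart(a, b):
    """True if forms a, b (tuples of phonemes) differ by exactly one
    substitution, insertion, or deletion of a single phoneme."""
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:
        # same length: neighbors iff exactly one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    # lengths differ by one: make a the shorter form, then check that
    # deleting one phoneme from b yields a
    if la > lb:
        a, b = b, a
    i = 0
    while i < len(a) and a[i] == b[i]:
        i += 1
    return a[i:] == b[i + 1:]

def build_pnn(lexicon):
    """Adjacency list for the PNN: form -> set of phonological neighbors."""
    edges = {form: set() for form in lexicon}
    for u, v in combinations(lexicon, 2):
        if one_phoneme_apart(u, v):
            edges[u].add(v)
            edges[v].add(u)
    return edges

# Toy lexicon: cat, bat, cab, at, cats (phoneme tuples, illustrative only)
lexicon = [("K", "AE", "T"), ("B", "AE", "T"), ("K", "AE", "B"),
           ("AE", "T"), ("K", "AE", "T", "S")]
pnn = build_pnn(lexicon)
```

In this toy lexicon, "cat" is linked to "bat" (substitution), "at" (deletion), and "cats" (insertion), which gives it a high degree; applying the same pairwise check over a full corpus yields the language-scale PNNs whose topology the article analyzes. The all-pairs loop is quadratic in lexicon size, which is workable for real lexicons of tens of thousands of forms but can be accelerated with length-bucketed indexing.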
Keywords: networks; neighborhood activation model; phonology; phonological neighbor network

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article

MDPI and ACS Style

Brown, K.S.; Allopenna, P.D.; Hunt, W.R.; Steiner, R.; Saltzman, E.; McRae, K.; Magnuson, J.S. Universal Features in Phonological Neighbor Networks. Entropy 2018, 20, 526.



Entropy EISSN 1099-4300, published by MDPI AG, Basel, Switzerland