
What Limits Working Memory Performance?

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (31 October 2020) | Viewed by 17630

Special Issue Editors


Prof. Dr. Alessandro Treves
Guest Editor
SISSA, Cognitive Neuroscience, Via Bonomea 265, I-34136 Trieste, Italy
Interests: neural computation; memory; brain organization

Dr. Yair Lakretz
Guest Editor
Cognitive Neuroimaging Unit, NeuroSpin Center, 91191 Gif-sur-Yvette, France
Interests: natural language processing; neural computation; machine learning

Special Issue Information

Dear Colleagues,

Common wisdom tells us that working memory is severely limited in capacity (for example, the “magical” number seven for digit span), perhaps because of hard biophysical constraints, as suggested by the typical few seconds of retention time for verbal material. The experimental evidence, however, is complex, and stands in a complicated relation to information theory. George Miller noted that while humans can typically convey only about log2(7) bits in unidimensional judgements, our short-term memory span can be much longer if information is organized in chunks. Venerable mnemonic techniques, like the method of loci, can train us to recode and reach well beyond our naive short-term information capacity. So, is a general information-theoretic account of working memory possible? How constrained would it be by cortical circuitry? Any theoretical or theory-framed experimental contribution to these questions is welcome in this Special Issue, including evidence obtained in animal studies or from simulations of plausible memory networks.
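As a back-of-the-envelope illustration of the gap Miller pointed to, the short Python sketch below contrasts the roughly log2(7) ≈ 2.8 bits resolved by a single unidimensional judgement with the information content of a seven-digit span, and shows how recoding, as in Miller's binary-to-octal example, reduces the number of items to be held without changing the total information. The figures are illustrative, not data from the contributions below.

    import math

    # Miller's "magical number seven": a single unidimensional judgement
    # reliably distinguishes about 7 alternatives, i.e. ~2.8 bits.
    bits_per_judgement = math.log2(7)

    # A 7-digit span, by contrast, carries log2(10) bits per digit.
    digits = 7
    span_bits = digits * math.log2(10)   # ~23.3 bits

    # Chunking: recoding 21 binary digits into octal (3 bits per chunk)
    # keeps the total information fixed but cuts the item count to 7.
    binary_items = 21
    octal_chunks = binary_items / 3

    print(f"one judgement:  {bits_per_judgement:.1f} bits")
    print(f"7-digit span:   {span_bits:.1f} bits")
    print(f"21 binary digits recoded: {octal_chunks:.0f} chunks of 3 bits each")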

Prof. Dr. Alessandro Treves
Dr. Yair Lakretz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • phonological output buffer
  • information bottleneck
  • short-term plasticity
  • long-range dependencies
  • visuospatial sketchpad
  • articulatory loop

Published Papers (4 papers)


Research


16 pages, 2052 KiB  
Article
Working Memory Decline in Alzheimer’s Disease Is Detected by Complexity Analysis of Multimodal EEG-fNIRS
by David Perpetuini, Antonio Maria Chiarelli, Chiara Filippini, Daniela Cardone, Pierpaolo Croce, Ludovica Rotunno, Nelson Anzoletti, Michele Zito, Filippo Zappasodi and Arcangelo Merla
Entropy 2020, 22(12), 1380; https://doi.org/10.3390/e22121380 - 06 Dec 2020
Cited by 28 | Viewed by 3962
Abstract
Alzheimer’s disease (AD) is characterized by working memory (WM) failures that can be assessed at early stages through the administration of clinical tests. Ecological neuroimaging, such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), may be employed during these tests to support early AD diagnosis within clinical settings. Multimodal EEG-fNIRS can measure brain activity along with neurovascular coupling (NC) and detect their modifications associated with AD. Data analysis procedures based on signal complexity are suitable for estimating electrical and hemodynamic brain activity, or their mutual information (NC), during non-structured experimental paradigms. In this study, the sample entropy of whole-head EEG and frontal/prefrontal cortex fNIRS was evaluated to assess brain activity in early AD and healthy controls (HC) during WM tasks (i.e., the Rey–Osterrieth complex figure and Raven’s progressive matrices). Moreover, the conditional entropy between EEG and fNIRS was evaluated as indicative of NC. The findings demonstrated the capability of complexity analysis of multimodal EEG-fNIRS to detect WM decline in AD. Furthermore, a multivariate data-driven analysis, performed on these entropy metrics and based on the General Linear Model, allowed AD and HC to be classified with an AUC of up to 0.88. EEG-fNIRS may represent a powerful tool for the clinical evaluation of WM decline in early AD.
(This article belongs to the Special Issue What Limits Working Memory Performance?)
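Sample entropy, the complexity measure used above, is a standard quantity; as a rough sketch, the Python below implements SampEn(m, r) for a single 1-D signal with the usual Chebyshev-distance template matching. The parameter choices and the toy signals are assumptions for illustration, not the authors' analysis pipeline.

    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """Sample entropy SampEn(m, r) = -ln(A / B), where B counts pairs of
        length-m templates within tolerance r (Chebyshev distance) and A counts
        the same for length m + 1. Self-matches are excluded."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()   # common heuristic tolerance

        def count_matches(length):
            # All overlapping templates of the given length.
            templates = np.array([x[i:i + length]
                                  for i in range(len(x) - length + 1)])
            count = 0
            for i in range(len(templates) - 1):
                # Chebyshev distance from template i to all later templates.
                d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(d <= r)
            return count

        B = count_matches(m)
        A = count_matches(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    # Toy usage: a regular signal has lower SampEn than white noise.
    rng = np.random.default_rng(0)
    regular = np.sin(np.linspace(0, 20 * np.pi, 500))
    noise = rng.standard_normal(500)
    print(f"sine:  {sample_entropy(regular):.2f}")
    print(f"noise: {sample_entropy(noise):.2f}")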

35 pages, 4227 KiB  
Article
Professional or Amateur? The Phonological Output Buffer as a Working Memory Operator
by Neta Haluts, Massimiliano Trippa, Naama Friedmann and Alessandro Treves
Entropy 2020, 22(6), 662; https://doi.org/10.3390/e22060662 - 15 Jun 2020
Cited by 7 | Viewed by 3137
Abstract
The Phonological Output Buffer (POB) is thought to be the stage in language production where phonemes are held in working memory and assembled into words. The neural implementation of the POB remains unclear despite a wealth of phenomenological data. Individuals with POB impairment make phonological errors when they produce words and non-words, including phoneme omissions, insertions, transpositions, substitutions and perseverations. Errors can apply to different kinds and sizes of units, such as phonemes, number words, morphological affixes, and function words, and evidence from POB impairments suggests that units tend to be substituted with units of the same kind, e.g., numbers with numbers and whole morphological affixes with other affixes. This suggests that different units are processed and stored in the POB in the same stage, but perhaps separately in different mini-stores. Further, similar impairments can affect the buffer used to produce Sign Language, which raises the question of whether it is instantiated in a distinct device with the same design. However, what appear as separate buffers may be distinct regions in the activity space of a single extended POB network, connected with a lexicon network. The self-consistency of this idea can be assessed by studying an autoassociative Potts network, as a model of memory storage distributed over several cortical areas, and testing whether the network can represent both units of words and of signs, reflecting the types and patterns of errors made by individuals with POB impairment.
(This article belongs to the Special Issue What Limits Working Memory Performance?)
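For readers unfamiliar with the model class, a minimal autoassociative Potts network can be sketched in a few lines of Python: Hebbian covariance couplings store a few random Potts patterns, and greedy asynchronous updates retrieve one from a corrupted cue. This toy omits, among other things, the quiescent state and graded activity of the network studied in the article; the sizes and parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    N, S, P = 100, 5, 3   # units, Potts states per unit, stored patterns

    # Random Potts patterns: each unit takes one of S states in each pattern.
    patterns = rng.integers(0, S, size=(P, N))

    # Hebbian covariance couplings J[i, j, k, l]; no self-connections.
    delta = np.eye(S)
    J = np.zeros((N, N, S, S))
    for mu in range(P):
        a = delta[patterns[mu]] - 1.0 / S        # (N, S) covariance terms
        J += np.einsum('ik,jl->ijkl', a, a) / N
    J[np.arange(N), np.arange(N)] = 0.0

    def recall(cue, sweeps=10):
        # Greedy asynchronous dynamics: each unit adopts the state with the
        # largest local field given the current states of all other units.
        sigma = cue.copy()
        for _ in range(sweeps):
            for i in rng.permutation(N):
                h = J[i, np.arange(N), :, sigma].sum(axis=0)  # field h_i(k)
                sigma[i] = int(np.argmax(h))
        return sigma

    # Corrupt 30% of the units in pattern 0, then let the network retrieve it.
    cue = patterns[0].copy()
    flip = rng.choice(N, size=30, replace=False)
    cue[flip] = rng.integers(0, S, size=30)
    print("fraction of units correct:", np.mean(recall(cue) == patterns[0]))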

15 pages, 3741 KiB  
Article
Working Memory Training: Assessing the Efficiency of Mnemonic Strategies
by Serena Di Santo, Vanni De Luca, Alessio Isaja and Sara Andreetta
Entropy 2020, 22(5), 577; https://doi.org/10.3390/e22050577 - 20 May 2020
Viewed by 3854
Abstract
Recently, there has been increasing interest in techniques for enhancing working memory (WM), casting a new light on the classical picture of a rigid system. One reason is that WM performance has been associated with intelligence and reasoning, while its impairment has shown correlations with cognitive deficits; hence, the possibility of training it is highly appealing. However, results on WM changes following training are controversial, leaving it unclear whether it can really be potentiated. This study aims to assess changes in WM performance by comparing it with and without training by a professional mnemonist. Two groups, experimental and control, participated in the study, which was organized in two phases. In the morning, both groups were familiarized with the stimuli through an N-back task and then attended a two-hour lecture. For the experimental group, the lecture, given by the mnemonist, introduced memory encoding techniques; for the control group, it was a standard academic lecture about memory systems. In the afternoon, both groups were administered five tests, in which they had to remember the positions of 16 items when probed in random order. The results show much better performance in trained subjects, indicating the need to consider this possibility of enhancement, alongside general information-theoretic constraints, when theorizing about WM span.
(This article belongs to the Special Issue What Limits Working Memory Performance?)
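The N-back familiarization task mentioned in the abstract follows a standard design; purely as an illustration, the Python sketch below generates an n-back sequence and scores hit and false-alarm rates. The sequence length, target rate, and item set are assumed values, not the study's actual parameters.

    import random

    def nback_block(items, length=30, n=2, target_rate=0.3, seed=0):
        """Generate an n-back sequence: a stimulus is a 'target' when it
        matches the stimulus presented n trials earlier."""
        rng = random.Random(seed)
        seq = []
        for t in range(length):
            if t >= n and rng.random() < target_rate:
                seq.append(seq[t - n])       # deliberate n-back repeat
            else:
                seq.append(rng.choice(items))
        targets = [t >= n and seq[t] == seq[t - n] for t in range(length)]
        return seq, targets

    def score(responses, targets):
        """Hit rate and false-alarm rate for 'target' key presses."""
        hits = sum(r and t for r, t in zip(responses, targets))
        fas = sum(r and not t for r, t in zip(responses, targets))
        n_t = sum(targets)
        return (hits / n_t if n_t else 0.0, fas / (len(targets) - n_t))

    seq, targets = nback_block(items="ABCDEF", n=2)
    print(score(targets, targets))   # perfect responding -> (1.0, 0.0)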

Other


19 pages, 452 KiB  
Opinion
What Limits Our Capacity to Process Nested Long-Range Dependencies in Sentence Comprehension?
by Yair Lakretz, Stanislas Dehaene and Jean-Rémi King
Entropy 2020, 22(4), 446; https://doi.org/10.3390/e22040446 - 16 Apr 2020
Cited by 12 | Viewed by 6196
Abstract
Sentence comprehension requires inferring, from a sequence of words, the structure of the syntactic relationships that bind these words into a semantic representation. Our limited ability to build some specific syntactic structures, such as nested center-embedded clauses (e.g., “The dog that the cat that the mouse bit chased ran away”), suggests a striking capacity limitation of sentence processing, and thus offers a window onto how the human brain processes sentences. Here, we review the main hypotheses proposed in psycholinguistics to explain this capacity limitation. We then introduce an alternative approach, derived from our recent work on artificial neural networks optimized for language modeling, and predict that the capacity limitation derives from the emergence of sparse and feature-specific syntactic units. Unlike psycholinguistic theories, our neural-network-based framework provides precise capacity-limit predictions without making any a priori assumptions about the form of the grammar or parser. Finally, we discuss how our framework may clarify the mechanistic underpinnings of language processing and its limitations in the human brain.
(This article belongs to the Special Issue What Limits Working Memory Performance?)
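The capacity limit at issue can be made concrete with a toy stack model of center-embedding: each subject noun opens a subject-verb dependency that stays open until its verb arrives, so in a nested structure the number of simultaneously open dependencies equals the embedding depth. The Python sketch below hand-codes this bookkeeping for the example sentence; it illustrates the memory demand, not the authors' neural-network account.

    # Each subject noun opens a subject-verb dependency; its verb closes it.
    # In a nested center-embedded sentence all dependencies are open at once,
    # so the required memory grows with embedding depth.
    SUBJECTS = {"dog", "cat", "mouse"}
    VERBS = {"bit", "chased", "ran"}

    def max_open_dependencies(sentence):
        stack, deepest = [], 0
        for word in sentence.lower().split():
            if word in SUBJECTS:
                stack.append(word)             # dependency opened
                deepest = max(deepest, len(stack))
            elif word in VERBS:
                subject = stack.pop()          # dependency resolved
                print(f"  {subject} -> {word}")
        return deepest

    s = "The dog that the cat that the mouse bit chased ran away"
    print("max simultaneously open dependencies:", max_open_dependencies(s))
    # mouse -> bit, cat -> chased, dog -> ran; depth 3, at the edge of
    # what human comprehenders can reliably process.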
