Article

An Experimental Study on the Formation of Spatial Cognitive Maps in Humans

Institute of Exercise Training and Sport Informatics, German Sport University, 50927 Cologne, Germany
Appl. Sci. 2025, 15(13), 7234; https://doi.org/10.3390/app15137234
Submission received: 12 May 2025 / Revised: 4 June 2025 / Accepted: 5 June 2025 / Published: 27 June 2025
(This article belongs to the Topic The Computational Brain)

Featured Application

The outcome of the present study can help in the design of navigation aids and training programs for individuals with spatial orientation problems, such as older adults and persons with brain injury or early-stage dementia.

Abstract

This study investigated how cognitive maps of the environment are formed. During learning trials, participants encoded the spatial locations of objects in a virtual maze either through simulated movement within the maze (first-person perspective) or by inspecting a schematic map (survey perspective). During interleaved test trials, they indicated where the objects were on a schematic map (survey perspective). Response accuracy, averaged across objects and participants, increased gradually across test trials. At the level of individual participants and objects, however, accuracy improved abruptly. Furthermore, response accuracy was unaffected by the number of objects to be encoded. Notably, the speed of map formation and the absence of a set-size effect were comparable across the two encoding perspectives, even though first-person encoding required a transformation into the survey perspective for testing. Accuracy, in contrast, was lower with first-person than with survey encoding. These findings suggest that cognitive maps can be holistic rather than item-dependent representations that emerge in a locally abrupt fashion, regardless of the encoding perspective. Map accuracy, unlike emergence speed, can suffer when the encoding perspective differs from the test perspective.

1. Introduction

Finding our way through landscapes, cities, and buildings is a complex cognitive skill. It includes the integration of multimodal cues about the spatial layout of the environment and about one’s own motion through it, transformations between multiple spatial reference frames, spatial memory, goal-setting, route planning, and executive control [1,2]. Successful wayfinding often relies on internal representations of space called “cognitive maps” [3] or “survey knowledge” [4]. The term ‘cognitive map’ should not be interpreted literally, as an exact replica of the real world. Rather, like other cognitive phenomena, a cognitive map is a “coping tool… for handling tasks, which typically does not entail exact modeling of the world” [5]. Continuing with the “map” metaphor, behavioral and neurophysiological research indicates that cognitive maps can be distorted, influenced by assumptions, fragmented, poor in detail, and in some cases even non-metric [6,7]. Nonetheless, internal representations support flexible route planning, enabling individuals to take shortcuts, navigate around obstacles, and travel along routes they have never taken before.
Most studies have investigated cognitive map formation using alternating learning and test trials: during learning trials, participants explored the environment and encountered various objects; during test trials, they indicated the locations of these objects. In this research, response accuracy increased steadily from one test to the next [8,9,10], which seems to suggest that cognitive maps emerge gradually with practice. However, the observed gradual increase could actually be an averaging artifact: cognitive maps might emerge abruptly, but at differing times depending on the participant and/or on the tested environmental location, yielding an artificially smooth curve when aggregated. In what follows, this location-specific abrupt emergence will be named “locally abrupt”. Earlier research reported that individuals indeed vary widely in their spatial skills, depending on factors such as sex, cognitive abilities and personality [11,12,13]. Furthermore, some environmental objects are easier to localize than others, depending, e.g., on their familiarity and their distance from physical or perceived boundaries [14,15] which could be the basis for the locally abrupt emergence of a cognitive map. Finally, evidence for abrupt—albeit not locally abrupt—emergence is provided by the phenomenon of “reorientation”, where individuals experience abrupt transitions between feeling disoriented and feeling oriented [16,17]. All these findings are in accordance with the view that the observed gradual emergence of cognitive maps might be an averaging artifact.
The main purpose of the present work was to provide persuasive experimental evidence of whether cognitive maps form in a locally abrupt or in a gradual fashion. To this end, participants were given an alternating sequence of learning trials and test trials, as in earlier research, but their performance was analyzed separately for each combination of object and participant to avoid averaging artifacts. This analysis was anchored in “hits”, i.e., the first correct localization of a given object by a given participant. If locations in cognitive maps emerge abruptly, a hit should almost invariably be followed by another correct response to the same object by the same participant. If locations emerge gradually, the probability of a correct response after a hit should be similar to that after a non-hit.
A second purpose was to determine the relationship between the fidelity of cognitive maps and the number of environmental objects present. Distinctive objects, often called “landmarks”, facilitate internal representations of space since they serve as reference points for nearby locations [18] and for structuring the environment [19]. However, a larger number of objects might degrade those representations by elevating the cognitive load: more object–location bindings must be maintained in spatial memory, straining capacity and increasing the risk of interference. Indeed, the extant literature on human working memory documents that a larger number of items is more difficult to remember [20,21]. This “set-size effect” has also been documented in a wayfinding study [22]. Notably, however, participants in the latter study were asked to repeatedly follow a fixed route, a task that requires serial-order memory and associative memory [23] but not the use of a cognitive map. Whether there exists a set-size effect for cognitive maps, therefore, is still unknown and is investigated in this study by comparing participants’ performance in two environments containing a different number of objects.
The third purpose was to find out whether the speed of cognitive map formation (locally abrupt vs. gradual) and its set-size effect depend on the perspective from which spatial knowledge is acquired during learning trials. Several studies [24,25,26] have compared two encoding conditions. In one, participants walked or were passively transported through the environment and thus encoded spatial information from a first-person perspective; in the other condition, they inspected a cartographic map and thus encoded spatial information from a survey perspective. These studies reported that encoding from a first-person perspective resulted in better performance on tests based on a first-person perspective (judgments of path length and heading from one’s current position to an object) compared to tests based on a survey perspective (judgments of straight-line distances and directions between object pairs), while the opposite was the case for encoding from a survey perspective. These findings suggest that the encoding perspective can have an impact on the quality of spatial representations; it therefore may also influence the speed at which cognitive maps emerge (abruptly vs. gradually) and the impact of the number of objects on the formation of those maps. Specifically, if spatial information is encoded from a first-person perspective, it must be transformed into a survey perspective to form a cognitive map, taking into account one’s own varying position as signaled by visual, kinesthetic, and other self-motion cues (“path integration”; [27]). In contrast, if spatial information is encoded from a survey perspective, no such transformation is needed. Thus, the additional processing demand of first-person encoding may slow down map formation and heighten vulnerability to overload by an increasing number of item–location bindings.
In sum, this study examined (1) whether the formation of cognitive maps is locally abrupt or gradual, (2) whether it is affected by the number of objects (set-size effect), and (3) whether both the emergence speed and the set-size effect vary with the spatial encoding perspective.

2. Materials and Methods

2.1. Participants

A total of 112 participants were examined. Experiment 1 comprised 56 persons (18 female, mean age 30.1 years, SD 7.2 years), and Experiment 2 comprised another 56 persons (25 female, mean age 32.4 years, SD 9.1 years). All were recruited via the internet platform Prolific and completed the experiment remotely on their desktop computers. The experimental software was both programmed and delivered using the internet platform Gorilla. The inclusion criteria were an age in the range of 20–40, ability to use a desktop PC, and fluency in English (as all instructions were provided in English).
This study was conducted as part of a larger research program pre-approved by the author’s institutional Ethics Commission (Approval No. 062/2020). Informed consent was obtained from each participant before testing.

2.2. Practice Trials

Participants saw a simulated walk across three intersections of a virtual maze with uniform corridors and four-way intersections (Figure 1a). Movement from one intersection to the next took 2 s and was followed by a 3 s stop. During the stop, a geometric shape was displayed straight ahead (e.g., the star in Figure 1a). A different geometrical shape was displayed at each intersection. (Geometrical shapes rather than realistic objects were used to minimize the impact of mnemonic aids: debriefings in our own earlier studies revealed that realistic objects (pictures of vehicles, furniture, scenery, etc.) are often mentally embedded into narratives to better memorize their locations, e.g., “the mountain is in the background” or “the bulldozer drove through the tower into the tent”.)
After participants had crossed the three intersections and viewed the three shapes, a schematic map was displayed. There, intersections were depicted as blue discs and connecting corridors as blue lines (Figure 1b). The three shapes were displayed sequentially next to the schematic map, each for 3 s. For each shape, participants had to identify the corresponding intersection by clicking on the appropriate blue disc with their mouse. If a response was incorrect or exceeded the 3 s time limit, an error message appeared for 3 s, and the participant had to try again until giving the correct response within 3 s.
In the first practice trial, the simulated movements were directed forward. In the second trial, they were directed to the right. (Throughout the study, movement to the left or right was simulated without turns, since turns are known to degrade spatial orientation in virtual environments [25]. For example, two successive right turns were replaced by linear movement towards the right shoulder followed by a backward movement—as if looking first through the side window and then through the rear window of a moving vehicle. In this way, participants’ viewing perspective remained aligned with the maze at all times.) The two practice trials ensured that each participant gave six correct responses before being admitted to the actual experiment.

2.3. Learning and Test Trials

In the Maze condition, each learning trial showed a simulated movement through a virtual maze, similar to the practice trials. However, the maze now featured a greater number of intersections arranged across two dimensions on a horizontal plane. Movement paused for 2.5 s at each intersection, while a novel geometric shape was displayed. At the beginning of each trial, participants were shown a schematic map of the maze route, annotated with red arrows and text (see Figure 1c for an example).
In the Control condition, the learning trials displayed a schematic maze map resembling Figure 1c, but without red annotations. Geometric shapes were presented sequentially, each superimposed on a different blue disc. The location and timing of shape presentations matched those in the Maze condition.
In both conditions, each test trial began with the same schematic map shown during learning (but without annotations). The geometric shapes from the preceding learning trial were presented in a new random serial order next to the map. Participants were instructed to click on the corresponding blue disc on the map. If responses were incorrect or did not occur within 12 s, an error message was displayed for 3 s and then the next shape appeared. Thus, unlike the learning trials, the test trials did not allow response corrections.

2.4. Procedures

Each participant began with practice trials to familiarize them with the task and response format. This was followed by the first condition (Maze or Control), which consisted of learning trials (L) and test trials (T) in the order L-L-T-L-T-L-T-L-T-L-T. Then came the other condition, which used the same order of trials.
The first learning trial in the Maze condition displayed simulated movement through the maze, past all shapes, along the specific route shown in Figure 1c. The first learning trial of the Control condition presented all shapes sequentially along the same “route”. All subsequent learning trials followed different routes so that participants saw the same shapes in the same locations, but in a different serial order in each trial. This was to prevent serial-order learning and instead promote the acquisition of a cognitive map. For the same reason, the order of shape presentation also differed between learning and test trials.
To avoid interference and carry-over effects between conditions, shape colors and locations differed between conditions. For example, a pink triangle appeared at the front–left position in one condition, while a green triangle was shown at the central–right location in the other condition. The assignment of color–location sets to conditions was balanced across participants, as was the order in which the two conditions were administered.
Experiment 1 used a 4 × 3 layout (see Figure 1c) with 12 distinct shapes. Experiment 2, conducted with a separate group of participants, used a 4 × 4 layout and displayed 16 distinct shapes. Each experiment lasted approximately 30 to 45 min.
Participants were instructed not to interrupt the experiment for unrelated activities or rest breaks and not to use external memory aids such as taking notes. Logged timestamps confirmed that the participants progressed swiftly through the experiment, with no delays suggestive of note-taking, mental rehearsal, fatigue, or mind wandering. At the start of each learning trial, participants were reminded to memorize the spatial locations of all shapes, as they would be required to recall these during the subsequent test trial. They were also informed that memorizing the temporal order of shapes would not be helpful, as this order would change during testing.

2.5. Data Analysis

Participants’ responses in the test trials were scored as 1 if correct on the first attempt, or 0 if incorrect or timed out upon first attempt. The resultant binary scores were analyzed using mixed-effects logistic regression via the R function glmer. The participant ID was entered as a random effect. The fixed effects were as follows:
- Age (continuous);
- Sex (female, male);
- Condition (Maze, Control);
- Test (factor with levels 1, 2, 3, 4, 5);
- One additional effect depending on the research question:
  - Boundary (yes, no), to determine whether response correctness differed between shapes located at an edge of the virtual maze (= yes) and those located more centrally (= no);
  - Experiment (1, 2), to assess whether response correctness differed between mazes with 12 versus 16 shapes;
  - Hit (1, 0, “missing data”), to evaluate whether response correctness following a hit (i.e., Hit = 1) differed from correctness following an incorrect response (i.e., Hit = 0). If a response was correct but not a hit (e.g., the second correct response in a row), the following response was excluded from this analysis and was coded as missing.
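This Hit coding can be made concrete with a short Python sketch (a hypothetical helper for illustration; the paper's actual analysis was done in R). Given one participant–object sequence of binary scores across the five tests, each response after the first is classified by what preceded it:

```python
def code_hit_predictor(scores):
    """Code the Hit predictor for one participant-object sequence of
    binary test scores (1 = correct on first attempt, 0 = otherwise).

    For each response after the first test:
      1    -> preceding response was the hit (first correct response)
      0    -> preceding response was incorrect
      None -> preceding response was correct but not a hit (excluded)
    """
    hit_index = scores.index(1) if 1 in scores else None
    coded = []
    for t in range(1, len(scores)):
        if scores[t - 1] == 0:
            coded.append(0)
        elif t - 1 == hit_index:
            coded.append(1)
        else:
            coded.append(None)  # correct, but not the first correct response
    return coded
```

For example, the score sequence 0, 1, 1, 0, 1 is coded as 0, 1, excluded, 0: only the response immediately after the first correct localization counts as following a hit.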
To obtain the most parsimonious model, logistic regression followed a backward elimination procedure. Backward rather than forward elimination was used since this study is largely exploratory rather than confirmatory. Analysis started from the full model
accuracy ~ Age + Sex + Condition * Test * (additional effect)
Model terms were removed one by one and were reinstated if their removal significantly worsened the model’s fit (p < 0.05) with a non-negligible effect size (i.e., Cohen’s f² > 0.02). Both criteria were used since p-values alone can be misleading when large samples and complex statistical models are used [28]. Significance was tested via the R function anova, and effect size was determined using model log-likelihoods obtained via the R function logLik. To respect the principle of marginality [29], the three-way interaction was removed first, followed by two-way interactions, and finally the main effects.
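The two elimination criteria can be illustrated with a short Python sketch using hypothetical log-likelihoods. The likelihood-ratio test is standard; the f² computation assumes McFadden's pseudo-R², which is one plausible reading of "effect size determined using model log-likelihoods" (the paper does not give its exact formula):

```python
from math import erfc, sqrt

def lr_test_1df(llf_full, llf_reduced):
    """Likelihood-ratio test for dropping a single term (df = 1).
    For a chi-square variable with 1 df, P(X > lr) = erfc(sqrt(lr / 2))."""
    lr = 2 * (llf_full - llf_reduced)
    return lr, erfc(sqrt(lr / 2))

def cohens_f2(llf_full, llf_reduced, llf_null):
    """Cohen's f2 for the dropped term, based on McFadden's pseudo-R2
    (an assumed formula; the paper only states that log-likelihoods
    were used)."""
    r2_full = 1 - llf_full / llf_null
    r2_reduced = 1 - llf_reduced / llf_null
    return (r2_full - r2_reduced) / (1 - r2_full)

# Hypothetical fits: dropping the term significantly worsens the fit
# (p < 0.05), but the effect size stays below the f2 > 0.02 threshold,
# so under the joint criteria the term would still be removed.
lr, p = lr_test_1df(llf_full=-650.0, llf_reduced=-652.0)
f2 = cohens_f2(llf_full=-650.0, llf_reduced=-652.0, llf_null=-700.0)
```

With these numbers, lr = 4.0 and p ≈ 0.046, yet f² ≈ 0.003, illustrating why a significance criterion alone could reinstate terms with negligible effect sizes.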
The final step of the analysis examined whether performance following a hit conformed to the view (see Introduction) that cognitive maps emerge in a locally abrupt fashion. According to this view, the proportion of correct responses following a hit should be close to 1.0, but not exactly 1.0, since hits can also occur by chance rather than from genuine spatial knowledge. Specifically, the predicted proportion of correct responses following a hit is
Π_predicted = c² + (1 − c)
where c is the chance probability of a correct response, c² is the probability of a chance correct response following a chance hit, and (1 − c) is the probability of a correct response following a hit resulting from genuine spatial knowledge. For Experiment 1 (12 shapes), c = 1/12, hence Π_predicted = 0.923. For Experiment 2 (16 shapes), c = 1/16, hence Π_predicted = 0.941. The observed proportions of correct responses following a hit were compared against Π_predicted using a Wald z-test, which was implemented with the R function test.
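These quantities are easy to verify numerically. The sketch below computes Π_predicted and a one-sample z statistic for a proportion; the z formula uses the predicted value in the standard error, which is a common choice but an assumption here, since the paper does not spell out which variant of the Wald z-test was used:

```python
from math import sqrt

def predicted_pi(c):
    """Predicted proportion of correct responses following a hit:
    a chance hit (prob. c) is followed by a chance correct response
    (prob. c), while a genuine hit (prob. 1 - c) is always followed
    by a correct response."""
    return c * c + (1 - c)

def z_for_proportion(p_obs, p_pred, n):
    """z statistic comparing an observed proportion (from n post-hit
    responses) against the predicted proportion."""
    return (p_obs - p_pred) / sqrt(p_pred * (1 - p_pred) / n)

pi_exp1 = predicted_pi(1 / 12)  # about 0.9236 (reported as 0.923)
pi_exp2 = predicted_pi(1 / 16)  # about 0.9414 (reported as 0.941)
```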
Although the analyses were conducted using logistic regression, effect size estimation was based on an ANOVA model with an equivalent factor structure, as effect sizes for logistic regressions are not easily obtained. The effects of main interest for the present study were Boundary, Experiment, Hit, and their interactions with Condition. Power analysis using G*Power [30], with parameters α = 0.05, power (1 − β) = 0.95, and a medium effect size of f = 0.25, indicated a required sample size of n = 54 for each experiment. The actual sample size was n = 56, to enable the full balancing of 2 color–location sets × 2 condition orders.

3. Results

As a first overview of the participants’ performance, Figure 2 illustrates their mean accuracy in Experiment 1. In this graphic, mean accuracy represents the proportion of correct responses from a given participant on a given test trial of a given condition. Figure 2 shows that mean accuracy varied widely, exhibiting both floor and ceiling effects. These floor and ceiling effects precluded an analysis of mean accuracy by parametric statistical tests. Instead, the raw binary correctness scores were analyzed by logistic regression, which models the proportion of ones versus zeroes without assuming any specific distribution of residuals.
Table 1 summarizes the outcome of the logistic regression for Experiment 1. The backward elimination procedure reinstated only the main effects of Condition and Test: response correctness increased from one test to the next, and was consistently higher under the Control compared to the Maze condition (cf. Figure 2). The across-participant difference between conditions (mean ± standard deviation) was 0.26 ± 0.25, and the correlation between condition means was r = 0.53 (p < 0.001), indicating a large effect size. This suggests that the Maze condition posed a greater challenge, but both conditions tapped into overlapping cognitive resources. Notably, backward elimination removed all terms that included boundary effects.
As Figure 3 and Table 2 illustrate, the findings were similar in Experiment 2. Mean accuracy varied widely, response correctness increased consistently from one test to the next and was higher in the Control than in the Maze condition, and all terms that included boundary effects were removed by backward elimination. The difference between conditions was 0.25 ± 0.22, and the correlation between conditions was r = 0.57 (p < 0.001; large effect size). Thus, Experiment 2 confirms the higher demand of the Maze condition and that both conditions share resources.
To compare correctness in Experiment 1 and 2, Boundary was replaced by Experiment in the logistic regression. Table 3 shows that, as expected, the main effects of Condition and Test were reinstated. More importantly, all terms that included Experiment were removed.
To compare correctness with and without a preceding hit, Experiment was replaced by Hit in the logistic regression. According to Table 4 and Table 5, the main effect of Condition was reinstated for both experiments, which replicates the findings from Table 1, Table 2 and Table 3. The main effect of Hit was reinstated as well: the correctness following a hit (0.780 ± 0.185) was higher than that following an incorrect response (0.542 ± 0.244). All interactions with Hit were removed, indicating that the consequences of a preceding hit on response accuracy were comparable across tests and conditions.
For Experiment 1, response correctness following a hit was not significantly different from Π_predicted, the value expected if the formation of cognitive maps is locally abrupt (z = 0.374, p = 0.708). For Experiment 2, the observed and expected correctness were also not significantly different (z = 0.417, p = 0.677).

4. Discussion

This study investigated how cognitive maps of the environment are formed. Its purpose was to determine whether they emerge abruptly or gradually, whether they exhibit a set-size effect, and whether their speed of emergence and set-size effect depend on the spatial perspective from which spatial information is encoded. During learning trials, participants explored a virtual maze either via simulated movements through the maze (Maze condition, first-person perspective) or by inspecting a schematic map (Control condition, survey perspective). Within both conditions, they had to indicate the locations of previously encountered objects on a schematic map (survey perspective) during interleaved test trials.
In accordance with earlier research on spatial skills (e.g., [12,13]), response correctness varied widely. Nevertheless, statistical analyses provided robust evidence, albeit with small effect sizes, that correctness (1) increased across learning trials and (2) was higher in the Control than in the Maze condition. The difference between conditions was substantial, averaging about 0.25 on a scale from 0 to 1, and can be seen as an indicator of cognitive demand [26]. Specifically, the cognitive demand could be higher in the Maze condition as, unlike in the Control condition, spatial knowledge had to be transformed from a first-person perspective into a survey perspective.
Under the criteria for the reinstation of effects described in the Data Analysis section, the present data do not confirm that localization is better near physical or perceived boundaries of the explored space compared to more central regions [14,15]. This deviating result could indicate that merely confining the explored space to a subregion of the total space, with free view beyond that subregion (cf. Figure 1a), is not sufficient to elicit a boundary effect. Alternatively, 12 or 16 objects may not be enough to differentiate between boundaries and centers. Additional work is therefore needed to elucidate the boundary effect.
Regarding the main purpose of this study, the emergence speed of cognitive maps, the present data confirmed [8,9,10] that mean accuracy progressively increases across test trials (cf. Figure 2 and Figure 3). While this overall trend suggests a gradual learning process, a fine-grained statistical analysis indicated that cognitive maps actually formed in a locally abrupt, all-or-none fashion. Thus, response correctness following a hit was substantially higher than that following an incorrect response, and was not much lower than predicted by the abrupt-emergence view. In other words, once a participant correctly localized a given object, (s)he was very likely to correctly localize that object again during the next test trial. This was the case in both experiments, i.e., in both groups of 56 participants. These findings suggest that the gradual increase observed in Figure 2 and Figure 3, and possibly also that reported in earlier research [8,9,10], may be an artifact of averaging across participants and objects.
The sudden emergence of cognitive maps aligns with the phenomenon of reorientation, in which individuals abruptly regain their sense of direction after a period of spatial uncertainty [16,17]. Comparable discontinuities, named “aha-effects” or “insights”, have also been reported in other domains of cognitive psychology [31,32]. Although these phenomena relate to global rather than local changes, they suggest that cognitive processes do not always unfold incrementally; rather, they can involve sudden cognitive reorganizations.
Turning to the second purpose, the set-size effect, the present data indicate that the number of environmental objects had no appreciable effect on response correctness under the reinstation criteria described in the Data Analysis section. This absence of a substantial set-size effect stands in contrast to the majority of studies on human working memory [20], although it aligns with a few [33,34,35]. One possible explanation is that participants grouped adjacent objects into regional clusters: encoding clusters rather than individual items possibly reduced their memory demand. However, a more intriguing interpretation is that participants represented the object layout in a holistic fashion [34], not as a sum of elements but as an integrated spatial “Gestalt” [31]. If so, cognitive maps may be likened to a digital camera’s image sensor: just as the sensor can typically capture 12 or 16 objects in its field of view with the same resolution, a cognitive map may be able to represent 12 or 16 objects with the same fidelity.
With respect to the third purpose, the role of the encoding perspective, the present data document that response accuracy depends on the encoding perspective, since the main effects of Condition were reinstated in Table 1, Table 2, Table 3, Table 4 and Table 5. This confirms earlier findings by others [24,25,26]. However, the consequences of a preceding hit on response accuracy did not depend on the encoding perspective, as the Condition:Hit and the Condition:Test:Hit terms did not meet the reinstation criteria; this was the case both in Experiment 1 with 12 objects (Table 4) and in Experiment 2 with 16 objects (Table 5). It therefore appears that the emergence speed of cognitive maps and its sensitivity to set size do not depend on the encoding perspective and thus reflect intrinsic properties of cognitive maps rather than properties of transformations between perspectives. In contrast, the accuracy of cognitive maps seems to depend on the encoding perspective and thus reflects the costs associated with transformations between perspectives.
Experimental findings may not necessarily be generalized beyond the specific paradigm employed, and this caveat applies especially to cognitively complex constructs such as internal representations of space. Thus, cognitive maps might be organized differently in other environments [7], such as those with a different spatial configuration, a larger number of realistic design elements such as doors and wastebaskets, and dynamic distractions such as oncoming pedestrians. Furthermore, a set-size effect could arise when the number of objects is substantially larger than 16, although such large sets may be of less relevance for everyday life. A further limitation is that this study focused only on cognitive maps from a survey perspective aligned with the participants’ torso. The findings may therefore not extend to representations centered on other body parts, salient environmental features, or cardinal directions. These alternative representations are known to emerge depending on task demands and prior experience [36,37,38].
In conclusion, this study favors the interpretation that cognitive maps can be holistic constructs that emerge in a locally abrupt manner; that is, different parts of the map may form abruptly but at different times. This was observed regardless of the encoding perspective. Unlike the emergence speed, accuracy was found to be lower with the first-person perspective, probably reflecting the costs of transformation from the first-person to the survey perspective. Future research could investigate how cognitive maps emerge in environments that are more everyday-like than the present grid-shaped and unadorned maze.

Funding

This research was funded by the Marga und Walter Boll Stiftung, grant 210-05.01-21. The sponsor had no role in the design, execution, interpretation, or writing of the study.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki. It is part of a larger research program approved by the Ethics Committee of the German Sport University (approval number 062/2020, date of approval 7 May 2020).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

Raw data and the software code used to run the experiments are available from the author, but the Gorilla platform will charge for use of the software code.

Conflicts of Interest

The author declares no conflicts of interest.

Figure 1. (a) Maze interior, as seen by participants during the practice and learning trials conducted under the Maze condition. (b) Schematic map displayed during the first practice trial. (c) Example schematic map displayed at the onset of learning trials in the Maze condition; the red arrows and text illustrate the upcoming route.
Figure 2. Mean response accuracy in Experiment 1. Lines are across-participant medians, error bars are interquartile ranges.
Figure 3. Mean response accuracy in Experiment 2.
Table 1. Logistic regression for response correctness in Exp. 1.
| Term | χ² | p | f² | Action |
|---|---|---|---|---|
| Condition:Test:Boundary | 1.16 | 0.281 | <0.001 | remove |
| Test:Boundary | 0.00 | 0.974 | <0.001 | remove |
| Condition:Boundary | 0.19 | 0.661 | <0.001 | remove |
| Test:Condition | 0.36 | 0.546 | <0.001 | remove |
| Boundary | 11.72 | <0.001 | 0.002 | remove |
| Condition | 758.96 | <0.001 | 0.113 | reinstate |
| Test | 365.87 | <0.001 | 0.061 | reinstate |

Note: χ² refers to the chi-square statistic, p is the p-value, f² is the effect size, and "Action" indicates whether the term was removed from or reinstated in the logistic regression. Effects were reinstated if p < 0.05 and f² > 0.02; see Data Analysis section.
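The reinstatement criterion stated in the note can be sketched in code. The following is a minimal, hypothetical illustration (not the author's analysis script) that applies the stated rule, p < 0.05 and f² > 0.02, to the values reported in Table 1; entries reported as "<0.001" are represented by 0.0005, an arbitrary stand-in below the reporting threshold.

```python
# Hypothetical sketch of the term-selection rule described in the note:
# a term is reinstated only if p < 0.05 AND Cohen's f2 > 0.02.
# Values reported as "<0.001" are stood in by 0.0005 (an assumption).

def action(p: float, f2: float) -> str:
    """Return the model-selection action for one regression term."""
    return "reinstate" if (p < 0.05 and f2 > 0.02) else "remove"

# (term, chi-square, p, f2) as reported in Table 1
table1 = [
    ("Condition:Test:Boundary", 1.16, 0.281, 0.0005),
    ("Test:Boundary",           0.00, 0.974, 0.0005),
    ("Condition:Boundary",      0.19, 0.661, 0.0005),
    ("Test:Condition",          0.36, 0.546, 0.0005),
    ("Boundary",               11.72, 0.0005, 0.002),
    ("Condition",             758.96, 0.0005, 0.113),
    ("Test",                  365.87, 0.0005, 0.061),
]

for term, chi2, p, f2 in table1:
    print(f"{term}: {action(p, f2)}")
```

The sketch makes explicit that a term can be highly significant (e.g., Boundary, p < 0.001) and still be removed because its effect size falls below the f² threshold.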
Table 2. Logistic regression for response correctness in Exp. 2.
| Term | χ² | p | f² | Action |
|---|---|---|---|---|
| Condition:Test:Boundary | 5.51 | 0.019 | <0.001 | remove |
| Test:Boundary | 1.09 | 0.297 | <0.001 | remove |
| Condition:Boundary | 4.46 | 0.035 | <0.001 | remove |
| Test:Condition | 1.986 | 0.159 | <0.001 | remove |
| Boundary | 101.2 | <0.001 | 0.012 | remove |
| Condition | 844.28 | <0.001 | 0.091 | reinstate |
| Test | 663.54 | <0.001 | 0.078 | reinstate |
Note: column names as in Table 1.
Table 3. Logistic regression for response correctness in both experiments.
| Term | χ² | p | f² | Action |
|---|---|---|---|---|
| Condition:Test:Experiment | 1.87 | 0.171 | <0.001 | remove |
| Test:Experiment | 2.83 | 0.093 | <0.001 | remove |
| Condition:Experiment | 8.42 | 0.004 | <0.001 | remove |
| Test:Condition | 0.54 | 0.462 | <0.001 | remove |
| Experiment | 0.57 | 0.452 | <0.001 | remove |
| Condition | 1596 | <0.001 | 0.099 | reinstate |
| Test | 1026.3 | <0.001 | 0.071 | reinstate |
Note: column names as in Table 1.
Table 4. Logistic regression for response correctness conditional on a hit in Exp. 1.
| Term | χ² | p | f² | Action |
|---|---|---|---|---|
| Condition:Test:Hit | 0.90 | 0.342 | <0.001 | remove |
| Test:Condition | 0.62 | 0.430 | <0.001 | remove |
| Test:Hit | 3.41 | 0.065 | 0.001 | remove |
| Condition:Hit | 1.83 | 0.176 | 0.001 | remove |
| Test | 15.05 | <0.001 | 0.005 | remove |
| Condition | 140.07 | <0.001 | 0.042 | reinstate |
| Hit | 140.60 | <0.001 | 0.042 | reinstate |
Note: column names as in Table 1.
Table 5. Logistic regression for response correctness conditional on a hit in Exp. 2.
| Term | χ² | p | f² | Action |
|---|---|---|---|---|
| Condition:Test:Hit | 0.47 | 0.495 | <0.001 | remove |
| Test:Condition | 2.58 | 0.108 | 0.001 | remove |
| Test:Hit | 11.38 | <0.001 | 0.002 | remove |
| Condition:Hit | 2.25 | 0.134 | <0.001 | remove |
| Test | 35.46 | <0.001 | 0.007 | remove |
| Condition | 187.22 | <0.001 | 0.039 | reinstate |
| Hit | 226.35 | <0.001 | 0.048 | reinstate |
Note: column names as in Table 1.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Bock, O. An Experimental Study on the Formation of Spatial Cognitive Maps in Humans. Appl. Sci. 2025, 15, 7234. https://doi.org/10.3390/app15137234
