Search Results (30)

Search Parameters:
Keywords = Friston

42 pages, 3822 KiB  
Article
The Criticality of Consciousness: Excitatory–Inhibitory Balance and Dual Memory Systems in Active Inference
by Don M. Tucker, Phan Luu and Karl J. Friston
Entropy 2025, 27(8), 829; https://doi.org/10.3390/e27080829 - 4 Aug 2025
Viewed by 248
Abstract
The organization of consciousness is described through increasingly rich theoretical models. We review evidence that working memory capacity—essential to generating consciousness in the cerebral cortex—is supported by dual limbic memory systems. These dorsal (Papez) and ventral (Yakovlev) limbic networks provide the basis for mnemonic processing and prediction in the dorsal and ventral divisions of the human neocortex. Empirical evidence suggests that the dorsal limbic division is (i) regulated preferentially by excitatory feedforward control, (ii) consolidated by REM sleep, and (iii) controlled in waking by phasic arousal through lemnothalamic projections from the pontine brainstem reticular activating system. The ventral limbic division and striatum (i) organize the inhibitory neurophysiology of NREM to (ii) consolidate explicit memory in sleep, (iii) operating in waking cognition under the same inhibitory feedback control supported by collothalamic tonic activation from the midbrain. We propose that (i) these dual (excitatory and inhibitory) systems alternate in the stages of sleep, and (ii) in waking they must be balanced—at criticality—to optimize the active inference that generates conscious experiences. Optimal Bayesian belief updating rests on balanced feedforward (excitatory predictive) and feedback (inhibitory corrective) control biases that play the role of prior and likelihood (i.e., sensory) precision. Because the excitatory (E) phasic arousal and inhibitory (I) tonic activation systems that regulate these dual limbic divisions have distinct affective properties, varying levels of elation for phasic arousal (E) and anxiety for tonic activation (I), the dual control systems regulate sleep and consciousness in ways that are adaptively balanced—around the entropic nadir of EI criticality—for optimal self-regulation of consciousness and psychological health. Because they are emotive as well as motive control systems, these dual systems have unique qualities of feeling that may be registered as subjective experience. Full article
(This article belongs to the Special Issue Active Inference in Cognitive Neuroscience)

29 pages, 3774 KiB  
Article
Improving the Minimum Free Energy Principle to the Maximum Information Efficiency Principle
by Chenguang Lu
Entropy 2025, 27(7), 684; https://doi.org/10.3390/e27070684 - 26 Jun 2025
Viewed by 1008
Abstract
Friston proposed the Minimum Free Energy Principle (FEP) based on the Variational Bayesian (VB) method. This principle emphasizes that the brain and behavior coordinate with the environment, promoting self-organization. However, it has a theoretical flaw, a possibility of being misunderstood, and a limitation (only likelihood functions are used as constraints). This paper first introduces the semantic information G theory and the R(G) function (where R is the minimum mutual information for the given semantic mutual information G). The G theory is based on the P-T probability framework and, therefore, allows for the use of truth, membership, similarity, and distortion functions (related to semantics) as constraints. Based on the study of the R(G) function and logical Bayesian Inference, this paper proposes the Semantic Variational Bayesian (SVB) method and the Maximum Information Efficiency (MIE) principle. Theoretical analysis and computing experiments prove that R(G) = F − H(X|Y) (where F denotes the VFE and H(X|Y) is the Shannon conditional entropy), rather than F itself, continues to decrease when optimizing latent variables; SVB is a reliable and straightforward approach for latent variables and active inference. This paper also explains the relationship between information, entropy, free energy, and VFE in local non-equilibrium and equilibrium systems, concluding that Shannon information, semantic information, and VFE are analogous to the increment of free energy, the increment of exergy, and physical conditional entropy, respectively. The MIE principle builds upon the fundamental ideas of the FEP, making them easier to understand and apply. It needs to be combined with deep learning methods for wider applications. Full article
(This article belongs to the Special Issue Information-Theoretic Approaches for Machine Learning and AI)
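For readers less familiar with the rate-distortion framing behind R(G), the classical rate-distortion function is sketched below; the second line is only a schematic of the semantic variant implied by the abstract (the notation I_sem and the exact form of the constraint are assumptions here, not the paper's definitions).

```latex
% Classical rate-distortion function, and a schematic of the R(G) variant in which
% a semantic-information constraint replaces the distortion constraint
% (the notation I_sem is illustrative only, not the paper's).
\begin{align}
  R(D) &= \min_{p(\hat{x}\mid x)\,:\;\mathbb{E}[d(X,\hat{X})]\le D} I(X;\hat{X}), \\
  R(G) &= \min_{p(y\mid x)\,:\;I_{\mathrm{sem}}(X;Y)\ge G} I(X;Y).
\end{align}
```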

45 pages, 6952 KiB  
Review
A Semantic Generalization of Shannon’s Information Theory and Applications
by Chenguang Lu
Entropy 2025, 27(5), 461; https://doi.org/10.3390/e27050461 - 24 Apr 2025
Cited by 1 | Viewed by 1066
Abstract
Does semantic communication require a semantic information theory parallel to Shannon’s information theory, or can Shannon’s work be generalized for semantic communication? This paper advocates for the latter and introduces a semantic generalization of Shannon’s information theory (G theory for short). The core idea is to replace the distortion constraint with the semantic constraint, achieved by utilizing a set of truth functions as a semantic channel. These truth functions enable the expressions of semantic distortion, semantic information measures, and semantic information loss. Notably, the maximum semantic information criterion is equivalent to the maximum likelihood criterion and similar to the Regularized Least Squares criterion. This paper shows G theory’s applications to daily and electronic semantic communication, machine learning, constraint control, Bayesian confirmation, portfolio theory, and information value. The improvements in machine learning methods involve multi-label learning and classification, maximum mutual information classification, mixture models, and solving latent variables. Furthermore, insights from statistical physics are discussed: Shannon information is similar to free energy; semantic information to free energy in local equilibrium systems; and information efficiency to the efficiency of free energy in performing work. The paper also proposes refining Friston’s minimum free energy principle into the maximum information efficiency principle. Lastly, it compares G theory with other semantic information theories and discusses its limitation in representing the semantics of complex data. Full article
(This article belongs to the Special Issue Semantic Information Theory)

20 pages, 1133 KiB  
Article
As One and Many: Relating Individual and Emergent Group-Level Generative Models in Active Inference
by Peter Thestrup Waade, Christoffer Lundbak Olesen, Jonathan Ehrenreich Laursen, Samuel William Nehrer, Conor Heins, Karl Friston and Christoph Mathys
Entropy 2025, 27(2), 143; https://doi.org/10.3390/e27020143 - 1 Feb 2025
Cited by 2 | Viewed by 2271
Abstract
Active inference under the Free Energy Principle has been proposed as an across-scales compatible framework for understanding and modelling behaviour and self-maintenance. Crucially, a collective of active inference agents can, if they maintain a group-level Markov blanket, constitute a larger group-level active inference agent with a generative model of its own. This potential for computational scale-free structures speaks to the application of active inference to self-organizing systems across spatiotemporal scales, from cells to human collectives. Due to the difficulty of reconstructing the generative model that explains the behaviour of emergent group-level agents, there has been little research on this kind of multi-scale active inference. Here, we propose a data-driven methodology for characterising the relation between the generative model of a group-level agent and the dynamics of its constituent individual agents. We apply methods from computational cognitive modelling and computational psychiatry, applicable for active inference as well as other types of modelling approaches. Using a simple Multi-Armed Bandit task as an example, we employ the new ActiveInference.jl library for Julia to simulate a collective of agents who are equipped with a Markov blanket. We use sampling-based parameter estimation to make inferences about the generative model of the group-level agent, and we show that there is a non-trivial relationship between the generative models of individual agents and the group-level agent they constitute, even in this simple setting. Finally, we point to a number of ways in which this methodology might be applied to better understand the relations between nested active inference agents across scales. Full article
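As a rough, hypothetical illustration of the kind of analysis the paper describes (simulate individual agents, derive group-level behaviour, then fit a single group-level model to it), the sketch below uses simple Q-learning/softmax bandit agents and a grid-search maximum-likelihood fit as stand-ins for the active-inference agents and sampling-based estimation actually used; all names and parameters are invented for the example.

```python
# Illustrative sketch: simulate individual bandit agents, derive group-level
# choices, and fit a single "group-level" learning rate to them.
# Q-learning agents and grid-search ML are stand-ins for the paper's
# active-inference models and sampling-based estimation.
import numpy as np

rng = np.random.default_rng(0)
p_reward = np.array([0.3, 0.7])            # 2-armed bandit
n_agents, n_trials = 5, 200
alphas = rng.uniform(0.1, 0.5, n_agents)   # individual learning rates
beta = 4.0                                 # shared inverse temperature

def simulate(alpha):
    Q = np.zeros(2)
    choices, rewards = [], []
    for _ in range(n_trials):
        p = np.exp(beta * Q) / np.exp(beta * Q).sum()
        c = rng.choice(2, p=p)
        r = rng.random() < p_reward[c]
        Q[c] += alpha * (r - Q[c])
        choices.append(c)
        rewards.append(r)
    return np.array(choices), np.array(rewards)

sims = [simulate(a) for a in alphas]
# Group-level behaviour: the majority choice on each trial, with the mean reward.
group_choice = (np.mean([c for c, _ in sims], axis=0) > 0.5).astype(int)
group_reward = np.mean([r for _, r in sims], axis=0)

def neg_log_lik(alpha):
    """Negative log-likelihood of the group-level choices under one agent."""
    Q, nll = np.zeros(2), 0.0
    for c, r in zip(group_choice, group_reward):
        p = np.exp(beta * Q) / np.exp(beta * Q).sum()
        nll -= np.log(p[c] + 1e-12)
        Q[c] += alpha * (r - Q[c])
    return nll

grid = np.linspace(0.01, 0.99, 99)
alpha_group = grid[np.argmin([neg_log_lik(a) for a in grid])]
print("individual alphas:", np.round(alphas, 2), "group-level alpha:", round(alpha_group, 2))
```

Even in this toy setting, the recovered group-level learning rate need not equal the mean of the individual learning rates, echoing the non-trivial relationship reported in the abstract.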

33 pages, 1860 KiB  
Article
Introducing ActiveInference.jl: A Julia Library for Simulation and Parameter Estimation with Active Inference Models
by Samuel William Nehrer, Jonathan Ehrenreich Laursen, Conor Heins, Karl Friston, Christoph Mathys and Peter Thestrup Waade
Entropy 2025, 27(1), 62; https://doi.org/10.3390/e27010062 - 12 Jan 2025
Cited by 1 | Viewed by 3082
Abstract
We introduce a new software package for the Julia programming language, the library ActiveInference.jl. To make active inference agents with Partially Observable Markov Decision Process (POMDP) generative models available to the growing research community using Julia, we re-implemented the pymdp library for Python. ActiveInference.jl is compatible with cutting-edge Julia libraries designed for cognitive and behavioural modelling, as it is used in computational psychiatry, cognitive science and neuroscience. This means that POMDP active inference models can now be easily fit to empirically observed behaviour using sampling, as well as variational methods. In this article, we show how ActiveInference.jl makes building POMDP active inference models straightforward, and how it enables researchers to use them for simulation, as well as fitting them to data or performing a model comparison. Full article
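To give a concrete sense of the perception-action cycle such POMDP active-inference models implement, here is a minimal numpy sketch of one step (Bayesian state inference followed by expected-free-energy-based policy selection). It is illustrative only: it does not use or reproduce the ActiveInference.jl or pymdp APIs, and the matrices and function names are made up for the example.

```python
# Minimal sketch of one perception-action cycle in a discrete POMDP
# active-inference agent (illustrative only; not the ActiveInference.jl API).
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

# Toy generative model: 2 hidden states, 2 observations, 2 actions.
A = np.array([[0.9, 0.1],                    # p(o | s): observation likelihood
              [0.1, 0.9]])
B = np.stack([np.eye(2),                     # p(s' | s, a=0): stay
              np.array([[0.0, 1.0],          # p(s' | s, a=1): switch
                        [1.0, 0.0]])])
C = np.array([2.0, 0.0])                     # log-preferences over observations
qs = np.array([0.5, 0.5])                    # prior beliefs over hidden states

def infer_states(qs, obs):
    """Bayesian state inference: posterior proportional to likelihood x prior."""
    post = A[obs, :] * qs
    return post / post.sum()

def expected_free_energy(qs, action):
    """One-step EFE: risk (KL from preferred outcomes) plus ambiguity."""
    qs_next = B[action] @ qs                 # predicted hidden states
    qo = A @ qs_next                         # predicted observations
    log_pref = C - np.log(np.sum(np.exp(C))) # normalised log-preferences
    risk = np.sum(qo * (np.log(qo + 1e-16) - log_pref))
    ambiguity = (-np.sum(A * np.log(A + 1e-16), axis=0)) @ qs_next
    return risk + ambiguity

obs = 0                                      # an observation arrives
qs = infer_states(qs, obs)                   # perception
G = np.array([expected_free_energy(qs, a) for a in range(2)])
q_pi = softmax(-G)                           # posterior over policies/actions
action = int(np.argmax(q_pi))                # (or sample from q_pi)
print("state posterior:", qs, "EFE per action:", G, "action:", action)
```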

17 pages, 757 KiB  
Article
Bayesian Mechanics of Synaptic Learning Under the Free-Energy Principle
by Chang Sub Kim
Entropy 2024, 26(11), 984; https://doi.org/10.3390/e26110984 - 16 Nov 2024
Viewed by 1316
Abstract
The brain is a biological system comprising nerve cells and orchestrates its embodied agent’s perception, behavior, and learning in dynamic environments. The free-energy principle (FEP) advocated by Karl Friston explicates the local, recurrent, and self-supervised cognitive dynamics of the brain’s higher-order functions. In this study, we continue to refine the FEP through a physics-guided formulation; specifically, we apply our theory to synaptic learning by considering it an inference problem under the FEP and derive the governing equations, called Bayesian mechanics. Our study uncovers how the brain infers weight changes and postsynaptic activity, conditioned on the presynaptic input, by deploying generative models of the likelihood and prior belief. Consequently, we exemplify the synaptic efficacy in the brain with a simple model; in particular, we illustrate that the brain organizes an optimal trajectory in neural phase space during synaptic learning in continuous time, which variationally minimizes synaptic surprisal. Full article
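Purely as a generic illustration of the kind of gradient flow alluded to (the paper's actual equations are not reproduced here), synaptic learning under the FEP can be sketched as a coupled descent of postsynaptic activity and synaptic weights on a variational free energy conditioned on presynaptic input; the rate constants and variable names below are assumptions.

```latex
% Generic (Laplace-coded) free-energy descent; y = sensory outcome, u = presynaptic
% input, mu_x = postsynaptic activity estimate, mu_w = synaptic weight estimate.
\begin{align}
  F(\mu_x,\mu_w) &\simeq -\ln p\!\left(y,\mu_x,\mu_w \mid u\right), \\
  \dot{\mu}_x &= -\kappa_x\,\frac{\partial F}{\partial \mu_x}, \qquad
  \dot{\mu}_w = -\kappa_w\,\frac{\partial F}{\partial \mu_w}.
\end{align}
```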

19 pages, 3318 KiB  
Review
Episodic Visual Hallucinations, Inference and Free Energy
by Daniel Collerton, Ichiro Tsuda and Shigetoshi Nara
Entropy 2024, 26(7), 557; https://doi.org/10.3390/e26070557 - 28 Jun 2024
Viewed by 1904
Abstract
Understanding of how visual hallucinations appear has been highly influenced by generative approaches, in particular Friston’s Active Inference conceptualization. Their core proposition is that these phenomena occur when hallucinatory expectations outweigh actual sensory data. This imbalance occurs as the brain seeks to minimize informational free energy, a measure of the distance between predicted and actual sensory data in a stationary open system. We review this approach in the light of old and new information on the role of environmental factors in episodic hallucinations. In particular, we highlight the possible relationship of specific visual triggers to the onset and offset of some episodes. We use an analogy from phase transitions in physics to explore factors which might account for intermittent shifts between veridical and hallucinatory vision. In these triggered forms of hallucinations, we suggest that there is a transient disturbance in the normal one-to-one correspondence between a real object and the counterpart perception, such that this correspondence becomes one between the real object and a hallucination. Generative models propose that a lack of information transfer from the environment to the brain is one of the key features of hallucinations. In contrast, we submit that specific information transfer is required at onset and offset in these cases. We propose that this transient one-to-one correspondence between environment and hallucination is mediated more by aberrant discriminative than by generative inference. Discriminative inference can be conceptualized as a process for maximizing shared information between the environment and perception within a self-organizing nonstationary system. We suggest that generative inference plays the greater role in established hallucinations and in the persistence of individual hallucinatory episodes. We further explore whether thermodynamic free energy may be an additional factor in why hallucinations are temporary. Future empirical research could productively concentrate on three areas. Firstly, subjective perceptual changes and parallel variations in brain function during specific transitions between veridical and hallucinatory vision, to inform models of how episodes occur. Secondly, systematic investigation of the links between environment and hallucination episodes, to probe the role of information transfer in triggering transitions between veridical and hallucinatory vision. Finally, changes in hallucinatory episodes over time, to elucidate the role of learning on phenomenology. These empirical data will allow the potential roles of different forms of inference in the stages of hallucinatory episodes to be elucidated. Full article

12 pages, 845 KiB  
Article
The Universal Optimism of the Self-Evidencing Mind
by Elizabeth L. Fisher and Jakob Hohwy
Entropy 2024, 26(6), 518; https://doi.org/10.3390/e26060518 - 17 Jun 2024
Cited by 2 | Viewed by 2497
Abstract
Karl Friston’s free-energy principle casts agents as self-evidencing through active inference. This implies that decision-making, planning and information-seeking are, in a generic sense, ‘wishful’. We take an interdisciplinary perspective on this perplexing aspect of the free-energy principle and unpack the epistemological implications of wishful thinking under the free-energy principle. We use this epistemic framing to discuss the emergence of biases for self-evidencing agents. In particular, we argue that this elucidates an optimism bias as a foundational tenet of self-evidencing. We allude to a historical precursor to some of these themes, interestingly found in Machiavelli’s oeuvre, to contextualise the universal optimism of the free-energy principle. Full article

9 pages, 196 KiB  
Opinion
Friston, Free Energy, and Psychoanalytic Psychotherapy
by Jeremy Holmes
Entropy 2024, 26(4), 343; https://doi.org/10.3390/e26040343 - 18 Apr 2024
Cited by 2 | Viewed by 4005
Abstract
This paper outlines the ways in which Karl Friston’s work illuminates the everyday practice of psychotherapists. These include (a) how the strategic ambiguity of the therapist’s stance brings, via ‘transference’, clients’ priors to light; (b) how the unstructured and negative capability of the therapy session reduces the salience of priors, enabling new top-down models to be forged; (c) how fostering self-reflection provides an additional step in the free energy minimization hierarchy; and (d) how Friston and Frith’s ‘duets for one’ can be conceptualized as a relational zone in which collaborative free energy minimization takes place without sacrificing complexity. Full article

17 pages, 318 KiB
Article
Shared Protentions in Multi-Agent Active Inference
by Mahault Albarracin, Riddhi J. Pitliya, Toby St. Clere Smithe, Daniel Ari Friedman, Karl Friston and Maxwell J. D. Ramstead
Entropy 2024, 26(4), 303; https://doi.org/10.3390/e26040303 - 29 Mar 2024
Cited by 5 | Viewed by 7545
Abstract
In this paper, we unite concepts from Husserlian phenomenology, the active inference framework in theoretical biology, and category theory in mathematics to develop a comprehensive framework for understanding social action premised on shared goals. We begin with an overview of Husserlian phenomenology, focusing on aspects of inner time-consciousness, namely, retention, primal impression, and protention. We then review active inference as a formal approach to modeling agent behavior based on variational (approximate Bayesian) inference. Expanding upon Husserl’s model of time consciousness, we consider collective goal-directed behavior, emphasizing shared protentions among agents and their connection to the shared generative models of active inference. This integrated framework aims to formalize shared goals in terms of shared protentions, and thereby shed light on the emergence of group intentionality. Building on this foundation, we incorporate mathematical tools from category theory, in particular, sheaf and topos theory, to furnish a mathematical image of individual and group interactions within a stochastic environment. Specifically, we employ morphisms between polynomial representations of individual agent models, allowing predictions not only of their own behaviors but also those of other agents and environmental responses. Sheaf and topos theory facilitates the construction of coherent agent worldviews and provides a way of representing consensus or shared understanding. We explore the emergence of shared protentions, bridging the phenomenology of temporal structure, multi-agent active inference systems, and category theory. Shared protentions are highlighted as pivotal for coordination and achieving common objectives. We conclude by acknowledging the intricacies stemming from stochastic systems and uncertainties in realizing shared goals. Full article

23 pages, 5003 KiB
Article
Active Data Selection and Information Seeking
by Thomas Parr, Karl Friston and Peter Zeidman
Algorithms 2024, 17(3), 118; https://doi.org/10.3390/a17030118 - 12 Mar 2024
Cited by 3 | Viewed by 3887
Abstract
Bayesian inference typically focuses upon two issues. The first is estimating the parameters of some model from data, and the second is quantifying the evidence for alternative hypotheses—formulated as alternative models. This paper focuses upon a third issue. Our interest is in the selection of data—either through sampling subsets of data from a large dataset or through optimising experimental design—based upon the models we have of how those data are generated. Optimising data-selection ensures we can achieve good inference with fewer data, saving on computational and experimental costs. This paper aims to unpack the principles of active sampling of data by drawing from neurobiological research on animal exploration and from the theory of optimal experimental design. We offer an overview of the salient points from these fields and illustrate their application in simple toy examples, ranging from function approximation with basis sets to inference about processes that evolve over time. Finally, we consider how this approach to data selection could be applied to the design of (Bayes-adaptive) clinical trials. Full article
(This article belongs to the Special Issue Bayesian Networks and Causal Reasoning)
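A toy version of the underlying idea (score each candidate measurement by its expected information gain, then acquire the most informative one) might look like the following; this is a generic Bayesian-optimal-design sketch on a two-condition example, not the paper's simulations or code.

```python
# Toy illustration of choosing which datum to acquire next by expected
# information gain (Bayesian optimal design); not the paper's code.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Unknown: the success probability of each of two "conditions".
# Independent flat grid priors over each probability.
grid = np.linspace(0.01, 0.99, 50)
prior = {c: np.ones(50) / 50 for c in (0, 1)}

def expected_info_gain(condition, prior):
    """EIG = H[posterior predictive] - E_theta H[p(y|theta)] = I(theta; y)."""
    p_theta = prior[condition]
    p_y1 = (p_theta * grid).sum()                       # predictive p(y=1)
    pred_entropy = entropy(np.array([p_y1, 1 - p_y1]))
    cond_entropy = (p_theta * (-(grid * np.log(grid) +
                                 (1 - grid) * np.log(1 - grid)))).sum()
    return pred_entropy - cond_entropy

def update(condition, y, prior):
    """Bayesian update of the grid posterior after observing outcome y."""
    lik = grid if y == 1 else 1 - grid
    post = prior[condition] * lik
    prior[condition] = post / post.sum()

rng = np.random.default_rng(1)
true_p = {0: 0.4, 1: 0.8}
for t in range(20):
    eig = {c: expected_info_gain(c, prior) for c in (0, 1)}
    c = max(eig, key=eig.get)        # sample where we expect to learn the most
    y = int(rng.random() < true_p[c])
    update(c, y, prior)
print({c: round((prior[c] * grid).sum(), 2) for c in (0, 1)})
```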

17 pages, 8324 KiB  
Article
Measurement of the Mapping between Intracranial EEG and fMRI Recordings in the Human Brain
by David W. Carmichael, Serge Vulliemoz, Teresa Murta, Umair Chaudhary, Suejen Perani, Roman Rodionov, Maria Joao Rosa, Karl J. Friston and Louis Lemieux
Bioengineering 2024, 11(3), 224; https://doi.org/10.3390/bioengineering11030224 - 27 Feb 2024
Cited by 4 | Viewed by 2795
Abstract
There are considerable gaps in our understanding of the relationship between human brain activity measured at different temporal and spatial scales. Here, electrocorticography (ECoG) measures were used to predict functional MRI changes in the sensorimotor cortex in two brain states: at rest and during motor performance. The specificity of this relationship to spatial co-localisation of the two signals was also investigated. We acquired simultaneous ECoG-fMRI in the sensorimotor cortex of three patients with epilepsy. During motor activity, high gamma power was the only frequency band where the electrophysiological response was co-localised with fMRI measures across all subjects. The best model of fMRI changes across states was built from the principal components of the ECoG spectrogram, which provide a parsimonious description of the entire spectrum. This model performed much better than any others that were based either on the classical frequency bands or on summary measures of cross-spectral changes. The region-specific fMRI signal is reflected in spatially and spectrally distributed EEG activity. Full article
(This article belongs to the Special Issue Multimodal Neuroimaging Techniques: Progress and Application)
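The model comparison described (leading principal components of the whole spectrogram versus a single classical-band regressor as predictors of the fMRI signal) can be mimicked on synthetic data as below; the sketch omits HRF convolution and all real preprocessing, and is not the authors' pipeline.

```python
# Synthetic illustration: predict an "fMRI" time series either from the leading
# principal components of an "ECoG" spectrogram or from a single band-power
# regressor. Not the authors' pipeline; all data here are simulated.
import numpy as np

rng = np.random.default_rng(0)
T, n_freqs, n_latent = 300, 40, 5
# Latent broadband components mix into the spectrogram and drive the signal.
latents = rng.standard_normal((T, n_latent))
mixing = rng.standard_normal((n_latent, n_freqs))
spectrogram = latents @ mixing + 0.5 * rng.standard_normal((T, n_freqs))
fmri = latents @ rng.standard_normal(n_latent) + 0.5 * rng.standard_normal(T)

def r2(X, y):
    """Ordinary least squares fit; return variance explained."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

# Model 1: leading principal components of the whole (centred) spectrogram.
Z = spectrogram - spectrogram.mean(axis=0)
U, S, _ = np.linalg.svd(Z, full_matrices=False)
pcs = U[:, :n_latent] * S[:n_latent]

# Model 2: mean power in a single band (here, an arbitrary quarter of the bins).
band = spectrogram[:, 30:].mean(axis=1)

print("principal-component model R^2:", round(r2(pcs, fmri), 3))
print("single-band model R^2:        ", round(r2(band[:, None], fmri), 3))
```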

23 pages, 4268 KiB  
Article
A Variational Synthesis of Evolutionary and Developmental Dynamics
by Karl Friston, Daniel A. Friedman, Axel Constant, V. Bleu Knight, Chris Fields, Thomas Parr and John O. Campbell
Entropy 2023, 25(7), 964; https://doi.org/10.3390/e25070964 - 21 Jun 2023
Cited by 21 | Viewed by 5141
Abstract
This paper introduces a variational formulation of natural selection, paying special attention to the nature of ‘things’ and the way that different ‘kinds’ of ‘things’ are individuated from—and influence—each other. We use the Bayesian mechanics of particular partitions to understand how slow phylogenetic processes constrain—and are constrained by—fast, phenotypic processes. The main result is a formulation of adaptive fitness as a path integral of phenotypic fitness. Paths of least action, at the phenotypic and phylogenetic scales, can then be read as inference and learning processes, respectively. In this view, a phenotype actively infers the state of its econiche under a generative model, whose parameters are learned via natural (Bayesian model) selection. The ensuing variational synthesis features some unexpected aspects. Perhaps the most notable is that it is not possible to describe or model a population of conspecifics per se. Rather, it is necessary to consider populations of distinct natural kinds that influence each other. This paper is limited to a description of the mathematical apparatus and accompanying ideas. Subsequent work will use these methods for simulations and numerical analyses—and identify points of contact with related mathematical formulations of evolution. Full article

7 pages, 1118 KiB  
Article
How Active Inference Could Help Revolutionise Robotics
by Lancelot Da Costa, Pablo Lanillos, Noor Sajid, Karl Friston and Shujhat Khan
Entropy 2022, 24(3), 361; https://doi.org/10.3390/e24030361 - 2 Mar 2022
Cited by 31 | Viewed by 10341
Abstract
Recent advances in neuroscience have characterised brain function using mathematical formalisms and first principles that may be usefully applied elsewhere. In this paper, we explain how active inference—a well-known description of sentient behaviour from neuroscience—can be exploited in robotics. In short, active inference leverages the processes thought to underwrite human behaviour to build effective autonomous systems. These systems show state-of-the-art performance in several robotics settings; we highlight these and explain how this framework may be used to advance robotics. Full article
(This article belongs to the Special Issue Applying the Free-Energy Principle to Complex Adaptive Systems)

13 pages, 2204 KiB  
Article
Affect-Logic, Embodiment, Synergetics, and the Free Energy Principle: New Approaches to the Understanding and Treatment of Schizophrenia
by Luc Ciompi and Wolfgang Tschacher
Entropy 2021, 23(12), 1619; https://doi.org/10.3390/e23121619 - 1 Dec 2021
Cited by 8 | Viewed by 4394
Abstract
This theoretical paper explores the affect-logic approach to schizophrenia in light of the general complexity theories of cognition: embodied cognition, Haken’s synergetics, and Friston’s free energy principle. According to affect-logic, the mental apparatus is an embodied system open to its environment, driven by bioenergetic inputs of emotions. Emotions are rooted in goal-directed embodied states selected by evolutionary pressure for coping with specific situations such as fight, flight, attachment, and others. According to synergetics, nonlinear bifurcations and the emergence of new global patterns occur in open systems when control parameters reach a critical level. Applied to the emergence of psychotic states, synergetics and the proposed energetic understanding of emotions lead to the hypothesis that critical levels of emotional tension may be responsible for the transition from normal to psychotic modes of functioning in vulnerable individuals. In addition, the free energy principle suggests that, through learning, psychotic symptoms correspond to alternative modes of minimizing free energy, which then entail distorted perceptions of the body, self, and reality. This synthetic formulation has implications for novel therapeutic and preventive strategies in the treatment of psychoses; among these are milieu-therapeutic approaches of the Soteria type that focus on a sustained reduction of emotional tension, and phenomenologically oriented methods for improving the perception of body, self, and reality. Full article
