Yesterday we published a new mini-review article in Trends in Cognitive Sciences. Needless to say we’re pretty happy about this. Our article “Unravelling the Neurobiology of Interoceptive Inference” discusses two recent landmark papers in the systems neuroscience of interoception, and proposes a novel “interoceptive self-inference” model of insula function to account for these new findings.
You can read the article here: https://www.sciencedirect.com/science/article/abs/pii/S1364661320300541 An open-access preprint is available here: https://psyarxiv.com/7xgkr

What exactly is "self-inference"? This is something we've been working on for a number of years. The basic concept is that the brain not only encodes predictions of interoceptive states, but uses those states to estimate its own expected uncertainty, or precision. Expected uncertainty is critical for optimizing learning, attention, and metacognition. Our model then suggests that when we estimate how reliable or certain our future beliefs and sensations will be, we take into account the constant influence of the body on the noise levels of our sensory apparatus. Put simply, our model argues that the "state noise" term considered in most formal accounts of decision-making is in large part driven by visceral fluctuations, and that the brain uses interoception to sample and control these noise trajectories, i.e., through metacognitive active inference.

So what has the 'self' got to do with this? That is something we are continuously expanding on: in the next year we have another major review in the pipeline, as well as a book chapter focused on consciousness science. In a very basic sense, we call it "self-inference" because the brain combines predictions about the body and the world to produce an estimate of its own reliability. You could think of it as something like a hybrid of global workspace, free-energy principle (FEP), and higher-order thought theories of consciousness. In practical terms, it is a bit like a computing cluster that monitors the temperature of its CPUs to limit thermal throttling, spreading load to cooler CPUs when needed. By monitoring our own noise levels, we build more precise estimates of expected uncertainty, which can then be leveraged to determine how much we should update our models of the world (and self) when we encounter surprising events.
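To make the last point concrete, here is a minimal toy sketch (not the authors' actual model, and all variable names are illustrative assumptions) of a precision-weighted Gaussian belief update in which estimated sensory noise is partly driven by a fluctuating bodily state. The learning rate falls as the inferred visceral contribution to state noise rises, so the same surprising observation moves beliefs less:

```python
# Toy illustration of precision-weighted belief updating with a bodily
# noise component. Names like `visceral_arousal` are hypothetical.

def precision_weighted_update(prior_mean, prior_var, observation, sensory_var):
    """Bayesian update of a Gaussian belief about a hidden state.

    The gain (learning rate) is the prior variance relative to total
    variance, so higher expected sensory noise yields smaller updates.
    """
    gain = prior_var / (prior_var + sensory_var)
    post_mean = prior_mean + gain * (observation - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

# Suppose visceral fluctuations inflate expected sensory noise: the same
# surprising observation (2.0, against a prior of 0.0) then produces a
# smaller belief update in the "aroused" state than in the "calm" state.
baseline_noise = 1.0
visceral_arousal = 3.0  # hypothetical extra noise attributed to the body

calm_mean, _ = precision_weighted_update(0.0, 1.0, 2.0, baseline_noise)
aroused_mean, _ = precision_weighted_update(0.0, 1.0, 2.0,
                                            baseline_noise + visceral_arousal)

print(calm_mean, aroused_mean)  # prints: 1.0 0.4
```

In this caricature, "self-inference" would correspond to estimating the `visceral_arousal` term itself, so that bodily noise is discounted rather than mistaken for news about the world.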
The implications of the self-inference model are fascinating, to us at least. For one, we pick up on existing strands of theory suggesting that Bayesian metacognitive inferences over expected precision underlie selfhood and perhaps even consciousness itself. By inferring (and controlling, through active inference) our own embodied noise trajectories, we are effectively estimating the influence of 'the self' on our re-afferent data streams. That is to say, we are actively accounting for the minimal self, or minimal embodied perspective, as the constant source of noise influencing sensory flow and beliefs. The implication here is that disruptions in self-belief can alter our visceral experiences (e.g., embodied hallucination), and conversely, disrupted interoception can 'leak' into our perceptual and metacognitive inferences, producing a variety of affective and decision biases. At the extreme, we think that maladaptive self-inference underlies a wide spectrum of psychiatric and even neurological disorders.
But that is just a sneak peek. You can follow the development of the self-inference model (perhaps we'll call it MISE, for "metacognitive and interoceptive self-inference") in our recent publications:

1. A forthcoming BBS commentary article: https://psyarxiv.com/5j2f3/
2. An early book chapter with Manos Tsakiris: https://www.oxfordscholarship.com/view/10.1093/oso/9780198811930.001.0001/oso-9780198811930-chapter-2
3. Our recent computational model of interoceptive self-inference: https://www.biorxiv.org/content/10.1101/603928v1.full.pdf

Stay tuned for more! We have one or two more major theory articles in the works this year, which should fully 'flesh' out the self-inference model and its implications for the phenomenology of the self, consciousness, and computational psychiatry.