Lecture Topic: Active Inference and Deep Temporal Models
Annotation: The lecture will discuss deep temporal models based on Markov decision processes. These models build on the previously developed theory of active inference (active inference: the brain's active gathering of information to test and update its predictions about the surrounding world) and are used to simulate both behavior and electrophysiological data within the framework of hierarchical generative models of transitions between discrete states. Inverting such models (inferring hidden causes from their observable consequences) rests on nested sequential inference, organized so that a state change at a higher hierarchical level entails multiple state changes at the level below. The deep temporal structure of these models means that evidence accumulates at different time scales, allowing inference about narratives (that is, temporally ordered sequences of scenes), as can be observed, for example, when reading text. This behavior is illustrated in the context of Bayesian belief updating, and the corresponding theories of neural processing, to explain epistemic foraging: the search for and accumulation of information aimed at resolving uncertainty, as observed during reading. Computer simulations of these processes reproduce experimental data on perisaccadic neural activity and local field potentials; in particular, this bears on studies of evidence accumulation and on recordings of place-cell activity. Finally, the deep structure of these models is used to simulate responses to local (e.g., font type) and global (e.g., semantic) violations in stimulus sequences, reproducing effects such as the mismatch negativity and the P300 potential, respectively.
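To make the hierarchical idea concrete: the sketch below is a hypothetical toy model, not the model presented in the lecture. All state counts, matrix values, and function names are illustrative assumptions. It shows a two-level generative model over discrete states in which a single higher-level "scene" state conditions several lower-level transitions (so one change at the top entails multiple changes below), and a simple Bayesian update that accumulates evidence for the scene from the faster, lower-level sequence.

```python
import numpy as np

# Hypothetical two-level discrete-state model (illustrative only).
rng = np.random.default_rng(1)

# Lower-level transition dynamics, conditioned on the higher-level scene:
# B_low[scene, s, s'] = P(next state s' | current state s, scene).
B_low = np.array([
    [[0.8, 0.1, 0.1],   # dynamics under scene 0
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]],
    [[0.1, 0.8, 0.1],   # dynamics under scene 1
     [0.1, 0.1, 0.8],
     [0.8, 0.1, 0.1]],
])

def generate(scene, T_low=5, s0=0):
    """One higher-level 'tick' unfolds into T_low lower-level state changes."""
    s, seq = s0, []
    for _ in range(T_low):
        s = int(rng.choice(3, p=B_low[scene, s]))
        seq.append(s)
    return seq

def update_scene_belief(prior, seq, s0=0):
    """Accumulate evidence for the scene from a lower-level sequence (Bayes rule)."""
    log_post = np.log(prior)
    for scene in range(2):
        s = s0
        for s_next in seq:
            log_post[scene] += np.log(B_low[scene, s, s_next])
            s = s_next
    post = np.exp(log_post - log_post.max())   # normalize in log space for stability
    return post / post.sum()

seq = generate(scene=1)                        # fast sequence produced by one slow state
belief = update_scene_belief(np.array([0.5, 0.5]), seq)
print(belief)                                  # posterior over the two scenes
```

In a full deep temporal model the posterior over the slow state would in turn constrain inference at the fast level; here only the bottom-up half of that exchange is sketched, to show evidence accumulating at the slower time scale.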
Date: September 23, time: 17:00-19:00. Address: Moscow, Sretenka Street, 29.
Registration via link