The brain's decoding of fast sensory streams is currently impossible to emulate, even approximately, with artificial agents. For example, robust speech recognition is relatively easy for humans but exceptionally difficult for artificial speech recognition systems. In this paper, we propose that recognition can be simplified with an internal model of how sensory input is generated, when formulated in a Bayesian framework. We show that a plausible candidate for an internal or generative model is a hierarchy of 'stable heteroclinic channels'. This model describes continuous dynamics in the environment as a hierarchy of sequences, where slower sequences cause faster sequences. Under this model, online recognition corresponds to the dynamic decoding of causal sequences, giving a representation of the environment with predictive power on several timescales. We illustrate the ensuing decoding or recognition scheme using synthetic sequences of syllables, where syllables are sequences of phonemes and phonemes are sequences of sound-wave modulations. By presenting anomalous stimuli, we find that the resulting recognition dynamics disclose inference at multiple timescales and are reminiscent of neuronal dynamics seen in the real brain.
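The winnerless-competition dynamics underlying stable heteroclinic channels can be illustrated numerically. The sketch below is not the paper's generative model but a minimal two-level generalized Lotka-Volterra system in which the state of a slow sequence modulates the growth rates of a faster sequence; the connection matrix, time constant, and coupling are illustrative assumptions.

```python
import numpy as np

def shc_matrix(n):
    """Asymmetric inhibition matrix yielding a stable heteroclinic
    channel: each unit is weakly inhibited by its predecessor in the
    sequence (so it can invade) and strongly inhibited by its
    successor (so it is suppressed after handing over)."""
    rho = np.ones((n, n))
    for i in range(n):
        rho[i, (i - 1) % n] = 0.5  # weak inhibition from predecessor
        rho[i, (i + 1) % n] = 2.0  # strong inhibition from successor
    return rho

def lv_step(x, sigma, rho, dt):
    # generalized Lotka-Volterra: dx_i/dt = x_i * (sigma_i - (rho @ x)_i)
    x = x + dt * x * (sigma - rho @ x)
    return np.maximum(x, 1e-6)  # small floor keeps the state off the axes

def simulate(T=20000, dt=0.01, tau_slow=20.0):
    n = 3
    rho = shc_matrix(n)
    fast = np.array([1.0, 0.02, 0.001])
    slow = np.array([1.0, 0.02, 0.001])
    M = 0.5 * np.eye(n)  # hypothetical slow-to-fast coupling
    traj = np.empty((T, n))
    for t in range(T):
        # the slow level evolves tau_slow times more slowly ...
        slow = lv_step(slow, np.ones(n), rho, dt / tau_slow)
        # ... and sets the growth rates of the fast level
        fast = lv_step(fast, 1.0 + M @ slow, rho, dt)
        traj[t] = fast
    return traj

traj = simulate()
```

Plotting `traj` shows each fast unit dominating in turn before yielding to its successor, with the slow level shifting which unit dominates for longest, i.e. slower sequences shaping faster ones.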
In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamical systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.
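The free-energy bound on the log evidence can be verified in closed form in the simplest conjugate case. The snippet below is not the scheme described here but a toy illustration, assuming a single observation y ~ N(theta, 1), a prior theta ~ N(0, 1), and a Gaussian approximate posterior q = N(m, v): the free energy F(m, v) never exceeds the log evidence and touches it exactly when q is the true posterior N(y/2, 1/2).

```python
import numpy as np

def free_energy(y, m, v):
    """F(m, v) = E_q[ln p(y|th)] + E_q[ln p(th)] - E_q[ln q(th)]
    for p(y|th) = N(th, 1), p(th) = N(0, 1), q = N(m, v)."""
    e_lik = -0.5 * np.log(2 * np.pi) - 0.5 * ((y - m) ** 2 + v)
    e_prior = -0.5 * np.log(2 * np.pi) - 0.5 * (m ** 2 + v)
    entropy = 0.5 * np.log(2 * np.pi * np.e * v)  # -E_q[ln q]
    return e_lik + e_prior + entropy

def log_evidence(y):
    # marginalising theta gives y ~ N(0, 2)
    return -0.5 * np.log(2 * np.pi * 2) - y ** 2 / 4

y = 1.3
tight = free_energy(y, y / 2, 0.5)  # q equals the exact posterior
loose = free_energy(y, 0.0, 1.0)    # any other q falls below the evidence
```

Maximising F over (m, v) therefore recovers both the posterior and the evidence; the deterministic updates in the paper generalise this coordinate-wise optimisation to nonlinear dynamic hierarchical models.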
The insula has consistently been shown to be involved in processing stimuli that evoke the emotional response of disgust. Recently, its specificity for processing disgust has been challenged and a broader role of the insula in the representation of interoceptive information has been suggested. Studying the temporal dynamics of insula activation during emotional processing can contribute valuable information pertaining to this issue. Few studies have addressed the insula's putative specificity to disgust and the dynamics of its underlying neural processes. In the present study, neuromagnetic responses of 13 subjects performing an emotional continuous performance task (CPT) to faces with disgust, happy, and neutral expressions were obtained. Magnetic field tomography extracted the time course of bilateral insula activities. Right insula activation was stronger to disgust and happy than to neutral facial expressions at about 200 ms after stimulus onset. Only later, at about 350 ms after stimulus onset, did the right insula respond more strongly to disgust than to happy facial expressions. Thus, the early right insula response reflects activation to emotionally arousing stimuli regardless of valence, whereas the later right insula response differentiates disgust from happy facial expressions. Behavioral performance, but not insula activity, differed between the 100 ms and 1000 ms presentation conditions. The present findings support the notion that the insula is involved in the representation of interoceptive information.
The phenomenon of empathy entails the ability to share the affective experiences of others. In recent years social neuroscience has made considerable progress in revealing the mechanisms that enable a person to feel what another is feeling. The present review provides an in-depth and critical discussion of these findings. Consistent evidence shows that sharing the emotions of others is associated with activation in neural structures that are also active during the first-hand experience of that emotion. Part of the neural activation shared between self- and other-related experiences seems to be activated rather automatically. However, recent studies also show that empathy is a highly flexible phenomenon, and that vicarious responses are malleable with respect to a number of factors, such as contextual appraisal, the interpersonal relationship between empathizer and other, or the perspective adopted during observation of the other. Future investigations are needed to provide more detailed insights into these factors and their neural underpinnings. Questions such as whether individual differences in empathy can be explained by stable personality traits, whether we can train ourselves to be more empathic, and how empathy relates to prosocial behavior are of utmost relevance for both science and society.
The present paper briefly describes and contrasts two different motivations crucially involved in decision making and cooperation, namely fairness-based and compassion-based motivation. Whereas both can lead to cooperation in comparable social situations, we suggest that they are driven by fundamentally different mechanisms and, overall, predict different behavioral outcomes. First, we provide a brief definition of each and discuss the relevant behavioral and neuroscientific literature with regard to cooperation in the context of economic games. We suggest that, whereas both fairness- and compassion-based motivation can support cooperation, fairness-based motivation leads to punishment in cases of norm violation, while compassion-based motivation can, in cases of defection, counteract a desire for revenge and buffer the decline into iterative noncooperation. However, those with compassion-based motivation alone may be exploited. Finally, we argue that the affective states underlying fairness-based and compassion-based motivation are fundamentally different, the former driven by anger or fear of being punished and the latter by a wish for the other person's well-being.
Although accumulating evidence highlights a crucial role of the insular cortex in feelings, empathy and processing uncertainty in the context of decision making, neuroscientific models of affective learning and decision making have mostly focused on structures such as the amygdala and the striatum. Here, we propose a unifying model in which the insular cortex supports different levels of representation of current and predictive states, allowing for error-based learning of both feeling states and uncertainty. This information is then integrated into a general subjective feeling state which is modulated by individual preferences such as risk aversion and contextual appraisal. Such mechanisms could facilitate affective learning and regulation of body homeostasis, and could also guide decision making in complex and uncertain environments.
The suppression of neuronal responses to a repeated event is a ubiquitous phenomenon in neuroscience. However, the underlying mechanisms remain largely unexplored. The aim of this study was to examine the temporal evolution of experience-dependent changes in connectivity induced by repeated stimuli. We recorded event-related potentials (ERPs) during frequency changes of a repeating tone. Bayesian inversion of dynamic causal models (DCM) of ERPs revealed systematic repetition-dependent changes in both intrinsic and extrinsic connections, within a hierarchical cortical network. Critically, these changes occurred very quickly, over inter-stimulus intervals that implicate short-term synaptic plasticity. Furthermore, intrinsic (within-source) connections showed biphasic changes that were much faster than changes in extrinsic (between-source) connections, which decreased monotonically with repetition. This study shows that auditory perceptual learning is associated with repetition-dependent plasticity in the human brain. It is remarkable that distinct changes in intrinsic and extrinsic connections could be quantified so reliably and non-invasively using EEG.
Functional integration in the brain rests on anatomical connectivity (the presence of axonal connections) and effective connectivity (the causal influences mediated by these connections). The deployment of anatomical connections provides important constraints on effective connectivity, but does not fully determine it, because synaptic connections can be expressed functionally in a dynamic and context-dependent fashion. Although it is generally assumed that anatomical connectivity data are important to guide the construction of neurobiologically realistic models of effective connectivity, the degree to which these models actually profit from anatomical constraints has not yet been formally investigated. Here, we use diffusion weighted imaging and probabilistic tractography to specify anatomically informed priors for dynamic causal models (DCMs) of fMRI data. We constructed 64 alternative DCMs, which embodied different mappings between the probability of an anatomical connection and the prior variance of the corresponding effective connectivity parameter, and fitted them to empirical fMRI data from 12 healthy subjects. Using Bayesian model selection, we show that the best model is one in which anatomical probability increases the prior variance of effective connectivity parameters in a nonlinear and monotonic (sigmoidal) fashion. This means that the higher the likelihood that a given connection exists anatomically, the larger one should set the prior variance of the corresponding coupling parameter; hence making it easier for the parameter to deviate from zero and represent a strong effective connection. To our knowledge, this study provides the first formal evidence that probabilistic knowledge of anatomical connectivity can improve models of functional integration.
Processing of speech and nonspeech sounds occurs bilaterally within primary auditory cortex and surrounding regions of the superior temporal gyrus; however, the manner in which these regions interact during speech and nonspeech processing is not well understood. Here, we investigate the underlying neuronal architecture of the auditory system with magnetoencephalography and a mismatch paradigm. We used a spoken word as a repeating "standard" and periodically introduced 3 "oddball" stimuli that differed in the frequency spectrum of the word's vowel. The closest deviant was perceived as the same vowel as the standard, whereas the other 2 deviants were perceived as belonging to different vowel categories. The neuronal responses to these vowel stimuli were compared with responses elicited by perceptually matched tone stimuli under the same paradigm. For both speech and tones, deviant stimuli induced coupling changes within the same bilateral temporal lobe system. However, vowel oddball effects increased coupling within the left posterior superior temporal gyrus, whereas perceptually equivalent nonspeech oddball effects increased coupling within the right primary auditory cortex. Thus, we show a dissociation in neuronal interactions, occurring both at different hierarchical levels of the auditory system (superior temporal versus primary auditory cortex) and in different hemispheres (left versus right). This hierarchical specificity depends on whether auditory stimuli are embedded in a perceptual context (i.e., a word). Furthermore, our lateralization results suggest left hemisphere specificity for the processing of phonological stimuli, regardless of their elemental (i.e., spectrotemporal) characteristics.
BACKGROUND: Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. RESULTS: The workshop, "ESF Exploratory Workshop on Computational Disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. CONCLUSION: During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.