Semantic grounding of concepts and meaning in brain-constrained neural networks
What: Talk
When: 11:00 AM, Monday 3 Jun 2024 EDT (1 hour 30 minutes)
Theme: Large Language Models & Multimodal Grounding
Neural networks can be used to deepen our understanding of the brain basis of higher cognition, including capacities specific to humans. Simulations with brain-constrained networks give rise to conceptual and semantic representations when objects of similar type are experienced, processed and learnt, a process driven by feature correlations. If neurons are sensitive to semantic features, interlinked assemblies of such neurons can represent concrete concepts. Adding verbal labels to concrete concepts augments the neural assemblies, making them more robust and easier to activate. Abstract concepts cannot be learnt directly from experience, because the instances to which an abstract concept applies are heterogeneous, so their feature correlations are small. Using the same verbal symbol, correlated with the instances of an abstract concept, changes this: verbal symbols act as correlation amplifiers, which are critical for building and learning abstract concepts that are language dependent and specific to humans.
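The correlation-amplifier idea sketched above can be illustrated with a toy numerical example. This is a hedged sketch, not the authors' brain-constrained model: the pattern sizes, noise level, and the decision to represent the verbal label as a block of identical extra units are all assumptions made here for illustration. Instances of a concrete concept are modelled as noisy copies of a shared feature prototype (high pairwise correlation); instances of an abstract concept are modelled as heterogeneous random patterns (near-zero correlation). Appending the same label units to every abstract instance raises their pairwise correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_instances, n_features, n_label = 10, 60, 12

def mean_pairwise_correlation(patterns):
    """Mean Pearson correlation over all pairs of instance patterns."""
    n = len(patterns)
    corrs = [np.corrcoef(patterns[i], patterns[j])[0, 1]
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(corrs))

# Concrete concept: every instance is the prototype with 10% of
# its binary features flipped (hypothetical noise level).
prototype = rng.random(n_features) < 0.5
concrete = np.array([np.where(rng.random(n_features) < 0.1,
                              ~prototype, prototype)
                     for _ in range(n_instances)], dtype=float)

# Abstract concept: heterogeneous instances with no shared prototype,
# so feature correlations between instances are small.
abstract = (rng.random((n_instances, n_features)) < 0.5).astype(float)

# Shared verbal label: the same extra units are co-active with every
# instance, acting as a correlation amplifier.
label = np.ones((n_instances, n_label))
abstract_labeled = np.hstack([abstract, label])

print("concrete:        ", round(mean_pairwise_correlation(concrete), 3))
print("abstract:        ", round(mean_pairwise_correlation(abstract), 3))
print("abstract + label:", round(mean_pairwise_correlation(abstract_labeled), 3))
```

Under these assumptions the concrete instances correlate strongly, the abstract ones barely at all, and adding the shared label block measurably increases the correlation among the abstract instances, which is the precondition for Hebbian assembly formation that the abstract describes.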
References
Nguyen, P. T., Henningsen-Schomers, M. R., & Pulvermüller, F. (2024). Causal influence of linguistic learning on perceptual and conceptual processing: A brain-constrained deep neural network study of proper names and category terms. Journal of Neuroscience, 44(9).
Grisoni, L., Boux, I. P., & Pulvermüller, F. (2024). Predictive brain activity shows congruent semantic specificity in language comprehension and production. Journal of Neuroscience, 44(12).