“Understanding AI”: Semantic Grounding in Large Language Models
What:
Talk
Part of:
When:
9:00 AM, Friday, June 14, 2024 EDT
(1 hour 30 minutes)
Thème:
Large Language Models & Multimodal Grounding
Do LLMs understand the meaning of the texts they generate? Do they possess semantic grounding? And how could we determine whether and what they understand? We have recently witnessed a generative turn in AI, since generative models, including LLMs, are key to self-supervised learning. To assess the question of semantic grounding, I distinguish and discuss five methodological approaches. The most promising approach is to apply core assumptions of theories of meaning from the philosophy of mind and language to LLMs. Grounding proves to be a gradual affair, with a three-dimensional distinction between functional, social, and causal grounding. LLMs show basic evidence in all three dimensions. A strong argument is that LLMs develop world models. Hence, LLMs are neither stochastic parrots nor semantic zombies, but already understand the language they generate, at least in an elementary sense.
References
Lyre, H. (2024). "Understanding AI": Semantic Grounding in Large Language Models. arXiv preprint arXiv:2402.10992.
Lyre, H. (2022). Neurophenomenal structuralism. A philosophical agenda for a structuralist neuroscience of consciousness. Neuroscience of Consciousness, 2022(1), niac012.
Lyre, H. (2020). The state space of artificial intelligence. Minds and Machines, 30(3), 325-347.