
“Understanding AI”: Semantic Grounding in Large Language Models

What: Talk
When: 9:00 AM, Friday 14 Jun 2024 EDT (1 hour 30 minutes)
Theme: Large Language Models & Multimodal Grounding

Do LLMs understand the meaning of the texts they generate? Do they possess semantic grounding? And how could we determine whether and what they understand? We have recently witnessed a generative turn in AI, as generative models, including LLMs, have become key to self-supervised learning. To assess the question of semantic grounding, I distinguish and discuss five methodological approaches. The most promising is to apply core assumptions of theories of meaning from the philosophy of mind and language to LLMs. Grounding proves to be a gradual affair that comes in three dimensions: functional, social, and causal grounding. LLMs show basic evidence of grounding in all three dimensions. A strong argument in their favor is that LLMs develop world models. Hence, LLMs are neither stochastic parrots nor semantic zombies, but already understand the language they generate, at least in an elementary sense.

References

Lyre, H. (2024). “Understanding AI”: Semantic Grounding in Large Language Models. arXiv preprint arXiv:2402.10992.

Lyre, H. (2022). Neurophenomenal structuralism. A philosophical agenda for a structuralist neuroscience of consciousness. Neuroscience of Consciousness, 2022(1), niac012.

Lyre, H. (2020). The state space of artificial intelligence. Minds and Machines, 30(3), 325–347.
