“Understanding AI”: Semantic Grounding in Large Language Models

What: Talk
Part of:
When: 9:00 AM, Friday, June 14, 2024 EDT (1 hour 30 minutes)
Theme: Large Language Models & Multimodal Grounding
Do LLMs understand the meaning of the texts they generate? Do they possess semantic grounding? And how could we determine whether, and what, they understand? We have recently witnessed a generative turn in AI, since generative models, including LLMs, are key to self-supervised learning. To assess the question of semantic grounding, I distinguish and discuss five methodological approaches. The most promising approach is to apply core assumptions of theories of meaning in the philosophy of mind and language to LLMs. Grounding proves to be a gradual affair, with a three-dimensional distinction between functional, social, and causal grounding; LLMs show basic evidence in all three dimensions. A strong argument is that LLMs develop world models. Hence, LLMs are neither stochastic parrots nor semantic zombies, but already understand the language they generate, at least in an elementary sense.
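The world-model claim has an empirical side worth illustrating. Evidence of this kind typically comes from probing experiments, in which a simple classifier is trained to read off world-state features (for instance, board positions in game-playing transformers) from a model's hidden activations. The sketch below is not from Lyre's paper; it is a generic illustration, using synthetic stand-in data, of what such a linear probe and its shuffled-label control look like.

```python
# Minimal sketch of a linear-probe experiment of the kind used to test
# "world model" claims about LLMs. All data here are synthetic
# placeholders for real hidden activations and world-state labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for hidden states extracted from one LLM layer:
# n samples of d-dimensional activations, each paired with a binary
# world-state label (e.g., "is this board square occupied?").
n, d = 2000, 256
w_true = rng.normal(size=d)            # synthetic ground-truth direction
X = rng.normal(size=(n, d))            # placeholder activations
y = (X @ w_true + rng.normal(scale=2.0, size=n)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Linear probe: try to decode the world-state feature from activations.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = probe.score(X_te, y_te)

# Shuffled-label control: accuracy should fall back to chance (~0.5).
control = LogisticRegression(max_iter=1000) \
    .fit(X_tr, rng.permutation(y_tr)).score(X_te, y_te)

print(f"probe accuracy:   {acc:.3f}")
print(f"shuffled control: {control:.3f}")
```

A probe that decodes the feature well above the shuffled control indicates that the activations linearly encode that aspect of the world state, which is the kind of finding cited as evidence that LLMs build internal world models.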

References

Lyre, H. (2024). “Understanding AI”: Semantic Grounding in Large Language Models. arXiv preprint arXiv:2402.10992.

Lyre, H. (2022). Neurophenomenal structuralism. A philosophical agenda for a structuralist neuroscience of consciousness. Neuroscience of Consciousness, 2022(1), niac012.

Lyre, H. (2020). The state space of artificial intelligence. Minds and Machines, 30(3), 325-347.

Holger Lyre

Speaker
