The Missing Links: What makes LLMs still fall short of language understanding?

What: Talk
When: 9:00 AM, Monday 10 Jun 2024 EDT (1 hour 30 minutes)
Theme: Large Language Models & Understanding

The unprecedented success of LLMs in carrying out linguistic interactions disguises the fact that, on closer inspection, their knowledge of meaning and their inference abilities are still quite limited and different from human ones. They generate human-like texts, but still fall short of fully understanding them. I will refer to this as the “semantic gap” of LLMs. Some claim that this gap depends on the lack of grounding of text-only LLMs. I instead argue that the problem lies in the very type of representations these models acquire. They learn highly complex association spaces that correspond only partially to truly semantic and inferential ones. This prompts the need to investigate the missing links to bridge the gap between LLMs as sophisticated statistical engines and full-fledged semantic agents.

References

Lenci, A., & Sahlgren, M. (2023). Distributional Semantics. Cambridge: Cambridge University Press.

Lenci, A. (2023). Understanding Natural Language Understanding Systems: A Critical Analysis. Sistemi Intelligenti. arXiv preprint arXiv:2303.04229.

Lenci, A., & Padó, S. (2022). Perspectives for natural language processing between AI, linguistics and cognitive science. Frontiers in Artificial Intelligence, 5, 1059998.

Lenci, A. (2018). Distributional models of word meaning. Annual Review of Linguistics, 4, 151-171.
