Day 6
Sub Sessions
The unprecedented success of LLMs in carrying out linguistic interactions disguises the fact that, on closer inspection, their knowledge of meaning and their inference abilities remain quite limited and differ from human ones. They generate human-like texts but still fall short of fully understanding them. I will refer to this as the “semantic gap” of LLMs. Some claim that this gap stems from the lack of grounding in text-only LLMs. I instead argue that the problem lies in the very typ...
Despite considerable effort, attempts to detect autism using genome-wide assays or brain scans have yielded diminishing returns. In contrast, the clinical intuition of healthcare professionals, built on longstanding first-hand experience, remains the best way to diagnose autism. In an alternative approach, we used deep learning to dissect and interpret the mind of the clinician. After pre-training on hundreds of millions of general sentences, we applied large language models (LLMs) to >4000 fr...
Some say large language models are stochastic parrots, mere imitators that can't understand. Others say that reasoning, understanding, and other humanlike capacities may be emergent in these models. I'll analyze these issues, examining arguments for each view and distinguishing different varieties of "understanding" that LLMs may or may not possess. I'll also connect the issue of LLM understanding to the issues of AI consciousness and AI moral status in...