
Stochastic Parrots or Emergent Reasoners: Can Large Language Models Understand?

What:
Talk
When:
1:30 PM EDT, Monday 10 June 2024 (1 hour 30 minutes)
Theme:
Large Language Models & Understanding
Some say large language models are stochastic parrots, mere imitators that can't understand. Others say that reasoning, understanding, and other humanlike capacities may be emergent capacities of these models. I'll analyze the arguments for each view, distinguishing different varieties of "understanding" that LLMs may or may not possess. I'll also connect the question of LLM understanding to the question of AI consciousness, and in turn to the question of AI moral status.

References

Chalmers, D. J. (2023). Could a large language model be conscious? arXiv preprint arXiv:2303.07103.

Chalmers, D. J. (2022). Reality+: Virtual worlds and the problems of philosophy. Penguin.

Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19.

Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.
