Stochastic Parrots or Emergent Reasoners: Can Large Language Models Understand?
What:
Talk
When:
1:30 PM, Monday, June 10, 2024 EDT
(1 hour 30 minutes)
Theme:
Large Language Models & Understanding
Some say large language models are stochastic parrots, mere imitators incapable of understanding. Others say that reasoning, understanding, and other humanlike capacities may be emergent capacities of these models. I'll give an analysis of these issues, assessing arguments for each view and distinguishing different varieties of "understanding" that LLMs may or may not possess. I'll also connect the issue of LLM understanding to the issue of AI consciousness, and to the issue of AI moral status in turn.
References
Chalmers, D. J. (2023). Could a large language model be conscious? arXiv preprint arXiv:2303.07103.
Chalmers, D. J. (2022). Reality+: Virtual worlds and the problems of philosophy. Penguin.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19.
Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.