
The Epistemology and Ethics of LLMs


What: Talk
When: 9:00 AM, Thursday 6 Jun 2024 EDT (1 hour 30 minutes)
Theme: Large Language Models: Applications, Ethics & Risks

LLMs are impressive. They can extend human cognition in various ways and can be turned into a suite of virtual assistants. Yet they have the same basic limitations as other deep learning-based systems: generalizing accurately outside their training distributions remains a problem, as their stubborn propensity to confabulate shows. Although LLMs do not take us significantly closer to AGI and, as a consequence, do not by themselves pose an existential risk to humankind, they do raise serious ethical issues related, for instance, to deskilling, disinformation, manipulation and alienation. Extended cognition, together with the ethical risks posed by LLMs, lends support to concerns about maintaining “genuine human control over AI.”


