Computational Irreducibility, Minds, and Machine Learning
What:
Talk
Part of:
When:
1:30 PM, Friday, June 7, 2024 EDT
(1 hour 30 minutes)
Theme:
Large Language Models: Applications, Ethics & Risks
Whether we call it perception, measurement, or analysis, forming an impression of the world in our minds is how we humans engage with it. Human language, mathematics, and logic are ways to formalize the world; a new and still more powerful one is computation. I’ve long wondered about “alien minds” and what it might be like to see things from their point of view. Now, in AI, we finally have an accessible form of alien mind. Nobody expected this, not even its creators: ChatGPT has burst onto the scene as an AI capable of writing at a convincingly human level. But how does it really work? What’s going on inside its “AI mind”? After AI’s surprise successes, there’s a fairly widespread belief that eventually AI will be able to “do everything”, or at least everything we currently do. So what about science? Over the centuries we humans have made incremental progress, gradually building up what’s now essentially the single largest intellectual edifice of our civilization. The success of ChatGPT brings together the latest neural net technology with foundational questions about language and human thought posed by Aristotle more than two thousand years ago.