Day 1
Language models succeed in part because they share information-processing constraints with humans. These constraints stem neither from any specific neural-network architecture nor from any hardwired formal structure, but from the core task shared by language models and the brain: predicting upcoming input. I show that universals of language can be explained in terms of generic information-theoretic constraints, and that the same constraints explain language model perfor...
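The central idea of this abstract, that predicting upcoming input carries an information-theoretic cost, can be made concrete with a toy sketch. The bigram model, corpus, and smoothing choice below are illustrative assumptions of mine, not the speaker's method; they show only how surprisal (bits) quantifies how unexpected the next word is:

```python
import math
from collections import Counter

# Toy bigram "language model": estimate P(next word | current word)
# from a tiny corpus, then score continuations by surprisal.
corpus = "the dog chased the cat the cat chased the mouse".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])
vocab = set(corpus)

def p_next(w, nxt):
    """Conditional probability P(nxt | w) with add-one smoothing."""
    return (bigrams[(w, nxt)] + 1) / (unigrams[w] + len(vocab))

def surprisal(w, nxt):
    """Information-theoretic cost, in bits, of the upcoming word."""
    return -math.log2(p_next(w, nxt))

# A predictable continuation costs fewer bits than a surprising one.
print(surprisal("the", "cat"))     # low: "the cat" occurs in the corpus
print(surprisal("the", "chased"))  # higher: "chased" never follows "the"
```

The same surprisal measure is what links human reading times to language-model probabilities in the psycholinguistics literature, which is presumably the kind of constraint the talk has in mind.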
Neural networks can be used to increase our understanding of the brain basis of higher cognition, including capacities specific to humans. Simulations with brain-constrained networks give rise to conceptual and semantic representations when objects of similar type are experienced, processed and learnt. These representations are grounded in feature correlations: if neurons are sensitive to semantic features, interlinked assemblies of such neurons can represent concrete concepts. Adding verbal labels to concret...
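The correlation-based assembly formation described here can be sketched with a minimal Hebbian toy model. The feature names, episodes, and threshold below are illustrative assumptions of mine, not the speaker's simulations; the sketch shows only how co-activation statistics link feature-sensitive neurons into concept-specific clusters:

```python
import itertools

# Hypothetical feature neurons (names are illustrative only).
features = ["furry", "four_legs", "barks", "metallic", "wheels"]

# Experienced objects as sets of co-active features:
# dog-like and car-like episodes.
episodes = [
    {"furry", "four_legs", "barks"},
    {"furry", "four_legs", "barks"},
    {"metallic", "wheels"},
    {"metallic", "wheels"},
]

# Hebbian rule: neurons that fire together wire together.
weights = {pair: 0 for pair in itertools.combinations(features, 2)}
for episode in episodes:
    for pair in itertools.combinations(sorted(episode), 2):
        key = pair if pair in weights else (pair[1], pair[0])
        weights[key] += 1

# Strongly linked pairs form interlinked assemblies; here two disjoint
# clusters emerge, one per concrete concept (dog-like, car-like).
strong = [pair for pair, w in weights.items() if w >= 2]
print(strong)
```

Because dog features never co-occur with car features, no cross-cluster link is strengthened, which is the sense in which similar-type experiences carve out separate conceptual representations.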
Remarkable progress in AI, far surpassing the expectations of just a few years ago, is rapidly changing science and society. Never before has a technology been deployed so widely and so quickly, yet with so little understanding of its fundamental principles. I will argue that developing a mathematical theory of deep learning is necessary for a successful AI transition and, furthermore, that such a theory may well be within reach. I will disc...
Susan Schneider will discuss some of the implications of designing AI systems that might exhibit some form of consciousness, including the ethical challenges of "AI zombies," which behave as if conscious but lack subjective experience. The Turing Test could be extended to evaluate an AI's ability to engage in philosophical discussions on metaphysics and existence. An AI could be tested for whether it exhibits the high levels of "integrated information" posited by Tononi, Friston, Levin and ot...