Joscha Bach is a German cognitive scientist, AI researcher, and philosopher known for his work on cognitive architectures, artificial intelligence, mental representation, emotion, social modeling, multi-agent systems, and the philosophy of mind.
http://bach.ai/
Steve and Joscha discuss:
(00:00) - Introduction
(01:26) - Growing up in the forest in East Germany
(06:23) - Academia: early neural net pioneers, CS and Philosophy
(10:17) - The fall of the Berlin Wall
(14:57) - Commodore 64 and early programming experiences
(15:29) - AGI timeline and predictions
(19:35) - Scaling hypothesis, beyond Transformers, universality of information structures and world models
(25:29) - Consciousness
(41:11) - The ethics of brain interventions, zombies, and the Turing test
(43:43) - LLMs and simulated phenomenology
(46:34) - The future of consciousness research
(48:44) - Cultural perspectives on suffering
(52:19) - AGI and humanity's future
(58:18) - Simulation hypothesis
(01:03:33) - Liquid AI: Innovations and goals
(01:16:02) - Philosophy of Identity: the Transporter Problem, Is there anything beyond memory records?
Audio-only version and transcript: https://www.manifold1.com/episodes/joscha-bach-consciousness-and-agi-76
Is consciousness even a scientific (i.e., objective) phenomenon? Obviously not. The correlates of consciousness, based on first-person reports (or functional MRI scans), are certainly worthy of investigation, but that is not quite the same thing.
In any case, human consciousness is by all accounts exceedingly complex. Why not start by trying to understand the most primitive forms of subjective experience, which we presumably share with most if not all of the animal kingdom: pleasure and pain?
We might start by asking whether pleasure and pain are separate, independent phenomena, as we usually think of them, or whether they are correlative, like the two sides of a coin. In the latter case there might be a symmetry between them such that they just balance out over the lifetime of every sentient creature.
By way of analogy, think of a spring, the stretching of which is experienced as pain, the relaxation of which is experienced as pleasure.
The best book I know of on how intelligence might have evolved from primitive sentient creatures is Max Bennett's "A Brief History of Intelligence." All that is required as a starting point is a creature that (a) can move, (b) can remember, and (c) can distinguish (feel) pleasure and pain, in the sense that it seeks to avoid pain and to experience pleasure. From there, through a process of Darwinian evolution, you can imagine (or at least I can) something as complex as human consciousness gradually emerging, with human emotions, for example (which are, at least in my opinion, the most fundamental ingredients of complex human consciousness), being highly modified forms of pleasure and pain.
But to see how this might happen (along with the emergence of non-emotional qualia, by the way, which are child's play in comparison), you first have to read Bennett's amazingly simple little book. Here is a link: https://www.amazon.com/Brief-History-Intelligence-Humans-Breakthroughs/dp/0063286343
One corollary of Bennett's thesis, I think, is that for computers to become truly conscious they must be able to actually experience pleasure and pain. Of course, I may be wrong about this.