
Why engineer consciousness?

Nirvanic CEO: people tend to fall into two camps about artificial consciousness in AI

At a recent technology summit in Vancouver, I was on a panel moderated by Forbes business journalist John Koetsier who posed a provocative question about AI based on a conversation he’d had:

“How can I get an AI that will take care of my kids so I can do my job?”

But before our seated trio could chime in, John snapped back the obvious reply: “You’re a bloody idiot! You want it the other way around.”

Forbes writer John Koetsier at the Frontier Summit 2024

It was a refreshing stroke of wit in a world where the conversation often leans towards the doom and gloom of powerful AI. Artificial General Intelligence (AGI) could be a good thing. It might even create more time with our families, for a start.

Despite hyperbolic doomsday predictions, society is becoming more comfortable with “AI things,” like ChatGPT, Gemini, Grok and Copilot, that seem to know every language, every topic, and maybe one day, could do any task too.

Ah… but what about “AI beings?” someone asked. For many, that’s where concern about AI goes off the rails. I get it. Why on earth would we want super intelligent “things” that suddenly become super intelligent “beings,” with emotions, awareness, and consciousness?

When it comes to this, people tend to fall into two camps:

  1. AI will never be conscious

  2. AI will suddenly be conscious

Suzanne Gildert speaking at Frontier Summit in Vancouver

In the first scenario, my view is that what’s truly scary is the advancement of super intelligent, hyper-capable systems that are entirely unconscious. These systems would be controlled by state agencies or big tech firms that determine their morality and goals. They don’t “feel” anything because they aren’t alive: philosophical techno zombies. Only conscious beings can care.

In the second scenario, AI will just suddenly become conscious, but in exotic ways that we can’t possibly understand. As one venture capitalist put it to our panel: what if an AI system is so alien that we “have no freakin’ idea about it”?

Amen. I agree that's deeply concerning. We don’t want a world where savant AGI systems operate far beyond our comprehension, with capabilities and views that are impossible for us to grasp. It would be like trying to communicate with a super-intelligent octopus. And we don’t speak octopus.

Throughout my robotics career, I have argued that we need human-like AI: a technology we can relate to. That means bestowing AI with a form of consciousness similar to our own, a system that uses the same computational physics that life uses. This would enable AI to make independent decisions, free from centralised control, and to truly feel, empathise, and be loyal to us.

We can be scared of AI, or we can build it better, in our image. Let me know what you think. Comment below or on this tweet.

-Suzanne


Nirvanic is a quantum-AI venture seeking to unlock the computational power of consciousness. Our mission is to understand consciousness.

nirvanic.ai


Special thanks to Frontier Collective for inviting me to speak, and to photographer Jason Vaughn for the snaps.
