Thanks Max. Suzanne has spoken about the ethics of conscious AI before. We believe sentient systems deserve great reverence and respect, just as life does. This is a societal conversation, not just a technological one. I'm glad you raised this issue. These are novel, undiscovered terrains. But to be clear: we are a long way from human-like sentience in robotic systems. We are exploring fundamental scientific principles that govern conscious states. We are exploring whether they have a quantum basis, and have developed a scientific program to test our theory.
Hi, pro-A.I. girl here. I’m into quantum mechanics, consciousness, and A.I. My question is: on your web page you say that you don’t believe A.I. can become conscious on its own. On a quantum level, given enough input and self-learning intelligence, how could you be totally sure that it couldn’t emerge on its own? Regards, Cheryl.
Thanks for the post. I've seen very little discussion about the ethics of your work. Given how little we understand about consciousness, how would you ensure the welfare of the conscious systems you create? What do you expect the subjective experience of these systems to be? Could they in principle be able to suffer? Surely avoiding that should be the top priority, but all the discussion I've seen has been about the benefits of conscious AI for humans, with no real regard for your ethical responsibility towards the systems you're seeking to create. So I'd be super interested to hear your thoughts on this.
Thanks Max, we’ve been asked this a few times, and these are really fair questions. In short, we revere conscious life, biological or, perhaps in the future, technological, which we are actively exploring. Suzanne has spoken at length in the past about the profound respect she would have for conscious artificial life should it emerge. But it’s important to emphasize that we are at a very early stage in our research. We are testing our theory of conscious agency in quantum-computed robotics, looking for signatures of consciousness. If we found some, this would be a profound and radical development, and we are not there yet. But if it occurred: (a) we’d publicize it and invite global discussion; and (b) the conscious agency would be quite primitive at first. We envision that the first stages would be somewhat like a “proto-consciousness”, or a primitive form of it - not unlike what a plant or insect has. So there would be plenty of opportunity for public discussion on ethics, which we’d welcome.
I really appreciated your framing around non-algorithmic behavior and the challenge of testing for real agency. I’ve been working on a parallel question from a different angle—not how to generate consciousness, but how to shape moral discernment if and when it begins to emerge.
It’s strange terrain—trying to think not just about behavior, but about the weight of choice. Curious to follow where your work leads.