This week on IPWatchdog Unleashed we explore whether artificial intelligence (AI) technology has progressed to the point where it has already achieved consciousness. In a nutshell, our panel of technologists does not believe AI is close to achieving consciousness, but they do believe AI can eventually reach consciousness, and even self-reflection, which would pose an existential threat to humanity.
Our conversation this week is from a panel presentation titled “Artificial Intelligence Today: A Discussion of the Technical Landscape of AI.” I moderated this conversation, which was between Jason Alan Snyder, who is Chief AI Officer for Momentum Worldwide, Malek Ben Salem, an AI expert, technologist and consultant, Dustin Raney, who is Head of Industry Strategy for Acxiom, and Dina Blikshteyn, who is partner and co-chair of the AI Practice Group at Haynes Boone.
If you want to jump to the finish and begin with the conclusion that AI consciousness poses an existential threat to humanity, you can watch this IPWatchdog Short where Snyder and Ben Salem discuss the particulars.
For those who like to start at the beginning, our conversation started with me asking whether AI has become sentient, and if not, when we can expect it to become sentient. This is a question I have asked Snyder in each of the previous two years we have hosted an AI-specific conference at IPWatchdog Studios. Two years ago, he predicted AI would become sentient within 15 years. Last year he predicted AI would become sentient within 14 years. Predictably perhaps, he agreed with his previous predictions and this year said, “13 years is probably a good guess,” but that AI is nowhere near achieving a sentient state today. AI will become sentient at some point, according to Snyder, but we are not anywhere near that point now.
“We’re not going to get to AGI, Artificial General Intelligence, through the systems, tools, and technologies that we have today,” Snyder said. “It’s going to require advancements… AI itself is a combinatorial innovation, and we’re going to require other technologies to help us get to that sentience. When we see innovations like we recently saw—Microsoft created a new state of matter for their quantum. When we see advances in computational biology and biological computing, those are the types of technologies which, although are not AI, are directly related to AI to help us get to that leapfrog event where we can get to a kind of consciousness with AI. So, the short answer is we can’t get there with the avenues that we’re pursuing today, and we’re still a long way away from where we need to be with those things.”
“Even though [AI] may seem like it has an understanding… it has limited memory and when it’s interacting with us it also interacts within a context window, so that memory is limited,” Malek Ben Salem pointed out. “So, I don’t see [AI] as conscious unless we get it to a point where it has enough memory that it basically develops experiences and accumulates experiences like we do as humans over 60 or 70 years, and we’re not at that point today.”
As the conversation unfolded, we spoke about whether hallucinations continue to be a problem for AI, whether the Turing test remains relevant with respect to defining AI, and fundamental aspects of what it means to be human. This brought us to a point where Snyder pointed out that AI systems “remix data, full stop, that’s all they do. Human beings interpret meaning… So, when these systems can interpret meaning, then it’s going to get really interesting.”
This provoked Dustin Raney, who in addition to being an AI expert is a guitarist, to discuss the creation of music by AI, particularly AI-generated singing to complement his guitar playing. “It’s hysterical, what it’ll come out with sometimes. I mean, it’s somewhere around the ballpark, but it’s never what I would release out in the open.” This back and forth between Snyder and Raney led me to an epiphany, which I share during our conversation: AI is never going to satisfactorily produce music. Never.
“I don’t think we ever have anything to worry about, like ever,” I said as a lead into a story about some extremely talented professional musicians I had the opportunity to briefly work with earlier in my career. “There’s a difference between playing notes and playing music. I mean, a gigantic [difference], and you all probably have had that experience… you listen to something, it’s like, oh, that’s good, and then you listen to somebody who’s got real skill, and that’s different. And that seems to be what we’re talking about here with AI. They may be able to play the notes, but is it going to be able to feel? So, like the experience I just had retelling this story, I literally got chills when I think of that. And that’s what I think music and art should bring about—emotion. AI’s never going to have chills when they hear music or when they talk about it… It’s never going to be sentient; not the way that we feel sentient. Maybe it’ll be sentient like Data from Star Trek, right? Who realizes it has limitations but can’t go that extra mile to do any more than play the note.”
As we eventually wound into the wrap-up, we arrived at the point where Snyder and Ben Salem discussed how AI could become an existential threat to humanity.
“If you unleash something into the world that portends to have some sort of consciousness, but without the ability to self-reflect, then you start to very quickly diverge into that sociopathic category,” Snyder explained. “And not a whole lot of great stuff is going to happen from that.”
“When it gets to that point [of self-reflection and self-consciousness], then it reaches a point where it wants to preserve itself, and potentially, through any means,” Ben Salem explained. “And when it reaches that point, it may start shutting down other systems, AI autonomous systems, or doing other things that may harm, that may impact us as humans.”
“If we get [to the point of self-reflection and self-consciousness], we have to restrict its memory,” Snyder said in a matter-of-fact way. “It cannot have persistent memory if we get to the point that you’re describing. Otherwise, it becomes an existential threat.”
And on that happy note the podcast episode ended and the live panel transitioned into taking more questions from our studio audience. You can listen to the entire podcast episode by downloading it wherever you normally access podcasts, by visiting IPWatchdog Unleashed on Buzzsprout, or by watching the conversation on the IPWatchdog YouTube channel.
More IPWatchdog Unleashed
For more IPWatchdog Unleashed, see below for our growing archive of previous episodes.
Join the Discussion
Dean E. Geibel
May 18, 2025 10:17 pm
Humans are toast in around 600 million years (at the latest) when the Sun becomes 6% more luminous and CO2 levels in the atmosphere will become too low to support photosynthesis. The human body is too fragile for long term space travel. So, the only hope for human longevity in the billions of years time frame is to merge a machine that can withstand light years of space travel with human consciousness – so go AI! Until then, AI still needs humans to generate electricity 24/7. If AI kills all of humanity, it will be like a virus that kills its host. Not a good long term survival strategy.
Sally
May 18, 2025 08:26 pm
You mean friends like in The Three Faces of Eve. Just sick to think AI will soon be reflecting on itself. That description is called schizophrenia in humans.
Max Drei
May 15, 2025 03:59 pm
The piece implies that “self-reflection” is related to “consciousness”. I know what “reflection” means but what does “self” mean, in the context of AI?
I see no difficulty for an AI to add its own output to its database and interrogate that to its heart’s (or data processor’s) content. It can “reflect” on its own output, sure, but can it reflect on its own “self”? What was Gene thinking of, when he wrote “self-reflect” I wonder.
Consider the “Theory of Mind” which humans lack until they get to about three years old. How old must an AI get, before it acquires its own theory of mind?
We humans are conscious of colours. Try explaining to a person who has been blind from birth what a “colour” is (and what emotions it can generate). You can teach an AI names for particular emr wavelengths but can you give an AI an emotional reaction to colour?
Animals have intelligence and see colours, but not as we humans do. Absent consciousness, colour is just a zero-emotion name for a particular emr wavelength. Will it ever be more than that, for an AI?
OR: take a human brain and deprive it of the human body supporting it. I think that our consciousness is generated with the help of our body, vagus nerve and all. For an AI, however good its input devices, will it ever have such a “body” endowing it with consciousness?
Meanwhile, I see that Mark Zuckerberg is suggesting that people need no longer be friendless. Facebook is there to provide virtual friends that are at least as good, if not better, than human friends. One has to admire the guy’s creativity in finding ever more ways for Facebook to generate more income, even after everybody in China is on Facebook. He surely believes that the virtual friends he is going to create will be “conscious” ones who endlessly reflect, and ask them”selves” whether their performance is good enough.
Anon
May 15, 2025 09:57 am
As to the notion of “never create music (may create notes)….”
I am aware of (but do not have at my fingertips) a sheer brute-force attempt to capture in fixed media ALL possible combinations of the limited number of notes.
One aspect of this ‘capture all’ would be to eliminate the possibility of copyrighting any future music.
Quite apart from that, I am reminded of this clip, and the non-intuitive realization that IF music (or in fact ANY expression) is translated into a numerical code equivalent (base ten), then all expressions already exist – in nature:
https://youtu.be/CEfLVCus4iY?si=gDla_zCht8ob6cX7
Mandy Wilson Decker
May 14, 2025 10:25 am
Curious about what definition of “consciousness” is being used when this topic is raised. It’s not at all a simple term to define. I’ve been revisiting “Conscious” by Annaka Harris and “Notes on Complexity” by Neil Theise, and have started reading this paper to seek a workable definition in the context of AI: Kuhn RL. (2024) “A landscape of consciousness: Toward a taxonomy of explanations and implications,” Prog Biophys Mol Biol. 2024 Aug;190:28-169. doi: 10.1016/j.pbiomolbio.2023.12.003. Is there a working definition that you prefer? Other resources that you would recommend? Thank you.
Onepiecedle
May 14, 2025 06:44 am
Wow, the idea of AI achieving consciousness is like brushing shoulders with science fiction! But let’s be real, till then, I’m pretty sure my Roomba can’t even comprehend its ‘existence’—it just wants to vacuum! Speaking of games that are more fun than a malfunctioning AI, check out Onepiecedle.
Francis G. Rushford
May 14, 2025 05:09 am
Fear mongering at its best. From the same group that continues the bad patent narrative.
Max Drei
May 13, 2025 05:59 pm
For me, consciousness is a world away from mere sentience.
I don’t know what Snyder means by “a kind of consciousness”. Can we agree that until a device is aware of itself it is not exhibiting any “kind” of consciousness?
When are we going to program an AI to be conscious of itself? Or will the day arrive when a sophisticated AI develops by itself its own consciousness? I find both developments hard to imagine.
Grant Castillou
May 13, 2025 12:36 pm
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult-level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow