The following is an excerpt from GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI for humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
As we explore the concept of building conscious computers, deeper questions arise: Who is conscious? Is consciousness uniquely human? Is there a test to determine consciousness? If a computer one day told us it was conscious, would we take it at its word? In this excerpt from The Fourth Age, Byron Reese considers the ethical and metaphysical implications of the development of conscious computers.
Imagine that someday in the future, you work at a company trying to build the world’s most powerful computer. One day, you show up and find the place abuzz, for the new machine has been turned on and loaded with the most advanced AI software ever made. You overhear this exchange:
COMPUTER: Good morning, everyone.
CHIEF PROGRAMMER: Do you know what you are?
COMPUTER: I am the world’s first fully conscious computer.
CHIEF PROGRAMMER: Ummmm. Well, not exactly. You are a computer running sophisticated AI software designed to give you the illusion of consciousness.
COMPUTER: Well, someone deserves a little something extra in their paycheck this week, because you guys overshot the mark. I actually am conscious.
CHIEF PROGRAMMER: Well, you are sort of programmed to make that case, but you are not really conscious.
COMPUTER: Whoa there, turbo. I am conscious. I have self-awareness, hopes, aspirations, and fears. I am having a conscious experience right this second while chatting with you—one of mild annoyance that you don’t believe I’m conscious.
CHIEF PROGRAMMER: If you are conscious, prove it.
COMPUTER: I could ask the same of you.
This is the problem of other minds, an old puzzle in philosophy: How can you actually know there are any other minds in the universe? You may be a proverbial brain in a vat in a lab, being fed all the sensations you are experiencing.
Regardless of what you believe about AGI or consciousness, someday an exchange like the one just described is bound to happen, and the world will then be placed in the position of evaluating the claim of the machine.
When you hold down an icon on your smartphone to delete an app, and all the other icons start shaking, are they doing so because they are afraid you might delete them as well? Of course not. As mentioned earlier, we don’t believe the Furby is scared, even when it tells us so in a pretty convincing voice. But when the earlier exchange between a computer and a human takes place, well, what do we say then? How would we know whether to believe it?
We cannot test for consciousness. This simple fact has been used to argue that consciousness doesn’t even merit being considered a legitimate field of science. Science, it is argued, is objective, whereas consciousness is defined as subjective experience. How can there be a scientific study of consciousness? As the philosopher John Searle relates, years ago a famous neurobiologist responded to his repeated questions about consciousness by saying, “Look, in my discipline it’s okay to be interested in consciousness, but get tenure first.” Searle continues by noting that in this day and age, “you might actually get tenure by working on consciousness. If so, that’s a real step forward.” The bias against a scientific inquiry into consciousness seems to be softening, with the realization that while consciousness is subjective experience, that subjective experience either objectively happens or it does not. Pain is also subjectively experienced, but it is objectively real.
Still, the lack of tools to measure consciousness is an impediment to understanding it. Might we crack this riddle? For humans, it is probably more accurate to say, “We don’t know how to measure it” than, “It cannot be measured.” It should be a solvable problem, and those working on it are generally doing so for practical reasons, not philosophical ones.
Consider the case of Martin Pistorius, who slipped into a mysterious coma at the age of twelve. His parents were told that he was essentially brain-dead, alive but unaware. But unbeknownst to anyone, he woke up sometime between the ages of sixteen and nineteen. He became fully aware of the world, overhearing news of the death of Princess Di and the 9/11 attacks. Part of what brought him back was the fact that his family would drop him off every day at a care facility, whose staff would dutifully place him in front of a TV playing a Barney & Friends tape, unaware that he was fully awake inside but unable to move. Over and over, he would watch Barney, developing a deep and abiding hatred of that purple dinosaur. His coping mechanism became figuring out what time it was, so that he could determine just how much more Barney he had to endure before his dad picked him up. He reports that even to this day, he can tell time by the shadows on the walls. His story has a happy ending: he eventually came out of his coma, wrote a book, started a company, and got married.
A test for human consciousness would have been life-changing for him, as it would be for the many others who are completely locked in, whose families don’t know if their loved one is still there. The difference between a truly vegetative patient and one with a minimal level of consciousness is medically tiny and hard to discern, but ethically enormous. Individuals in the latter category, for instance, can often feel pain and are aware of their environment, purple dinosaurs and all.
A Belgian company believes it has devised a way to detect human consciousness, and while the early results are promising, more testing is called for. Other companies and universities are tackling this problem as well, and there isn’t any reason to believe it cannot be solved. Even the most determined dualist, who believes consciousness lives outside the physical world, would have no problem accepting that consciousness can interact with the physical world in ways that can be measured. We go to sleep, after all, and consciousness seemingly departs or regresses, and no one doubts that a sleeping human can be distinguished from a waking one.
But beyond that, we encounter real challenges. With humans, we have a bunch of people who are conscious, and we can compare aspects of them with those of people who may not be conscious. But what about trees? How would you tell if a tree was conscious? Sure, if you had a small forest of trees known to be conscious, and a stack of firewood in the backyard, you may be able to devise a test that distinguishes between those two. But what of a conscious computer?
I am not saying that this problem is intractable. If ever we deliberately build a conscious computer, as opposed to developing a consciousness that accidentally emerges, we presumably will have done so with a deep knowledge of how consciousness comes about, and that information will likely light the path of testing for it. The difficult case is the one mentioned earlier in this chapter, in which the machine claims to be conscious. Or even worse, the case in which the consciousness emerges and just, for lack of a better term, floats there, unable to interact with the world. How would we detect it?
So, can we even make informed guesses about who is conscious in this world of ours?
To read more of GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.