We Are All Futurists Now, Part 3
For the previous installment of this series, about the coming revolution in robotics, I chose as my title a cheery takeoff on a show tune, “Anything You Can Do, iCan Do Better.”
If I were to follow the favorite clichés of technology writers, I would have to choose a title like “The Rise of the Machines” or joke nervously about a “robot apocalypse” in which the machines become our overlords and we are enslaved or killed or kept as brains in vats for some reason that is never particularly clear.
Yes, there are some aspects of this robotic technology that might seem unnerving, particularly its military application. The US military is now working on robot planes that will be able to land on aircraft carriers, and at least one weapons developer is building quadrotors with machine guns. (Don’t be fooled by the cheesy fake Russian accent in this last video. The host is an American weapons expert who performs his online videos in character.) This quadrotor, oddly, was produced not by an actual defense program, but by a maker of military video games that is including a next generation of combat robots in its simulations. All of this has one expert warning that if we’re concerned about the moral dilemmas of using drones, we had better start preparing for the issues we will face when we have whole robot armies. Especially since DARPA is now trying to build Skynet.
So if I were more prone to get overexcited about this sort of thing, I might declare that “Humanity’s War Against the Machines Starts Now” or describe an innocuous amusement park attraction as a step toward the “Robopocalypse,” or generally provide a lot of unpaid publicity for James Cameron.
No thank you.
These dark science-fiction fantasies about the future of robotics (and equivalent fantasies in the field of economics, which I will address in the next installment of this series) all come from a basic wrong assumption. The assumption is that we will keep increasing the power of our machines—and not want to use any of that power for ourselves. Yet the whole purpose of machines is to augment our own power, and we are already experimenting with ways to integrate robotics with our natural faculties to augment our physical and mental capabilities.
In short, we don’t need to fear the rise of the machines because we will be the machines.
We will meet the cyborgs, and they will be us.
This is no longer just a flight of science-fiction fantasy. It is based on innovations that are already happening.
A few people are playing around with these ideas in superficial ways, calling themselves “body hackers” and implanting themselves with magnets and RFID chips. This is all fairly low-grade stuff, but there are more serious ideas much farther along in development, including a science-fiction technology whose time is here: the robot exoskeleton.
I have sometimes vented my disappointment that we don’t yet have flying cars, which every piece of mid-20th-century science fiction just assumed we would obviously have by the year 2000. But robot exoskeletons might just make up for that omission.
I saw one or two news items about this a year or so ago, but now they are coming thick and fast. In Los Angeles, there is already a paraplegic woman walking around in a robot exoskeleton.
“To initiate walking it has a tilt sensor like a Segway. When she wants to walk she leans forward and changes the center of gravity,” Escallier said.
The ReWalk was invented by an Israeli engineer after he suffered an accident that left him quadriplegic. It uses computer technology and motion sensors to allow the person to walk again.
Hannigan said she is totally in control.
“My upper body communicates with the computer in the back. You can’t see it but I’m actually shifting a little bit left and right,” Hannigan said. “I can go up and down steps, simple curbs, walk up and down ramps…. Bionic woman—watch out!”
The field is growing rapidly.
Powered exoskeletons once looked like a technological dead end, like flying cars and hoverboards. It wasn’t that you couldn’t make one. It was that you couldn’t make it practical. Early attempts were absurdly bulky, inflexible, and needed too much electricity.
Those limitations haven’t gone away. But in the past 10 years, the state of the art has been advancing so fast that even Google can’t keep up. Ask the search engine, “How do exoskeletons work?” and the top result is an article from 2011 headlined, “How Exoskeletons Will Work.” As Woo can testify, the future tense is no longer necessary. The question now is, how widespread will they become—and what extraordinary powers will they give us?
The best detail in this story is that one of the companies working on exoskeleton technology calls itself Cyberdyne—the name of the company that develops the cyborg technology in The Terminator. So I guess we get some free publicity for James Cameron, after all.
Closely related is a revolution in bionic prosthetics. In Chicago, a man has used a bionic leg to climb the Sears Tower. (They call it the Willis Tower now, but we all know better.) An interview with amputee and bionics pioneer Hugh Herr explains how this technology has begun to mimic the natural human function of limbs—but will also, eventually, be able to go beyond what is possible for biological limbs.
Already, exoskeletons are being developed, not just to restore lost function, but to enhance normal function.
It takes a second to register, but the 40 kg of rice I just picked up like a human forklift truck suddenly seem as light as a feather. Thanks to the “muscle suit” Umehara slipped onto my back prior to the exercise, I feel completely empowered. Fixed at the hips and shoulders by a padded waistband and straps, and extending part-way down the side of my legs, the exoskeleton has an A-shaped aluminum frame and sleeves that rotate freely at elbow and shoulder joints.
It weighs 9.2 kg, but the burst of air that Umehara injected into four artificial muscles attached on the back of the frame makes both jacket and rice feel virtually weightless.
The muscle suit is one of a series of cybernetic exoskeletons developed by Hiroshi Kobayashi’s team at the Tokyo University of Science in Japan. Scheduled for commercial release early next year, the wearable robot takes two forms: one augmenting the arms and back that is aimed at areas of commerce where heavy lifting is required. The other, a lighter, 5 kg version, will target the nursing industry to assist in lifting people in and out of bed, for example.
There are still a lot of limitations to this technology, including power sources and the bulk and weight of pneumatics and electric motors. The Japanese system in this example uses a lightweight system of pneumatic-powered “artificial muscles,” and there are also experiments with a synthetic muscle made of nanotube fibers.
You will also notice that another challenge in developing bionic limbs and exoskeletons is the issue of control. How do you get the limb to respond quickly and naturally, as a part of your body would do?
The man who climbed the Sears Tower was using a bionic leg that is described as “neural-controlled.” The article is not more specific about what this means, but that connects to another new technology: using the brain to directly control robots.
While some bionic prosthetics are controlled by sensing muscle movements, including a quite good bionic hand, the next step is represented by a quadriplegic woman who is able to feed herself and perform other simple actions with a robot arm controlled by electrodes implanted in her skull, which respond to electrical activity in her brain. See a more in-depth description here. And then there is the man who controlled a robot arm remotely using only an electrode cap rather than a more invasive implant.
This sort of thing is, inevitably, called the “terminator arm.” Yet more free publicity for James Cameron.
All right, so what’s the next step after that? In addition to your brain controlling a robot arm, what if the robot arm could sense what it touches and communicate that information back to your brain? Here’s one experiment:
[M]onkeys used a joystick to control a virtual “avatar” (a monkey arm and hand) on a computer screen, and were encouraged to use the avatar to grab objects on the screen. The virtual objects had textures, and this was conveyed using stimulation through microwire arrays implanted in a part of the brain’s cortex responsible for sensing touch. The monkeys learned to hold the avatar’s hand over objects with a particular texture—conveyed by the frequency of stimulation—in order to be rewarded with food.
In another experiment, the monkeys received the same tactile feedback but controlled the virtual hand using just their thoughts, via microwire arrays implanted in the motor cortex. Although their performance on the task was less accurate, the monkeys improved over time.
Nicolelis says the successful use of a “brain-machine-brain interface” demonstrates that the processes of sensing and responding to tactile sensations can be combined. “We are decoding motor intentions and tactile messages simultaneously,” he says. “That’s never been done before.” Although the stimulation the monkeys receive is artificial, he says, they seem to learn to associate it with tactile information.
The bionic hand that can feel is already here.
The wiring of his new bionic hand will be connected to the patient’s nervous system with the hope that the man will be able to control the movements of the hand as well as receive touch signals from the hand’s skin sensors.
Dr. Micera said that the hand will be attached directly to the patient’s nervous system via electrodes clipped onto two of the arm’s main nerves, the median and the ulnar nerves.
This should allow the man to control the hand by his thoughts, as well as receive sensory signals to his brain from the hand’s sensors. It will effectively provide a fast, bidirectional flow of information between the man’s nervous system and the prosthetic hand.
Some new robotic prosthetics are devoted entirely to replacing lost sensory function, such as a new and improved bionic eye which seems to combine electronics and genetic engineering, providing electrical stimulation to genetically altered retina cells implanted in a damaged eye. Another version uses a digital chip inserted in the eye as an artificial retina that stimulates the eye’s nerve cells.
Obviously, at the center of all of these developments, as the key to our future as a race of cyborgs, is the development of a brain-machine interface.
An overview of this growing field describes the implications.
So far the focus has been on medical applications—restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.
Does that sound like science fiction? Actually, the thought-controlled drone is already here.
More fundamentally, key research has already proven the basic concepts. The item that originally caught my attention was from a year and a half ago, in September of 2011, when Israeli researchers built an artificial rat cerebellum, the part of the brain that coordinates movement.
The team’s synthetic cerebellum is more or less a simple microchip, but can receive sensory input from the brain stem, interpret that nerve input, and send the appropriate signal to a different region of the brain stem to initiate the appropriate movement. Right now it is only capable of dealing with the most basic stimulus-response sequence, but the very fact that researchers can do such a thing marks a pretty remarkable leap forward.
So there you have the two key steps: being able to take signals from the brain to a computer, and being able to send them back from the computer to the brain. This demonstrates that the basic mechanisms are possible. The history of technology shows that from here on out, it is mostly a matter of devoting manpower and capital to make the interface progressively more complex and sophisticated. Especially when the folks at Intel—the same people who brought us Moore’s Law—are now working on it and want to have chips that can be implanted in the brain by 2020. That gives a whole new meaning to the catchphrase “Intel Inside.”
To understand the possibilities of this technology, you have to grasp the role of “neural plasticity”—the ability of the brain to adapt and rewire and reprogram itself. Consider an experiment on tadpoles with ectopic eyes. When embryonic eye tissue is implanted in a tadpole’s body far away from the brain, it sends out a network of neurons as it grows, connects itself to the nervous system in the spine or the stomach, and then begins sending sensory information to the brain, so that the tadpole now “sees” out of eyes it was never intended to have.
To see where this leads, look at an example that comes a little closer to home: another experiment in which rats were able to sense infrared light through an electronic sensor wired to microscopic electrodes in their brains.
The researchers say that, in theory at least, a human with a damaged visual cortex might be able to regain sight through a device implanted in another part of the brain.
Lead author Miguel Nicolelis said this was the first time a brain-machine interface has augmented a sense in adult animals. The experiment also shows that a new sensory input can be interpreted by a region of the brain that normally does something else (without having to “hijack” the function of that brain region)….
His colleague Eric Thomson commented: “The philosophy of the field of brain-machine interfaces has until now been to attempt to restore a motor function lost to lesion or damage of the central nervous system. This is the first paper in which a neuroprosthetic device was used to augment function—literally enabling a normal animal to acquire a sixth sense.”
Notice the part about how rats can acquire a “sixth sense” through one part of their brain without hijacking the normal function of that region. This is important, because it implies that you can go on seeing and hearing and feeling with the parts of the brain normally used for those purposes—while also being able to “see” and “feel” and “hear” an additional, artificial input. So at some point, using technology that is already being proven in principle, it might be possible to implant electrodes and microchips in your brain that will give you a third and fourth eye, so to speak, with sufficiently high resolution to allow you to see in non-visible spectrums of light or to access a visual display with the weather and other information—an internal form of Google Glass. And in addition to receiving signals from the outside, you might be able to send signals back out from the inside. You might be able, not just to “hear” a telephone conversation from inside the hearing centers of the brain, but to “talk” back without moving your lips. What this implies for the stories above, about robot exoskeletons and bionic prosthetics, is that you might be able to control robotic extensions of your body through a direct act of will, the same way you cause your biological limbs to move.
Let’s draw out just one radical implication of such a brain-machine interface: it may be inaccurate to refer to it as an “interface” at all.
There is a debate currently raging among software designers about “skeuomorphic design,” in which the digital version of a tool is designed to look and act like its old analog version. The iPhone is a leading example: the compass looks like an old-fashioned compass, the notepad looks like old-fashioned lined yellow paper, the buttons on the calculator have three-dimensional shadowing to make them look like the buttons on the old Texas Instruments model I used to carry in high school. And so on. By contrast, some designers champion “flat design,” as in the new Microsoft operating system where every application is designated by nothing more than a flat square or rectangle of color.
But this debate may be rendered obsolete, because there is a much stronger argument that the best interface is no interface.
Several car companies have recently created smartphone apps that allow drivers to unlock their car doors. Generally, the unlocking feature plays out like this:
1. A driver approaches her car.
2. Takes her smartphone out of her purse.
3. Turns her phone on.
4. Slides to unlock her phone.
5. Enters her passcode into her phone.
6. Swipes through a sea of icons, trying to find the app.
7. Taps the desired app icon.
8. Waits for the app to load.
9. Looks at the app, and tries to figure out (or remember) how it works.
10. Makes a best guess about which menu item to hit to unlock doors and taps that item.
11. Taps a button to unlock the doors.
12. The car doors unlock.
13. She opens her car door.
Thirteen steps later, she can enter her car.
The app forces the driver to use her phone. She has to learn a new interface. And the experience is designed around the flow of the computer, not the flow of a person.
If we eliminate the UI [user interface], we’re left with only three, natural steps:
1. A driver approaches her car.
2. The car doors unlock.
3. She opens her car door.
Anything beyond these three steps should be frowned upon.
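The three-step flow can be sketched in a few lines of code: the car simply watches for the driver’s phone and unlocks on its own. This is a hypothetical illustration, not any carmaker’s actual protocol—the signal threshold, the `should_unlock` helper, and the sample readings are all assumptions made up for the sketch.

```python
# A minimal sketch of "no interface" unlocking: the car monitors the signal
# strength (RSSI, in dBm) of the driver's known phone and unlocks when the
# phone has clearly come near. All names and thresholds are hypothetical.

UNLOCK_THRESHOLD_DBM = -60   # assumed: readings above this mean "phone is near"
CONFIRM_READINGS = 3         # require consecutive near readings to avoid flicker

def should_unlock(rssi_readings, threshold=UNLOCK_THRESHOLD_DBM,
                  confirm=CONFIRM_READINGS):
    """Return True once the phone has been near for `confirm` readings in a row."""
    streak = 0
    for rssi in rssi_readings:
        streak = streak + 1 if rssi >= threshold else 0
        if streak >= confirm:
            return True
    return False

# The driver walks toward the car: the signal strengthens and stays strong.
approaching = [-80, -75, -68, -59, -57, -55]
print(should_unlock(approaching))   # True

# Someone carrying a phone drives past: one strong blip is not enough.
drive_by = [-80, -58, -79, -82]
print(should_unlock(drive_by))      # False
```

The point of the sketch is the absence of any user-facing steps: the driver does nothing but approach, which is exactly the three-step flow above.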
The ideal is wherever possible to eliminate the user interface, so that you don’t focus on how to interact with your computer in order to do something, you just go do it. If this is the goal, a brain-machine interface is the ultimate tool. It raises the possibility that someday we will access information the same way we look out at the world: we don’t access an interface and tell it to open our eyelids, to focus our lenses, and to move our eyes around. We just open them up and look. It raises the possibility that we will control robots and prosthetics the same way we walk across the room: we don’t access an interface and tap on a series of icons to give instructions to our legs. We just walk, by a direct act of will.
If it seems like I’m getting carried away, let me introduce you to some people who really are getting carried away: the “transhumanists” who are predicting something they call “the singularity,” a merging of man and machine in which we are all going to have our brains transferred into immortal robots.
At that point, the Singularity holds, human beings and machines will so effortlessly and elegantly merge that poor health, the ravages of old age and even death itself will all be things of the past….
“We will transcend all of the limitations of our biology,” says Raymond Kurzweil, the inventor and businessman who is the Singularity’s most ubiquitous spokesman and boasts that he intends to live for hundreds of years and resurrect the dead, including his own father. “That is what it means to be human—to extend who we are.”
While I like that last sentiment, I’m afraid Kurzweil is being wildly over-optimistic and more than a bit hucksterish. At age 65, he is unlikely ever to see anything close to the kind of technology he talks about so confidently.
Here we have to keep in mind the difference between two versions of “futurism.” What I’ve been trying to stick to is the good kind: taking technology whose basic principles have already been proven and projecting what a few decades of rapid improvement might bring. The other kind of futurism is more a species of science fiction. It involves taking technology whose principles have not been proven yet and projecting that it will be developed by some deadline, which is inherently arbitrary since it assumes the discovery of the unknown. This brand of futurism may have some value, as science fiction does, but you have to be careful to distinguish it from a more straightforward projection of known science.
We know enough about the workings of the brain to think that we can build machines to interact with it and augment it in some ways. But we don’t yet know anywhere near enough about the brain to say we can build an electronic one, or that we know how to transfer over the existing connections and knowledge and personality from a biological brain into this hypothetical artificial brain. And all of that is aside from very real philosophical and scientific questions about the physical and biological basis of consciousness, of conceptual thinking, and of free will.
Some say the brain is not computable. I say: call me when you really think you’ve computed it, and then we’ll talk.
Tom Hartsfield at RealClearScience recently looked at the state of artificial intelligence and concluded that it follows the basic pattern of the science-fiction version of futurism: “It seems that AI is always predicted, by most experts, to be something like 15-25 years away from the present.” And as the years tick by, it always remains 15 to 25 years away. He provides the best answer I’ve heard to predictions that artificial intelligence is just around the corner: “I’m sorry Dave, I’m afraid I can’t do that.”
Since we’re already off in the realm of science fiction, here’s where I suppose we have to give a little bit of free publicity, not to James Cameron, but to Gene Roddenberry. Could the technology of brain-machine interfaces lead to such an invasion of our internal mental privacy that we are “assimilated” into a collective consciousness like Star Trek’s “Borg”? As with artificial intelligence, this depends on the assumption of a much more advanced and complex brain-machine interface than any of the developments that we’re talking about—and it also depends on some philosophical assumptions about whether a “collective brain” would be able to function, and whether there could be such a thing as actual, literal collective thinking.
It is an idea that can be posited in science fiction, but which is almost certainly impossible in reality, for all of the reasons artificial intelligence is almost certainly impossible: it posits conceptual thought without individual free will and without individual motivation or consequences for survival. A “collective consciousness,” if it were technologically possible, would kill thinking rather than collectivizing it—which pretty much sums up the history of attempts to collectivize human life. This sounds like a way of creating a group of glazed-eyed human zombies barely capable of tying their own shoelaces, much less conquering the galaxy.
So no, resistance is not futile.
I think this is a debate we can safely table for a future century. That said, I don’t want to downplay what can be achieved in this century by some of the work that calls itself “artificial intelligence.” Some of this includes advances in “deep learning,” which refers to very advanced, complex forms of pattern-recognition. So our machines will be getting “smarter” in the sense of being much more capable at advanced perceptual-level tasks and at anticipating our needs, making them much more useful and productive extensions of ourselves.
Remember that the context for all of this is not some science-fiction scenario set in the 24th century. The context is the very real, present-day advances we looked at in the first two installments of this series: the merging of information technology with manufacturing, transportation, and heavy industry, and the ongoing revolution in robotics.
This will all be connected with another revolution in the connection between man and machine. Robotic exoskeletons and bionic prosthetics and who knows what else will augment our physical capabilities, while brain-machine interfaces will increase our ability to effortlessly take in digital information and to direct the actions of our robot servants.
This will mean, contrary to a century of dystopian science fiction, that we will gain more control over our machines, not the other way around. The rise of the cyborgs will be the rise of us.