In 1950 the journal Mind featured a curious proposal for determining whether a computer could think. Submitted by code-breaker and computing pioneer Alan Turing, the idea was that an examiner would pose questions to a pair of test-takers located in another room, with the answers from both respondents, one human and one machine, relayed back as typed text. If the examiner failed to determine which of the two was cybernetic, the computer was deemed intelligent. Turing predicted that by the end of the 20th century computers would be able to fool 30% of interrogators after five minutes of questioning. Man-made mind was on its way.
The Turing test featured prominently in the movie Blade Runner, in which a cop named Deckard has to pose more than a hundred questions to a young woman before discovering she's a computer-operated “replicant.” Even then, what gives “her” away is a glitch in the pupil rather than any failure to convincingly imitate intelligent thought. Turing would have been proud, as is the manufacturer, who brushes aside Deckard's unease at discovering that the replicant doesn't know she isn't human. “Commerce is our goal here at Tyrell. More human than human is our motto.”
The real world has proven more obstinate. Distinguishing computers from people turns out to be so easy that even computers can do it, as demonstrated by CAPTCHA (the “Completely Automated Public Turing test to tell Computers and Humans Apart”), an automated visual-recognition test that screens out “bots” seeking access to websites. Killer bots participating in a “deathmatch” videogame have fooled people into thinking they're up against other humans, but then no one is firing philosophical questions at the bots.
Of course, even if a computer does someday pass the Turing test, a convincing imitation of intelligence is still just that — an imitation. To believe the computer is actually intelligent is to overlook the distinction between image and reality, in other words, to fail the sanity test.
Nonetheless the dream of digital thoughtfulness lives on.
“What happens when machines become more intelligent than humans?” So wonders philosopher David Chalmers, apparently unaware that intelligence is precisely where thinking ceases to be mechanical. Undeterred by irksome reality-based considerations, Chalmers not only takes the concept of artificial intelligence at face value but believes AI will evolve — much like the actual intelligence it mimics — into something far beyond the relatively simple AI systems designed by human engineers. Writing in the Journal of Consciousness Studies, Chalmers argues in favor of a scenario proposed in 1965 by statistician I.J. Good:
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind.”
“The key idea,” writes Chalmers, “is that a machine that is more intelligent than humans will be better than humans at designing machines.” It's only a matter of time before an electronic brain outsmarts its biological forerunner, uncovering design principles we humans could never have found. Once next-gen AI arrives, it will devise even better ways to craft computer circuitry, resulting in a yet more sophisticated AI and so on and so on. The genie is out of the bottle.
In keeping with exponential gains in processing speed over the past few decades, each generation of AI is expected to improve on its design faster than previous generations, reducing intergenerational downtime from years to minutes to microseconds until finally we come to the fabled Singularity, a blast of infinite intelligence yielding a new cosmos composed of bits in place of atoms.
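The arithmetic behind this picture is a simple geometric series. A toy sketch in Python makes the convergence explicit; the initial design time and the improvement ratio below are invented for illustration, not figures taken from Chalmers or Kurzweil.

```python
# Toy model of the "shrinking design time" story described above.
# Assumption (illustrative only): each AI generation redesigns its
# successor in a fixed fraction r of the time the last one needed.
def elapsed_time(first_design_years, r, generations):
    """Sum the shrinking design times over a number of generations."""
    total = 0.0
    step = first_design_years
    for _ in range(generations):
        total += step
        step *= r
    return total

# With r < 1 the series t0 + t0*r + t0*r**2 + ... converges to t0 / (1 - r),
# so even infinitely many generations fit inside a finite span of time.
print(elapsed_time(10.0, 0.5, 50))   # approaches 20 years in total
print(10.0 / (1 - 0.5))              # closed-form limit: 20.0
```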
Like most believers in the AI apocalypse, Chalmers never gets around to defining intelligence, even claiming he can make his case without invoking a “canonical notion of intelligence.” Like Ray Kurzweil, author of The Singularity Is Near, Chalmers treats intelligence as if it's interchangeable with the step-by-step algorithmic operations of a computer.
An algorithm is a sequence of instructions for reaching a preplanned conclusion regardless of starting point or obstacles along the way. Let's say you're driving a car to the north end of town. If you're going east the onboard computer tells you to take a left. Going west it tells you to take a right. If you can't take a right, it tells you to turn around, go back and then take a left.
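The driving example boils down to a handful of if-then rules. A minimal Python sketch of that rule set, with hypothetical inputs standing in for whatever the onboard computer actually senses, shows how mechanical the procedure is.

```python
# A bare if-then routing rule for "get to the north end of town",
# mirroring the example above. The inputs are hypothetical stand-ins.
def next_instruction(heading, can_turn_right=True):
    if heading == "north":
        return "continue straight"
    if heading == "east":
        return "take a left"          # a left from eastbound points you north
    if heading == "west":
        if can_turn_right:
            return "take a right"     # a right from westbound points you north
        return "turn around, go back, then take a left"
    return "turn around"              # heading south: reverse course

print(next_instruction("east"))          # take a left
print(next_instruction("west", False))   # turn around, go back, then take a left
```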
The ability to reason in a step-by-step manner, though obviously necessary for intelligence, is far from sufficient. Notably absent from computers are the human traits of understanding and imagination.
Understanding is not a programmable task. Consider a primitive cybernetic system such as James Watt's steam engine and its “governor,” which uses an algorithmic “if-then” process to stabilize the engine's output. If the engine runs too fast, the spinning governor chokes off the steam and the engine slows down, just as the valve reopens whenever the speed falls below a certain level. At no point does the engine reflect on its varying predicaments and act accordingly. By design its corrections are entirely mechanical. A computer is only a more complex version of this. Though humans can interpret the results, the computation itself is no more thoughtful than a hammer hitting a nail.
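A threshold controller of the governor's sort fits in a few lines. In the sketch below the set points and step sizes are invented for illustration; the point is that each “correction” is a blind comparison, with no model of the predicament anywhere in the loop.

```python
# A toy "if-then" governor: nudge the throttle whenever the measured
# speed drifts outside a dead band. All numbers are illustrative only.
def govern(speed_rpm, throttle, low=95.0, high=105.0, step=0.05):
    """Return an adjusted throttle setting clamped to the range [0, 1]."""
    if speed_rpm > high:        # running too fast: admit less steam
        throttle -= step
    elif speed_rpm < low:       # running too slow: admit more steam
        throttle += step
    return min(1.0, max(0.0, throttle))

throttle = 0.5
for speed in (120, 112, 104, 98, 90):   # a made-up speed trace
    throttle = govern(speed, throttle)
    print(speed, round(throttle, 2))
```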
AI is artificial not just in the sense of artifact but fake. Without comprehension computers can only simulate intelligence. If human intelligence is wholly computable, then we too are fakes, incapable of genuine understanding and subject to digitization and downloading into appropriately configured circuitry. The meaning of meaning is that we are not computable.
Because every operation in a computer is binary, the arithmetic circuitry turns out to be simpler when zero is split into a signed pair, a positive zero and a negative zero, as it is in standard floating-point hardware. Yet treating zero in these terms makes no sense and could only interfere with human calculations. It's no problem for computers because comprehension plays no role in their operations in the first place.
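The signed pair in question is the +0.0 and -0.0 of IEEE 754 floating point, which mainstream languages inherit. A short Python session shows the two zeroes comparing as equal while remaining perfectly distinguishable to the machine.

```python
import math

# IEEE 754 floating point defines two zeroes, +0.0 and -0.0.
pos, neg = 0.0, -0.0

print(pos == neg)                  # True: they compare as equal...
print(repr(pos), repr(neg))        # '0.0' '-0.0': ...yet print differently
print(math.copysign(1.0, neg))     # -1.0: the sign bit survives
print(math.atan2(0.0, pos))        # 0.0
print(math.atan2(0.0, neg))        # 3.141592653589793: the signs steer results
```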
Not only understanding but imagination eludes the circuitry of a machine. Among the greatest achievements of mathematics is the complex number, which combines a real number with an imaginary one. The imaginary unit, denoted by the symbol i, equals the square root of -1, an apparent absurdity since no real number multiplied by itself yields a negative. Though illogical, i plays a critical role in physics, including the analysis of self-organized “complex” systems as well as quantum systems. By stepping outside the bounds of actuality, mathematicians returned with a tool for investigating some of the most perplexing aspects of the real world.
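Most programming languages ship the “absurdity” as a built-in type. In Python the imaginary unit is written 1j, and a couple of lines confirm both its defining property, i times i equals -1, and Euler's identity.

```python
import cmath

i = 1j                              # Python's spelling of the imaginary unit
print(i * i)                        # (-1+0j): i squared is -1
print(cmath.sqrt(-1))               # 1j: the square root of -1 exists here
print(cmath.exp(i * cmath.pi))      # (-1+1.2e-16j): Euler's identity, up to rounding
```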
A warped mirror image of understanding, imagination has proven essential to the scientific project. Einstein's discovery of special relativity was no series of if-then statements marching mechanically to the correct conclusion; it began with a fantasy of riding alongside a wave of light. How can a computer, which can't even comprehend reality, break through the firewall to unreality?
Computer architecture makes no room for creative leaps. Instead of emerging all at once, discoveries are built up piecemeal, like trying to reach the other side of the universe in a rocket. As Einstein might have said, no matter how fast you go, you'll never get there. Creative intelligence is more like a wormhole: its interior is unreal, at least in relation to ordinary space, and that very unreality is what lets it pin a piece of local space to a distant cosmic locale.
But Chalmers has an ace up his sleeve. Surely computers would be conscious and capable of creative leaps if we built them to operate just like the human brain. “If a system in our world duplicates not only our outputs but our internal computational structure, then it will duplicate the important internal aspects of mentality too.” He concedes that “we have no idea how a nonbiological system, such as a silicon computational system, could be conscious,” but counters that “we also have no idea how a biological system, such as a neural system, could be conscious.”
Perhaps we don't know how brains could be conscious because they're not. The only thing we know for sure about the brain is that it facilitates the consciousness of the organism as a whole. If brains themselves are conscious, they're not telling anyone.
Central to the fantasy of the Singularity is the proposal that the brain, like a computer, is a machine. “The weight of evidence to date,” writes Chalmers, “suggests that the brain is mechanical.” Yet the evidence for mechanical operations within the brain in no way suggests that overall brain function is mechanical. Neural mechanisms operate only in the larger context of a cycling natural system. Brains resemble chaotic weather systems more than predictable machines.
To identify mental properties in the brain, we must decode it. We must literally learn the brain's language and read out the contents of a mind like data from a hard drive. We must identify desires and meanings in patterns of neural transmission. So long as we depend on correlating brain activity with subjective reports of mental activity, we cannot reduce mind to brain or brain to machine.
Chalmers evades this problem by assuming the false dichotomy of materialism vs. dualism: the conscious self is either reducible to the brain or autonomous and distinct. Given the undivided wholeness of the organism, the autonomous mind is easily expelled, leaving only gray matter.
But what if “brain” and “consciousness” are different aspects of the same thing? Rather than being inherent to the thing itself, its dual aspects arise from our differing perspectives, like viewing a coin from the “heads” or “tails” side. The mind appears alternately as brain or conscious self depending on our perspective.
Our instincts and languages carry a built-in bias toward the visual and therefore the spatial. The very idea of perspective is spatial. But when we try to pin down the mind to a particular location, all we get is nervous tissue. To flip the coin, to find ourselves, we must take the “point of view” of time. Only in the context of past neural activity does current activity constitute thought. Only in the context of memories and goals does the present take on meaning. Our substance is temporal as much as molecular. Life is the translation of past, present and future into memory, awareness and will.
When it comes to a computer, however, there is no other side of the coin, only a spatial set of material-mechanical components. Lacking its own memory-infused awareness, there's no perspective from which the computer resolves into mind. A computer is a coin which, when flipped over, disappears.
“Metaphor” is Greek for carry beyond. A metaphor represents something real, such as a computer, with something unreal, such as a coin with only one side. Being illogical, metaphors don't compute. This is problematic, especially since metaphor is implicit in all language. Every word, whether spoken, written or gestured, is at once tangible and abstract. Not only imagination but language itself depends on the ability to carry the tangible beyond the boundary of the real, an impossibility for a “one-sided” object.
Despite the chasm separating us, we and our electronic offspring share a propensity for generality. According to Belgian theorist Jos Verhulst, the essence of human nature is retention of the unspecialized condition characteristic of the juvenile, enabling the flexibility required for continued adaptation. At each stage of primate evolution, our ancestors were less specialized than their relatives, whether australopithecines compared to chimpanzees, Homo ergaster compared to Homo habilis or Cro-Magnon compared to Neanderthal. To be human is to keep to the trunk of the evolutionary tree, refraining from stepping out on a branch of specialized function. As ethologist Konrad Lorenz put it, the “true man” is defined by “always remaining in a state of development.”
The defining feature of a computer is the ability to simulate any mechanical process, to reach into any branch of specialized function without sacrificing its global flexibility. It wasn't called a Universal Turing Machine for nothing. For this reason AI seems at first glance the ideal vehicle of post-human evolution. However, because machines cannot imagine or even comprehend, their creativity is limited to elaborations on whatever computational potential they start with. No matter how great that potential is, successive iterations amount to nothing more than a branch of a branch of a branch of a branch, ending not with a big bang of intelligence but the most perfectly refined twig in the universe.
Since evolution is a natural process, we ought to consider that nature's logic differs sharply from the version we installed in our minds and later our books and circuit boards. In contrast to logic 2.0, the logic of nature seems paradoxical: order comes from chaos; muscle increases with use; mind is both chemical and meaningful, etc. In the present context the key paradox is the “self-organized” complex system.
In the Newtonian dream, both subsequent and prior conditions can always be deduced from current conditions. But the real world includes systems composed of vast numbers of molecules whose behavior doesn't yield to the predictive (or retrodictive) power of Newton's laws of motion. The behavior of a gas involves far too many variables to be computed on the basis of interactions between individual molecules. Over time a thermodynamic system obscures its origins, erasing the mechanical “memory” characteristic of a Newtonian system.
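None of this requires exotic physics; even a one-line deterministic rule can defeat practical prediction and retrodiction. The sketch below uses the logistic map as a generic stand-in, an illustration of sensitive dependence rather than of the gas dynamics described above: two starting points differing by one part in a billion soon disagree completely, so the present state carries no usable record of which past produced it.

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n). Used here only as an illustration.
def trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.300000000)
b = trajectory(0.300000001)          # differs by one part in a billion
for n in (0, 10, 20, 30, 40):
    print(n, round(a[n], 6), round(b[n], 6))
# After a few dozen steps the two runs bear no resemblance to each other.
```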
Under certain conditions, however, natural processes can abruptly switch into coherence, like the systematic transfer of heat during convection. Stupendously improbable from a thermodynamic viewpoint, convection requires only a temperature differential between one end of a fluid and the other, and the system spontaneously self-organizes to even out the temperature, much like a tornado that pops into being to shuttle warm air at ground level into the cooler upper atmosphere.
Though ordinary thermodynamic systems destroy the mechanical retrievability of past conditions, self-organized thermodynamic systems bring the past to bear in a different way. Taylor vortices, for example, which serve to smooth out pressure differentials in rotating fluids, defy prediction on the basis of current conditions alone. Only when their history is taken into account are they subject to accurate prediction. What the vortex does now depends on what it did before. Like organisms, Taylor vortices carry their past with them. Private memory, in turn, is the basis of interiority and self-existence, the flipside of that dynamically ordered cycling system we call the brain.
According to human techno-logic, memory is the storage of data subject to mechanical retrieval. By contrast the seeming illogic of nature's memory is to obscure the past in a cloud of indeterminate molecules out of which emerges a self-organized complex system informed by its own history. Our logic is to build complex systems, while nature's logic is to let them build themselves. Evolution is a function of nature's logic, nature's memory and nature's complexity. The artificial analogue of these traits, embodied by the computer, is irrelevant to the future of intelligence.
Far from an explosion of infinite brilliance, the Singularity is a chain reaction of delusion: if nature is fundamentally mechanical, organisms are machines; since machines don't self-exist, neither do we; if we don't exist intrinsically, intelligence has no basis in our will to reason but follows blindly from the mechanical functions of a data-manipulating nervous system; if we can duplicate intelligence by manufacturing an electronic device smarter than we are, it will design a machine smarter than itself and so on ad infinitum. To arrive at the Singularity, you have to think like a machine, with one delusion triggering the next in a fully automated sequence.
Machines may not be able to think, but thought readily becomes machine-like. According to William Blum, author of the incisive history of US foreign policy, Killing Hope, anticommunism began in the 1920s as carefully crafted propaganda designed to protect elite interests from popular uprising. Strangely enough, by the 50s the disseminators had bought into their own lie. Anticommunism mutated from “cynical exercise” to “moral imperative.” Injections of red scares into the body politic were obsolete, as the propaganda was propagating itself. No need to conclude on the basis of evidence that communism was an international conspiracy to destroy American freedom when this very belief was insidiously colonizing our minds.
The people who organized the war against Vietnam were known as “the best and the brightest,” which only goes to show that being smart is no defense against autodelusion. Whether you're a teenager mistaking amplified celebrity worship for music or an industrial farmer confusing chemical-infused clay with soil, once a lie picks up enough momentum it can steamroll right over you. Ultimately the spell is cast not by the liar but the lie itself.
Locked into an ever more specialized high-tech society, we're forgetting how to be human, how to perceive and think and evolve. With the severing of our umbilical link to Mother Earth, with the decline of craftsmanship, skilled labor and education for its own sake, it's no wonder each successive generation is quicker to fall prey to self-organized systems of delusion. Whereas proponents of infinite AI see it as our savior from deepening economic and environmental crises, in reality the Singularity is a warped mirror image of the end of the world.