The philosopher Daniel Dennett, in his 2017 book From Bacteria to Bach and Back, makes the case that the human mind, our consciousness, languages, and cultures are the result of Darwinian evolution. At first, this did not seem to me a particularly controversial position, until I realized that he is talking about much more than the brain and its biological structure and processes. There is little serious debate these days that biology emerged through evolution, but Dennett is arguing that consciousness emerges from more than biology. Dennett defends and elaborates the earlier, controversial position posited by Richard Dawkins in his 1976 book The Selfish Gene, where Dawkins coined the term "memes" for cultural artifacts and ideas, drawing an analogy between their propagation in human culture and Darwinian evolution. From Dawkins:

I think that a new kind of replicator has recently emerged on this very planet. It is staring us in the face. It is still in its infancy, still drifting clumsily in its primeval soup, but already it is achieving evolutionary change at a rate which leaves the old gene panting far behind. ... The new soup is the soup of human culture. We need a name for the new replicator, a noun which conveys the idea of a unit of cultural transmission, or a unit of imitation.

That name is "meme."
Dawkins had quite a few detractors fighting his analogy with genetics, but Dennett argues that even some of the most fervent detractors espoused, using other words, essentially the same theory: that ideas, culture, and languages propagate via a neo-Darwinian natural selection, where, in Dennett's words, "fitness means procreative prowess." When I talk in Plato and the Nerd about coevolution of humans and technology (chapter 9), I am not talking about genetic mutation and evolutionary change over generations. I am talking about the much faster and much younger form of evolution, the cultural evolution that Dawkins and Dennett support. But going further than Dawkins or Dennett, I claim that technology itself should be viewed as a new class of replicators that procreate or die. These technospecies evolve symbiotically with human cognition and culture, the memetic species of Dawkins and Dennett. The technoreplicators are like memes, but a bit different, because they can "live" (for a while) independently of humans. An individual in a technospecies is, for example, a computer program or set of computer programs running in the cloud. It is even easier to talk about the evolution of technospecies than about the evolution of memes, because a computer program running in the cloud is more like a biological living thing than a meme is. Wikipedia, for example, has quite a few features of a living thing: it reacts to stimuli from its environment (electrical signals coming in over the network); it operates autonomously (for a while, at least, though it depends on us for long-term survival); it requires nourishment (electricity from the power grid); it self-repairs (vandalism detection, see chapter 1 of P & N); and it even dreams (background indexing to facilitate search, see chapter 5 of P & N). Memes have few if any of these features, so talking about Darwinian evolution of memes is more of an analogy than a direct application of Darwin's idea. What would Darwin have thought of memes?

As I point out in P & N, a technospecies individual such as Wikipedia facilitates the evolution of memes. Wikipedia makes me smarter. Its existence rewards us humans, who in turn nurture and develop it, making it "smarter." Moreover, we have become extremely dependent on technospecies, just as they are dependent on us; what would happen to humanity if our computerized banking systems suddenly failed? It's a classic symbiosis.

Dennett, however, falls short of identifying today's technology as part of his memetic evolution. In fact, he points to digital technology and software as a canonical example of an opposite kind of design from evolution, what he calls "top-down intelligent design." He argues that top-down intelligent design is less effective than evolution at producing complex behaviors, a position that will no doubt annoy those religious zealots who argue that the complexity of life proves the existence of God. Dennett points to an elevator controller, observing that every contingency, every reaction, every behavior of the system is imposed on it by a cognitive engineer, probably a nerd, who designed it. Indeed this is true, but as software systems go, an elevator controller is a rather simple one. For more complex digital and computational behaviors, like those in Wikipedia, a banking system, or a smartphone, it is hard to identify any cognitive being that performed anything resembling top-down intelligent design.
These systems evolved through the combination of many components, themselves similarly evolved, and through decades of iterative design revisions, with many failures along the way. Dennett argues that, unlike biological beings, the parts of a digital design have no yearnings for resources, nothing driving them forward, no purposes or reasons, and that they are just reactive automata. But they actually share the same procreative drive as other evolving species. Many alternative designs died along the way, and the ones that survived did so for Darwinian reasons: because they were able to propagate. Barring unsound teleology, propagation is also the closest that biological evolution gets to having a purpose. The propagation of technospecies is facilitated by the very concrete benefits they afford to the humans who use them, for example by providing those humans with income. This income facilitates propagation and further evolution of the software.

Viewing software as "top-down intelligent design" falls victim to the same proclivity that Dennett criticizes, a belief in the homunculus in the brain, a little man or committee that observes and drives the decision making of the human brain. In contrast, a coevolutionary stance says that software evolves in much the same way that bacteria evolve, through a goal-less coevolution with humans driven by its own Darwinian reward functions: survival and propagation. The tendency to see these designs as "top-down intelligent designs" is anthropocentric, a tendency that we, as humans, naturally find very difficult to avoid. We do not like seeing our mental cognitive processes as themselves cogs in a relentless, purposeless evolution. But that is exactly what they are.

Interestingly, Dennett does notice coevolution in simpler technologies than software. If you will forgive my three levels of indirection, Dennett quotes Rogers and Ehrlich (2008) quoting the French philosopher Alain ([1908] 1956) writing about fishing boats in Brittany:

Every boat is copied from another boat. ... Let's reason as follows in the manner of Darwin. It is clear that a very badly made boat will end up at the bottom after one or two voyages and thus never be copied. ... One could then say, with complete rigor, that it is the sea herself who fashions the boats, choosing those which function and destroying the others.

But Dennett fails to see that software and boats are not so different:

To take the obvious recent example of such a phenomenon, the Internet is a very complex and costly artifact, intelligently designed and built for a most practical or vital purpose: today's Internet is the direct descendant of the Arpanet, funded by ARPA (now DARPA, the Defense Advanced Research Projects Agency), created by the Pentagon in 1958 in response to the Russians beating the United States into space with its Sputnik satellite, and its purpose was to facilitate the R&D of military technology.
This is an oversimplification of the Internet. ARPA funded the development of a few of the protocols that underlie the Internet, but even these protocols emerged from many failed experiments with methods for getting computers to interact with one another (see chapter 6 of P & N). ARPA and DARPA had little to do with most of what we recognize as the Internet today, including web pages, search engines, YouTube, etc. Much of the Internet evolved from the highly competitive, entrepreneurial, dog-eat-dog ecosystem of Silicon Valley. Further emphasizing the top-down nature of technology, Dennett says,

All of this computer R&D has been top-down intelligent design, of course, with extensive analysis of the problem spaces, the acoustics, optics, and other relevant aspects of the physics involved, and guided by explicit applications of cost-benefit analysis, but it still has uncovered many of the same paths to good design blindly located by bottom-up Darwinian design over longer periods of time.
But "computer R&D" is actually much like culture, a mix of top-down intelligent design and evolution. Humans are as much facilitators as inventors, and as Dennett notes about culture, [S]ome of the marvels of culture can be attributed to the genius of their inventors, but much less than is commonly imagined ...The same is true of technology.
Although Dennett overstates the amount of top-down intelligent design in technospecies, there can be no doubt that human cognitive decision making strongly influences their evolution. At the hand of a human with a keyboard, software emerges that defines how a new species reacts to stimuli around it, and if those reactions are not beneficial to humans, the species very likely dies out. But this design is constructed in a context that has evolved. It uses a human-designed programming language that has survived a Darwinian evolution and encodes a way of thinking. It puts together pieces of software created and modified over years by others and codified in libraries of software components. The human is partly doing design and partly doing husbandry, "facilitating sex between software beings by recombining and mutating programs into new ones" (P & N, chapter 9). So it seems that what we have is a facilitated evolution, facilitated by elements of top-down intelligent design and conscious, deliberate husbandry.

Is facilitated evolution still evolution? Approximately 540 million years ago, a rapid burst of evolution called the Cambrian explosion produced a very large number of metazoan species over a relatively short period of about 20 million years. Andrew Parker, in his 2003 book In the Blink of an Eye, proposed the "Light Switch" theory, which posits that the evolution of eyes initiated the arms race that led to the explosion. Eyes made possible a facilitated evolution because they enabled predation. A predator facilitates the evolution of other species by killing many of them off, just as the sea kills boats. So facilitated evolution is still evolution. Now, in the Anthropocene era, humans have facilitated the emergence of many species through husbandry, including wheat, corn, chickens, cows, dogs, and cats. Humans designing software are facilitators in the current Googleian Explosion of technospecies. It is a proactive evolution, not just passive random mutation and death due to lack of fitness; it mixes husbandry, predation, and top-down intelligent design.

Predation plays a critical role in the evolution of technospecies. The success of Silicon Valley depends on the failure of startup companies as much as it depends on their success. Software competes for a limited resource, the attention and nurturing of humans that is required for the software to survive and propagate. Consider the browser wars, where many attempts at programs for viewing content on the Internet succumbed to competition in acts of deliberate and systematic killing. Having been caught by surprise by the emergence of the Web, Microsoft, starting around 1995, built Internet Explorer into all Windows systems, free of charge, in a deliberate attempt to kill off the competing browsers. Today, very few browser species survive.

How far can this coevolution go? Dennett observes that it is a biological fact that our brains are limited, and he talks about "mysterians" who conclude that many phenomena, including cognition itself, must remain mysterious simply because our brains are limited. He then counters the mysterian argument with this observation:

[H]uman brains have become equipped with add-ons, thinking tools by the thousands, that multiply our brains' cognitive powers by many orders of magnitude.
He cites language as a key such tool. But Wikipedia and Google are also spectacular multipliers, greatly amplifying the effectiveness of language. Google and Wikipedia are not themselves top-down intelligent designs. Although their evolution has most certainly been facilitated by various small acts of top-down intelligent design, as affordances they far exceed anything that any human I know could possibly have designed. They have coevolved with their human symbionts. Dennett observes that humans in collaboration vastly exceed the capabilities of any individual human. I argue that collaboration between humans and technology further multiplies this effect. Technology itself now occupies a niche in our (cultural) evolutionary ecosystem. It is still very primitive, much like the bacteria in our gut, which facilitate digestion. Technology facilitates thinking.

In arguing that thinking emerges from more than biology, Dennett is echoing a principle that I first heard from the historian and philosopher David Bates, who argues that thinking is not contained within the brain but rather is part of an interaction between the brain and its environment, most specifically its cultural context and its interaction with other human beings. Bates, in his 2013 paper "Cartesian Robotics," observes:

[T]hinking in the human sense of the term is always predicated on technological prostheses--that cannot be subsumed into thought but that makes it possible in the first place. ... Thinking is therefore a product of our external condition--social, technical, and neurophysiological--but it is not reducible to these conditions.
What Bates calls "technological prostheses" goes far beyond what I consider to be "technology" to include language, culture, writing systems, etc. But unlike Dennett, he also includes what I call "technology." Dennett goes further in debunking the mysterians. He points out the "systematic elusiveness of good examples of mysteries":

As soon as you frame a question that you claim we will never be able to answer, you set in motion the very process that might well prove you wrong: you raise a topic of investigation.
Our symbiotic technospecies make this process far more effective, since a question that is raised becomes instantaneously visible worldwide. Dennett takes on AI, and most particularly deep-learning systems, calling them "parasitic":

[D]eep learning (so far) discriminates but doesn't notice. That is, the flood of data that a system takes in does not have relevance for the system except as more "food" to "digest."
This limitation evaporates when these systems are viewed as symbiotic rather than parasitic. In Dennett's own words, "deep-learning machines are dependent on human understanding." Today, there is a lot of hand-wringing and angst about AI. Dennett raises one common question:

How concerned should we be that we are dumbing ourselves down by our growing reliance on intelligent machines?
Are we dumbing ourselves down? It doesn't look that way to me. In fact, Dennett notices a similar partnership between memes and the neurons in the brain:

There is not just coevolution between memes and genes; there is codependence between our minds' top-down reasoning abilities and the bottom-up uncomprehending talents of our animal brains.
From this perspective, AI should perhaps be viewed instead as IA, Intelligence Amplification. For the neurons in our brain, the flood of data they experience also has no "relevance for the system except as more 'food' to 'digest.'" An AI that requires a human to give semantics to its outputs (see P & N, chapter 9) is performing a function much like the neurons in our brain, which also, by themselves, have nothing like comprehension. It is an IA, not an AI. This does not mean we are out of danger. Far from it. Again, from Dennett:

The real danger, I think, is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will overestimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence.
Even more worrisome, IA in the hands of nefarious humans is a scary prospect indeed.

Dennett argues that top-down intelligent design is less effective at producing complex behaviors than evolution. He uses this observation to criticize "good old-fashioned artificial intelligence" (GOFAI), where a program designed top-down by a presumably intelligent programmer explicitly encodes knowledge about the world and uses rules and pattern matching to leverage that knowledge to react to stimuli. The ELIZA program (P & N, chapter 11) is an example of a GOFAI program. In contrast, machine-learning techniques, particularly deep learning, have a more explicit mix of evolution and top-down intelligent design. The structure of a deep learning program is designed, but its behavior evolves; the toy sketch below illustrates the difference. There have even been moderately successful experiments at Google where programs learn to write programs. These developments have created a bit of a panic about AI, where doomsayers predict that new technospecies will shed their symbiotic dependence on humans, making us superfluous. Dennett's final words are more optimistic:

[I]f our future follows the trajectory of our past--something that is partly in our control--our artificial intelligences will continue to be dependent on us even as we become more warily dependent on them.
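To make the GOFAI versus deep-learning contrast concrete, here is a minimal sketch in Python. It is entirely my own toy illustration, not code from Dennett, from P & N, from ELIZA, or from any real deep-learning system: the first fragment behaves only in ways its designer explicitly wrote down, while the second has a designed structure (a single weighted unit) whose behavior is shaped by feedback on examples rather than written by hand.

```python
import random
import re

# GOFAI flavor: every contingency is a rule imposed by the designer
# (hypothetical rules, loosely in the spirit of ELIZA).
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def rule_based_reply(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # even the fallback is a designed decision

# Learned flavor: the structure (one linear unit) is designed,
# but the weights that determine its behavior are selected by feedback.
examples = [([1.0, 0.1], 1), ([0.9, 0.3], 1), ([0.2, 1.0], 0), ([0.1, 0.8], 0)]
weights = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]
bias = 0.0

for _ in range(100):  # perceptron-style training loop
    for features, label in examples:
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if activation > 0 else 0
        error = label - prediction
        weights = [w + 0.1 * error * x for w, x in zip(weights, features)]
        bias += 0.1 * error

print(rule_based_reply("I feel tired"))  # behavior traceable to a written rule
print(weights, bias)  # behavior grown from examples, not written down
```

Scale the second fragment up by many orders of magnitude and no human can point to the line of code responsible for any particular behavior, which is the sense in which the behavior evolves rather than being designed.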
I share Dennett's optimism, but I also recognize that rapid coevolution, which is most certainly happening, is extremely dangerous to individuals. Rapid evolution requires a great deal of death. Many technospecies will go extinct, and so will memetospecies, including entire occupations such as those of clerks and taxi drivers.
Edward Ashford Lee
Author of Plato and the Nerd: The Creative Partnership of Humans and Technology