I recently read Steven Levy’s book on Artificial Life. I enjoyed the book very much, since the a-life theme weaves together many of the threads of research into complex adaptive systems, and is a useful way of thinking about the relationship between the various topics. Levy also tells a human story of the scientific pursuit of artificial life, the tale of a motley crew of eccentric scientists, pursuing their work at the margins of the scientific mainstream, who join together to create a rich new area for exploration.
The book was written in 1992; ten years later, the results of the pursuit of a-life have been decidedly mixed. Despite substantial scientific progress, the more ambitious ideas of artificial life seem to have retreated to the domain of philosophy. And as a scientific field, the study of artificial life seems to have returned to the margins. The topic is fascinating, and the progress seems real — why the retreat? One way to look at progress and stasis in the field is to consider how scientists filled in the gaps of von Neumann’s original thesis. The brilliant pioneer of computer science, in Levy’s words, “realized that biology offered the most powerful information processing system available by far and that its emulation would be the key to powerful artificial systems.” Considering reproduction the diagnostic aspect of life, von Neumann proposed a thought experiment describing a self-reproducing
automaton.
The automaton was a mechanical creature which floated in a pond that happened to be chock full of the same kinds of parts from which the creature itself was composed. The creature had a sensing apparatus to detect the parts, and a robot arm to select, cut, and combine them. The creature read binary instructions from a mechanical tape, duplicated them, and fed them to the robot arm, which assembled new copies of the creature from the parts floating in the pond. The imaginary system implemented two key aspects of biological life:
* a genotype encoding the design for the creature, with the ability to replicate its own instructions (like DNA)
* a phenotype implementing the design, with the ability to replicate new creatures (like biological reproduction)
The thought experiment is even cleverer than it seems — von Neumann described the model in the 1940s, several years before the structure of DNA was discovered!
In the years since von Neumann’s thought experiment, scientists have conceived numerous simulations that implement aspects of living systems that were not included in the original model:
* Incremental growth. The von Neumann creature assembled copies of itself, using macroscopic cutting and fusing actions, guided by a complex mechanical plan. Later scientists developed construction models that work more like the way nature builds things: by growth rather than assembly. Algorithms called L-systems, after their inventor, the biologist Aristid Lindenmayer, create elaborate patterns by the repeated application of very simple rules. With modification of their parameters, these L-systems generate patterns that look remarkably like numerous species of plants and seashells. (There is a series of wonderful-looking books describing applications of the algorithms.)
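To give the flavor of the technique, here is a toy L-system in Python: just the string-rewriting core, with Lindenmayer’s textbook “algae” rules and a common bracketed plant rule as examples. The particular symbols and rules are illustrative of the method, not any specific published system.

```python
# Minimal L-system: repeatedly rewrite every symbol of a string
# in parallel, according to a fixed set of production rules.

def lsystem(axiom, rules, generations):
    """Apply the rewrite rules to the axiom for a number of generations."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)  # unmatched symbols are copied as-is
    return s

# Lindenmayer's classic "algae" system: A -> AB, B -> A
algae_rules = {"A": "AB", "B": "A"}
for n in range(6):
    print(n, lsystem("A", algae_rules, n))

# A bracketed rule set like this one is typically handed to a
# turtle-graphics interpreter to draw branching, plant-like shapes.
plant_rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
print(len(lsystem("X", plant_rules, 4)), "symbols after 4 generations")
```

All of the apparent complexity comes from reapplying the same few substitutions; changing a rule or a parameter changes the “species” the pattern resembles.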
* Evolution. Von Neumann’s creature knows how to find parts and put together more creatures, but it has no ability to produce creatures that are different from itself. If the pond gradually dried up, the system would come to a halt; it would not evolve new creatures that could walk instead of paddle. John Holland, the pioneering scientist based at the University of Michigan, invented a family of algorithms that simulate evolution. Instead of copying the plan for a new creature one for one, the genetic algorithm simulates the effect of sexual reproduction by
occasionally mutating a creature’s instruction set and regularly swapping parts of the instruction sets of two creatures. One useful insight from the execution of genetic algorithm simulations is that recombination proves to be a more powerful technique for generating useful adaptation than mutation.
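The two operators are easy to sketch. Here is a minimal Python version acting on bit-string “instruction sets”; the representation and the rates are made up for illustration, not taken from Holland.

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: swap the tails of two instruction sets."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:], parent_b[:point] + parent_a[point:]

def mutate(genome, rate=0.01):
    """Occasionally flip individual bits of an instruction set."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

a = [random.randint(0, 1) for _ in range(20)]
b = [random.randint(0, 1) for _ in range(20)]
child1, child2 = crossover(a, b)
print(mutate(child1), mutate(child2))
```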
* Predators and natural selection. In von Neumann’s world, creatures will keep assembling other creatures until the pond runs out of parts. Genetic algorithms introduce selection pressure; creatures that meet some sort of externally imposed criterion get to live longer and have more occasions to reproduce. Computer scientist Danny Hillis used genetic algorithms to evolve computer programs that solved sorting
problems. When Hillis introduced predators in the form of test programs that weeded out weak algorithms, the selection process generated stronger results.
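As a toy stand-in for that kind of setup (not Hillis’s actual sorting-network experiment), here is a bare-bones generational loop in which the “predators” are just a fixed battery of test cases, and selection pressure comes from tournaments among random individuals.

```python
import random

TESTS = [(i, random.randint(0, 1)) for i in range(32)]   # a fixed battery of "predator" tests

def fitness(genome):
    """Score a genome by how many test cases it satisfies."""
    return sum(1 for i, expected in TESTS if genome[i] == expected)

def tournament(pop, k=3):
    """Selection pressure: the fittest of k random individuals gets to reproduce."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    p = random.randrange(1, len(a))
    return a[:p] + b[p:]

def mutate(g, rate=0.02):
    return [bit ^ 1 if random.random() < rate else bit for bit in g]

pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(100)]
for generation in range(50):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(len(pop))]
best = max(pop, key=fitness)
print("best score after 50 generations:", fitness(best), "of", len(TESTS))
```

Making the tests themselves evolve against the population, as Hillis did, is what turns this into a genuine predator–prey arms race.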
Genetic algorithms have proven to be highly useful for solving technical problems. They are used to solve optimization problems and to model evolutionary behavior in economics, finance, operations, ecology, and other fields. Genetic algorithms have also been used to synthesize computer programs that solve some computing problems as well as humans can.
* Increasingly complex structure. Evolution in nature has generated increasingly complex organisms. Genetic algorithms simulate part of the process of increasing complexity. Because the recombination process
generates new instruction sets by swapping large chunks of old instruction sets, the force of selection necessarily operates on modules of instructions, rather than individual instructions (see Holland’s book, Hidden Order, for a good explanation of how this works).
* Self-guided motion. Von Neumann’s creatures were able to paddle about and find components; how this happens is left up to the imagination of the reader — it’s a thought experiment, after all. Rodney Brooks’ robot
group at the MIT AI lab has created simple robots, modeled after the behavior of insects, which avoid obstacles and find things. Instead of using the top-heavy techniques of early AI, in which the robot needed to
build a conceptual model of the appearance of the world before it could move, the Brooks group’s robots obey simple rules, like moving forward and turning when they meet an obstacle.
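A crude grid-world caricature of that reactive style, just the flavor of it rather than Brooks’s actual subsumption architecture, might look like this: no world model at all, only a couple of local rules.

```python
import random

# Toy reactive "robot" on a grid: rule 1, keep moving forward;
# rule 2, if the way ahead is blocked, turn left or right.
GRID = [
    "..........",
    "..####....",
    "......#...",
    "..#...#...",
    "..........",
]
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # east, south, west, north

def blocked(r, c):
    return not (0 <= r < len(GRID) and 0 <= c < len(GRID[0])) or GRID[r][c] == "#"

r, c, h = 0, 0, 0
for step in range(40):
    dr, dc = HEADINGS[h]
    if blocked(r + dr, c + dc):
        h = (h + random.choice([1, 3])) % 4   # rule 2: turn when an obstacle is met
    else:
        r, c = r + dr, c + dc                 # rule 1: otherwise keep moving forward
    print(step, (r, c))
```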
* Complex behavior. Living systems are complex, a mathematical term of art for systems that are composed of simple parts whose behavior as a group defies simple explanation (concise definition lifted from Gary
Flake). Von Neumann pioneered the development of cellular automata, a class of computing systems that can generate complex behavior. John Conway’s Game of Life is a cellular automaton that proved to be
able to generate self-replicating behavior (apparently after the Levy book was published), and, in fact, was able to act as a general-purpose computer (Flake’s chapter on this topic is excellent). Cellular automata can be used to simulate many of the complex, lifelike behaviors described below.
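The Game of Life itself fits in a dozen lines of Python; the whole rule is local neighbor counting, and yet patterns such as the glider below move coherently across the grid.

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Game of Life; `cells` is a set of live (row, col) pairs."""
    counts = Counter((r + dr, c + dc)
                     for r, c in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # A cell is alive next step if it has 3 live neighbors,
    # or 2 live neighbors and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# A glider: a five-cell pattern that "walks" diagonally under these rules.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))  # same shape, shifted one cell diagonally after four steps
```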
* Group behavior. Each von Neumann creature assembles new creatures on its own, oblivious to its peers. Later scientists have devised ways of simulating group behavior: Craig Reynolds simulated bird flocking behavior, each artificial bird following simple rules to avoid collisions and maintain a clear line of sight. Similarly, a group of scientists at the Free University in Brussels simulated the collective foraging behavior of social insects like ants and bees. If a creature finds food, it releases pheromone on the trail; other creatures wandering randomly will tend to follow pheromone trails and find the food. These behaviors are not mandated by a leader or control program; they emerge naturally, as a result of each creature obeying a simple set of rules.
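Reynolds’ boids are usually summarized as three local steering rules: separation, alignment, and cohesion. Here is a minimal 2D sketch of those rules; the weights and neighborhood radius are arbitrary, and this is not Reynolds’ original code.

```python
import random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, radius=15.0):
    """One tick: each boid reacts only to neighbors within `radius`."""
    for b in boids:
        near = [o for o in boids if o is not b
                and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < radius ** 2]
        if not near:
            continue
        # separation: steer away from neighbors that are very close
        b.vx -= 0.05 * sum(o.x - b.x for o in near if abs(o.x - b.x) + abs(o.y - b.y) < 5)
        b.vy -= 0.05 * sum(o.y - b.y for o in near if abs(o.x - b.x) + abs(o.y - b.y) < 5)
        # alignment: nudge velocity toward the neighbors' average velocity
        b.vx += 0.05 * (sum(o.vx for o in near) / len(near) - b.vx)
        b.vy += 0.05 * (sum(o.vy for o in near) / len(near) - b.vy)
        # cohesion: nudge position toward the neighbors' center of mass
        b.vx += 0.01 * (sum(o.x for o in near) / len(near) - b.x)
        b.vy += 0.01 * (sum(o.y for o in near) / len(near) - b.y)
    for b in boids:
        b.x, b.y = b.x + b.vx, b.y + b.vy

flock = [Boid() for _ in range(30)]
for _ in range(100):
    step(flock)
```

No bird is told where the flock is going; the flock-level motion is entirely a byproduct of each boid reacting to its neighbors.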
Like genetic algorithms, simulations of social insects have proven very useful at solving optimization problems, in domains such as routing and scheduling. For example, scientists Eric Bonabeau and Marco Dorigo used ant algorithms to solve the classic travelling salesman problem.
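Here is a stripped-down version of the ant-system idea for the travelling salesman problem. It is a sketch of the general technique, not Bonabeau and Dorigo’s published algorithm: ants build tours edge by edge, favoring short edges with strong pheromone; pheromone evaporates everywhere; and shorter tours deposit more of it.

```python
import math
import random

# Random city coordinates and pairwise distances.
cities = [(random.random(), random.random()) for _ in range(12)]
n = len(cities)
dist = [[math.dist(a, b) or 1e-9 for b in cities] for a in cities]
pher = [[1.0] * n for _ in range(n)]             # pheromone level on each edge

def build_tour():
    """One ant builds a tour, preferring short edges with strong pheromone."""
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        here = tour[-1]
        options = list(unvisited)
        weights = [pher[here][j] * (1.0 / dist[here][j]) ** 2 for j in options]
        nxt = random.choices(options, weights)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(tour):
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

best = None
for _ in range(100):
    tours = [build_tour() for _ in range(20)]
    for i in range(n):                            # pheromone evaporates everywhere
        for j in range(n):
            pher[i][j] *= 0.9
    for t in tours:                               # shorter tours deposit more pheromone
        deposit = 1.0 / tour_length(t)
        for a, b in zip(t, t[1:] + t[:1]):
            pher[a][b] += deposit
            pher[b][a] += deposit
    candidates = tours + ([best] if best else [])
    best = min(candidates, key=tour_length)

print("best tour length found:", round(tour_length(best), 3))
```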
* Competition and co-operation. Robert Axelrod simulated “game theory” contests, in which players employed different strategies for co-operation and competition with other players. Axelrod set populations
of players using different algorithms to play against each other for long periods of time; players with winning algorithms survived and multiplied, while losing species died out. In these simulations, co-operative algorithms tend to predominate in most circumstances.
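Axelrod’s tournaments pitted strategies against one another in the iterated prisoner’s dilemma. A toy round-robin like the one below shows the basic machinery; the three strategies and the payoff values are the standard textbook ones, not Axelrod’s full field of entries.

```python
# Round-robin iterated prisoner's dilemma: each strategy plays every other
# strategy for many rounds; cooperative-but-retaliatory strategies like
# tit-for-tat tend to accumulate high scores.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(my_hist, their_hist): return "C"
def always_defect(my_hist, their_hist):    return "D"
def tit_for_tat(my_hist, their_hist):      return their_hist[-1] if their_hist else "C"

def match(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

strategies = [always_cooperate, always_defect, tit_for_tat]
totals = {s.__name__: 0 for s in strategies}
for s in strategies:
    for t in strategies:
        if s is not t:
            score, _ = match(s, t)
            totals[s.__name__] += score
print(totals)
```

Replacing this fixed round-robin with a population in which high scorers multiply and low scorers die out gives the ecological version of the tournament described above.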
* Ecosystems. The von Neumann world starts with a single pond creature, which creates a world full of copies of itself. Simulation builders Chris Langton, Steen Rasmussen, and Tom Ray evolved worlds containing whole ecosystems’ worth of simulated creatures. The richest environment is Tom Ray’s Tierra. A descendant of “core wars,” a hobbyist game written in assembly language, the Tierra universe evolved parasites, viruses, symbionts, mimics, evolutionary arms races — an artificial ecosystem full of interactions that mimic the dynamics of natural systems. (Tierra is actually written in C, but emulates the computer core environment. In the metaphor of the simulation, CPU time serves as the “energy” resource and memory is the “material” resource for the ecosystem. Avida, a newer variant on Tierra, is maintained by a group at Caltech.)
* Extinction. Von Neumann’s creatures will presumably replicate until they run out of components, and then all die off together. The multi-species Tierra world and other evolutionary simulations provide a more complex and realistic model of population extinction. Individual species are frequently driven extinct by environmental pressures. Over a long period of time, there are a few large cascades of extinctions, and many extinctions of individual species or clusters of species. Extinctions can be simulated using the same algorithms that describe
avalanches; any given pebble rolling down a steep hill might cause a large or small avalanche; over a long period of time, there will be many small avalanches and a few catastrophic ones.
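The standard toy model for this kind of avalanche statistics is the Bak–Tang–Wiesenfeld sandpile. The sketch below is that generic model, not any particular extinction simulation, but it produces the same signature: many small cascades and a few enormous ones.

```python
import random
from collections import Counter

# Sandpile model: drop grains one at a time; any site that reaches 4 grains
# topples, giving one grain to each neighbor (grains falling off the edge
# vanish). The number of topplings triggered by one drop is one "avalanche".
N = 30
grid = [[0] * N for _ in range(N)]
sizes = Counter()

for drop in range(20000):
    r, c = random.randrange(N), random.randrange(N)
    grid[r][c] += 1
    avalanche = 0
    unstable = [(r, c)] if grid[r][c] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        avalanche += 1
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < N and 0 <= nj < N:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    sizes[avalanche] += 1

print("small avalanches (1-3 topplings):", sum(v for k, v in sizes.items() if 1 <= k <= 3))
print("large avalanches (>100 topplings):", sum(v for k, v in sizes.items() if k > 100))
```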
* Co-evolution. Ecosystems are composed of multiple organisms that evolve in concert with each other and with changes in the environment. Stuart Kauffman at the Santa Fe Institute created models that simulate the evolutionary interactions between multiple creatures and their environment. Running the simulation replicates several attributes of evolution as it is found in the historical record. Early in an evolutionary scenario, when species have just started to adapt to the environment, there is an explosion of diversity. A small change in an organism can lead to a great increase in fitness. Later on, when species become better adapted to the environment, evolution is more likely to proceed in small, incremental steps. (See pages 192ff in Kauffman’s At Home in the Universe for an explanation.)
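A rough way to see this pattern in code is an adaptive walk on a Kauffman-style NK fitness landscape. The sketch below is a generic illustration with made-up parameters, not Kauffman’s exact model, but the mutations accepted early in the walk tend to produce much larger fitness jumps than the later ones.

```python
import random
from functools import lru_cache

N, K = 20, 3                      # N genes, each interacting with K others
random.seed(1)
neighbours = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]

@lru_cache(maxsize=None)
def locus_fitness(i, pattern):
    """Random (but fixed, thanks to the cache) contribution of gene i given
    its own state and the states of its K neighbours."""
    return random.random()

def fitness(genome):
    return sum(locus_fitness(i, (genome[i],) + tuple(genome[j] for j in neighbours[i]))
               for i in range(N)) / N

genome = tuple(random.randint(0, 1) for _ in range(N))
current = fitness(genome)
for step in range(2000):
    i = random.randrange(N)
    candidate = genome[:i] + (1 - genome[i],) + genome[i + 1:]
    new = fitness(candidate)
    if new > current:             # accept only improvements: an adaptive walk
        print(f"step {step}: fitness jumps by {new - current:.3f}")
        genome, current = candidate, new
```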
* Cell differentiation. One of the great mysteries of evolution is the emergence of multi-celled organisms, which grow from a single cell. Levy writes about several scientists who have proposed models of cell differentiation. However, these seem less compelling than the other models in the book. Stuart Kauffman developed models that simulate a key property of cell differentiation — the generation of only a few basic cell types, out of a genetic code with the potential to express a huge variety of patterns. Kauffman’s model consists of a network in which each node is influenced by other nodes. If each gene affects only a few other genes, the number of “states” encoded by gene expression will be proportional to the square root of the number of genes.
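A small random Boolean network in the spirit of Kauffman’s model can be sketched directly. The parameters below (32 genes, K=2 inputs per gene) are arbitrary; the attractors, the stable cycles of gene-expression states that stand in for cell types, are counted by running the dynamics from many random starting states.

```python
import random

N, K = 32, 2
random.seed(0)
inputs = [random.sample(range(N), K) for _ in range(N)]
# Each gene gets a random Boolean function of its K inputs (a truth table of 2**K bits).
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """Synchronously update every gene from its inputs."""
    return tuple(tables[i][sum(state[inp] << b for b, inp in enumerate(inputs[i]))]
                 for i in range(N))

def find_attractor(state):
    """Iterate until a state repeats; return the cycle as a canonical frozenset."""
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    cycle_start = seen[state]
    return frozenset(s for s, t in seen.items() if t >= cycle_start)

attractors = set()
for _ in range(500):
    start = tuple(random.randint(0, 1) for _ in range(N))
    attractors.add(find_attractor(start))
print("distinct attractors found:", len(attractors), "for", N, "genes")
```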
There are several reasons that this model is somewhat unsatisfying. First, unlike other models discussed in the book, this simulates a numerical result rather than a behavior. Many other simulations could create the same numerical result! Second, the empirical relationship between number of genes and number of cell types seems rather loose — there is even a dispute about the number of genes in the human genome!
Third, there is no evidence of a mechanism connecting epistatic coupling and the number of cell types. John Holland proposed an “Echo” agent system to model differentiation (not discussed in the Levy book). This model is less elegant than other emergent systems models, which generate complexity from simple rules; it starts pre-configured with multiple, high-level assumptions. Also, Tom Ray claims to have made progress at modeling differentiation with the Tierra simulation. This is not covered in Levy’s book, but is on my reading list.
There are several topics, not covered in Levy’s book, where progress seems to have been made in the last decade. I found resources for these on the internet, but have not yet read them.
* Metabolism. The von Neumann creature assembles replicas of itself out of parts. Real living creatures extract and synthesize chemical elements from complex raw materials. There has apparently been substantial progress in modelling metabolism in the last decade, using detailed models gleaned from biochemical research.
* Immune system. Holland’s string-matching models seem well-suited to simulating the behavior of the immune system. In the last decade, work has been published on this topic, which I have not yet read.
* Healing and self-repair. Work in this area is being conducted by IBM and the military, among other parties interested in robust systems. I have not seen evidence of effective work in this area, though I have not searched extensively.
* Life cycle. The von Neumann model would come to a halt with the pond strip-mined of the raw materials for life, littered with the corpses of dead creatures. By contrast, when organisms in nature die, their bodies
feed a whole food chain of scavengers and micro-organisms; the materials of a dead organism serve as nutrients for new generations of living things. There have been recent efforts to model ecological food chains
using network models; I haven’t found a strong example of this yet.

Von Neumann’s original thought experiment proposed an automaton which would replicate itself using a factory-like assembly process, independent of its peers and its environment. In subsequent decades, researchers have made tremendous progress at creating beautiful and useful models of many more elements of living systems, including growth, self-replication, evolution, social behavior, and ecosystem interactions.
These simulations express several key insights about the nature of living systems.
* bottom up, not top down. Complex structures grow out of simple components following simple steps.
* systems, not individuals. Living systems are composed of networks of interacting organisms, rather than individual organisms in an inert background.
* layered architecture. Living and lifelike systems express different behavior at different scales of time and space. On different scales, living systems change based on algorithms for growth, for learning, and for evolution.
Many “artificial life” experiments have helped to provide a greater understanding of the components of living systems, and these simulations have found useful applications in a wide range of fields. However, there has been little progress at evolving more sophisticated, life-like systems that contain many of these aspects at the same time.
A key theme of the Levy book is the question of whether “artificial life” simulations can actually be alive. At the end of the book, Levy opens the scope to speculations about the “strong claim” of artificial
life. Proponents of a-life, like proponents of artificial intelligence, argue that “the real thing” is just around the corner — if it is not a property of Tierra and the MIT insect robots already!
For example, John Conway, the mathematics professor who developed the Game of Life, believed that if the Game was left to run with enough space and time, real life would eventually evolve. “Genuinely living,
whatever reasonable definition you care to give to it. Evolving, reproducing, squabbling over territory. Getting cleverer and cleverer. Writing learned PhD theses. On a large enough board, there is no doubt in
my mind that this sort of thing would happen.”(Levy, p. 58) That doesn’t seem imminent, notwithstanding Ray Kurzweil’s opinions that we are about to be supplanted by our mechanical betters.
Nevertheless, it is interesting to consider the point at which simulations might become life. There are a variety of cases that test the borders between life and non-life. Does life require chemistry based
on carbon and water? That’s the easiest of the border cases — it seems unlikely. Does a living thing need a body? Is a prion a living thing? A self-replicating computer program? Do we consider a brain-dead human whose lungs are operated by a respirator to be alive? When is a fetus considered to be alive? At the border, however, these definitions fall into the domain of philosophy and ethics, not science.
Since the pursuit of artificial life, in all of its multidimensional richness, has generated little scientific progress, practitioners over the last decade have tended to focus on specific application domains, which continue to advance, or have shifted their attention to other fields.
* Cellular automata have become useful tools in the modeling of epidemics, ecosystems, cities, forest fires, and other systems composed of things that spread and transform.
* Genetic algorithms have found a wide variety of practical applications, creating a market for software and services based on these simulation techniques.
* The simulation of plant and animal forms has morphed into the computer graphics field, providing techniques to simulate the appearance of complex living and nonliving things.
* The software for the Sojourner robot that explored Mars in 1997 included concepts developed by Rodney Brooks’ team at MIT; there are numerous scientific and industrial applications for the insect-like robots.
* John Conway put down the Game and returned to his work as a mathematician, focusing on crystal lattice structure.
* Tom Ray left the silicon test tubes of Tierra behind, and went to the University of Oklahoma to study newly-assembled genome databases for insight into gene evolution and human cognition. The latest
developments in computational biology have generated vast data sets that seem more interesting than an artificial world of assembly language parasites.
While the applications of biology to computing and computing to biology are booming these days, the synthesis of life does not seem to be the most fruitful line of scientific investigation. Will scientists ever evolve life, in a computer or a test tube? Maybe. It seems possible to me. But even if artificial creatures never write
their PhD theses, at the very least artificial life will serve the same purpose that medieval alchemy did. In the pursuit of the philosopher’s stone, early experimenters learned the properties of chemicals and the techniques of chemistry, even though they never did find the elixir of eternal life.