We are in the midst of a Cambrian explosion of algorithms. Roughly 540 million years ago, the fossil record tells us that the planet experienced a dramatic diversification of animal life, producing the ancestors of most of the complex organisms that we’re familiar with today. The acceleration of innovation in algorithms and artificial intelligence similarly presents us with a stunning array of technologies and applications and with a puzzling, transformational dynamic.
According to the World Intellectual Property Organization, more than half of all 340,000 AI-related inventions were published between 2013 and 2019. This exponential growth in numbers proceeds in parallel with “a shift from theoretical research to the use of AI technologies in commercial products and services.” There are fewer and fewer areas of life where we do not interact, knowingly or unknowingly, with such algorithms, from the maps that tell us where to go, to the systems that choose what we watch or who we talk to, to the assistants that aid our doctors in their diagnoses.
We have become symbiotic with these machines. We feed them with energy and data, and they reward us with a host of services. But our relationship with them goes deeper. There are multiple layers of feedback loops as we shape algorithms and they shape us, at the individual and collective levels. What framework can help us analyze this complex ecosystem?
One instructive model comes from a researcher two centuries ago. In his voyages through Latin America and Russia in the late 18th and early 19th centuries, the Prussian polymath Alexander von Humboldt creatively observed natural patterns and theorized about their underlying unity. As Andrea Wulf beautifully captures in her book, The Invention of Nature, Humboldt’s research transformed our views on the interconnectedness between animals, plants, geology, the climate, and even people and societies. Our desire to understand machine behavior today follows in that tradition as an effort to craft an interdisciplinary, systems-thinking perspective centered on our relationship with machines.
The novel approach has proponents and a manifesto. In “Machine Behaviour,” a 2019 article in Nature, Iyad Rahwan of the MIT Media Lab with a host of colleagues describes the emergence of an eponymous field of study. The 23 authors, who cut across institutions and disciplines in the natural and social sciences, argue that machine behavior “is concerned with the scientific study of intelligent machines, not as engineering artefacts, but as a class of actors with particular behavioural patterns and ecology.” The endeavor matters, they say, because it is “critical to maximizing the potential benefits of AI for society.”
The root of the framework lies in an analogy between machines—algorithms—and animals. “Animal and human behaviours cannot be fully understood without the study of the contexts in which behaviours occur,” Rahwan and the co-authors write in the paper. “Machine behaviour similarly cannot be fully understood without the integrated study of algorithms and the social environments in which algorithms operate.”
Some of the parallels are apt. Like organisms, machines consume energy and succeed or fail based on whether they replicate. It is a model that Eric Beinhocker described in The Origin of Wealth, an examination of economics as a complex, evolutionary system. For Beinhocker, an organism (like an elephant) fulfills the same task as physical technologies (like a microchip) and social technologies (like venture capital), which is to transform disorder—entropy—into order and information for functions like reproduction or wealth-creation.
Change in technology is not entirely akin to biological evolution, however. Gene transmission in organisms occurs vertically, through the passage of information from parents to offspring across generations. This creates a sequential hierarchy of organisms and groups of organisms: those that are closer to each other in the evolutionary timeline, like humans and chimpanzees, also have more in common genetically than those further apart, like humans and gorillas.
“The key feature of technology is that it ain’t transmitted vertically. It’s transmitted horizontally,” Andrew Berry, a lecturer in organismic and evolutionary biology at Harvard University, told Gizmodo. Horizontal transmission means that information exchange is not limited to that from parents to offspring, and can happen more quickly and in more ways. When YouTube alters its recommendation algorithm, your local instance of YouTube will reflect the new features instantly, regardless of the lineage of your app or browser or your distance from Silicon Valley.
Machine behavior thus exhibits a radically different transmission path and lack of hierarchy. While he acknowledges the similarity between some features of gene and idea transmission, Berry notes that Rahwan and colleagues’ approach “is trying to force a faux-biological parallel. The parallel is bereft of meaning.”
For Berry, this has implications for how we should study machine behavior. Rather than devising a new field that mimics biology, he suggests that the lens provided by the history of science and technology would be much more instructive. It doesn’t matter if we’re observing the First Industrial Revolution or the Fourth, the spinning wheel or machine-learning algorithms. In either case, the history of science and technology allows us to study the consequences of technical progress, including the resulting social dynamics.
Others recommend a more person-centric approach. “Ethnography,” Anne Washington told Gizmodo, explaining her lens, “is about deeply listening.” Washington, an assistant professor of data policy at New York University, highlighted the importance of research ethics and consent in studying the impact that algorithms have on communities that display digital differential vulnerability. As Washington and a co-author argued in a paper laying out this concept, people who are already vulnerable in the real world—because of ethnicity, gender, social status, or other characteristics—experience “disproportionate exposures to harm within data technology.” The digital and physical worlds thus become increasingly indistinguishable.
In this context, machines have specific origins and tangible effects. With algorithms as with the internal-combustion engine, we should look at an artifact’s provenance: in Washington’s words, “why it was created, where it came from, who bought it, who made it, under what conditions did it become a commercial product?” A machine is not “a found object”; it is a deliberate human creation, the result of specific economic incentives and business models and the cause of real-world impacts on individuals and groups.
In a more qualitative framework like that of ethnography, storytelling also matters. First, it can facilitate understanding through analogies. Washington asks whether one would use a car to drive up a mountain if it has only been tested in a parking lot: if a tool has not been tried in the relevant context, it may not be fit for purpose. Sara Kingsley, a doctoral student at Carnegie Mellon University’s Human-Computer Interaction Institute who has also worked for Microsoft Research and the Department of Labor, instead compares software algorithms with human decision processes. For example, Kingsley told Gizmodo, a selection algorithm—software allocating public benefits or a lawyer selecting a jury—may superficially appear unbiased while not actually being so.
Second, storytelling can allow a more intimate discussion of the consequences of machine behavior. “We’re using storytelling to highlight voices and stories,” Kingsley explained. “Our hope is that, by using storytelling, we’re using a method of research that allows us to learn about a problem that exists in this space and to communicate to scholars without revealing the confidential information of subjects.” This may be helpful to educate about machine behavior in the media or popular culture, and to initiate conversations beyond academia.
We’re not yet there, however. For Washington, computer science is still in an “imperial stage,” where the focus is on taking from subjects rather than treating them as fully human. Power dynamics are central in this understanding, as they were in Humboldt’s view of the world. “Humboldt was the first to relate colonialism to the devastation of the environment,” says Wulf in her book. As he traveled throughout Latin America, he witnessed first-hand how the Spanish Empire’s plunder caused droughts, deforestation, and abuse against indigenous people.
Grasping the complex relationships between political and business institutions, people, and machines remains essential today. For the Prussian scientist, Wulf writes, “nature, politics, and society formed a triangle of connections.” At a macroscopic level, we can observe politics shaping algorithmic niches like rivers carving a landscape into distinct territories. In China, it’s through the so-called Great Firewall; in Europe through the General Data Protection Regulation; and in the U.S. through a laissez-faire approach to technology. At a finer level, the machines that emerge within each of these environments may differ in how they touch the lives of billions, but the interconnection between people and algorithms will remain a global, increasing trend.
With their paper, Iyad Rahwan and his co-authors articulate how we can begin to approach such a vast domain. Machines and organisms maintain important differences, though these too may recede as we increasingly engineer algorithms to mimic natural processes. The current manifesto is not yet final either, as additional perspectives and critiques join the effort to model the behavior of machines and their complex relationships with humans and humanity. By following Alexander von Humboldt’s example and transcending disciplines, acutely observing, and maintaining a profound social consciousness, we’ll have a better chance of shepherding our machine companions and machine-dependent society towards a more enlightened direction.
Giacomo Bagarella is based in New York City, where he consults on urban and economic development at HR&A Advisors by day and writes about technology and policy by night.