There is an intelligent question at the heart of Ted Chiang’s new novella, The Lifecycle of Software Objects. The story is set in a near future where online virtual worlds have grown to such levels of sophistication that they can support genetic programmes which imitate the behaviour of life. Initially marketed as digital pets, these software objects quickly prove far more lifelike than their creators intended. The novella follows the progress of these digital creatures and their carers over their formative years, if lifeforms they really are.
Artificial Intelligence is one of the more familiar tropes of Science Fiction, and one that has made it into the popular imagination through films like Blade Runner, The Terminator and The Matrix. The machines are alive, and they’re coming to get us. How the machines come to life is less often explored. Often they are constructed, manufactured as full adult intelligences rolling off assembly lines. Or they are emergent, ghosts arising from the complexity of machine and information systems. But they are rarely nurtured. Why would they be? A machine body does not need to be grown like a biological body, so why would a machine mind, or even a machine consciousness?
In The Lifecycle of Software Objects, Ted Chiang asks, what if Artificial Intelligence can only be created through nurture? What if an infant AI requires all the same care, protection and love as an infant human? Can such an AI still be considered a machine, or would it be owed all the same rights and privileges as a human?
It’s a question that allows Chiang to explore not just the moral consequences of AI, but more broadly the question of consciousness. Of all the unanswered questions in science, consciousness is among the most intractable. Whilst our knowledge of the brain has advanced in leaps and bounds, it has brought us no closer to really understanding the fundamental nature of consciousness. Is consciousness merely a product of the brain? If so, what is its physical process? Is it rooted at the quantum level, in which case is it even attached to the human body? Or is consciousness, as the Buddhists claim, a universal quality that simply arises through the human form?
The only really significant thing we can say about consciousness is that we do not know. And given that lack of knowledge, many of our most fundamental moral assumptions come into question. Chiang has a startling capacity to challenge those assumptions in the most direct and economical of ways. If the AI of the story are really only software objects, then why is it so horrific to learn that hacker groups have developed torture programmes for them? Or that software objects ‘hothoused’ without human contact become autistic or even psychotic? Why should we care about these software objects any more than, say, an iPhone app or the latest distro of Linux?
Towards the end of the novella Chiang states his thesis: ‘experience is algorithmically incompressible’. Experience is the only source of intelligence. For Artificial Intelligence to exist it must live and experience, fully and completely, until we can no longer truly consider it artificial. In the end, we care about the software objects of Chiang’s novella because they have shared our experience. Whether they share our form or not, whether they are truly alive or not, the software objects are part of the human experience, and so in some way human.
It’s an interesting thesis because it removes AI from the realm of sci-fi fantasy and places it firmly in the bounds of very real probability. Chiang so skilfully explores his thesis in relation to the dynamics of the software development industry, consumer culture, capitalist economics and human nature that, after finishing the 30,000-word novella, it’s difficult to imagine that some form of Chiang’s scenario will not emerge sooner rather than later. It’s both a hopeful and a horrifying prospect. Hopeful because any new emergence of life into the world brings immense hope for the future. Horrifying because if The Lifecycle of Software Objects illustrates anything, it is the immense human capacity to abuse and damage consciousness wherever it arises, even in its own form.