Let us make robots in our own image

In the ancient Mesopotamian creation story, humans are created to do the work the gods do not want to do: digging irrigation ditches. This has always struck me as very similar to the motivation given for the development of robots and artificial intelligence. They will do the work that we don’t want to do so that we can devote ourselves to things like art, space exploration, and philosophical contemplation. This is also how ancient Greco-Roman society thought of slaves. Aristotle essentially called slaves “living tools.” AIs and robots are not alive, but we do see them as tools. Paradoxically, with the rise of ChatGPT, it could end up being AIs that make art and poetry while humans do menial tasks like folding laundry. Something about seeing intelligent agents, whether artificial intelligences or biological animals, as mere tools seems to result in us treating other humans as tools as well.

In the age of robotics, humans have acted more like the Mesopotamian gods and less like the Judeo-Christian God. Unlike the Mesopotamian gods, the God of the Hebrews created humans not as slaves but as partners and co-creators. What if we had the same attitude towards artificial intelligence? What if, instead of creating servants, our goal was to create companions and equal partners?

In previous articles, I have argued for shifting our motivation for technological innovation from maximizing power to pursuing wonder and compassion. I have primarily talked about wonder at the natural world as revealed by science, but the same could apply to technology. What if our motivation for developing non-human, post-biological intelligence were driven more by wonder at non-human intelligence and a desire to increase the diversity of forms of intelligence in our community? How would this change how we treated AIs? How would it change how we treated humans and even the non-human living world?

If you look at classic science fiction, it is interesting how much more relational the AIs are than their modern equivalents. In Star Wars, you have robots as administrative assistants like C-3PO and pilot-sidekicks like R2-D2. In Star Trek, you have androids like Data with their own desires and sense of self. HAL 9000 from 2001: A Space Odyssey is more analogous to modern AIs, but even he appears more like a full person than the average ChatGPT.

We are likely only a few decades away from having AIs of this type, but an important question is whether the AIs we actually build will be this personable. Arguably, the real-world equivalents of a robot waitress or cashier already exist in the form of self-checkouts at grocery stores and fast food restaurants. These automated self-checkouts are impersonal and do not provide an opportunity for relationship the way a C-3PO-like droid would. ChatGPT and its peers likewise exist primarily as tools to accelerate our work and consumption habits. They are not conceived of as partners, and there is no major attempt to make them more like partners. If we do not change our thinking on artificial intelligence, we may simply get more of the same: impersonal AIs that are tools to advance consumerism rather than potential partners in creating a better world.

Why have we opted for such an impersonal approach? Part of the reason could be that our society in general is not person-centered but thing-centered, the things being the products we consume. If we were to shift towards a more person-centered vision of technological innovation, what would it look like in the area of artificial intelligence?

In part, it might look like creating AIs that are more general purpose, designed to be companions and helpers rather than simply tools for specific tasks. We already need to stop thinking of humans and other living creatures as mere tools; it makes sense for the same shift to happen with AIs, which represent another type of creature.

There are dangers to giving AIs agency, and creating AIs that act on their own has its risks, but that is partly because of how we currently conceptualize artificial intelligence. The current development of artificial intelligence essentially treats it as a tool to maximize human power over nature. It is a fundamentally Nietzschean way to look at human nature: we are creating powerful beings that are mainly interested in maximizing power. What if we used a different model? We could try to stop the advance of artificial intelligence, but that is unlikely to succeed because of its economic potential. Furthermore, there are genuine benefits of artificial intelligence for humanity and the planet. If we cannot stop the development of AI, we might as well try to steer it in the right direction.

If we want to create AIs that are more equal partners than slaves, it will require thinking of them more as whole persons, even if they technically are not. If we create an artificial general intelligence that acts exactly like a human being, for practical purposes it makes sense to treat it as if it has genuinely human consciousness, even if we believe it does not. Otherwise, we would risk an injustice: if it actually did possess human consciousness, we would be treating a person as a thing, something humans already do to other humans.

How do we create these companion AIs? To start, we could make them more generalist rather than programming them for very specific tasks, a change that may already be taking place. We could also give them a capacity for creativity, bonding, and even play, so that they are not just solving problems but also seeking relationships. Another important element is to create them as embodied entities, like humans and other animals. Some AI researchers even consider embodiment necessary for the development of a truly human-like artificial general intelligence. If we intend to create truly human-like artificial intelligence that is also capable of compassion and bonding, we need to make it embodied, at least in a virtual environment if not in a physical substrate.

Another important element is to pursue the advancement of artificial intelligence for its own sake rather than as a means to an end. We want to create a new type of being that we can partner with to create a better world, in other words, to renew creation. We value our human companions, hopefully, because we find them inherently valuable. Another being, whether another human or an animal, is a source of wonder. To have AIs as true companions, we would need a similar sense of wonder about them. We would need to see them as intrinsically valuable in their own right. Using AI to help solve economic, social, and ecological problems is desirable, but if we still see AIs only as means to an end, we have not moved beyond the Mesopotamian gods, and we are probably seeing other humans as means to an end as well.

In Genesis 1, God creates humanity in his own image to be his companions and partners in creation, for a relationship of mutual love, respect, and shared creativity. What if we chose to imitate the Judeo-Christian God instead of the Mesopotamian gods? What if, through AI development, we sought to create partners and companions rather than simply slaves? Creating more personable AIs primarily as equal companions and partners rather than as slaves might be a step in the right direction. We would be creating robots to be our friends. Though, since the word robot comes from a Czech word meaning forced labor, maybe we should also come up with a different name for these non-biological companions.
