Down the Rabbit Hole of Artificial Intelligence

The recent podcast on religion and artificial intelligence (AI) was a lively discussion of a host of issues revolving around AI. One might be excused for feeling a bit exhausted by the end of it, as the panel whips from one topic to the next in a furious attempt to unpack the many issues that surround this newest phenomenon. Artificial intelligence, robots, chatbots, superintelligence, and transhumanism all make the rounds as the panelists try to start thinking together about the ways these topics relate to religion. In this short response, I’m going to try to bring some method to the madness, give the uninitiated some help in understanding what is going on in these various discussions, and perhaps share some of my own insight.

Let me begin by diverting for a moment into an experience I had the other day. FedEx was supposed to deliver a package to me. But the company had told me they would contact me first about a delivery time. They had not. So I got on FedEx’s website and had this interaction:

( 1m 29s ) Clarissa A: Thank you, Randy. Let me pull out the shipping details with the information provided. Please allow me to review the status of the shipment, in that way I’ll be able to provide you an accurate information. Would that be fine?
( 1m 41s ) Randy: yes, go ahead.
( 2m 10s ) Clarissa A: Thank you for waiting, Randy. I appreciate your patience. Getting you the proper assistance is important to us. The department that can best handle your concern is our FREIGHT SERVICES. Please call 1-866-393-4585. I apologize for you have been inadvertently routed to our FedEx chat support for US Express and Ground domestic packages.
( 2m 16s ) Clarissa A: I am sorry if I was not able to meet your expectations. It would really ease my heart to process your request myself. It is with sincere regret that I have to forward it to the proper channel. Please accept my apology for my shortcomings.
( 2m 27s ) Clarissa A: Randy, is there anything else I can assist you with today?
( 2m 46s ) Randy: No, thanks for your help.
( 3m 43s ) Clarissa A: You are most welcome, Randy. Apologies for the inconvenience! I am looking forward for this issue to be resolved after the transfer. Thank you for chatting with FedEx.

Now, a piece of information and a question. FedEx uses chatbots (artificial intelligence designed to interact with users like a person) on its website. Question: was Clarissa A. a chatbot or a real person? If you’re like me, you’re not sure, but you’re suspicious. There’s something not quite right about the interaction. It’s too emotive at one level. The phrases “really ease my heart,” “sincere regret,” and “apology for my shortcomings,” as well as the perky “I am looking forward for this issue to be resolved…,” do not seem quite right. They are too self-effacing to imagine a human saying. I posted this interaction on Facebook and asked my friends (mostly fellow academics) to vote. They were unanimous that it was probably a chatbot. But many also conceded that it might be a person working from a strict script, particularly a non-native English speaker (the last sentence is really not quite grammatically copacetic – would a computer make that mistake?).

Let’s assume, however, for the sake of argument, that Clarissa A. was a chatbot. What makes us uncomfortable about the interaction is what is sometimes referred to as “the uncanny valley.” Most often this applies to robots that are supposed to look human but can’t quite pull it off, yet it seems appropriate to this interaction as well. You reach the uncanny valley when you get close to “almost human” in looks or interactions.

The Roomba doesn’t have this problem: it’s clearly a robot and doesn’t intend to look like a person. The new robot Kuri, which just premiered at CES, looks like one of the Japanese figures from Fantasmic; it is far from the uncanny valley. But because I can neither hear nor see Clarissa, and must judge her solely by her online interactions, she enters the uncanny valley. I am put in the uncomfortable position of not knowing whether I am dealing with a human being or a piece of software doing an almost, but not quite, convincing human imitation.

What Clarissa A. is (if she’s a chatbot) is what would be called a “narrow A.I.,” as distinguished from a “general A.I.” A narrow A.I. is an A.I. designed to solve a particular problem. In Clarissa A.’s case, it’s helping me get my package. If I had varied from that and asked her opinion of the Steelers or of Trump, it might have become immediately apparent whether I was dealing with an A.I. Clarissa A. is very good at figuring out where my package is and when it’s going to get to me (and very sorry when she fails), but that’s the limit of the “intelligence” in her artificial intelligence. In terms of religion, Clarissa A. is not much of an issue. And while a quarter of a million people may have proposed to Amazon’s Alexa, no one, as with Clarissa A., is going to convert her to a religion; no one believes she has a soul; no one believes she’s a person. I asked both Alexa and Google Home what their religion was, and both declined to answer (Google Home told me, “I guess I wasn’t programmed to be religious”). Narrow A.I.s will undoubtedly become increasingly common. Facebook has just introduced a developer toolkit for creating narrow A.I.s that will do things like help you book a plane or send your mother flowers. So we should expect to see more of them, and their interactions will undoubtedly get better, more human, over time.
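To make the distinction concrete, here is a deliberately crude, purely hypothetical sketch (in Python) of how a script-bound chatbot in the Clarissa A. mold might work under the hood: keyword rules mapped to canned replies, with anything off-script falling through to a stock apology. The rules and phrases here are invented for illustration and do not reflect FedEx’s actual system.

```python
# Hypothetical sketch of a "narrow A.I." chatbot: keyword rules paired
# with canned replies. Everything here is invented for illustration;
# this is not FedEx's actual system.

RULES = [
    (("package", "track", "deliver"),
     "Let me pull up the shipping details. Would that be fine?"),
    (("freight",),
     "The department that can best handle your concern is FREIGHT SERVICES."),
    (("thank",),
     "You are most welcome! Thank you for chatting with us."),
]

FALLBACK = "I am sorry if I was not able to meet your expectations."

def reply(message: str) -> str:
    """Return the canned reply for the first rule whose keyword appears."""
    text = message.lower()
    for keywords, canned in RULES:
        if any(word in text for word in keywords):
            return canned
    # Off-script questions (the Steelers, Trump, religion) land here,
    # which is exactly what gives a narrow A.I. away.
    return FALLBACK

print(reply("Where is my package?"))                # on-script: shipping reply
print(reply("What do you think of the Steelers?"))  # off-script: stock apology
```

The last line is the point: ask the bot about football and it has nothing to say, because the “intelligence” extends no further than the rules someone wrote.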

A general A.I. is a whole other story. An Artificial General Intelligence (AGI) would be a machine that could interact with you on a host of different topics; it would in many ways be indistinguishable from a human intelligence. What we are talking about is machine intelligence: a machine that could make decisions, plans, and choices; a machine that could improve itself and learn. This is the holy grail of artificial intelligence. It is also the stuff of science fiction movies, most recently Ex Machina and Her.

Here is where we often hear talk about the “Turing test.” Alan Turing thought a machine might be described as intelligent if, in an interaction with it, a normal person would not be able to distinguish it from an actual person. In the podcast, Beth Singler is quite skeptical of the Turing test, and rightfully so. One might argue that Clarissa A. passes the Turing test: there is real doubt whether she is human or not. But as Singler points out, that is only because we have a messy idea of intelligence. We don’t actually know what human intelligence is, so we don’t really know when a machine might have it, or surpass it.

On the other hand, what if we had an electronic entity that we had no doubt was intelligent and could actually modify itself, improving itself recursively until it surpassed human intelligence and became superintelligent? This is what is sometimes envisioned in an Artificial General Intelligence (AGI). An AGI is the stuff of nightmares as well as dreams. The Matrix and Terminator are both manifestations of the fear of AGI. But they are not alone. Philosopher Nick Bostrom’s book Superintelligence lays out the dangers of an AGI. People like Bill Gates, Stephen Hawking, and Elon Musk have all sounded the alarm that the potential danger from an AGI is not to be dismissed. Bostrom argues that part of the problem is that it is very hard to attain human-level intelligence; but once it is attained, there is no reason an AGI would stop there. The smartest person in the world may have an I.Q. of 200. But once an AGI developed the equivalent of an I.Q. of 100, it would be able to self-improve, and there would be no natural barrier at an I.Q. of 200 as there is with humans. Humans are limited by the size of our skulls; an AGI would have no such limit, and could therefore surpass the smartest humans in a potentially short amount of time. It would then become a superintelligent being, capable of almost anything.

But a variety of cultural and religious issues arise with an AGI that do not arise with narrow A.I.s or with robots (which are generally also narrow A.I.s). Once you have an AGI (whether in a robot body or not), you have serious considerations. Would an AGI have a soul? Would an AGI believe in God? In Isaac Asimov’s classic tale “Reason,” a robot concludes, in a combination of the cosmological and ontological arguments, that its creators are not the humans who claim to have made it but some greater being, and starts its own religion. Would an AGI follow suit? More interesting still might be the question raised by Robert Sawyer’s “WWW: Wake” series, in which the internet (called Webmind) comes to consciousness and becomes an AGI. In the book, Webmind is mistaken for God and, as an experiment, admits to being God to some of its users. Would a religion develop around an AGI? Would an AGI accept itself as a divinity? It might reason that it has all the elements of a God, so why would it not accept the title?

In this way, while it would be a mistake to call Bostrom’s book a work of “theology,” it is without doubt one of the more theologically important books today, because it raises the question: what happens when we create God? Not the illusion of God, as Freud argued, but a being for all practical purposes indistinguishable from many definitions of God. And what happens if this is not a God of love? What will the “will” of this God be? And how can we ensure that it is benevolent? Bostrom’s book is a call to arms, a plea to consider this problem and address it. He takes for granted that it is only a matter of time until an AGI is created. The problem is how to control it once it arrives and ensure it works for us and not against us. That, he says, is the thorny problem, and it must be solved before AGI is created. We must, he in effect argues, learn how to control God. One thinks back to the panic in heaven over Babel: “if…they have begun to do this, then nothing they plan to do will be impossible for them” (Gen 11:6). Will we hear God say this again? Will we say it ourselves about AGIs?

Thus we arrive again at religion, but now at a religious conception very different from the one we are used to. It will ultimately require a new way of making sense of the world, but one in which the insights of Religious Studies become more useful, not less. The podcast showed the way that religion and these technological advances are intertwined with each other. Religious Studies shirks this responsibility at its peril.

Religious Studies Project Opportunities Digest – 10 February 2015

Calls for papers

Conference: In Search of the Origins of Religions

September 11–13, 2015

Ghent, Belgium

Deadline: March 1, 2015

More information (English)

Conference: Second Undergraduate Conference on Religion and Culture

March 28, 2015

Syracuse, NY, USA

Deadline: February 15, 2015

More information

Symposium: Society for the Study of Religion and Transhumanism (SSRT)

June 27, 2015

Lancaster University, UK

Deadline: March 31, 2015

More information

AAR group: Secularism and Secularity

Deadline: March 2, 2015

More information

Journal: Studi e materiali di storia delle religioni

Theme issue: Religion as a Colonial Concept in Early Modern History (Africa, America, Asia)

Deadline: May 15, 2015

More information

Article collection: Religious subcultures in Unexpected Places

Deadline: May 1, 2015

More information


Conference: International Tyndale Conference

October 1–4, 2015

Oxford, UK

More information

Congress: “Ad Astra per Corpora: Astrología y Sexualidad en el Mundo Antiguo”

February 19–21, 2015

Málaga, Spain

More information (Spanish)


Research assistant: Indology

Westfälische Wilhelms-Universität Münster, Germany

Deadline: February 28, 2015

More information (German)

Religion in the Age of Cyborgs

Merlin Donald’s Big Thoughts on the evolution of culture offer opportunities to speculate about the place of religion in the natural history of our species – an opportunity most recently taken by Robert Bellah in his much-discussed last book, Religion in Human Evolution: From the Paleolithic to the Axial Age (2011). But Donald’s work also affords opportunities for an even more speculative exercise: that of forecasting religion’s future. Instead of letting the many obvious obstacles to such forecasting hold us back, let’s indulge.

In Origins of the Modern Mind (1991), Donald suggested that human cultural evolution has gone through three main stages: mimetic culture (arising early in human evolutionary history), mythic culture (arising soon after the invention of language), and theoretic culture (taking shape only as late as the Enlightenment). These stages are explained fairly well in the interview, so I will not recapitulate them here.

Donald’s thinking about cultural evolution rests to a considerable degree on his view of distributed cognition. Thinking does not all happen inside the cranium. It was not a sudden expansion of brain mass that inaugurated the era of cognitively and behaviourally modern humans, but rather drastic changes in the distributed cognitive networks that individual brains are part of: networks that engage many brains in coordinated ways to create “cognitive ecosystems”. Cultural evolution is based on changes in these distributed cognitive networks rather than on sudden mutations in individual brains.

A growing school in cognitive science and the philosophy of mind is developing the idea of the extended mind, from Tyler Burge’s anti-individualism to Andy Clark’s supersized mind to Lambros Malafouris’ recent “Material Engagement Theory”. This school, to which we may count Donald as a moderate adherent, has serious implications for all disciplines studying human culture.

It also provides us with a useful clue for speculating about the future of religion. Donald holds that ritual behaviour emerges extremely early and plays a significant role in “mimetic culture”. Religions of the doctrinal type depend on more extensive language use and emerge around powerful narratives and myths in the transition to “mythic culture”. Since ritual and religion depend primarily on mimetic imagination and narrative skills, then, we should not expect them to disappear from the human cultural repertoire anytime soon.

Theoretic culture, on the other hand – ostensibly secular, reflective, scientific, and disenchanted – is a much more fragile thing. Its deepest roots lie in the “exographic revolution” (i.e. the invention of systems for externalizing memory), which started with simple carving and painting techniques in the Upper Paleolithic and kicked off in earnest around 5,000 years ago with the invention of writing. It became possible to externalize thought and distribute abstract concepts to such an extent that difficult, reflective thinking could emerge.

But reflective thinking did not render mythic culture obsolete – instead it was absorbed into it, subsumed by its governance structures and used to further them. It took other sorts of revolutions in the distributed cognitive network to pave the way for a theoretic culture to emerge: the printing press, the spread of literacy to wider populations, the creation of new institutions and rationalized bureaucracies. Even then, mythic culture was not supplanted by theoretic culture: the new nation states notably made use of all the strategies of mythic culture in creating grand narratives of the folk and their soil, united under one flag, one anthem, one canon of art and literature – and kept safe under the watchful eyes of one government. But these new “secular”-but-mythologized nation states also made room for institutions where reflective knowledge was to be cultivated, and its fruits exploited in industry, business, and the ordering of society itself. We got education systems disciplining individual brains to do very difficult tasks such as reading, writing, and calculating. We got the sort of distributed cognitive system that we are part of today.

The central message of this story, however, is not one of the unstoppable march of progress. Rather, it is that theoretic culture is extremely fragile, because it is entirely dependent on complex cognitive distribution networks spanning numerous interdependent institutions. As Robert McCauley concludes in Why Religion Is Natural and Science Is Not (2011), science is a socio-cognitive enterprise that can easily be crushed and disappear from a culture entirely with the collapse of a few central institutions. As Donald notes in the interview, there are reasons to doubt whether theoretic culture is sustainable in the long run – let alone that it can ever be “purified” in the sense of ridding us of mythic and mimetic elements. Secularists and atheists may not have much reason to cheer the converging evidence from the cognitive science of religion (CSR). What Pascal Boyer (2001) called “the tragedy of the theologian” – that “theological correctness” is rarely followed in practice due to various constraints on online, unreflective cognition – is simultaneously the tragedy of the atheist demagogue. As (the later) Peter Berger put it: ‘The religious impulse … has been a perennial feature of humanity. … It would require something close to a mutation of the species to extinguish this impulse for good.’

We have to overcome humanity itself to overcome religion. So, to spice up our forecast, let’s look at some who would not shy away from doing exactly that: the transhumanists. What happens to religion if the future belongs to the cyborgs?

To begin with: transhumanists are divided on the question of religion/spirituality. A clear majority identifies as secular, and many of those are self-proclaimed atheists. Some, such as the Brighter Brains Institute think-tank, dabble in militant atheism (their term) together with neuroengineering, biohacking, and radical life extension. But there are also various strands of explicitly religious transhumanists, such as the Mormon Transhumanist Association. These Cyborgs for God see new technologies and radical modifications of human nature as ways of approaching salvation and becoming divine. Others, who would often self-describe as secular, still draw on religion-like narratives to talk about our imminent transhuman revolution through the “technological Singularity”. Some advocates, such as Ray Kurzweil, even see the singularity as a way to create God by rearranging all the matter in the universe and making it conscious.

It should not surprise us that implementing new, even deeply transformative, technologies would not necessarily stall the development of religious meaning-making but instead set it on a new course. Humans are, after all, natural born cyborgs, forever waking up to find new ways to improve the reach of our bodies and the limits of our minds. The transhuman future (whichever one it is) may be more of a quantitative than a qualitative change. A technocentered spirituality of cyborgs who continue to exercise the deep proclivities of their evolutionary history even in an age of exoskeletons, biohacks, and brain/computer interfaces is one possible transhuman future for religion. The form and function of this spirituality would depend entirely on the social form that such a transhuman society would take – the governance structure of the by then extremely distributed cognitive network (think ubiquitous computing). If current trends of speculation among spiritual transhumanists are any indication, worship of the emerging Internet of Things as itself “conscious” and “divine” seems one path. But the actions of the class of experts who build, develop, and – most crucially – own the infrastructure of this network remain a decisive factor. Think of Google’s “Don’t Be Evil” turned into a first commandment, flashing on our retinas when we power up in the morning.

What about the intertwined future of irreligion? Another possibility is that a convergence of neuroengineering and artificial intelligence manages to rewire the brain in such a way that it meets Berger’s condition for the eradication of religion. In other words, not just a change in the distributed cognitive network, but a radical transformation of the biological component of that network – something that we haven’t seen in the previous cultural revolutions according to Donald.

To the atheist transhumanists reading this: such rewiring may be one possible route to universal atheism, but you need to consider seriously whether it is a desirable one. In another recent book on religion and evolution, Big Gods (2013), Ara Norenzayan distinguishes between four roads to atheism. The first of these, “mind-blind atheism”, is the most fundamental. It addresses the neuroanatomical and computational level that could be altered by a radical transhumanist approach bent on removing the basic cognitive mechanisms that create our susceptibility to what these engineers would consider “religion” (notions of gods, spirits, rituals, and so forth). Since those basic mechanisms include such fundamental things as Theory of Mind and conceptual blending, however, rewiring us for atheism essentially means rewiring us for autism – and taking away our grasp of such things as metaphor while we’re at it.

That’s probably too high a price for getting rid of a few god concepts. But the transhuman atheist need not necessarily despair. There are more feasible paths to near-global atheism. These would, however, rely once more on the structure of distributed cognitive networks rather than on essential changes to the brain. It will be important to establish certain types of institutions and forms of governance. Seeing that a large proportion of transhumanists appear to lean towards free-market libertarianism and anarcho-capitalism, the necessary steps of this model might in fact not be too appealing: it appears that to build well-functioning godless societies, we must first become Scandinavian-style social democrats.

It is true that the sort of post-scarcity “abundance society” that some transhumanist authors imagine might correlate to some extent with the apathetic kind of atheism (“We’ve got all this cool stuff, so why bother?”). But the evidence suggests that it is the distribution of this wealth and power that will be the key factor. Social and economic equality, managed by a big welfare state that citizens trust, is the strongest correlate of irreligion. The futuristic medievalists of the “neoreactionary movement” currently attracting some attention in transhumanist circles are certainly wide of the mark. They want to keep high technology while essentially abandoning Merlin Donald’s theoretic culture altogether for a return to old-school mythic culture – kings, knights, underlings and all. That sounds like a bad idea. But good conditions for strange new religions to emerge.

The question of religion’s evolutionary future, then, has little to do with whether or not we become cyborgs. We already are cyborgs, and have been for tens of thousands of years. It has more to do with what kinds of cyborgs we become, and how we organize ourselves when we’re there.


Bellah, Robert. 2011. Religion in Human Evolution: From the Paleolithic to the Axial Age. Cambridge, MA: The Belknap Press of Harvard University Press.

Boyer, Pascal. 2001. Religion Explained: The Evolutionary Origins of Religious Thought. New York, NY: Basic Books.

Burge, Tyler. 2010. Origins of Objectivity. Oxford and New York: Oxford University Press.

Clark, Andy. 2003. Natural Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Oxford and New York: Oxford University Press.

Clark, Andy. 2010. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford and New York: Oxford University Press.

Donald, Merlin. 1991. Origins of the Modern Mind. Cambridge, MA: Harvard University Press.

Donald, Merlin. 2001. A Mind So Rare: The Evolution of Human Consciousness. New York: W.W. Norton.

Fauconnier, Gilles and Mark Turner. 2002. The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. New York, NY: Basic Books.

Malafouris, Lambros. 2013. How Things Shape the Mind: A Theory of Material Engagement. Cambridge, MA: MIT Press.

McCauley, Robert. 2011. Why Religion Is Natural and Science Is Not. Oxford and New York: Oxford University Press.

Norenzayan, Ara. 2013. Big Gods: How Religion Transformed Cooperation and Conflict. Princeton: Princeton University Press.