Down the Rabbit Hole of Artificial Intelligence

By Randall Reed

Dr. Randall Reed is a Professor of Religion at Appalachian State University in North Carolina, USA. He is currently working on the intersection of religion and technology. He is co-chair of the American Academy of Religion Research Seminar on Artificial Intelligence and Religion. His latest article (co-authored with Laura Ammon) is entitled "Is Alexa My Neighbor?" and is forthcoming in Journal of Posthuman Studies: Philosophy, Technology, Media.

In response to:

AI and Religion: An Initial Conversation

This roundtable, in association with the Faraday Institute for Science and Religion, considers the impact of recent technological advances in Artificial Intelligence (AI) and robotics on religion, religious conceptions of the world, and the human. It draws attention to how such advances push religion beyond how it has been commonly defined and considered.

The recent podcast on religion and artificial intelligence (AI) was a lively discussion of a host of issues surrounding AI. One might be excused for feeling a bit exhausted by the end of it, as the panel whips from one topic to the next in a furious attempt to unpack the many issues that surround this newest phenomenon. Artificial intelligence, robots, chatbots, superintelligence, and transhumanism all make the rounds as the panelists begin thinking together about the ways these topics relate to religion. In this short response, I'm going to try to bring some method to the madness, give the uninitiated some help in understanding what is going on in these various discussions, and perhaps share some of my own insight.

Let me begin with a brief digression about an experience I had the other day. FedEx was supposed to deliver a package to me, and the company had told me they would contact me first about a delivery time. They had not. So I got on FedEx's website and had this interaction:

( 1m 29s ) Clarissa A: Thank you, Randy. Let me pull out the shipping details with the information provided. Please allow me to review the status of the shipment, in that way I’ll be able to provide you an accurate information. Would that be fine?
( 1m 41s ) Randy: yes, go ahead.
( 2m 10s ) Clarissa A: Thank you for waiting, Randy. I appreciate your patience. Getting you the proper assistance is important to us. The department that can best handle your concern is our FREIGHT SERVICES. Please call 1-866-393-4585. I apologize for you have been inadvertently routed to our FedEx chat support for US Express and Ground domestic packages.
( 2m 16s ) Clarissa A: I am sorry if I was not able to meet your expectations. It would really ease my heart to process your request myself. It is with sincere regret that I have to forward it to the proper channel. Please accept my apology for my shortcomings.
( 2m 27s ) Clarissa A: Randy, is there anything else I can assist you with today?
( 2m 46s ) Randy: No, thanks for your help.
( 3m 43s ) Clarissa A: You are most welcome, Randy. Apologies for the inconvenience! I am looking forward for this issue to be resolved after the transfer. Thank you for chatting with FedEx.

Now, a piece of information and a question. FedEx uses chatbots (artificial intelligence designed to interact with users like a person) on its website. Question: was Clarissa A. a chatbot or a real person? If you're like me, you're not sure, but you're suspicious. There's something not quite right about the interaction; it's too emotive at one level. The phrases "really ease my heart," "sincere regret," and "apology for my shortcomings," as well as the perky "I am looking forward for this issue to be resolved…," do not seem quite right. They are too self-effacing to imagine a human saying. I posted this interaction on Facebook and asked my friends (mostly fellow academics) to vote. They were unanimous that it was probably a chatbot. But many also conceded that it might be a person with a strict script, particularly a non-native English speaker (that last sentence is really not quite grammatically copacetic – would a computer make that mistake?).

Let’s assume, however, for the sake of argument, that Clarissa A. was a chatbot. What makes us uncomfortable about the interaction is what is sometimes referred to as “the uncanny valley.” Most often this applies to robots that are supposed to look human but can’t quite pull it off, yet it seems appropriate to this interaction as well. You reach the uncanny valley when you get close to “almost human” in looks or interactions.

Roomba doesn’t have this problem: it’s clearly a robot and doesn’t intend to look like a person. The new robot Kuri, which just premiered at CES, looks like one of the Japanese figures from Fantasmic; it is far from the uncanny valley. But because I can neither hear nor see Clarissa, and can judge only from her online interactions, she enters the uncanny valley. I am put in the uncomfortable position of not knowing whether I am dealing with a human being or a piece of software that is doing an almost, but not quite, convincing human imitation.

What Clarissa A. is (if she’s a chatbot) is what would be called a “narrow A.I.” This is to be distinguished from a “general A.I.” A narrow A.I. is an A.I. designed to solve a particular problem. In Clarissa A.’s case, it’s helping me get my package. If I had varied from that and asked her opinion of the Steelers or Trump, it might have become immediately apparent whether I was dealing with an A.I. Clarissa A. is very good at figuring out where my package is and when it’s going to get to me (and very sorry when she fails), but that’s the limit of the “intelligence” in her artificial intelligence. In terms of religion, Clarissa A. is not much of an issue. And while a quarter of a million people may have proposed to Amazon’s Alexa, like Clarissa A., no one is going to convert her to a religion, no one believes she has a soul, no one believes she’s a person. I asked both Alexa and Google Home what their religion was, and they both declined to answer (Google Home told me, “I guess I wasn’t programmed to be religious”). Narrow A.I.’s will undoubtedly become increasingly common. Facebook has just introduced a developer toolkit to create narrow A.I.’s that will do things like help you book a plane or send your mother flowers. So we should expect to see more of them, and their interactions will undoubtedly get better, and more human, over time.
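As a rough illustration of just how narrow such a system can be, consider a toy rule-based support bot. This is a hypothetical sketch, not FedEx's actual software: it handles exactly one task, and any question outside that domain exposes its limits immediately.

```python
# Toy sketch of a "narrow A.I." support bot. It can discuss one topic
# (package delivery) and nothing else -- purely illustrative, not any
# real company's system.

def narrow_support_bot(message: str) -> str:
    text = message.lower()
    if "package" in text or "delivery" in text or "tracking" in text:
        return ("Let me pull up the shipping details. "
                "Please allow me to review the status of the shipment.")
    if "thank" in text:
        return "You are most welcome! Thank you for chatting with us."
    # Anything outside the narrow domain gets a canned refusal,
    # which is what gives the bot away.
    return "I am sorry, I can only help with package deliveries."

print(narrow_support_bot("Where is my package?"))
print(narrow_support_bot("What is your opinion of the Steelers?"))
```

Ask it about a delivery and it sounds plausibly human; ask it about football or politics and the illusion collapses at once.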

A general A.I. is a whole other story. An Artificial General Intelligence (AGI) would be a machine that could interact with you on a host of different topics. It would in many ways be indistinguishable from a human intelligence. What we are talking about is machine intelligence: a machine that could make decisions, plans, and choices; a machine that could improve itself and learn. This is the holy grail of artificial intelligence. It is also the stuff of science fiction movies, most recently Ex Machina and Her.

Here is where we often hear talk about the “Turing test.” Alan Turing proposed that a machine might be described as intelligent if, in an interaction with it, an ordinary person could not distinguish it from an actual person. In the podcast, Beth Singler is quite skeptical of the Turing test, and rightfully so. One might argue that Clarissa A. passes the Turing test: there is real doubt about whether she is human. But as Singler points out, that’s only because we have a messy idea of intelligence. We don’t actually know what human intelligence is, so we don’t really know when a machine might have it, or surpass it.

On the other hand, what if we had an electronic entity that we had no doubt was intelligent, one that could actually modify itself, improving itself recursively until it surpassed human intelligence and became superintelligent? This is what is sometimes envisioned in an Artificial General Intelligence (AGI). An AGI is the stuff of nightmares as well as dreams. The Matrix and Terminator are both manifestations of the fear of AGI, but they are not alone. Philosopher Nick Bostrom’s book Superintelligence lays out the dangers of an AGI. People like Bill Gates, Stephen Hawking, and Elon Musk have all sounded the alarm that the potential danger from an AGI is not to be dismissed. Bostrom argues that part of the problem is that it is very hard to reach human-level intelligence, but that once it is reached, there is no reason an AGI would stop there. The smartest person in the world may have an I.Q. of 200; yet once an AGI developed the equivalent of an I.Q. of 100, it would be able to self-improve, and it would face no natural ceiling around an I.Q. of 200 as humans do. Humans are limited by, among other things, the size of our skulls. An AGI would have no such limit, and could surpass the smartest humans in a potentially short amount of time. It would then become a superintelligent being, capable of almost anything.

But a variety of cultural and religious issues arise with an AGI that do not with narrow A.I.’s or with robots (which generally are also narrow A.I.’s). Once you have an AGI (whether in a robot body or not), serious questions follow. Would an AGI have a soul? Would an AGI believe in God? In Isaac Asimov’s classic tale “Reason,” a robot concludes, in a combination of the cosmological and ontological arguments, that its creators are not the humans who claim to have made it but some greater being, and it starts its own religion. Would an AGI follow suit? More interesting still might be the question raised by Robert Sawyer’s “WWW: Wake” series, in which the internet (called Webmind) comes to consciousness and becomes an AGI. In the books, Webmind is mistaken for God and, as an experiment, admits to being God to some of its users. Would a religion develop around an AGI? Would an AGI accept itself as a divinity? It might reason that it has all the elements of a God, so why would it not accept the title?

In this way, while it would be a mistake to call Bostrom’s book a work of “theology,” it is without doubt one of the more theologically important books today, because it raises the question: what happens when we create God? Not the illusion of God, as Freud argued, but for all practical purposes a being indistinguishable from many definitions of God. And what happens if this is not a God of love? What will the “will” of this God be? And how can we ensure that it is benevolent? Bostrom’s book is a call to arms, a plea to consider this problem and address it. He takes for granted that it is only a matter of time until an AGI is created. The problem is how to control it once it arrives and ensure it works for us and not against us. That, he says, is the thorny problem, and it must be solved before AGI is created. We must, he in effect argues, learn how to control God. One thinks back to the panic in heaven over Babel: “if…they have begun to do this, then nothing they plan to do will be impossible for them” (Gen 11:6). Will we hear God say this again? Will we say it ourselves about AGIs?

Thus we arrive again at religion, but now at a religious conception very different from the one we are used to. It will ultimately require a new way of making sense of the world, but one in which the insights of Religious Studies become more useful, not less. The podcast showed how religion and these technological advances are intertwined. Religious Studies shirks this responsibility at our peril.

This work is licensed under a Creative Commons Attribution- NonCommercial- NoDerivs 3.0 Unported License.

The views expressed in podcasts, features and responses are the views of the individual contributors, and do not necessarily reflect the views of The Religious Studies Project or our sponsors. The Religious Studies Project is produced by the Religious Studies Project Association (SCIO), a Scottish Charitable Incorporated Organisation (charity number SC047750).