Gaps in Our Understanding: AI, Gods, and Humanity

Essay by: Randall Reed


Dr. Randall Reed is a Professor of Religion at Appalachian State University in North Carolina, USA. He is currently working on the intersection of religion and technology, and is co-chair of the American Academy of Religion Research Seminar on Artificial Intelligence and Religion. His latest article (co-authored with Laura Ammon) is entitled “Is Alexa My Neighbor?” and is forthcoming in the Journal of Posthuman Studies: Philosophy, Technology, Media.


In response to: Artificial Intelligence and Religion

Chris Cotter and Beth Singler discuss the intersections between religion and Artificial Intelligence, from slavery and pain to machines taking over religious functions and practices.


I am delighted to have the opportunity to respond to Dr. Beth Singler’s interview for the Religious Studies Podcast. As anyone who has heard Dr. Singler in the past knows, she is always brilliant and entertaining, and she specializes in making the field of artificial intelligence, particularly as it relates to religious studies, intelligible to the non-computer scientist. Of the many issues Dr. Singler raises in the podcast, I would identify two main strands that we might explore. On the one hand, how is A.I. like or unlike humans? On the other, how is A.I. like or unlike God? The second question may seem impertinent, and yet, perhaps to the amazement even of those who, like Dr. Singler, really understand artificial intelligence, it becomes more relevant each day.


Let me start, however, with the first question: how is A.I. like humans? Here we enter a highly contested area: intelligence. Does or can A.I. have intelligence? There is, first of all, the question of definition: what constitutes intelligence? There is no easy answer, as the long history of psychological and philosophical debate over the issue makes evident. The discussion is also easily mired in an anthropocentric view, in which human intelligence is treated as the “gold standard.” But certainly, in the animal world, we see various levels of intelligence. The squirrels in our yards, as they circumvent ever more elaborate obstacles, seem to exhibit intelligence. Of course, we recognize that the squirrels are not doing trigonometry (at least not consciously). Still, we generally don’t require equivalence with humans before we proclaim that an animal is “smart.”


Often, though, when we talk about artificial intelligence, we demand the higher standard of human intelligence. There is a running joke in the A.I. community that intelligence is whatever a computer has not yet done; once a computer does it, intelligence becomes something else. So intelligence was beating a grandmaster at chess, until a computer did that. Then it was beating a human at Go (the world’s most popular board game), until a computer did that too. And so on. As artificial intelligence continues its inexorable march toward besting humans at various tasks (see Sebastian Ruder’s “leaderboard,” which tracks A.I. competence on a wide range of natural language tasks), the question of what constitutes intelligence becomes ever more confused.

Above: a Buddhist temple’s robotic priest, designed after the deity of mercy. It already delivers sermons, and supporters hope that it will continue to grow in intelligence, sharing more complex information over time.

Thus, both the question of intelligence and the concomitant problem of how an A.I. is and is not like a human being are far from simple. In the interest of space, I have not engaged the ethical issues involved (much as the podcast did not), but a variety of ethical issues, both hypothetical and real, arise as A.I.s become able to do more of the things humans can do, only at an exponentially faster rate.


The second issue I would like to raise is how A.I.s are and are not like the divine. The paper that Dr. Singler presented at the University of Edinburgh, and in another version at the American Academy of Religion in San Diego, broached this topic. In her paper, the issue is not an ontological one, that is, whether A.I. is actually god-like (though some scholars have speculated that a future A.I. may, in fact, be indistinguishable from our western conception of a god). Rather, the question Dr. Singler asks is whether humans are treating A.I. like a god.


And here the answer seems to be, at least sometimes, “yes.” As Dr. Singler’s study shows, there is at least a micro-trend on Twitter of talking about being “blessed by the algorithm.” That kind of religious language, which seems to indicate a kind of deification of A.I., appears regardless of the actual nature of the algorithm. Dr. Singler notes that this often seems to be a kind of habit among humans: we fall into religious tropes and narratives when we encounter the unknown.


Perhaps this is an extension of the “God of the Gaps.” This notion postulates that when human knowledge cannot determine the cause of something, we turn to supernatural explanations. Before we developed a scientific understanding of lightning, we saw it as a divine act; once science exposed its natural causes, that bit of divinity in our world was erased. The magisterium of religion, a circle that once encompassed the universe, has thereby been reduced today to a small space in one’s heart.


I would like to go a bit further here, because in the case that Dr. Singler uncovers, it is not simply that people do not understand how the algorithms make their decisions (which is true not just of the uninitiated but of the creators of those algorithms as well); rather, these A.I.s have power. The Uber driver or YouTuber who seeks the promise of remuneration recognizes that these mechanical intelligences, whose “ways are not our ways,” can bestow either blessing or curse. It is this power, which holds human life in its grasp and acts in seemingly inexplicable ways, that has the petitioner suddenly using the language of religion.


And yet the danger of the God of the Gaps is that it surrenders human responsibility for the state of the world: it substitutes supplication for experimentation and explanation. Whatever problems there may be with the Enlightenment (and they are legion, to be sure, several of which were elucidated in the podcast), at one level the Enlightenment represents a moment in which human beings took responsibility for understanding their world. We stand again at a moment that requires decision. The retreat to religious language, while not an unusual strategy for humanity, must not be our final riposte. We must once again find a way to understand, and perhaps master, this new force that seems increasingly arrayed against us in the deployment of A.I. Dr. Singler points out our slippage into the language of religion and, at the same time, urges us not to remain there.
