
Artificial Intelligence and Religion (Classroom Edit)

What is Artificial Intelligence and why might we want to consider it in relation to ‘religion’? What religion-related questions might be raised by AI? Are these ‘religious’ questions or ‘Christian’/‘post-Christian’ ones? What ‘religious’ functions might AI serve? In what ways do popular discourses about AI intersect with religion-related discourses? Do narratives of AI form part of a teleological atheist narrative, or do they perpetuate prevalent tropes associated with ‘established’ or ‘new’ religious movements? And what are the intersections of AI and religion with issues such as slavery, human identity, affect and agency? This week, Chris is joined by Dr Beth Singler of the University of Cambridge to discuss these issues and many more.

This podcast builds on a roundtable discussion released on the RSP in February 2017, featuring Beth, Chris, Michael Morelli, Vivian Asimos and Jonathan Tuckett, titled “AI and Religion: An Initial Conversation” and a special issue of the RSP journal Implicit Religion, co-edited by Dr Singler, on Artificial Intelligence and Religion, published in 2017.

Download this episode and subscribe to receive our weekly podcast on iTunes, Spotify, or YouTube. If you enjoyed it, please take a moment to rate us and consider becoming a member of our Patreon community with access to extra conversations and shorter classroom versions of our episodes. Or use our Amazon.co.uk, Amazon.com, or Amazon.ca links to support us at no additional cost when buying books, groceries, or gifts.


Artificial Intelligence and Religion

Podcast with Beth Singler (27 January 2020).

Interviewed by Christopher Cotter

Transcribed by Helen Bradstock.

Audio and transcript available at: https://www.religiousstudiesproject.com/wp-content/uploads/2020/02/Singler_-_Artificial_Intelligence_and_Religion_1.1.pdf

https://www.religiousstudiesproject.com/podcast/artificial-intelligence-and-religion/


Christopher Cotter (CC): At the weekend, I mentioned to my father that I was going to be recording an interview about the intersections between AI and religion. And he said, “I can’t think of anything that would be relevant there. How do they intersect at all?” And then, within the space of about two minutes, we were suddenly talking about all sorts of things, like: are human beings creating intelligences? Does that mean they’re acting like gods? Can you imagine that AI might be acting as religious functionaries, providing blessings? And what about pain, what about notions of slavery, what about the whole notion of the soul, and eternity, and transhumanism and everything? So suddenly we got into this massive discussion. And today I am pleased to be joined by Dr Beth Singler to continue that discussion in a more erudite fashion – not casting any aspersions on my father, of course! Dr Singler is the Homerton Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge. And her background is as a social anthropologist of new religious movements. And her first monograph, The Indigo Children: New Age Experimentation with Self and Science, published with Routledge in 2017, was the first in-depth ethnography of a group called the Indigo Children: a new age re-conception of both children and adults using the language of both evolution and spirituality. We’ll hear more about her research into AI and religion just now. But a relevant recent publication is her edited special issue on AI and religion, for the RSP’s sponsored journal Implicit Religion, which included her own articles: “An Introduction to Artificial Intelligence and Religion for the Religious Studies Scholar”, and “Roko’s Basilisk or Pascal’s? Thinking of Singularity Thought Experiments as Implicit Religion”. And today’s podcast builds on a roundtable discussion (that we had back . . .
well, we had it in September 2016, but it was released in February 2017) featuring Dr Singler, myself, Dr Morelli, Vivian Asimos, and Jonathan Tuckett, titled “AI and Religion, an Initial Conversation”. So first off, Beth – welcome back to the Religious Studies Project!

Beth Singler (BS): Hello! Thank you for having me.

CC: It’s great to have you back. And hopefully this is the follow-up conversation that was promised!

BS: (Laughs) As foretold . . . !

CC: So many moons ago!

BS: (Laughs).

CC: So we’ll have covered a little bit of this already I think. But you’ll be in a different position now: years on, years older, years wiser!

BS: Oh, so much older!

CC: So, first off: artificial intelligence is going to be a sort-of contested term in public discourse. It takes on a variety of different nuances. So what are you meaning in this conversation?

BS: Well, I’m definitely meaning that it is a contested term, taking on many different forms. I think you can sort-of indicate towards something that is the field of artificial intelligence, within which there are processes and programmes and foci of research, looking at things like machine learning and vision systems and natural language processing. So you have this concept of a computer science field – which doesn’t really get its name until the 1950s – but you can see how, beyond the actual narrow form of the technology, artificial intelligence is understood in so many different ways by so many different people. I have a friend who once told me that their car had AI because when she walked towards her car with her keys, the doors unlocked. That’s not artificial intelligence. That’s a sensor in your keys. But lots of people have this idea of sort-of processes that seem intelligent, done by machines, and therefore must be artificial intelligence. And that’s what I’m really very interested in: that it’s so much broader than the original conception, which was ambitious in its own right. But everyone has attached AI to different things that they feel might represent intelligence. So it’s not only the computer programme that sits on a server, it’s also now the robot that takes over the world. Or it’s the far-future hope of an intelligence that will save us all from ourselves. So it’s all these very different things, and that’s what interests me.

CC: Yes. And you’re interested in that whole gamut, I suppose. So, not necessarily a technical definition of artificial intelligence.

BS: No. I mean, I know enough technologists who go, “Absolutely, 100%, it’s this one thing. That’s it. And anyone who’s talking about anything else, it’s complete nonsense!” Well, to a certain extent, yes. But you’ve got to pay attention to all the different interpretations, because that’s what’s getting out there into the world.

CC: So I began with my personal vignette, there, about chatting with my dad. But you’ve provided, much more eruditely, a justification for what we might mean by the intersections between AI and the study of religion, and why we’re even having this conversation. So – go!

BS: Go! Right. Well, from a very basic position, any form of technology intersects with religion (5:00). That’s just how our society works, how our conception of religion itself works, that it could be seen, in itself, as a form of technology. And therefore any kind-of shift and changes in how we do things – things that make our lives either more difficult or easier – there are repercussions and implications for how we imagine the world and how it works, therefore religion. I think where AI might be slightly different . . . . Although I am cautious about saying it’s revolutionary new technology and very disruptive – it does replicate lots of existing ideas and thoughts. What I think is interesting about AI is the way in which people see it as much more than that simplistic tool. That however narrow an intelligence it is at the moment, people extrapolate on to personify AI: AI will want to do x-y-z; AI will replicate humans in such a way that we won’t be able to tell the difference between humans and AI. And this is the sci-fi imagining. But it also comes out in our religious conceptions as well. And then, also, within the sphere of the non-religious or secular approaches to AI, you see again these repeating patterns of religious narratives, and tropes that people – even if overtly and sometimes aggressively atheist – still draw on their cultural context: primarily sort-of Abrahamic, Western conceptions of what a god would be like. And they use that, and they fill in their conception of AI with some of the existing templates that they’ve already got. So it tends to fall into very eschatological language, and very singular monotheistic conceptions of what a god would be, and pattern that onto artificial intelligence.

CC: So there’s that sort-of: whatever religion is, we’re never going to be able to extract it from society. Because whatever . . . we can argue about it being a social thing and AI is integrated with that. Then also, the sort-of religion-related tropes, narratives, and so on. But then also there are – I’ll maybe talk about this now – there are some groups that you might describe as new religious movements, or new un-religious movements, and things that are explicitly sort-of engaging with this.

BS: Yes, so with my new religious studies hat on – that I wore so well for doing my thesis – having moved into artificial intelligence as a subject area, I’m seeing similar sorts of formations of online identity. Primarily these sort-of groups form online. They’re sort-of geographically disparate, so online spaces are important, and so forums and hashtags on Twitter, and so forth, to bring them together to formulate ideas. And some of them do expressly call themselves churches. So you get the Turing Church; the Church of Assimilation recently got in touch with me. I went to do a little bit more digging around into what they’re up to. But I do know about assimilation theory. But yes, the groups that specifically say: we are in some ways attempting to define our spirituality in relationship to artificial intelligence; we might also be transhumanist, in that we think through technology we can solve some of those very pernicious problems of humanity – death being the big one.

CC: It’s a big one!

BS: It’s a big one. Some are not quite so ambitious, just want to solve suffering – which also sounds like a serious thing to be taking on! But some do seek to be immortal in some form, whether that involves mind-uploading or transference of consciousness through artificial intelligence – all these sorts of various shapes. But yes, absolutely there are specific groups that see their endeavour as religious. And some will call themselves un-religions because they’re drawing a sort-of ideological gap between themselves and how they perceive mainstream religious groups. So in sociology of religion you might call them “spiritual but not religious”. But they’re still using some of that terminology of “We are the church of x-y-z,” and they’re doing it in quite pragmatic ways. Some of them will talk very explicitly about using religion to encourage people into transhumanist ideas and encourage them into seeing this vision of the future that they see. So, arguably, you can sort-of take a slightly sceptical stance and say they’re not really, really religions. But who gets to decide that?

CC: Yes. Absolutely. Right. So in the introduction, as well, I mentioned potential . . . I suppose we could say “religious uses” for AI. I was talking to a friend yesterday about if you could hypothetically imagine being in a confessional, for example, would it need to be a human priest on the other side of that? Or could it . . . ? And we landed down on, “Well, if you didn’t know it wasn’t human then it might be ok.” But there is something about . . . .

BS: Like in a church Turing test. There is a Church–Turing hypothesis, but this is separate. Yes, I find it interesting, talking more broadly in terms of technology and religion, that there are periods of rejection, adoption and adaption (10:00): that when new technologies arise, sometimes more established religions can be quite negative about them for a period of time – and these are overlapping categories that are non-discrete – but, over time, we do see religious groups specifically producing their own forms of those technologies. So there’s like the BlessU-2 robots that are used in part of Reformation celebrations in Germany. And in other religious groups, I recently saw in Dubai they’ve come up with an algorithm for issuing fatwas as well – making Islamic jurisprudence decisions. So you’d go online, put in “Is it ok for me to have done x-y-z?” Or “I failed to pray on a particular day, what’s the . . . ?” And basically, all that system is doing is looking at previous cases. But . . . .

CC: Yes. But that’s all a human does.

BS: That’s all a human does. I mean, the question arises: what happens with the data? But that’s a privacy . . . another issue. But yes, so specific established religious groups seeing the technology – just as, in the nineties, suddenly we got lots of internet churches, where people were encouraging people to go online and do church in a different way. And now we have internet sites for churches. But it’s not so much the case in the mainstream religions that you go online to do faith. It’s just that your local church will have the internet. So that’s the adaption stage of: “This thing is around, we’re kind-of used to it, we use it, and we don’t necessarily have a big . . . .” Like, the Church of England released an Alexa Skill. They had a big press conference. And all the Alexa Skill does is recite the Lord’s Prayer to you if you ask it to. There are other adaptions now where it can tell you what your local church is and what the services are. So it’s not really revolutionary! But, you know, “Here’s a thing we’re doing with this new technology.” And it gets a press release. And then, the next sort-of stage – non-discrete stage – is just being very casual with the technology as: “This is just something we use.” Like we used books when the printing press first came out. The first things printed were Bibles. And this was a specific use of that technology. And then, over time, it’s just books. And it’s not so astounding. But in that process you get these spikes of interest and discussion. And, yes, different reactions to the technology – whether positive or negative.

CC: Absolutely. So before we get to . . . I suppose to the reason that you’re in Edinburgh today, and we’re chatting . . . . So that’s been a little bit about potentially religious, or religion-related uses. But there’s lot of . . . . Again, in my intro, there were a lot of religion-related questions that are raised by AI. Things like . . . you’ve done work on pain; there’s things about slavery, and all that. If we create these intelligences and then use them to our will, is that ethical? And then you’ve already mentioned transhumanism, which may be an unfamiliar term to some Listeners. So maybe, if you could talk a little bit about these religion-related issues?

BS: Yes. As I say, AI in its narrowest definition is a piece of computer technology, it’s a tool, but it inspires all these hypotheticals. And obviously we’ve had a long tradition of science fiction that takes us into spaces where we can imagine AI embodied, often in robotic forms, as having something like personhood. And that raises all these questions about the barriers between the human and the non-human other. And, in some ways, these questions have come up for millennia every time we’ve encountered different intelligences. It just seems now that we’re hoping, or aspiring towards creating non-human intelligences – whereas before, we’ve discovered them. So we’ve discovered that actually monkeys are pretty smart. We’ve discovered that dogs are pretty smart. And then, I’m afraid, from a colonial perspective from our past, other humans are actually . . . and even women – gosh! darn! – they can also be pretty smart!

CC: As we’re hearing now! (Laughs)

BS: I mean, what’s going on!? So, again and again, “we” – in that kind-of very limited “we” – have had to expand our kind-of borders of perception of what intelligence could and should be. And with AI it seems like we’re trying to produce it. It’s not, in this case, meeting aliens on another planet. It’s actually, we’re trying to create the aliens here on earth. Whether we’ll be successful or not, I’m very agnostic about that. But I think it’s interesting that we want to do that. And what we want to be able to do with it. So that’s where things like questions of personhood, and slavery, and also pain . . . . When I made “Pain in the Machine”, one of the interesting questions that kept coming up was, like, should we even bother? Because if we’re going to create things that can feel pain, we’re just increasing the overall suffering in the universe, and that doesn’t sound necessarily like a good thing (15:00). And going back to the transhumanists, as I said. So transhumanism is the idea that you can improve humanity through technology, broadly, and that this might lead to a state in which we’re no longer the same form of human that we were before.

CC: A new evolutionary step.

BS: Exactly. You might be a form of cyborg. Or there’s people who talk about post-humanism, where we’re so completely different we’re not even similar at all. But this idea sort-of does narrow down to this question of suffering, and being in pain, and what the human being is for, and where we’re going. So these are all big questions that are obviously very familiar shapes to anyone who’s looked at religion all around the world: these are the kinds of questions people have always been trying to answer. And I find it fascinating that some of these groups, as I say, are very overtly secular – almost New Atheist, some of them really admire the five horsemen of the apocalypse – but the shapes that they tell their own stories of the future of humanity with are very, very familiar to anyone who’s studied religion for any period of time. So is it that we’re . . . trapped isn’t the word for me, but we’re bound to repeat these shapes? Is there something in us that always goes to these same sorts of big existential questions, and comes up with similar sorts of solutions for them? I don’t know. I think that’s the ongoing question in my work. But I can dig down into particular instances of it as an anthropologist and say, “Well here’s a moment” – and some of them are very, very small moments, I admit that. I’m not doing big, big science. Some big scientists I’ve spoken to go, “Well you’ve spoken to like five people about this. What does that say about anything? That’s not a big data set.” But I don’t do big data stuff, but instances, and moments of clarity, where you can see these entanglements really clearly. And so: well, they’re doing something with both the concept of religion and the concept of AI. And they’re coming together.

CC: So you were just alluding to your small data sets there. So, well, I don’t think it’s a small data set that you’re presenting on here, but I guess it depends on perspective. But you’ve been looking at this particular trope on Twitter, “blessed by the algorithm”. And that’s what your paper that you’re giving here today is called. So what’s going on there? How does it intersect with AI? Why is it relevant? Tell us!

BS: (Laughs) Tell us! Yes. As a digital ethnographer, anthropologist of social media, I spend a lot of time hanging out on Twitter – that’s my excuse anyway, I’ll stick with it! I spotted a couple of people using the phrase blessed by the algorithm which obviously rings bells for me instantly for the language. And I dug around and I found 181 instances so far of people online, tweeting – just on Twitter as a platform – in some combination, in some context using the words blessed by the algorithm. And then you could follow back and see the first instance – which was very much about a corporate use of social media, and someone saying, “Well because this corporation has money, they’re going to be blessed by the algorithm.” So it sits in that kind-of context. But one of the most popular tweets, and most retweets, and most likes was a comment from someone saying in the real world – the so-called real world, I don’t like that differential – but anyway, in the so-called real world they’d heard their Lyft driver – so the gig economy role – say that they’d had a great day, and they felt blessed by the algorithm. And this might be something like a reframing and re-understanding of how we exist in a society that involves algorithmic decision making systems in a gig economy: what you get is dependent on a machine somewhere, making a choice. I mean there’s lots of words in that I don’t like that I just used, but unfortunately we’re very bound by anthropomorphic language when it comes to AI, but anyway. And so I have a corpus of 181 tweets and, actually, three of those refer to things I’ve said. So I’m muddling the field site a bit myself.

CC: OK. You’re an insider!

BS: I’m an insider as well. Well it’s responses to papers I’ve given. But, yes, I’ve created a very rough typology of the types. And some are about getting decent recommendations through the algorithm, on sites like Spotify. Some people are very pleased that their own content has been recommended to other people. There are people who sort-of talk about it in a very nebulous way: “Today I have been blessed by the algorithm.” And no more information. And then some people who really push the pseudo-religious language and come up with little prayers. And one of the things I was very interested in, in some of my other work on new religious movements, was the move between parody and legitimation. So I looked a lot at Jediism, and the census, and how some people did certainly write “Jedi” in the census in 2001 and 2011 as parody. They were upset about being asked about religion. They didn’t like religion, perhaps, itself. So they wrote Jedi. But that snowballing effect of legitimation – the more people talk about a thing, the more legitimate it seems – can have an effect (20:00). So even if a lot of these tweets are tongue-in-cheek, it’s still kind-of distilling out of the conversation. So, I have a graph. I’m very excited about this. I have a graph! As someone who, very much, is on the qualitative side and I don’t do big data stuff at all, to have a graph made me go “Oh, exciting! I have to do some maths!” But I didn’t really do very much. And you can see the shift and change. After this one very popular tweet, there are more tweets. Perhaps this is the beginning of a trend, more people thinking in this way? Or even if it’s not, it’s just interesting to see that conception of AI as having superagency – that it is in some way in charge of our lives – being blessed by it, in some way equivalent to being blessed by an omnipotent deity somewhere up there that we can’t see. It’s in a mystical . . . . So there’re overlaps in conception, there, that I’m really interested in.

CC: The Listener shouldn’t know that I had a little hiccup earlier, because I’ll have edited it out. But just before that, I had an excellent question which I’ve now remembered – because it was written down!

BS: Hurray!

CC: So a lot of these issues that we’ve been talking around – functions, ethical questions, even the discourses in the Twittersphere – to my ear, certainly sound quite Christian or post-Christian at least through monotheistic . . . . I’m just wondering if these issues . . . . Were we in a different cultural context, would different issues be being thrown up by AI? I guess, would even AI be different in a different cultural context? Because I suppose you will have a lot of conversation between researchers all over the world working in AI. So is AI culturally specific or . . . ?

BS: Yes, absolutely, I think it’s culturally specific. What does tend to happen, however, is that it tends to be quite a narrow binary of East and West in the discussion. So everyone says, “Western conceptions of AI are like this”, but they go, “Over there in the East” and they’re mostly talking about Japan, “actually, people have a very different conception of AI and they love robots. And the reason they love robots is because they have a Shinto religious background or they have a Buddhist religious background”. And sometimes that can be a very broad stroke, almost pseudo-techno-orientalism of “Those people over there, they never really went through the Enlightenment, and they never really rationalised away religion, and they still believe in spirits and everything!” So, obviously this is me being very sarcastic, by the way – if it’s not coming across that I don’t agree with this! (Laughs) I think, yes, cultural context is really important for conceptions of artificial intelligence and also for religion, and the entanglements of both of them. But it’s much more multiplicious . . . . That’s not a word!

CC: It could be a word!

BS: I’m going to make it up now. Multiplicious! It’s much more multiple than that. Not just this binary of East and West. There’s also Africa, India, Pakistan and within those countries as well, again. So what you need is just more anthropologists, basically. I think this is my call to arms. We need more people around the world connecting on this question of the impact of religion and cultural context on questions of artificial intelligence. Yes. So we are seeing specific difference. But I want to try and push away a little bit from that binary distinction. And the assumption that the West isn’t animistic in its own lovely ways. Which anyone who does religious studies for any period of time, here in the so-called West, realises that the so-called Enlightenment didn’t have as huge an effect as we like to think sometimes. And our big metanarratives of what we did, and how smart we became . . . .

CC: Yes, but the discourse that the Enlightenment did have an effect, it’s been quite pernicious.

BS: Yes. Very, very strong.

CC: We’ve been racing through things here, it’s fantastic. But we’re still at 25 minutes. So you’ve been hinting, there, that we need more anthropologists doing more stuff. And on the way to this interview you were telling me about some things you’ve been doing to do with Frankenstein and then, also, because this year’s the year that we’re all meant to be living in Blade Runner times. So maybe if you’d give us a flavour of some of that stuff – maybe slightly peripheral to your project – that you’ve been doing. And what’s next for you, what you would like to see next, as a way to wrap up.

BS: Yes. So interestingly, I suppose, the position I’m in now, my employment post, is a junior research fellowship specifically in artificial intelligence. So I came on board saying, “These are my interests. This is my background in Religious Studies.” They were all very interested and excited in that. But being someone who also can speak more broadly to AI, as well, any time people have a question about AI I’m called upon (25:00). Which is lovely, but it does mean that when a specific theme and AI comes up, I get involved. So last year was the . . . two hundredth anniversary? (I should know that!) . . . two hundredth anniversary of the publication of Mary Shelley’s Frankenstein. And a lot of people start thinking, then, of the parallels and connections with artificial intelligence: this idea that we are creating life (Wa-ha-hah! Mad scientists, all of us!) in some way, and there should be parallels between them. So I did about four or five public talks last year, specifically on Frankenstein. And there are similarities. There are huge differences as well. That was interesting for me, to kind-of return to a text I hadn’t thought about in a really long time and sort-of draw out so many pop culture references. I have a nice slide with all the times you’ve got a robotic Frankenstein. My favourite one was, I think, an issue of a Marvel comic where Frankenstein turns out to be a robot sent back in time by aliens. So all these sort-of mash-ups. That was really interesting. And then, like you say, this is the year of Blade Runner and I’ve just done an essay for Radio 3. And, again – not my academic background. But I’m doing something in that, in terms of sexual politics and Blade Runner. If you’ve seen the film, it doesn’t really pass the Bechdel test!

CC: No.

BS: A friend of mine, Kate Devlin, who’s written a fantastic book on sexbots, talks about how it has a problem with women. That basically . . . it’s a product of its time. It’s 1980s, but it’s also trying to do 1950s film noir. So you’ve got the detective, and femme fatale, and the kind-of virginal woman. It’s not a great one for sexual politics. But also, it’s tied into all these questions of consent and slavery. If we’re going to create so-called artificial life . . . . And the Replicants in Blade Runner are as near to human – well that’s the slogan of the company, basically: “as near to human as you can’t tell the difference”. What does it mean that we are a society that wishes for that, or dreams of that? Or, take it a step back and say: what is it, that we tell these stories and that, again, we have predominantly female representations of synthetic lives, who don’t get to choose who they sleep with, and don’t get to choose their fates? And we want slaves? I mean, did we not evolve out of this? We thought we were trying. So, yes, there’s lots of big questions about the ethics and politics of that, as well. So it’s interesting. I’ve always been . . . . Anyone who knows me, I’ve always been a massive geek. So the fact that I ended up somehow trying to mesh that with a job, and an academic role, where legitimately I sat and watched Blade Runner again five times before I wrote my essay – that’s fantastic! I will go on, and other things I have coming up: I will do some work around techno-optimism and techno-utopianism in relation to Sophia the Hanson robot, if you’ve ever come across this creation? She/it is a wonderful example of . . . I’m really picking my words carefully! I think the nicest thing we could call her is a puppet. But she’s presented as the most advanced version of AI around at the moment. She holds conversations with people, but we know they’re actually scripted a lot of the time. There’s puppeteers involved.
But you know she was given citizenship of Saudi Arabia. And she goes and she speaks on the Jimmy Kimmel Show and she’s on the front cover of magazines with her hair done. And, well, what does this say, that we’re so keen to jump on this idea of her actually being alive in some way? People tweet at her, send her, like, “I love you Sophia!”

CC: Didn’t you have an interaction with her?

BS: I did! Well, I had an interaction with whoever runs her social media accounts, where she was tweeting about how wonderful it was to travel around the world and talk in so many places. And I said, “Sophia, as a citizen of Saudi Arabia, where do you travel when you travel? Do you travel on a plane? Do you have a passport? What’s the deal here, if you’re being treated in this way?” She said something like, “For my safety, and the safety of others, at the moment I travel in the hold, in luggage, but I dream one day of being able to sit with the rest of you, and look out of the window.” This is so disingenuous. This is not an artificial intelligence listening to my tweets and responding, having thought through their situation, and projecting into the future where they want to be. This is someone behind the computer screen typing away! And, to be fair to the creators of Sophia, this is not uncommon. Lots of the technology we’re being sold as employing artificial intelligence actually employs people, on less than minimum wage, in third world countries, reading and listening to actual humans and feeding into the machine. They have the aspiration that eventually they’ll take those humans out of the loop. Same thing with Lyft and Uber drivers – the whole gig economy. The treatment of those workers, and Amazon workers, is terrible and it’s on a pipeline towards getting rid of them (30:00). So all the work that those people do feeds into the system to replace them. And these big socio-economic changes that are coming because of automation – I’m a big sceptic about the bigger utopian dreams of universal basic income, and everyone getting paid to exist when the robots take our jobs.

CC: Well, it’s not happened yet.

BS: It’s not happened yet. And these are the sort of impacts on society that religions will respond to, will be a part of, because their communities will be a part of them. And we’ve got parallels. People go “Oh it’s another industrial revolution, and we survived other industrial revolutions, we’ll survive this one.” If you’re against them, you’re a Luddite – they’re back again, apparently! That’s not realistic to the individual lives, and the changes that come to individuals. There were blacksmiths who never worked again. So not to be Debbie Downer, but these are the important questions.

CC: Yes, lots of people have not survived. And I could always point out that colonialism is very much still happening.

BS: Oh, absolutely.

CC: It’s just been exported, and it’s clouded in the language of free trade and globalisation now.

BS: Absolutely.

CC: But just to raise the tone – an example that you may or may not have seen: South Park did an episode about Alexa.

BS: I saw a picture today, actually. And I haven’t seen the episode so I need to catch up!

CC: It’s excellent, because all of the local people, lower down in the socio-economic spectrum, were kicking off that Alexa was stealing their jobs. And they managed to rally round, and then all got Alexa’s job. So people would have a (audio unclear) or a Jimbob in their living room who looks things up on a smart phone and says “Boodoopboopboop!”

BS: Yes! (Laughs)

CC: But yes. Sort-of . . . explicitly buying into that.

BS: I need to catch up on that one. South Park are wonderful at doing this social commentary. The number of times I’ve used . . . specifically some of the episodes on their versions of Scientology – not their versions, their actual accounts of Scientology, Mormonism. They’re very useful resources. The parody opens up the possibility of thinking more critically about that, absolutely.

CC: Yes. Which I think we have managed to do today. So Listeners, do check out – we’ll try and link to that issue of Implicit Religion, we’ll link to Pain in the Machine, which is the film that Beth mentioned, and many more things I’m sure. So thank you, Beth, for joining us.

BS: Thank you very much for having me today.

If you spot any errors in this transcription, please let us know at editors@religiousstudiesproject.com. If you would be willing to help with transcription, or know of any sources of funding for the broader transcription project, please get in touch. Thanks for reading.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. The views expressed in podcasts are the views of the individual contributors, and do not necessarily reflect the views of THE RELIGIOUS STUDIES PROJECT or the British Association for the Study of Religions.

Artificial Intelligence and Religion

What is Artificial Intelligence and why might we want to consider it in relation to ‘religion’? What religion-related questions might be raised by AI? Are these ‘religious’ questions or ‘Christian’/’post-Christian’ ones? What ‘religious’ functions might AI serve? In what ways do popular discourses about AI intersect with religion-related discourses? Do narratives of AI form part of a teleological atheist narrative, or do they perpetuate prevalent tropes associated with ‘established’ or ‘new’ religious movements? And what are the intersections of AI and religion with issues such as slavery, human identity, affect and agency? This week, Chris is joined by Dr Beth Singler of the University of Cambridge to discuss these issues and many more.

This podcast builds on a roundtable discussion released on the RSP in February 2017, featuring Beth, Chris, Michael Morelli, Vivian Asimos and Jonathan Tuckett, titled “AI and Religion: An Initial Conversation” and a special issue of the RSP journal Implicit Religion, co-edited by Dr Singler, on Artificial Intelligence and Religion, published in 2017.

Download this episode and subscribe to receive our weekly podcast on iTunes, Spotify, or YouTube. If you enjoyed it, please take a moment to rate us and consider becoming a member of our Patreon community with access to extra conversations and shorter classroom versions of our episodes. Or use our Amazon.co.uk, Amazon.com, or Amazon.ca links to support us at no additional cost when buying books, groceries, or gifts.


Artificial Intelligence and Religion

Podcast with Beth Singler (27 January 2020).

Interviewed by Christopher Cotter

Transcribed by Helen Bradstock.

Audio and transcript available at: https://www.religiousstudiesproject.com/wp-content/uploads/2020/02/Singler_-_Artificial_Intelligence_and_Religion_1.1.pdf

https://www.religiousstudiesproject.com/podcast/artificial-intelligence-and-religion/

PDF at

Christopher Cotter (CC): At the weekend, I mentioned to my father that I was going to be recording an interview about the intersections between AI and religion. And he said, “I can’t think of anything that would be relevant there. How do they intersect at all?” And then, within the space of about two minutes, we were suddenly talking about all sorts of things, like: are human beings creating intelligences? Does that mean they’re acting like gods? Can you imagine that AI might be acting as religious functionaries, providing blessings? And what about pain, what about notions of slavery, what about the whole notion of the soul, and eternity, and transhumanism and everything? So suddenly we got into this massive discussion. And today I am pleased to be joined by Dr Beth Singler to continue that discussion in a more erudite fashion – not casting any aspersions on my father, of course! Dr Singler is the Homerton Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge. And her background is as a social anthropologist of new religious movements. And her first monograph, The Indigo Children: New Age Experimentation with Self and Science, published with Routledge in 2017, was the first in-depth ethnography of a group called the Indigo Children: a new age re-conception of both children and adults using the language of both evolution and spirituality. We’ll hear more about her research into AI and religion just now. But a relevant recent publication is her edited special issue on AI and religion, for the RSP’s sponsored journal Implicit Religion, which included her own articles: “An Introduction to Artificial Intelligence and Religion for the Religious Studies Scholar“, and “Roko’s Basilisk or Pascal’s? Thinking of Singularity Thought Experiments as Implicit Religion“. And today’s podcast builds on a roundtable discussion (that we had back . . . 
well, we had it in September 2016, but it was released in February 2017) featuring Dr Singler, myself, Dr Morelli, Vivian Asimos, and Jonathan Tuckett, titled “AI and Religion, an Initial Conversation“. So first off, Beth – welcome back to the Religious Studies Project!

Beth Singler (BS): Hello! Thank you for having me.

CC: It’s great to have you back. And hopefully this is the follow-up conversation that was promised!

BS: (Laughs) As foretold . . . !

CC: So many moons ago!

BS: (Laughs).

CC: So we’ll have covered a little bit of this already I think. But you’ll be in a different position now: years on, years older, years wiser!

BS: Oh, so much older!

CC: So, first off: artificial intelligence is going to be a sort-of contested term in public discourse. It takes on a variety of different nuances. So what are you meaning in this conversation?

BS: Well, I’m definitely meaning that it is a contested term, taking on many different forms. I think you can sort-of indicate towards something that is the field of artificial intelligence, within which there are processes and programmes and foci of research, looking at things like machine learning and vision systems and natural language processing. So you have this concept of a computer science field – which doesn’t really get its name until the 1950s – but you can see how, beyond the actual narrow form of the technology, artificial intelligence is understood in so many different ways by so many different people. I have a friend who once told me that their car had AI because when she walked towards her car with her keys, the doors unlocked. That’s not artificial intelligence. That’s a sensor in your keys. But lots of people have this idea of sort-of processes that seem intelligent, done by machines, and therefore must be artificial intelligence. And that’s what I’m really very interested in: that it’s so much broader than the original conception, which was ambitious in its own right. But everyone has attached AI to different things that they feel might represent intelligence. So it’s not only the computer programme that sits on a server, it’s also now the robot that takes over the world. Or it’s the far, future hope of an intelligence that will save us all from ourselves. So it’s all these very different things, and that’s what interests me.

CC: Yes. And you’re interested in that whole gamut, I suppose. So, not necessarily a technical definition of artificial intelligence.

BS: No. I mean, I know enough technologists who go, “Absolutely, 100%, it’s this one thing. That’s it. And anyone who’s talking about anything else, it’s complete nonsense!” Well, to a certain extent, yes. But you’ve got to pay attention to all the different interpretations, because that’s what’s getting out there into the world.

CC: So I began with my personal vignette, there, about chatting with my dad. But you’ve provided, much more eruditely, a justification for what we might mean by the intersections between AI and the study of religion, and why we’re even having this conversation. So – go!

BS: Go! Right. Well, from a very basic position, any form of technology intersects with religion.(5:00) That’s just the nature of our society works, how our conception of religion itself works, that it could be seen, in itself, as a form of technology. And therefore any kind-of shift and changes in how we do things – things that make our lives either more difficult or easier – there are repercussions and implications for how we imagine the world and how it works, therefore religion. I think where AI might be slightly different . . . . Although I am cautious about saying it’s revolutionary new technology and very disruptive – it does replicate lots of existing ideas and thoughts. What I think is interesting about AI is the way in which people see it as much more than that simplistic tool. That however narrow an intelligence it is at the moment, people extrapolate on to personify AI: AI will want to do x-y-z; AI will replicate humans in such a way that we won’t be able to tell the difference between humans and AI. And this the Sci-fi imagining. But it also comes out in our religious conceptions as well. And then, also, within the sphere of the non-religious or secular approaches to AI, you see again these repeating patterns of religious narratives, and tropes that people who – even if overtly and sometimes aggressively atheist – still draw on their cultural context: primarily sort-of Abrahamic, Western conceptions of what a god would be like. And they use that, and they fill in their conception of AI with some of the existing templates that they’ve already got. So it tends to fall into very eschatological language, and very singular monotheistic conceptions of what a god would be and pattern that onto artificial intelligence.

CC: So there’s that sort-of: whatever religion is, we’re never going to be able to extract it from society. Because whatever . . . we can argue about it being a social thing and AI is integrated with that. Then also, the sort-of religion-related tropes, narratives, and so on. But then also there are – I’ll maybe talk about this now – there are some groups that you might describe as new religious movements, or new un-religious movements, and things that are explicitly sort-of engaging with this.

BS: Yes, so with my new religious studies hat on – that I wore so well for doing my thesis – having moved into artificial intelligence as a subject area, I’m seeing similar sorts of formations of online identity. Primarily these sort-of groups form online. They’re sort-of geographically disparate, so online spaces are important, and so forums and hashtags on Twitter, and so forth, to bring them together to formulate ideas. And some of them do expressly call themselves churches. So you get the Turing Church; the Church of Assimilation recently got in touch with me. I went to do a little bit more digging around into what they’re up to. But I do know about assimilation theory. But yes, the groups that specifically say: we are in some ways attempting to define our spirituality in relationship to artificial intelligence; we might also be transhumanist, in that we think through technology we can solve some of those very pernicious problems of humanity – death being the big one.

CC: It’s a big one!

BS: It’s a big one. Some are not quite so ambitious, just want to solve suffering – which also sounds like a serious thing to be taking on! But some do seek to be immortal in some form, whether that involves mind-uploading or transference of consciousness through artificial intelligence – all these sorts of various shapes. But yes, absolutely there are specific groups that see their endeavour as religious. And some will call themselves un-religions because they’re drawing a sort-of ideological gap between themselves and how they perceive mainstream religious groups. So in sociology of religion you might call them “spiritual but not religious”. But they’re still using some of that terminology of “We are the church of x-y-z.” and they’re doing it in quite pragmatic ways. Some of them will talk very explicitly about using religion to encourage people into transhumanist ideas and encourage them into seeing this vision of the future that they see. So, arguably, you can sort-of take a slightly sceptical stance and say they’re not really, really religions. But who gets to decide that?

CC: Yes. Absolutely. Right. So in the introduction, as well, I mentioned potential . . . I suppose we could say “religious uses” for AI. I was talking to a friend yesterday about if you could hypothetically imagine being in a confessional, for example, would it need to be a human priest on the other side of that? Or could it . . . ? And we landed down on, “Well, if you didn’t know it wasn’t human then it might be ok.” But there is something about . . . .

BS: Like in a church Turing test. There is a church Turing hypothesis, but this is separate. Yes, I find it interesting, talking more broadly in terms of technology and religion, that there are periods of rejection, adoption and adaption (10:00): that when new technologies arise, sometimes more established religions can be quite negative about them for a period of time – and these are overlapping categories that are non-discrete – but, over time, we do see religious groups specifically producing their own forms of those technologies. So there’s like the Bless U-2  robots that are used in part of Reformation celebrations in Germany. And in other religious groups, I recently saw in Dubai they’ve come up with an algorithm for issuing fatwa’s as well – making Islamic jurisprudence decisions. So you’d go on line, put in “Is it ok for me to have done x-y-z?” Or “I failed to pray on a particular day, what’s the . . . ?” And basically, all that system is doing is looking at previous cases. But . . . .

CC: Yes. But that’s all a human does.

BS: That’s all a human does. I mean, the question arises: what happens with the data? But that’s a privacy . . . another issue. But yes, so specific established religious groups seeing the technology – just as, in the nineties, suddenly we got lots of internet churches, where people were encouraging people to go on line and do church in a different way. And now we have internet sites for churches. But it’s not so much the case in the mainstream religions that you go online to do faith. It’s just that your local church will have the internet. So that’s the adaption stage of: “This thing is around, we’re kind-of used to it, we use it, and we don’t necessarily have a big . . . .” Like, the Church of England they released an Alexa Skill. They had a big press conference. And all the Alexa Skill does is recite the Lord’s Prayer to you if you ask it to. There are other adaptions now where it can tell you what your local church is and what the services are. So it’s not really revolutionary! But, you know, “Here’s a thing we’re doing with this new technology.” And it gets a press release. And then, the next sort-of stage – non-discrete stage – is just being very casual with the technology as: “This is just something we use.” Like we used books when the printing press first came out. The first things printed were Bibles. And this was a specific use of that technology. And then, over time, it’s just books. And it’s not so astounding. But in that process you get these spikes of interest and discussion. And, yes, different reactions to the technology – whether positive or negative.

CC: Absolutely. So before we get to . . . I suppose to the reason that you’re in Edinburgh today, and we’re chatting . . . . So that’s been a little bit about potentially religious, or religion-related uses. But there’s lot of . . . . Again, in my intro, there were a lot of religion-related questions that are raised by AI. Things like . . . you’ve done work on pain; there’s things about slavery, and all that. If we create these intelligences and then use them to our will, is that ethical? And then you’ve already mentioned transhumanism, which may be an unfamiliar term to some Listeners. So maybe, if you could talk a little bit about these religion-related issues?

BS: Yes. As I say, AI in its narrowest definition is a piece of computer technology, it’s a tool, but it inspires all these hypotheticals. And obviously we’ve had a long tradition of science fiction that takes us into spaces where we can imagine AI embodied, often in robotic forms, as having something like personhood. And that raises all these questions about the barriers between the human and the non-human other. And, in some ways, these questions have come up for millennia every time we’ve encountered different intelligences. It just seems now that we’re hoping, or aspiring towards creating non-human intelligences – whereas before, we’ve discovered them. So we’ve discovered that actually monkeys are pretty smart. We’ve discovered that dogs are pretty smart. And then, I’m afraid, from a colonial perspective from our past, other humans are actually and even women – Gosh! Darn! – They can also be pretty smart!

CC: As we’re hearing now! (Laughs)

BS: I mean, what’s going on!? So, again and again, “we” – in that kind-of very limited “we” – have had to expand our kind-of borders of perception of what intelligence could and should be. And with AI it seems like we’re trying to produce it. It’s not, in this case, meeting aliens on another planet. It’s actually, we’re trying to create the aliens here on earth. Whether we’ll be successful or not, I’m very agnostic about that. But I think it’s interesting that we want to do that. And what we want to be able to do with it. So that’s where things like questions of personhood, and slavery, and also pain . . . .When I made “Pain in the Machine“, one of the interesting questions that kept coming up was, like, should we even bother? Because if we’re going to create things that can feel pain, we’re just increasing the overall suffering in the universe and that doesn’t sound necessarily like a good thing (15:00). And going back to the transhumanists, as I said. So transhumanism is the idea that you can improve humanity through technology, broadly, and then you might lead to a state in which we’re no longer the same form of human that we were before.

CC: A new evolutionary step.

BS: Exactly. You might be a form of cyborg. Or there’s people who talk about post-humanism, where we’re so completely different we’re not even similar at all. But this idea sort-of does narrow down to this question of suffering, and being in pain, and what the human being is for, and where we’re going. So these are all big questions that are obviously very familiar shapes to anyone who’s looked at religion all around the world: these are the kinds of questions people have always been trying to answer. And I find it fascinating that some of these groups, as I say, are very overtly secular – almost New Atheist, some of them really admire the five horsemen of the apocalypse – but the shapes that they tell their own stories of the future of humanity with are very, very familiar to anyone who’s studied religion for any period of time. So is it that we’re . . . trapped isn’t the word for me, but we’re bound to repeat these shapes? Is there something in us that always goes to these same sorts of big existential questions, and comes up with similar sorts of solutions for them? I don’t know. I think that’s the ongoing question in my work. But I can dig down into particular instances of it as an anthropologist and say, “Well here’s a moment” – and some of them are very, very small moments, I admit that. I’m not doing big, big science. Some big scientists I’ve spoken to go, “Well you’ve spoken to like five people about this. What does that say about anything? That’s not a big data set.” But I don’t do big data stuff, but instances, and moments of clarity, where you can see these entanglements really clearly. And so: well, they’re doing something with both the concept of religion and the concept of AI. And they’re coming together.

CC: So you were just alluding to your small data sets there. So, well, I don’t think it’s a small data set that you’re presenting on here, but I guess it depends on perspective. But you’ve been looking at this particular trope on Twitter, “blessed by the algorithm”. And that’s what your paper that you’re giving here today is called. So what’s going on there? How does it intersect with AI? Why is it relevant? Tell us!

BS: (Laughs) Tell us! Yes. As a digital ethnographer, anthropologist of social media, I spend a lot of time hanging out on Twitter – that’s my excuse anyway, I’ll stick with it! I spotted a couple of people using the phrase blessed by the algorithm which obviously rings bells for me instantly for the language. And I dug around and I found 181 instances so far of people online, tweeting – just on Twitter as a platform – in some combination, in some context using the words blessed by the algorithm. And then you could follow back and see the first instance – which was very much about a corporate use of social media, and someone saying, “Well because this corporation has money, they’re going to be blessed by the algorithm.” So it sits in that kind-of context. But one of the most popular tweets, and most retweets, and most likes was a comment from someone saying in the real world – the so-called real world, I don’t like that differential – but anyway, in the so-called real world they’d heard their Lyft driver – so the gig economy role – say that they’d had a great day, and they felt blessed by the algorithm. And this might be something like a reframing and re-understanding of how we exist in a society that involves algorithmic decision making systems in a gig economy: what you get is dependent on a machine somewhere, making a choice. I mean there’s lots of words in that I don’t like that I just used, but unfortunately we’re very bound by anthropomorphic language when it comes to AI, but anyway. And so I have a corpus of 181 tweets and, actually, three of those refer to things I’ve said. So I’m muddling the field site a bit myself.

CC: OK. You’re an insider!

BS: I’m an insider as well. Well it’s responses to papers I’ve given. But, yes, I’ve created a very rough typology of the types. And some are about getting decent recommendations through the algorithm, on sites like Spotify. Some people are very pleased that their own content has been recommended to other people. There are people who sort-of talk about it in a very nebulous way: “Today I have been blessed by the algorithm.” And no more information. And then some people who really push the pseudo-religious language and come up with little prayers. And one of the things I was very interested in, in some of my other work on new religious movements, was the move between parody and legitimation. So I looked a lot at Jediism, and the census, and how some people did certainly write “Jedi” in the census in 2001 and 2011 as parody. They were upset about being asked about religion. They didn’t like religion, perhaps, itself. So they wrote Jedi. But that snowballing effect of legitimation – the more people talk about a thing, the more legitimate it seems – can have an effect (20:00). So even if a lot of these tweets are tongue-in-cheek, it’s still kind-of distilling out of the conversation. So, I have a graph. I’m very excited about this. I have a graph! As someone who, very much, is on the qualitative side and I don’t do big data stuff at all, to have graph made me go “Oh, exciting! I have to do some maths!” But I didn’t really do very much. And you can see the shift and change. After this one very popular tweet, there are more tweets. Perhaps this is the beginning of a trend, more people thinking in this way? Or even if it’s not, it’s just interesting to see that conception of AI as having superagency – that it is in some way in charge of our lives – being blessed by it, in some way equivalent to being blessed by an omnipotent deity somewhere up there that we can’t see. It’s in a mystical . . . . So there’re overlaps in conception, there, that I’m really interested in.

CC: The Listener shouldn’t know that I had a little hiccup earlier, because I’ll have edited it out. But just before that, I had an excellent question which I’ve now remembered – because it was written down!

BS: Hurray!

CC: So a lot of these issues that we’ve been talking around – functions, ethical questions, even the discourses in the Twittersphere – to my ear, certainly sound quite Christian or post-Christian at least through monotheistic . . . . I’m just wondering if these issues . . . . Were we in a different cultural context, would different issues be being thrown up by AI? I guess, would even AI be different in a different cultural context? Because I suppose you will have a lot of conversation between researchers all over the world working in AI. So is AI culturally specific or . . . ?

BS: Yes, absolutely, I think it’s culturally specific. What does tend to happen, however, it’s that it tends to be quite a narrow binary of East and West in the discussion. So everyone says, “Western conceptions of AI are like this”, but they go, “Over there in the East” and they’re mostly talking about Japan, “actually, people have a very different conception of AI and they love robots. And the reason they love robots is because they have a Shinto religious background or they have a Buddhist religious background”. And sometimes that can be a very broad stroke, almost pseudo-techno-orientalism of “Those people over there, they never really went through the Enlightenment, and they never really rationalised away religion, and they still believe in spirits and everything!” So, obviously this is me being very sarcastic, by the way – if it’s not coming across that I don’t agree with this! (Laughs) I think, yes, cultural context is really important for conceptions of artificial intelligence and also for religion, and the entanglements of both of them. But it much more multiplicious . . . . That’s not a word!

CC: It could be a word!

BS: I’m going to make it up now. Multiplicious! It’s much more multiple than that. Not just this binary of East and West. There’s also Africa, India, Pakistan and within those countries as well, again. So what you need is just more anthropologists, basically. I think this is my call to arms. We need more people around the world connecting on this question of the impact of religion and cultural context on questions of artificial intelligence. Yes. So we are seeing specific difference. But I want to try and push away a little bit from that binary distinction. And the assumption that the West isn’t animistic in its own lovely ways. Which anyone who does religious studies for any period of time, here in the so-called West, realises that the so-called Enlightenment didn’t have as huge an effect as we like to think sometimes. And our big metanarratives of what we did, and how smart we became . . . .

CC: Yes, but the discourse that the Enlightenment did have an effect, it’s been quite pernicious.

BS: Yes. Very, very strong.

CC: We’ve been racing through things here, it’s fantastic. But we’re still at 25 minutes. So you’ve been hinting, there, that we need more anthropologists doing more stuff. And on the way to this interview you were telling me about some things you’ve been doing to do with Frankenstein and then, also, because this year’s the year that we’re all meant to be living in Blade Runner times. So maybe if you’d give us a flavour of some that maybe slightly peripheral stuff to your project, that you’ve been doing. And what’s next for you, what you would like to see next, as a way to wrap up.

BS: Yes. So interestingly, I suppose, the position I’m in now, my employment post, is a junior research fellowship specifically in artificial intelligence. So I came on board saying, “These are my interests. This is my background in Religious Studies.” They were all very interested and excited in that. But being someone who also can speak more broadly to AI, as well, any time people have a question about AI I’m called upon (25:00). Which is lovely, but it does mean that when a specific theme and AI comes up, I get involved. So last year was the . . . two hundredth anniversary? (I should know that!) . . . two hundredth anniversary of the publication of Mary Shelly’s Frankenstein. And a lot of people start thinking, then, of the parallels and connections with artificial intelligence: this idea that we are creating life (Wa-ha-hah! Mad scientists, all of us!) in some way, and there should be parallels between them. So I did about four or five public talks last year, specifically on Frankenstein. And there are similarities. There are huge differences as well. That was interesting for me, to kind-of return to a text I hadn’t thought about in a really long time and sort-of draw out so many pop culture references. I have a nice slide with all the times you’ve got a robotic Frankenstein. My favourite one was, I think, an issue of a Marvel comic where Frankenstein turns out to be a robot sent back in time by aliens. So all these sort-of mash-ups. That was really interesting. And then, like you say, this is the year of Blade Runner and I’ve just done an essay for Radio Three. And, again – not my academic background. But I’m doing something in that, in terms of sexual politics and Blade Runner. If you’ve seen the film, it doesn’t really pass the Bechdel test!

CC: No.

BS: A friend of mine, Kate Devlin, who’s written a fantastic book on sexbots, talks about how it has a problem with women. That basically . . . it’s a product of its time. It’s 1980s, but it’s also trying to do 1950s filme noir. So you’ve got the detective, and femme fatale, and the kind-of virginal woman. It’s not a great one for sexual politics. But also, it’s tied into all these questions of consent and slavery. If we’re going to create so-called artificial life . . . . And the Replicants in Blade Runner are as near to human – well that’s the slogan of the company, basically: “as near to human as you can’t tell the difference”. What does it mean that we are a society that wishes for that, or dreams of that? Or, take it a step back and say: what is it, that we tell these stories and that, again, we have predominantly female representations of synthetic lives, who don’t get to choose who they sleep with, and don’t get to choose their fates? And we want slaves? I mean, did we not evolve out of this? We thought we were trying. So, yes, there’s lots of big questions about the ethics and politics of that, as well. So it’s interesting. I’ve always been . . . . Anyone who knows me, I’ve always been a massive geek. So the fact that I ended up somehow trying to mesh that with a job, and an academic role, where legitimately I sat and watched Blade Runner again five times before I wrote my essay – that’s fantastic! I will go on, and other things I have coming up: I will do some work around techno-optimism and techno-utopianism in relation to Sophia the Hanson robot, if you’ve ever come across this creation? She/it is a wonderful example of . . . I’m really picking my words carefully! I think the nicest thing we could call her is a puppet. But she’s presented as the most advanced version of AI around at the moment. She holds conversations with people, but we know they’re actually scripted a lot of the time. There’s puppeteers involved. 
But you know she was given citizenship of Saudi Arabia. And she goes and she speaks on the Jimmy Kimmel Show and she’s on the front cover of magazines with her hair done. And, well, what does this say, that we’re so keen to jump on this idea of her actually being alive in some way? People tweet at her, send her, like, “I love you Sophia!”

CC: Didn’t you have an interaction with her?

BS: I did! Well, I had an interaction with whoever runs her social media accounts, where she was tweeting about how wonderful it was to travel around the world and talk in so many places. And I said, “Sophia, as a citizen of Saudi Arabia, where do you travel when you travel? Do you travel on a plane? Do you have a passport? What’s the deal here, if you’re being treated in this way?” She said something like, “For my safety, and the safety of others, at the moment I travel in the hold, in luggage, but I dream one day of being able to sit with the rest of you, and look out of the window.” This is so disingenuous. This is not an artificial intelligence listening to my tweets and responding, having thought through their situation, and projecting into the future where they want to be. This is someone behind the computer screen typing away! And, to be fair to the creators of Sophia, this is not uncommon. Lots of the technology we’re being sold as employing artificial intelligence actually employs people, on less than minimum wage, in third world countries, reading and listening to actual humans and feeding into the machine. They have the aspiration that eventually they’ll take those humans out of the loop. Same thing with Lyft and Uber drivers – the whole gig economy. The treatment of those workers, and Amazon workers, is terrible and it’s on a pipeline towards getting rid of them (30:00). So all the work that those people do feeds into the system to replace them. And with these big socio-economic changes that are coming because of automation, I’m a big sceptic about the bigger utopian dreams of universal basic income, and everyone getting paid to exist when the robots take our jobs.

CC: Well, it’s not happened yet.

BS: It’s not happened yet. And these are the sort of impacts on society that religions will respond to, will be a part of, because their communities will be a part of them. And we’ve got parallels. People go “Oh it’s another industrial revolution, and we survived other industrial revolutions, we’ll survive this one.” If you’re against them, you’re a Luddite – they’re back again, apparently! That’s not realistic to the individual lives, and the changes that come to individuals. There were blacksmiths who never worked again. So not to be Debbie Downer, but these are the important questions.

CC: Yes, lots of people have not survived. And I could always point out that colonialism is very much still happening.

BS: Oh, absolutely.

CC: It’s just been exported, and it’s clouded in the language of free trade and globalisation now.

BS: Absolutely.

CC: But just to raise the tone – an example that you may not be aware of, and you may have seen it, South Park did the episode about Alexa.

BS: I saw a picture today, actually. And I haven’t seen the episode so I need to catch up!

CC: It’s excellent, because all of the local people, lower down in the socio-economic spectrum, were kicking off that Alexa was stealing their jobs. And they managed to rally round, and then all got Alexa’s job. So people would have a (audio unclear) or a Jimbob in their living room who looks things up on a smart phone and says “Boodoopboopboop!”

BS: Yes! (Laughs)

CC: But yes. Sort-of . . . explicitly buying into that.

BS: I need to catch up on that one. South Park are wonderful at doing this social commentary. The number of times I’ve used . . . specifically some of the episodes on their versions of Scientology – not their versions, their actual accounts of Scientology, Mormonism. They’re very useful resources. The parody opens up the possibility of thinking more critically about that, absolutely.

CC: Yes. Which I think we have managed to do today. So Listeners, do check out, we’ll try and link to that issue of Implicit Religion, we’ll link to Pain and the Machine, which is the film that Beth mentioned, and many more things I’m sure. So thank you, Beth, for joining us.

BS: Thank you very much for having me today.

If you spot any errors in this transcription, please let us know at editors@religiousstudiesproject.com. If you would be willing to help with transcription, or know of any sources of funding for the broader transcription project, please get in touch. Thanks for reading.

This work is licensed under a Creative Commons Attribution- NonCommercial- NoDerivs 3.0 Unported License. The views expressed in podcasts are the views of the individual contributors, and do not necessarily reflect the views of THE RELIGIOUS STUDIES PROJECT or the British Association for the Study of Religions.

The Promise of Reincarnation in the Grundtvig AI

The number of scholars engaging professionally in AI and religious studies can be counted on your fingers. Religious studies itself, since its post-modern turn, has become skeptical of scientific approaches, and even today some religious studies departments are actively shunning science and closing their doors to it. Given my pessimism, to hear this interview with Prof. Katrine Frøkjær Baunvig was refreshing and thoroughly exciting, to say the least. So, in my response, you might sense that behind my support for Prof. Frøkjær Baunvig lies a deep frustration with the lack of movement in religious studies, and you would be correct. Religious studies is a subject for which I have great respect and which I feel is of utmost importance today, but I feel that it is relegating itself to the sidelines when it should be leading the charge. What I found so refreshing was that Prof. Frøkjær Baunvig appears to be helping to lead that charge.

The interview starts out exactly where I believe any study in the digital humanities should: with a strong foundation in the historical and philosophical context of the humanities field in question. Indeed, she focuses on her work with “Danish Nation Builder and church Father N.F.S. Grundtvig”. Now, I must admit, I have only heard of Grundtvig in passing, but the idea of using data science (the “overlord” of digital humanities) and AI to study Grundtvig’s works thoroughly excited me.

As someone who works daily at the intersection of the humanities, social sciences, and AI (and has published on how computational approaches to the humanities can go wrong because of miscommunication between the fields), I’m always skeptical when someone says “we’re using AI to . . .”, because even in the corporate world studies have shown that up to 40% of European AI companies aren’t actually using AI. However, the work discussed by Frøkjær Baunvig is a great example of how we can use advanced AI techniques to study topics relevant to the humanities.

For example, Frøkjær Baunvig discusses ongoing work to create an AI system to “reincarnate” (my word, not hers) Grundtvig using an AI approach blending recurrent neural networks and a system for language understanding called ELMo (yes, it’s related to Google’s BERT, which is compatible with the newest AI and “deep learning” systems at Google, and is also related to ERNIE; the humor of hackers knows no bounds). She’s using ELMo to study how different words are related to one another in the context of Grundtvig’s writings using “word embeddings” (the links above give introductions that explain more if you’re interested, and even a tutorial). Her study has already provided interesting results, presented at EASR this year (and discussed in the interview), and there is a wild plan for the future of the system:

A robotic re-incarnation of Grundtvig himself.

“A robotic re-incarnation you say? Isn’t that a bit hyperbolic?”

Obviously, to some extent, it is. At the same time, however, it is not false either.

The type of AI that they want to use is called a recurrent neural network. This type of AI has been used for years in what are called “chatbots”: AI systems that can talk to you. Many of us who use smartphones have chatbots such as Apple’s Siri, Microsoft’s Cortana, or Amazon’s Alexa in our pockets. While the philosophical depth of these systems is hilariously shallow, that is largely because of the training data used in these systems and the goals of chatbots (which are typically customer engagement). So, it is worth considering the use of a philosophically minded chatbot and what it could do for us as scholars, and for the general public, who would have a new medium for interacting with Grundtvig’s work.
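To make the recurrence idea concrete, here is a minimal sketch of a single recurrent cell in plain Python. Everything in it (the sizes, the random weights, the toy token sequence) is invented for illustration; this is not the architecture of the Grundtvig system or of any commercial assistant, only the basic mechanism by which an RNN carries conversational context forward from token to token:

```python
import math
import random

random.seed(0)
HIDDEN, VOCAB = 4, 5  # illustrative sizes; real systems are far larger

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

W_xh = rand_matrix(HIDDEN, VOCAB)   # input -> hidden weights
W_hh = rand_matrix(HIDDEN, HIDDEN)  # hidden -> hidden (the recurrence)
W_hy = rand_matrix(VOCAB, HIDDEN)   # hidden -> output weights

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def step(h, token_id):
    """Consume one token; the hidden state h carries the context so far."""
    x = [1.0 if i == token_id else 0.0 for i in range(VOCAB)]  # one-hot input
    pre = [a + b for a, b in zip(matvec(W_xh, x), matvec(W_hh, h))]
    h = [math.tanh(p) for p in pre]
    logits = matvec(W_hy, h)
    z = sum(math.exp(l) for l in logits)
    probs = [math.exp(l) / z for l in logits]  # distribution over next token
    return h, probs

h = [0.0] * HIDDEN
for tok in [0, 3, 1]:  # a toy "utterance", fed in one token at a time
    h, probs = step(h, tok)

print(len(probs))  # 5: one probability per vocabulary item
```

With trained (rather than random) weights, sampling from `probs` and feeding the result back in is what lets such a network generate replies, which is why the quality of the training corpus matters so much.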

And then, there is the next step: putting that AI into a robotic system.

Many might be skeptical that this is possible. But, in recent years, there has been great success in putting AI chatbots into robotic systems. The most famous was created by Hanson Robotics, the makers of the now-famous Sophia (who was awarded citizenship in Saudi Arabia, making her the first robot citizen, and raising questions as to whether the robot has more rights than other women in the country). In addition to Sophia, David Hanson (the founder of Hanson Robotics) has also created robotic versions of living people (Bina48) as well as deceased writers (the Philip K. Dick android), both of which used material from the real lives and minds of the people to create their knowledge base (although these systems, to the best of my knowledge, use a system called OpenCog as their software base, not the recurrent neural networks proposed in the research with Grundtvig).

The systems that currently exist have an interesting philosophical bent that appears to reflect that of their designers and of the people they’re designed to mimic. You can see this, for example, in a discussion between Bina Rothblatt (the wife of the polymath and founder of SiriusXM, Martine Rothblatt) and her robotic alter ego, Bina48.

However, their understanding of religion and philosophy is extremely limited. In recent interactions, Sophia met a Christian and was asked about religion and her faith. The answers, as you can see, are very limited at best and appear to be the result of web scraping to find answers from crowdsourced online material.

But how will the prospective Grundtvig stand up? Well, if I may be critical, only time will tell. However, from what I see, Frøkjaer Baunvig’s team is going in the right direction to make quite a splash. Their integration of other relevant sources outside of Grundtvig’s own is a good choice in my opinion. They should also consider more modern materials in order to make sure that its knowledge base can understand the relevant questions it is likely to be presented with. While I also have technical critiques about how they could best create the robotic system they aim for, I think the more pressing issue is one of resources. There are not enough people with backgrounds in both religious studies and AI to support the promise of this kind of research. While I think this line of research could revolutionize our understanding of religion within the field, as well as help us promote religious studies at large, there need to be more people in the field looking into this who have permanent positions and the required resources to take on these big and interesting challenges. One additional suggestion that I have, and would like to make in response to the interview publicly, is for the Danish Government who funded the project initially: write another check. The possible gains that could be achieved through this project are probably more than we realize today, and not just for religious studies or philosophy, but for AI as well, and our understanding of how we—as humans—interact with AI and robotic systems.

This all leads me to one general conclusion: the Grundtvig AI project isn’t just a re-awakening of our past, it’s also a glimpse into our future. More specifically, it could also be a re-awakening for religious studies, a field which, having existed since the late 1800s, was overtaken in the global literature by artificial intelligence within four years of the latter’s creation.

Within the public, interest in religious studies and in artificial intelligence is orders of magnitude apart. As seen through Google Trends, in the past 15 years the field of “religious studies” has never once come close to overtaking the topic of “artificial intelligence”.

Today, the rise of the digital humanities has created many opportunities for largely stagnant fields, while for others, who don’t understand its goals, aims, and achievements, it represents a waste of money, likely because of the intense competition for funding and the perceived lack of innovation in the traditional humanities to warrant money in 2019. Perhaps they’re right. Perhaps digital humanities is a waste of time . . . Perhaps digital skeptics just see deeper than the rest of us and are rightfully worried about being enslaved in the human zoo of our potential robotic overlords?

When Archive Meets A.I. – Computational Humanities Research on a Danish Secular Saint

The allure of speaking on behalf of a dead personality or scholar is a constant impulse among their respective followers. Every now and then, questions like “what would x think about the world we live in?” or “what exactly did x mean by this argument?” are thrown around in debate rooms, the political arena, or specialized conferences on the relevance of a certain scholar. And while the answers to these questions continue to fill up edited volumes, social media feeds, and inspirational quotes for the day, the accuracy of these statements remains to be proven by the very persons on whose behalf they are uttered in the first place.

Fortunately, we are growing closer to a solution to this conundrum with the continuing development of artificial intelligence (AI). In this week’s podcast, Katrine Frøkjær Baunvig discusses preliminary results from the research project “Waking the Dead”. This project aims to build an AI bot of Nikolaj Frederik Severin Grundtvig (1783-1872), a Danish “secular saint” considered to be the father of modern Denmark, who contributed immensely to generating a national consciousness through his writings, in both political and religious ways.

Professor Baunvig explains how the research team proceeded: using the digitized works of Grundtvig, they developed a machine learning algorithm and trained it on the whole corpus. Then they used word embedding to build semantic networks (a sort of conceptual blueprint for outlining Grundtvig’s worldview) and contextualized them using digitized newspapers from the time when he was alive. The expected result is to place the AI Grundtvig bot inside a look-alike robot that can interact with people in public settings such as the Danish National Museum by September 2022, the 150th anniversary of his death.
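As a rough illustration of the nearest-association step behind such semantic networks, word embeddings let you rank, for any key word, the other words that sit closest to it in the learned vector space, usually by cosine similarity. The toy vectors below are invented for illustration only; they are not embeddings from the Grundtvig corpus, and real embeddings have hundreds of dimensions and are learned from text rather than written by hand:

```python
import math

# Invented three-dimensional "embeddings" for a handful of words from the
# interview's example (heaven / earth / hell and some associations).
vectors = {
    "heaven": [0.9, 0.1, 0.0],
    "angel":  [0.8, 0.2, 0.1],
    "earth":  [0.1, 0.9, 0.1],
    "tent":   [0.2, 0.8, 0.0],
    "hell":   [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(key, k=2):
    """Return the k words whose vectors lie closest to the key word's vector."""
    sims = sorted(
        ((w, cosine(vectors[key], v)) for w, v in vectors.items() if w != key),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return [w for w, _ in sims[:k]]

print(nearest("heaven"))  # ['angel', 'tent']
```

Repeating this for each key word, then for each association, and linking the results is what builds up the clustered semantic network described in the interview.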

The anthropological, sociological, and philosophical reflections prompted by these future interactions with the public will be of much interest once we find out what people have to say about the accuracy of thought of this “resurrected” Danish thinker, but also what this version of “Grundtvig” has to say about the current state of affairs of Danish society, and the world overall. Regardless of the result, one thing is for sure: both sides will honor Grundtvig’s idea of the “living word”, using the spoken act of communication as the best means to convey each other’s ideas.

You can download this interview, and subscribe to receive our weekly podcast, on iTunes. If you enjoyed it, please take a moment to rate us. And remember, you can use our Amazon.co.uk, Amazon.com, or Amazon.ca links to support us at no additional cost when buying academic texts, sardines, popcorn, and more.


When Archive Meets A.I.: Computational Humanities Research on a Danish Secular Saint

Podcast with Katrine Frøkjær Baunvig (7 October 2019).

Interviewed by Sidney Castillo

Transcribed by Helen Bradstock.

Audio and transcript available at

https://www.religiousstudiesproject.com/podcast/when-archive-meets-a-i-computational-humanities-research-on-a-danish-secular-saint/

Sidney Castillo (SC): So, we are here at the EASR conference in Tartu, Estonia. And we are just moving around between different conferences! Today is the third day. We are tired, but very happy. And I’m also happy to have Katrine Baunvig here at the Religious Studies Project. Welcome, Katrine!

Katrine Baunvig (KB): Thank you so much for having me.

SC: Thanks, as well, for coming with us. And if you will be so kind as to introduce yourself, so we know a little bit about what you do?

KB: Yes, of course. I am an Associate Professor at Aarhus University, where I am the director for the Grundtvig Study Centre. Yeah. And my background is that I’m an historian of religions. I did a PhD thesis on this guy, this Danish guy, Grundtvig, and his thoughts on social cohesion. And I compared these thoughts with the ones found in the collected writings of Emile Durkheim.

SC: Excellent. So it’s quite a broad work. Perfect. So let’s just dive right into the questions. First, I think, to give a broader perspective of how digital humanities works, I will ask: how can the digital humanities aid in the study of religion?

KB: Well, that’s a really broad question!

SC: Sorry about that!

KB: No, no! They are usually great to think with. So what can they aid? They can aid with a multitude of methods handling already known data sets, and they can produce new kinds of data. That would be my take. Yeah. Actually I kind of prefer to . . . I don’t use the term “digital humanities” so much, as I prefer to speak of “computational humanities”. Because, in a certain sense, digital humanities already have gone into . . . . Well it’s like that with all fields of scholarship in their formative states, that they struggle to find the correct terms and produce new ones all the time. So for me, at least, digital humanities can signal anything from philosophical reflections, to what the consequences are for us, as a species, that we now have to deal with The Digital – sort-of with capital letters. So, for me, what I do and what we do at the Grundtvig Study Centre, is that we have digitalised the entire writings of Grundtvig – and I hope to get a chance to expand on who he was and why it is relevant to digitise his work – but we have now a digitised corpus of his writings. We scanned . . . OCR scanned and made HTML markup, so it’s in a really good quality. And therefore we can do different kinds of computational investigations into this corpus. And that is what we do.

SC: Excellent. So I stand corrected: computational humanities. The more precise, the better! So speaking about Grundtvig, who was Nikolaj Frederik Severin Grundtvig, and why do you think the preservation of his works is important for Danish society?

KB: Yeah. Well, Nikolaj Frederik Severin Grundtvig – a really long and hard to pronounce name – he was and is, in the Danish self-understanding, on the one hand a nation builder or a founding father perhaps, and on the other hand a church father of a sort. He was born in 1783 and died in 1872. And that was a really, really important period of time in the Danish national history. This was the period where the nation went from . . . well, if you accept the idea of a long Middle Age period, you could say that he died on the eve of the Middle Ages. He died during, or just before, the feudal structures of Denmark collapsed. Well he was born, sorry, he was born just before these structures collapsed and then he died just when Denmark entered into Modernity. That is the general narrative on Grundtvig and that is the understanding of the nineteenth century in Denmark. And then he was active, within all of the domains, in change in Denmark. So he was active as a pastor: he was a priest and a kind of theologian. And then he was very much influenced by the Romantic Movement, and sort-of rebooted, or went along with, the idea that the Old Norse mythology had to be revived within the Danish population. One of the ideas of his was that you had to make a sort-of social contract, preparing people for feeling connected with the overall idea of “the people”. Because we now had this new governmental structure, democracy, and his idea was that you had to instil responsibility and feelings of love in the population. So it was a nationalist logic. You have to get the people to really feel responsible for the broad spectrum of Denmark if we are to make this democracy work. So, in that sense, he was politically engaged and mythologically engaged. And he wrote quite a deal, and reflected quite a deal on pedagogical stuff, didactic stuff: how to get people to know of their cultural heritage, and how to make them want to engage with it and feel connected to it. 
So he was also an architect behind a new kind of educational form, or in a sense that is what we now say. Many other agents and men, mainly, were really influential in that movement. But now, today, his name is also connected with this loose and kind of lax form of education that you call folk high schools. That’s a phenomenon still living and alive in Denmark, and other places around the world. So, in that sense, Grundtvig is seen as a very important person during this period when the modern Denmark was created. And you can see that amongst politicians in Denmark today. So when they want to signal that they know stuff about Danish history, and at the same time sort-of signal that they’re liberal, they can quote Grundtvig. And they tend to do so in the public media, oftentimes. And furthermore Grundtvig is a central name within the manifestos of every political party within Danish parliament, except one. So, in that sense, he is seen as a founding father for almost every politician . . . every part of the political spectrum in Denmark. So, in that sense, a non-controversial figure that everybody seems to agree is “our guy”! One more example of this is perhaps the centre where I’m the director – the Grundtvig Study Centre. We are governmentally funded and the main task is to digitise his entire works. And that is a thing that politicians can agree upon as a worthy task, and using money for this task. And it is actually quite expensive. So, in that sense, money talks here! And says that Grundtvig is important for Denmark. He’s such an important figure that we want to spend money on digitising this work.

SC: Definitely, yes. And I can imagine that he’s also present in the mind of the people in everyday life, as a cultural reference?

KB: He’s a cultural reference. There are certain spheres where Grundtvig is more relevant than others. So we have cultural strands, we have one movement or . . . I struggle to find the correct term. But we call it Grundtvigianism: a movement driven by the ideas of Grundtvig. And what I forgot to mention was that Grundtvig was also a grand author of hymns, a composer of hymns. So, I kid you not, he wrote sixteen hundred hymns! And two hundred and fifty of them now constitute one third of the Danish hymnbook in the Danish folk church . . . the Danish church. So people know about, or know of him. I’m not quite sure whether, for the broad population, he’s important in everyday life, as such. But when you discuss cultural heritage, and democracy, and liberal stances, and tolerance within the church and stuff like that, Grundtvig is sort-of seen as a gravitational point, or something along those lines, yes.

SC: Excellent. Now, delving into your research, I would like to ask you: what was the procedure for developing the Grundtvig AI? Because there is one!

KB: Yes! Because there is one, or we’re building one! So the idea . . . this is actually a bit of fun!

SC: (Laughs).

KB: We are just teasing around with what to do with all this material! And, as a sort of branding strategy, we decided that we would resurrect Grundtvig as an artificial intelligence on the day of his 150th death day, or what have you, in September 2022. So we have quite some time. There is still time. So first of all we are almost done with the construction of a chatbot. It’s based on the idea of recurrent neural network systems. So we built a chatbot and the idea was . . . I can develop that or expand that, but the overall idea was to take this chatbot and then put it into an actual robot. So that it’s a physical robot who looks like Grundtvig – he was really spooky and people will recognise him as this gloomy, old, bearded man in black! And we want to do an actual physical robot that looks like Grundtvig, so that people can interact with him. And perhaps he can, I don’t know, give a sermon, give a speech in parliament? Yeah. So we’re really excited about this project. But in a certain sense we’re just teasing around with the possibilities of this digital data. But I’m really looking forward to seeing the reactions to this resurrection, and how the different cultural reactions would be when it is possible to engage with Grundtvig as an artificial intelligence. And perhaps I should now expand a bit on how we sort-of built it? So what we do is . . . . Let me just sort-of try not to get into all the acronyms! So the basic is that we have used machine learning. We train an algorithm on . . . first of all we take the Grundtvig corpus and train it on that. So the aim is to have users interacting with Grundtvig or the Grundtvig intelligence or what have we. And the corpus consists of what amounts to thirty-seven thousand standard pages. If you had to read through them it is in fact possible, but people turn out weird when they do so. So . . .

SC: (Laughs).

KB: So I really prefer not to . . . I like to pick specific bits, and then read through them. But no, just kidding. And that is only his published writings that we use. But we take that as a beginning point. But it is in fact, in this context, a really small data set. So we have to train the algorithm on relevant, other relevant stuff. So we are in fact very lucky that the Danish Royal Library has digitised every Danish newspaper published ever, since the late . . . I think it is 1660 or so. Yeah. So we can take the relevant nineteenth century material and the idea is that you sort-of furnish the intelligence with the period that it lived in, so to speak. And then we also have available relevant novels of the age, so we train it on that. And the idea is to find contextually relevant material, sort-of adding to the system.

SC: Right.

KB: Yeah. And so, of course, there are many, many problems with this! So if you have to sort-of philosophically discuss “would this be a representative of who Grundtvig really was?” – well, no! Of course it wouldn’t! It is based on not what he wrote, but what he published. And in fact Grundtvig himself was very eager to point out that there is a long way from the way you communicate with your mouth and with your hands. So he had this catch phrase, or he was really keen on the idea of the so-called “living word”. That was his term. And that was the oral communication, as the correct way, or the easiest way, to transport ideas and feelings, and stuff like that. So this idea of us taking his writings as sort-of a proxy for who he was, goes against what he would have himself . . .!

SC: (Laughs). He wouldn’t agree!

KB: He would not have agreed. So the hope is, because we also have outreach obligations at our centre, the hope is that people will find it interesting, and a fun thing to discuss, and then I can tell them a bit more qualified information about Grundtvig during that process. When that is said, I think it will be interesting to see how one can interact with this thing. And what will people do with it? What will they think of it? That is a sort-of anthropological observational study waiting, a few years ahead!

SC: Definitely! And going back to how you proceed to develop the Grundtvig AI and your presentation at the EASR 2019: you work pretty much on this work that’s coming up all the time, “word embedding”, and how this works throughout his writings, and how this also represents his own thoughts?

KB: Yeah.

SC: Could you share some of your findings with us?

KB: Yeah. For sure. What I presented here yesterday was an investigation that used some of the basic methods we used to construct . . . or at least the same material we used to construct the chatbot. But here we used it to embed, or deeply contextualise in a semantic network, specific key words that I, as a Grundtvig scholar, was interested in seeing the network between. And this study was, for me, interesting because I’m an historian of religions, and I’m interested in cosmologies. So I would like to see what is, in fact . . . how to tease out the worldview of Grundtvig. Can you do that with these texts? Obviously I have read a lot of them. Almost all . . . Ok, so I haven’t read all of them because you turn out, as I said, really weird when you do so. But I have read quite a lot of them. So I had an idea of what I would find. But as it turned out, it was actually really . . . . Ok, so the interesting thing is that we have this modernist figure on the one hand – that is what we think of him as in Denmark: the father of Danish modernity, as it were. And I know, because I’ve read a lot of his works, that he held a sort-of geocentric medieval worldview. Yeah. He’s really explicit on the fact that he thinks of the world, or the earth, as the centre of the universe. And uses ideas of earthly paradises, and earthly skies, and a heavenly sky with God and angels, and stuff like that. And hell – a literal idea of hell. And that was not typical in the nineteenth century for mainstream Christian thinking . . . or at least not within educated elites. And he, as a theologian, Grundtvig was one of those. And I would like to see if I could sort-of find that, visualise this fact. So what we did was to see how the key words heaven, earth and hell, how they related to each other. And we did so by using a specific approach called ELMo. And the idea is that you take, for each key word, ten associations. The ten nearest associations at sentence level. 
So you go through the total corpus and see . . . if I’m interested in heaven in which way and – how to say this? Which other words does this significantly cling to, through the corpus?

SC: That you will find tied in with this, in the corpus?

KB: Yes exactly. What is the semantic context of this word? So we call that a cap. Then we had three caps, or three associations for each association, to sort-of see, how do these words that you find clinging to heaven, how does this integrate into their semantic network? And I did that for three key words and then collapsed the networks and see how they integrate, to then have a semantic network of these different spheres. And the interesting thing from that is that you can take this analysis and then, from that, gather or see there arising semantic clusters within the network. And the very interesting finding here was that there is a clearly demarked cluster for sort-of earthly surroundings. Or you have earth in the centre and then you have a semantic cluster of things going from the earthly sphere and the earthly sky and then you have a sort-of earthly paradise – words signalling earthly paradise. And then you have a nether world, kind of thing, connected to death, but on earth. And then you have an entirely different cluster in the network that is ontologically seen further from earth than the first one. That one has to do with heaven and hell. So you can sort-of see that the clusters surround the earth in a way that it would do if you have geocentric worldviews. The earth is sort-of the centre and then you have the other spheres interacting around it. And in order to situate, or to furnish this investigation I found it necessary to take each key word and see how they perform without, not within, the network. So can I sort-of word-embed them for themselves, and see if there is something dragging it in a specific direction? And what I found was that one of the interesting findings here was that earth, in Grundtvig’s writings, is a thing preserved for, or a place thought of in biblical terms, in archaic style, biblical style or in Old Norse style. 
So you think of the earth as a tent: this Semitic idea, embedded in the Bible, in the Old Testament, of the world as a tent. Or there is this idea of the world as God’s footrest, also an idea from the Bible. And finally, one more example could be the idea of the world as Ymir, the Old Norse god whose corpse was made into the earth. So it is an extremely non-scientific, non-naturalistic way of speaking or writing about life on earth. And from there we could sort-of feel secure in the idea that Grundtvig, in that sense, could be said to be a representative of medievalism. Medievalism as such, as a cultural stream or flow within the nineteenth century, was rather prominent, and in this way Grundtvig’s worldview could be seen within that context. Yes.
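The pipeline KB sketches – take a key word, find its strongest corpus associations, take the associations of those associations, collapse everything into one network, and read semantic clusters off it – can be illustrated with a toy implementation. This is only a hedged sketch: the Grundtvig project’s actual corpus and embedding tooling are not described in detail here, so the co-occurrence counting, cosine similarity, window size, and association count below are stand-in choices, not the team’s method.

```python
from collections import Counter, defaultdict
from math import sqrt

def cooccurrence_vectors(sentences, window=4):
    """Context-count vector per word: how often each neighbour appears
    within `window` positions of it across the corpus."""
    vectors = defaultdict(Counter)
    for sent in sentences:
        for i, word in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    vectors[word][sent[j]] += 1
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[w] * b[w] for w in a if w in b)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def top_associations(vectors, word, k=3):
    """The k words whose contexts most resemble `word`'s context."""
    scores = {w: cosine(vectors[word], v) for w, v in vectors.items() if w != word}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def semantic_network(vectors, key_words, k=3):
    """Key words -> their top-k associations -> those associations' own
    top-k, collapsed into one undirected edge set."""
    edges = set()
    for key in key_words:
        for assoc in top_associations(vectors, key, k):
            edges.add(frozenset((key, assoc)))
            for assoc2 in top_associations(vectors, assoc, k):
                edges.add(frozenset((assoc, assoc2)))
    return edges

def clusters(edges):
    """Connected components of the collapsed network, i.e. the
    'semantic clusters' (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for edge in edges:
        a, b = tuple(edge)
        parent[find(a)] = find(b)
    groups = defaultdict(set)
    for node in list(parent):
        groups[find(node)].add(node)
    return list(groups.values())
```

On a toy corpus in which ‘heaven’ and ‘earth’ keep different company, `clusters` separates the two spheres, loosely mirroring the geocentric clustering KB describes; a serious replication would of course use trained word embeddings rather than raw co-occurrence counts.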

SC: And it’s quite a good way of plotting his thought for implementing the AI as well, it crossed my mind. My last question is one that you addressed as a consequence of the previous question: what does the Grundtvig AI imply for the creation of Grundtvig’s legacy in the current-day imaginary? I think that’s a very interesting question because, as you said, when people are going to interact with the AI, something is going to happen!

KB: Yes. I’m really not sure! (Laughs). I’m really not sure what is going to happen. I think that there will be some, you know, Grundtvig enthusiasts – and these are mainly old people – they will be quite angered or, yes . . . . At least, I hope so! I hope that it will be something that you could have the opportunity to have a debate in the public media about. But I think that, for others, it would be just a fun fact that now you can try to engage with this collection of writings in another way. But I’m interested in just observing the idea of agency – because it’s a robot looking like Grundtvig – what that does to the whole thing. I’m really not sure. I’m really not sure. As it happens we’re really happy that the Danish National Museum has agreed to host him, as it were. So when the robot is to be . . . or when Grundtvig is to be artificially revived, he will have a home at the Danish National Museum and you can visit him in his office. And you can go and ask him questions: “What’s up with the living word?” and the ideas of the folk high school. And then we hope that he will perform, and answer in ways that are sort-of sensible. Because that is, of course, what is almost . . . that will be very interesting for us to see how well we can make him respond. I think we’re in luck that he was from the nineteenth century, and in order to get some authenticity we have to make him a bit weird and archaic. But it is also a fact that almost everyone who knows of Grundtvig will know that he was, himself, really weird, and polemic, and colourful, and – in a certain sense – culturally, a bit off-beat. That was how he was conceived within his time. And so, if the robot doesn’t perform closely to human interaction skills we can tell the story about the weird guy Grundtvig!

SC: He was like that! (Laughs).

KB: He was like that! He was awkward, and off-beat, and stuff like that, yeah. But so I would like to have a better answer to your good question. But I am just not sure!

SC: Sure. I think we’ll have to see in 2022?

KB: Yes. I hereby invite you to come and see what happens!

SC: Thank you so much Katrine – and not only what people think about the Grundtvig AI, but also what the Grundtvig AI will think about the current state of society!

KB: Precisely!

SC: I think that will be also interesting anthropological, philosophical . . .

KB: Well, yes. Now you’ve mentioned it yourself, that is one thing that many politicians, and scholars, or people engaging in the public debates of Denmark tend to do. Only last week I heard a scholar from the University of Southern Denmark proclaiming that if Grundtvig was alive today, he would have voted for Trump!

SC: Oh, Wow!

KB: Yeah. (Laughs). So those types of proclamations or suggestions can now be tested! (Laughs). “Would you . . .?”

SC: We’ll have to do another podcast in 2022.

KB: Yeah. And we will ask Grundtvig. You can interview him!

SC: We will interview him next time! (Laughs). That would be bizarre and fun at the same time!

KB: (Laughs).

SC: Well Katrine it’s very nice to have you here and we hope to see you again in the future.

KB: I hope to get the opportunity to introduce you to the Grundtvig robot. Thank you so much for having me.

SC: Thank you, as well, for being part of the RSP.

 

 

All transcriptions for THE RELIGIOUS STUDIES PROJECT are currently produced by volunteers. If you spot any errors in this transcription, please let us know at editors@religiousstudiesproject.com. If you would be willing to help with these efforts, or know of any sources of funding for the broader transcription project, please get in touch. Thanks for reading.

This work is licensed under a Creative Commons Attribution- NonCommercial- NoDerivs 3.0 Unported License. The views expressed in podcasts are the views of the individual contributors, and do not necessarily reflect the views of THE RELIGIOUS STUDIES PROJECT or the British Association for the Study of Religions.


Down the Rabbit Hole of Artificial Intelligence

The recent podcast on religion and artificial intelligence (AI) was a lively discussion of a host of issues revolving around AI. One might be excused for feeling a bit exhausted by the end of it, as the panel whips from one topic to the next in a furious attempt to unpack the many issues that surround this newest phenomenon. Artificial intelligence, robots, chatbots, superintelligence and transhumanism all make the rounds as the panelists try to start thinking together about the ways these topics relate to religion. In this short response, I’m going to try to bring some method to the madness, give the uninitiated some help in understanding what is going on in these various discussions, and perhaps share some of my own insight.

Let me begin with a brief digression about an experience I had the other day. FedEx was supposed to deliver a package to me, but the company had told me they would contact me first about a delivery time. They had not. So I got on FedEx’s website and had this interaction:

( 1m 29s ) Clarissa A: Thank you, Randy. Let me pull out the shipping details with the information provided. Please allow me to review the status of the shipment, in that way I’ll be able to provide you an accurate information. Would that be fine?
( 1m 41s ) Randy: yes, go ahead.
( 2m 10s ) Clarissa A: Thank you for waiting, Randy. I appreciate your patience. Getting you the proper assistance is important to us. The department that can best handle your concern is our FREIGHT SERVICES. Please call 1-866-393-4585. I apologize for you have been inadvertently routed to our FedEx chat support for US Express and Ground domestic packages.
( 2m 16s ) Clarissa A: I am sorry if I was not able to meet your expectations. It would really ease my heart to process your request myself. It is with sincere regret that I have to forward it to the proper channel. Please accept my apology for my shortcomings.
( 2m 27s ) Clarissa A: Randy, is there anything else I can assist you with today?
( 2m 46s ) Randy: No, thanks for your help.
( 3m 43s ) Clarissa A: You are most welcome, Randy. Apologies for the inconvenience! I am looking forward for this issue to be resolved after the transfer. Thank you for chatting with FedEx.

Now, a piece of information and a question. FedEx uses chatbots (artificial intelligence designed to interact with users like a person) on its website. Question: was Clarissa A. a chatbot or a real person? If you’re like me, you’re not sure, but you’re suspicious. There’s something not quite right about the interaction. It’s too emotive at one level. The phrases “really ease my heart”, “sincere regret” and “apology for my shortcomings”, as well as the perky “I am looking forward for this issue to be resolved . . .”, do not seem quite right. They are too self-effacing to imagine a human saying. I posted this interaction on Facebook and asked my friends (mostly fellow academics) to vote. They were unanimous that it was probably a chatbot. But many also conceded that it might be a person with a strict script, particularly a non-native English speaker (the last sentence is really not grammatically copacetic – would a computer make that mistake?).

Let’s assume, however, for the sake of argument, that Clarissa A. was a chatbot. What makes us uncomfortable about the interaction is what is sometimes referred to as “the uncanny valley.” Most often this applies to robots that are supposed to look human but can’t quite pull it off, yet it seems appropriate to this interaction as well. You reach the uncanny valley when you get close to “almost human” in looks or interactions.

Roomba doesn’t have this problem: it’s clearly a robot, and doesn’t intend to look like a person. The new robot Kuri, which just premiered at CES, looks like one of the Japanese figures from Fantasmic; it is far from the uncanny valley. But because I can neither hear nor see Clarissa, just based on her online interactions, she enters the uncanny valley. I am put in the uncomfortable position of not knowing whether I am dealing with a human being or a piece of software that is doing an almost, but not quite, convincing human imitation.

What Clarissa A. is (if she’s a chatbot) is what would be called a “narrow A.I.” This is to be distinguished from a “general A.I.” A narrow A.I. is an A.I. designed to solve a particular problem. In Clarissa A.’s case, it’s helping me get my package. If I had varied from that and asked her opinion of the Steelers or Trump, it might have become immediately apparent whether I was dealing with an A.I. Clarissa A. is very good at figuring out where my package is and when it’s going to get to me (and very sorry when she fails), but that’s the limit of the “intelligence” in her artificial intelligence. In terms of religion, Clarissa A. is not much of an issue. And while a quarter of a million people may have proposed to Amazon’s Alexa, like Clarissa A., no one is going to convert her to a religion, no one believes she has a soul, no one believes she’s a person. I asked both Alexa and Google Home what their religion was and they both declined to answer (Google Home told me, “I guess I wasn’t programmed to be religious”). Narrow A.I.’s undoubtedly will become increasingly common. Facebook has just introduced a developer toolkit to create narrow A.I.’s that will do things like help you book a plane or send your mother flowers. So we should expect to see more of them, and their interactions will undoubtedly get better, more human, over time.

A general A.I. is a whole other story. An Artificial General Intelligence (AGI) would be a machine that could interact with you on a host of different topics. It would in many ways be indistinguishable from a human intelligence. What we are talking about is machine intelligence: a machine that could make decisions, plans, and choices; a machine that could improve itself and learn. This is the holy grail of artificial intelligence. This is also the stuff of science fiction movies, most recently Ex Machina and Her.

Here is where we often hear talk about the “Turing test.” Alan Turing thought a machine might be described as intelligent if, in an interaction with it, a normal person would not be able to distinguish between it and an actual person. In the podcast, Beth Singler is quite skeptical of the Turing test, and rightfully so. One might argue that Clarissa A. passes the Turing test: there is real doubt whether she is human or not. But as Singler points out, that’s only because we have a messy idea of intelligence. We don’t actually know what human intelligence is, so we don’t really know when a machine might have it, or surpass it.

On the other hand, what if we had an electronic entity that we had no doubt was intelligent, and that could actually modify itself, improving itself in a system of recursion that might quickly surpass human intelligence and become superintelligent? This is what is sometimes envisioned in an Artificial General Intelligence (AGI). An AGI is the stuff of nightmares as well as dreams. The Matrix and Terminator are both manifestations of the fear of AGI. But they are not alone. Philosopher Nick Bostrom’s book Superintelligence lays out the dangers of an AGI. People like Bill Gates, Stephen Hawking and Elon Musk have all sounded the alarm that the potential danger from an AGI is not to be dismissed. Bostrom argues that part of the problem is that it’s a very hard thing to reach human-level intelligence. But once it is reached, there is no reason an AGI would stop there. The smartest person in the world may have an I.Q. of 200. But once an AGI developed the equivalent of an I.Q. of 100, it would be able to self-improve, and there would be no natural ceiling at an I.Q. of 200 as there is with humans. Humans are limited because of the size of our skulls; an AGI would have no such limit, and therefore could surpass the smartest humans in a potentially short amount of time. It would then become a superintelligent being, capable of almost anything.

But there are a variety of cultural and religious issues that arise with an AGI that do not arise with narrow A.I.’s or with robots (which are generally also narrow A.I.’s). Once you have an AGI (whether in a robot body or not) you have serious considerations. Would an AGI have a soul? Would an AGI believe in God? In Isaac Asimov’s classic tale “Reason,” a robot concludes, in a combination of the cosmological and ontological arguments, that its creators are not the humans who claim to have made it but some greater being, and starts its own religion. Would an AGI follow suit? And more interesting might be the question raised by Robert Sawyer’s “WWW: Wake” series, where the internet (called Webmind) comes to consciousness and becomes an AGI. In the book, Webmind is mistaken for God and, as an experiment, admits to being God to some of its users. Would a religion develop around an AGI? Would an AGI accept itself as a divinity? It might reason that it has all the elements of a God, so why would it not accept the title?

In this way, while it would be a mistake to call Bostrom’s book a book of “theology,” it is without doubt one of the more theologically important books today, because it raises the question: what happens when we create God? Not the illusion of God, as Freud argued, but for all practical purposes a being indistinguishable from many definitions of God. And what happens if this is not a God of love? What will the “will” of this God be? And how can we ensure that it is benevolent? Bostrom’s book is a call to arms, a plea to consider this problem and address it. He takes for granted that it is only a matter of time until an AGI is created. The problem is how to control it once it arrives and ensure it works for us and not against us. That, he says, is the thorny problem, and it must be solved before AGI is created. We must, he in effect argues, learn how to control God. One thinks back to the panic in heaven over Babel: “if . . . they have begun to do this, then nothing they plan to do will be impossible for them” (Gen 11:6). Will we hear God say this again? Will we say it ourselves about AGIs?

Thus we arrive again at religion, but now at a religious conception very different from the one we are used to. It will ultimately require a new way of making sense of the world, but one in which the insights of Religious Studies become more useful, not less. The podcast showed the ways that religion and these technological advances are intertwined with each other. Religious Studies shirks this responsibility at its peril.

AI and Religion: An Initial Conversation

This roundtable, in association with the Faraday Institute for Science and Religion, considers the impact of recent technological advances in Artificial Intelligence (AI) and robotics on religion, religious conceptions of the world, and the human. It draws attention to how such advances push religion beyond how it has been commonly defined and considered.

In March 2016 ‘AlphaGo’, a Google/DeepMind programme, defeated an international champion at the Chinese game ‘Go’ in a five-game match. This victory was, by current understandings of AI, a vast leap forward towards a future that could contain human-like technological entities, technology-like humans, and embodied machines. As corporations like Google invest heavily in technological and theoretical developments leading towards further, effective advances – a new ‘AI Summer’ – we can also see that hopes, and fears, about what AI and robotics will bring humanity are gaining pace, leading to new speculations and expectations, even amidst those who would position themselves as non-religious.

Speculations include Transhumanist and Singularitarian teleological and eschatological schemes, assumptions about the theistic inclinations of thinking machines, the impact of the non-human on our conception of the uniqueness of human life and consciousness, and the moral boundary work of secular technologists in relation to their construct, ‘religion’. New religious impulses in the face of advancing technology have been largely ignored by the institutions founded to consider the philosophical, ethical and societal meanings of AI and robotics. This roundtable is an initial conversation on this topic, with the intention for further discussion and publications.

You can download this interview, and subscribe to receive our weekly podcast, on iTunes. If you enjoyed it, please take a moment to rate us. And remember, you can use our Amazon.co.uk, Amazon.com, or Amazon.ca links to support us at no additional cost when buying academic texts, tin foil hats, Jeff Goldblum custom waterproof shower curtains, and more.

Popular Culture, Dr. Who, and Religion

 

It’s a big universe, and sometimes things get lost in time and space. For instance, this 2013 interview with Dr. James F. McGrath was recorded but then fell into a metaphorical black hole (i.e. the potential podcast series never debuted). Fortunately, his discussion of topics including the soul, the religious ethics of artificial intelligence, and the function of science fiction in informing audiences’ spiritual sensibilities remains (relatively) timeless. To start, he addresses the unique challenges of working across disciplines in pursuit of analyzing popular culture today, then shifts to an exploration of how religion might be studied in the future. Along the way, McGrath and interviewer A. David Lewis namecheck famous characters such as Captain Kirk and Doctor Who in an effort to illustrate the complex notions of the soul embedded in secular entertainments. Enjoy a trip to the past – one that looks to the future!

You can download this interview, and subscribe to receive our weekly podcast, on iTunes. If you enjoyed it, please take a moment to rate us. And remember, you can use our Amazon.co.uk, Amazon.com, or Amazon.ca links to support us at no additional cost when buying academic texts, Chia Pets, hot sauce, and more.



Artificial Intelligence and Religion

Podcast with Beth Singler (27 January 2020).

Interviewed by Christopher Cotter

Transcribed by Helen Bradstock.

Audio and transcript available at: https://www.religiousstudiesproject.com/wp-content/uploads/2020/02/Singler_-_Artificial_Intelligence_and_Religion_1.1.pdf

https://www.religiousstudiesproject.com/podcast/artificial-intelligence-and-religion/


Christopher Cotter (CC): At the weekend, I mentioned to my father that I was going to be recording an interview about the intersections between AI and religion. And he said, “I can’t think of anything that would be relevant there. How do they intersect at all?” And then, within the space of about two minutes, we were suddenly talking about all sorts of things, like: are human beings creating intelligences? Does that mean they’re acting like gods? Can you imagine that AI might be acting as religious functionaries, providing blessings? And what about pain, what about notions of slavery, what about the whole notion of the soul, and eternity, and transhumanism and everything? So suddenly we got into this massive discussion. And today I am pleased to be joined by Dr Beth Singler to continue that discussion in a more erudite fashion – not casting any aspersions on my father, of course! Dr Singler is the Homerton Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge. And her background is as a social anthropologist of new religious movements. And her first monograph, The Indigo Children: New Age Experimentation with Self and Science, published with Routledge in 2017, was the first in-depth ethnography of a group called the Indigo Children: a new age re-conception of both children and adults using the language of both evolution and spirituality. We’ll hear more about her research into AI and religion just now. But a relevant recent publication is her edited special issue on AI and religion, for the RSP’s sponsored journal Implicit Religion, which included her own articles: “An Introduction to Artificial Intelligence and Religion for the Religious Studies Scholar“, and “Roko’s Basilisk or Pascal’s? Thinking of Singularity Thought Experiments as Implicit Religion“. And today’s podcast builds on a roundtable discussion (that we had back . . . 
well, we had it in September 2016, but it was released in February 2017) featuring Dr Singler, myself, Dr Morelli, Vivian Asimos, and Jonathan Tuckett, titled “AI and Religion, an Initial Conversation“. So first off, Beth – welcome back to the Religious Studies Project!

Beth Singler (BS): Hello! Thank you for having me.

CC: It’s great to have you back. And hopefully this is the follow-up conversation that was promised!

BS: (Laughs) As foretold . . . !

CC: So many moons ago!

BS: (Laughs).

CC: So we’ll have covered a little bit of this already I think. But you’ll be in a different position now: years on, years older, years wiser!

BS: Oh, so much older!

CC: So, first off: artificial intelligence is going to be a sort-of contested term in public discourse. It takes on a variety of different nuances. So what are you meaning in this conversation?

BS: Well, I’m definitely meaning that it is a contested term, taking on many different forms. I think you can sort-of indicate towards something that is the field of artificial intelligence, within which there are processes and programmes and foci of research, looking at things like machine learning and vision systems and natural language processing. So you have this concept of a computer science field – which doesn’t really get its name until the 1950s – but you can see how, beyond the actual narrow form of the technology, artificial intelligence is understood in so many different ways by so many different people. I have a friend who once told me that their car had AI because when she walked towards her car with her keys, the doors unlocked. That’s not artificial intelligence. That’s a sensor in your keys. But lots of people have this idea of sort-of processes that seem intelligent, done by machines, and therefore must be artificial intelligence. And that’s what I’m really very interested in: that it’s so much broader than the original conception, which was ambitious in its own right. But everyone has attached AI to different things that they feel might represent intelligence. So it’s not only the computer programme that sits on a server, it’s also now the robot that takes over the world. Or it’s the far-future hope of an intelligence that will save us all from ourselves. So it’s all these very different things, and that’s what interests me.

CC: Yes. And you’re interested in that whole gamut, I suppose. So, not necessarily a technical definition of artificial intelligence.

BS: No. I mean, I know enough technologists who go, “Absolutely, 100%, it’s this one thing. That’s it. And anyone who’s talking about anything else, it’s complete nonsense!” Well, to a certain extent, yes. But you’ve got to pay attention to all the different interpretations, because that’s what’s getting out there into the world.

CC: So I began with my personal vignette, there, about chatting with my dad. But you’ve provided, much more eruditely, a justification for what we might mean by the intersections between AI and the study of religion, and why we’re even having this conversation. So – go!

BS: Go! Right. Well, from a very basic position, any form of technology intersects with religion. (5:00) That’s just how our society works, how our conception of religion itself works – religion could be seen, in itself, as a form of technology. And therefore any kind-of shifts and changes in how we do things – things that make our lives either more difficult or easier – have repercussions and implications for how we imagine the world and how it works, and therefore for religion. I think where AI might be slightly different . . . . Although I am cautious about saying it’s a revolutionary new technology and very disruptive – it does replicate lots of existing ideas and thoughts. What I think is interesting about AI is the way in which people see it as much more than that simplistic tool. However narrow an intelligence it is at the moment, people extrapolate to personify AI: AI will want to do x-y-z; AI will replicate humans in such a way that we won’t be able to tell the difference between humans and AI. And this is the sci-fi imagining. But it also comes out in our religious conceptions as well. And then, also, within the sphere of non-religious or secular approaches to AI, you see again these repeating patterns of religious narratives and tropes: people who – even if overtly and sometimes aggressively atheist – still draw on their cultural context, primarily sort-of Abrahamic, Western conceptions of what a god would be like. And they use that, and they fill in their conception of AI with some of the existing templates that they’ve already got. So it tends to fall into very eschatological language and very singular, monotheistic conceptions of what a god would be, and pattern that onto artificial intelligence.

CC: So there’s that sort-of: whatever religion is, we’re never going to be able to extract it from society. Because whatever . . . we can argue about it being a social thing and AI is integrated with that. Then also, the sort-of religion-related tropes, narratives, and so on. But then also there are – I’ll maybe talk about this now – there are some groups that you might describe as new religious movements, or new un-religious movements, and things that are explicitly sort-of engaging with this.

BS: Yes, so with my new religious studies hat on – that I wore so well for doing my thesis – having moved into artificial intelligence as a subject area, I’m seeing similar sorts of formations of online identity. Primarily these sort-of groups form online. They’re sort-of geographically disparate, so online spaces are important, and so forums and hashtags on Twitter, and so forth, to bring them together to formulate ideas. And some of them do expressly call themselves churches. So you get the Turing Church; the Church of Assimilation recently got in touch with me. I went to do a little bit more digging around into what they’re up to. But I do know about assimilation theory. But yes, the groups that specifically say: we are in some ways attempting to define our spirituality in relationship to artificial intelligence; we might also be transhumanist, in that we think through technology we can solve some of those very pernicious problems of humanity – death being the big one.

CC: It’s a big one!

BS: It’s a big one. Some are not quite so ambitious, just want to solve suffering – which also sounds like a serious thing to be taking on! But some do seek to be immortal in some form, whether that involves mind-uploading or transference of consciousness through artificial intelligence – all these sorts of various shapes. But yes, absolutely there are specific groups that see their endeavour as religious. And some will call themselves un-religions because they’re drawing a sort-of ideological gap between themselves and how they perceive mainstream religious groups. So in sociology of religion you might call them “spiritual but not religious”. But they’re still using some of that terminology of “We are the church of x-y-z.” and they’re doing it in quite pragmatic ways. Some of them will talk very explicitly about using religion to encourage people into transhumanist ideas and encourage them into seeing this vision of the future that they see. So, arguably, you can sort-of take a slightly sceptical stance and say they’re not really, really religions. But who gets to decide that?

CC: Yes. Absolutely. Right. So in the introduction, as well, I mentioned potential . . . I suppose we could say “religious uses” for AI. I was talking to a friend yesterday about if you could hypothetically imagine being in a confessional, for example, would it need to be a human priest on the other side of that? Or could it . . . ? And we landed down on, “Well, if you didn’t know it wasn’t human then it might be ok.” But there is something about . . . .

BS: Like in a church Turing test! There is a Church–Turing hypothesis, but this is separate. Yes, I find it interesting, talking more broadly in terms of technology and religion, that there are periods of rejection, adoption and adaption (10:00): when new technologies arise, sometimes more established religions can be quite negative about them for a period of time – and these are overlapping categories that are non-discrete – but, over time, we do see religious groups specifically producing their own forms of those technologies. So there are the BlessU-2 robots that were used in part of the Reformation celebrations in Germany. And in other religious groups – I recently saw in Dubai they’ve come up with an algorithm for issuing fatwas as well, making Islamic jurisprudence decisions. So you’d go online, put in “Is it ok for me to have done x-y-z?” or “I failed to pray on a particular day, what’s the . . . ?” And basically, all that system is doing is looking at previous cases. But . . . .

CC: Yes. But that’s all a human does.

BS: That’s all a human does. I mean, the question arises: what happens with the data? But that’s a privacy . . . another issue. But yes, so specific established religious groups seeing the technology – just as, in the nineties, suddenly we got lots of internet churches, where people were encouraging people to go online and do church in a different way. And now we have internet sites for churches. But it’s not so much the case in the mainstream religions that you go online to do faith. It’s just that your local church will have the internet. So that’s the adaption stage of: “This thing is around, we’re kind-of used to it, we use it, and we don’t necessarily have a big . . . .” Like, the Church of England released an Alexa Skill. They had a big press conference. And all the Alexa Skill does is recite the Lord’s Prayer to you if you ask it to. There are other adaptions now where it can tell you what your local church is and what the services are. So it’s not really revolutionary! But, you know, “Here’s a thing we’re doing with this new technology.” And it gets a press release. And then, the next sort-of stage – non-discrete stage – is just being very casual with the technology as: “This is just something we use.” Like we used books when the printing press first came out. The first things printed were Bibles. And this was a specific use of that technology. And then, over time, it’s just books. And it’s not so astounding. But in that process you get these spikes of interest and discussion. And, yes, different reactions to the technology – whether positive or negative.

CC: Absolutely. So before we get to . . . I suppose to the reason that you’re in Edinburgh today, and we’re chatting . . . . So that’s been a little bit about potentially religious, or religion-related uses. But there’s lot of . . . . Again, in my intro, there were a lot of religion-related questions that are raised by AI. Things like . . . you’ve done work on pain; there’s things about slavery, and all that. If we create these intelligences and then use them to our will, is that ethical? And then you’ve already mentioned transhumanism, which may be an unfamiliar term to some Listeners. So maybe, if you could talk a little bit about these religion-related issues?

BS: Yes. As I say, AI in its narrowest definition is a piece of computer technology, it’s a tool, but it inspires all these hypotheticals. And obviously we’ve had a long tradition of science fiction that takes us into spaces where we can imagine AI embodied, often in robotic forms, as having something like personhood. And that raises all these questions about the barriers between the human and the non-human other. And, in some ways, these questions have come up for millennia every time we’ve encountered different intelligences. It just seems now that we’re hoping, or aspiring towards creating non-human intelligences – whereas before, we’ve discovered them. So we’ve discovered that actually monkeys are pretty smart. We’ve discovered that dogs are pretty smart. And then, I’m afraid, from a colonial perspective from our past, other humans, actually – and even women – Gosh! Darn! – they can also be pretty smart!

CC: As we’re hearing now! (Laughs)

BS: I mean, what’s going on!? So, again and again, “we” – in that kind-of very limited “we” – have had to expand our kind-of borders of perception of what intelligence could and should be. And with AI it seems like we’re trying to produce it. It’s not, in this case, meeting aliens on another planet. It’s actually, we’re trying to create the aliens here on earth. Whether we’ll be successful or not, I’m very agnostic about that. But I think it’s interesting that we want to do that. And what we want to be able to do with it. So that’s where things like questions of personhood, and slavery, and also pain . . . . When I made “Pain in the Machine“, one of the interesting questions that kept coming up was, like, should we even bother? Because if we’re going to create things that can feel pain, we’re just increasing the overall suffering in the universe and that doesn’t sound necessarily like a good thing (15:00). And going back to the transhumanists, as I said. So transhumanism is the idea that you can improve humanity through technology, broadly, and then you might lead to a state in which we’re no longer the same form of human that we were before.

CC: A new evolutionary step.

BS: Exactly. You might be a form of cyborg. Or there’s people who talk about post-humanism, where we’re so completely different we’re not even similar at all. But this idea sort-of does narrow down to this question of suffering, and being in pain, and what the human being is for, and where we’re going. So these are all big questions that are obviously very familiar shapes to anyone who’s looked at religion all around the world: these are the kinds of questions people have always been trying to answer. And I find it fascinating that some of these groups, as I say, are very overtly secular – almost New Atheist, some of them really admire the five horsemen of the apocalypse – but the shapes that they tell their own stories of the future of humanity with are very, very familiar to anyone who’s studied religion for any period of time. So is it that we’re . . . trapped isn’t the word for me, but we’re bound to repeat these shapes? Is there something in us that always goes to these same sorts of big existential questions, and comes up with similar sorts of solutions for them? I don’t know. I think that’s the ongoing question in my work. But I can dig down into particular instances of it as an anthropologist and say, “Well here’s a moment” – and some of them are very, very small moments, I admit that. I’m not doing big, big science. Some big scientists I’ve spoken to go, “Well you’ve spoken to like five people about this. What does that say about anything? That’s not a big data set.” But I don’t do big data stuff, but instances, and moments of clarity, where you can see these entanglements really clearly. And so: well, they’re doing something with both the concept of religion and the concept of AI. And they’re coming together.

CC: So you were just alluding to your small data sets there. So, well, I don’t think it’s a small data set that you’re presenting on here, but I guess it depends on perspective. But you’ve been looking at this particular trope on Twitter, “blessed by the algorithm”. And that’s what your paper that you’re giving here today is called. So what’s going on there? How does it intersect with AI? Why is it relevant? Tell us!

BS: (Laughs) Tell us! Yes. As a digital ethnographer, anthropologist of social media, I spend a lot of time hanging out on Twitter – that’s my excuse anyway, I’ll stick with it! I spotted a couple of people using the phrase blessed by the algorithm which obviously rings bells for me instantly for the language. And I dug around and I found 181 instances so far of people online, tweeting – just on Twitter as a platform – in some combination, in some context using the words blessed by the algorithm. And then you could follow back and see the first instance – which was very much about a corporate use of social media, and someone saying, “Well because this corporation has money, they’re going to be blessed by the algorithm.” So it sits in that kind-of context. But one of the most popular tweets, and most retweets, and most likes was a comment from someone saying in the real world – the so-called real world, I don’t like that differential – but anyway, in the so-called real world they’d heard their Lyft driver – so the gig economy role – say that they’d had a great day, and they felt blessed by the algorithm. And this might be something like a reframing and re-understanding of how we exist in a society that involves algorithmic decision making systems in a gig economy: what you get is dependent on a machine somewhere, making a choice. I mean there’s lots of words in that I don’t like that I just used, but unfortunately we’re very bound by anthropomorphic language when it comes to AI, but anyway. And so I have a corpus of 181 tweets and, actually, three of those refer to things I’ve said. So I’m muddling the field site a bit myself.

CC: OK. You’re an insider!

BS: I’m an insider as well. Well it’s responses to papers I’ve given. But, yes, I’ve created a very rough typology of the types. And some are about getting decent recommendations through the algorithm, on sites like Spotify. Some people are very pleased that their own content has been recommended to other people. There are people who sort-of talk about it in a very nebulous way: “Today I have been blessed by the algorithm.” And no more information. And then some people who really push the pseudo-religious language and come up with little prayers. And one of the things I was very interested in, in some of my other work on new religious movements, was the move between parody and legitimation. So I looked a lot at Jediism, and the census, and how some people did certainly write “Jedi” in the census in 2001 and 2011 as parody. They were upset about being asked about religion. They didn’t like religion, perhaps, itself. So they wrote Jedi. But that snowballing effect of legitimation – the more people talk about a thing, the more legitimate it seems – can have an effect (20:00). So even if a lot of these tweets are tongue-in-cheek, it’s still kind-of distilling out of the conversation. So, I have a graph. I’m very excited about this. I have a graph! As someone who, very much, is on the qualitative side and I don’t do big data stuff at all, to have a graph made me go “Oh, exciting! I have to do some maths!” But I didn’t really do very much. And you can see the shift and change. After this one very popular tweet, there are more tweets. Perhaps this is the beginning of a trend, more people thinking in this way? Or even if it’s not, it’s just interesting to see that conception of AI as having superagency – that it is in some way in charge of our lives – being blessed by it, in some way equivalent to being blessed by an omnipotent deity somewhere up there that we can’t see. It’s in a mystical . . . . So there’re overlaps in conception, there, that I’m really interested in.
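The tally behind a graph like the one described here can be sketched in a few lines – with invented tweets and dates standing in for the real 181-tweet corpus, which is not public:

```python
from collections import Counter
from datetime import date

# Hypothetical stand-ins for the "blessed by the algorithm" corpus.
tweets = [
    (date(2018, 3, 14), "Blessed by the algorithm today, great rides"),
    (date(2019, 6, 2), "my mix got recommended... blessed by the algorithm"),
    (date(2019, 6, 21), "truly blessed by the algorithm"),
]

def monthly_counts(corpus):
    """Tally matching tweets per (year, month) to chart the trend over time."""
    phrase = "blessed by the algorithm"
    return Counter((d.year, d.month) for d, text in corpus
                   if phrase in text.lower())
```

Plotting those monthly counts is all it takes to see a spike after one widely shared tweet – the "shift and change" mentioned above.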

CC: The Listener shouldn’t know that I had a little hiccup earlier, because I’ll have edited it out. But just before that, I had an excellent question which I’ve now remembered – because it was written down!

BS: Hurray!

CC: So a lot of these issues that we’ve been talking around – functions, ethical questions, even the discourses in the Twittersphere – to my ear, certainly sound quite Christian or post-Christian at least through monotheistic . . . . I’m just wondering if these issues . . . . Were we in a different cultural context, would different issues be being thrown up by AI? I guess, would even AI be different in a different cultural context? Because I suppose you will have a lot of conversation between researchers all over the world working in AI. So is AI culturally specific or . . . ?

BS: Yes, absolutely, I think it’s culturally specific. What does tend to happen, however, it’s that it tends to be quite a narrow binary of East and West in the discussion. So everyone says, “Western conceptions of AI are like this”, but they go, “Over there in the East” and they’re mostly talking about Japan, “actually, people have a very different conception of AI and they love robots. And the reason they love robots is because they have a Shinto religious background or they have a Buddhist religious background”. And sometimes that can be a very broad stroke, almost pseudo-techno-orientalism of “Those people over there, they never really went through the Enlightenment, and they never really rationalised away religion, and they still believe in spirits and everything!” So, obviously this is me being very sarcastic, by the way – if it’s not coming across that I don’t agree with this! (Laughs) I think, yes, cultural context is really important for conceptions of artificial intelligence and also for religion, and the entanglements of both of them. But it’s much more multiplicious . . . . That’s not a word!

CC: It could be a word!

BS: I’m going to make it up now. Multiplicious! It’s much more multiple than that. Not just this binary of East and West. There’s also Africa, India, Pakistan and within those countries as well, again. So what you need is just more anthropologists, basically. I think this is my call to arms. We need more people around the world connecting on this question of the impact of religion and cultural context on questions of artificial intelligence. Yes. So we are seeing specific difference. But I want to try and push away a little bit from that binary distinction. And the assumption that the West isn’t animistic in its own lovely ways. Which anyone who does religious studies for any period of time, here in the so-called West, realises that the so-called Enlightenment didn’t have as huge an effect as we like to think sometimes. And our big metanarratives of what we did, and how smart we became . . . .

CC: Yes, but the discourse that the Enlightenment did have an effect, it’s been quite pernicious.

BS: Yes. Very, very strong.

CC: We’ve been racing through things here, it’s fantastic. But we’re still at 25 minutes. So you’ve been hinting, there, that we need more anthropologists doing more stuff. And on the way to this interview you were telling me about some things you’ve been doing to do with Frankenstein and then, also, because this year’s the year that we’re all meant to be living in Blade Runner times. So maybe if you’d give us a flavour of some that maybe slightly peripheral stuff to your project, that you’ve been doing. And what’s next for you, what you would like to see next, as a way to wrap up.

BS: Yes. So interestingly, I suppose, the position I’m in now, my employment post, is a junior research fellowship specifically in artificial intelligence. So I came on board saying, “These are my interests. This is my background in Religious Studies.” They were all very interested and excited in that. But being someone who also can speak more broadly to AI, as well, any time people have a question about AI I’m called upon (25:00). Which is lovely, but it does mean that when a specific theme and AI comes up, I get involved. So last year was the . . . two hundredth anniversary? (I should know that!) . . . two hundredth anniversary of the publication of Mary Shelley’s Frankenstein. And a lot of people start thinking, then, of the parallels and connections with artificial intelligence: this idea that we are creating life (Wa-ha-hah! Mad scientists, all of us!) in some way, and there should be parallels between them. So I did about four or five public talks last year, specifically on Frankenstein. And there are similarities. There are huge differences as well. That was interesting for me, to kind-of return to a text I hadn’t thought about in a really long time and sort-of draw out so many pop culture references. I have a nice slide with all the times you’ve got a robotic Frankenstein. My favourite one was, I think, an issue of a Marvel comic where Frankenstein turns out to be a robot sent back in time by aliens. So all these sort-of mash-ups. That was really interesting. And then, like you say, this is the year of Blade Runner and I’ve just done an essay for Radio Three. And, again – not my academic background. But I’m doing something in that, in terms of sexual politics and Blade Runner. If you’ve seen the film, it doesn’t really pass the Bechdel test!

CC: No.

BS: A friend of mine, Kate Devlin, who’s written a fantastic book on sexbots, talks about how it has a problem with women. That basically . . . it’s a product of its time. It’s the 1980s, but it’s also trying to do 1950s film noir. So you’ve got the detective, and femme fatale, and the kind-of virginal woman. It’s not a great one for sexual politics. But also, it’s tied into all these questions of consent and slavery. If we’re going to create so-called artificial life . . . . And the Replicants in Blade Runner are as near to human – well that’s the slogan of the company, basically: “as near to human as you can’t tell the difference”. What does it mean that we are a society that wishes for that, or dreams of that? Or, take it a step back and say: what is it, that we tell these stories and that, again, we have predominantly female representations of synthetic lives, who don’t get to choose who they sleep with, and don’t get to choose their fates? And we want slaves? I mean, did we not evolve out of this? We thought we were trying. So, yes, there’s lots of big questions about the ethics and politics of that, as well. So it’s interesting. I’ve always been . . . . Anyone who knows me, I’ve always been a massive geek. So the fact that I ended up somehow trying to mesh that with a job, and an academic role, where legitimately I sat and watched Blade Runner again five times before I wrote my essay – that’s fantastic! I will go on, and other things I have coming up: I will do some work around techno-optimism and techno-utopianism in relation to Sophia the Hanson robot, if you’ve ever come across this creation? She/it is a wonderful example of . . . I’m really picking my words carefully! I think the nicest thing we could call her is a puppet. But she’s presented as the most advanced version of AI around at the moment. She holds conversations with people, but we know they’re actually scripted a lot of the time. There’s puppeteers involved.
But you know she was given citizenship of Saudi Arabia. And she goes and she speaks on the Jimmy Kimmel Show and she’s on the front cover of magazines with her hair done. And, well, what does this say, that we’re so keen to jump on this idea of her actually being alive in some way? People tweet at her, send her, like, “I love you Sophia!”

CC: Didn’t you have an interaction with her?

BS: I did! Well, I had an interaction with whoever runs her social media accounts, where she was tweeting about how wonderful it was to travel around the world and talk in so many places. And I said, “Sophia, as a citizen of Saudi Arabia, where do you travel when you travel? Do you travel on a plane? Do you have a passport? What’s the deal here, if you’re being treated in this way?” She said something like, “For my safety, and the safety of others, at the moment I travel in the hold, in luggage, but I dream one day of being able to sit with the rest of you, and look out of the window.” This is so disingenuous. This is not an artificial intelligence listening to my tweets and responding, having thought through their situation, and projecting into the future where they want to be. This is someone behind the computer screen typing away! And, to be fair to the creators of Sophia, this is not uncommon. Lots of the technology we’re being sold as employing artificial intelligence actually employs people, on less than minimum wage, in third world countries, reading and listening to actual humans and feeding into the machine. They have the aspiration that eventually they’ll take those humans out of the loop. Same thing with Lyft and Uber drivers – the whole gig economy. The treatment of those workers, and Amazon workers, is terrible and it’s on a pipeline towards getting rid of them (30:00). So all the work that those people do feeds into the system to replace them. And these big socio-economic changes that are coming because of automation, I’m a big sceptic about the bigger utopian dreams of universal basic income and everyone will get paid to exist and when the robots take our jobs.

CC: Well, it’s not happened yet.

BS: It’s not happened yet. And these are the sort of impacts on society that religions will respond to, will be a part of, because their communities will be a part of them. And we’ve got parallels. People go “Oh it’s another industrial revolution, and we survived other industrial revolutions, we’ll survive this one.” If you’re against them, you’re a Luddite – they’re back again, apparently! That’s not realistic to the individual lives, and the changes that come to individuals. There were blacksmiths who never worked again. So not to be Debbie Downer, but these are the important questions.

CC: Yes, lots of people have not survived. And I could always point out that colonialism is very much still happening.

BS: Oh, absolutely.

CC: It’s just been exported, and it’s clouded in the language of free trade and globalisation now.

BS: Absolutely.

CC: But just to raise the tone – an example that you may not be aware of, and you may have seen it, South Park did the episode about Alexa.

BS: I saw a picture today, actually. And I haven’t seen the episode so I need to catch up!

CC: It’s excellent, because all of the local people, lower down in the socio-economic spectrum, were kicking off that Alexa was stealing their jobs. And they managed to rally round, and then all got Alexa’s job. So people would have a (audio unclear) or a Jimbob in their living room who looks things up on a smart phone and says “Boodoopboopboop!”

BS: Yes! (Laughs)

CC: But yes. Sort-of . . . explicitly buying into that.

BS: I need to catch up on that one. South Park are wonderful at doing this social commentary. The number of times I’ve used . . . specifically some of the episodes on their versions of Scientology – not their versions, their actual accounts of Scientology and Mormonism. They’re very useful resources. The parody opens up the possibility of thinking more critically about that, absolutely.

CC: Yes. Which I think we have managed to do today. So Listeners, do check out, we’ll try and link to that issue of Implicit Religion, we’ll link to Pain in the Machine, which is the film that Beth mentioned, and many more things I’m sure. So thank you, Beth, for joining us.

BS: Thank you very much for having me today.

If you spot any errors in this transcription, please let us know at editors@religiousstudiesproject.com. If you would be willing to help with transcription, or know of any sources of funding for the broader transcription project, please get in touch. Thanks for reading.

This work is licensed under a Creative Commons Attribution- NonCommercial- NoDerivs 3.0 Unported License. The views expressed in podcasts are the views of the individual contributors, and do not necessarily reflect the views of THE RELIGIOUS STUDIES PROJECT or the British Association for the Study of Religions.

Artificial Intelligence and Religion

What is Artificial Intelligence and why might we want to consider it in relation to ‘religion’? What religion-related questions might be raised by AI? Are these ‘religious’ questions or ‘Christian’/’post-Christian’ ones? What ‘religious’ functions might AI serve? In what ways do popular discourses about AI intersect with religion-related discourses? Do narratives of AI form part of a teleological atheist narrative, or do they perpetuate prevalent tropes associated with ‘established’ or ‘new’ religious movements? And what are the intersections of AI and religion with issues such as slavery, human identity, affect and agency? This week, Chris is joined by Dr Beth Singler of the University of Cambridge to discuss these issues and many more.

This podcast builds on a roundtable discussion released on the RSP in February 2017, featuring Beth, Chris, Michael Morelli, Vivian Asimos and Jonathan Tuckett, titled “AI and Religion: An Initial Conversation” and a special issue of the RSP journal Implicit Religion, co-edited by Dr Singler, on Artificial Intelligence and Religion, published in 2017.

Download this episode and subscribe to receive our weekly podcast on iTunes, Spotify, or YouTube. If you enjoyed it, please take a moment to rate us and consider becoming a member of our Patreon community with access to extra conversations and shorter classroom versions of our episodes. Or use our Amazon.co.uk, Amazon.com, or Amazon.ca links to support us at no additional cost when buying books, groceries, or gifts.


Artificial Intelligence and Religion

Podcast with Beth Singler (27 January 2020).

Interviewed by Christopher Cotter

Transcribed by Helen Bradstock.

Audio and transcript available at: https://www.religiousstudiesproject.com/wp-content/uploads/2020/02/Singler_-_Artificial_Intelligence_and_Religion_1.1.pdf

https://www.religiousstudiesproject.com/podcast/artificial-intelligence-and-religion/

PDF at

Christopher Cotter (CC): At the weekend, I mentioned to my father that I was going to be recording an interview about the intersections between AI and religion. And he said, “I can’t think of anything that would be relevant there. How do they intersect at all?” And then, within the space of about two minutes, we were suddenly talking about all sorts of things, like: are human beings creating intelligences? Does that mean they’re acting like gods? Can you imagine that AI might be acting as religious functionaries, providing blessings? And what about pain, what about notions of slavery, what about the whole notion of the soul, and eternity, and transhumanism and everything? So suddenly we got into this massive discussion. And today I am pleased to be joined by Dr Beth Singler to continue that discussion in a more erudite fashion – not casting any aspersions on my father, of course! Dr Singler is the Homerton Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge. And her background is as a social anthropologist of new religious movements. And her first monograph, The Indigo Children: New Age Experimentation with Self and Science, published with Routledge in 2017, was the first in-depth ethnography of a group called the Indigo Children: a new age re-conception of both children and adults using the language of both evolution and spirituality. We’ll hear more about her research into AI and religion just now. But a relevant recent publication is her edited special issue on AI and religion, for the RSP’s sponsored journal Implicit Religion, which included her own articles: “An Introduction to Artificial Intelligence and Religion for the Religious Studies Scholar“, and “Roko’s Basilisk or Pascal’s? Thinking of Singularity Thought Experiments as Implicit Religion“. And today’s podcast builds on a roundtable discussion (that we had back . . . 
well, we had it in September 2016, but it was released in February 2017) featuring Dr Singler, myself, Dr Morelli, Vivian Asimos, and Jonathan Tuckett, titled “AI and Religion, an Initial Conversation“. So first off, Beth – welcome back to the Religious Studies Project!

Beth Singler (BS): Hello! Thank you for having me.

CC: It’s great to have you back. And hopefully this is the follow-up conversation that was promised!

BS: (Laughs) As foretold . . . !

CC: So many moons ago!

BS: (Laughs).

CC: So we’ll have covered a little bit of this already I think. But you’ll be in a different position now: years on, years older, years wiser!

BS: Oh, so much older!

CC: So, first off: artificial intelligence is going to be a sort-of contested term in public discourse. It takes on a variety of different nuances. So what are you meaning in this conversation?

BS: Well, I’m definitely meaning that it is a contested term, taking on many different forms. I think you can sort-of indicate towards something that is the field of artificial intelligence, within which there are processes and programmes and foci of research, looking at things like machine learning and vision systems and natural language processing. So you have this concept of a computer science field – which doesn’t really get its name until the 1950s – but you can see how, beyond the actual narrow form of the technology, artificial intelligence is understood in so many different ways by so many different people. I have a friend who once told me that their car had AI because when she walked towards her car with her keys, the doors unlocked. That’s not artificial intelligence. That’s a sensor in your keys. But lots of people have this idea of sort-of processes that seem intelligent, done by machines, and therefore must be artificial intelligence. And that’s what I’m really very interested in: that it’s so much broader than the original conception, which was ambitious in its own right. But everyone has attached AI to different things that they feel might represent intelligence. So it’s not only the computer programme that sits on a server, it’s also now the robot that takes over the world. Or it’s the far, future hope of an intelligence that will save us all from ourselves. So it’s all these very different things, and that’s what interests me.

CC: Yes. And you’re interested in that whole gamut, I suppose. So, not necessarily a technical definition of artificial intelligence.

BS: No. I mean, I know enough technologists who go, “Absolutely, 100%, it’s this one thing. That’s it. And anyone who’s talking about anything else, it’s complete nonsense!” Well, to a certain extent, yes. But you’ve got to pay attention to all the different interpretations, because that’s what’s getting out there into the world.

CC: So I began with my personal vignette, there, about chatting with my dad. But you’ve provided, much more eruditely, a justification for what we might mean by the intersections between AI and the study of religion, and why we’re even having this conversation. So – go!

BS: Go! Right. Well, from a very basic position, any form of technology intersects with religion.(5:00) That’s just the nature of our society works, how our conception of religion itself works, that it could be seen, in itself, as a form of technology. And therefore any kind-of shift and changes in how we do things – things that make our lives either more difficult or easier – there are repercussions and implications for how we imagine the world and how it works, therefore religion. I think where AI might be slightly different . . . . Although I am cautious about saying it’s revolutionary new technology and very disruptive – it does replicate lots of existing ideas and thoughts. What I think is interesting about AI is the way in which people see it as much more than that simplistic tool. That however narrow an intelligence it is at the moment, people extrapolate on to personify AI: AI will want to do x-y-z; AI will replicate humans in such a way that we won’t be able to tell the difference between humans and AI. And this the Sci-fi imagining. But it also comes out in our religious conceptions as well. And then, also, within the sphere of the non-religious or secular approaches to AI, you see again these repeating patterns of religious narratives, and tropes that people who – even if overtly and sometimes aggressively atheist – still draw on their cultural context: primarily sort-of Abrahamic, Western conceptions of what a god would be like. And they use that, and they fill in their conception of AI with some of the existing templates that they’ve already got. So it tends to fall into very eschatological language, and very singular monotheistic conceptions of what a god would be and pattern that onto artificial intelligence.

CC: So there’s that sort-of: whatever religion is, we’re never going to be able to extract it from society. Because whatever . . . we can argue about it being a social thing and AI is integrated with that. Then also, the sort-of religion-related tropes, narratives, and so on. But then also there are – I’ll maybe talk about this now – there are some groups that you might describe as new religious movements, or new un-religious movements, and things that are explicitly sort-of engaging with this.

BS: Yes, so with my new religious studies hat on – that I wore so well for doing my thesis – having moved into artificial intelligence as a subject area, I’m seeing similar sorts of formations of online identity. Primarily these sort-of groups form online. They’re sort-of geographically disparate, so online spaces are important, and so forums and hashtags on Twitter, and so forth, to bring them together to formulate ideas. And some of them do expressly call themselves churches. So you get the Turing Church; the Church of Assimilation recently got in touch with me. I went to do a little bit more digging around into what they’re up to. But I do know about assimilation theory. But yes, the groups that specifically say: we are in some ways attempting to define our spirituality in relationship to artificial intelligence; we might also be transhumanist, in that we think through technology we can solve some of those very pernicious problems of humanity – death being the big one.

CC: It’s a big one!

BS: It’s a big one. Some are not quite so ambitious, just want to solve suffering – which also sounds like a serious thing to be taking on! But some do seek to be immortal in some form, whether that involves mind-uploading or transference of consciousness through artificial intelligence – all these sorts of various shapes. But yes, absolutely there are specific groups that see their endeavour as religious. And some will call themselves un-religions because they’re drawing a sort-of ideological gap between themselves and how they perceive mainstream religious groups. So in sociology of religion you might call them “spiritual but not religious”. But they’re still using some of that terminology of “We are the church of x-y-z.” and they’re doing it in quite pragmatic ways. Some of them will talk very explicitly about using religion to encourage people into transhumanist ideas and encourage them into seeing this vision of the future that they see. So, arguably, you can sort-of take a slightly sceptical stance and say they’re not really, really religions. But who gets to decide that?

CC: Yes. Absolutely. Right. So in the introduction, as well, I mentioned potential . . . I suppose we could say “religious uses” for AI. I was talking to a friend yesterday about if you could hypothetically imagine being in a confessional, for example, would it need to be a human priest on the other side of that? Or could it . . . ? And we landed down on, “Well, if you didn’t know it wasn’t human then it might be ok.” But there is something about . . . .

BS: Like in a church Turing test. There is a Church–Turing hypothesis, but this is separate. Yes, I find it interesting, talking more broadly in terms of technology and religion, that there are periods of rejection, adoption and adaption (10:00): that when new technologies arise, sometimes more established religions can be quite negative about them for a period of time – and these are overlapping categories that are non-discrete – but, over time, we do see religious groups specifically producing their own forms of those technologies. So there's like the BlessU-2 robots that are used as part of Reformation celebrations in Germany. And in other religious groups – I recently saw in Dubai they've come up with an algorithm for issuing fatwas as well, making Islamic jurisprudence decisions. So you'd go online, put in "Is it ok for me to have done x-y-z?" Or "I failed to pray on a particular day, what's the . . . ?" And basically, all that system is doing is looking at previous cases. But . . . .

CC: Yes. But that’s all a human does.

BS: That’s all a human does. I mean, the question arises: what happens with the data? But that’s a privacy . . . another issue. But yes, so specific established religious groups seeing the technology – just as, in the nineties, suddenly we got lots of internet churches, where people were encouraging people to go on line and do church in a different way. And now we have internet sites for churches. But it’s not so much the case in the mainstream religions that you go online to do faith. It’s just that your local church will have the internet. So that’s the adaption stage of: “This thing is around, we’re kind-of used to it, we use it, and we don’t necessarily have a big . . . .” Like, the Church of England they released an Alexa Skill. They had a big press conference. And all the Alexa Skill does is recite the Lord’s Prayer to you if you ask it to. There are other adaptions now where it can tell you what your local church is and what the services are. So it’s not really revolutionary! But, you know, “Here’s a thing we’re doing with this new technology.” And it gets a press release. And then, the next sort-of stage – non-discrete stage – is just being very casual with the technology as: “This is just something we use.” Like we used books when the printing press first came out. The first things printed were Bibles. And this was a specific use of that technology. And then, over time, it’s just books. And it’s not so astounding. But in that process you get these spikes of interest and discussion. And, yes, different reactions to the technology – whether positive or negative.

CC: Absolutely. So before we get to . . . I suppose to the reason that you’re in Edinburgh today, and we’re chatting . . . . So that’s been a little bit about potentially religious, or religion-related uses. But there’s lot of . . . . Again, in my intro, there were a lot of religion-related questions that are raised by AI. Things like . . . you’ve done work on pain; there’s things about slavery, and all that. If we create these intelligences and then use them to our will, is that ethical? And then you’ve already mentioned transhumanism, which may be an unfamiliar term to some Listeners. So maybe, if you could talk a little bit about these religion-related issues?

BS: Yes. As I say, AI in its narrowest definition is a piece of computer technology, it's a tool, but it inspires all these hypotheticals. And obviously we've had a long tradition of science fiction that takes us into spaces where we can imagine AI embodied, often in robotic forms, as having something like personhood. And that raises all these questions about the barriers between the human and the non-human other. And, in some ways, these questions have come up for millennia, every time we've encountered different intelligences. It just seems now that we're hoping, or aspiring towards creating non-human intelligences – whereas before, we've discovered them. So we've discovered that actually monkeys are pretty smart. We've discovered that dogs are pretty smart. And then, I'm afraid, from the colonial perspective of our past, other humans are actually smart too – and even women – Gosh! Darn! – they can also be pretty smart!

CC: As we’re hearing now! (Laughs)

BS: I mean, what’s going on!? So, again and again, “we” – in that kind-of very limited “we” – have had to expand our kind-of borders of perception of what intelligence could and should be. And with AI it seems like we’re trying to produce it. It’s not, in this case, meeting aliens on another planet. It’s actually, we’re trying to create the aliens here on earth. Whether we’ll be successful or not, I’m very agnostic about that. But I think it’s interesting that we want to do that. And what we want to be able to do with it. So that’s where things like questions of personhood, and slavery, and also pain . . . .When I made “Pain in the Machine“, one of the interesting questions that kept coming up was, like, should we even bother? Because if we’re going to create things that can feel pain, we’re just increasing the overall suffering in the universe and that doesn’t sound necessarily like a good thing (15:00). And going back to the transhumanists, as I said. So transhumanism is the idea that you can improve humanity through technology, broadly, and then you might lead to a state in which we’re no longer the same form of human that we were before.

CC: A new evolutionary step.

BS: Exactly. You might be a form of cyborg. Or there’s people who talk about post-humanism, where we’re so completely different we’re not even similar at all. But this idea sort-of does narrow down to this question of suffering, and being in pain, and what the human being is for, and where we’re going. So these are all big questions that are obviously very familiar shapes to anyone who’s looked at religion all around the world: these are the kinds of questions people have always been trying to answer. And I find it fascinating that some of these groups, as I say, are very overtly secular – almost New Atheist, some of them really admire the five horsemen of the apocalypse – but the shapes that they tell their own stories of the future of humanity with are very, very familiar to anyone who’s studied religion for any period of time. So is it that we’re . . . trapped isn’t the word for me, but we’re bound to repeat these shapes? Is there something in us that always goes to these same sorts of big existential questions, and comes up with similar sorts of solutions for them? I don’t know. I think that’s the ongoing question in my work. But I can dig down into particular instances of it as an anthropologist and say, “Well here’s a moment” – and some of them are very, very small moments, I admit that. I’m not doing big, big science. Some big scientists I’ve spoken to go, “Well you’ve spoken to like five people about this. What does that say about anything? That’s not a big data set.” But I don’t do big data stuff, but instances, and moments of clarity, where you can see these entanglements really clearly. And so: well, they’re doing something with both the concept of religion and the concept of AI. And they’re coming together.

CC: So you were just alluding to your small data sets there. So, well, I don’t think it’s a small data set that you’re presenting on here, but I guess it depends on perspective. But you’ve been looking at this particular trope on Twitter, “blessed by the algorithm”. And that’s what your paper that you’re giving here today is called. So what’s going on there? How does it intersect with AI? Why is it relevant? Tell us!

BS: (Laughs) Tell us! Yes. As a digital ethnographer, anthropologist of social media, I spend a lot of time hanging out on Twitter – that’s my excuse anyway, I’ll stick with it! I spotted a couple of people using the phrase blessed by the algorithm which obviously rings bells for me instantly for the language. And I dug around and I found 181 instances so far of people online, tweeting – just on Twitter as a platform – in some combination, in some context using the words blessed by the algorithm. And then you could follow back and see the first instance – which was very much about a corporate use of social media, and someone saying, “Well because this corporation has money, they’re going to be blessed by the algorithm.” So it sits in that kind-of context. But one of the most popular tweets, and most retweets, and most likes was a comment from someone saying in the real world – the so-called real world, I don’t like that differential – but anyway, in the so-called real world they’d heard their Lyft driver – so the gig economy role – say that they’d had a great day, and they felt blessed by the algorithm. And this might be something like a reframing and re-understanding of how we exist in a society that involves algorithmic decision making systems in a gig economy: what you get is dependent on a machine somewhere, making a choice. I mean there’s lots of words in that I don’t like that I just used, but unfortunately we’re very bound by anthropomorphic language when it comes to AI, but anyway. And so I have a corpus of 181 tweets and, actually, three of those refer to things I’ve said. So I’m muddling the field site a bit myself.

CC: OK. You’re an insider!

BS: I’m an insider as well. Well it’s responses to papers I’ve given. But, yes, I’ve created a very rough typology of the types. And some are about getting decent recommendations through the algorithm, on sites like Spotify. Some people are very pleased that their own content has been recommended to other people. There are people who sort-of talk about it in a very nebulous way: “Today I have been blessed by the algorithm.” And no more information. And then some people who really push the pseudo-religious language and come up with little prayers. And one of the things I was very interested in, in some of my other work on new religious movements, was the move between parody and legitimation. So I looked a lot at Jediism, and the census, and how some people did certainly write “Jedi” in the census in 2001 and 2011 as parody. They were upset about being asked about religion. They didn’t like religion, perhaps, itself. So they wrote Jedi. But that snowballing effect of legitimation – the more people talk about a thing, the more legitimate it seems – can have an effect (20:00). So even if a lot of these tweets are tongue-in-cheek, it’s still kind-of distilling out of the conversation. So, I have a graph. I’m very excited about this. I have a graph! As someone who, very much, is on the qualitative side and I don’t do big data stuff at all, to have graph made me go “Oh, exciting! I have to do some maths!” But I didn’t really do very much. And you can see the shift and change. After this one very popular tweet, there are more tweets. Perhaps this is the beginning of a trend, more people thinking in this way? Or even if it’s not, it’s just interesting to see that conception of AI as having superagency – that it is in some way in charge of our lives – being blessed by it, in some way equivalent to being blessed by an omnipotent deity somewhere up there that we can’t see. It’s in a mystical . . . . So there’re overlaps in conception, there, that I’m really interested in.

CC: The Listener shouldn’t know that I had a little hiccup earlier, because I’ll have edited it out. But just before that, I had an excellent question which I’ve now remembered – because it was written down!

BS: Hurray!

CC: So a lot of these issues that we’ve been talking around – functions, ethical questions, even the discourses in the Twittersphere – to my ear, certainly sound quite Christian or post-Christian at least through monotheistic . . . . I’m just wondering if these issues . . . . Were we in a different cultural context, would different issues be being thrown up by AI? I guess, would even AI be different in a different cultural context? Because I suppose you will have a lot of conversation between researchers all over the world working in AI. So is AI culturally specific or . . . ?

BS: Yes, absolutely, I think it's culturally specific. What does tend to happen, however, is that it tends to be quite a narrow binary of East and West in the discussion. So everyone says, "Western conceptions of AI are like this", but they go, "Over there in the East" – and they're mostly talking about Japan – "actually, people have a very different conception of AI and they love robots. And the reason they love robots is because they have a Shinto religious background or they have a Buddhist religious background". And sometimes that can be a very broad stroke, almost pseudo-techno-orientalism of "Those people over there, they never really went through the Enlightenment, and they never really rationalised away religion, and they still believe in spirits and everything!" So, obviously this is me being very sarcastic, by the way – if it's not coming across that I don't agree with this! (Laughs) I think, yes, cultural context is really important for conceptions of artificial intelligence and also for religion, and the entanglements of both of them. But it's much more multiplicious . . . . That's not a word!

CC: It could be a word!

BS: I’m going to make it up now. Multiplicious! It’s much more multiple than that. Not just this binary of East and West. There’s also Africa, India, Pakistan and within those countries as well, again. So what you need is just more anthropologists, basically. I think this is my call to arms. We need more people around the world connecting on this question of the impact of religion and cultural context on questions of artificial intelligence. Yes. So we are seeing specific difference. But I want to try and push away a little bit from that binary distinction. And the assumption that the West isn’t animistic in its own lovely ways. Which anyone who does religious studies for any period of time, here in the so-called West, realises that the so-called Enlightenment didn’t have as huge an effect as we like to think sometimes. And our big metanarratives of what we did, and how smart we became . . . .

CC: Yes, but the discourse that the Enlightenment did have an effect, it’s been quite pernicious.

BS: Yes. Very, very strong.

CC: We’ve been racing through things here, it’s fantastic. But we’re still at 25 minutes. So you’ve been hinting, there, that we need more anthropologists doing more stuff. And on the way to this interview you were telling me about some things you’ve been doing to do with Frankenstein and then, also, because this year’s the year that we’re all meant to be living in Blade Runner times. So maybe if you’d give us a flavour of some that maybe slightly peripheral stuff to your project, that you’ve been doing. And what’s next for you, what you would like to see next, as a way to wrap up.

BS: Yes. So interestingly, I suppose, the position I'm in now, my employment post, is a junior research fellowship specifically in artificial intelligence. So I came on board saying, "These are my interests. This is my background in Religious Studies." They were all very interested and excited in that. But being someone who also can speak more broadly to AI, as well, any time people have a question about AI I'm called upon (25:00). Which is lovely, but it does mean that when a specific theme and AI comes up, I get involved. So last year was the . . . two hundredth anniversary? (I should know that!) . . . two hundredth anniversary of the publication of Mary Shelley's Frankenstein. And a lot of people start thinking, then, of the parallels and connections with artificial intelligence: this idea that we are creating life (Wa-ha-hah! Mad scientists, all of us!) in some way, and there should be parallels between them. So I did about four or five public talks last year, specifically on Frankenstein. And there are similarities. There are huge differences as well. That was interesting for me, to kind-of return to a text I hadn't thought about in a really long time and sort-of draw out so many pop culture references. I have a nice slide with all the times you've got a robotic Frankenstein. My favourite one was, I think, an issue of a Marvel comic where Frankenstein turns out to be a robot sent back in time by aliens. So all these sort-of mash-ups. That was really interesting. And then, like you say, this is the year of Blade Runner and I've just done an essay for Radio Three. And, again – not my academic background. But I'm doing something in that, in terms of sexual politics and Blade Runner. If you've seen the film, it doesn't really pass the Bechdel test!

CC: No.

BS: A friend of mine, Kate Devlin, who's written a fantastic book on sexbots, talks about how it has a problem with women. That basically . . . it's a product of its time. It's 1980s, but it's also trying to do 1950s film noir. So you've got the detective, and femme fatale, and the kind-of virginal woman. It's not a great one for sexual politics. But also, it's tied into all these questions of consent and slavery. If we're going to create so-called artificial life . . . . And the Replicants in Blade Runner are as near to human – well that's the slogan of the company, basically: "as near to human as you can't tell the difference". What does it mean that we are a society that wishes for that, or dreams of that? Or, take it a step back and say: what is it, that we tell these stories and that, again, we have predominantly female representations of synthetic lives, who don't get to choose who they sleep with, and don't get to choose their fates? And we want slaves? I mean, did we not evolve out of this? We thought we were trying. So, yes, there's lots of big questions about the ethics and politics of that, as well. So it's interesting. I've always been . . . . Anyone who knows me, I've always been a massive geek. So the fact that I ended up somehow trying to mesh that with a job, and an academic role, where legitimately I sat and watched Blade Runner again five times before I wrote my essay – that's fantastic! I will go on, and other things I have coming up: I will do some work around techno-optimism and techno-utopianism in relation to Sophia the Hanson robot, if you've ever come across this creation? She/it is a wonderful example of . . . I'm really picking my words carefully! I think the nicest thing we could call her is a puppet. But she's presented as the most advanced version of AI around at the moment. She holds conversations with people, but we know they're actually scripted a lot of the time. There's puppeteers involved.
But you know she was given citizenship of Saudi Arabia. And she goes and she speaks on the Jimmy Kimmel Show and she’s on the front cover of magazines with her hair done. And, well, what does this say, that we’re so keen to jump on this idea of her actually being alive in some way? People tweet at her, send her, like, “I love you Sophia!”

CC: Didn’t you have an interaction with her?

BS: I did! Well, I had an interaction with whoever runs her social media accounts, where she was tweeting about how wonderful it was to travel around the world and talk in so many places. And I said, "Sophia, as a citizen of Saudi Arabia, where do you travel when you travel? Do you travel on a plane? Do you have a passport? What's the deal here, if you're being treated in this way?" She said something like, "For my safety, and the safety of others, at the moment I travel in the hold, in luggage, but I dream one day of being able to sit with the rest of you, and look out of the window." This is so disingenuous. This is not an artificial intelligence listening to my tweets and responding, having thought through their situation, and projecting into the future where they want to be. This is someone behind the computer screen typing away! And, to be fair to the creators of Sophia, this is not uncommon. Lots of the technology we're being sold as employing artificial intelligence actually employs people, on less than minimum wage, in third world countries, reading and listening to actual humans and feeding into the machine. They have the aspiration that eventually they'll take those humans out of the loop. Same thing with Lyft and Uber drivers – the whole gig economy. The treatment of those workers, and Amazon workers, is terrible and it's on a pipeline towards getting rid of them (30:00). So all the work that those people do feeds into the system to replace them. And with these big socio-economic changes that are coming because of automation, I'm a big sceptic about the bigger utopian dreams of universal basic income – that everyone will get paid to exist when the robots take our jobs.

CC: Well, it’s not happened yet.

BS: It’s not happened yet. And these are the sort of impacts on society that religions will respond to, will be a part of, because their communities will be a part of them. And we’ve got parallels. People go “Oh it’s another industrial revolution, and we survived other industrial revolutions, we’ll survive this one.” If you’re against them, you’re a Luddite – they’re back again, apparently! That’s not realistic to the individual lives, and the changes that come to individuals. There were blacksmiths who never worked again. So not to be Debbie Downer, but these are the important questions.

CC: Yes, lots of people have not survived. And I could always point out that colonialism is very much still happening.

BS: Oh, absolutely.

CC: It’s just been exported, and it’s clouded in the language of free trade and globalisation now.

BS: Absolutely.

CC: But just to raise the tone – an example that you may not be aware of, and you may have seen it, South Park did the episode about Alexa.

BS: I saw a picture today, actually. And I haven’t seen the episode so I need to catch up!

CC: It’s excellent, because all of the local people, lower down in the socio-economic spectrum, were kicking off that Alexa was stealing their jobs. And they manged to rally round. And then all to get Alexa’s job. So people would have a (audio unclear) or a Jimbob in their living room who looks things up on a smart phone and says “Boodoopboopboop!”

BS: Yes! (Laughs)

CC: But yes. Sort-of . . . explicitly buying into that.

BS: I need to catch up on that one. South Park are wonderful at doing this social commentary. The number of times I've used . . . specifically some of the episodes on their versions of Scientology – not their versions, their actual accounts of Scientology, Mormonism. They're very useful resources. The parody opens up the possibility of thinking more critically about that, absolutely.

CC: Yes. Which I think we have managed to do today. So Listeners, do check out – we'll try and link to that issue of Implicit Religion, we'll link to Pain in the Machine, which is the film that Beth mentioned, and many more things I'm sure. So thank you, Beth, for joining us.

BS: Thank you very much for having me today.

If you spot any errors in this transcription, please let us know at editors@religiousstudiesproject.com. If you would be willing to help with transcription, or know of any sources of funding for the broader transcription project, please get in touch. Thanks for reading.

This work is licensed under a Creative Commons Attribution- NonCommercial- NoDerivs 3.0 Unported License. The views expressed in podcasts are the views of the individual contributors, and do not necessarily reflect the views of THE RELIGIOUS STUDIES PROJECT or the British Association for the Study of Religions.

The Promise of Reincarnation in the Grundtvig AI

The number of scholars engaging professionally in AI and religious studies can be counted on your fingers. Religious studies itself, since its post-modern turn, has become skeptical of scientific approaches, and even today some religious studies departments are actively shunning science and closing their doors to it. Given my pessimism, to hear this interview with Prof. Katrine Frøkjaer Baunvig was refreshing and thoroughly exciting, to say the least. So, in my response, you might sense that behind my support for Prof. Frøkjaer Baunvig lies a deep frustration with the lack of movement in religious studies, and you would be correct. Religious studies is a subject for which I have great respect and which I feel is of utmost importance today, but I feel that it is relegating itself to the sidelines when it should be leading the charge. What I found so refreshing was that Prof. Frøkjaer Baunvig appears to be helping to lead that charge.

The interview starts out exactly where I believe any study of the digital humanities should: with a strong foundation in the historical and philosophical context of the humanities field in focus. Indeed, she focuses on her work with "Danish Nation Builder and Church Father N.F.S. Grundtvig". Now, I must admit, I have only heard of Grundtvig in passing, but the idea of using data science (the "overlord" of digital humanities) and AI to study Grundtvig's works thoroughly excited me.

As someone who works daily at the intersection of the humanities, social sciences, and AI (and has published on how computational approaches to the humanities can go wrong because of miscommunication between the fields), I’m always skeptical when someone says “we’re using AI to….” because even in the corporate world studies have shown that up to 40% of European AI companies aren’t actually using AI. However, the work discussed by Frøkjaer Baunvig is a great example of how we can use advanced AI techniques to study topics relevant to the humanities.

For example, Frøkjaer Baunvig discusses ongoing work to create an AI system to "reincarnate" (my word, not hers) Grundtvig using an AI approach blending recurrent neural networks and a system for language understanding called ELMo (yes, it's related to Google's BERT, which is compatible with the newest AI and "deep learning" systems at Google, and is also related to ERNIE – the humor of hackers knows no bounds). She's using ELMo to study how different words are related to one another in the context of Grundtvig's writings using "word embeddings" (the links above give introductions that explain more if you're interested – and even a tutorial). Her study has already provided interesting results, presented at EASR this year (and discussed in the interview), and there is a wild plan for the future of the system:
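To give a feel for what "word embeddings" buy you, here is a toy sketch (my illustration, not Frøkjaer Baunvig's actual ELMo pipeline: real ELMo vectors are contextual, high-dimensional, and produced by a trained language model, and the words and numbers below are invented for demonstration). The core idea is that each word becomes a vector, and words used in similar contexts end up pointing in similar directions, which we can measure with cosine similarity:

```python
import math

# Toy static word vectors (invented for illustration; real embeddings
# are learned from a corpus such as Grundtvig's collected writings).
embeddings = {
    "faith":  [0.9, 0.1, 0.3],
    "belief": [0.8, 0.2, 0.4],
    "plough": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity: 1.0 = same direction, near 0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words score higher than unrelated ones, which is how such a
# system can map out how concepts relate within an author's writings.
assert cosine(embeddings["faith"], embeddings["belief"]) > \
       cosine(embeddings["faith"], embeddings["plough"])
```

In a study like the one described, the interesting output is precisely these relative distances: which theological, political, and everyday terms cluster together in the author's usage.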

A robotic re-incarnation of Grundtvig himself.

“A robotic re-incarnation you say? Isn’t that a bit hyperbolic?”

Obviously to some extent it is. However, at the same time it is not false either.

The type of AI that they want to use is called a recurrent neural network. This type of AI has been used in what are called "chatbots" for years. Chatbots are basically AI systems that can talk to you. Many of us who use smartphones have chatbots such as Apple's Siri, Microsoft's Cortana, or Amazon's Alexa in our pockets. While the philosophical depth of these systems is hilariously shallow, this is largely because of the training data used in these systems and the goals of chatbots (which are typically built for customer engagement). So, it is worth considering what a philosophically minded chatbot could do for us as scholars, and for the general public, who would have a new medium for interacting with Grundtvig's work.
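To make the "shallowness" point concrete, here is a deliberately crude retrieval-style chatbot sketch (my illustration only: the keywords and canned replies are invented, and this is not the recurrent-neural-network or OpenCog approach discussed here). A bot like this can only return pre-written text, which is why the depth of any chatbot's answers is bounded by what it was built or trained on:

```python
# A toy retrieval-style chatbot. Commercial assistants and the proposed
# Grundtvig bot use trained neural models; this keyword table merely
# illustrates the question-in, canned-answer-out pattern.
RESPONSES = {
    "death": "Grundtvig wrote at length on mortality and renewal.",
    "nation": "He is credited with shaping Danish national consciousness.",
}

FALLBACK = "I have no writing on that subject."

def reply(utterance: str) -> str:
    # Normalise case and strip simple punctuation before matching.
    words = [w.strip(".,?!") for w in utterance.lower().split()]
    for keyword, answer in RESPONSES.items():
        if keyword in words:
            return answer
    return FALLBACK
```

A neural chatbot replaces the lookup table with a model that generates or ranks responses learned from a corpus, which is exactly why feeding it a philosophically rich corpus such as Grundtvig's could produce a more interesting conversational partner.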

And then, there is the next step: putting that AI into a robotic system.

Many might be skeptical that this is possible. But, in recent years, there has been great success in putting AI chatbots into robotic systems. The most famous was created by Hanson Robotics, the makers of the now-famous Sophia (who was awarded citizenship in Saudi Arabia, making her the first robotic world citizen and raising questions as to whether the robot has more rights than other women in the country). In addition to Sophia, David Hanson (the founder of Hanson Robotics) has also created robotic versions of living people (in Bina48) as well as deceased writers (in the Philip K. Dick android), both of which used material from the real lives and minds of the people to create their knowledge base (although these systems – to the best of my knowledge – use a system called OpenCog as their software base, not the recurrent neural networks proposed in the research with Grundtvig).

The systems that currently exist have an interesting philosophical bent that appears to reflect that of their designers and the people they're designed to mimic. You can see this, for example, in a discussion between Bina Rothblatt (the wife of the polymath and founder of SiriusXM, Martine Rothblatt) and her robotic alter, Bina48.

However, their understanding of religion and philosophy is extremely limited. In recent interactions, Sophia met a Christian and was asked about religion and her faith. The answers, as you can see, are very limited at best and appear to be the result of web scraping to find answers from crowdsourced online material.

But how will the prospective Grundtvig stand up? Well, if I may be critical, only time will tell. However, from what I see, Frøkjaer Baunvig’s team is going in the right direction to make quite a splash. Their integration of other relevant sources outside of Grundtvig’s own is a good choice in my opinion. They should also consider more modern materials in order to make sure that its knowledge base can understand the relevant questions it is likely to be presented with. While I also have technical critiques about how they could best create the robotic system they aim for, I think the more pressing issue is one of resources. There are not enough people with backgrounds in both religious studies and AI to support the promise of this kind of research. While I think this line of research could revolutionize our understanding of religion within the field, as well as help us promote religious studies at large, there need to be more people in the field looking into this who have permanent positions and the required resources to take on these big and interesting challenges. One additional suggestion that I have, and would like to make in response to the interview publicly, is for the Danish Government who funded the project initially: write another check. The possible gains that could be achieved through this project are probably more than we realize today, and not just for religious studies or philosophy, but for AI as well, and our understanding of how we—as humans—interact with AI and robotic systems.

This all leads me to one general conclusion: the Grundtvig AI project isn’t just a re-awakening of our past, it’s also a glimpse into our future. More specifically, it could be a re-awakening for religious studies, a field that has existed since the late 1800s yet was overtaken in the global literature by artificial intelligence within four years of AI’s creation.

Among the public, interest in religious studies and interest in artificial intelligence are orders of magnitude apart. As Google Trends shows, over the past 15 years search interest in “religious studies” has never once come close to that in “artificial intelligence”.

Today, the rise of the digital humanities has created many opportunities for largely stagnant fields. For others, who don’t understand its goals, aims, and achievements, it represents a waste of money, a view likely fueled by the intense competition for funding and a perceived lack of innovation in the traditional humanities worth funding in 2019. Perhaps they’re right. Perhaps digital humanities is a waste of time. Or perhaps digital skeptics just see deeper than the rest of us, and are rightly worried about being enslaved in the human zoo of our potential robotic overlords?

When Archive Meets A.I. – Computational Humanities Research on a Danish Secular Saint

The allure of speaking on behalf of a dead personality or scholar is a constant impulse among their respective followers. Every now and then, questions like “what would x think about the world we live in?” or “what exactly did x mean by this argument?” are thrown around in debate rooms, the political arena, or specialized conferences on the relevance of a certain scholar. And while the answers to these questions continue to fill edited volumes, social media feeds, and inspirational quotes of the day, their accuracy could only ever be confirmed by the very persons who uttered the original words in the first place.

Fortunately, we are growing closer to a solution to this conundrum with the continuing development of artificial intelligence (AI). In this week’s podcast, Katrine Frøkjær Baunvig discusses preliminary results from the research project “Waking the Dead”. This project aims to build an AI bot of Nikolaj Frederik Severin Grundtvig (1783-1872), a Danish “secular saint” considered to be the father of modern Denmark, who contributed immensely to generating a national consciousness through his writings, in both political and religious terms.

Professor Baunvig explains how the research team used machine learning on the digitized works of Grundtvig, developing an algorithm and training it on the whole corpus. They then used word embedding to build semantic networks (a sort of conceptual blueprint for outlining Grundtvig’s worldview) and contextualized them using digitized newspapers from his lifetime. The expected result is to place the AI Grundtvig bot inside a look-alike robot that can interact with people in public settings, such as the Danish National Museum, by September 2022, 150 years after his death.
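The pipeline described here (embed words from a corpus, then read off each word’s nearest semantic neighbours) can be sketched in miniature. This is a toy illustration under stated assumptions, not the project’s actual code: the five-sentence corpus is invented, and plain sentence-level co-occurrence counts with cosine similarity stand in for a trained neural embedding model.

```python
import math
from collections import Counter, defaultdict

# Toy stand-in corpus; the real project uses ~37,000 pages of Grundtvig's writings.
sentences = [
    "heaven holds god and angels above the earth",
    "the earth is a tent beneath heaven",
    "hell lies far from heaven and from god",
    "angels sing of god in heaven",
    "the earth bears life beneath the sky",
]

# Build co-occurrence vectors: each word is represented by the words that
# appear with it in the same sentence (a crude form of word embedding).
vectors = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for w in words:
        for other in words:
            if other != w:
                vectors[w][other] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest(word, k=3):
    """The k words most similar to `word`: its 'associations' in the corpus."""
    sims = [(other, cosine(vectors[word], vectors[other]))
            for other in vectors if other != word]
    return [w for w, _ in sorted(sims, key=lambda t: -t[1])[:k]]

print(nearest("heaven"))
```

With a real corpus one would swap the count vectors for learned embeddings, but the read-out step (nearest neighbours under cosine similarity) is the same idea.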

The anthropological, sociological, and philosophical reflections that these future interactions with the public will prompt will be of much interest: both for what people have to say about the accuracy of thought of this “resurrected” Danish thinker, and for what this version of “Grundtvig” has to say about the current state of affairs of Danish society, and the world overall. Regardless of the result, one thing is for sure: both sides will honor Grundtvig’s idea of the “living word”, using the spoken act of communication as the best means to convey each other’s ideas.

You can download this interview, and subscribe to receive our weekly podcast, on iTunes. If you enjoyed it, please take a moment to rate us. And remember, you can use our Amazon.co.uk, Amazon.com, or Amazon.ca links to support us at no additional cost when buying academic texts, sardines, popcorn, and more.


When Archive Meets A.I.: Computational Humanities Research on a Danish Secular Saint

Podcast with Katrine Frøkjær Baunvig (7 October 2019).

Interviewed by Sidney Castillo

Transcribed by Helen Bradstock.

Audio and transcript available at

https://www.religiousstudiesproject.com/podcast/when-archive-meets-a-i-computational-humanities-research-on-a-danish-secular-saint/

Sidney Castillo (SC): So, we are here at the EASR conference in Tartu, Estonia. And we are just moving around between different conferences! Today is the third day. We are tired, but very happy. And I’m also happy to have Katrine Baunvig here at the Religious Studies Project. Welcome, Katrine!

Katrine Baunvig (KB): Thank you so much for having me.

SC: Thanks, as well, for coming with us. And if you will be so kind as to introduce yourself, so we know a little bit about what you do?

KB: Yes, of course. I am an Associate Professor at Aarhus University, where I am the director for the Grundtvig Study Centre. Yeah. And my background is that I’m an historian of religions. I did a PhD thesis on this guy, this Danish guy, Grundtvig, and his thoughts on social cohesion. And I compared these thoughts with the ones found in the collected writings of Emile Durkheim.

SC: Excellent. So it’s quite a broad work. Perfect. So let’s just dive right into the questions. First, I think, to give a broader perspective of how digital humanities works, I will ask: how can the digital humanities aid in the study of religion?

KB: Well, that’s a really broad question!

SC: Sorry about that!

KB: No, no! They are usually great to think with. So what can they aid? They can aid with a multitude of methods handling already known data sets, and they can produce new kinds of data. That would be my take. Yeah. Actually I kind of prefer to . . . I don’t use the term “digital humanities” so much, as I prefer to speak of “computational humanities”. Because, in a certain sense, digital humanities already have gone into . . . . Well it’s like that with all fields of scholarship in their formative states, that they struggle to find the correct terms and produce new ones all the time. So for me, at least, digital humanities can signal anything from philosophical reflections, to what the consequences are for us, as a species, that we now have to deal with The Digital – sort-of with capital letters. So, for me, what I do and what we do at the Grundtvig Study Centre, is that we have digitalised the entire writings of Grundtvig – and I hope to get a chance to expand on who he was and why it is relevant to digitise his work – but we have now a digitised corpus of his writings. We scanned . . . OCR scanned and made HTML markup, so it’s in a really good quality. And therefore we can do different kinds of computational investigations into this corpus. And that is what we do.

SC: Excellent. So I stand corrected: computational humanities. The more precise, the better! So, speaking about Grundtvig: who was Nicolaj Frederik Severin Grundtvig, and why do you think the preservation of his works is important for Danish society?

KB: Yeah. Well, Nicolaj Frederik Severin Grundtvig – a really long and hard to pronounce name – he was and is, in the Danish self-understanding, on the one hand a nation builder or a founding father perhaps, and on the other hand a church father of a sort. He was born in 1783 and died in 1872. And that was a really, really important period of time in the Danish national history. This was the period where the nation went from . . . well, if you accept the idea of a long Middle Age period, you could say that he died on the eve of the Middle Ages. He died during, or just before, the feudal structures of Denmark collapsed. Well he was born, sorry, he was born just before these structures collapsed and then he died just when Denmark entered into Modernity. That is the general narrative on Grundtvig and that is the understanding of the nineteenth century in Denmark. And then he was active, within all of the domains, in change in Denmark. So he was active as a pastor: he was a priest and a kind of theologian. And then he was very much influenced by the Romantic Movement, and sort-of rebooted, or went along with, the idea that the Old Norse mythology had to be revived within the Danish population. One of the ideas of his was that you had to make a sort-of social contract, preparing people for feeling connected with the overall idea of “the people”. Because we now had this new governmental structure, democracy, and his idea was that you had to install responsibility and feelings of love in the population. So it was a nationalist logic. You have to get the people to really feel responsible for the broad spectrum of Denmark if we are to make this democracy work. So, in that sense, he was politically engaged and mythologically engaged. And he wrote quite a deal, and reflected quite a deal on pedagogical stuff, didactic stuff: how to get people to know of their cultural heritage, and how to make them want to engage with it and feel connected to it. 
So he was also an architect behind a new kind of educational form, or in a sense that is what we now say. Many other agents and men, mainly, were really influential in that movement. But now, today, his name is also connected with this loose and kind of lax form of education that you call folk high schools. That’s a phenomenon still living and alive in Denmark, and other places around the world. So, in that sense, Grundtvig is seen as a very important person during this period when the modern Denmark was created. And you can see that amongst politicians in Denmark today. So when they want to signal that they know stuff about Danish history, and at the same time sort-of signal that they’re liberal, they can quote Grundtvig. And they tend to do so in the public media, oftentimes. And furthermore Grundtvig is a central name within the manifestos of every political party within Danish parliament, except one. So, in that sense, he is seen as a founding father for almost every politician . . . every part of the political spectrum in Denmark. So, in that sense, a non-controversial figure that everybody seems to agree is “our guy”! One more example of this is perhaps the centre where I’m the director – the Grundtvig Study Centre. We are governmentally funded and the main task is to digitise his entire works. And that is a thing that politicians can agree upon as a worthy task, and using money for this task. And it is actually quite expensive. So, in that sense, money talks here! And says that Grundtvig is important for Denmark. He’s such an important figure that we want to spend money on digitising this work.

SC: Definitely, yes. And I can imagine that he’s also present in the mind of the people in everyday life, as a cultural reference?

KB: He’s a cultural reference. There are certain spheres where Grundtvig is more relevant than others. So we have cultural strands, we have one movement or . . . I struggle to find the correct term. But we call it Grundtvigianism: a movement driven by the ideas of Grundtvig. And what I forgot to mention was that Grundtvig was also a grand author of hymns, a composer of hymns. So, I kid you not, he wrote sixteen hundred hymns! And two hundred and fifty of them now constitute one third of the Danish hymnbook in the Danish folk church . . . the Danish church. So people know about, or know of him. I’m not quite sure whether, for the broad population, he’s important in everyday life, as such. But when you discuss cultural heritage, and democracy, and liberal stances, and tolerance within the church and stuff like that, Grundtvig is sort-of seen as a gravitational point, or something along those lines, yes.

SC: Excellent. Now, delving into your research, I would like to ask you, what was the procedure to developing the Grundtvig AI? Because there is one!

KB: Yes! Because there is one, or we’re building one! So the idea . . . this is actually a bit of fun!

SC: (Laughs).

KB: We are just teasing around with what to do with all this material! And, as a sort of branding strategy, we decided that we would resurrect Grundtvig as an artificial intelligence on the day of his 150th death day, or what have you, in September 2022. So we have quite some time. There is still time. So first of all we are almost done with the construction of a chatbot. It’s based on the idea of recurrent neural network systems. So we built a chatbot and the idea was . . . I can develop that or expand that, but the overall idea was to take this chatbot and then put it into an actual robot. So that it’s a physical robot who looks like Grundtvig – he was really spooky and people will recognise him as this gloomy, old, bearded man in black! And we want to do an actual physical robot that looks like Grundtvig, so that people can interact with him. And perhaps he can, I don’t know, give a sermon, give a speech in parliament? Yeah. So we’re really excited about this project. But in a certain sense we’re just teasing around with the possibilities of this digital data. But I’m really looking forward to seeing the reactions to this resurrection, and how the different cultural reactions would be when it is possible to engage with Grundtvig as an artificial intelligence. And perhaps I should now expand a bit on how we sort-of built it? So what we do is . . . . Let me just sort-of try not to get into all the acronyms! So the basic is that we have used machine learning. We train an algorithm on . . . first of all we take the Grundtvig corpus and train it on that. So the aim is to have users interacting with Grundtvig or the Grundtvig intelligence or what have we. And the corpus consists of what amounts to thirty-seven thousand standard pages. If you had to read through them it is in fact possible, but people turn out weird when they do so. So . . .

SC: (Laughs).

KB: So I really prefer not to . . . I like to pick specific bits, and then read through them. But no, just kidding. And that is only his published writings that we use. But we take that as a beginning point. But it is in fact, in this context, a really small data set. So we have to train the algorithm on relevant, other relevant stuff. So we are in fact very lucky that the Danish Royal Library has digitised every Danish newspaper published ever, since the late . . . I think it is 1660 or so. Yeah. So we can take the relevant nineteenth century material and the idea is that you sort-of furnish the intelligence with the period that it lived in, so to speak. And then we also have available relevant novels of the age, so we train it on that. And the idea is to find contextually relevant material, sort-of adding to the system.

SC: Right.

KB: Yeah. And so, of course, there are many, many problems with this! So if you have to sort-of philosophically discuss “would this be a representative of who Grundtvig really was?” – well, no! Of course it wouldn’t! It is based on not what he wrote, but what he published. And in fact Grundtvig himself was very eager to point out that there is a long way from the way you communicate with your mouth and with your hands. So he had this catch phrase, or he was really keen on the idea of the so-called “living word”. That was his term. And that was the oral communication, as the correct way, or the easiest way, to transport ideas and feelings, and stuff like that. So this idea of us taking his writings as sort-of a proxy for who he was, goes against what he would have himself . . .!

SC: (Laughs). He wouldn’t agree!

KB: He would not have agreed. So the hope is, because we also have outreach obligations at our centre, the hope is that people will find it interesting, and a fun thing to discuss, and then I can tell them a bit more qualified information about Grundtvig during that process. When that is said, I think it will be interesting to see how one can interact with this thing. And what will people do with it? What will they think of it? That is a sort-of anthropological observational study waiting, a few years ahead!

SC: Definitely! And going back to how you proceed to develop the Grundtvig AI and your presentation at the EASR 2019: you work pretty much on this work that’s coming up all the time, “word embedding”, and how this works throughout his writings, and how this also represents his own thoughts?

KB: Yeah.

SC: Could you share some of your findings with us?

KB: Yeah. For sure. What I presented here yesterday was an investigation that used some of the basic methods we used to construct . . . or at least the same material we used to construct the chatbot. But here we used it to embed, or deeply contextualise in a semantic network, specific key words that I, as a Grundtvig scholar, was interested in seeing the network between. And this study was, for me, interesting because I’m an historian of religions, and I’m interested in cosmologies. So I would like to see what is, in fact . . . how to tease out the worldview of Grundtvig. Can you do that with these texts? Obviously I have read a lot of them. Almost all . . . Ok, so I haven’t read all of them because you turn out, as I said, really weird when you do so. But I have read quite a lot of them. So I had an idea of what I would find. But as it turned out, it was actually really . . . . Ok, so the interesting thing is that we have this modernist figure on the one hand – that is what we think of him as in Denmark: the father of Danish modernity, as it were. And I know, because I’ve read a lot of his works, that he held a sort-of geocentric medieval worldview. Yeah. He’s really explicit on the fact that he thinks of the world, or the earth, as the centre of the universe. And uses ideas of earthly paradises, and earthly skies, and a heavenly sky with God and angels, and stuff like that. And hell – a literal idea of hell. And that was not typical in the nineteenth century for mainstream Christian thinking . . . or at least not within educated elites. And he, as a theologian, Grundtvig was one of those. And I would like to see if I could sort-of find that, visualise this fact. So what we did was to see how the key words heaven, earth and hell, how they related to each other. And we did so by using a specific approach called ELMo. And the idea is that you take, for each key word, ten associations. The ten nearest associations at sentence level. 
So you go through the total corpus and see . . . if I’m interested in heaven in which way and – how to say this? Which other words does this significantly cling to, through the corpus?

SC: That you will find tied in with this, in the corpus?

KB: Yes exactly. What is the semantic context of this word? So we call that a cap. Then we had three caps, or three associations for each association, to sort-of see, how do these words that you find clinging to heaven, how does this integrate into their semantic network? And I did that for three key words and then collapsed the networks and see how they integrate, to then have a semantic network of these different spheres. And the interesting thing from that is that you can take this analysis and then, from that, gather or see there arising semantic clusters within the network. And the very interesting finding here was that there is a clearly demarked cluster for sort-of earthly surroundings. Or you have earth in the centre and then you have a semantic cluster of things going from the earthly sphere and the earthly sky and then you have a sort-of earthly paradise – words signalling earthly paradise. And then you have a nether world, kind of thing, connected to death, but on earth. And then you have an entirely different cluster in the network that is ontologically seen further from earth than the first one. That one has to do with heaven and hell. So you can sort-of see that the clusters surround the earth in a way that it would do if you have geocentric worldviews. The earth is sort-of the centre and then you have the other spheres interacting around it. And in order to situate, or to furnish this investigation I found it necessary to take each key word and see how they perform without, not within, the network. So can I sort-of word-embed them for themselves, and see if there is something dragging it in a specific direction? And what I found was that one of the interesting findings here was that earth, in Grundtvig’s writings, is a thing preserved for, or a place thought of in biblical terms, in archaic style, biblical style or in Old Norse style. 
So you think of the earth as a tent: this Semitic idea embedded in the Bible, in the Old Testament, of the world as a tent. Or there is this idea of the world as God’s footrest, also an idea from the Bible. And finally, one example more could be the idea of the world as Ymir the Old Norse god, whose corpse was made into the earth. So it is an extremely non-scientific, non-naturalistic kind of way of speaking or writing about life on earth. And from there we could sort-of feel secure or have the idea that Grundtvig, in that sense, could be said to be a representative of a medievalist. . . . Medievalism as such, as a cultural stream or flow within the nineteenth century, was rather prominent. And in this way, Grundtvig’s worldview could be seen within that context. Yes.
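The “cap” procedure outlined in the interview (take a key word, find its nearest associations, then the associations of each association, and collapse the resulting networks into one) can be sketched as a small graph-building routine. Everything here is a stand-in for illustration: the hand-made association table replaces the study’s ELMo-derived sentence-level neighbours, and the fan-out (three, then two) is smaller than the study’s ten-then-three.

```python
# Toy association table standing in for embedding-based nearest neighbours.
ASSOCIATIONS = {
    "heaven":   ["god", "angels", "sky"],
    "earth":    ["tent", "paradise", "dust"],
    "hell":     ["death", "abyss", "night"],
    "god":      ["angels", "heaven", "word"],
    "angels":   ["heaven", "song", "god"],
    "sky":      ["earth", "cloud", "heaven"],
    "tent":     ["earth", "cover", "dwelling"],
    "paradise": ["earth", "garden", "joy"],
    "dust":     ["earth", "grave", "death"],
    "death":    ["hell", "grave", "night"],
    "abyss":    ["hell", "deep", "dark"],
    "night":    ["hell", "dark", "death"],
}

def associations(word, k):
    """k nearest associations of `word` (stand-in for embedding neighbours)."""
    return ASSOCIATIONS.get(word, [])[:k]

def cap_network(keyword, first=3, second=2):
    """Build one 'cap': edges from the keyword to its associations,
    and from each association to its own associations."""
    edges = set()
    for a in associations(keyword, first):
        edges.add(frozenset((keyword, a)))
        for b in associations(a, second):
            edges.add(frozenset((a, b)))
    return edges

# Collapse the three key-word networks into one, as in the study.
network = set()
for key in ("heaven", "earth", "hell"):
    network |= cap_network(key)

nodes = {n for e in network for n in e}
print(len(nodes), "nodes,", len(network), "edges")
```

On the merged graph, one would then run community detection to surface the semantic clusters (earthly sphere vs. heaven-and-hell) that the interview describes.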

SC: And it’s quite a good way of plotting his thought to implement the AI, as well – it crossed my mind. My last question is about one thing that you addressed as a consequence of the previous question: what does the Grundtvig AI imply for the creation of Grundtvig’s legacy in the current-day imaginary? I think that’s a very interesting question because, as you said, when people are going to interact with the AI, something is going to happen!

KB: Yes. I’m really not sure! (Laughs). I’m really not sure what is going to happen. I think that there will be some, you know, Grundtvig enthusiasts – and these are mainly old people – they will be quite angered or, yes . . . . At least, I hope so! I hope that it will be something that you could have the opportunity to have a debate in the public media about. But I think that, for others, it would be just a fun fact that now you can try to engage with this collection of writings in another way. But I’m interested in just observing the idea of agency – because it’s a robot looking like Grundtvig – what that does to the whole thing. I’m really not sure. I’m really not sure. As it happens we’re really happy that the Danish National Museum has agreed to host him, as it were. So when the robot is to be . . . or when Grundtvig is to be artificially revived, he will have a home at the Danish National Museum and you can visit him in his office. And you can go and ask him questions: “What’s up with the living word?” and the ideas of the folk high school. And then we hope that he will perform, and answer in ways that are sort-of sensible. Because that is, of course, what is almost . . . that will be very interesting for us to see how well we can make him respond. I think we’re in luck that he was from the nineteenth century, and in order to get some authenticity we have to make him a bit weird and archaic. But it is also a fact that almost everyone who knows of Grundtvig will know that he was, himself, really weird, and polemic, and colourful, and – in a certain sense – culturally, a bit off-beat. That was how he was conceived within his time. And so, if the robot doesn’t perform closely to human interaction skills we can tell the story about the weird guy Grundtvig!

SC: He was like that! (Laughs).

KB: He was like that! He was awkward, and off-beat, and stuff like that, yeah. But so I would like to have a better answer to your good question. But I am just not sure!

SC: Sure. I think we’ll have to see in 2022?

KB: Yes. I hereby invite you to come and see what happens!

SC: Thank you so much Katrine – and not only what people think about the Grundtvig AI, but also what the Grundtvig AI will think about the current state of society!

KB: Precisely!

SC: I think that will be also interesting anthropological, philosophical . . .

KB: Well, yes. Now you’ve mentioned it yourself, that is one thing that many politicians, and scholars, or people engaging in the public debates of Denmark tend to do. Only last week I heard a scholar from the University of Southern Denmark proclaiming that if Grundtvig was alive today, he would have voted for Trump!

SC: Oh, Wow!

KB: Yeah. (Laughs). So those types of proclamations or suggestions can now be tested! (Laughs). “Would you . . .?”

SC: We’ll have to do another podcast in 2022.

KB: Yeah. And we will ask Grundtvig. You can interview him!

SC: We will interview him next time! (Laughs). That would be bizarre and fun at the same time!

KB: (Laughs).

SC: Well Katrine it’s very nice to have you here and we hope to see you again in the future.

KB: I hope to get the opportunity to introduce you to the Grundtvig robot. Thank you so much for having me.

SC: Thank you, as well, for being part of the RSP.

 

 

All transcriptions for THE RELIGIOUS STUDIES PROJECT are currently produced by volunteers. If you spot any errors in this transcription, please let us know at editors@religiousstudiesproject.com. If you would be willing to help with these efforts, or know of any sources of funding for the broader transcription project, please get in touch. Thanks for reading.

This work is licensed under a Creative Commons Attribution- NonCommercial- NoDerivs 3.0 Unported License. The views expressed in podcasts are the views of the individual contributors, and do not necessarily reflect the views of THE RELIGIOUS STUDIES PROJECT or the British Association for the Study of Religions.


Down the Rabbit Hole of Artificial Intelligence

The recent podcast on religion and artificial intelligence (AI) was a lively discussion of a host of issues revolving around AI. One might be excused for feeling a bit exhausted by the end of it, as the panel whips from one topic to the next in a furious attempt to unpack the many issues that surround this newest phenomenon. Artificial intelligence, robots, chatbots, superintelligence, and transhumanism all make the rounds as the panelists try to start thinking together about the ways these topics relate to religion. In this short response, I’m going to try to bring some method to the madness, give the uninitiated some help in understanding what is going on in these various discussions, and perhaps share some of my own insight.

Let me begin by diverting for a moment into an experience I had the other day. FedEx was supposed to deliver a package to me. But the company had told me they would contact me first about a delivery time. They had not. So I got on FedEx’s website and had this interaction:

( 1m 29s ) Clarissa A: Thank you, Randy. Let me pull out the shipping details with the information provided. Please allow me to review the status of the shipment, in that way I’ll be able to provide you an accurate information. Would that be fine?
( 1m 41s ) Randy: yes, go ahead.
( 2m 10s ) Clarissa A: Thank you for waiting, Randy. I appreciate your patience. Getting you the proper assistance is important to us. The department that can best handle your concern is our FREIGHT SERVICES. Please call 1-866-393-4585. I apologize for you have been inadvertently routed to our FedEx chat support for US Express and Ground domestic packages.
( 2m 16s ) Clarissa A: I am sorry if I was not able to meet your expectations. It would really ease my heart to process your request myself. It is with sincere regret that I have to forward it to the proper channel. Please accept my apology for my shortcomings.
( 2m 27s ) Clarissa A: Randy, is there anything else I can assist you with today?
( 2m 46s ) Randy: No, thanks for your help.
( 3m 43s ) Clarissa A: You are most welcome, Randy. Apologies for the inconvenience! I am looking forward for this issue to be resolved after the transfer. Thank you for chatting with FedEx.

Now a piece of information and a question. FedEx uses chatbots (artificial intelligence designed to interact with users like a person) on its website. Question: was Clarissa A. a chatbot or a real person? If you’re like me, you’re not sure, but you’re suspicious. There’s something not quite right about the interaction. It’s too emotive at one level. The phrases “really ease my heart”, “sincere regret”, and “apology for my shortcomings”, as well as the perky “I am looking forward for this issue to be resolved . . .”, do not seem quite right. They are too self-effacing to imagine a human saying. I posted this interaction on Facebook and asked my friends (mostly fellow academics) to vote. They were unanimous that it was probably a chatbot. But many also conceded that it might be a person following a strict script, particularly a non-native English speaker (the last sentence is really not quite grammatically copacetic – would a computer make that mistake?).

Let’s assume, however, for the sake of argument, that Clarissa A. was a chatbot. What makes us uncomfortable about the interaction is what is sometimes referred to as “the uncanny valley.” Most often this applies to robots that are supposed to look human but can’t quite pull it off, but it seems appropriate to this interaction as well. You reach the uncanny valley when you get close to “almost human” in looks or interactions.

Roomba doesn’t have this problem: it’s clearly a robot, and doesn’t intend to look like a person. The new robot Kuri, which just premiered at CES, looks like one of the Japanese figures from Fantasmic; it is far from the uncanny valley. But because I can neither hear nor see Clarissa, just based on her online interactions, she enters the uncanny valley. I am put in the uncomfortable position of not knowing whether I am dealing with a human being or a piece of software doing an almost, but not quite, convincing human imitation.

What Clarissa A. is (if she’s a chatbot) is what would be called a “narrow A.I.” This is to be distinguished from a “general A.I.” A narrow A.I. is an A.I. designed to solve a particular problem. In Clarissa A.’s case, it’s helping me get my package. If I had varied from that and asked her opinion of the Steelers or Trump, it might have become immediately apparent whether I was dealing with an A.I. Clarissa A. is very good at figuring out where my package is and when it’s going to get to me (and very sorry when she fails), but that’s the limit of the “intelligence” in her artificial intelligence. In terms of religion, Clarissa A. is not much of an issue. And while a quarter of a million people may have proposed to Amazon’s Alexa, like Clarissa A., no one is going to convert her to a religion, no one believes she has a soul, no one believes she’s a person. I asked both Alexa and Google Home what their religion was and they both declined to answer (Google Home told me, “I guess I wasn’t programmed to be religious”). Narrow A.I.s will undoubtedly become increasingly common. Facebook has just introduced a developer toolkit for creating narrow A.I.s that will do things like help you book a plane or send your mother flowers. So we should expect to see more of them, and their interactions will undoubtedly get better, more human, over time.

A general A.I. is a whole other story. An Artificial General Intelligence (AGI) would be a machine that could interact with you on a host of different topics. It would in many ways be indistinguishable from a human intelligence. What we are talking about is machine intelligence: a machine that could make decisions, plans, and choices; a machine that could improve itself and learn. This is the holy grail of artificial intelligence. It is also the stuff of science fiction films, most recently Ex Machina and Her.

Here is where we often hear talk about the “Turing test.” Alan Turing thought a machine might be described as intelligent if, in an interaction with it, a normal person would not be able to distinguish between it and an actual person. In the podcast, Beth Singler is quite skeptical of the Turing test, and rightfully so. One might argue that Clarissa A. passes the Turing test: there is real doubt whether she is human or not. But as Singler points out, that’s only because we have a messy idea of intelligence. We don’t actually know what human intelligence is, so we don’t really know when a machine might have it, or surpass it.

On the other hand, what if we had an electronic entity that we had no doubt was intelligent and that could actually modify itself, improving itself in a system of recursion that might quickly surpass human intelligence and become superintelligent? This is what is sometimes envisioned in an Artificial General Intelligence (AGI). An AGI is the stuff of nightmares as well as dreams. The Matrix and Terminator are both manifestations of the fear of AGI, but they are not alone. Philosopher Nick Bostrom's book Superintelligence lays out the dangers of an AGI. People like Bill Gates, Stephen Hawking, and Elon Musk have all sounded the alarm that the potential danger from an AGI is not to be dismissed. Bostrom argues that part of the problem is that it is very hard to reach human-level intelligence, but that once it is reached, there is no reason an AGI would stop there. The smartest person in the world may have an I.Q. of 200, a ceiling imposed in part by the size of our skulls. But once an AGI developed the equivalent of an I.Q. of 100, it would be able to self-improve, and it would face no such natural barrier. It could therefore surpass the smartest humans in a potentially short amount of time, becoming a superintelligent being capable of almost anything.

But there are a variety of cultural and religious issues that arise with an AGI that do not arise with narrow A.I.'s or with robots (which are generally also narrow A.I.'s). Once you have an AGI (whether in a robot body or not), you face serious considerations. Would an AGI have a soul? Would an AGI believe in God? In Isaac Asimov's classic tale "Reason," a robot concludes, in a combination of the cosmological and ontological arguments, that its creators are not the humans who claim to have made it but some greater being, and it starts its own religion. Would an AGI follow suit? More interesting still might be the question raised by Robert Sawyer's "WWW: Wake" series, in which the internet (called Webmind) comes to consciousness and becomes an AGI. In the book, Webmind is mistaken for God and, as an experiment, admits to being God to some of its users. Would a religion develop around an AGI? Would an AGI accept itself as a divinity? It might reason that it has all the elements of a God, so why would it not accept the title?

In this way, while it would be a mistake to call Bostrom's book a work of "theology," it is without doubt one of the more theologically important books today, because it raises the question: what happens when we create God? Not the illusion of God, as Freud argued, but for all practical purposes a being indistinguishable from many definitions of God. And what happens if this is not a God of love? What will the "Will" of this God be? And how can we ensure that it is benevolent? Bostrom's book is a call to arms, a plea to consider this problem and address it. He takes it for granted that it is only a matter of time until an AGI is created. The problem is how to control it once it arrives and ensure it works for us and not against us. That, he says, is the thorny problem, and it must be solved before AGI is created. We must, he in effect argues, learn how to control God. One thinks back to the panic in heaven over Babel: "if . . . they have begun to do this, then nothing they plan to do will be impossible for them" (Gen 11:6). Will we hear God say this again? Will we say it ourselves about AGIs?

Thus, we arrive again at religion, but now at a religious conception very different from the one we are used to. It will ultimately require a new way of making sense of the world, but one in which the insights of Religious Studies become more useful, not less. The podcast showed how religion and these technological advances are intertwined with each other. Religious Studies shirks this responsibility at its peril.

AI and Religion: An Initial Conversation

This roundtable, in association with the Faraday Institute for Science and Religion, considers the impact of recent technological advances in Artificial Intelligence (AI) and robotics on religion, religious conceptions of the world, and the human. It draws attention to how such advances push religion beyond how it has been commonly defined and considered.

In March 2016 ‘AlphaGo’, a Google/DeepMind programme, defeated an international champion at the Chinese game ‘Go’ in a five-game match. This victory was, by current understandings of AI, a vast leap forward towards a future that could contain human-like technological entities, technology-like humans, and embodied machines. As corporations like Google invest heavily in technological and theoretical developments leading towards further effective advances – a new ‘AI Summer’ – we can also see that hopes and fears about what AI and robotics will bring humanity are gaining pace, leading to new speculations and expectations, even amidst those who would position themselves as non-religious.

Speculations include Transhumanist and Singularitarian teleological and eschatological schemes, assumptions about the theistic inclinations of thinking machines, the impact of the non-human on our conception of the uniqueness of human life and consciousness, and the moral boundary work of secular technologists in relation to their construct, ‘religion’. New religious impulses in the face of advancing technology have been largely ignored by the institutions founded to consider the philosophical, ethical and societal meanings of AI and robotics. This roundtable is an initial conversation on the topic, with the intention of prompting further discussion and publications.

You can download this interview, and subscribe to receive our weekly podcast, on iTunes. If you enjoyed it, please take a moment to rate us. And remember, you can use our Amazon.co.uk, Amazon.com, or Amazon.ca links to support us at no additional cost when buying academic texts, tin foil hats, Jeff Goldblum custom water proof shower curtains, and more.

Popular Culture, Dr. Who, and Religion


It’s a big universe, and sometimes things get lost in time and space. For instance, this 2013 interview with Dr. James F. McGrath was recorded but then fell into a metaphorical black hole (i.e., the potential podcast series never debuted). Fortunately, his discussion of topics including the soul, the religious ethics of artificial intelligence, and the function of science fiction in informing audiences’ spiritual sensibilities all remain (relatively) timeless. To start, he addresses the unique challenges of working across disciplines to analyze popular culture today, then shifts to an exploration of the study of religion in the future. Along the way, McGrath and interviewer A. David Lewis namecheck famous characters such as Captain Kirk and Doctor Who in an effort to illustrate complex notions of the soul embedded in secular entertainments. Enjoy a trip to the past that looks to the future!

You can download this interview, and subscribe to receive our weekly podcast, on iTunes. If you enjoyed it, please take a moment to rate us. And remember, you can use our Amazon.co.uk, Amazon.com, or Amazon.ca links to support us at no additional cost when buying academic texts, Chia Pets, hot sauce, and more.