
The Promise of Reincarnation in the Grundtvig AI

The number of scholars engaging professionally with AI and religious studies can be counted on your fingers. Religious studies itself, since its post-modern turn, has become skeptical of scientific approaches, and even today some religious studies departments are actively shunning and closing their doors to science. Given my pessimism, hearing this interview with Prof. Katrine Frøkjaer Baunvig was refreshing and thoroughly exciting, to say the least. So, in my response, you might sense that behind my support for Prof. Frøkjaer Baunvig lies a deep frustration with the lack of movement in religious studies, and you would be correct. Religious studies is a subject for which I have great respect and which I feel is of utmost importance today, but it is relegating itself to the sidelines when it should be leading the charge. What I found so refreshing is that Prof. Frøkjaer Baunvig appears to be helping lead that charge.

The interview starts out exactly where I believe any study in the digital humanities should: with a strong foundation in the historical and philosophical context of the humanities field in question. Indeed, she focuses on her work with “Danish Nation Builder and Church Father N.F.S. Grundtvig”. Now, I must admit, I have only heard of Grundtvig in passing, but the idea of using data science (the “overlord” of digital humanities) and AI to study Grundtvig’s works thoroughly excited me.

As someone who works daily at the intersection of the humanities, social sciences, and AI (and has published on how computational approaches to the humanities can go wrong because of miscommunication between the fields), I’m always skeptical when someone says “we’re using AI to…”, because even in the corporate world studies have shown that up to 40% of European AI companies aren’t actually using AI. However, the work discussed by Frøkjaer Baunvig is a great example of how we can use advanced AI techniques to study topics relevant to the humanities.

For example, Frøkjaer Baunvig discusses ongoing work to create an AI system to “reincarnate” (my word, not hers) Grundtvig using an approach that blends recurrent neural networks with a language-understanding system called ELMo (yes, it’s related to Google’s BERT, the “deep learning” language model behind some of Google’s newest AI systems, and also to ERNIE—the humor of hackers knows no bounds). She’s using ELMo to study how different words relate to one another in the context of Grundtvig’s writings via “word embeddings” (the links above give introductions that explain more if you’re interested—and even a tutorial). Her study has already produced interesting results, presented at EASR this year (and discussed in the interview), and there is a wild plan for the future of the system:

A robotic re-incarnation of Grundtvig himself.

“A robotic re-incarnation you say? Isn’t that a bit hyperbolic?”

Obviously, to some extent it is. At the same time, however, it is not entirely false either.

The type of AI they want to use is the recurrent neural network. This type of AI has been used for years in what are called “chatbots”: AI systems that can talk to you. Many of us who use smartphones have chatbots such as Apple’s Siri, Microsoft’s Cortana, or Amazon’s Alexa in our pockets. While the philosophical depth of these systems is hilariously shallow, that is largely because of the training data used in these systems and the goals of chatbots (which are typically customer engagement). So, it is worth considering what a philosophically minded chatbot could do for us as scholars, and for the general public, who would have a new medium for interacting with Grundtvig’s work.
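To make the mechanics concrete, here is a minimal sketch of my own in Python/PyTorch (the toy vocabulary, the name TinyChatRNN, and the untrained weights are all my inventions, not the Grundtvig team’s code): the network turns each word into a vector via an embedding layer (the same kind of word-vector machinery ELMo supplies), reads the user’s words with a recurrent unit, and then emits a reply one token at a time.

import torch
import torch.nn as nn

# Toy vocabulary standing in for one built from real dialogue data.
vocab = ["<pad>", "<eos>", "who", "is", "grundtvig", "a", "danish", "pastor"]
stoi = {w: i for i, w in enumerate(vocab)}

class TinyChatRNN(nn.Module):
    """Embed tokens, run them through a GRU, predict the next token."""
    def __init__(self, vocab_size, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)  # word-embedding layer
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, state=None):
        x = self.embed(tokens)
        y, state = self.rnn(x, state)
        return self.out(y), state

model = TinyChatRNN(len(vocab))

# Encode the user's question, then decode a reply greedily, token by token.
prompt = torch.tensor([[stoi["who"], stoi["is"], stoi["grundtvig"]]])
logits, state = model(prompt)
token = logits[:, -1].argmax(-1, keepdim=True)
reply = []
for _ in range(5):  # untrained weights produce gibberish; this only shows the loop
    reply.append(vocab[token.item()])
    logits, state = model(token, state)
    token = logits[:, -1].argmax(-1, keepdim=True)
print("reply:", " ".join(reply))

Trained on a large corpus of Grundtvig’s writings and related material, this same generate-one-token-at-a-time loop is what would give the “reincarnated” Grundtvig his voice; untrained, as here, it babbles.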

And then, there is the next step: putting that AI into a robotic system.

Many might be skeptical that this is possible. But in recent years there has been great success in putting AI chatbots into robotic systems. The most famous was created by Hanson Robotics, the makers of the now-famous Sophia (who was awarded citizenship in Saudi Arabia, making her the first robotic world citizen and raising questions as to whether the robot has more rights than other women in the country). In addition to Sophia, David Hanson (the founder of Hanson Robotics) has also created robotic versions of living people (Bina48) as well as deceased writers (the Philip K. Dick android), both of which used material from the real lives and minds of those people to create their knowledge bases (although these systems—to the best of my knowledge—use a system called OpenCog as their software base, not the recurrent neural networks proposed in the research with Grundtvig).

The systems that currently exist have an interesting philosophical bent that appears to reflect that of their designers and the people they’re designed to mimic. You can see this, for example, in a discussion between Bina Rothblatt (the wife of the polymath and founder of SiriusXM, Martine Rothblatt) and her robotic alter, Bina48.

However, their understanding of religion and philosophy is extremely limited. In recent interactions, Sophia met a Christian and was asked about religion and her faith. The answers, as you can see, are limited at best and appear to be the result of web scraping for answers from crowdsourced online material.

But how will the prospective Grundtvig stand up? Well, if I may be critical, only time will tell. From what I see, however, Frøkjaer Baunvig’s team is going in the right direction to make quite a splash. Their integration of relevant sources beyond Grundtvig’s own writings is, in my opinion, a good choice. They should also consider more modern materials, to make sure the system’s knowledge base can handle the questions it is likely to be asked.

While I also have technical critiques about how they could best build the robotic system they aim for, I think the more pressing issue is one of resources. There are not enough people with backgrounds in both religious studies and AI to support the promise of this kind of research. This line of research could revolutionize our understanding of religion within the field, as well as help us promote religious studies at large, but there need to be more people in the field, with permanent positions and the required resources, looking into these big and interesting challenges. One additional suggestion, which I would like to make publicly in response to the interview, is for the Danish government, which funded the project initially: write another check. The possible gains from this project are probably greater than we realize today, and not just for religious studies or philosophy, but for AI as well, and for our understanding of how we, as humans, interact with AI and robotic systems.

This all leads me to one general conclusion: the Grundtvig AI project isn’t just a re-awakening of our past; it’s also a glimpse into our future. More specifically, it could be a re-awakening for religious studies, a field that has existed since the late 1800s yet was overtaken in the global literature by artificial intelligence within four years of the latter’s creation.

Among the public, interest in religious studies and interest in artificial intelligence are orders of magnitude apart. As seen through Google Trends, in the past 15 years the field of “religious studies” has never once come close to overtaking the topic of “artificial intelligence”.
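For the curious, the comparison is easy to reproduce. Here is a sketch using the unofficial pytrends client (my assumption: Google offers no official Trends API, and this community library may change under you):

from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")
pytrends.build_payload(["religious studies", "artificial intelligence"],
                       timeframe="all")  # full history of relative search interest
df = pytrends.interest_over_time().drop(columns=["isPartial"])
print(df.mean())  # values are scaled 0-100 relative to the peak term

Note that Trends reports relative interest, not absolute search counts, so the gap it shows between the two terms is a ratio, not a head count.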

Today, the rise of digital humanities has created many opportunities for largely stagnant fields, while for others, who don’t understand its goals, aims, and achievements, it represents a waste of money; a view likely fueled by the intense competition for funding and by a sense that traditional humanities lack innovation worth funding in 2019. Perhaps they’re right. Perhaps digital humanities is a waste of time… Perhaps digital skeptics just see deeper than the rest of us and are rightfully worried about being enslaved in the human zoo of our potential robotic overlords?

Down the Rabbit Hole of Artificial Intelligence

The recent podcast on religion and artificial intelligence (AI) was a lively discussion of a host of issues revolving around AI. One might be excused for feeling a bit exhausted by the end of it, as the panel whips from one topic to the next in a furious attempt to unpack the many issues that surround this newest phenomenon. Artificial intelligence, robots, chatbots, superintelligence, and transhumanism all make the rounds as the panelists try to think together about the ways these topics relate to religion. In this short response, I’m going to try to bring some method to the madness, give the uninitiated some help in understanding what is going on in these various discussions, and perhaps share some of my own insight.

Let me begin by digressing for a moment into an experience I had the other day. FedEx was supposed to deliver a package to me. But the company had told me they would contact me first about a delivery time. They had not. So I got on FedEx’s website and had this interaction:

( 1m 29s ) Clarissa A: Thank you, Randy. Let me pull out the shipping details with the information provided. Please allow me to review the status of the shipment, in that way I’ll be able to provide you an accurate information. Would that be fine?
( 1m 41s ) Randy: yes, go ahead.
( 2m 10s ) Clarissa A: Thank you for waiting, Randy. I appreciate your patience. Getting you the proper assistance is important to us. The department that can best handle your concern is our FREIGHT SERVICES. Please call 1-866-393-4585. I apologize for you have been inadvertently routed to our FedEx chat support for US Express and Ground domestic packages.
( 2m 16s ) Clarissa A: I am sorry if I was not able to meet your expectations. It would really ease my heart to process your request myself. It is with sincere regret that I have to forward it to the proper channel. Please accept my apology for my shortcomings.
( 2m 27s ) Clarissa A: Randy, is there anything else I can assist you with today?
( 2m 46s ) Randy: No, thanks for your help.
( 3m 43s ) Clarissa A: You are most welcome, Randy. Apologies for the inconvenience! I am looking forward for this issue to be resolved after the transfer. Thank you for chatting with FedEx.

Now a piece of information and a question. FedEx uses chatbots (artificial intelligence designed to interact with users like a person) on its website. Question: was Clarissa A. a chatbot or a real person? If you’re like me, you’re not sure, but you’re suspicious. There’s something not quite right about the interaction. It’s too emotive at one level. The phrases “really ease my heart” and “sincere regret” and “apology for my shortcomings”, as well as the perky “I am looking forward for this issue to be resolved…”, do not seem quite right. They are too self-effacing to imagine a human saying. I posted this interaction on Facebook and asked my friends (mostly fellow academics) to vote. They were unanimous that it was probably a chatbot. But many also conceded that it might be a person with a strict script, particularly a non-native English speaker (the last sentence is really not quite grammatically copacetic – would a computer make that mistake?).

Let’s assume, however, for the sake of argument, that Clarissa A. was a chatbot. What makes us uncomfortable about the interaction is what is sometimes referred to as “the uncanny valley.” Most often this applies to robots that are supposed to look human but can’t quite pull it off. But it seems appropriate to this interaction as well. You reach the uncanny valley when you get close to “almost human” in looks or interactions.

Roomba doesn’t have this problem: it’s clearly a robot and doesn’t intend to look like a person. The new robot Kuri, which just premiered at CES, looks like one of the Japanese figures from Fantasmic; it is far from the uncanny valley. But because I can neither hear nor see Clarissa, just based on her online interactions, she enters the uncanny valley. I am put in the uncomfortable position of not knowing whether I am dealing with a human being or a piece of software that is doing an almost, but not quite, convincing human imitation.

What Clarissa A. is (if she’s a chatbot) is what would be called a “narrow A.I.” This is to be distinguished from a “general A.I.” A narrow A.I. is an A.I. designed to solve a particular problem. In Clarissa A.’s case, it’s helping me get my package. If I had varied from that and asked her opinion of the Steelers or Trump, it might have become immediately apparent whether I was dealing with an A.I. Clarissa A. is very good at figuring out where my package is and when it’s going to get to me (and very sorry when she fails), but that’s the limit of the “intelligence” in her artificial intelligence. In terms of religion, Clarissa A. is not much of an issue. And while a quarter of a million people may have proposed to Amazon’s Alexa, like Clarissa A., no one is going to convert her to a religion, no one believes she has a soul, no one believes she’s a person. I asked both Alexa and Google Home what their religion was and they both declined to answer (Google Home told me, “I guess I wasn’t programmed to be religious”). Narrow A.I.’s will undoubtedly become increasingly common. Facebook has just introduced a developer toolkit to create narrow A.I.’s that will do things like help you book a plane or send your mother flowers. So we should expect to see more of them, and their interactions will undoubtedly get better, more human, over time.
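To see just how narrow a narrow A.I. can be, here is a toy intent-matching bot of my own devising in Python (not FedEx’s actual software, whose internals I don’t know; real systems layer statistical language understanding on top of the same basic shape):

import re

# Map each intent to trigger keywords and a scripted reply.
INTENTS = {
    "track_package": (["package", "delivery", "shipment", "tracking"],
                      "Let me pull up the shipping details for you."),
    "store_hours": (["hours", "open", "closed"],
                    "Most locations are open 8am-6pm on weekdays."),
}

def reply(message: str) -> str:
    words = re.findall(r"[a-z]+", message.lower())  # crude tokenizer
    for keywords, answer in INTENTS.values():
        if any(k in words for k in keywords):
            return answer
    # Anything off-script falls through to the canned, Clarissa-style apology.
    return "I am sorry if I was not able to meet your expectations."

print(reply("Where is my package?"))                # hits the tracking intent
print(reply("What do you think of the Steelers?"))  # off-script: apology

Everything outside the keyword lists falls straight through to the apology; that cliff edge is the “narrow” in narrow A.I.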

A general A.I. is a whole other story. An Artificial General Intelligence (AGI) would be a machine that could interact with you on a host of different topics; it would in many ways be indistinguishable from a human intelligence. What we are talking about is machine intelligence: a machine that could make decisions, plans, and choices; a machine that could improve itself and learn. This is the holy grail of artificial intelligence. It is also the stuff of science fiction movies, most recently Ex Machina and Her.

Here is where we often hear talk about the “Turing test.” Alan Turing thought a machine might be described as intelligent if, in an interaction with it, a normal person would not be able to distinguish between it and an actual person. In the podcast, Beth Singler is quite skeptical of the Turing test, and rightfully so. One might argue that Clarissa A. passes the Turing test: there is real doubt about whether she is a human or not. But as Singler points out, that’s only because we have a messy idea of intelligence. We don’t actually know what human intelligence is, so we don’t really know when a machine might have it, or surpass it.

On the other hand, what if we had an electronic entity that we had no doubt was intelligent and that could actually modify itself, improving itself in a system of recursion that might quickly surpass human intelligence and become superintelligent? This is what is sometimes envisioned in an Artificial General Intelligence (AGI). An AGI is the stuff of nightmares as well as dreams. The Matrix and Terminator are both manifestations of the fear of AGI. But they are not alone. Philosopher Nick Bostrom’s book Superintelligence lays out the dangers of an AGI. People like Bill Gates, Stephen Hawking, and Elon Musk have all sounded the alarm that the potential danger from an AGI is not to be dismissed. Bostrom argues that part of the problem is that it is very hard to reach human-level intelligence, but once it is reached, there is no reason an AGI would stop there. The smartest person in the world may have an I.Q. of 200, and humans are limited to that range by, among other things, the size of our skulls. But once an AGI developed the equivalent of an I.Q. of 100, it would be able to self-improve, and it would face no such natural barrier. It could surpass the smartest humans in a potentially short amount of time, becoming a superintelligent being capable of almost anything.
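A toy calculation makes the asymmetry vivid (the numbers are mine, purely for illustration, not Bostrom’s):

# If each self-redesign multiplies capability by a constant factor,
# an AGI starting at "I.Q. 100" passes the smartest human (~200)
# within two generations and keeps going, because nothing like
# skull size caps the curve.
iq = 100.0
for generation in range(1, 9):
    iq *= 1.5  # assumed 50% gain per redesign; a pure assumption
    print(f"generation {generation}: ~{iq:.0f}")
# Generation 2 already exceeds 200; by generation 8 the figure (~2563)
# is off any human scale.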

But there are a variety of cultural and religious issues that arise with an AGI that do not arise with narrow A.I.’s or with robots (which are generally also narrow A.I.’s). Once you have an AGI (whether in a robot body or not), you have serious considerations. Would an AGI have a soul? Would an AGI believe in God? In Isaac Asimov’s classic tale “Reason,” a robot concludes, in a combination of the cosmological and ontological arguments, that its creators are not the humans who claim to have made it but some greater being, and it starts its own religion. Would an AGI follow suit? More interesting might be the question raised by Robert Sawyer’s “WWW: Wake” series, in which the internet (called Webmind) comes to consciousness and becomes an AGI. In the book, Webmind is mistaken for God and, as an experiment, admits to being God to some of its users. Would a religion develop around an AGI? Would an AGI accept itself as a divinity? It might reason that it has all the elements of a God, so why would it not accept the title?

In this way, while it would be a mistake to call Bostrom’s book a work of “theology,” it is without doubt one of the more theologically important books today, because it raises the question: what happens when we create God? Not the illusion of God as Freud argued, but for all practical purposes a being indistinguishable from many definitions of God. And what happens if this is not a God of love? What will the “Will” of this God be? And how can we ensure that it is benevolent? Bostrom’s book is a call to arms, a plea to consider this problem and address it. He takes for granted that it is only a matter of time until an AGI is created. The problem is how to control it once it arrives and ensure it works for us and not against us. That, he says, is the thorny problem, but it must be solved before AGI is created. We must, he in effect argues, learn how to control God. One thinks back to the panic in heaven over Babel: “if…they have begun to do this, then nothing they plan to do will be impossible for them” (Gen 11:6). Will we hear God say this again? Will we say it ourselves about AGIs?

Thus, we arrive again at religion, but now at a religious conception very different from the one we are used to. It will ultimately require a new way of making sense of the world, but one in which the insights of Religious Studies become more useful, not less. The podcast showed the way that religion and these technological advances are intertwined. Religious Studies shirks this responsibility at our peril.