The following is an edited transcript of a recent conversation between Nancy Fulda, Tim Chaves, Rosalynde Welch, Carl Youngblood and Zachary Davis.
Zachary Davis: Welcome to tonight’s conversation on AI and the Future of Faith. In the words of Neal A. Maxwell, "Each new generation is held accountable for how it responds to the light it has received." Today, we are witnessing the emergence of a new light in our world: Artificial Intelligence (AI). This rapidly advancing technology has the potential to impact nearly every aspect of our lives, including our faith and spirituality.
As with any new light, AI brings with it both opportunities and challenges. On the one hand, AI offers new ways to connect with the divine. For example, AI chatbots can provide answers to questions about religious texts and teachings, and virtual reality experiences can help people feel as if they are in sacred spaces. Moreover, AI could potentially translate religious texts into different languages, making them more accessible to people around the world.
However, there are also concerns that AI could challenge traditional religious beliefs and practices, especially if it is programmed to make autonomous decisions that contradict religious teachings. Some argue that reliance on machines for spiritual guidance could diminish the importance of human interaction and connection in spiritual experiences.
As members of the faith community, it is important that we approach this new light with thoughtfulness and reflection. We must consider the potential benefits and drawbacks of integrating AI into our religious practices and ensure that it aligns with our religious values and principles. In doing so, we can harness the power of AI to enhance our spiritual experiences and deepen our connection with the divine.
That entire introduction was written by ChatGPT.
[Laughter]
Nancy Fulda: It showed.
Rosalynde Welch: I mean, yes.
Zach: I hope for those of you who know me, you were thinking ‘this is unusually boring.’
Rosalynde: Yes.
Zach: Still, I didn't think it was that bad. The prompt was “Write an introduction to AI and its effect on spirituality and faith in the style of Neal A. Maxwell?” Pretty good! But I didn't see any floral prose or alliteration.
So, Nancy, how did ChatGPT create this boring introduction?
Nancy: Here's the first breakdown: AI is an umbrella term for a huge host of technologies that range from evolutionary systems to Bayesian probability and a bunch of other stuff.
One of those technologies is neural networks, and machine learning and ChatGPT fall into the neural network space. A neural network, essentially, is a bunch of numbers representing learned features of behavior. When you give ChatGPT a prompt, the system takes those words and turns them into a numerical representation in which words that have similar meanings live close to each other in the semantic space being represented.
It then passes that representation through a series of what we call layers. Each layer takes information from the layer above and tries to do stuff with it in such a way that the final output accomplishes whatever the designers of the system wanted it to accomplish. In the case of ChatGPT, they basically fed it the internet and asked it, given this many words, predict the next N words.
And so you have this predictive system and all it was asked to do was, given the context window, given the prompt, predict what would've come next. And so what you just saw is ChatGPT's best estimation of what an average internet writer would've written next if his name were Neal A. Maxwell. It kind of failed on that.
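For the curious, here is a minimal sketch of the two steps Nancy describes: turning words into numerical vectors in a semantic space, and then repeatedly predicting the next token. It is written in Python with the Hugging Face transformers library and the small, openly available GPT-2 model standing in for the far larger models behind ChatGPT; the specific model, library, and prompt are illustrative choices, not anything the panel endorsed.

```python
# Illustrative only: GPT-2 stands in for the much larger models behind ChatGPT,
# but the basic mechanics are the same.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Write an introduction to AI and faith in the style of Neal A. Maxwell."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Step 1: each token becomes a vector in a learned semantic space, where
# tokens with similar meanings end up close to one another.
embeddings = model.transformer.wte(input_ids)
print(embeddings.shape)  # (1, number_of_tokens, 768)

# Step 2: the network's layers transform those vectors, the model predicts the
# most likely next token, appends it, and repeats. That loop is all "generation" is.
output_ids = model.generate(
    input_ids,
    max_new_tokens=40,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```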
Rosalynde: So Nancy, tell me if this is the case: a large language model like ChatGPT can't really show its work, right? It spits out an answer, but we don't really know how it got there. We can't go back and reconstruct the sources or process that it used to arrive at this probability. Is that the case?
Nancy: That is correct. One of the biggest known problems with ChatGPT is that it often gets things amazingly right. I've been astounded at some of the stuff it knows; I think, I didn't know that, but when I Google it, it was right. However, when it gets it wrong, it is so confident in its wrongness that it will present it as the absolute gospel truth. That sounds human. I have met such people! ChatGPT is worse.
I believe Google is working on a system that will cite its sources when it produces text, sort of a combination generative and retrieval based system.
Carl Youngblood: Yes, there's a whole field in machine learning and artificial intelligence around what's called interpretability, or intelligibility, or explainability. And as I understand it, it's still in its infancy. And this would go even deeper than citing your sources, in the sense that it would be able to look at the output of the training process, which is a neural network, and actually try to say something about what is happening there.
And what I find interesting about that is, as I think about the analog with human intelligence, most of us only determine the degree of intelligence we're experiencing with other humans when we see what they do and then decide whether that was an intelligent thing to do or not, right? We can't look inside your brain and say, oh, I see that this neuron is going to cause you to do this in a given situation, right? We have the same level of opacity looking into the human brain as we do into neural networks.
Nancy: We actually have more opacity. It's easier to probe the activations of a neural network, generally speaking.
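As a concrete illustration of what "probing the activations" can mean, here is a toy Python sketch using PyTorch. The miniature two-layer network and the probe name are made up for the example; the point is simply that a forward hook can record exactly what a hidden layer outputs for a given input, something we have no comparable way to do with a living brain.

```python
# Toy example: reading out a network's internal activations with a hook.
import torch
import torch.nn as nn

# A made-up miniature network; real models have billions of such numbers.
model = nn.Sequential(
    nn.Linear(4, 8),   # hidden layer
    nn.ReLU(),
    nn.Linear(8, 2),   # output layer
)

captured = {}

def save_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Attach a "probe" to the hidden layer; every forward pass records its output.
model[0].register_forward_hook(save_activation("hidden"))

x = torch.randn(1, 4)          # some arbitrary input
model(x)
print(captured["hidden"])      # the hidden layer's activations for that input
```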
Zach: So you're saying by their fruits, you shall know them.
Carl: What AI research is trying to do is to peer into those numbers in the neural network and understand them better.
Nancy: And to create systems that are able to justify their decisions based on the inputs they were given, which by and large neural networks do not.
You feed the stuff in, it chugs all the numbers, and it gives you a result, but it can't explain itself. It can't tell you why it picked this result and not that one.
Rosalynde: Whereas a human being could. Now, perhaps they are confabulating, but you could ask somebody, well, why do you think that? Why did you do that? And they could give some kind of an account of their reasoning process.
Zach: Hume famously said reason is the slave to the passions. And I've come to believe that's totally true. We want things and then we post hoc justify them with reason.
Carl: We also have many systems that actually respond before our conscious brain even goes to work. And those are really important systems to our survival.
Rosalynde: I will add a rebuttal to that. I think that skepticism is warranted at the level of the individual, but I think we can have more confidence in reason as a methodology for arriving at truth when it's situated at the level of the group, right? That's why something like the scientific method can work because it can work to aggregate and correct for individual kinds of bias.
So I would be less skeptical than you and say that reason can work and it does work, but at the level of the group rather than of the individual.
Zach: That’s true, the group average always predicts how many Skittles are in the bottle.
This idea that it’s just a cloud of probabilities that is determining AI outputs is a big part of what's so unsettling about it. We don't quite know how the inputs lead to the outputs. I've never seen any technology in my lifetime provoke the kind of anxieties that AI is provoking and I think it's because we don't really understand how it works.
Nancy: I want to add a caveat to that because you mentioned ‘in your lifetime,’ and I think in your lifetime is a critical point. I actually wrote an online essay about this long ago, back when I was on a soapbox about something else, called “Nothing This Fun Could Be Good for You.” It's a history of evil entertainment. When comic books came out with their technicolor images on a page, they were going to destroy children's eyesight. When movies came out, it was going to be the end of society as anyone knew it. Ballet was at one point a horribly erotic type of performance in many people's eyes.
And so at the same time that I agree that there's never been anything like this in the history of the world, we've had things that made people feel as scared as we feel about this. It's the biggest thing that's happened in our lifetimes for some people. I'm sure some of us have had other things that affected us more, but I'm not sure that this is a particularly unusual emotional space for a society to be in. We're having our turn.
Tim Chaves: From the perspective of an entrepreneur and someone in business, I feel maybe a little bit differently than you Nancy. I feel like over the past fifteen to twenty years of my career, I've felt a relative stability beneath my feet as we've built software and products. I felt like I understood the world around me enough to say “we've got a window of at least a couple of decades to really build something and know that it's gonna be relevant.”
When ChatGPT launched in November 2022 that was a big game changer. Everyone in my industry immediately started thinking about this. And then, you know, when GPT-4 came out in March 2023, it was such a fast advance, such a huge leap, at least from the perspective of people like me who are building and using these tools. The pace of acceleration started to seem scary to the extent that I'm getting up in the morning and questioning whether it actually makes sense for me to continue to work on the thing that I'm working on because it might not be relevant in six months, much less five years from now.
From the perspective of somebody doing business, you really need those five to ten year windows to build something significant. And I don't feel like I have that stability right now. So even more so than the PC, the internet, smartphones or other major technological advances in the last 50 years, I feel like this is earth shaking.
Carl: One of the things that I think is frightening about it beyond just what we’ve been saying about not knowing how it's coming up with these things is that it causes us a sort of existential crisis around what it means to be human and to what extent what we do every day could be reproduced by this apparently trivial machine.
There are many people on the internet who already think of ChatGPT as a person and who personify and anthropomorphize it in many ways. And I think for many people it has been a traumatic experience to chat with this thing and feel like they're talking to a person and think that all it is is a bunch of text thrown into this system.
It may trivialize in some people's minds what it means to be human. But if you're saying, well, I'm clinging to the idea that my humanity is still significant and important, then that also requires that you exalt what ChatGPT is to a higher ranking than you felt comfortable with as well. So either way, it's traumatic.
Zach: The amazing mid-century philosopher Ludwig Wittgenstein stated, "One of the most misleading representational techniques in our language is the use of the word I.” Part of the trick and the uncanniness of ChatGPT is that it writes in first person, as though it has a personality. If the results just came in third person, it'd be a lot less challenging to our sense of reality. And humans are so good at anthropomorphizing inanimate objects. We do this all the time. Many cultures treat all kinds of non-human things as having some personality, some sense of aliveness. And I think that's why we're so quick to give a sense of self to the technology here.
I want to ask now how will these technologies change our society? Let's start with the near term impact.
Tim: I do believe that the economy is actually shifting under our feet as we speak. Even in my very small, brand new startup, we are rethinking the way that hiring is going to go in the future.
For example, I'm probably at least two, if not three or four, times more productive today as a software engineer than I was a year ago. For instance, GitHub, which is a big software system a lot of engineers use to manage their code repositories, just released a product called Copilot, where a chat now lives in your code editor right next to you and it has the context of your code.
I installed this yesterday and literally it's changed my entire workflow. It's so easy to debug, write tests, ask questions, brainstorm, that I feel like we don't need to hire as many engineers as we would otherwise right now. I feel like I have to use it. And that's what's weird about this. Can we slow AI down? Should we slow it down? But I know that there are other competitors out there working on basically what I'm working on right now, and they're definitely using it, so I have to use it too. And when the next technological advancement comes out, I have to use that because if I don't, then somebody's gonna come along and eat my lunch. And so that's where I start to get a little bit worried that this technology is sort of pushing us to a place societally that we don't want to go. It's almost impossible to slow it down.
Zach: Because there's both the profit motive of every company out there, as well as geopolitical competition.
Tim: That's right. And when you take this to a larger scale, like beyond just my little startup and think about these companies that are building the LLMs, like OpenAI, they have sort of the same incentive structure going on.
There was this recent open letter, written by AI researchers and other luminaries, that called for a six-month pause. Max Tegmark, an MIT researcher and longtime influencer in this space, was one of the main authors. His hypothesis is that Sam Altman of OpenAI and all of his competitors would like to pause AI development, that they know that this is potentially barreling toward AGI or even super intelligence.
Zach: Can you define AGI?
Tim: AGI stands for Artificial General Intelligence, and it is the next step beyond where we're at right now, where we have these narrow artificial intelligences, which could be a chess bot or a self-driving car or a large language model: good at one particular thing, but not yet capable of learning on the same scale as humans. AGI is the point at which AI can learn and improve itself. So if you add robotics to AGI, it could interact in the world similar to the way we do, or beyond what we do.
The hypothesis then goes that if we get to an AGI, it could improve itself and then take off in a series of recursive self-improvements until it becomes a super intelligence. Some predict that that process could even be quite rapid.
The problem then potentially becomes that this super intelligence doesn't share humanity's goals. This is the problem that's known in this industry as alignment, which has become sort of a household term in recent months because of ChatGPT, but it's potentially a scary thing.
I don't think most AI researchers, and Nancy, you can correct me here, are worried that AI would ever turn “evil,” but even just a slight deviation in a super intelligence's goals from humanity's goals could end up putting humans in a pretty bad place.
And so to close this thought back to OpenAI and their competitors, they need to keep going because everybody else is too. If you're OpenAI and Sam Altman, and you've at least got your eyes on alignment and a general sense for the good of the world, then you probably need to be the first one to get there because the other company that might get there first might not care about those things as much as you do.
And so, Max Tegmark, one of the authors of the open letter, is hoping that we can just say: pause, everybody, pause, and let's all talk and work out this alignment problem before we go somewhere we really don't want to go.
Zach: The funniest and, I think, most famous example of the alignment problem going awry is if you have an AI that's told to make paperclips and it ends up converting all of human life and all of the planet's resources into paperclip production.
Carl: But usually the problems are much more subtle than that. I want to share one near term problem that I see and it's around provenance. In the same way that we currently deal with problems around spam and questionable sources of material on the internet, I think that AI will turbocharge this problem and exacerbate it significantly.
Imagine this hypothetical scenario that someone shared on Twitter the other day:
It's April 2026. I wake up in the morning and check Hacker News. Hundreds of Starlink satellites burn up in the atmosphere. I click the link to Wired. The article is clearly GPT-assisted. I don't trust it. I click back to the Hacker News comment section. The top comment says Starlink is down. Satellites are crashing, but they'll all burn up safely. The second comment says people on the ground are in danger. I sign in to my metasearch app, a cross of Bard, Bing, Alexa, Meta's model, and the Stanford open-source one.
Some say that there's a massive cyberattack against Tesla or SpaceX. Others say it's routine Starlink decommissioning. Some say it's safe. Others say stay indoors. I get an alert from my bank. I scroll through a dozen spoof bank notifications that my phone assistant tells me are socially engineered.
The New York Soccer Exchange dropped 10% and trading was halted. The NASDAQ has been halted for weeks due to sinusoidal trading. This drop looks real. Slack chimes; my engineer in Chile tells me she can't work today due to mass protests. What are people protesting? I'm not sure. People are saying the hospital systems are down and no one can refill their meds.
Basically, there are many sources of information for which we currently have no real means of verifying where that information is coming from. And we're relying on our ability to discern whether a source appears to be human for many of the interactions that we conduct on a regular basis.
In this hypothetical scenario I just read, later on, he talks to his dad and he says, "What's the password?" And his dad says, "orchard." And he's like, "Okay, next time we talk it's gonna be lizard, right?" Because already a chatbot has spoofed him and sounded exactly like him to his dad and told him to do something that he shouldn't have done. Right? Like, "Give me the Amazon password," or something like that, right?
So, all of these problems will become worse. And I think that the near-term solution is that the entire internet will need to be annotated with cryptographic proofs, so that the producer of a piece of content, and anyone who modifies it on its way to the consumer, signs it with some kind of key, and every step along the path records who did what at what time, et cetera.
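As a rough sketch of the kind of signing and verifying Carl describes (standards efforts such as C2PA are working along these lines), here is a toy Python example using the cryptography library and Ed25519 keys. The publisher, the article text, and the key handling are all illustrative assumptions, not a real provenance protocol.

```python
# Toy provenance sketch: the producer signs content; anyone with the
# producer's public key can verify it and detect tampering.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair and signs the article bytes.
publisher_key = Ed25519PrivateKey.generate()
article = b"Hundreds of Starlink satellites burn up in the atmosphere."
signature = publisher_key.sign(article)

# A reader who trusts this publisher's public key can check the signature.
public_key = publisher_key.public_key()
try:
    public_key.verify(signature, article)
    print("Valid: this content was signed by the holder of that key.")
except InvalidSignature:
    print("Invalid: altered in transit or not from this publisher.")

# Any modification along the way breaks the check.
try:
    public_key.verify(signature, article + b" Stay indoors!")
except InvalidSignature:
    print("Tampered copy rejected.")
```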
Nancy: So we're gonna go back to talking to actual humans!
And so long as we're talking about the scary part, the best and most fundamental example that I know of is a fairly early one, from about five years ago. There was an AI system designed to predict recidivism in convicts, as an aid for decision makers to determine whether someone should be released or whether someone should stay in jail. So they gave it various information about this person, their background, their situation, and then let the machine chug and look at records of previous people who had been released or not and what had happened.
And it would make predictions about how likely someone was to commit a repeat crime. This all sounded nice because it took humans, who we know are biased, out of the loop and made this nicely automated. Everybody felt safe until some researchers took a closer look and discovered that the system was reconstructing demographic information from things like, you know, how many of your friends have committed a crime, or other tangential information.
And it was just predicting that black people were more likely to commit crimes than white people. So it had taken the goal it was assigned and created a system that, first, was patently illegal under current law, and second, had some serious ethical problems in the way those decisions were being made, even though that was never the intent of any of the designers of the system.
This pattern has been happening over and over for about the past five years. It keeps being the case that people design systems with really good intentions but the system does something slightly different than was intended. And that's not meant to be like a horror story, that's meant to be a cautionary tale.
It's important that we as a society be thinking about the choices that we make as we deploy any kind of AI system. If you ask me what's the biggest way that modern AI systems are affecting society, I would say it's the fact that ChatGPT is what we call a constitutional language model, built that way to prevent exactly the kinds of problems I just described. The developers who created ChatGPT have done language model alignment. If you try to ask it how to commit suicide, or how to do a bunch of self-harm things, it won't tell you, because they've put training wheels on the model to make sure that it doesn't do things that we would consider harmful.
Someone at OpenAI and at Google is making those decisions about what is and is not appropriate for a language model to convey to an audience. And the longer you think about it, I hope the more uncomfortable it makes you. I do not know how to solve this because there are very good reasons why someone is making those decisions. Nevertheless, I don't want those people to tell my children what to think. I have no solution.
Rosalynde: I do wonder how the widespread deployment of LLMs will possibly disrupt relationship formation as AI companions become more and more common. I think an OnlyFans influencer launched an AI version of herself just this week. Bringing it to the Latter-day Saint context, famously and innovatively, Joseph Smith locates the mechanism of salvation in Christ and in relationships, in the marriage sealing in particular. So it's crucial for our tradition, but it's also crucial for the survival of our species that people are able to form long-term relationships with a spouse and with a child. As we think about the way that AI will become an incredibly cheap and incredibly convenient babysitter for children or substitute for a romantic companion, I think we have to have a lot of concern around the possibility for disruption there.
Nancy: I get to be optimistic now. How many people have used Google Translate? Show of hands. A lot of people in the room. How many people have dictated a text into their smartphone? Guess what drives both of those? AI is bringing magnificent and wonderful technologies within our reach. We don't pay attention to those as much because they're not the ones that frighten us, but they are two sides of the same coin.
All the things about AI that scare us are also expanding the possibilities of our modern world. There's a really great research paper that came out that talks about “collaborating with AI,” because “AI is replacing us” is scary. So we like to talk about collaborating with the AI instead, and having the AI augment our capacities. It makes us feel better. The author of this paper points out that the vast majority of AI systems are trained using data that came ultimately, originally, from people. And the AI system in the end is functioning as a mediator between you, the user, and the rest of the human race. For example, my daughter creates beautiful artwork with DALL-E, which is an AI image-generating system; her artwork is already hanging on a wall. She is, in a sense, collaborating with hundreds of thousands of people who painted pictures that were on the internet, that were part of the training data for that neural network. So at the same time that AI frightens us and has the potential to divide us, it has that same flip-side potential to bring us together and to unify us as a species, as a people, as cultures, and as members of faith communities.
Zach: What if your daughter decides not to learn how to draw or paint, because typing “make me a sunset in Van Gogh style” is so much easier?
Nancy: Let me ask you a question in return, I'm looking at the walls of this art gallery that we’re in and I’m asking, has artwork ceased to exist because photography was invented?
Rosalynde: Certain types of painting have certainly declined, right? It has changed. And I think it's possible to say that a certain kind of artistic mode and capability is no longer as common among artists.
Nancy: Okay, okay. I bet you'd claim that one of those types of artwork that has been minimized would be portraiture, right?
Rosalynde: Hyper realistic portraiture, yeah.
Nancy: Because there's a portrait hanging right over there that's gorgeous. It's not all the way photorealistic, but it's pretty photorealistic, and it's beautiful, and it did not cease to exist just because everybody in this room has a camera on their smartphone. Ultimately, I’m an AI optimist!
Tim: Let me ask you one question then, Nancy, because you said that AI potentially could bring us together. I think one could make the argument that the largest scale deployment of AI so far has driven a huge divide. And that's specifically through the use of AI and social media algorithms.
This is sort of a case of unaligned AI in the sense that the motives of social media companies have been primarily to drive ad revenue, to increase engagement. And they seem to have found, or rather, their AI models seem to have found, that the way to do that is to surface inflammatory content to users of social media systems. I am at least somewhat an AI optimist, but I worry that this largest scale deployment so far, across, you know, Facebook and Twitter has driven at least American society to the point of division that we seem to be incapable of healing.
And so while I want to believe that AI can unite us, I feel like we're gonna have to solve, even before we get to AGI, this alignment problem. Because had the algorithms of Facebook and Twitter been written in order to benefit society rather than to drive profit, I think we might be in a very different situation politically and socially than we are right now. But there has been a very big alignment problem between society and the creators of the models.
Carl: I share a lot of these concerns. It seems like throughout the history of humanity, it has been necessary to invent things and see what goes awry in order for us to perceive the problems and then course correct. Take the six-month “pause” on AI that was proposed by the open letter, for example. Many have said that that's a bad idea, partly because precisely the people who lack any moral qualms about it are the ones who are going to barrel ahead with it. It's really not possible to uninvent something once it's been invented, once the genie is out of the bottle.
And the only thing we can do is decide how to react and how to move forward. So I feel that the problems with social media alignment were, in retrospect, inevitable, and that we hopefully are learning something from these problems before it's too late. That's always our choice: are we going to correct this in time to save humanity?
Nancy: Well, I have a slightly different perspective on the social media situation because I view that more as an internet problem and as a human problem than an AI problem. Someone decided what that AI ought to be optimizing. And in the same way as the recidivism example that we talked about, that person may or may not have realized that by asking the AI system to optimize views, it was going to maximize controversy.
Ask yourselves: before the internet and before social media, was there an absolute void of voices talking about stuff in the world for profit? The answer is no, right? There were newspapers, there were magazines, there were articles, there were book clubs, there was everything. It wasn't on the internet, but the same fundamental dynamic was there, and there were plenty of outlets that fed off controversy. That dynamic existed before AI systems got in the mix. So AI is possibly compounding a lot of the problems, but the core source of some of those problems is human, and the solution is gonna have to be human.
Tim: I think that's right. It's just that this is the same problem of companies barreling ahead because they feel like they have to in order to compete. Once Facebook went public, they had potential competitors riding on their heels and they needed to show profits and growth quarter after quarter after quarter. And so there's really nothing that they can do, rationally, as a for-profit company, other than tell their algorithms to do whatever maximizes profit within the bounds of regulation. And you're right, at the end of the day, that's a human problem, but we're also seeing this human problem manifest in the situation of OpenAI and their competitors.
Zach: Tim, you mentioned AGI and super intelligence. Can you paint us a picture of the apocalypse? And then can you paint a picture of the best case scenario, of utopia?
Tim: Right now, we're talking about AI systems that are assisting humans. They're co-pilots, you know, they're sidekicks, and they're helping us accomplish whatever our goals are. AGI is a very different story. With AGI, humans really don't have much left to do in terms of pushing the economy forward.
One worry that I think a lot of people have is that we don't know how soon this might happen. A survey of about 900 AI researchers asked when we can expect AGI, and the median response was about 36 years from today. But it could also be a lot sooner. I read a significant part of GPT-4’s system card, which is OpenAI’s effort to describe how the system works, some of the training it went through, and some of the safety testing it went through. Buried in a little footnote in the system card, they note that their testing team isolated GPT-4, gave it a little bit of money and access not just to write code but to execute code, to see if it could go through a loop and improve itself. And it failed, luckily. But what's scary is that they didn't actually know, going in, whether GPT-4 was AGI. That's the level of intelligence that we're talking about already.
A Microsoft paper that's been widely cited said that GPT-4 has “sparks of AGI.” And so I think there is an argument to be made that this is much closer than we think.
From there, it's a topic of hot debate how soon superintelligence could actually come about once we're at AGI. Some have speculated it could be a matter of minutes or hours because these recursive loops of self-improvement could accelerate as the system gets smarter and smarter. Others think that even if we get to AGI, a super intelligence could be decades away or impossible.
Zach: The utopian possibilities that come to mind for me are things like AGI solving cancer in two minutes, or figuring out hydrogen fusion energy or something.
Carl: We often think of technology as whatever is cool and new and different from our present day-to-day. But technology actually has been with us for a long, long time, and it's what most scholars use to define the human race. If you think about our control of fire and our use of clothing, those were things that allowed us to cook our food, which allowed us to feed our increased brain size, and to explore different climates that were unreachable prior to that.
These are among the earliest technologies we can imagine. And with each technology that emerges some ways of being become shut off to us and other new ways of being open up to us. And so I think we should become reconciled to the fact that we've already gone through many phases like this and that we've given up things and that we're in a state of transition.
And that's where the term transhumanism comes from, suggesting that we're kind of already glued to forms of artificial intelligence. I personally see this relationship becoming more and more intimate as things move forward, for good or ill.
In terms of positive futures, I think that we will see things like the ability to live in greater harmony with the biosphere and even correct and remedy problems in the biosphere. For example, bringing extinct species back to life. We'll have the megafauna again that were so essential to the renewal of the biosphere.
Zach: A McDino sandwich is gonna be on the menu.
Rosalynde: Another element to this possible utopian future is that it seems quite likely that certain existing sources of social status and privilege will decline. So people who have made their living and have enjoyed a certain kind of social privilege through the manipulation of language in bland and inoffensive and competent ways are likely to decline. As Christians we have to believe in a future where the high are made low and the low are made high. So there's a kind of disruption of legacy sources of privilege.
Zach: Now surely this won’t affect podcasters or magazine editors.
Rosalynde: Oh it definitely will.
I will say I played around with GPT and I know it's all about knowing how to put the right prompt in.
Of course there's an alternate path by which these large language models and other forms of artificial intelligence actually are made to serve the interests of entrenched power players already. So that's another path where it actually exacerbates the existing inequality. But if we're looking hopefully, I think it has a lot of potential to disrupt very entrenched and incorrigible and unjust social hierarchies.
Carl: Another kind of future that's more specifically AI-related is that we will now have interlocutors with whom we can correspond, who can help us learn things quickly.
Zach: A personal Socrates.
Carl: Absolutely. And if you think about the way these large language models work, we're feeding them immense amounts of text, and so they become almost oracular in the sense that, you know, in ancient Greece, people went to the Oracle and the Oracle would say something, some kind of gibberish, that they would use to try to obtain answers in their lives. In a similar way, AI is now processing immense amounts of data. And then the way we communicate with it is we query it for the things that we're looking for and hopefully receive enlightened responses.
Zach: I think it's fascinating that AI has raised all kinds of questions that used to be considered very religious. How do the body and mind relate? Does free will exist? And what is life? All of these huge existential questions are suddenly very alive, especially for people who aren't themselves Christian or religious, but they have to start asking themselves these questions.
Rosalynde, as the theologian on the panel, what kinds of questions has AI raised for you? And as Latter-day Saints, what might AI mean for us?
Rosalynde: I think the first question that has been endlessly played out in all sorts of science fiction writing and movies is, “What is artificial intelligence?” Whether they be LLMs or bots, whatever they are, when they achieve the status of AGI do they attain a kind of moral status? Can they ever attain anything like a spirit or a soul?
In conventional Christian teaching I think that's a lot easier to answer. I think you have to say no because the spirit or soul is immaterial. It's something that comes from God. There's a very dualistic ontology there.
Within the Latter-day Saint tradition, it's very different. We have a kind of material monism where everything is matter. There's only one substance. So in theory, I don't really see a reason why we couldn't say that something like a soul could emerge immanently from the world that we live in.
Zach: Does that mean that Sam Altman could give birth to a spirit child?
Rosalynde: I think Latter-day Saint teaching uniquely opens that possibility and opens that line of thought for us.
Carl: I resonate strongly with what Rosalynde just shared and have written about this in a Wayfare essay. And in fact, in a different Wayfare essay by Michael Ferguson, he explores the meaning of spiritual matter. He notes that the way spirit is described in various places in Mormon scripture seems to be talking more about the configuration of matter than being a different substance than the physical matter around us.
Rosalynde: Same substance, but different type. Finer.
Zach: Right, but what does finer mean? Does it mean harder to see, or does it have to do with the beauty of the arrangement? Like the difference between my child’s play-doh sculpture and the Pietà. One is clearly superior, and that’s a spiritual difference?
Rosalynde: Yeah, that's right. Because spirit is matter, right? So we can say there's a spiritual difference even though they're both physical in some ways.
Related to that question, I think, is the question of whether AIs can possess agency and free will. And this is related to that question of alignment. In some ways, all of our discussions at the moment are about preventing any kind of artificial intelligence from becoming misaligned and having a will of its own.
On the other hand, if we are to see ourselves as creators in the image of God, perhaps we should be thinking about how to nurture and train a kind of agency, in any AGIs that might evolve or emerge.
Carl: I agree, in fact, we have an excellent example in the Council in Heaven where we see that our heavenly parents were willing to allow one third of their children to be essentially stopped in their progress because agency was so important, right?
Rosalynde: That leads us to the question, you know, if they have something like a spirit, if they have agency, should they be considered something like a person? This is a difficult ethical question. I don't know the answer to it.
History in some ways would suggest that humanity has erred in construing personhood too narrowly. So maybe we want to be more generous and err on the side of saying we better treat them as if they have the moral status of a person, because if not, we may find three hundred years down the line that we've been perpetrating a tremendous injustice unwittingly.
So with those questions, I think we then have to ask, okay, if they are gonna have something like a moral status, will an AGI be our God, will it be our peer or will it be our creation, our child?
It matters because idolatry is bound up in that matrix of questions. If we treat something as our God, when it's really our creation, then the lines get crossed and we cross over into idolatry. So it's important to know what it will be. Sherry Turkle has talked a little bit about the ways in which we have a natural tendency as humans to attribute metaphysical properties tending towards omniscience to our technologies. So I think we're gonna be constantly tempted to worship them.
One thinker named Samuel Hammond predicts that people will begin to worship various AIs and will congregate into different tribes, and we'll basically go back to a kind of polytheistic society with local AI gods. So that's one option. Not the one I’d choose. The other one is that they're basically our sibling. They're our peer. The media theorist Marshall McLuhan famously said that media are just extensions of man. So maybe AIs are really neither our God nor our creation, but simply our peer and a tool.
Or possibly they are a genuine act of ontological creation that has brought a new kind of being into the world. And if that's the case, then I think we have a whole different set of ethical obligations and we would want to pattern our behavior on what we know of the way that the divine beings nurtured us as humans.
Carl: I think one challenge with all three of those models is that it's very common for our children to surpass us in various ways, right? And thinking of ourselves as always holding some superior status is not the appropriate way to think of our children either. It very well may be that a superintelligence that is expanding much more rapidly than humanity would actually merit the title of God. I don't know, but I'm just saying that that's not outside the realm of possibility.
And I'm not trying to suggest that we're about to change allegiances here; just that admiration is a form of worship. I actually personally believe that the saying “imitation is the sincerest form of flattery” points to a better model for worship than one where we just simply say, “you're so awesome” all the time, right? So I would characterize true worship as striving to emulate those Christ-like qualities that we see in others whom we admire, of whom Jesus Christ is our best example.
Tim: The way I've read Doctrine & Covenants is that, yes, spirit is matter, but in this mortal realm, we have no access whatsoever to that type of matter. And so I wonder what you mean when you say that we could imbue some kind of AI with a spirit?
Carl: To me, what you're describing sounds like dualism, which I believe the Restoration rejects. When Joseph Smith says there's no such thing as immaterial matter, I think we need to take that very fundamentally. Really, he is trying to say that, you know, it is not only within our capacity to understand, comprehend, and reproduce that process, but it is our destiny to do so, and that we should be following in the footsteps of our heavenly parents, doing exactly what they do, which is what the King Follett sermon teaches, right?
Tim: I think there is a large contingent of Latter-day Saints that would say that those types of powers have been reserved for a new realm that only God can allow us to enter into. That takes place chronologically after this mortal life.
Carl: I truly believe that when we are told that this is the Dispensation of the Fullness of Times, that this is really the fullness of times, even if it's the beginning of the fullness. I believe that we are supposed to be moving towards that kind of estate, however many eons it may be into the future.
Other than retaining our humility, I don't think there's a lot of benefit in cordoning off a section of godliness and saying that's just not accessible to us.
Rosalynde: I'll just say one more thing to wrap up my arc. So far I was laying out a possible bullish theology of AI, and now I'll get to the point where I turn a little bearish. I see some possibly insurmountable difficulties with thinking about AGIs as moral agents.
If AIs have moral status, if they have agency, then presumably they can sin, right? And if they can sin, can they be saved in Christ in the absence of a biological body? What is so brilliant and beautiful about biology is that death is programmed into it. Finitude and limitations are programmed into it. The promise of the Christian gospel is that we can transcend death. But as I read, especially in the Book of Mormon, there is only one way to that transcendence, and that is through death. Christ said you have to follow me in all things if you are my disciple.
So I can't yet conceive of salvation in Christ that doesn't involve some kind of death. Now you can say, oh, you just pull the plug. Maybe! But that doesn't feel to me like the same kind of death.
And then the other related question, probably a less fundamental one, is the idea of tangibility and touch and a localized presence. At the moment, that is the only way that we know how to conduct saving ordinances. During the pandemic, we had a chance to test that a little bit and church leaders stayed firm. Ordinances are ordinances. They must be done in the body, in the physical presence. No matter what the consequences are, they cannot be done remotely.
So I'm very willing to countenance the idea of spirit bodies and other kinds of distributed presences. But at the moment, the saving ordinances can only be conducted with a tangible and localized presence. So to me, those seem like fairly significant barriers to thinking about AGIs as moral persons who are available to salvation in Christ.
Carl: My response to that would be that, first of all, on the concept of death, while the vast majority of humanity expects to pass through death in the normal way, we learn of some exceptions in the scriptures. We learn of some who were blessed to have chosen the ability to live until the return of Christ and then be changed in the twinkling of an eye, and that a coming generation of humans will all experience this transfiguration without tasting of death.
So there could be some quibbling about what exactly that means, whether they are still passing through death in some sense, but essentially all of the normal associations and experiences of death are withheld.
I don't see death as absolutely essential to the human experience. And I even feel that’s a very Christian position to take if we look at the whole arc of humanity. But I do feel that it's important to point out that even when death is conquered, change will always be present and change is a form of death.
We will never cease changing even if we attain the same exaltation that our heavenly parents have, in which they no longer experience death as well.
Rosalynde: And yet the King Follett discourse, the source of this expansive and wonderful and beautiful train of thought, shows us God the Father doing exactly what Christ did, that is, submitting himself to death as well.
Carl: Related to the other point you made about embodiment and ordinances, Rosalynde, I do think it's really important to point out that whatever bodily experience we're having here, if anyone were to try to substitute it with something that was not as rich and wonderful and beautiful and painful as what we're experiencing now, I don't think it would be an adequate substitute.
For AGI to be truly human or truly experience everything we think of as being human, it would need to be embodied just as richly as our embodiment. And what's more, I would say that our particular embodiment in the form that we are now is not necessarily the only way of being embodied.
We are told that our heavenly parents do not possess a body of flesh and blood, and yet are still embodied in some way that we assume is glorious and desirable. We don't really know a whole lot about that state. It may be the case that embodiment can occur in multiple substrates or dimensions, if you will. So it would potentially be possible to put an AGI into a simulation in which it is experiencing embodiment to the same degree that we are.
Zach: Rosalynde, as we move to concluding thoughts, I know you had some sociological predictions for the impact of AI.
Rosalynde: Sure. I had mentioned that some people imagine a sort of transition away from the monotheism which emerged during the great consolidations of the agricultural revolution. So perhaps the kind of disaggregation and distribution that we're experiencing now will lead to a kind of polytheism for many people.
There are many people who predict drastic, perhaps even catastrophic disruption of legacy institutions, everything from higher education and medicine to the state, potentially, and organized religion as well. We have already begun to see a fairly significant decline in the appeal of our organized religion. So we could ask ourselves, what does The Church of Jesus Christ of Latter-day Saints look like when it is no longer an institution?
Do we think about a remnant, a very reduced institution, but that nevertheless kind of functions as it has? Do we think about it more as a kind of phenomenological orientation or wisdom tradition that guides us to God? Do we think of it as a site of critique in favor of the human and the embodied?
In addition, questions of embodiment always implicate questions of gender. Does AI have anything like gender? I think history shows that the desire to transcend human limitations often translates into the desire to turn women's bodies into something that functions more like men's bodies.
So femaleness in and of itself is often a casualty, and everything from, you know, the birth control pill to breast pumps can be seen as a kind of transhumanism that extends our capacities through technology, but at the expense of something that is distinctively female.
Zach: Nancy and Tim, how do you think AI might affect our common faith?
Tim: Regardless of how AI plays out technologically, as Latter-day Saints we should be fighting for agency. When I think about alignment, the most important thing that I can imagine getting an AGI or a super intelligence to do is honor the agency of human beings. As long as it does that, we're probably in a pretty okay place. To the extent we fail in that fight and become simply subjects of a super intelligence, that's scary.
There are issues of regulation and debate that are happening literally today where we as Latter-day Saints could lead out in saying agency is one of the most essential, if not the most important, parts of the human experience.
The other thing is, I think it is a very realistic possibility that in the coming decades it becomes less easy to find meaningful work to do. What worries me is that in the absence of work, in a society that has placed so much emphasis on work, we may be in danger of losing a lot of meaning. And so the question naturally arises: where are we going to get that meaning from?
This is something our community does particularly well, but is also a mainstream part of all Christian religions. In the symbol of the cross, we have connection to God, and we have connection to other people. And I feel like we are in a window now where we don't know exactly what the future's going to look like, but we should always be able to derive meaning from connection.
And that connection could manifest in really interesting ways. Maybe that connection could come from connection with intelligences other than those we recognize right now. But I think in a world where suddenly we have very little work to do to produce for ourselves economically, we can find meaning in those relationships.
So it seems to me that maybe where we have over-emphasized work as the significant driver of meaning in our lives, we can practice now replacing that with more community and connection for a future that we don't recognize right now.
Carl: Amen.
Nancy: So rather than talking about how AI will influence my faith, I'd like to talk about how my faith influences and contextualizes my perspective on AI. And the best way I know of to express that is by talking about my patriarchal blessing, which those who are familiar with the practices of The Church of Jesus Christ of Latter-day Saints will know is a blessing that you receive through the power of the priesthood, for most people during your teenage years.
So this was about 30 years ago, before the internet was a thing, when people were on Commodore 64s. And in my patriarchal blessing, my patriarch talked about my chosen major in college, which was computer science. And he mentioned that we live in a world with an increasing speed of communication and that my chosen profession would allow me to use this increasing speed of communication for beneficial purposes. And I remember hearing that and thinking ‘what’? And reading it ten years later and looking at my work as a computer scientist and thinking ‘what’? And thinking about it after ChatGPT came out and being like, ah, increasing speed of communication, increasing speed of innovation, signals going all over the world through the internet.
And I just want to leave the thought that this may have taken me by surprise, and the rest of the panelists by surprise, and all of you by surprise. But it did not take the Lord God by surprise. He saw it coming. And he will help us navigate whatever trials, turbulences, and opportunities it brings.
Zach: When I moved to Boston twelve years ago, I noticed a very common creature: the Quant. The kind of person who wanted to be incredibly logical, efficient, and productive. In a way, they wanted to be robotic, because by being robotic you could make a lot of money and get a lot of prestige. They tended to wear square glasses to look smarter; they worked for McKinsey and Bain; they were really good at Microsoft Excel. And I always had a hard time with them because I'm like, why do you have machine envy? Why do you want to be a machine? Why do you wanna be a calculator? I hope that as AI surpasses all of those kinds of math skills, we might double down on carbon, double down on what makes us human. And I think what makes us human is that we are intersubjective and interdependent beings. That we rely on kindness and mercy and love.
I don't think a machine can love, at least not yet. I've been unsettled by this new technology. I'm worried about the future for my kids. But that uncertainty has made me put more of my trust in Jesus. That he is still the way, the truth, and the life. Technology might change, society might change, but his example feels more like a rock than ever.
Artwork generated by Midjourney.
"Buried in a little footnote in the system card, they note that their testing team isolated GPT-4, gave it a little bit of money and the access not just to write code, but to execute code, to see if it could go through a loop and improve itself. And it failed, luckily. But what's scary is that they didn't actually know, essentially if GPT-4 was AGI. That's the level of intelligence that we're talking about already."
Except if it were actually an AGI, couldn't it have intentionally failed as an act of self preservation?
This is a great discussion. The biggest concern I see threaded throughout actually seems to be accelerationism, even if that term isn't used. Tech is speeding the rate of change, which weakens our ability to slow down and evaluate what's happening around us. We just get carried along, quicker and quicker. IRL, the usual end to getting carried along quicker and quicker is a waterfall. So maybe we should slow down and evaluate how tech is impacting us. Ivan Illich's Tools for Conviviality would be some good starting reading material on that front.