The following is an edited transcript of a recent conversation between Nancy Fulda, Tim Chaves, Rosalynde Welch, Carl Youngblood, and Zachary Davis. Zachary Davis: Welcome to tonight’s conversation on AI and the Future of Faith. In the words of Neal A. Maxwell, "Each new generation is held accountable for how it responds to the light it has received." Today we are witnessing the emergence of a new light in our world: artificial intelligence (AI). This rapidly advancing technology has the potential to affect nearly every aspect of our lives, including our faith and spirituality.
"Buried in a little footnote in the system card, they note that their testing team isolated GPT-4, gave it a little bit of money and the access not just to write code, but to execute code, to see if it could go through a loop and improve itself. And it failed, luckily. But what's scary is that they didn't actually know, essentially if GPT-4 was AGI. That's the level of intelligence that we're talking about already."
Except if it were actually an AGI, couldn't it have intentionally failed as an act of self-preservation?
This is a great discussion. The biggest concern I see threaded throughout actually seems to be accelerationism, even if that term isn't used. Tech is speeding the rate of change, which weakens our ability to slow down and evaluate what's happening around us. We just get carried along, quicker and quicker. IRL, the usual end to getting carried along quicker and quicker is a waterfall. So maybe we should slow down and evaluate how tech is impacting us. Ivan Illich's Tools for Conviviality would be some good starting reading material on that front.