“For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known” 1 Corinthians 13:12
Writing to the saints in Corinth, who lived under the shadow of the ruins of Acrocorinth, reminders of a once upright, rigid, and thriving community built high on the hillside, Paul offered a description of our reality that has always made for a compelling thought experiment. For many, the allusion to Plato’s ‘Allegory of the Cave’ or to the Eleusinian mysteries was plain to see. What is less certain, however, is the reason for our blindness. Clues, signs, and instructions are all around us. And yet most, if not all, of it remains utterly incomprehensible and indecipherable.
We live in an age of information overload. Just as in the past, aided at each turn by revolutionary technological advances—printing press, telegraph, telephone, computers, internet—it seems that the production, collection, and dissemination of information sits on the cusp of another and potentially unprecedented leap forward. Artificial intelligence promises to exponentially increase the availability and production of information in all of its forms, while also rendering our interface with it more difficult.
Paradoxically, perhaps, as information grows in complexity and volume, the delivery mechanisms for it, whether text, audio, or video-based, continue to shrink in step with the decreasing attention spans and ever-narrowing echo chambers of its users.
If “the medium is the message,” as Marshall McLuhan was keen to point out, then one wonders what the message is. These, of course, are not new problems, but the democratization of information, the proliferation of new AI-supercharged means of production and delivery, and the erosion of trust across a variety of institutions and social norms make for an explosive combination.
Instead of producing better-informed and more discerning voters, customers, and believers, we often witness mass hysteria, groupthink, the embrace of widespread conspiracy theories, atavistic ideologies, and folk myths, as well as the perpetuation of collective errors, socially constructed identities, and enforced conformity.
It is against this backdrop that the Church of Jesus Christ of Latter-day Saints released, on March 13, 2024, a set of guidelines that aims both to explain how the Church intends to make use of AI and to counsel members on how to protect themselves from the deluge of misinformation and disinformation. Promising transparency, accountability, privacy, and security in the way the Church intends to use AI, the guidelines echo a similar set of principles–transparency, inclusion, responsibility, impartiality, reliability, security, and privacy–laid out by the Vatican in its 2020 Rome Call.
The Church guidelines are a good and necessary first step toward setting standards for engaging with AI. Their focus for the moment remains the creation of a framework that helps Church officers, employees, and lay members use AI in an ethical way. As a fuller picture of AI-related challenges and opportunities emerges, both within our religious community and in society more broadly, I suspect we will find that a deeper and more sustained consideration is required of the Church, as an institution and as a community of believers, one that may require it to look beyond its immediate milieu and engage with the broader global conversation on the governance of AI.
For the moment, however, the overarching concern, and perhaps rightly so, is the immediate impact that AI may have on members as they are confronted with AI-generated information aimed at sowing mistrust and doubt. Addressing these specific concerns, Elder Gerrit W. Gong of the Quorum of the Twelve Apostles recommended that the best defense against these threats consists of relying on the Holy Spirit, wisdom, and trusted sources.
Elder Gong’s reference to trusted sources, and to trust more generally, draws attention to one of the 21st century’s most troubling developments: the erosion of social trust at an alarming rate. While the story is complex and multicausal, there is no doubt that the spread of internet connectivity and the democratization of information from the 1990s onward exposed traditional sources of authority as too slow, too old, too opaque, too unaccountable, too vertical.
Universities, governments, media outlets, and religious organizations, once considered the only legitimate sources of status and power, were increasingly forced to loosen their grip once their monopoly over knowledge and information, the only source of their legitimacy, was no longer secure. The end result has been the emergence of overlapping yet fragmented layers of ‘reality’, in which each individual or group lives their own ‘truth’, apart from that of others, each empowered by instantaneous access to an almost infinite supply of information, both trivial and profound, sacred and profane, long-lasting and ephemeral.
Due to the erosion of trust in traditional sources of authority, this fragmentation into various competing versions of reality has, under the best circumstances, resulted in societal alienation, polarization, and the imposition of discriminatory identity-based classifications. I say under the best circumstances because until now we have been working under the assumption that the source of the problem is one’s inability or unwillingness to give all the available information ‘a fair shake’.
Whether it was cognitive biases or tribal affinities, be they religious, political, economic, or identity-based, that stopped us from properly interpreting the materials available to us, in the overwhelming majority of cases there was never any doubt as to their provenance: they were created by humans and for humans.
That may no longer be the case. AI-generated content will likely exceed human-produced content within the next few years. While many have expressed optimism that the current limitations and failings of large language models like ChatGPT or Bard will be overcome, or at least minimized, in the future, some are less sanguine.
Recent reports, for example, suggest that not only is ChatGPT’s performance plateauing, it may actually be getting worse. Evidence that these models suffer from hallucinations, fabrications, and even ‘laziness’ continues to pile up. The end result is the production, dissemination, and consumption of facts, news, analyses, and the like that are defective by default.
And that is before we consider the concerns and pathologies resulting from algorithmic and set-up biases: the former the result of incomplete or biased data and erroneous design; the latter the outcome of biased individuals making decisions that lead to pathological outcomes. AI-curated and AI-disseminated misinformation and disinformation is likely to become one of the greatest challenges ahead, one made even more difficult by the onslaught on historical and authoritative sources of trust.
Interestingly, this may create an opportunity for old and new institutions, alike. As some have already suggested, whereas the democratization of information in the 1990s favored the sidestepping of institutions, the proliferation of AI-induced and enabled misinformation and disinformation may provide a way back for them as it becomes increasingly difficult to distinguish fact from figment, reality from science fiction.
The ‘comeback’ of institutions and trust is not a foregone conclusion, however. New institutional forms of signaling trust may make it difficult for ‘old’ institutions to regain their footing. Despite the historic usefulness of some institutions as ‘trust signifiers’, giving their seal of approval to some information and withholding it from others, current and future generations of users, customers, and believers may not find highly rigid and hierarchical forms of governance appealing. ‘Old’ institutions such as universities, governments, and religious organizations must therefore avoid becoming what users have already rejected. To regain their legitimacy, they must first recognize that horizontality, transparency, and openness are the non-negotiable principles upon which every attempt to rebuild trust must be built.
Here, AI points to a promising path. In his 2004 book The Wisdom of Crowds, James Surowiecki argues that diverse groups of people can collectively make better decisions than individual experts. Also known as collective, swarm, or hive intelligence, the wisdom-of-the-crowd argument relies on the observed problem-solving ability or decision-making capacity of a group in which each individual contributes their own knowledge, perspectives, and experiences to arrive at a solution or decision. Often observed in social insects like bees and ants, this collective behavior leads to complex, coordinated actions.
AI can enhance hive intelligence in several ways. First, AI algorithms can analyze vast amounts of data contributed by individuals within a group to identify patterns, trends, and correlations that might not be immediately apparent to humans. Second, AI can facilitate communication and collaboration among group members by providing platforms for sharing information, coordinating efforts, and synthesizing diverse viewpoints.
Finally, AI-powered prediction models and decision-making tools can assist a group in evaluating different options and selecting the best course of action based on collective input. Overall, AI technologies have the potential to augment and amplify the collective intelligence of groups, enabling more effective problem-solving and decision-making.
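The statistical intuition behind the wisdom-of-the-crowd argument can be sketched in a few lines of code. The simulation below is purely illustrative, with hypothetical numbers of my own choosing: many independent, unbiased guesses of an unknown quantity are averaged, and the crowd’s average lands much closer to the truth than a typical individual guess does.

```python
import random

def crowd_estimate(true_value, n_people, noise, seed=0):
    """Simulate n_people independently guessing an unknown quantity.

    Each guess is the true value plus unbiased Gaussian noise.
    Returns (error of the crowd's average, average individual error).
    """
    rng = random.Random(seed)  # seeded for reproducibility
    guesses = [true_value + rng.gauss(0, noise) for _ in range(n_people)]

    # Typical error of a single guesser
    avg_individual_error = sum(abs(g - true_value) for g in guesses) / n_people
    # Error of the aggregated (averaged) crowd estimate
    crowd_error = abs(sum(guesses) / n_people - true_value)
    return crowd_error, avg_individual_error

# Illustrative parameters: 1,000 people guess a quantity whose true value is 100
crowd_err, individual_err = crowd_estimate(true_value=100.0, n_people=1000, noise=20.0)
print(f"crowd error: {crowd_err:.2f}, average individual error: {individual_err:.2f}")
```

The averaging step is the simplest possible aggregation rule; it works only so long as the errors are independent and unbiased, which is precisely why the atomization and echo chambers discussed above, where errors become correlated, undermine the effect.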
Swarm-based predictive models have already contributed to a variety of fields, including medicine, UN decision-making, business resource allocation, marketing, and even voting. No wonder humans created secular and religious cooperative networks and institutions in the first place. It turns out that while misinformation, disinformation, and therefore distrust thrive in an atomized, polarized, and isolated society, the opposite is true in a society that ‘comes together.’
Scriptural accounts as well as modern prophetic exhortations tell a consistent story, one that emphasizes both the indispensability of the Church as a physical manifestation of the Restoration and one’s own responsibility to ‘get it right’ through one’s own direct communication channels. That is why, at a time when misinformation and disinformation are likely to proliferate at unprecedented rates, potentially worsening existing polarization, factionalism, and conflict, it becomes ever more vital to attune oneself to the proper frequency.
But that may not be enough. The work of building Zion was never intended to be an individual effort, or even to be about the individual. Nor is it a hierarchical one. Instead, it was always a community-based work aimed at community building. After all, Joseph Smith’s vision of a heavenly sociality was one teeming and buzzing with activity. It was a hive!
A version of this article was published on the Global Governance Institute website, available here.
Medlir Mema Ph.D. is Head of Program on AI and Global Governance at the Global Governance Institute. Follow him on X and LinkedIn. If you are interested in learning more about the impact of AI on politics, law, and society, check out his “IR in the Age of AI.”