Wayfare Artificial Intelligence Use Policy

A GUIDE FOR PROSPECTIVE WAYFARE AUTHORS

Good writing shapes the heart of the author at least as much as it does that of the reader. In its aim to practice the fuller arts of human creativity, Wayfare acknowledges artificial intelligence (AI), and large language models (LLMs such as ChatGPT, Claude, and Gemini) in particular, as an emerging practice interacting with human creativity. We hold as an organizing principle that all creative Wayfare work should serve our core mission: artful humanity and the promotion of human creativity. Therefore, while also offering below a fuller accounting of the environmental, social, and economic costs of LLM overuse, we set out the following minimum condition as the AI Use Policy for prospective Wayfare authors:

Prospective Wayfare authors will be asked to acknowledge LLM use in the submission process. Please know that Wayfare does not publish AI-composed content; we publish words written by people.

In other words, Wayfare welcomes work from authors who, when asked, will willingly acknowledge having used a paid or unpaid LLM account for any part of the production process except composition itself: for example, an author might acknowledge using LLMs to consult on research questions, brainstorm, outline a draft, reverse-outline previous author-composed drafts, and then proofread, edit, and format the style of the author-composed draft. But Wayfare does not publish essays whose words are composed by AI. Words submitted to Wayfare should be written by the named author, not by an LLM.

Here is a more detailed list of process steps in which LLM use may be accepted:

Identifying target audiences; determining purpose; conducting initial research; brainstorming preliminary ideas; developing an outline or structure; elaborating on evidence and examples; reversing rough drafts into outlines; assessing content and argument strength; revising for main-idea consistency; checking the clarity and concision of the writing; verifying facts and references; refining sentence structure and word choice; reviewing for consistency of prose, tone, and voice; performing word counts and spelling, grammar, syntax, and punctuation checks, among other proof-editing actions; ensuring proper formatting; revising for readability; reading aloud; or preparing supplementary materials (figures, tables, appendices, etc.).

Here is a more detailed list of process steps in which LLM use is not accepted:

Composing, drafting, writing, or summarizing the words of the piece the author claims as their own.

The following concerns about LLM overuse fall outside the bounds of Wayfare’s specific AI Use Policy but are offered here for the prospective author’s consideration:

Appropriate large language model (LLM) use can support human creativity. At the same time, LLM overuse raises still other ethical, social, and environmental concerns that fall outside the Wayfare AI Use Policy. A few bear repeating here. With regard to ethical concerns: the use, without compensation or acknowledgement, of whole generations of previous authors’ published words; the discriminatory biases and dirty data informing AI models, as well as the “hallucinations” (we prefer “concoctions,” a term that does not imply human perception), “model collapse,” and synthetic data that loop back into those models; the repetition of views that reinforce “filter bubbles”; the lack of transparency, accountability, liability, user privacy, and explainability in those same models; and the development of deepfakes, AI “slop” (online content spam), disinformation, and misinformation. In terms of social concerns, LLM overuse may shift culture, industry, and market pressures in ways that disproportionately affect already disadvantaged or displaced workers and their communities, and may lower the reward for both the author’s and the reader’s critical thinking, agency, and off-the-grid creative adaptability.

In terms of economic concerns, LLM overuse, both paid and unpaid, may exacerbate socioeconomic divisions between user classes, displace jobs and career arcs, and further compound sociocultural division and injustice; both fed by and feeding LLM overuse, uneven global AI innovation, market volatility, business models, and state regulation may widen the gaps between those with and those without; and the scaling costs of building and maintaining AI systems, including hardware, energy, and expert personnel, may exclude smaller businesses, developing economies, and disadvantaged language communities.

In terms of environmental concerns, AI systems leave significant carbon footprints and consume large amounts of energy and electricity for both computation and cooling (in 2025, a query entered into an LLM consumes at least ten times as much water as the same query entered into a search engine). Such use stresses water-scarce communities, especially desert and island peoples. The overuse of LLMs also accelerates electronic waste, the depletion and extraction of rare earth metals, and other mining processes and labor practices destructive to both the environment and human beings. This list is not meant to condemn LLM use as a whole but rather to invite appropriate consideration by prospective authors weighing their own ethical creative practices in the current age of artificial intelligence.