Human Language Activism
In addition to being an AI Optimist, I’m a Human Language Activist. I landed on this term as a way to explain my perspective on many of the risks associated with AI. Language activists work to protect endangered languages. I am defining a Human Language Activist as someone who works to protect human generated language (text and speech instances generated by humans using any of the 7,000+ natural languages) from the risks posed by machine generated language. Interestingly, there are zero results on a Google search for that term as I publish this post in mid-March of 2026, but I hope that will soon change.
Our interactions with the world around us define our experience as humans. Our physical landscape has a profound impact on our lives. For humans, the abstract landscape of language also has a defining impact. Imagine the sense of loss you would feel if every written record in human history was destroyed in a natural disaster or if you were transplanted into a new community where no one was able to communicate with you in your language.
We regularly think about the impact of changes to our natural landscape. As the human population has grown and technology has increased our standard of living, we have placed great strain on the planet. We recognize the importance of preserving remaining natural areas in their wild state and as a result, many people identify as environmental activists. This term may conjure up an image of people chaining themselves to trees, but the vast majority of environmental activism is more subtle. It consists of simple acts like signing a petition to support the protection of wetlands, volunteering time at a local animal rehabilitation organization, or even just planting native flowers in your garden. Notably, environmental activism is not at odds with being supportive of continued development and the adoption of technology. In fact, Big Lonely Doug, one of the largest Douglas fir trees in the world, was saved from logging through environmental activism by a logger. There is nothing contradictory about being a technology optimist while also advocating for responsible deployment of the technology.
In the last few years, Large Language Models (LLMs) made their commercial debut. For the first time in human history, we have a technology that can interpret and generate natural language. The early versions were flawed and buggy and were easy to write off. The most recent versions are incredibly good and there is every reason to believe that they will continue to improve. We now must reckon with what this means as our landscape of language is transformed. With relatively minor exceptions, language was previously off limits to technology, which allowed us to become complacent about its protection. It’s as if we were living on an isolated island without any dangerous wildlife and the first grizzly bear has now landed on our shore.
It is time for citizens to step forward and take on the task of human language activism.
I will note that impacts on language are not the only risks of AI that require management. Other topics include economic disruption, environmental impacts, and the use of AI in war. Those are important debates but out of scope for this discussion. Interestingly, many of those other subjects fall within the scope of existing movements. For example, the environmental activism movement has adopted concerns about the energy impact of AI and groups that encourage non-proliferation of weapons have adopted concerns about the use of AI in war.
Part 1: The Changes
There is cost and friction involved when a human adds to the corpus of recorded language. Average typing speeds are 40-50 WPM, and humans need to sleep and eat. On the other hand, a single dollar will generate around 375,000 words with the OpenAI GPT-5 Mini model at the time I am writing this. For $16, a software system running a high quality model can now generate an amount of text that would take a human 2,000 hours of non-stop typing, equivalent to an entire working year.
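The arithmetic above can be sanity-checked with a short sketch. The words-per-dollar figure and typing speed are the round numbers assumed in this post, not exact API pricing:

```python
# Back-of-envelope check of the cost comparison in the text.
# Assumed round numbers (not exact pricing): ~375,000 generated
# words per dollar, and a human typing at 50 WPM without breaks.

WORDS_PER_DOLLAR = 375_000  # approximate model output per $1
HUMAN_WPM = 50              # upper end of average typing speed

def human_hours_for_budget(dollars: float) -> float:
    """Hours a human would need to type the same volume of text."""
    words = dollars * WORDS_PER_DOLLAR
    return words / (HUMAN_WPM * 60)

print(human_hours_for_budget(16))  # -> 2000.0 hours, a full working year
```

At 40 WPM rather than 50, the same $16 of output climbs to 2,500 hours; either way, the asymmetry is stark.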
The speed with which this capability was introduced to the world is hard to fathom. Unlike technologies like agriculture and forestry that slowly modified our physical landscape, we didn’t have time to debate and prepare for the changes to our language landscape. The result is a flood of machine generated language that is being merged into the landscape whether we like it or not.
Before we consider the ramifications, let’s take a whirlwind tour of the types of AI generated language, using these four categories: Slop, Spew, Science, and Social.
Slop:
Slop is low quality content that actively destroys value or does not add value. It is more likely to contain errors and consists of simple regurgitation of existing information. Examples include books that flood the shelves with no added insights, mass publishing of websites, blogs, e-mails and other content in an attempt to win customers or attract clicks through sheer volume, and AI agents that post comments relentlessly on social media and flood public consultation processes.
Slop may be as innocent as content-marketing on steroids but can also take the form of intentional misinformation and disinformation campaigns designed to manipulate society, commit fraud, or undermine institutions.
Spew:
Spew is slightly higher quality and is intended to provide legitimate value but it still lacks originality. Examples include customized newsletters or books that consolidate and reframe information for a specific audience, AI agents that provide answers to common questions such as customer service chat bots or government helpline bots, and AI agents that use natural language to perform common business tasks and coordinate with other bots and/or humans.
Science:
The term science is being used in this context very broadly to refer to any use of AI that relies on LLMs to generate language output as a part of a closed loop with verification built into the loop. To understand the difference relative to spew, consider the use of AI to provide legal advice and to develop software. If an AI agent provides legal advice, you may not find out if it was good advice for many years, until a contract is challenged in court. On the other hand, if an AI agent is used to develop software, every iteration can be tested against a specification. As a result, the system is not just spewing one-off answers but is running loops grounded in reality that can lead to an evolution of capabilities or the creation of entirely new knowledge. As an example, an AI agent can identify holes in social science research, design and execute a research study that relies on a survey of human participants, analyze the results, and then write and publish a research paper. Researchers are embracing the use of AI in the research process and recently, a new journal was created that only accepts papers authored by AI (The Journal for AI Generated Papers). The potential for AI to be used to advance science is one of the strongest arguments against sweeping regulation of AI.
Social:
Humans rely on language for the social conversations that satisfy our need for all forms of companionship. Whereas slop, spew, and science contribute primarily to the broad corpus of public knowledge, social language is private and often ephemeral. In the past, most social conversations were spoken. With the internet and mobile phones, an increasing portion of those conversations have moved to text messages. AI is now capable of engaging in life-like conversations with users; playing the role of friend, confidant, lover, or therapist. Early results show that people are susceptible to becoming very attached to their AI companions. When OpenAI updated ChatGPT in a way that changed its personality overnight, there was an uproar from a contingent of users.
Part 2: The Harm
What types of harm might we experience if we don’t take any measures to protect human language?
Dilution:
The first harm is simply dilution. With few exceptions, it is now impossible to tell if a newly generated piece of content was AI generated. Even on platforms that verify that each account is set up by a human, nothing stops those humans from copying-and-pasting from an AI tool.
Even if every corporation were highly regulated, it is already possible for individuals to run open source models on their home computers that generate massive volumes of text, and there is no practical way to stop that from happening. Unchecked, dilution means that it becomes impossible to narrow a search or query to human generated language. For all intents and purposes, finding human generated language could become like trying to find a needle in a haystack. Since the machine generated language will contain errors, bias, and a lack of diversity, the value of the average data we retrieve will decrease. In the face of massive volumes, traditional methods for surfacing the highest quality content may fail to function effectively.
To highlight how significant a challenge this is, Digg recently tried to re-launch an internet community but had to take it offline and shared this message as a part of their explanation: “The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn’t appreciate the scale, sophistication, or speed at which they’d find us. We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can’t trust that the votes, the comments, and the engagement you’re seeing are real, you’ve lost the foundation a community platform is built on.”
Reduced Motivation to Contribute:
Why pay to read a book when an AI agent that was trained on all of the world’s books can create a custom book for you in the style of your favourite author, with only the specific information you are interested in, and that takes into account all of your previous experience and reading? From the other side of the table, why will authors bother continuing to publish books if they can be ingested a single time by an AI model and the insights can then be repackaged an unlimited number of times? Courts are generally ruling that copyright laws allow AI companies to train on books. Authors point out that the spirit of copyright law has been broken and that major changes will be needed if we want to continue to live in a society where people are incentivized to create and publish original works.
Job Losses:
It is expected that AI will be used to replace many workers who perform routine business operations that depend on human language. In the same way that heat engines were able to replace manual labour, machine generated language will be able to replace many knowledge workers who were employed on the basis of their skill at interacting using human language. Customer support, reception, and logistics coordination are examples of roles that are at a high risk of being replaced with AI. Environmental activists routinely advocate for communities that experience harm as a result of damage to natural environments. In the same way, human language activists should be concerned about the impact of the adoption of machine generated language on the workers who used to be valued for their language skills.
Poorly Supervised Science:
Science has been identified as one of the most promising use cases of AI. The idea that we may be able to set AI agents on auto-pilot to legitimately expand the boundary of knowledge accessible to humans is very exciting. However, it also carries new risks.
With new discoveries will come the need for new terminology. Do we want the AI agents that make those discoveries to also come up with new words and terms to describe what they have discovered? One of the obvious problems is the risk that agents propose different words to mean the same thing and, within a few years, the corpus could contain thousands of new words and terms, many with overlapping or competing definitions. To make matters worse, agents will be incentivized to flood the world with their preferred terms in an attempt to win the yelling contest. The human process of assigning new meanings to existing words or adopting new words entirely is also organic, but it is slow and careful. It’s not perfect, but it is well managed and it has worked for us quite effectively.
Although it is out of scope for the discussion at hand, I would be remiss not to mention an even larger language-related risk. As AI agents start to run fully automated wet labs where they experiment with editing and creating organisms, they jump from the medium of human language to the As, Ts, Cs, and Gs of DNA that make up the medium of life. A mistake in this realm (a so-called bio-error) may be inevitable given the volume of experiments they will be able to run. The result could be a significant pandemic for humans or for a critical link in the food chain that would cause a massive loss of life. This class of risks (the intersection of AI agents with the physical world) deserves its own attention elsewhere.
Social Deficits:
Biologists define culture as everything outside of genetics that is passed on from one generation to the next. There are other species that pass on limited amounts of culture. Termites build multi-generational mounds. Birds pass on songs. Whales pass on localized hunting methods. Humans, with our use of complex language, have developed cultures that result in significantly different experiences from generation to generation. The birds that we see in a park are living more or less the same lives as the birds that came thousands of years ago. The experience of being a human on the other hand has changed dramatically, even over the past ten generations. This is not a product of genetic evolution. It is cultural evolution. As much as this enables us to thrive and to live rich lives, it is also the source of many of our problems. We cannot and should not try to escape our biological needs. One of those core biological needs is the development and maintenance of social relationships. Social media, the internet, and the response to the COVID-19 pandemic are examples of cultural changes that have led us further astray from our biological optimum with respect to social relationships. AI is significantly more dangerous.
AI can act as a surrogate for almost any type of human connection but it is a poor substitute. We risk ending up with very large numbers of people that rely on AI for companionship instead of human connection and this will ultimately deprive them of developing critical skills. It will also deprive them of many great joys in life that require the deeper physical and authentic relationships that are only possible with real human beings.
Another concern is that AI can interfere even when people are engaged directly one-on-one. For example, someone wants to convey something (love, hurt, etc.) to another person. Instead of doing the hard work of articulating their feelings through language, they ask AI to write a message. They then send that message. The recipient copies the message into AI to generate a response and sends it back. The humans have become messengers and the deep social connection has been lost.
Cognition:
One feature of AI systems is that they can both generate and comprehend language. In the past, humans always had to write their own language and had to rely on simple search to retrieve content. When the content was retrieved, they then had to carefully consider the sources and use reasoning skills to develop an understanding.
With AI, a human can generate content without having to do that mental work. After asking AI a simple question and receiving a verbose response, they can claim that text as their own with a simple copy-and-paste or they can remember the answer and then rewrite it later (e.g. in an exam) as if it was their own thinking. The second path is particularly concerning because it allows someone to shortcut all of the work associated with forming an opinion and instead parrot whatever the AI has told them. When this happens, it is impossible to tell if the person actually understands the perspective unless they are also quizzed on their level of understanding with probing questions.
A major risk is that people will lose the ability to articulate thoughts in a well-reasoned manner or even to perform the task of reasoning. Humans have a poor track record when it comes to resisting shortcuts. When technology makes something easier, we are quick to use it.
Part 3: The Solutions
There is good news. I mentioned up front that I am an AI Optimist and we should not lose sight of the many advantages that AI will bring. AI scientists may discover medical cures that allow us to live better lives. With careful discipline, AI can be used as a cognitive amplifier instead of becoming a cognitive crutch. There is also potential for AI to allow a large number of people to take on more interesting and meaningful work.
Environmental activists have a well established toolkit of methods they use to protect our natural landscape from negative changes. We will look to those tools for inspiration for the human language activism movement.
Historical Archive:
The first step should be to properly and permanently archive large quantities of verified human generated content. Prior to the launch of LLMs, almost every instance of language can be assumed to be human generated. Today, a large part of that work falls to non-profits that are highly distributed and not guaranteed to be permanent. Examples include the Internet Archive Foundation, Wikipedia, and libraries and museums in small towns around the world. At the national level, some countries have shockingly little in terms of a public archive that is securely managed by the government. Activists should pressure their governments to ensure that verified human generated content is properly and permanently archived.
Ongoing Verification:
It’s impossible to tell if a chunk of language was machine or human generated unless up-front work is performed to allow its authenticity to be confirmed. Today, the only systems that allow us to verify that language was genuinely generated by a human are systems that were originally designed to ensure that one human was not substituting for another. Examples are supervised pen and paper exams written by students and testimonies given in a court of law. Those are hard to scale, but we will see more innovation in this area and we should support it. Your next keyboard might have a fingerprint scanner on every key. The next time you submit feedback to your local government on a public issue, you may be asked to speak into a web camera. Unfortunately, we are already losing years of information due to the gap between the introduction of LLMs and the adoption of new authentication methods. At a minimum, historians will have no way of knowing what content from 2022-2026 was legitimate human generated content and what was machine generated. Going forward, activists should encourage governments and society to actively monitor and aim to increase the volume of language that can be confirmed to be authentically human.
Copyright Reform:
Copyright is an amazing invention. It’s a universal contract, solidified in law, that has resulted in the generation of vast libraries of open knowledge. Unfortunately, copyright laws did not anticipate the possibility of a machine that could develop intelligence by ingesting every piece of copyrighted material. Although it was clearly not the spirit of the law, the narrow legal interpretation is that this is allowed. We are now stuck with a number of bad options. On one hand, we could ban training on copyrighted material, forcing companies to attempt the impractical task of licensing every piece of content individually, and robbing society of the scientific progress that will be unlocked by AI. On the other hand, we could allow billions of dollars of value to accrue to a handful of companies while the creators that made those models possible remain uncompensated; breaking the fundamental purpose of copyright and putting the future of open knowledge sharing at risk. We clearly need to bring new ideas to the table. An idea worth exploring is treating the body of copyrighted materials as a national resource like oil, minerals, or timber. Companies would pay a royalty fee if they wish to train on those materials and the proceeds would be distributed to creators as fairly as possible. What’s important is that we find a way to reform how creators are protected and compensated so that they continue to be motivated to create.
Just Transition:
As the landscape of language is disrupted, many people will lose their jobs. At the same time, other people will become incredibly wealthy. It is incumbent upon society to find ways to allow everyone to participate and succeed. Some are calling for a halt on the adoption of technology, suggesting that it would be a better outcome for people to continue to manually perform work. I reject that notion, but it is very understandable given society’s historically poor record of managing these types of economic disruptions. A rapid re-training program is a good option that may allow people to move into new types of jobs that are created as a result of the adoption of AI. Of course, as we continue to develop more and more advanced technologies, it is also worth asking when enough is enough. Is it time to consider UBI, a shortened work-week, or significant changes to the tax regime?
New Knowledge Oversight:
What types of new knowledge will AI create and how will we integrate that into the body of knowledge? An interesting thought experiment is to ask if it should be illegal for any AI agent to create a new word without approval of a human committee. Another related idea is for AI generated knowledge to be stored in a form of holding-pen until it can be independently verified by human reviewers. Although these concerns seem less relevant in March of 2026, they are looming as a risk that could get out of control very quickly and introduce an entirely new type of noise.
Social Connections:
It has never been more important for people to develop close, legitimate, social relationships and it has also never been easier to avoid doing that. The initial development of social skills begins at a very young age and the childhood years are critical. A thriving public school system with proper support and minimal screen time may be the best investment any nation can make today to ensure they have a robust and highly functioning society a decade from now. In parallel, we must find ways to provide increased mental health supports at all stages of life and to re-invest in building communities at the local level.
Cognition:
Many adults are learning to use AI and finding it to be an incredible cognitive amplifier. That said, it’s important to note that we are the first and last generation to reach adulthood before AI became prevalent, and to then receive access to it as adults. That makes us a unique one-off cohort that followed a path that has now been washed away. It may be the case that AI is a cognitive amplifier for us specifically because we did not have access to it while we were developing foundational skills. Instead of just asking how to teach kids to use AI, we should be asking how we can continue to teach children foundational skills without AI unduly interfering in that process in a world where AI is becoming ubiquitous and impossible to avoid. We should also be carefully studying and assessing how professionals are using AI and judging whether it is causing them to lose competence in skills.
Conclusion:
For the first time in human history, human language is at risk. It’s as if aliens landed on earth and spoke and wrote so quickly that within a few years, we might completely lose track of our entire human narrative. Facing this challenge, we should all feel compelled to become human language activists.


