The 2026 Headlines That Will Make Me Click
This is my version of a 2026 prediction post. It isn't a list of the things I believe are most likely to happen, but rather the things I am looking out for most intently.
1) “The rise of ungovernable AI”
It won’t matter how tightly big tech companies are regulated when people can run state-of-the-art, customized open-source models on computers in their own homes. This is a point that most regulators are still missing. They are regulating AI the way they regulate the automobile industry, when it really needs to be treated much more like the alcohol industry (prohibition didn’t have a chance of working). There should be a significant emphasis on programs and approaches that help society adapt safely and responsibly to the post-AI reality. Government will not be able to eliminate all of the risks with rules alone.
2) “Iterative improvement loops are more important than individual model strength”
It’s good that the underlying models are going to continue to improve, but I think the most dramatic developments in 2026 will come from agentic systems that take existing models and put them to work towards a goal inside continuous improvement loops. For many use-cases, I believe we have already passed the point where model strength is the primary bottleneck. Humans didn’t go from stone tools to flying to the moon by evolving smarter brains. Once a threshold of intelligence was crossed, iterative loops fueled the development of improved technology.
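To make the idea concrete, here is a minimal sketch of what I mean by an improvement loop. The `generate_draft`, `critique`, and `revise` functions are hypothetical stand-ins I made up for this example; in a real agentic system each would be a model call, a test run, or some other evaluation, not the toy stubs shown here.

```python
def generate_draft(goal: str) -> str:
    # Hypothetical stand-in for a first model call.
    return f"First attempt at: {goal}"

def critique(draft: str) -> tuple[float, str]:
    # Hypothetical stand-in for a scoring step (a model call, unit tests, etc.).
    score = min(1.0, 0.4 + 0.2 * draft.count("revised"))
    feedback = "Add more detail." if score < 0.9 else "Looks good."
    return score, feedback

def revise(draft: str, feedback: str) -> str:
    # Hypothetical stand-in for a revision model call.
    return draft + f" [revised per: {feedback}]"

def improvement_loop(goal: str, threshold: float = 0.9, max_iters: int = 5) -> str:
    # The loop is the point: keep critiquing and revising until the work
    # is good enough or the iteration budget runs out.
    draft = generate_draft(goal)
    for _ in range(max_iters):
        score, feedback = critique(draft)
        if score >= threshold:
            break
        draft = revise(draft, feedback)
    return draft

print(improvement_loop("summarise the quarterly report"))
```

The output of any single call is mediocre; it is the loop around the calls that pushes the result towards the goal, which is why I think these systems will matter more in 2026 than raw model strength.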
3) “Humanoid robot figures out how to mash potatoes”
LLMs are powerful for specific use-cases, but everything becomes more interesting when multiple modalities are combined. AI chatbots sometimes lack common sense because they have never interacted with anything physical in the real world. Humanoid robots are now completing impressive tasks using VLA (vision-language-action) models, and I think that integration of modalities is going to take off in a big way in 2026 as an important stepping stone towards AGI.
4) “A new platform guarantees that all content is human generated”
In late 2025, there was an essay contest that I was thinking of entering. I decided not to bother because the organizers had no reliable method for filtering out AI submissions. Across school, work, and relationships, we are facing a massive authenticity crisis. It will become increasingly challenging to verify whether content is AI-generated. The likely solution will be to do the opposite and make more serious efforts to verify human authenticity. Will the next blogging platform use your webcam to verify that your fingers did the typing?
5) “Fallout as AI reviews previously written material”
There are decades of digital and digitized material that can now be reviewed by AI to look for signs of plagiarism, mistakes, etc. We should expect to see witch hunts on a potentially massive scale. There are a lot of motivations (political, legal, competitive) for parties to put previous work under a magnifying glass. Some of this will be productive (finding errors in previously published academic research may help push scientific understanding forward), while other uses may be primarily destructive or costly to society (patent trolling at scale, legal discovery getting completely out of hand, etc.).
6) “Risk-averse industries become the most rapid adopters of LLM-based AI”
Risk-averse industries (healthcare, banking, engineering) have naturally been cautious in their adoption of the latest generation of AI systems powered by LLMs. I expect that at some point in the near future, that will switch, and adoption of AI as a second set of eyes will become extremely rapid. As a member of the public, I recently fed a lengthy engineering report from a municipality into an LLM-based system and it correctly identified an error. Risk-averse industries will take the stance that asking LLM-based systems to review their work is now part of the basic standard of care. How else could they defend an error that any taxpayer, customer, or patient could find with a free and widely available tool?
Steve Jones


