How Generative ‘AI’ Pollutes Search Results

In February 2026 I forgot to use noai.duckduckgo.com and saw a result from their AI assistant at the top of my search results. Like a lot of things produced by ‘generative AI’, it looks fun at first glance but sad as soon as you pay attention. Today I will post about what is wrong with this answer and with the whole premise.
Search Assist is Confidently Wrong
The search assist box cites two sources. The first is a magazine article about how an investor told a TV audience that YouTube was very profitable in 2008 and 2009. The very second sentence of the article says that Google itself denied this. Personally, I trust the CEO of a publicly traded company, who can be thrown in prison if he lies, over an investor, although it is certainly possible that there is some cooking of the books. In addition, a company can be profitable in one year but not in another. Based on this article it would be reasonable to say “an investor claimed that YouTube was profitable in 2009,” but that is two major differences from the actual result.

The second source is a magazine article about Google’s growing revenues. Google does not seem to have discussed costs, and revenue and expenses can increase at the same time. Whereas services like Netflix only host things that large numbers of people want to pay for, YouTube hosts anything people want to share, and that is expensive. So the search-assist box has one false sentence and one which is true but irrelevant. It pollutes results by making a simple, authoritative statement which is false.
Search results now contain many computer-generated articles like https://www.clrn.org/does-youtube-operate-at-a-loss/. This article cites no evidence but denies that YouTube operates at a loss anyway. It has no value over a reddit thread where people tell themselves that YouTube must be profitable because it’s big and has ads. Hosting unlimited videos and streaming them everywhere for free is expensive, and Internet ads are probably a net negative (most of the money goes to mobile data providers, and they waste readers’ time much more than traditional print ads). If websites had to pay for the data their ads consume, like YouTube has to pay for serving all those videos, they would look much simpler.
The computer-generated results box and the computer-generated article have negative value. They add to the bog of confident speculation that floods social media, and increase the burden of wading through it. Every marketer and propagandist knows that repetition is powerful. So if you don’t want to be fooled, it’s a really good idea to block unreliable sources of information before you are exposed to them. Brandolini’s Law teaches us that it’s harder to prove something wrong than to claim it, and the fable of John Henry teaches us that it’s foolish to try to work faster than a machine. A machine that spews plausible lies is a lot like the steam drill in that fable.
Using an actual search, I have found statements by Google executives in 2010 and 2016 that YouTube was not yet profitable, and someone familiar with the figures told a reporter that its income was roughly equal to its expenses in 2014.1 Their competitor Vidme had trouble figuring out how Google could make the numbers work without feeding extra cash into YouTube. It also seems to me that if YouTube were profitable while so many streaming services and social media sites go bankrupt or go private, Google would boast about that. So it seems to me that YouTube was probably not profitable up to 2016, and that it probably still struggles.
The computer-generated answer reminds me of misleading headlines and captions on human journalism. People who flip past a headline like “Google sets all-time records as search and YouTube profits soar” might be surprised to read an article which just talks about YouTube revenues. If you sell something for less than it costs, your sales revenue may be high, but your costs will be even higher. Many people will never read the whole article and will just take away the headline (journalists used to be trained to write stories in an inverted pyramid, with the most important information at the start of the article where people will read it, but seem to have given up that practice). Reducing a complex story to a dozen words takes expertise and attention to detail, and is easy to do badly.
This Is Bad Automation
This is the sort of thing that could be automated. Because press releases and journalism are posted online in English, it would be possible for a computer to sift through them and determine whether any senior Google staffer has claimed that YouTube made a profit. It’s possible that one day a computer could sort the same search results and say “Google is secretive about YouTube’s revenue. On three occasions from 2010 to 2016, Google executives said that it was losing money or just breaking even. An investor claimed that it was profitable in 2009 but was immediately contradicted.” But DuckDuckGo’s AiAssist cannot even do that right. Instead, it imitates something darker.
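As a rough illustration of what such a tool might do, here is a minimal sketch: it scans sentences for profitability claims and sorts them into categories, so a reader can see who claimed what rather than receiving one confident verdict. The sample sentences and keyword lists are invented for the example; a real tool would crawl and sentence-split actual articles.

```python
import re

# Hypothetical sentences standing in for text pulled from press coverage.
SENTENCES = [
    ("2009", "An investor told a TV audience that YouTube was very profitable."),
    ("2010", "A Google executive said YouTube was not yet profitable."),
    ("2014", "A person familiar with the figures said YouTube roughly broke even."),
]

PROFIT = re.compile(r"\b(profitable|made a profit)\b", re.IGNORECASE)
NEGATION = re.compile(r"\b(not|never|no longer)\b", re.IGNORECASE)
BREAK_EVEN = re.compile(r"\b(broke even|breaking even)\b", re.IGNORECASE)

def classify(sentence: str) -> str:
    """Crudely sort a sentence into a claim category."""
    if BREAK_EVEN.search(sentence):
        return "break-even"
    if PROFIT.search(sentence):
        # A negated profit claim ("not yet profitable") is a loss claim.
        return "loss" if NEGATION.search(sentence) else "profit"
    return "unclear"

for year, sentence in SENTENCES:
    print(year, classify(sentence))
# → 2009 profit / 2010 loss / 2014 break-even
```

Even this toy version surfaces the disagreement among sources, which is exactly what the search-assist box fails to do.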
AiAssist imitates Pravda in the USSR and American journalism of a few years ago. It tries to figure out the party line and push that as hard as it can, rather than asking what is true and finding out empirically. To do this you don’t have to know anything about the world, just what authorities say about the world. It was pitiful to watch journalists with no scientific education try to learn the official truth on a topic like airborne infection control, where the evidence is rapidly shifting and previous best practice turns out to have been made up. And it’s shameful to watch reporters look for the official truth when the only way to know if YouTube is profitable is to audit it or have someone share the books. You cannot know whether YouTube is profitable by searching the Internet; you have to investigate, and investigating and persuading sources to talk is their job! The closest I have found is a statement by “a person familiar with the figures” to the Wall Street Journal that YouTube roughly broke even in 2014 (archive).
It’s hard and expensive to go out into the world and learn things. It’s also difficult and pricey to become an expert who can evaluate large, contradictory, and diverse bodies of evidence with nuance. It takes time and effort to build trusted relationships with people who have access to secret information and might share it. And these companies can’t be bothered to code tools that answer “has any reliable source claimed this?” So instead they try to create a list of authorities and issue decrees based on them.
A few years ago, Google tried to rely more on authorities by upranking all results on certain domains. The result was a flood of sponsored content and mattress ads on formerly respected news and education sites, while specialist sites languished on the second and third page of results. This is a childish approach to epistemology, because most of us learn that Uncle Roy has great advice on fishing or auto repair but should not be listened to on the Hollow Earth or the (((globalists))). People and sources are trustworthy on some topics but not others. It also corrupts news organizations: if they have been producing useful, accurate information on some topics, it offers them wealth for mixing it with cheap propaganda. But making a list of good sites is cheap and easy to automate, and that is all that the companies that dominate social media and web search care about.
These ‘high-tech’ companies have a way of deciding what is true which was respectable in the twelfth century. That bad thinking would just be their problem, but they spew its consequences across the Internet and social media and leave it for the rest of us to clean up. If nobody knows something, the best search result is to say that, not to pretend that there is an unquestioned truth. And sifting through what others have said cannot substitute for going out and finding out what is true. Sometimes you can make people pretend that the emperor is wearing clothes. That won’t stop him from getting pneumonia if he spends all day in the rain wearing nothing but his crown and immense self-confidence.
I don’t publish ads or third-party content, but I do have to pay for the bandwidth that the chatbot companies use up. I am always glad when people share, comment, talk about my writing with friends, or donate.
(scheduled 12 March 2026)
Edit 2026-03-15: explain in what sense web ads are economically negative
- All links are under The Political Economy of American Tech Companies Since 2008 (the model has changed a bit since the end of the Zero Real Interest Rates policy, but the ‘generative AI’ companies are still burning tens of billions of dollars of other people’s money a year by promising that some day soon they will build God or let the bosses fire everyone). ↩︎


