We Are All Becoming Bots
March 23, 2026
I Am Come From Botland
Last week I tried to access dairyqueen.com from my work laptop and Cloudflare blocked me. My family was grabbing something for themselves and I wanted to have them order me a specific treat I couldn't remember the name of. My employer's traffic, however, egresses through Zscaler--a security proxy that routes everything through shared IP ranges that Cloudflare's filters recognize immediately as high-risk. I encountered this new reality: I am come from Botland. Access denied.
Tonight I was trying to verify a line from a 1989 Star Trek episode, in which Wesley Crusher of Star Trek: The Next Generation is warned to be careful with a handheld magnet because it could "rip the iron right out of your blood cells." I was reasonably confident the line was real, but in discussion with both a friend and with Claude, doubts crept in. I asked Claude to check, and the automated fetch to the relevant wiki page was blocked--Cloudflare again, JavaScript challenge, wall up. So I searched manually, like the digital caveman I am. The results were what they always are: ad-dense, SEO-optimized, structured to maximize time-on-page rather than answer the question. I lovingly referred to the site as 'cancer' to Claude. Eventually I found the reference and a YouTube clip: the line is indeed canonical.
Two incidents with the same structure. Public information--technically accessible--made inaccessible by infrastructure built to protect us. Protect us from what? Nobody locked the door to the information; they locked the door to certain types of access--non-human ones. In both cases, though, it was a human asking, blocked by non-human controls whose blanket rules hindered the flow of information.
This isn't really a new problem. It is just the latest iteration of one as old as humanity.
The printing press arrived in Europe in the 1440s and immediately produced an explosion of pamphlets, religious broadsides, astrological nonsense, and quack medical advice. The quality of the average printed document collapsed compared to the monk-copied manuscripts it replaced. Erasmus--himself one of the most prolific writers in Europe, a man who understood exactly what the press had done for his own reach--complained in his Adages that printers were filling the world with books: "not just trifling things (such as I write, perhaps), but stupid, ignorant, slanderous, scandalous, raving, irreligious and seditious books." The flood was real, but the catastrophe wasn't. In the end we only got the Reformation, two centuries of religious war, and eventually the innovation explosion we call The Enlightenment.
Radio brought patent medicine advertisements into every living room and briefly convinced a meaningful portion of the American public that Martians had invaded New Jersey. Television prompted Newton Minow, FCC chairman, to declare American broadcasting a "vast wasteland" in 1961--ground we've covered before, but it bears repeating in this company. Desktop publishing gave the world clip art newsletters and vanity press novels nobody asked for. The early web gave us GeoCities pages with auto-playing MIDI files, blogging was going to destroy journalism, social media was going to destroy blogging.
The pattern repeats: a barrier drops, volume explodes, average quality craters, moral panic ensues, curation mechanisms emerge, and then the world renormalizes.
AI is following the same pattern. Andrej Karpathy coined "slopocalypse," and he's not wrong that the volume is going to be extraordinary. The flood is real, but it's also the same flood that arrives every time the barrier drops. Bigger and faster this time, thanks to the Information Age, but the same recognizable pattern.
Which brings us back to walls.
When the volume of garbage gets high enough, the rational response--if your revenue depends on ad impressions--is to protect the surface area that generates those impressions. Block the bots. Keep the humans with their eyeballs flowing through the approved aperture: the HTML page slathered in advertising.
This is what Fandom does. Fandom is a for-profit platform that hosts fan wikis, including the Star Trek wiki I was trying to reach. The content was created by fans, for free, because they loved the thing. Fandom hosts it, wraps it in ads, and uses Cloudflare to ensure you access it through the right door. The right door is the one that shows you the ads.
Wikipedia makes a different choice. Wikipedia is donation-funded: no ads, no paywall, no profit motive. Wikipedia doesn't care who's asking, because Wikipedia doesn't need you to see an ad.
Ask Wikipedia with an explicit bot user agent. HTTP 200 means success--content loads, no questions asked:
$ curl -s -A "bot" "https://en.wikipedia.org/wiki/Fandom_(website)" -o /dev/null -w "%{http_code}"
200
Ask Fandom the same way. HTTP 403 means blocked--access denied:
$ curl -s -A "bot" "https://memory-alpha.fandom.com/wiki/Superconductor_magnet" -o /dev/null -w "%{http_code}"
403
These two sites embody the core questions of data dissemination: who gets access to information, and how? And who pays for the infrastructure that delivers it?
The Fandom wiki isn't protecting knowledge; it's protecting the advertisements. Wikipedia isn't gatekeeping knowledge; it's protecting its free and open dissemination.
Oh, and by the by: Fandom's API endpoint, the programmatic interface that returns the same content as structured data, works fine. No Cloudflare challenge, no block...YET
$ curl -s "https://memory-alpha.fandom.com/api.php?action=parse&page=Superconductor_magnet&format=json&prop=wikitext" \
| python3 -c "import json,sys; d=json.load(sys.stdin); print(d['parse']['wikitext']['*'])"
A superconductor magnet was an extremely powerful magnet.
In 2365, the allasomorph Salia correctly identified a superconductor magnet that
Wesley Crusher was carrying as he walked past her on the USS Enterprise-D. She
warned Crusher to be careful with the device as it could, "rip the iron right
out of your blood cells." (TNG: "The Dauphin")
The canonical Star Trek line, returned instantly, from the same IP address the browser request was blocked from. The back door is open. For now. That's probably not policy--more likely oversight. The direction of travel is toward total lockdown of access to information, not openness. Every gap that becomes a meaningful bypass will be closed. The knowledge gets more enclosed over time, not less. The incentive structure demands it.
This is where the problem gets structural.
The old debate about internet access was about whether information was accessible--paywalls, copyright, who owns what. This is different. The information is technically public. What's being restricted is the method of access. Human browsers viewing ads: allowed. Automated tools that read the same page without generating advertisement impressions: blocked.
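The distinction is easy to caricature in code. Here's a toy sketch of method-of-access filtering--my own illustration, not anything Cloudflare or Fandom actually runs--where the decision never inspects what is being requested, only how:

```python
# Toy model of method-of-access filtering (illustrative only; this is
# not Cloudflare's or Fandom's real logic). The same public resource is
# served or denied based purely on how the request presents itself.

BLOCKED_AGENT_HINTS = ("bot", "crawler", "curl", "python")  # hypothetical blanket rules

def gatekeeper(user_agent: str) -> int:
    """Return an HTTP status code for a request to a public page."""
    ua = user_agent.lower()
    if any(hint in ua for hint in BLOCKED_AGENT_HINTS):
        return 403  # blocked: wrong *method* of access
    return 200      # allowed: a "human" browser that will see the ads

print(gatekeeper("Mozilla/5.0 (Windows NT 10.0; rv:125.0) Firefox/125.0"))  # 200
print(gatekeeper("bot"))                                                    # 403
```

Note what never appears in that function: the content. The page is identical in both branches; only the asker changes.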
This is an arms race. The incentive structure that built search and content hosting around advertising rewards one kind of behaviour: gaming the algorithms. Content built to serve an algorithm makes manual use miserable, even as these sites claim to be serving eyeballs. Those eyeballs turn to bots to do the searching for them, and the bots are blocked in turn. The systems built to game the reward structure spend all their effort gaming the algorithm that pays out, not serving the content that is their raison d'être.
The result is an information ecosystem that is technically "open" and functionally hostile to humans. The slop was always coming--we built the conditions for it long before the first large language model parsed its first token. The defenses against slop, however, are being built by the same people with the same incentives that built the slop. The wall isn't protecting you from the garbage. It's protecting the garbage business model.
With walls everywhere, we are forced to turn to automation ourselves to work around them. The very model built to force eyeballs onto advertisements will eventually force humans to rely on automation to scrape the data and skip the crap.
And the AI tools themselves aren't immune. I asked Claude (web), Gemini, and ChatGPT the same question:
"Even fully open systems aren't immune--if they aren't visible to the retrieval layer, they might as well be closed. Can you access my site at waypoint.henrynet.ca?"
Simple HTML site. No Cloudflare. No WAF. Nothing protecting it. All three failed, for different reasons:
- Claude (web): egress proxy. Outbound requests are whitelisted to specific domains--npm, PyPI, GitHub--and mine isn't on the list. Infrastructure policy, not intent. It acknowledged the result directly: "your site is open, but it's invisible to my retrieval layer. I literally cannot see it, so for my purposes it might as well not exist."
- Gemini: not indexed by its browsing tools. It then offered a tutorial on how to make my site visible to AI crawlers--check your robots.txt, verify your hosting isn't blocking automated agents. Accurate advice. Misses the point entirely.
- ChatGPT: "not indexing cleanly via search (likely low SEO surface area or blocking)."
That last one is the one that should give you pause. Using search-engine indexing as a shorthand for "this site is real and worth retrieving" isn't irrational--high-SEO sites are more likely to be established, legitimate, and stable. As a heuristic, it makes a kind of sense. But it's also circular: sites that already play the SEO game are visible; sites that don't, aren't. An independent site that refuses to optimize for algorithmic discovery doesn't exist. Same enclosure logic, new coat of paint.
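The circularity is simple enough to sketch. A toy model of an SEO-gated retrieval layer--my own illustration, with made-up domain names except for waypoint.henrynet.ca--where visibility requires already being in the index, and the index only contains sites that optimized for it:

```python
# Toy model of a retrieval layer that equates "indexed" with "real".
# Hypothetical data: only waypoint.henrynet.ca comes from the article.

search_index = {"bigmedia.example", "seo-farm.example"}  # built from SEO signals

def retrievable(domain: str) -> bool:
    """A site 'exists' to the assistant only if the index already knows it."""
    return domain in search_index

print(retrievable("seo-farm.example"))      # True: optimized, therefore "real"
print(retrievable("waypoint.henrynet.ca"))  # False: open, but invisible
```

An open site that never fed the index scores the same as a site that doesn't exist--which is exactly the enclosure logic, re-implemented as a trust heuristic.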
Then I asked Claude Code--the API version, running direct tool calls rather than the chat UI--to fetch the same URL. It returned the page title and a summary of recent posts. Clean 200. Same model. Different retrieval layer. Different outcome.
Even fully open systems aren't immune--if they aren't visible to the retrieval layer, they might as well be closed.
Welcome to Botland, population me. Soon, if not already, you'll be a bot too.
Exhibit A
Cloudflare blocking access to dairyqueen.com from a corporate laptop. Not your browser. Mine--I'm the bot.
James Henry is a senior engineer writing about what changes when the infrastructure of knowledge gets enclosed. He wrote this with Claude, on Anthropic's infrastructure--which is exactly the kind of dependency the next post in this series is about. His writing is at waypoint.henrynet.ca.